The hardware is capable of that. I've seen similar x86 machines act as file servers (serving a SAS-attached local file system over the network) at 5 GB/s, while also running a parity-based software RAID stack and a file system. That was on older hardware a few years ago; today's hardware is probably somewhat faster.
BUT: To do that, you need to look at all your bottlenecks. Let's start with the disks and JBODs. How are they connected? SAS links to the host? How many SAS cables? 6 Gbit SAS? What type of HBA? And what is the PCIe bandwidth? In most cases today, SAS HBAs are limited by their PCIe slots; only with PCIe gen 3 is this beginning to be balanced.
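To make the bottleneck hunt concrete, here is a back-of-envelope comparison of the SAS path against the HBA's PCIe slot. All the numbers (two wide-port cables, a gen 2 x8 slot) are illustrative assumptions, not your configuration:

```python
# Back-of-envelope bandwidth check; all topology numbers are assumptions.
SAS_LANE_GBIT = 6      # 6 Gbit/s SAS, 8b/10b encoded -> 600 MB/s payload per lane
LANES_PER_CABLE = 4    # one wide-port SAS cable bundles 4 lanes
CABLES = 2             # assumed: two cables from the HBA to the JBOD chain

sas_bw = SAS_LANE_GBIT * 100 * LANES_PER_CABLE * CABLES  # MB/s
print(f"SAS path:     {sas_bw} MB/s")

# PCIe gen 2: ~500 MB/s usable per lane, so an x8 slot tops out around 4 GB/s
pcie_bw = 500 * 8
print(f"PCIe gen2 x8: {pcie_bw} MB/s")

print(f"bottleneck:   {min(sas_bw, pcie_bw)} MB/s")
```

In this (assumed) setup the PCIe slot, not the SAS cabling, is the ceiling, which is the typical situation before gen 3.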
Next question: What type of Intel CPU are you using, and what is the memory bandwidth? Remember, your data will have to go in and out of memory several times, so memory bandwidth will be extremely important. I'm not an expert on CPUs and memory.
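The "in and out of memory several times" point can be quantified with a rough pass count. The passes listed below are assumptions about a typical software-RAID file-serving path; real stacks differ:

```python
# Rough memory-traffic estimate for a software-RAID file server.
# The pass counts below are assumptions; real I/O paths vary.
target_gbs = 1.0  # GB/s of file service you want to deliver

passes = {
    "DMA from disk into buffer cache":     1,
    "parity/checksum computation (read)":  1,
    "copy into network socket buffer":     2,  # read + write
    "DMA out to the NIC":                  1,
}
total = target_gbs * sum(passes.values())
print(f"~{total:.0f} GB/s of memory bandwidth to serve {target_gbs} GB/s")
```

So even a modest 1 GB/s target can consume several GB/s of memory bandwidth, which is why the memory subsystem matters more than raw CPU speed here.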
Then you need a network device. It seems to me that the best results are achieved with InfiniBand cards these days; if you use Ethernet, make sure your protocol can use RDMA. Otherwise your CPU will spend much of its effort playing stupid networking games.
But the elephant in the room is the software stack. I have no idea whether this can be accomplished with FreeBSD, ZFS, and whatever file server you are intending to use (Samba? NFS server? WebDAV?).
From a performance point of view, there is no good formula. There are so many bottlenecks that raw CPU performance will be the least of your problems. With really good software and expert tuning, your system as specified could probably do 10x more than the 1 GB/s you want, so it is likely that it will just work, even with a less-than-perfect software stack.
With 240 disks, you will obviously need a really solid RAID solution. Remember, disk lifetimes are such that you should expect a failure every few weeks or months: a specified MTBF of 1M hours is ~114 years per disk, so with 240 disks you expect roughly 2 failures a year, and reality is probably 3x to 10x worse than the spec sheet. Furthermore, with such large disks and today's error rates, you can unfortunately expect many resilvering operations to hit a read error mid-resilver. If you want to store data sets this large with good reliability, you really need a RAID code that can handle two faults (the common case being complete failure of one drive, followed by a read error during the resilver). And given that there will be multiple failures per year, you probably want automated ways of handling failures, and of contacting field service for drive replacement. The question of RAID and automated RAID management is probably way more work than performance tuning.
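The failure arithmetic above works out like this. The 4 TB drive size and the 1-in-1e14 unrecoverable-read-error (URE) rate are assumptions taken from typical consumer/nearline spec sheets:

```python
# Expected drive failures per year from the spec-sheet MTBF.
mtbf_hours = 1_000_000
n_disks = 240
hours_per_year = 24 * 365

failures_per_year = n_disks * hours_per_year / mtbf_hours
print(f"spec-sheet expectation: {failures_per_year:.1f} failures/year")
# Field studies suggest 3x-10x worse than the spec:
print(f"realistic: {failures_per_year * 3:.0f} to {failures_per_year * 10:.0f} per year")

# Chance of at least one URE while resilvering one (assumed) 4 TB drive
# at the common 1-per-1e14-bits spec rate:
bits_read = 4e12 * 8
p_clean = (1 - 1e-14) ** bits_read
print(f"P(read error during one resilver) ~ {1 - p_clean:.0%}")
```

With the URE probability per resilver sitting in the tens of percent, a single-fault RAID code (one that survives only one failure) is clearly not enough, which is the argument for dual-parity schemes like RAID-Z2.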