Bottlenecks

This can apply to any OS, but since I'm spec'ing hardware for a FreeBSD/FreeNAS system, I wanted to get some input from those of you who have been there and know what real-world performance is going to be like.

My desire is to use cheap, commodity hardware, and before I jump in and spend a lot of money, I wanted to examine where hardware bottlenecks might arise.

I have a SAS/SATA controller card with 2 SFF-8087 ports that can theoretically handle traffic from 8 drives. And for argument's sake, let's say that each drive could actually get close to saturating a SATA II bus at 3 Gbps ... does it stand to reason that I need a slot on the motherboard rated at 24 Gbps or higher for EACH card that I want to add to this system?

If my logic is correct, then each controller card would need at least a PCI Express x16 or PCI Express 2.0 x8 slot, right? And I'm just not seeing many of these on the market.
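For a back-of-the-envelope check, here's the kind of math I'm doing (just a sketch in Python; the per-lane numbers are the nominal usable PCIe rates after 8b/10b encoding, and 3 Gbps per drive is the theoretical SATA II maximum, not real drive throughput):

```python
# Rough PCIe slot bandwidth vs. worst-case SATA traffic from one 8-port controller.
# Per-lane figures are the usable rates after 8b/10b encoding:
# PCIe 1.x ~2 Gbps/lane, PCIe 2.0 ~4 Gbps/lane.
PCIE_GBPS_PER_LANE = {"1.x": 2.0, "2.0": 4.0}

drives_per_card = 8
sata2_gbps = 3.0                                 # theoretical SATA II link rate per drive
worst_case = drives_per_card * sata2_gbps        # 24 Gbps if every link were saturated

for gen, per_lane in PCIE_GBPS_PER_LANE.items():
    for lanes in (4, 8, 16):
        slot = per_lane * lanes
        verdict = "enough" if slot >= worst_case else "short"
        print(f"PCIe {gen} x{lanes}: {slot:.0f} Gbps -> {verdict} for a {worst_case:.0f} Gbps worst case")
```

By that math only a PCIe 1.x x16 or a PCIe 2.0 x8 (or wider) slot covers the theoretical 24 Gbps, which is what got me worried.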

Any thoughts? Real-world experiences?
 
Hi,

Just a quick first thing to consider: if it's a NAS system, are you going to have 24 Gbps of network bandwidth to the NAS client systems? Your bandwidth to the client systems is probably going to be the slowest thing in your setup...

thanks Andy.
 
Where are you going to find a single hard drive that can saturate a 3 Gbps SATA bus? ;) Even a single SSD will have trouble with that (250 MB/s of sequential reads or writes is still only 2 Gbps).
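The conversion is simple enough; a quick sketch in Python, where the 100 MB/s and 250 MB/s figures are just ballpark assumptions for a spinning disk and an SSD:

```python
# Convert a drive's sustained throughput (MB/s) to utilisation of a 3 Gbps SATA II link.
def mb_per_sec_to_gbps(mb_per_sec):
    return mb_per_sec * 8 / 1000.0   # decimal units, as drive specs use

for name, speed in [("typical 7200 rpm hard drive", 100), ("fast SSD, sequential", 250)]:
    gbps = mb_per_sec_to_gbps(speed)
    print(f"{name}: {speed} MB/s = {gbps:.1f} Gbps ({gbps / 3.0:.0%} of a 3 Gbps SATA link)")
```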

Unless you are going to be putting a port multiplier at the end of each SATA link, the drive is the bottleneck, not the PCIe bus or the controller.

Also, how will you be connecting to it? Via Fast Ethernet? Gigabit Ethernet? 10 Gigabit Ethernet? Unless you are using 10 GbE, the network will be slower than the disks.

Will you be using normal MTU sizes (1500 bytes per packet) or jumbo frames (up to 9000 bytes per packet)? Depending on the NIC and switch, jumbo frames can increase your raw throughput.
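The header-overhead savings from jumbo frames is only a few percent; the bigger win is usually the lower packet rate the CPU and NIC have to handle. Rough math (a sketch assuming TCP over IPv4 with no options):

```python
# Rough payload efficiency for standard vs. jumbo frames (TCP over IPv4 over Ethernet).
ETH_OVERHEAD = 14 + 4 + 8 + 12   # header + FCS + preamble + inter-frame gap
IP_TCP_HEADERS = 20 + 20         # no options

for mtu in (1500, 9000):
    payload = mtu - IP_TCP_HEADERS
    on_wire = mtu + ETH_OVERHEAD
    print(f"MTU {mtu}: {payload} payload bytes per {on_wire} bytes on the wire "
          f"= {payload / on_wire:.1%} efficient")
```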

How many clients will be connecting simultaneously? For a single client using 10/100 Ethernet, the network is the bottleneck. Even if you run multiple clients simultaneously, if the server <--> switch connection is only 1 gigabit, that will be the bottleneck.
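Roughly, ignoring protocol overhead:

```python
# How quickly clients can saturate a single 1 Gbps server uplink.
uplink_gbps = 1.0

for client_mbps in (100, 1000):   # Fast Ethernet vs. Gigabit clients
    clients_to_fill = uplink_gbps * 1000 / client_mbps
    print(f"{client_mbps} Mbps clients: ~{clients_to_fill:.0f} of them at full speed "
          f"saturate a {uplink_gbps:.0f} Gbps server uplink")
```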

So, for a NAS/SAN setup, you need to get the network throughput up over 3 Gbps before worrying about the PCIe lanes or disk controller or even disks. :)

Then you have to look at whether you will be doing mostly reads, mostly writes, mostly sequential I/O, or mostly random I/O. That will determine whether to use SSDs or spinning rust, how many, and whether to add cache (like ZFS uses).

Most generic SATA controllers are PCIe x8; most SATA RAID controllers are x8 to x16. Some newer ones are PCIe 2.0. Regardless of what you use, though, it's the network that's the biggest bottleneck.
 
Exactly the kind of responses I was looking for ... thank you both! :beergrin

And yeah, I realize the network or hard drives will never touch 24 Gbps, but what can I expect from a single drive that says it's a 3 Gbps drive? Sure, it's a theoretical max, but if you have 8 drives tied to a single card, I would think it has the ability to realistically push the limits of a PCIe x4 slot.

I'll tell ya, the machine is slated to be used as an iSCSI target for 60-80 desktops and 45-55 IP cameras spread across 3 or 4 different networks - I'd hate to think one wrong decision could be the bottleneck for the whole setup :r
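Putting rough numbers on the "realistic" case (a sketch; it assumes ~100 MB/s of sustained throughput per spinning disk and a PCIe 1.x x4 slot):

```python
# Realistic aggregate from 8 spinning disks vs. what a PCIe 1.x x4 slot can carry.
disks = 8
mb_per_sec_per_disk = 100            # rough sustained figure for a 7200 rpm SATA drive
aggregate_gbps = disks * mb_per_sec_per_disk * 8 / 1000.0   # = 6.4 Gbps

pcie1_x4_gbps = 4 * 2.0              # ~2 Gbps usable per PCIe 1.x lane
print(f"8 disks streaming: ~{aggregate_gbps:.1f} Gbps vs. "
      f"~{pcie1_x4_gbps:.0f} Gbps for a PCIe 1.x x4 slot")
```

So even with realistic per-disk numbers, 8 drives streaming at once get uncomfortably close to an x4 slot, which is why I was asking.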
 
Connected to 3 or more networks? Keep in mind that most WAN connections have higher latency than a normal LAN.
 
alcor001 said:
Exactly the kind of responses I was looking for ... thank you both! :beergrin

And yeah, I realize the network or hard drives will never touch 24 Gbps, but what can I expect from a single drive that says it's a 3 Gbps drive? Sure, it's a theoretical max, but if you have 8 drives tied to a single card, I would think it has the ability to realistically push the limits of a PCIe x4 slot.

I'll tell ya, the machine is slated to be used as an iSCSI target for 60-80 desktops and 45-55 IP cameras spread across 3 or 4 different networks - I'd hate to think one wrong decision could be the bottleneck for the whole setup :r

As per my original comment, the bandwidth limitation will most likely be the network. What will your connectivity be? From that you can start to worry about internal buses. If it's going to be a really high-performance system, you need to check how the PCI slots are configured on the main board, i.e. what is shared with what; onboard network ports often share PCI bus bandwidth with onboard disk controllers or PCI expansion slots. Have you got a candidate server in mind?

cheers Andy.
 
I guess that should be stated as "subnets", not networks, but yes, even then, the network will be a limiting factor; I realize this ;)
 
But what will your network connectivity be? 3x 1 Gbit Ethernet? If that's the case, then you don't really need to worry much about PCIe x8 or x16. Well, not unless you have a local disk or tape for backup whose transfers need to be faster than that limit...

thanks Andy.

PS off now till after the long weekend! have a good one!
 
As mentioned above, start with the network link. You have ~100 clients pushing how much data? 10 Mbps each? 100 Mbps each? Is it bursty traffic or sustained traffic (probably the former for the desktops and the latter for the cameras)? Figure out what your combined, sustained network throughput from all clients will be.
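For example, a rough sizing sketch; every per-client rate here is an assumption to be replaced with measured numbers:

```python
# Rough sizing of sustained iSCSI demand from the client mix described in the thread.
desktops = 70                 # midpoint of 60-80
desktop_mbps = 10             # bursty; assume ~10 Mbps sustained average each
cameras = 50                  # midpoint of 45-55
camera_mbps = 4               # e.g. a 1080p H.264 stream recorded continuously

total_mbps = desktops * desktop_mbps + cameras * camera_mbps
print(f"~{total_mbps} Mbps (~{total_mbps / 1000:.1f} Gbps) sustained, "
      f"before bursts and protocol overhead")
```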

Then figure out how you will handle that much traffic. Will you use link aggregation (LACP) with multiple gigabit NICs? Will you spring for a 10 gigabit NIC? Will you use onboard NIC ports, or multi-port PCIe NICs?
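Once you have a sustained figure, a quick sketch like this shows how many links you'd need (the 70% headroom factor is just a conservative assumption, and note that LACP balances per flow, so any single client is still limited to one link's speed):

```python
import math

# How many 1 GbE links (e.g. in an LACP aggregate) are needed for a given sustained load.
def links_needed(sustained_gbps, link_gbps=1.0, headroom=0.7):
    # keep each link at ~70% utilisation to leave room for bursts
    return math.ceil(sustained_gbps / (link_gbps * headroom))

for load in (0.9, 2.0, 3.5):
    print(f"{load} Gbps sustained -> {links_needed(load)} x 1 GbE links (or one 10 GbE port)")
```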

Once you have a NIC setup that will handle the traffic, compare that to the disk subsystem and work out how you will combine multiple SATA disks (which generally top out around 100 MB/s each) to match the network throughput. Most likely, you will need a RAID10 setup (whether hardware or software) with lots of disks to meet it.
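As a rough sketch (assuming ~100 MB/s sustained per disk and ignoring caches and random-I/O penalties):

```python
import math

# Rough count of disks needed in a RAID10 pool to keep up with a given write load.
def raid10_disks_for(target_gbps, disk_mb_per_sec=100):
    disk_gbps = disk_mb_per_sec * 8 / 1000.0
    # writes go to both halves of each mirror, so write throughput scales with the pair count
    pairs = math.ceil(target_gbps / disk_gbps)
    return pairs * 2

for target in (1.0, 2.0, 4.0):
    print(f"{target} Gbps of writes -> ~{raid10_disks_for(target)} disks in RAID10")
```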

Once you figure out how many disks you'll need and in what layout, then you can look at the PCIe bus and controller needs.

A SATA disk can be rated at 3 Gbps, but that's the maximum theoretical throughput of the bus. The disk itself can't push that much data. And usually, the maximum read/write speeds quoted for a disk are for reading/writing its onboard cache, not the platters. Which is why the rule of thumb is "100 MB/s" for a single SATA hard drive.
 
I'd also be sure not to use onboard video if it can be avoided (speaking from experience with v9 and compiling ports simultaneously...)
 