16 port HBA

I got tired of waiting for FreeBSD support for the Areca ARC-1300ix-16 or its future 6 Gbps replacement.

Does anyone know if the LSI SAS 9201-16i is supported? Probably a long shot since it isn't listed on the LSI website or in the FreeBSD 8.1 hardware list but the hardware list does show the LSI 9260 so I thought I'd check.

Otherwise, does anyone else have any suggestions for a 12 port or higher HBA? I'd rather not spend 2 or 3 times as much on a full blown RAID card since I'm not going to use the RAID. It's for 12 SATA drives but I'd like good performance, which my current 4 on motherboard + 8 on a Highpoint card isn't delivering.
 
Hi,

Not sure about the card, sorry.
About your config: so you have 12 single SATA disks, each attached to its own SATA bus? And the performance is poor? Do you think a new SATA card will make any difference? What performance issues are you experiencing? It seems unlikely that, with 12 SATA buses, the SATA interface on whatever card you use is going to be much of a bottleneck for 12 SATA disks. Have you considered other components that may be causing bottlenecks, such as the disks themselves or the motherboard PCI bandwidth, etc.?

Thanks, Andy.
 
I have replaced most of the components and the drives themselves get very good speeds when used individually.
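
(A quick way to sanity-check an individual drive's raw sequential read speed on FreeBSD is something like the following, where ad4 is just an example device name; substitute your own:)

# dd if=/dev/ad4 of=/dev/null bs=1m count=4096
# diskinfo -tv ad4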

It's a Highpoint RocketRaid 2680, which isn't a great card in the first place (but it is cheap and can handle 8 drives). In addition, because I was adding 4 drives at a time to my ZFS pool with mini-SAS to 4x SATA cables, the first 4 were on the motherboard, the next 4 were on one port of the 2680, and the final 4 were on its other port, so it was never really using all of them at once. Buying new drives as the old ones filled up (up to the maximum chassis capacity of 12) meant it was only ever really using 4 drives at once, with all of that data going through one cable and one port on the RAID card at any one time.

I'm also stuck on an old version of FreeBSD (with a slow, old version of ZFS) because that card isn't supported in newer versions, so I need a new card anyway.


I'd rather just get answers on this card or possible alternatives and not go off-topic into how I happen to have done things.
 
Fair enough. I just didn't really buy the idea that you were maxing out SATA buses with only one drive attached to each (IO or MB/sec), i.e. I'm trying to help you not waste time and money. However, if you need a new card in any case, go for it!

I don't know what performance you require, but I have some systems using Sil3124 cards with disks attached via port multipliers. Each card has 3x 3Gb/sec eSATA giving a total bandwidth of 12Gb/sec (I also have multiple PCI slots free to allow for more cards in future).

Andy.
 
BTW, you comment that because you added the disks 4 at a time, each of those groups is using the same bus/cable, and that this is hurting your performance. With ZFS you should be able to export your pool, move your disks around as you see fit, and then reimport them. ZFS doesn't care what device name the disks have; it will work out which disk is which by scanning the metadata on each device.
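
Roughly, assuming a pool named tank (substitute your own pool name):

# zpool export tank
(power down, recable / move the disks however you like)
# zpool import tank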
 
This post here may be of interest to you.

This helped me ;)

I've been looking at going 8-12 ports. Adaptec seems to be well supported and their products have good prices. I will be looking at one for my system.
 
ghell said:
Does anyone know if the LSI SAS 9201-16i is supported? Probably a long shot since it isn't listed on the LSI website or in the FreeBSD 8.1 hardware list but the hardware list does show the LSI 9260 so I thought I'd check.

This device is supported by the mps(4) driver, but that driver is currently only in FreeBSD 9.0, and I don't know how stable it is.
 
shitson said:
This post here may be of interest to you.

This helped me ;)

I've been looking at going 8-12 ports. Adaptec seems to be well supported and their products have good prices. I will be looking at one for my system.

Thanks, but that's my own thread from a while ago ;) I was hoping to get some fresh answers by asking a different question in a new thread rather than adding a reply to that old one, and it seems to have helped.

Adaptec don't seem to do that many HBAs - they have a 4-port internal HBA (the Adaptec 1405) but that's all I could see. The Adaptec RAID 51245 has 12 internal ports but is much more expensive than an HBA (almost double the price).

butcher said:
This device is supported by the mps(4) driver, but that driver is currently only in FreeBSD 9.0, and I don't know how stable it is.

Thanks! Exactly what I wanted to know.

I've heard good things about 9.0 but don't really want to upgrade my ZFS to a version I can't get it back from. I'll try to be patient, but at least there is a light at the end of the tunnel and I can always go for it if I get impatient.
 
ghell said:
I've heard good things about 9.0 but don't really want to upgrade my ZFS to a version I can't get it back from. I'll try to be patient, but at least there is a light at the end of the tunnel and I can always go for it if I get impatient.

I think you can just copy the files needed for mps(4) from the head/ branch to stable/8 and try compiling it as a module or integrating it into the kernel. It is not so hard to do.
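
A rough sketch, assuming the usual source layout under /usr/src and that the driver has no other dependencies on head/ (untested, paths from memory):

# svn export svn://svn.freebsd.org/base/head/sys/dev/mps /usr/src/sys/dev/mps
# svn export svn://svn.freebsd.org/base/head/sys/modules/mps /usr/src/sys/modules/mps
# cd /usr/src/sys/modules/mps && make && make install
# kldload mps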
 
You can also upgrade to 9-CURRENT without upgrading the ZFS pool version. The pool isn't upgraded until you manually run # zpool upgrade and # zfs upgrade.
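
For example, with a pool named tank (just an example name), you can check the current versions without changing anything:

# zpool get version tank
# zfs get version tank

and only run # zpool upgrade tank and # zfs upgrade -r tank once you are sure you won't need to go back.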
 
The performance difference between older releases and 8-stable (with its v15 ZFS pool) is significant. Upgrading an old 7-stable system to 8-stable improved performance dramatically.

If your motherboard SATA controller supports port multipliers, that might be another way to expand without depending on a (usually slower) external HBA.
 
Port multipliers work OK if you use a 6 Gbps adapter connected to 7200 RPM SATA drives.

Remember, a port multiplier shares a single channel among the 1-4 drives connected to it. This is okay if you just want lots of drives, but it really hurts if you want performance, especially if you only have 3 Gbps controllers.
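
As a rough back-of-the-envelope figure: a 3 Gbps link gives about 300 MB/sec of usable bandwidth after 8b/10b encoding overhead, so four drives behind one multiplier get roughly 75 MB/sec each under concurrent load, which is less than a single modern 7200 RPM drive can manage sequentially on its own.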

Granted, today's SATA hard drives can't saturate a 3 Gbps channel, but SAS and SSD are pretty damned close. :)

Discrete channels for each drive are the best solution for raw throughput.
 