Need a SATA RAID card with 12 internal connectors

Hi everyone,

I have searched everywhere for a RAID controller with 12 internal SATA connections that supports RAID 0, 1, 0+1, 5, and 6. It can be PCI or PCIe x8 or x16. Please help, as I desperately need something like this.

I have an i5 computer with 12 HDDs pooled together using various software RAID applications, but I always end up with problems, so I need a hardware RAID controller to handle those drives. If there is a way to use more than one card but still create a single RAID 6 array, please share it with me.

Thanks in advance for your help.

Mazen
 
I can't help you on the RAID controller thing, as I wouldn't know if a given RAID card was supported by FreeBSD if I found one.

However, you're also mentioning that you've tried many software RAID solutions to no avail. Seeing as this is your first post on these forums, I have to ask: Have you tried ZFS, and if so, which problems did you encounter? If not, you should probably make a new thread in an appropriate subforum and ask for help on how to configure it. :)
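
In case it helps as a starting point, here's a minimal sketch of what a 12-disk ZFS pool could look like (device names are placeholders; yours may show up as ada0-ada11 or da0-da11 depending on the controller):

    # one RAID-Z2 vdev: double parity, roughly the ZFS equivalent of RAID 6
    zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7 da8 da9 da10 da11
    # check layout and health
    zpool status tank

A single 12-wide raidz2 vdev works, though splitting it into two 6-disk raidz2 vdevs trades a little capacity for faster resilvers.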
 
Look around for some SAS HBA cards. Some of them have 2-4 SAS ports, and you can buy SAS-to-SATA breakout cables; each cable gives you 4 SATA connectors. So, for example, 4 SAS ports x 4 SATA = 16 HDDs.

They are a bit pricey, so you'll need to fork out the $$.
 
LSI/3Ware 9550-series (PCI-X) include either 12x/16x discrete SATA ports, or 3x/4x SFF-8087 connectors (multilane connector that you can split into 4 separate SATA connectors).

LSI/3Ware 9650-series (PCIe) come with 3 or 4 multilane connectors that can be split out into SATA.

All of the above are fully supported by FreeBSD, and they include a very nice web-based management GUI and a CLI interface as well.

However, with 12 disks and an i5 CPU (you don't mention RAM), I'd suggest getting LSI 9000-series HBAs (non-RAID SAS/SATA controllers) and using ZFS. They're several hundred dollars less expensive, fully supported by FreeBSD 9.x (not sure about 8.x), and work beautifully with ZFS.
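
As a rough sanity check (assumed commands, not an exact transcript), once one of those HBAs is in the box you can confirm FreeBSD sees it and the attached disks with something like:

    # mps(4) is the driver that attaches to LSI SAS2008-based HBAs such as the 9211
    dmesg | grep -i mps
    # every disk hanging off the HBA should be listed here
    camcontrol devlist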
 
We use SuperMicro storage chassis. Currently, we're using 2U versions (24x 2.5" bays), 3U versions (16x 3.5" bays), 4U versions (24x 3.5" bays), and JBOD versions (45x 3.5" bays and no motherboard).

We've also used Chenbro storage chassis in the past, but find the SuperMicro nicer to work with.
 
There's a cheap 120€ LSI2008-based HBA from IBM, available with 8 SAS2 ports. It works just fine with the IT firmware.
 
phoenix said:
We use SuperMicro storage chassis. Currently, we're using 2U versions (24x 2.5" bays), 3U versions (16x 3.5" bays), 4U versions (24x 3.5" bays), and JBOD versions (45x 3.5" bays and no motherboard).

We've also used Chenbro storage chassis in the past, but find the SuperMicro nicer to work with.

Can you give links for the cases you named? How do you use the 45-bay case? With an external cable from the storage server?

Thanks,
Markus
 
So I would have one server with HBAs, connected externally to the storage array using SAS, InfiniBand, or something else?

Markus
 
storvi_net said:
So I would have one server with HBAs, connected externally to the storage array using SAS, InfiniBand, or something else?

Markus

Correct.

We use an SC216 (2U chassis with 24x 2.5" bays along the front) with an H8DG6-F motherboard (dual-Opteron, tonnes of RAM, using the onboard SAS controller to connect to the onboard backplane for the SSDs). The OS is installed on the SSDs. And there are several LSI 9211-8e SAS controllers (2 external multilane ports per card).

Then we use an SC847E16-JBOD (4U chassis with 45x 3.5" bays along the front and back). There's no motherboard in this, just a power card and SAS backplanes with external SAS connectors on the back. These are connected to the SC216 via external SAS cables.

Thus, the 2U box powers on, boots, powers on the 4U box, and accesses all the disks in the 4U as if they were local. :) Using 2 TB drives, one storage box supports 90 TB of raw storage, and one head node can access 4 storage boxes directly (360 TB raw storage) or 8 storage boxes if you daisy-chain them (720 TB raw storage). :)
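
Just to illustrate how a 45-bay box like that might be carved up (device names and vdev widths here are made up for the example, not our actual layout):

    # four 10-disk raidz2 vdevs plus a couple of hot spares
    zpool create backup \
        raidz2 da0  da1  da2  da3  da4  da5  da6  da7  da8  da9 \
        raidz2 da10 da11 da12 da13 da14 da15 da16 da17 da18 da19 \
        raidz2 da20 da21 da22 da23 da24 da25 da26 da27 da28 da29 \
        raidz2 da30 da31 da32 da33 da34 da35 da36 da37 da38 da39 \
        spare da40 da41

All of the drives behind the expanders just show up as local da devices, so the pool layout doesn't care which chassis a disk physically lives in.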

That's the current setup for our Zimbra backups box and our off-site replication box for our main backups setup. :)
 
Wow, sometimes I miss getting my hands dirty (I now work more at the conceptual level and take care of database security). But this seems very nice :)

One last offtopic question: Do you use HAST for redundancy, or do you just trust in the redundancy of every component?

Markus
 
We're currently only using these boxes to store backups (rsync from every server in the district every night to one of three backups boxes; ZFS send to the off-site backups box every morning) so high-availability is not an issue.
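
The nightly/morning cycle is just cron-driven rsync plus incremental zfs send; a simplified sketch (hostnames, dataset names, and snapshot names are placeholders):

    # nightly: pull each server into its own dataset on the backups box
    rsync -a --delete server01:/ /backup/server01/

    # morning: snapshot, then ship the delta to the off-site box
    zfs snapshot backup@today
    zfs send -i backup@yesterday backup@today | ssh offsite zfs receive -F backup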

We played with HAST when it first hit the FreeBSD tree (7.something?) but had issues with it and ZFS at the time. It wasn't worth the hassle for a backups storage box.

Next summer, we're going to be looking at using FreeBSD+ZFS to create a SAN to consolidate the storage for all our VM systems. At that point, we're going to revisit HAST and/or something like it. For shared storage like this, high-availability is a must. :)
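
For anyone curious what that involves, the basic HAST setup is small; a minimal hast.conf sketch (hostnames, addresses, and the device are made up) looks roughly like this:

    resource shared0 {
        on node-a {
            local /dev/da0
            remote 10.0.0.2
        }
        on node-b {
            local /dev/da0
            remote 10.0.0.1
        }
    }

After that it's hastctl create shared0 on both nodes, start hastd, hastctl role primary shared0 on one of them, and then build the pool on /dev/hast/shared0; failover is flipping the roles and importing the pool on the other node.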
 