I am about to build a 12TB RAID50 array (8x2TB SATAII). Are there any traps?

Hi Guys,

I am about to upgrade the RAID50 array on my RocketRaid 2320 to 8x2TB SATAII HDDs, which will give me 12TB of space :)

Currently I have 8x500GB SATAIIs in RAID50 which has been working fine.

I spoke to RocketRaid, who say the card will be able to manage it, but I thought I should write in and see if anyone has found any traps that I should account for first?


Thanks for your time.
D
 
Having not had exposure to that RAID controller under FreeBSD, all I'd suggest is that a SATA array of that size should have at least one hot-spare - otherwise you may be exposed to a second disk failure whilst awaiting replacement disks and then rebuilding.

2TB disks will take a long time to rebuild, and reconstruction of a RAID5 array (one half of your stripe set) results in a fairly hefty performance degradation. If it's just a personal array then fine, but if it's a business critical thing, you may need to take that stuff into account.

I know Dell (for example) state in the new firmware for their PS series arrays that RAID50 is no longer recommended for disks larger than 1TB, and other storage vendors hold similar views.

If you don't need the space I'd consider RAID10, and if you don't need the speed I'd consider RAID6.


Or ZFS.
 
Thanks for the tips :) I will look into the better options for 2TB disks.

In response to your points:

I already found that the controller does not support RAID6, which would have been the choice otherwise.

It is a personal server, so the performance degradation for a day or two (or three) during a rebuild is no huge stress. If I can avoid it then that's better, but I wouldn't choose it over storage space.

I use RAID50 as it tolerates one disk failure in each half of the stripe set, and I only lose one disk's worth of capacity per RAID5 set. I would rather not use mirroring as it loses far too much space. If I calculate correctly, RAID50 will give me 12TB of space while RAID10 will give me 8TB. It is still a huge improvement on the current array, but 4TB is a massive loss.
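Laying the maths out (assuming the RAID50 is built as two 4-disk RAID5 sets striped together, and the RAID10 as four mirrored pairs):

Code:
RAID50: 2 sets x (4 - 1) data disks x 2TB = 12TB
RAID10: 4 mirrored pairs x 2TB            =  8TB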


However, I will take what you have said on board and do some more research :)


Thanks again :)
 
Or maybe I should just get a Highpoint 2720SGL.... [edit: bought]

It supports 8-channel RAID6 & 6Gb/s... for $200!

Considering the opinion is pretty much universally against RAID5, I guess another $200 is better than losing 4TB of space.


Any thoughts against this idea?
 
[Update]

Hi Guys,

The 2720SGL has arrived. As a RAID card it is awesomely cheap, and if you wanted anything other than RAID6 it would be perfect... unfortunately the documentation fails to mention that the RAID6 feature is only available under Windows!

Be warned, do not get this card for RAID6 on FreeBSD or Linux. :(
 
throAU said:
Considered RAIDZ2 with ZFS to get "RAID6"?

I just came back to say that this is exactly what Highpoint have advised, but you got there first.

The card does not support RAID6 in the BIOS, only through software. So they have released Windows software to do it, but as FreeBSD has native support in RAIDZ2 they have directed me to use that.
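For anyone following along, this is roughly what the RAIDZ2 setup will look like; the pool name and the da device numbers are just placeholders for however the eight drives end up appearing on my system:

Code:
# double-parity pool across the eight 2TB drives
zpool create tank raidz2 da1 da2 da3 da4 da5 da6 da7 da8
zpool status tank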

Granted, it is a better solution than third-party software, but I would still have preferred RAID6 support on the card; only now do I see why it is so cheap.


Hopefully this story helps someone else :)
 
With software RAID, a failed controller is not a big deal. With hardware RAID, you might need to find the exact same model to get to your data.
 
^^ yup, what he said.

Be wary of hardware-based RAID. These days the CPU power required to do RAID in software is fairly minimal compared to what is available, and you gain portability - with a hardware RAID controller, a controller failure is potentially a big deal.

Glad to hear you've got a plan sorted out anyway, and also glad you're not heading down the RAID50 path.


ZFS will also give you superior rebuild performance to a RAID controller, because it is aware of which parts of the disk actually have files on them. If your array is only 50% full, it only needs to rebuild 50% of the disk.

General performance will also be better, as all writes will be "full stripe". Doing a partial-stripe write on regular parity RAID (RAID5/6/50) involves reading from all disks in the stripe set first so that parity can be calculated.

A hardware controller will rebuild the entire disk capacity irrespective of whether or not there are used blocks on all of it.
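If you want to see that in action after swapping a failed disk, something along these lines should show the resilver progress, and the total it reports should be allocated data rather than raw capacity (pool and device names here are placeholders):

Code:
# swap the failed disk for the new one and watch the resilver
zpool replace tank da4 da9
zpool status tank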
 
Thanks for the tips guys :)

I am about to build the array, but I am having trouble confirming whether hot spares are supported by ZFS on FreeBSD. Does anyone know if it can be done yet?
 
I should also mention that the driver Highpoint provide didn't work. However, the native hpt27xx driver in FreeBSD 8.3 and up works a treat.

/boot/loader.conf
Code:
hpt27xx_load="YES"
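After rebooting, you can double-check that the module actually loaded with something like:

Code:
kldstat | grep hpt27xx

It should list hpt27xx.ko if the native driver is in use.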
 
Hello again,

So far things had been working very nicely, until this morning when I woke up and found that the server had crashed and printed the error in the attached photo.

I rebooted and after a short time I got the same error.

I then disabled the ZFS settings in /boot/loader.conf and /etc/rc.conf and rebooted, but the same error was still printed.

I found one suggestion that I may have too many cards on the board, so I tried removing the NIC, but this did not help either.

Another site suggested it may have something to do with the bus speeds, but I have no idea where to start dealing with that...


After physically removing the controller I no longer get the error.


Can someone please suggest what I need to do to fix this? I hope I don't need to start a new thread; I figure this is all part of the original topic, provided it is not just some stupid user error :|


Thanks for your time
 

Attachments: error2.jpg
 
Ok, so that was just a stupid user error. I had a few bad sectors on my /usr partition; after ironing these out, all is well :)
 
ghostcorps said:
Ok, so that was just a stupid user error. I had a few bad sectors on my /usr partition; after ironing these out, all is well :)

User error? How does user error cause bad sectors? Also "a few bad sectors" uh-oh, failing disk?
 
michaelrmgreen said:
User error? How does user error cause bad sectors? Also "a few bad sectors" uh-oh, failing disk?

User error in the sense that it was something I should have checked and fixed as normal procedure, rather than going straight to blaming the controller.

I'm pretty sure the errors had to do with the upgrade to 8.3; somewhere, somehow, something was written badly. After scanning it from the controller's BIOS and a quick fsck, there are no errors.
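For the record, the fsck side of it was nothing fancy; from single-user mode it was roughly:

Code:
# check and repair the /usr filesystem while it is not in use
fsck -y /usr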
 
Bad sectors created by user error? Unless you're messing with the drive firmware itself, I can't see how that's possible :P I would check the reallocated sector counts and error logs of your drives using the sysutils/smartmontools port to see if there is some hardware error, and do a drive self-test if you like.

These kinds of events are normally handled nicely by ZFS and a 'dumb' HBA exporting the drives as-is, putting the operating system in charge of handling timeout and I/O failure events. A nicely incremented read error counter in your zpool status output is usually the result :)
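Something along these lines, adjusting the device node to suit your drives:

Code:
# full SMART report: reallocated sector count, error log, etc.
smartctl -a /dev/da0
# optionally kick off a short self-test
smartctl -t short /dev/da0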
 
Thanks :)

This error is still happening, but it has nothing to do with the RAID6/ZFS array and everything to do with the SCSI disk on my Adaptec SCSI RAID card, given that the disk appears as da0 and the error refers to da0.

Yeah, I know, another thing I should have seen ages ago :p


I will follow Sfynx's advice and have a play with sysutils/smartmontools. Hopefully this will fix the issue; otherwise I'll start a new thread, as it has nothing to do with the original question.
 