Recommended RAID controller for FreeBSD 11 (4 drives - 2TB)

Hi everyone -

We're working on a setup that requires a RAID controller to set up 4x 2TB SATA drives in a RAID 5 configuration. This will need to be compatible with FreeBSD 11 as well.

The controller we had on hand unfortunately only has drivers up to FreeBSD 9, so we're stuck having to purchase a new one. It doesn't have to be top of the line; we're looking for a low- to medium-priced one.

Any recommendations? I was working off the supported controller list, but I'm finding that a lot of them are no longer available (and even used ones, which we're open to, are tough to find).

Thanks
 
If you can afford one, get an LSI-based controller. Pretty much all of them have good support on FreeBSD (with a few notable exceptions) and provide excellent performance.

See mfi(4), mrsas(4), mps(4) and mpt(4).
 
I recommend (and personally use on my home server) an LSI SAS 9210-8i flashed to IT (HBA) mode, with ZFS RAIDZ. Branded versions, the 9211, or any version without HBA mode can be cross-flashed to IT mode with the vendor firmware; just make sure the card doesn't have onboard cache memory. You can find such a controller for 40 bucks; eBay has a steady supply of them.
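Once the card is in IT mode, the disks show up as plain da devices via mps(4) and the pool setup is a one-liner. Something along these lines should work (device names da0-da3 and the pool name "tank" are just placeholders for your system):

Code:
# Confirm the HBA attached via mps(4) and the four disks are visible
dmesg | grep -i mps
camcontrol devlist

# Create a single RAIDZ1 vdev from the four 2TB drives (~6TB usable)
zpool create tank raidz da0 da1 da2 da3

# Verify the layout and health
zpool status tank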
 
A: Second the recommendation for LSI (=Avago =Broadcom) hardware.
B: Second the recommendation for not doing hardware RAID, but using ZFS for software RAID. To quote a former colleague who has many dozens of years of file system experience: "RAID is too important to leave it to controller people".
 
A. Second the recommendation for a Broadcom controller (LSI 9210-8i or 9300-8i are great HBA's).
B. Second the recommendation for not doing hardware RAID, but using ZFS for software RAID.
C. I don't think RAID 5 is relevant anymore; with 4 disks, RAID 10 is the way to go (see the sketch below).
Way better performance and at least as much redundancy, at a somewhat higher cost per usable TB.
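In ZFS terms that's a pool of striped mirrors rather than RAIDZ. A rough sketch, assuming the four disks show up as da0-da3 and using "tank" as a placeholder pool name:

Code:
# Two mirrored pairs striped together (the ZFS equivalent of RAID 10):
# ~4TB usable from 4x 2TB, survives one drive failure per pair
zpool create tank mirror da0 da1 mirror da2 da3

# For comparison, RAIDZ1 gives ~6TB usable but lower random IOPS:
# zpool create tank raidz da0 da1 da2 da3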
 
Hi everyone -

We're working on a setup that requires a RAID controller to set up 4x 2TB SATA drives in a RAID 5 configuration. This will need to be compatible with FreeBSD 11 as well.

This looks like a troll post to me. RAID 5 (hardware or software, RAIDZ1 included) should not be used in production, period. On top of that, RAID is typically used in industry for high availability. Note that RAID is not a backup. High availability is not typically needed for home users. I have a hard time imagining who would need a 4-disk RAID 5 in industry these days. Even my small academic lab decommissioned its last RAID 5 more than 10 years ago.

Hardware RAID is not cheap. Four 2TB HDDs are going to cost about $240 in the U.S. A new hardware RAID controller is about $700. Plus you should be ready to monitor and replace the RAID battery, which is about $100. All that being said, I like Areca and LSI controllers. At work I have used mostly LSI with Red Hat and XFS as the file system. Honestly, if you are going to go with hardware RAID, that is probably the route you should take, unless you want to be adventurous and try using DragonFly and HAMMER. If file integrity, snapshots, and built-in backup are not needed, hardware RAID wins hands down over ZFS. At this point FreeBSD is almost synonymous with a ZFS-only storage OS (after the Solaris eclipse). I would keep that in mind.
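For what it's worth, if you do end up with an LSI MegaRAID card on FreeBSD, the battery and drive monitoring can at least be scripted with mfiutil(8) for mfi(4)-attached cards; a rough sketch of the checks you'd run from cron or periodic:

Code:
# Check the BBU state, the logical volumes and the physical drives
mfiutil show battery
mfiutil show volumes
mfiutil show drives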
 
If file integrity, snapshots, and built-in backup are not needed, hardware RAID wins hands down over ZFS.

You’re trolling here, right? If you’re not interested in file integrity, just use tmpfs. ;)

Seriously, hardware RAID is never even considered in my work anymore. One possible exception is at the extremes of IO performance requirements — like benchmarking and trying to squeeze the last few percentage points out of a system. Even there, it’s iffy.

Hardware RAID ties you to the vendor (at a minimum) and possibly the card generation; ZFS gains new features over time (actively being worked on; new checksum options just became available, for example, as well as large blocks not too long ago, etc...) and has the advantage that the posix layer down through the redundancy and drive IO is managed by one system, as opposed to a file system laying on top of a block storage system. Due to this, ZFS can tell you which file was impacted if there ever is a checksum failure without redundancy, for example.

On top of data integrity (job 1 of any file system), snapshots, send/recv, compression, zvols ... it’s not even close.

Get an LSI HBA, and run ZFS. You’ll be glad you did.
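To make that concrete, here's roughly what those features look like day to day; the pool/dataset names, snapshot name and the backup host are just placeholders:

Code:
# A checksum failure names the affected file(s), not just a bad block
zpool status -v tank

# Compression and snapshots are per-dataset properties/operations
zfs set compression=lz4 tank/data
zfs snapshot tank/data@2017-08-01

# Replicate the snapshot to another machine with send/recv
zfs send tank/data@2017-08-01 | ssh backuphost zfs recv backup/data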
 
You’re trolling here, right? If you’re not interested in file integrity, just use tmpfs. ;)
I was being sarcastic, but in all seriousness, anybody who is using hardware or software RAID on Linux doesn't truly care about file integrity. You would be surprised how many people fall into that group.



Seriously, hardware RAID is never even considered in my work anymore. One possible exception is at the extremes of IO performance requirements — like benchmarking and trying to squeeze the last few percentage points out of a system. Even there, it’s iffy.

Hardware RAID ties you to the vendor (at a minimum) and possibly the card generation; ZFS gains new features over time (actively being worked on; new checksum options just became available, for example, as well as large blocks not too long ago, etc...) and has the advantage that the posix layer down through the redundancy and drive IO is managed by one system, as opposed to a file system laying on top of a block storage system. Due to this, ZFS can tell you which file was impacted if there ever is a checksum failure without redundancy, for example.

On top of data integrity (job 1 of any file system), snapshots, send/recv, compression, zvols ... it’s not even close.

Get an LSI HBA, and run ZFS. You’ll be glad you did.

I really like hardware RAID as a volume manager, but the real question is what you put on top of it. The only modern file system suitable for hardware RAID is HAMMER1. I use HAMMER1 at home, but at work the lack of a critical mass of DragonFly developers made me decommission our only DF machine about 1.5 years ago. The HAMMER2 preliminary release is imminent (DF 5.0), and we will see if the project gets more traction. I just added another HDD to my home file server to test HAMMER2.
 
Doesn't it depend on the load? RAIDZ1 has better writes (parity is not a bottleneck with a modern CPU, and write parallelism is 3 data disks vs 2 for RAID 10).

You have a good point; in terms of sequential throughput, RAID 5 will be slightly faster.
However, when you're talking IOPS, RAID 10 is superior.
An SSD doesn't differentiate itself from an HDD by sequential throughput, IMO.
The "real world" performance gap between an SSD and an HDD comes from their difference in IOPS and latency.
Real-world performance will be better with RAID 10 than with RAID 5.
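If you'd rather measure than take anyone's word for it, fio (benchmarks/fio in ports) makes the random-IOPS difference easy to see; a rough sketch, with /tank/fiotest as a placeholder directory on the pool under test:

Code:
pkg install fio
mkdir -p /tank/fiotest

# 4k random reads, a reasonable proxy for the IOPS-bound case
fio --name=randread --directory=/tank/fiotest --rw=randread --bs=4k \
    --size=4g --numjobs=4 --iodepth=16 --ioengine=posixaio \
    --runtime=60 --time_based --group_reporting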
 
This looks like a troll post to me. RAID 5 (hardware or software, RAIDZ1 included) should not be used in production, period. On top of that, RAID is typically used in industry for high availability. Note that RAID is not a backup. High availability is not typically needed for home users. I have a hard time imagining who would need a 4-disk RAID 5 in industry these days. Even my small academic lab decommissioned its last RAID 5 more than 10 years ago.
I use 4x4TB drives in RAIDZ1 in my home server.
 
You shouldn't if you care about your data. With 4 HDDs you should use mirroring vdev(s).
Why? Because there's an inherent risk that the whole pool might die while replacing a bad drive? Sure, there's a small risk. But that's also why you have backups.
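And replacing a bad drive isn't dramatic in either layout; a rough sketch, assuming da2 died and the replacement shows up as da4 in a placeholder pool "tank":

Code:
# The pool keeps running degraded; swap in the new disk and resilver
zpool status tank
zpool replace tank da2 da4

# Watch resilver progress until the pool is back to ONLINE
zpool status tank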
 
For just four drives, I suggest zfs on top of the mainboard's SATA ports. Both mirrored vdevs of two drives each and RAIDZ1 would work better than any hardware RAID controller. If you can, add another drive and run RAIDZ2 for peace of mind. Once you use ZFS, any non-RAID SAS or SATA HBA will do the trick if you need more drives.
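A sketch of the five-drive RAIDZ2 option, plus the periodic scrub that catches latent errors early (pool and device names are placeholders; the periodic.conf knobs are the ones shipped in /etc/defaults/periodic.conf):

Code:
# Five 2TB drives in RAIDZ2: ~6TB usable, survives any two drive failures
zpool create tank raidz2 da0 da1 da2 da3 da4

# Have periodic(8) scrub the pool regularly
sysrc -f /etc/periodic.conf daily_scrub_zfs_enable=YES
sysrc -f /etc/periodic.conf daily_scrub_zfs_default_threshold=35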

Using hardware RAID 5 on FreeBSD is like buying an expensive racing bicycle and then riding it with training wheels attached. :D
 
There are various statements in this post that are wrong, yet contain a grain of truth:
RAID 5 (hardware or software, RAIDZ1 included) should not be used in production, period.
What is true: With modern very large drives, and the uncorrectable error rate of modern drives, any RAID code that is only single-fault tolerant (which includes RAID 5, ZFS RAIDZ1 and simple mirroring) is in and of itself insufficient, if you want better than two nines of data reliability. That doesn't mean that it shouldn't be used in production, but that for systems that do require higher reliability, it needs to be used together with other measures. But not everything needs such high standards; for quite a few uses, single-fault tolerant RAID (and even non-RAIDed disks) can be used for production.
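To put rough numbers on that first point: a 10^-14 uncorrectable bit error rate works out to about one unreadable sector per 12.5 TB read. Rebuilding a degraded 4x 2TB RAID 5 means reading roughly 6 TB from the three surviving drives, so if you take the spec at face value, the chance of hitting an unreadable sector somewhere during the rebuild is on the order of 40%, and with only single-fault tolerance there is no redundancy left to recover it from.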

On top of that, RAID is typically used in industry for high availability.
And for high data reliability. A system that is up and running but returns EIO when trying to access certain files is available, but not reliable. Another very simplified way to look at it is this: there are two ways for disks to fail. One is for a disk to completely stop responding (electronics failure); here RAID helps maintain high availability of the system as a whole. The other is for a disk to be functional but return read errors for a certain sector; here RAID helps maintain high reliability of the one file that is stored on the offending sector.

High availability is not typically needed for home users.
Have you ever had your home server down when your teenage child needs to access the web to finish his homework, and your spouse needs to print some documents for the meeting they're about to drive to? While home users typically don't need the 5 or 7 nines of availability that can be needed in commercial settings (not in all, by the way), an availability of about 3-4 nines makes life at home much easier.

I have a hard time imagining who would need a 4-disk RAID 5 in industry these days.
It is still used all the time. For example, because a certain system wants to use local storage (perhaps the network can't handle full-speed access to a storage server), and the data is small enough to fit in 3 disks' worth of space. In that case a combination of 4-disk RAID 5 and good backups (or snapshots), with asynchronous movement of the backup/snapshot data off host, gives you decent QoS with little network load at a very good cost.

Four 2TB HDDs are going to cost about $240 in the U.S.
Only if you use consumer-grade hard disks. Enterprise-grade hard disks (even high-capacity near-line drives), which have much better MTBF, tend to run a few hundred dollars each.

A new hardware RAID controller is about $700.
In many cases, they are de facto free, built into the motherboard (that's true of the better server motherboards); all you need to enable the RAID functionality is to buy the battery. And for add-on cards, an LSI (Broadcom) 4-port internal RAID card (the 9266-4i) can be had at Newegg for $299. Not that I particularly endorse Newegg or LSI, but that sets a price point.

Plus you should be ready to monitor and replace the RAID battery, which is about $100.
True. But you also need to monitor your disk drives, which have failure rates comparable to the RAID battery.

If file integrity, snapshots, and built-in backup are not needed, hardware RAID wins hands down over ZFS.
The debate between hardware RAID and software RAID (built into the file system, not into a separate RAID layer below the file system) is complex. There is not one correct answer for all cases. It depends on many factors.

The only modern file system suitable for hardware RAID is HAMMER1.
I think that statement is ridiculous. Many other file systems function very well on hardware RAID. Please explain why only Hammer can do it.
 
Only if you use consumer-grade hard disks. Enterprise-grade hard disks (even high-capacity near-line drives), which have much better MTBF, tend to run a few hundred dollars each.
This is the first time I've heard anybody refer to WD Reds or Seagate IronWolfs as consumer-grade hard disks (I prefer WD Reds, for the record). My estimate really is on the low side, but last time I checked Amazon, 2TB Reds were going for about $80 and IronWolfs $10 cheaper. So yes, I was a bit optimistic with $240, but $320 will definitely get you 4x 2TB WD Reds.

In many cases, they are de facto free, built into the motherboard (that's true of the better server motherboards); all you need to enable the RAID functionality is to buy the battery. And for add-on cards, an LSI (Broadcom) 4-port internal RAID card (the 9266-4i) can be had at Newegg for $299. Not that I particularly endorse Newegg or LSI, but that sets a price point.
I am surprised that a person who refers to WD Reds as consumer-grade hard drives refers to the built-in "fake" hardware RAID many modern motherboards come with as true hardware RAID. I just checked, and you are quite right; I was surprised to see that the 9266-4i goes for $299. The last time I bought a hardware RAID card (I think it was a 9266-8i, four years ago) my vendor charged me $700 (obviously 3 years of support and such cost money, but it looks like the prices have gone down quite a bit). I am also not disputing that you can pick up a used Areca or LSI hardware RAID card on eBay for close to $100 if you are lucky.




The debate between hardware RAID and software RAID (built into the file system, not into a separate RAID layer below the file system) is complex. There is not one correct answer for all cases. It depends on many factors.
I couldn't agree more with you on this one. To be perfectly honest, I don't think I have ever resolved my own internal hardware-vs-software RAID debate. I started switching all my file servers to ZFS about 4 years ago (about the same time I bought that last hardware RAID card for $700), but I have nothing but good things to say about the LSI products I have used over the years, and honestly nothing bad to say about XFS, which I used on top of the hardware RAID cards (I also ran UFS on top of one of those cards in one server for a year, and it was OK as well).

I think that statement is ridiculous. Many other file systems function very well on hardware RAID. Please explain why only Hammer can do it.
Well, some fools like me believe COW, checksumming, snapshots, self-healing, compression, built-in backup, and, God forbid, fine-grained history should be standard on any modern file system deployed in the 21st century. To my knowledge HAMMER1 is the only pure file system which provides most if not all of that. I am glad to learn you are not one of those fools. I hope you just don't fall for the fake snapshots on XFS that Red Hat is working on now that BTRFS is officially buried.

Code:
dfly# mount
ROOT on / (hammer, noatime, local)
devfs on /dev (devfs, nosymfollow, local)
/dev/serno/B620550018.s1a on /boot (ufs, local)
/pfs/@@-1:00001 on /var (null, local)
/pfs/@@-1:00002 on /tmp (null, local)
/pfs/@@-1:00003 on /home (null, local)
/pfs/@@-1:00004 on /usr/obj (null, local)
/pfs/@@-1:00005 on /var/crash (null, local)
/pfs/@@-1:00006 on /var/tmp (null, local)
procfs on /proc (procfs, local)
DATA on /data (hammer, noatime, local)
BACKUP on /backup (hammer, noatime, local)
/data/pfs/@@-1:00001 on /data/backups (null, local)
/data/pfs/@@-1:00002 on /data/nfs (null, NFS exported, local)
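And the snapshot/history/backup features mentioned above are all driven from hammer(8); roughly like this (the note text and the file path are just examples, see hammer(8) for the exact details on your release):

Code:
# Manual snapshot of the /home PFS, then list its snapshots
hammer snap /home "before upgrade"
hammer snapls /home

# Fine-grained history of a single file; undo(1) can pull out old versions
hammer history /home/path/to/file

# The built-in backup is hammer mirror-copy/mirror-stream from a master
# PFS to a slave PFS (see hammer(8) for the pfs-slave setup)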
 
This is the first time I've heard anybody refer to WD Reds or Seagate IronWolfs as consumer-grade hard disks (I prefer WD Reds, for the record).
WD Red: MTBF = 1M hours, workload = 180 TB/year, uncorrectable error rate = 10^-14, 3 year warranty.
WD Gold: MTBF = 2M hours, workload = 550 TB/year, uncorrectable error rate = 10^-15, 5 year warranty.

The numbers for Seagate are similar, I'm too lazy to look them up this morning (and these are all specifications, we know that the real world ignores specifications). In particular look at the factor of 10 difference in error rate and the factor of 2 in overall survivability, which is then reflected in the warranty. That's the visible part of the iceberg that enterprise-class drives are.

I am surprised that a person who refers to WD Reds as consumer-grade hard drives refers to the built-in "fake" hardware RAID many modern motherboards come with as true hardware RAID.
Server-class motherboards (the ones that come in commercial-grade servers from companies like Lenovo and HP) often have real RAID controllers built in, often using the same LSI (=Broadcom) chips that are used on the LSI cards. I used to have lots of Lenovo x3650 M5 servers, which have that, and we used that to good effect (the boot disk was already mirrored, even though it contained no user data, only the OS and installed software).

I agree that consumer-grade motherboards often have fake "hardware" RAID (which is really just a little hook for loading a Windows/Linux driver which then performs software RAID under the cover). That's pretty much junk, and I wouldn't trust it.

I hope you just don't fall for the fake snapshots on XFS that Red Hat is working on now that BTRFS is officially buried.
Honestly, I don't know very much about XFS these days. I use it on Linux installations as the file system for the root drive (without any fancy features); it might have snapshots, but I don't happen to care. It might be really good, it might not be, haven't had time to (or the need to) investigate. I don't think I have stored user data on it in at least a decade.
 
WD Red: MTBF = 1M hours, workload = 180 TB/year, uncorrectable error rate = 10^-14, 3 year warranty.
WD Gold: MTBF = 2M hours, workload = 550 TB/year, uncorrectable error rate = 10^-15, 5 year warranty.

The numbers for Seagate are similar, I'm too lazy to look them up this morning (and these are all specifications, we know that the real world ignores specifications). In particular look at the factor of 10 difference in error rate and the factor of 2 in overall survivability, which is then reflected in the warranty. That's the visible part of the iceberg that enterprise-class drives are.
That is super useful info. I work in an academic setting, not in a data center, so my budget might not allow me to go with more expensive drives than what I currently use.


Server-class motherboards (the ones that come in commercial-grade servers from companies like Lenovo and HP) often have real RAID controllers built in, often using the same LSI (=Broadcom) chips that are used on the LSI cards. I used to have lots of Lenovo x3650 M5 servers, which have that, and we used that to good effect (the boot disk was already mirrored, even though it contained no user data, only the OS and installed software).
Again, very useful info. Our university (Carnegie Mellon) has a contract with Dell, but I typically buy Supermicro from a third-party vendor. I have yet to come across a motherboard which has built-in, honest-to-God hardware RAID.

Honestly, I don't know very much about XFS these days. I use it on Linux installations as the file system for the root drive (without any fancy features); it might have snapshots, but I don't happen to care. It might be really good, it might not be, haven't had time to (or the need to) investigate. I don't think I have stored user data on it in at least a decade.
So what file system, if not XFS, are you using on Linux to store users' data? Red Hat defaults to XFS for everything. Ubuntu uses ext2 for /root IIRC, and /home can be ext4, but ext4 didn't support volumes larger than 16TB until recently. So XFS is the only choice on top of hardware RAID on Linux.

On FreeBSD, sure enough, UFS on top of hardware RAID works like a charm.
 
That is super useful info. I work in an academic setting, not in a data center, so my budget might not allow me to go with more expensive drives than what I currently use.
Absolutely, the cost per TB of enterprise-class drives is several times higher than consumer-class drives. In many cases, the cost of the raw drives is a small fraction of the lifetime cost of a storage system, so using enterprise-class drives for enterprise-grade availability and reliability requirements tends to be a good investment. But I completely understand that there are environments where the cost is prohibitive. In that case, you need to relax your requirements (live with higher downtime and more data loss), or use stronger RAID codes (most systems today offer 2- or 3-fault tolerant RAID), or do some creative budgeting. For example, in some cases you can use more labor expenses (to have more administration overhead, more dealing with failures, and build the systems yourself), because in some settings manpower is "free" (meaning it comes from a different budget).

Again, very useful info. Our university (Carnegie Mellon) has a contract with Dell, but I typically buy Supermicro from a third-party vendor. I have yet to come across a motherboard which has built-in, honest-to-God hardware RAID.
I know nothing about Dell servers, not having seen one in the flesh in at least a decade. They may have really good RAID controllers as an option on some motherboards, they may not. But on the servers I'm used to, real RAID is definitely present (and a good thing).

So what file system, if not XFS, are you using on Linux to store users' data?
At home, I use ZFS, including for RAID. Until about 6 years ago, I used OpenBSD FFS on top of hardware RAID, with a very careful backup regime. At work: Until very recently I worked for companies that build storage systems and file systems, and my job was creating those file systems, so the answer is: the file system we wrote and sold (I'm currently taking the summer off from work, mostly to do home projects and yard work, which is why I have more time to play around on FreeBSD forums). A web search will get you the answer quickly, but there is no reason to post it here: this forum is not the right place for comments about non-FreeBSD commercial software.
 
At home, I use ZFS, including for RAID. Until about 6 years ago, I used OpenBSD FFS on top of hardware RAID, with a very careful backup regime. At work: Until very recently I worked for companies that build storage systems and file systems, and my job was creating those file systems, so the answer is: the file system we wrote and sold (I'm currently taking the summer off from work, mostly to do home projects and yard work, which is why I have more time to play around on FreeBSD forums). A web search will get you the answer quickly, but there is no reason to post it here: this forum is not the right place for comments about non-FreeBSD commercial software.
So, a former OpenBSD (at least when it comes to storage) guy. Well, me too (an OpenBSD user for over 12 years), except that I use HAMMER at home and ZFS at work to store data :) Don't tell me that you are putting ZFS on top of hardware RAID :) We teach kids in this country not to do that :) I used to share an office with a guy who works for Penasis. Yes, I know you guys have better stuff; we just don't have the money to pay for it. Let's not get into a discussion of what you guys use under the hood ;)
 
It's spelled Panasas; used to work there. Really nice people, very smart. Just to be clear: The fact that I left that company reflects badly on the (highway-) traffic and gridlock in the area where I live; when they moved to a different building, it became virtually impossible to go to work there, because it is on the other side of the giant traffic jam that's called "San Jose", in particular the area around the airport (which nobody has ever come out alive from, at around 9am or 5pm).

No, I don't put ZFS on top of hardware RAID; instead I let ZFS do the RAID (my current system at home is only 2-way mirrored, which is good enough with my very strict backup regime). I've lost two disk drives in my server at home over the years, and ZFS handled it without any malfunction (on the part of ZFS; the disk drives and the BIOS were a different story). Actually, if you think about it: I've been using ZFS on FreeBSD since May 2012 now (just looked up my administration logs), and I've had about an hour of downtime due to file system and disk problems: the first dead disk needed over half an hour for me to get the system back up, since the bad disk prevented the motherboard from even running or booting; the second dead disk was really easy, and I had to take the server down and physically open it up twice to put new disks in, say 15 minutes each. That's nearly 5 nines! Pretty good for open source software, with software RAID, on commodity hardware (I have a micro-ATX motherboard in a Chinese case), administered by a part-time amateur. I think the key is using enterprise-class drives, a UPS, and really good open source software (that is: FreeBSD and its parts).
 