As already mentioned, for a few disks it's much simpler to just go with the on-board controller. Most motherboards support AHCI mode these days as well, which means you get hot-swap support if your case has a hot-swap capable backplane.
For more disks, LSI controllers are generally preferred at the moment, for two main reasons:
- They are one of the few manufacturers that actually provide proper HBA firmware, rather than a full RAID controller with a JBOD/single-disk mode.
- The current driver in FreeBSD is provided and supported by LSI.
--
IIRC ZFS RAID-Z3 which should be supported by FreeBSD has fault tolerance to three failed disks which is better than RAID 6 (hardware or software) fault tolerance of two (I think FreeBSD doesn't have support for software RAID 6)
RAID-Z3 writes three parity blocks per stripe, RAID-Z2 writes two, making RAID-Z2 effectively the ZFS equivalent of RAID 6. It makes no real sense to say one is 'better' than the other. RAID-Z3 should be slower (as it's computing a third parity block for each stripe), but if you're using large disks in a big pool you may want the extra redundancy. It's up to the user to decide which RAID level meets their needs.
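To make the distinction concrete, here's a rough sketch of creating each layout. The pool name and device names (tank, da0..da6) are made up for illustration; these commands need root and real disks, so treat them as a fragment, not something to paste in blindly:

```shell
# RAID-Z2: two parity blocks per stripe, survives two failed disks.
# Hypothetical 5-disk example.
zpool create tank raidz2 da0 da1 da2 da3 da4

# RAID-Z3: three parity blocks per stripe, survives three failed disks.
# Hypothetical 7-disk example.
zpool create tank raidz3 da0 da1 da2 da3 da4 da5 da6
```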
ZFS doesn't like hardware controllers but you will actually need them
Don't really know what you're getting at here.
I think that minimal number of disks (people will correct me) to set up ZFS RAID-Z3 is something like six
It's 4, but the recommended number of disks for RAID-Z3 is probably 7 or 11. I don't know what relevance posting this has here, though, especially if you're not sure of the correct figure.
Due to the fact that FreeBSD doesn't have hot swap daemon to have complete functionality you have to have RAID card on but release control to ZFS
All the 'hot swap daemon' does is automate the process of running a zpool replace. FreeBSD has supported hot-swap hardware for years, even directly on motherboards if they have AHCI, as mentioned above. It just doesn't automatically use ZFS spares if a disk fails. Having a hardware RAID controller makes no difference to the functionality of ZFS on FreeBSD, so I don't see why you'd *have* to have one.
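For anyone finding this later, the manual step the 'daemon' would automate is just this (pool and device names are hypothetical, and the commands obviously need a real pool with a failed disk):

```shell
# After physically swapping the failed disk in the same bay,
# resilver the new disk into the pool in place of the old one:
zpool replace tank da3

# Watch resilver progress and pool health:
zpool status tank
```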
You need tons of RAM as in more than 128 GB to do anything serious
The more RAM the better, and if it's a 'serious' storage system then a decent amount of RAM is probably going to be by far one of the cheapest components of the system. Having said that, many people run ZFS fine with 16 GB or less. You might just want to manually limit the ARC on a low-RAM system that serves multiple roles, because ZFS was designed primarily for large-scale storage systems and expects to be able to use all of the system's RAM if it wants.
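On FreeBSD, limiting the ARC is a one-line loader tunable. The 4 GB cap below is just an example value; pick whatever fits your workload:

```
# /boot/loader.conf -- cap the ZFS ARC so other workloads keep their RAM
vfs.zfs.arc_max="4G"
```

It takes effect on reboot.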
Unless you have compelling reason to use ZFS, dedicated storage engineer to manage ZFS and your employer to purchase the hardware
ZFS is easier to use than any combination of RAID system and filesystem I've used before (so a dedicated storage engineer would be more applicable to traditional RAID systems than to ZFS), and for most systems it shouldn't require any changes to hardware choice.
I do not use ZFS but I did some research
Do you research
Yes, do your research. A lot of what you've posted is either misinformation or makes little sense, and is fairly irrelevant to the original poster's question about how best to attach his disks for a ZFS mirror. I wouldn't have hijacked the thread further by replying, but I would rather not have users find this post in the future and take incorrect or unclear information as gospel.