ZFS on hardware RAID

Hello,
I used CentOS for 10 years on a particular server. Now that CentOS 8 is gone, I would like to switch to FreeBSD instead of Linux (no particular reason, I might just as well use Debian, but I like FreeBSD).

Now here is my doubt. The server has a hardware RAID controller, an LSI MegaRAID with 6 slots. My storage configuration is:
2 SSDs in RAID-1 array for /
2 SSDs in RAID-1 array for /var/lib/mysql
2 HDDs in RAID-1 array for /backup

Which filesystem should I use with FreeBSD? I'm leaning toward ZFS; I have some experience with it on Proxmox, but the problem here is the hardware RAID controller. Can ZFS work reliably with HW RAID? If yes, how can I recreate a storage schema like the one above? I still need to use the same HW RAID configuration as described, which gives me the devices /dev/sda, /dev/sdb and /dev/sdc. What now? Should I create a separate ZFS pool on each /dev/sdX? Does that make any sense, or should I avoid that, use UFS instead and go the old-school way?

Thank you
Ivan
 
I think it's better to give the individual disks to ZFS and let it do the mirroring: switch the LSI to IT mode, use passthrough, or, as a last resort, make a RAID-0 out of each single disk.
 
Absolutely, use JBOD on your controller and avoid a hardware RAID configuration if you want to take full advantage of ZFS (strong emphasis)!
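With the disks passed through, the two data mirrors are a one-liner each. A rough sketch (da0..da5 are just placeholder names; the root/boot mirror is easiest left to the installer's root-on-ZFS option):

Code:
# SSD pair for the database
zpool create db mirror da2 da3
# HDD pair for the backups
zpool create backup mirror da4 da5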
 
I need to see if JBOD mode is possible on this controller. I had never thought of trying a RAID-0 with a single disk if JBOD is not possible.
OK, thanks, I got it. Either JBOD (or RAID-0s with a single drive in each) and ZFS, or RAID-1 and UFS...
All the best.
Ivan
 
Also, think about what happens (or needs to happen) when one (or more) of the drives fails. After all, one of the main reasons for running ZFS is creating a pool (or pools) that lets you replace a failed drive without data loss (or a restore-from-backup operation). So you had better know that you can create/recreate whatever you need in the HW RAID controller's BIOS/configuration program.
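With ZFS mirrors, the replacement itself is only a couple of commands. A rough sketch (pool and device names are just examples):

Code:
# see which device has faulted
zpool status db
# swap in the new disk and let ZFS resilver
zpool replace db da3 da6
# watch the resilver finish
zpool status db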
 
Hi, this is my setup


Bye
 
IMHO, while ZFS was clearly designed for managing multiple physical disks, and providing advantages over hardware RAID systems, you still get a ton of advantages by using ZFS even when on a single physical or virtual disk. Even in cases where I'm working with a hardware RAID, with VMware on top, I will still then create a single virtual disk and format it with ZFS. You get snapshots, clones, easy replication, flexible mount points, boot environments, deduplication, virtually unlimited expansion, etc.
 
IMHO, while ZFS was clearly designed for managing multiple physical disks, and providing advantages over hardware RAID systems, you still get a ton of advantages by using ZFS even when on a single physical or virtual disk. Even in cases where I'm working with a hardware RAID, with VMware on top, I will still then create a single virtual disk and format it with ZFS. You get snapshots, clones, easy replication, flexible mount points, boot environments, deduplication, virtually unlimited expansion, etc.
And checksums.
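A quick taste of what those buy you in practice (dataset, pool and host names here are only examples):

Code:
# point-in-time snapshot, nearly free thanks to CoW
zfs snapshot tank/data@before-upgrade
# roll back if things go wrong
zfs rollback tank/data@before-upgrade
# replicate the snapshot to another machine
zfs send tank/data@before-upgrade | ssh backuphost zfs recv backup/data
# and the checksums: a scrub verifies every block end to end
zpool scrub tank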
 
Hello all,
Very useful replies there, you're all amazing, thank you. I certainly have more arguments to evaluate now than I had thought of before.

I would use UFS for MySQL and stay with hardware RAID-1.
Would you please state the reason to avoid ZFS with MySQL? Thank you!
 
"ZFS vs. UFS after 5 min. warm-up. Higher scores show better performance." - most scores are higher for ZFS.
 
Performance is one thing. But the really cool thing is Copy-on-Write. CoW solves a lot of database consistency problems you don't even want to think about.

The original reason I switched to ZFS was this: it can be as slow as it wants, because it makes my database backups ten times smaller. I do continuous redo-log backup (with Postgres), and when you have CoW you don't need to back up the full 8 kB database blocks for consistency, only the rows that actually changed. The case where some sectors of an 8 kB block were written before a crash and others were not simply cannot happen.
 
ZFS will work just fine on the single block devices provided by the RAID controller.
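For your layout that would be something like this (mfid1/mfid2 are just what I would expect the mfi(4) virtual drives to be called; check with geom disk list):

Code:
# one pool per virtual drive exposed by the controller
zpool create db /dev/mfid1
zpool create backup /dev/mfid2
# point the datasets at the paths you actually use
zfs set mountpoint=/var/db/mysql db
zfs set mountpoint=/backup backup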

As others have said, the important thing is how you notify yourself of disk breakages and how you manage disk replacements. If you are already comfortable with the support software for your controller that would be fine. Otherwise I would let FreeBSD do the mirrors.

Whether ZFS is a great choice for mysql I don't know.
 
Thank you all again. This is a summary.

I decided to go with UFS and keep hardware RAID mirroring with the same scheme I used before (separate virtual drives for OS/boot, MySQL and backup). The main reason for the decision is that my confidence in working with CoW filesystems is still on the lower side. The controller does support JBOD, but I was a bit anxious about having ZFS on the boot device (I would probably not know how to recover if the system failed to boot), and I was not sure about the space-usage overhead of ZFS. I believe that is not an issue in a mirror configuration, only with raidz1, raidz2, ... volumes, but I did not want to find such things out later, in an operational state. The big deciding factor is that the server is colocated several hours away from me, without remote hands on site, so I would have to drive there physically if, for example, the system failed to boot. If the server were physically accessible to me all the time, I would probably configure JBOD and ZFS.

Also, regarding managing disk replacements and so on: I can manage all aspects of the RAID controller using mfiutil, and smartctl sees the physical disks through the controller without issue.
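For the record, this is the sort of check I mean (device names and modules will differ on other cards):

Code:
# controller view of physical and virtual drives
mfiutil show drives
mfiutil show volumes
# SMART data through the controller: if the disks are not visible yet,
# the mfip(4) pass-through module exposes them as CAM pass devices
kldload mfip
camcontrol devlist
smartctl -a /dev/pass0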

Greetings to all!
Ivan
 
I was a bit anxious about having ZFS on the boot device
Me too. I don't do it yet. I don't want to see the boot fail and then realise that the rescue boot does not have the correct ZFS features and cannot write to the pool, etc. etc.

So this is what I usually do; the "bf" volumes are the ones on ZFS:

Code:
$ df
Filesystem           1K-blocks      Used     Avail Capacity  Mounted on
/dev/da1s1a            3044988    345280   2456112    12%    /
/dev/da1s1d            4053308    675736   3053308    18%    /usr
/dev/da1s1e            1015324      8600    925500     1%    /var
bf/var/spool         322520868       104 322520764     0%    /var/spool
bf/var/log           322554036     33272 322520764     0%    /var/log
bf/var/db            322574988     54224 322520764     0%    /var/db
bf/usr/local         322950004    429240 322520764     0%    /usr/local
bf/var/named         322522804      2040 322520764     0%    /var/named
...

But this one needs a bit of handicraft to set up, and I wouldn't want to do that when I do not have a full console or IPMI at hand...
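The handicraft boils down to roughly this (da2 and the dataset list are just an example matching the df output above):

Code:
# UFS keeps /, /usr and /var; ZFS only carries the data trees
zpool create -m none bf da2
zfs create -o mountpoint=none bf/usr
zfs create -o mountpoint=none bf/var
zfs create -o mountpoint=/usr/local bf/usr/local
zfs create -o mountpoint=/var/db    bf/var/db
zfs create -o mountpoint=/var/log   bf/var/log
zfs create -o mountpoint=/var/spool bf/var/spool
# ...and so on for the rest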
 
I think you will not gain any benefit from using ZFS for the MySQL database, as MySQL already has good memory management and indexing. So ZFS will only waste memory that could be given to MySQL.

Or, let ZFS handle the caching, along with the benefits of compression. My ARC is currently 92GB compressed/198GB uncompressed (1:2.15), but I've seen better ratios than that.

The innodb bin log sits on Optane SSD (with immediate sync), the ZFS intent log is also on Optane SSD (immediate sync), and then the main db sits on spinning rust (with more relaxed "standard" sync). MySQL gets about 14GB of the 128GB physical RAM for itself, and most of the remaining RAM is used for ZFS caching.

ARC hit ratio is 99.37%, L2ARC hit ratio is 41.12%

You do need to make some changes to the MySQL config, for example to disable buffered double writes (which ZFS does itself with the intent log).
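A rough sketch of those settings (the dataset name and values are only examples, not a recipe):

Code:
# ZFS side: match InnoDB's 16k page size and skip atime updates
zfs set recordsize=16k tank/mysql
zfs set atime=off tank/mysql
zfs set compression=lz4 tank/mysql
# MySQL side (my.cnf): ZFS never leaves a block half-written,
# so the doublewrite buffer is redundant
#   innodb_doublewrite = OFF     (skip-innodb-doublewrite on older versions)
#   innodb_flush_method = fsync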
 
What is the biggest single table that you have in MySQL?
For me it is 18.5 TB as of today (MariaDB, in fact).

Turning back to the question: I would never use a UFS system on FreeBSD.
If you use UFS, the reason for running FreeBSD falls away, and at that point Debian is the better choice.

With ZFS the whole computer becomes a sort of "supercontroller"; there is not the slightest comparison with hardware RAID systems.

In some cases I still use Solaris today, whose CIFS support (not for domains) is unmatched.

For ZFS boot maintenance (rescue), I use this: https://mfsbsd.vx.sk/
It is about 100 MB and very quick to deploy to remote servers.
 
The main reason for the decision is that my confidence in working with CoW filesystems is still on the lower side.
This is understandable; however, learning ZFS is really, really worthwhile.

... was not sure about the space-usage overhead of ZFS.
ZFS saves space by compression
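Easy to check for yourself (pool/dataset is just a placeholder):

Code:
zfs set compression=lz4 pool/dataset
zfs get compression,compressratio pool/dataset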

The big deciding factor is that the server is colocated several hours away from me, without remote hands on site, so I would have to drive there physically if, for example, the system failed to boot.
This is actually a factor _against_ using UFS... at least in my experience from 20 years of using FreeBSD personally and professionally.

Also consider this argument against a RAID controller configuration: if your controller card burns out, you will probably need the same controller card to restore the data. The metadata the controller writes onto the disks is of no use to other controllers; on the contrary, it often ties you to that specific controller, so the disks with data on them cannot be used elsewhere. If your RAID controller card burns out after years of service and is no longer available, good luck finding one... If you stick to JBOD, you can just plug in any other controller card and use your zpool the same way as before. Furthermore, the error-correcting checksums in ZFS are much more advanced than those used by conventional RAID controllers.
 
Also consider this argument against a RAID controller configuration: if your controller card burns out, you will probably need the same controller card to restore the data. The metadata the controller writes onto the disks is of no use to other controllers; on the contrary, it often ties you to that specific controller, so the disks with data on them cannot be used elsewhere. If your RAID controller card burns out after years of service and is no longer available, good luck finding one... If you stick to JBOD, you can just plug in any other controller card and use your zpool the same way as before. Furthermore, the error-correcting checksums in ZFS are much more advanced than those used by conventional RAID controllers.
Thank you, but when I replaced the old drives with larger ones, I forgot to back everything up over the network and consequently later needed some stuff from the removed disks. No problem: I stuck one of the disks pulled from the RAID-1 array into a USB enclosure (they are SATA) and read it just fine on my desktop computer. So I don't see this argument as a valid one.
 
Just one question:
Why use FreeBSD if you do not want ZFS?
Go to Debian instead (if you really want the HW controller).
I come at it from the opposite side: I use FreeBSD precisely because it runs ZFS (and gets more support than Solaris).
If/when a ZFS as robust as the one in FreeBSD 11 runs on Debian... goodbye BSD.

UFS vs ZFS is like a pistol vs an ICBM.
And, almost always, no tuning at all is needed.
Apart from limiting the ARC size (one line in loader.conf, see below), everything is plug and play in almost all cases.
It runs on NVMe too.

Of course no raidz, only mirrors, to reduce any strange behaviour.
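The ARC cap is a single tunable in /boot/loader.conf (the size here is only an example):

Code:
# limit the ARC to 16 GB
vfs.zfs.arc_max="17179869184"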
 
Actually ZFS under Linux and FreeBSD is the same since FreeBSD version 13. FreeBSD dumped its previous implementation and switched over to OpenZFS in that major release. So different kernels and OS, but mostly the same code base under the hood.
 
Actually ZFS under Linux and FreeBSD is the same since FreeBSD version 13. FreeBSD dumped its previous implementation and switched over to OpenZFS in that major release. So different kernels and OS, but mostly the same code base under the hood.
That's exactly why I wrote "BSD 11".
FreeBSD 13 and Debian (or whatever else runs ZFS) are not, in my opinion, safe to use.
Too many glitches, too many strange things.
FreeBSD 12 is no better than 11 in this respect.
In very important situations, for this reason, I have Solaris machines.
 