ZFS on a hardware RAID controller

Hi all. For over two years I have had a FreeBSD 13 server with 4 drives configured as ZFS RAID10 (striped mirrors). The controller is the HBA on the main board. Everything works as expected, no issues so far.

I need to expand the storage capacity and the only card I have is an LSI 9260 (an IBM M5015, in fact). As you may know, this card doesn't support IT mode. All the topics related to this are about 10 years old, and everybody recommends getting an older LSI 9240 / IBM M1015 and flashing it to IT mode. I don't want that. I just want to use what I have, the LSI 9260.

My question is as follows: I have 2 large disks I want to mirror to back up some data.

What would you recommend: a single-drive RAID0 on each disk from the controller and then a mirror with ZFS, or a mirror on the controller and just using ZFS as the filesystem?

Thanks
 
The card might support configuring the disks as JBOD. That would be preferred.
 
My choice would be:

Never ever RAID0, UNLESS on sacrificial volumes (i.e. ones used for test-restoring backups). In that case even a 4x striped ZFS pool on cheap SSDs, for more bandwidth.

In every other case a ZFS mirror, not RAIDZ, unless the resulting volume size would be too small.

Short version: ZFS mirror as much as you can.
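
For reference, a minimal sketch of the pure-ZFS two-disk mirror, assuming the controller exposes the disks directly and that /dev/da0 and /dev/da1 are the two drives (device names are placeholders, check geom disk list for yours):

Code:
# Create a mirrored pool named "backup" from the two whole disks
zpool create backup mirror /dev/da0 /dev/da1
# Confirm both sides of the mirror are ONLINE
zpool status backup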
 
If the controller doesn't support JBOD mode I wouldn't use it.

Many controllers that do not can be flashed to support it, though.
 
It's a bit of a mix when it comes to LSI RAID cards, at least on the ones I've seen. Some have an option that enables all connected drives as JBOD, some can configure individual drives as JBOD, and some don't have that JBOD option at all.
 
It's a bit of a mix when it comes to LSI RAID cards, at least on the ones I've seen. Some have an option that enables all connected drives as JBOD, some can configure individual drives as JBOD, and some don't have that JBOD option at all.
In the BIOS interface of the card there is no such option, that's for sure.

On FreeBSD, if I use megacli to query the supported RAID levels, I get the following:
Code:
RAID Level Supported             : RAID0, RAID1, RAID5, RAID00, RAID10, RAID50, PRL 11, PRL 11 with spanning, SRL 3 supported, PRL11-RLQ0 DDF layout with no span, PRL11-RLQ0 DDF layout with span
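
(For reference, that summary can be pulled with something like the command below, assuming the sysutils/megacli port is installed; the installed binary name may vary.)

Code:
# Dump the full adapter info and keep only the supported RAID levels line
MegaCli -AdpAllInfo -aALL | grep -i "RAID Level Supported"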

If I use mfiutil show adapter, I get the following:

Code:
     RAID Levels: JBOD, RAID0, RAID1, RAID5, RAID10, RAID50

If I create a JBOD with the following command and then look at the configuration:

Code:
mfiutil create jbod -v 64

mfiutil show config
mfi0 Configuration: 1 arrays, 1 volumes, 0 spares
    array 0 of 1 drives:
        drive 64 (  838G) ONLINE <SEAGATE ST900MM0026 0001 serial=S0N02LZB> SCSI-6
    volume mfid0 (837G) RAID-0 64K OPTIMAL spans:
        array 0


So it actually creates a RAID0 array; the JBOD option is not really doing what I want.

So... since I'm going to use 2 drives in a mirror... I guess the easiest option is to make the array on the controller and use ZFS only as the filesystem.
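
In that case the steps would look roughly like this. This is only a sketch: the drive IDs 64/65, the volume name mfid0 and the pool name "backup" are assumptions, so check mfiutil show drives and mfiutil show config for your own values.

Code:
# Remove the leftover single-drive test volume
mfiutil delete mfid0
# Build the RAID1 mirror on the controller from the two physical drives
mfiutil create raid1 -v 64,65
# Put a single-device pool on the resulting volume; redundancy is handled by the card,
# ZFS is used only for its filesystem features (no self-healing without ZFS-level redundancy)
zpool create backup /dev/mfid0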

I also found in a box an old LSI 1064E controller, which is 3 Gb/s... the new drives I want to use are capable of 12 Gb/s... Anyway, in a RAID1 with 2 drives the maximum transfer speed I can achieve is around 200 MB/s, so I guess the 3 Gb/s controller will do the job temporarily.
 
On the other hand... what is the best practice if someone decides to use a hardware RAID controller on FreeBSD?
 
On the other hand... what is the best practice if someone decides to use a hardware RAID controller on FreeBSD?

Not any different from other OSes and hardware RAID. If the controller has battery backup for its cache it can be beneficial for some usage patterns, but that doesn't make up for the disadvantages of proprietary RAID formats, or for exposing yourself to the RAID write hole that ZFS closes.
 
On the other hand... what is the best practice if someone decides to use a hardware RAID controller on FreeBSD?
ZFS is just like a "giant hardware RAID controller" with a very powerful CPU and a lot of RAM.
The "best" practice is... do not "decide" so.
The "fair" practice is... it is not a big deal; maybe you can try RAID-1, but do not forget to back up.

zfs is not some kind of voodoo, it is just a filesystem
 
If you have a hardware RAID controller, and it has battery-backed RAM for its write cache (which is VITAL for the performance of small updates), and you really trust it to function correctly even in situations like crashes, power failures, and faulty disks (which is, after all, what RAID is all about): then use it. It won't be as good as ZFS, although it might actually have better performance for some workloads (and worse performance for others).

zfs is not some kind of voodoo, it is just a filesystem
No, there is some magic. The magic is that ZFS integrates the RAID layer with the file system, which seriously improves the performance, safety, and reliability of a RAID system. If you think about how file systems write data, and you integrate those write patterns into the RAID layer, you can come up with better solutions: in particular, solutions that make small writes and in-place updates not be a safety and performance issue, meaning without opening the "write hole" that otherwise requires either performance-killing techniques or extra hardware (such as SSDs or battery-backed RAM).

ZFS is not the only file system that gets benefits from integrating itself with the RAID layer. But it is the only one of those that's available as FOSS (free software).
 
I have been using hardware RAID controllers for years on Linux and Windows servers and never had any problem. Rule number one is to always have an identical RAID card as a backup. Rule number two: when a disk starts to report or correct errors, change it ASAP.

At home I have a FreeBSD server set up as a file server, plus some iocage jails for testing various stuff. Since I was reading so many nice things about ZFS and what it can do, I decided to give it a try. Now I have some files I need to back up from another backup, and I just need to create a simple RAID mirror for redundancy... the disks are SAS, and the mainboard HBA used for ZFS is SATA only.

I'm OK with using hardware RAID configured as RAID1 (I have a backup card; remember rule no. 1) and ZFS as the filesystem, since my files will not get corrupted over time.
 
"ZFS is not voodoo" means: you cannot trust ZFS as something that magically keeps you out of any trouble.

I have used ZFS since the very first release, on Solaris machines too.

HW controllers, or any other filesystem on servers, are ancient history for me.

THE "thing" is resilvering by checksums
When a HW controller detect a difference from the two raid1 throw a panic and halt the system
Because cannot know where the good data is (of course if the drives seems to work. Usually hdd starts with some relocated sectors, then smart errors, then die. Not always, but quite often for 24/365)

I get this behayvior a LOT of times
Sometimes a drive simply die and detach
Sometimes not


Zfs happily continue to run,ignoring the faulting drive or even faulted

You change the drive, then scrub
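
On the ZFS side that whole recovery is roughly the following (pool and device names are made-up examples, not from this thread):

Code:
# Swap the failed disk da2 for the new disk da5; ZFS resilvers from checksummed data
zpool replace tank da2 da5
# Watch the resilver progress
zpool status tank
# Once resilvering is done, re-verify every block against its checksum
zpool scrub tank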

No PCI controller can do the same.

The whole point is: on ZFS I can check integrity.
With a PCI RAID controller I cannot; I must trust it.

I do not like to trust, and even less to "pray".
 
fcorbelli,

What you are saying there makes sense. Another thing I like about ZFS versus HW controllers is that if the HBA dies, I can plug in any other HBA and everything will work the same as before.

Regarding the HBAs: what would you recommend? I know that if it is not mission critical and I don't have more than 8 disks, any HBA will do the job, but I like to be informed :)

Have a great day!
 
Ahem... onboard SATA for the internal backup and system drives, always in a mirror.
RAID0 of up to 4 drives for a temporary backup-verify volume (plus 2 for boot), or 2 boot, 2 temp, 2 spinning backup drives.

PCIe card adapters for NVMe drives.
Yes, ZFS mirrors NVMe drives too, and I have been using this kind of drive for data since 2017.

Not mounted on the motherboard, even if some sockets are present: too hard to change in case of a failure.

Maybe a bit less bandwidth, but a huge pain in the ass saved.

Beware of PCIe lanes (usually not a problem on Xeon machines).
 
On the other hand... what is the best practice if someone decides to use a hardware RAID controller on FreeBSD?

Well... maybe too late...
I use a lot of HW RAID controllers on HP servers (from G8 to G10); there I use the controller and not IT mode (even if the card supports it). In my case HP is... well... not as friendly... that's why.

I did try setting up many single-drive RAID0 volumes (as you suggested) and then taking all of them into ZFS, but the performance was not there. Also, the disk LEDs did some crazy stuff sometimes (in IT mode as well). I lose some of the benefits that ZFS offers; I have to accept that. :/

So, your question:
I would configure a mirror on the card and not mix 2x RAID0 into ZFS.
(With that said, I haven't experimented much with many RAID0 volumes under ZFS.)

The performance is also good with HW RAID. Here is an (ugly) disk test on an HP DL360 G9 with a P440 controller and 8x 500 GB Samsung 870 EVO drives in RAID10. The OS is 13.1-RELEASE-p2 running bhyve(8):
Code:
#!/bin/sh
time sh -c "dd if=/dev/zero of=tempfile999 bs=10000k count=1k && sync"

# ./speedtest.sh
1024+0 records in
1024+0 records out
10485760000 bytes transferred in 3.610524 secs (2904220952 bytes/sec)
3.64 real 0.00 user 3.64 sys


And on a VM (13.1-RELEASE-p3) on the bhyve server:
# ./speedtest.sh
1024+0 records in
1024+0 records out
10485760000 bytes transferred in 5.935198 secs (1766707613 bytes/sec)


The server is running a lot of active VMs; with no VMs running, the performance is around 3.7 gigabytes/s.
 
Yes, on the server:
# ./speedtest.sh
1024+0 records in
1024+0 records out
10485760000 bytes transferred in 47.457687 secs (220949666 bytes/sec)
74.23 real 0.00 user 47.18 sys


And on the VM:
# ./speedtest.sh
1024+0 records in
1024+0 records out
10485760000 bytes transferred in 96.523977 secs (108633733 bytes/sec)
101.93 real 0.01 user 72.39 sys
 
Nice :)
Now you know why you should not test through the file system. Instead, the disk test must be performed at block level, with data that can't be compressed by the disk firmware or the controller.
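
A rough sketch of both ideas on FreeBSD (the device name /dev/mfid0 and the sizes are placeholders, not from this thread):

Code:
# Block-level, read-only transfer-rate test straight from the device, no filesystem involved
diskinfo -t /dev/mfid0
# If you test through the filesystem anyway, at least feed it incompressible data
dd if=/dev/random of=/tmp/rand.bin bs=1m count=1024
time sh -c "dd if=/tmp/rand.bin of=tempfile999 bs=1m && sync"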
 