ZFS raidz3 with LSI MegaRAID SAS 9264-8i

I planned to purchase a 9211-8i IT for a current project, but then was given a 9264-8i free of charge, so I thought I'd give that a try.

When I turn the computer on, this is how the adapter shows up amongst the P.O.S.T. messages.

Code:
LSI MegaRAID SAS-MFI BIOS
Version 3.09.00 (Build August 27, 2009)
Copyright (c) 2009 LSI Corporation

HA -0 (Bus 2 Dev 0) LSI MegaRAID SAS 9264-8i

FW package: 12.0.1-0102
...irrelevant garbage here about devices missing from previous configuration...
Press <Ctrl><H> for WebBIOS _

Pressing Ctrl+H there did not have any visible effect.
I have also seen a "Press C to configure" prompt, followed by a "press Y for yes" confirmation, but those did not bring up the expected configuration interface either.

After running mfiutil clear under FreeBSD, the irrelevant garbage was no longer displayed, and the "Press C to configure" message never appeared again.


The relevant part of a pciconf -lv output is this.
Code:
mfi0@pci0:2:0:0:    class=0x010400 rev=0x05 hdr=0x00 vendor=0x1000 device=0x0079 subvendor=0x1000 subdevice=0x9264
    vendor     = 'Broadcom / LSI'
    device     = 'MegaRAID SAS 2108 [Liberator]'
    class      = mass storage
    subclass   = RAID

Then I played around with the adapter and a pair of 1 TB SATA SSDs.
Code:
root@ede800g1:~ # freebsd-version -ku
13.1-RELEASE-p2
13.1-RELEASE-p2

root@ede800g1:~ # mfiutil version
mfiutil version 1.0.15

root@ede800g1:~ # mfiutil show adapter
mfi0 Adapter:
    Product Name: LSI MegaRAID SAS 9264-8i
   Serial Number: SV20119474
        Firmware: 12.0.1-0102
     RAID Levels: JBOD, RAID0, RAID1, RAID5, RAID6, RAID10, RAID50
  Battery Backup: not present
           NVRAM: 32K
  Onboard Memory: 256M
  Minimum Stripe: 8K
  Maximum Stripe: 1M

root@ede800g1:~ # mfiutil show battery
mfi0: No battery present

root@ede800g1:~ # mfiutil show firmware
mfi0 Firmware Package Version: 12.0.1-0102
mfi0 Firmware Images:
Name  Version            Date         Time      Status
BIOS  3.09.00                                   active
APP   2.0.83-1327        Jul 28 2011  14:33:20  active
PCLI  02.00-015:#%00008  Oct 30 2009  13:31:45  active
BCON  3.0-22-e_12-Rel    Nov 20 2009  16:36:07  active
NVDT  2.02.0043          Jun 09 2010  13:32:09  active
BTBL  2.00.00.00-0018    Apr 17 2009  13:09:17  active
BOOT  01.250.04.219      4/28/2009    12:51:38  active

root@ede800g1:~ # mfiutil show drives
mfi0 Physical Drives:
 2 (  932G) ONLINE <1TB 1c70 serial=1248> SATA E1:S2
 5 (  932G) ONLINE <1TB 1c70 serial=1727> SATA E1:S5

root@ede800g1:~ # mfiutil create jbod E1:S2
root@ede800g1:~ # mfiutil create jbod E1:S5

root@ede800g1:~ # mfiutil show volumes
mfi0 Volumes:
  Id     Size    Level   Stripe  State   Cache   Name
 mfid0 (  931G) RAID-0      64K OPTIMAL Writes  
 mfid1 (  931G) RAID-0      64K OPTIMAL Writes  

root@ede800g1:~ # mfiutil show config
mfi0 Configuration: 2 arrays, 2 volumes, 0 spares
    array 0 of 1 drives:
        drive  2 (  932G) ONLINE <1TB 1c70 serial=1248> SATA
    array 1 of 1 drives:
        drive  5 (  932G) ONLINE <1TB 1c70 serial=1727> SATA
    volume mfid0 (931G) RAID-0 64K OPTIMAL spans:
        array 0
    volume mfid1 (931G) RAID-0 64K OPTIMAL spans:
        array 1

After that, the relevant portion of my dmesg output is below.
Code:
mfi0: <LSI MegaSAS Gen2> port 0xe000-0xe0ff mem 0xf7c80000-0xf7c83fff,0xf7c40000-0xf7c7ffff irq 16 at device 0.0 on pci2
mfi0: Using MSI
mfi0: Megaraid SAS driver Ver 4.23 
mfi0: FW MaxCmds = 1008, limiting to 128
mfi0: 8760 (718759164s/0x0020/info) - Shutdown command received from host
mfi0: 8761 (boot + 3s/0x0020/info) - Firmware initialization started (PCI ID 0079/1000/9264/1000)
mfi0: 8762 (boot + 3s/0x0020/info) - Firmware version 2.0.83-1327
mfi0: 8763 (boot + 11s/0x0020/info) - Board Revision 62A
mfi0: 8764 (boot + 29s/0x0002/info) - Inserted: PD 02(e0xff/s2)
ehci1: <Intel Lynx Point USB 2.0 controller USB-A> mem 0xf7d3b000-0xf7d3b3ff irq 23 at device 29.0 on pci0
mfi0: 8765 (boot + 29s/0x0002/info) - Inserted: PD 02(e0xff/s2) Info: enclPd=ffff, scsiType=0, portMap=00, sasAddr=4433221101000000,0000000000000000
usbus2: EHCI version 1.0
mfi0: 8766 (boot + 29s/0x0002/info) - Inserted: PD 05(e0xff/s5)
mfi0: 8767 (boot + 29s/0x0002/info) - Inserted: PD 05(e0xff/s5) Info: enclPd=ffff, scsiType=0, portMap=01, sasAddr=4433221105000000,0000000000000000
mfi0: 8768 (boot + 29s/0x0001/info) - Policy change on VD 00/0 to [ID=00,dcp=21,ccp=20,ap=0,dc=0,dbgi=0] from [ID=00,dcp=21,ccp=21,ap=0,dc=0,dbgi=0]
mfi0: 8769 (boot + 29s/0x0001/info) - Policy change on VD 01/1 to [ID=01,dcp=21,ccp=20,ap=0,dc=0,dbgi=0] from [ID=01,dcp=21,ccp=21,ap=0,dc=0,dbgi=0]
mfi0: 8770 (718827537s/0x0020/info) - Time established as 10/11/22 18:18:57; (36 seconds since power on)
...
mfid0 on mfi0
mfid0: 953344MB (1952448512 sectors) RAID volume (no label) is optimal
GEOM_PART: integrity check failed (mfid0, MBR)
mfid1 on mfi0
mfid1: 953344MB (1952448512 sectors) RAID volume (no label) is optimal
GEOM_PART: integrity check failed (mfid1, MBR)

So it appears that I could get it to work without any difficulty. Although it seems a bit cumbersome to have to use the RAID adapter's own utility to create a virtual drive out of each single physical drive just so the operating system can see it and I can build a ZFS RAID on top.

My plan was to use 6 or 8 cheap no-name Chinese SSDs of 1 TB capacity to construct a raidz2 or raidz3 and use that as small-office file storage. I generally don't expect these no-name SSDs to be fast or very reliable, but I hope to get decent performance and reliability out of the resulting raidz.
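If it all works out, the pool creation itself should be the easy part. A rough sketch of what I have in mind, with the pool name and device names being placeholders only (assuming six disks exposed individually by the controller):
Code:
# six disks handed over individually by the controller as mfid0 .. mfid5
zpool create office raidz2 mfid0 mfid1 mfid2 mfid3 mfid4 mfid5
# or, with eight disks and triple parity:
# zpool create office raidz3 mfid0 mfid1 mfid2 mfid3 mfid4 mfid5 mfid6 mfid7
zpool status office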

Do you have any experience with this LSI MegaRAID SAS 9264-8i or a similar mfi(4) adapter?
Is this all right for my intended use, or should I avoid this kind of raidz construction and go for a 9211-8i with IT firmware instead? How do the two compare?

I also looked into the possibility of a firmware upgrade, but got confused. The Broadcom downloads page lists all kinds of legacy firmware without showing WHICH adapter each belongs to. Often I found more than one download with the same version number and the same release date, obviously for different adapters, but the only way to figure out the name/type of the adapter was from the file name, after I had downloaded them.
Also, I found no trace of a 9264 adapter, though I did find downloads for the 9260, 9261, and 9265.
Any clarification on those is also very welcome.
 
I'm not a professional.

I think the 9264-8i card is not suitable, because it does not seem possible to use it in IT mode.

I think what you are doing is a mistake; you need to put your HBA controller in IT mode so that the disks are not hidden behind a RAID and are visible in FreeBSD as plain disks, so that you can then create a raidz.

ZFS needs direct access to the disks; it's not a good idea to put them behind a RAID, and that's why you put the HBA in IT mode.

That is my explanation, and I remind you that I am not a professional. I recommend using a PERC; in my case I have a PERC H310 Mini.

Here is a list, plus the firmware to flash the HBA into IT mode, in case it is not already:

Perc

On the other hand, I think using components of dubious quality is a mistake; I am referring to the Chinese SSDs.

If you want to study ZFS, you don't need to buy hardware that way.

In my case, when I'm at work and have some free time, I use a virtual machine with FreeBSD where I have some raidz and mirror pools created with md(4).

You can create small virtual disks in memory (at least 64 MB each, ZFS's minimum device size), build a raidz out of them, and study ZFS that way in a more limited fashion, but it is a better option than buying cheap hardware.

You can also create files with truncate(1) or dd(1) and build a raidz out of those, as in the sketch below.
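For example, something roughly like this; the sizes and names are just examples, and note that ZFS wants at least 64 MB per device:
Code:
# create four small backing files and attach them as memory disks
truncate -s 64m /tmp/d0 /tmp/d1 /tmp/d2 /tmp/d3
mdconfig -a -t vnode -f /tmp/d0   # -> md0
mdconfig -a -t vnode -f /tmp/d1   # -> md1
mdconfig -a -t vnode -f /tmp/d2   # -> md2
mdconfig -a -t vnode -f /tmp/d3   # -> md3

# build a raidz pool out of them to experiment with
zpool create testpool raidz md0 md1 md2 md3
zpool status testpool

# tear it down again when finished
zpool destroy testpool
mdconfig -d -u 0; mdconfig -d -u 1; mdconfig -d -u 2; mdconfig -d -u 3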

Regards.
 
Although it seems a bit cumbersome to have to use the RAID adapter's own utility to create a virtual drive out of each single physical drive just so the operating system can see it and I can build a ZFS RAID on top.
Don't. Switch on JBOD instead. The adapter should support it:
Code:
RAID Levels: JBOD, RAID0, RAID1, RAID5, RAID6, RAID10, RAID50
 
I've had LSI cards with a 'global' JBOD switch that put all attached disks into JBOD mode. I've also had LSI controllers where every individual drive had to be marked as JBOD (those controllers let you mix hardware RAID volumes and JBOD disks on the same controller).
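On the cards with the global switch, it was toggled through the vendor tool; from memory the MegaCli incantation was something along these lines, but check the documentation for your version, as I may be misremembering the exact flags:
Code:
# enable controller-wide JBOD mode (syntax from memory, may differ per MegaCli version)
MegaCli -AdpSetProp -EnableJBOD -1 -aALL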
 
But wouldn't the JBOD disks be a possible source of problems?

For example, if a drive fails, how would it be recovered with ZFS? Wouldn't that be too much trouble?
 
But wouldn't the JBOD disks be a possible source of problems?

For example, if a drive fails, how would it be recovered with ZFS? Wouldn't that be too much trouble?
The controller exposes the disks as JBOD. This has nothing to do with ZFS. It simply means the controller exposes each individual disk to the OS. You configure ZFS to use those JBOD disks in a RAID-Z{1,2,3} configuration.
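If a drive fails, recovery is ordinary ZFS administration: expose the replacement disk the same way and let ZFS resilver onto it. A rough sketch with made-up pool, slot, and device names:
Code:
# tell the controller to pass the replacement disk straight through (slot is an example)
mfiutil create jbod E1:S3
# then have ZFS rebuild onto the new device
zpool replace tank mfid0 mfid2
zpool status tank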
 
I understand, then I could directly do:

zpool create test mirror /dev/mfid0 /dev/mfid1

Yes, from what I have read, Oracle also recommends JBOD mode over RAID, letting ZFS manage storage and redundancy.

https://docs.oracle.com/cd/E23823_01/html/819-5461/zfspools-4.html

Great, but then, referring to the comment by Alain De Vos: wouldn't the "simplest" way be to have the SAS controller in IT mode so that the drives are recognized directly as da(4) disks and ZFS has maximum control? Would that be the best way to run ZFS in any scheme?

But in Keve's case, since his controller can't run in IT mode, the best way would be to use JBOD; is that the point?

I'll stay here to see how the thread evolves.

Thanks.
 
IT mode stands for "initiator target". It presents each drive individually to the host.

JBOD stands for "just a bunch of drives". It presents each drive individually to the host.

So same thing. And also what ZFS wants.
 
With the firmware currently on this card, I do not think I can switch it into IT mode. It might be possible to re-flash the card with a firmware that runs it in IT mode, but I was not able to find such a firmware (or any firmware update for the 9264-8i card at all). And I do not know whether the firmware of a 9200, 9260, 9261 ... can be flashed onto this card without bricking it.

I was also unable to bring up any P.O.S.T.-time RAID configuration interface, although I did see messages suggesting its existence (see details in my original post).
The only way I found to interact with this card was the mfiutil(8) command. I read its manual page and used commands that seemed logical.
I believe I DID instruct the controller to hand my disks over to the OS in JBOD mode. The only way I could find in its manual page to do that was the create jbod command.
Code:
root@ede800g1:~ # mfiutil create jbod E1:S2
root@ede800g1:~ # mfiutil create jbod E1:S5
This resulted in both drives showing up as the individual disks mfid0 and mfid1, although mfiutil prefers to show them as single-disk RAID-0 sets rather than displaying JBOD there. I believe these disks are as much JBOD as they can be under this card.
Code:
root@ede800g1:~ # mfiutil show config
mfi0 Configuration: 2 arrays, 2 volumes, 0 spares
    array 0 of 1 drives:
        drive  2 (  932G) ONLINE <1TB 1c70 serial=1248> SATA
    array 1 of 1 drives:
        drive  5 (  932G) ONLINE <1TB 1c70 serial=1727> SATA
    volume mfid0 (931G) RAID-0 64K OPTIMAL spans:
        array 0
    volume mfid1 (931G) RAID-0 64K OPTIMAL spans:
        array 1

If you know a different/better way to designate the disks as JBOD, let me know how.

On the other hand, I think using components of dubious quality is a mistake; I am referring to the Chinese SSDs.
And I believe that this is exactly what RAID was designed for. ;-)
 