[SOLVED] ZFS: i/o error - all block copies unavailable.

My automated installer configures 3 SAS drives in a ZFS mirror: 2 drives are active and the third is a spare.
There is also an NVMe drive which I set up as swap so I can see the result of any kernel panics.
I've tested this in a VM over and over with no issues.
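
For context, the layout the installer aims for is roughly the following sketch (the pool name nsgroot comes from a later post; the device names da0/da1/da2/nvd0 and partition numbers are assumptions):

Code:
# Sketch only - device names and partition numbers are assumptions.
zpool create -o altroot=/mnt nsgroot mirror da0p3 da1p3 spare da2p3
# NVMe drive used as swap so crash dumps from kernel panics can be saved:
#   /etc/fstab:   /dev/nvd0p1  none  swap  sw  0  0
#   /etc/rc.conf: dumpdev="/dev/nvd0p1"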

On first boot on the real hardware I am presented with:

[screenshot of the boot error]

I then booted up the install cd and went into the shell.
I can see all the gpart info
[screenshot of the gpart output]

And I can see the pool.
[screenshot of the zpool import output]
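
Roughly the commands used for that inspection from the live CD shell (the output itself was in the screenshots above):

Code:
gpart show       # partition tables on the three SAS drives plus the NVMe drive
zpool import     # with no arguments, lists importable pools - the mirror shows up here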


I imported the pool, removed the spare, and added it as another drive in the mirror.
I also rewrote the bootcode on all 3 SAS drives and rebooted.
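
That was something along these lines (a sketch; the pool name nsgroot is from a later post, and the device names and the freebsd-boot partition index 1 are assumptions):

Code:
zpool import -o altroot=/mnt -f nsgroot
zpool remove nsgroot da2p3          # drop the spare from the pool
zpool attach nsgroot da0p3 da2p3    # resilver it in as a third mirror member
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 da0
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 da1
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 da2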

Sadly I'm still getting the same error.

I have been doing the same thing on the same server but with only 2 SAS drives in a ZFS mirror. The NVMe drive is present but not configured. That boots fine.

The system is a DELL PowerEdge R440 with a HBA330 controller.
 

The auto installer is based on 12.4-RELEASE. The live CD is the same version. I did not export the pool after importing it via the live CD.

As a sanity check I'm going to install FreeBSD on the machine with a 3-drive mirror.
 

While you have it imported on the live CD, try zpool set cachefile=<altroot>/boot/zfs/zpool.cache <pool> (import the pool with an altroot location while on the live CD). Then export the pool and reboot.
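
Concretely, something like this from the live CD (a sketch; the pool name nsgroot and the /mnt altroot are assumptions taken from the rest of the thread):

Code:
zpool import -o altroot=/mnt nsgroot
zpool set cachefile=/mnt/boot/zfs/zpool.cache nsgroot   # becomes /boot/zfs/zpool.cache once booted
zpool export nsgroot
reboot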
 
Another server, identical apart from having only 2 SAS drives instead of 3, is configured as follows by the auto installer and boots just fine.

Code:
root@myserver:~ # cat /boot/loader.conf
vfs.root.mountfrom="zfs:nsgroot"
zfs_load="YES"
root@myserver:~ # df
Filesystem                                                              1K-blocks     Used     Avail Capacity  Mounted on
nsgroot                                                                1070245740 71288100 998957640     7%    /

Interestingly, I have just done an install from the live CD with a ZFS mirror of the three SAS drives and it fails to boot with the same error!
 
Success!

I did a 12.4 install from the install media onto a 3-disk mirror but with GPT (BIOS + UEFI) rather than the default GPT (BIOS) boot. Then I changed the BIOS to UEFI boot and the FreeBSD install booted up.

No idea why that would make it work.
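
For anyone doing this by hand rather than via the installer menu, the GPT (BIOS + UEFI) scheme adds an EFI system partition holding loader.efi on each disk, roughly like this (a sketch; the device name and partition index are assumptions):

Code:
# Repeat for da1 and da2 so any mirror member can boot.
gpart add -a 4k -s 260M -t efi da0
newfs_msdos -F 32 -c 1 /dev/da0p1
mount -t msdosfs /dev/da0p1 /mnt
mkdir -p /mnt/EFI/BOOT
cp /boot/loader.efi /mnt/EFI/BOOT/BOOTx64.efi
umount /mnt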
 
When you use UEFI, the loader is loaded from a FAT partition.
What I speculate is that the ZFS code in the loader is more complete than the one in the zfsboot code and has a better understanding of the zpool structure.
Also, in the EFI case it uses the EFI API to access the disks, otherwise the normal BIOS API.
Hard to tell without more debugging.
 
Just strange that it works fine using BIOS boot in a QEMU virtual machine.

Code:
qemu-system-x86_64 -smp 8 -m 4096M --enable-kvm -rtc clock=host -vnc 192.168.1.21:5 -boot menu=on\
    -net nic,model=virtio -net user,hostfwd=tcp::2223-:22 \
    -device virtio-scsi-pci,id=scsi0,num_queues=4 \
    -device scsi-hd,drive=drive0,bus=scsi0.0,channel=0,scsi-id=0,lun=0 \
    -drive file=disk1.qcow2,if=none,id=drive0 \
    -device scsi-hd,drive=drive1,bus=scsi0.0,channel=0,scsi-id=0,lun=1 \
    -drive file=disk2.qcow2,if=none,id=drive1 \
    -device scsi-hd,drive=drive2,bus=scsi0.0,channel=0,scsi-id=0,lun=2 \
    -drive file=disk3.qcow2,if=none,id=drive2 \
    -device nvme,drive=nvme0,serial=deadbeaf1,num_queues=4 \
    -drive file=nvme1.qcow2,if=none,id=nvme0

Anyhow. I have a solution of sorts.
 