I've been fighting a ZFS boot issue for the past week and decided to post it to the forum for some insight. I have a server with an LSI 2108 RAID card (mfi driver) with 15x1TB drives configured as a single RAID6 array (~11TB) holding a single bootable ZFS pool. The server was upgraded from source (buildworld) last week from 9.1-PRERELEASE amd64 to 9.1-STABLE amd64.
When I rebooted the server, I got the following message:
Code:
FreeBSD/x86 boot
Default: zroot:/boot/kernel/kernel
boot:
ZFS: i/o error - all block copies unavailable
Invalid format
Never a good sign. Long story short: I downloaded the 9.1-RELEASE memstick image and found out the hard way that it has an older ZFS version and wouldn't mount the pool. I built a release/memstick image on a working ZFS-booting box and was finally able to mount the zroot pool. After several attempts, I came up with the following procedure to get the system back up using the newly created memstick image:
Code:
boot usb drive
select Live CD
mount -u /
zpool import -o cachefile=/var/tmp/zpool.cache -f -R /mnt zroot
zfs umount -af
zfs set mountpoint=/ zroot
zfs mount -a
cp -rpv /boot/* /mnt/boot
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 mfid0
cp /var/tmp/zpool.cache /mnt/boot/zfs
zfs umount -af
zfs set mountpoint=legacy zroot
zpool export zroot
shutdown -r now
The system then boots and reboots without any issues using the copied /boot. But as soon as I reboot the server after a buildworld, the ZFS boot error returns.
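One guess I still want to test: the new world installs a newer /boot/gptzfsboot into the filesystem, but the boot blocks actually executed at power-on live in the freebsd-boot partition, so they would have to be re-written after every installworld. Something along these lines (device name and partition index taken from my procedure above; untested sketch):

Code:
cd /usr/src && make installworld
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 mfid0
shutdown -r now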
Of note is that I have to force-mount the pool; it seems the pool is not being cleanly exported at reboot. I blew away /usr/src and /usr/obj and am in the process of building the same revision as my memstick image (r247054) to see if I get the same result.
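In case a stale cache file is the culprit, refreshing /boot/zfs/zpool.cache from a clean import/export cycle off the memstick might be worth a shot. A sketch, using the same paths as the procedure above (with mountpoint=legacy the pool has to be mounted by hand):

Code:
zpool import -f -o cachefile=/var/tmp/zpool.cache -R /mnt zroot
mount -t zfs zroot /mnt
cp /var/tmp/zpool.cache /mnt/boot/zfs/zpool.cache
umount /mnt
zpool export zroot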
I have run ZFS scrubs and RAID consistency checks on the array with no errors, and I have done multiple buildworlds on this box in the past without any issues.
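For completeness, the ZFS side of those checks was just the standard scrub and status pass (the RAID consistency check was run from the controller, so I'm omitting it here):

Code:
zpool scrub zroot
zpool status -v zroot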
Is there anything that I have overlooked?
Code:
ZFS filesystem version: 5
ZFS storage pool version: features support (5000)