I'm using bhyve. If I boot the FreeBSD rescue image by itself, there are no issues; if I boot the same image but also add an extra image (a ZFS filesystem) as a second device, this happens:
Code:
# sh /usr/share/examples/bhyve/vmrun.sh -c 2 -m 8G -t tap99 -d RESCUE.img -d boot.img rescue
Launching virtual machine "rescue" ...
Consoles: userboot
FreeBSD/amd64 User boot lua, Revision 3.0
zio_read error: 5
zio_read error: 5
zio_read error: 5
ZFS: i/o error - all block copies unavailable
ZFS: can't read MOS of pool zroot
ERROR: cannot open /boot/lua/loader.lua: invalid argument.
Type '?' for a list of commands, 'help' for more detailed help.
OK
I'm unsure whether the ZFS errors are a red herring and some other issue is actually preventing the console from appearing, but I'm still confused about why there are ZFS errors at all, since they imply the loader is probing ZFS.
There is no zfs_enable="YES" in /boot/loader.conf or /etc/rc.conf.
I also tried bootenv_autolist="NO" in /boot/loader.conf, with no change.
If I boot the rescue image without the second device, ZFS does not appear in kldstat, which is the expected behaviour:
Code:
root@rescue:~ # kldstat
Id Refs Address Size Name
1 3 0xffffffff80200000 1f41500 kernel
2 1 0xffffffff82142000 4440 speaker.ko
I do want ZFS support, and the second image is a valid zroot system; however, I don't want the system to boot from it. That's the whole point of the rescue image...
Is bhyve doing something "smart" like changing device priority if there's a ZFS filesystem present?
I cannot see any ZFS-related switches in /usr/share/examples/bhyve/vmrun.sh or in the bhyve man page.
Any ideas? Thanks.
---
EDIT: Per-boot manual workaround to force booting the kernel from a partition on the first device:
Code:
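lsdev    # optional: list the devices the loader can see, to confirm the diskXpY name for your layout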
set currdev=disk0p2 # UFS partition may vary depending on your setup
boot
...but I'd prefer a permanent fix, if possible. I also suspect this breakage only happens because the zpool is corrupt; otherwise, the loader would presumably just boot from the ZFS device without asking, which is also not what I want.
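One idea I'm considering in the meantime: skip vmrun.sh and run bhyveload/bhyve by hand, giving bhyveload only the rescue image (so userboot never probes the ZFS disk) while still exposing both disks to the guest via bhyve. A rough, untested sketch; the slot numbers and flags are my assumptions based on what vmrun.sh normally does:
Code:
# Load the kernel with only the rescue image visible to userboot
bhyveload -c stdio -m 8G -d RESCUE.img rescue
# Run the VM with both disks attached as virtio-blk devices
bhyve -c 2 -m 8G -A -H -P \
    -s 0:0,hostbridge -s 1:0,lpc \
    -s 2:0,virtio-net,tap99 \
    -s 3:0,virtio-blk,RESCUE.img \
    -s 4:0,virtio-blk,boot.img \
    -l com1,stdio rescue
This should roughly mirror what vmrun.sh does internally, just without handing the second image to the loader stage, but I haven't verified it yet.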