I've recently upgraded the FreeBSD installation on a couple of laptops to FreeBSD 12.3-STABLE, built from the stable/12 branch. The build's uname reports:
Code:
FreeBSD sol.cloud.thinkum.space 12.3-STABLE FreeBSD 12.3-STABLE stable/12-n1855-ce99de0241e RIPARIAN amd64
One of these FreeBSD installations is on an old Toshiba laptop. FreeBSD has been quite usable there, alongside the Debian 10 and Windows 7 installations on the same machine. (Debian 10 doesn't seem to work out well with the laptop's legacy Nvidia hardware.)
The root filesystem for the FreeBSD installation on this laptop is on UFS. It's a multiboot machine, using the Grub bootloader. I've not been able to test any multiboot configuration with ZFS on root under an MBR partition table with Grub.
After cleaning up some files in the Debian installation, and to make use of the extra space on the laptop's recently added SSD, I created a new ZFS pool on the SSD. This pool has been used mainly for holding VirtualBox disk images.
Initially, I was simply storing the VDI files under the ZFS pool. More recently, after some tinkering with ZFS under bhyve and vm-bhyve, I converted one of those VirtualBox VDI files into a disk image on a sparse, volmode=geom ZFS volume. After converting the VDI file to a fixed-size (non-dynamic) VDI using VBoxManage, then to a raw disk image using qemu-img, I copied all of the filesystem data into the ZFS volume using ddpt. The disk is accessed in VirtualBox via a vmdk file created with VBoxManage:
Code:
VBoxManage internalcommands createrawvmdk -filename estragon01.vmdk -rawdisk /dev/zvol/zstor/img/vm/vdi_estragon/estragon01
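For reference, the conversion went roughly along these lines. This is a sketch from memory: the file names are illustrative, the 64G size is taken from the gpart output further below, and the ddpt block-size flags are approximate rather than the exact ones I used.
Code:
# dynamic VDI -> fixed-size VDI -> raw image
VBoxManage clonemedium disk estragon01.vdi estragon01-fixed.vdi --variant Fixed
qemu-img convert -f vdi -O raw estragon01-fixed.vdi estragon01.raw

# create the sparse geom-mode volume, then copy the raw image onto it
zfs create -s -V 64G -o volmode=geom zstor/img/vm/vdi_estragon/estragon01
ddpt if=estragon01.raw of=/dev/zvol/zstor/img/vm/vdi_estragon/estragon01 bs=512 bpt=2048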
I've configured devfs to allow write access to the corresponding /dev/zvol/** geom devices for users in the wheel group.
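A minimal sketch of the kind of devfs rule I mean, assuming a local ruleset in /etc/devfs.rules; the ruleset name, its number, and the path glob here are illustrative, not copied from my actual configuration.
Code:
# /etc/devfs.rules
[localrules=10]
add path 'zvol/zstor/img/vm/vdi_estragon/*' mode 0660 group wheel

# /etc/rc.conf
devfs_system_ruleset="localrules"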
I've enabled skein deduplication (dedup=skein) on the ZFS filesystem containing the corresponding ZFS geom volume. After some initial filesystem shuffling, it worked out quite well - up until I rebooted the machine this afternoon.
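For reference, that amounts to a single property set on the parent filesystem; the dataset name here is assumed from the outputs further below:
Code:
zfs set dedup=skein zstor/img/vm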
Before rebooting, I'd noticed some messages like the following, in the dmesg output and under /var/log/messages:
Code:
Jan 28 17:42:25 sol kernel: g_dev_taste: make_dev_p() failed (gp->name=zvol/zstor/img/vm.prev/vdi_estragon/estragon01@vm-checkpoint-2022-01-26, error=63)
Jan 28 17:42:25 sol kernel: g_dev_taste: make_dev_p() failed (gp->name=zvol/zstor/img/vm.prev/vdi_estragon/estragon01@2022-01-28T1430s1, error=63)
Jan 28 17:42:25 sol kernel: g_dev_taste: make_dev_p() failed (gp->name=zvol/zstor/img/vm.prev/vdi_estragon/estragon01@2022-01-28T1430s2, error=63)
Jan 28 17:42:25 sol kernel: g_dev_taste: make_dev_p() failed (gp->name=zvol/zstor/img/vm.prev/vdi_estragon/estragon01@vm-checkpoint-2022-01-26s1, error=63)
Jan 28 17:42:25 sol kernel: g_dev_taste: make_dev_p() failed (gp->name=zvol/zstor/img/vm.prev/vdi_estragon/estragon01@vm-checkpoint-2022-01-26s2, error=63)
Jan 28 17:51:56 sol kernel: g_dev_taste: make_dev_p() failed (gp->name=zvol/zstor/img/vm/vdi_estragon/estragon01@vm-checkpoint-2022-01-26, error=63)
Jan 28 17:51:56 sol kernel: g_dev_taste: make_dev_p() failed (gp->name=zvol/zstor/img/vm/vdi_estragon/estragon01@vm-checkpoint-2022-01-26s1, error=63)
Jan 28 17:51:56 sol kernel: g_dev_taste: make_dev_p() failed (gp->name=zvol/zstor/img/vm/vdi_estragon/estragon01@vm-checkpoint-2022-01-26s2, error=63)
With some debug logging configured for devd, here's an example of what was showing up under /var/log/messages:
Code:
Jan 28 17:51:56 sol root[12997]: devd devd.conf GEOM//DEV !system=GEOM subsystem=DEV type=CREATE cdev=zvol/zstor/img/vm/vdi_estragon/estragon01@2022-01-28T1430
Jan 28 17:51:56 sol kernel: g_dev_taste: make_dev_p() failed (gp->name=zvol/zstor/img/vm/vdi_estragon/estragon01@vm-checkpoint-2022-01-26, error=63)
Jan 28 17:51:56 sol root[13006]: devd devd.conf GEOM//DEV !system=GEOM subsystem=DEV type=CREATE cdev=zvol/zstor/img/vm/vdi_estragon/estragon01
Jan 28 17:51:56 sol root[13007]: devd devd.conf GEOM//DEV !system=GEOM subsystem=DEV type=DESTROY cdev=zvol/zstor/img/vm/estragon01@2022-01-28T1430
Jan 28 17:51:56 sol root[13008]: devd devd.conf GEOM//DEV !system=GEOM subsystem=DEV type=DESTROY cdev=zvol/zstor/img/vm/estragon01@vm-checkpoint-2022-01-26
Jan 28 17:51:56 sol root[13009]: devd devd.conf GEOM//DEV !system=GEOM subsystem=DEV type=DESTROY cdev=zvol/zstor/img/vm/estragon01
Jan 28 17:51:56 sol root[13010]: devd devd.conf GEOM//DEV !system=GEOM subsystem=DEV type=DESTROY cdev=zvol/zstor/img/vm/estragon01@2022-01-28T1430s1
Jan 28 17:51:56 sol root[13011]: devd devd.conf GEOM//DEV !system=GEOM subsystem=DEV type=DESTROY cdev=zvol/zstor/img/vm/estragon01@2022-01-28T1430s2
Jan 28 17:51:56 sol root[13012]: devd devd.conf GEOM//DEV !system=GEOM subsystem=DEV type=DESTROY cdev=zvol/zstor/img/vm/estragon01@vm-checkpoint-2022-01-26s1
Jan 28 17:51:56 sol kernel: g_dev_taste: make_dev_p() failed (gp->name=zvol/zstor/img/vm/vdi_estragon/estragon01@vm-checkpoint-2022-01-26s1, error=63)
Jan 28 17:51:56 sol root[13013]: devd devd.conf GEOM//DEV !system=GEOM subsystem=DEV type=DESTROY cdev=zvol/zstor/img/vm/estragon01@vm-checkpoint-2022-01-26s2
Jan 28 17:51:56 sol root[13014]: devd devd.conf GEOM//DEV !system=GEOM subsystem=DEV type=DESTROY cdev=zvol/zstor/img/vm/estragon01s1
Jan 28 17:51:56 sol kernel: g_dev_taste: make_dev_p() failed (gp->name=zvol/zstor/img/vm/vdi_estragon/estragon01@vm-checkpoint-2022-01-26s2, error=63)
Jan 28 17:51:56 sol root[13015]: devd devd.conf GEOM//DEV !system=GEOM subsystem=DEV type=DESTROY cdev=zvol/zstor/img/vm/estragon01s2
Jan 28 17:51:56 sol root[13016]: devd devd.conf GEOM//DEV !system=GEOM subsystem=DEV type=CREATE cdev=zvol/zstor/img/vm/vdi_estragon/estragon01@2022-01-28T1430s1
Jan 28 17:51:56 sol root[13017]: devd devd.conf GEOM//DEV !system=GEOM subsystem=DEV type=CREATE cdev=zvol/zstor/img/vm/vdi_estragon/estragon01@2022-01-28T1430s2
Jan 28 17:51:57 sol root[13018]: devd devd.conf GEOM//DEV !system=GEOM subsystem=DEV type=CREATE cdev=zvol/zstor/img/vm/vdi_estragon/estragon01s1
Jan 28 17:51:57 sol root[13019]: devd devd.conf GEOM//DEV !system=GEOM subsystem=DEV type=CREATE cdev=zvol/zstor/img/vm/vdi_estragon/estragon01s2
Jan 28 17:51:58 sol root[13020]: devd devd.conf GEOM//DEV !system=GEOM subsystem=DEV type=DESTROY cdev=zvol/zstor/img/vm/vdi_estragon/estragon01s1
Jan 28 17:51:58 sol root[13021]: devd devd.conf GEOM//DEV !system=GEOM subsystem=DEV type=DESTROY cdev=zvol/zstor/img/vm/vdi_estragon/estragon01s2
Jan 28 17:51:58 sol root[13022]: devd devd.conf GEOM//DEV !system=GEOM subsystem=DEV type=MEDIACHANGE cdev=zvol/zstor/img/vm/vdi_estragon/estragon01
Jan 28 17:51:58 sol root[13023]: devd devd.conf GEOM//DEV !system=GEOM subsystem=DEV type=CREATE cdev=zvol/zstor/img/vm/vdi_estragon/estragon01s1
Jan 28 17:51:58 sol root[13024]: devd devd.conf GEOM//DEV !system=GEOM subsystem=DEV type=CREATE cdev=zvol/zstor/img/vm/vdi_estragon/estragon01s2
That much - in the last part of it - was probably from when I was renaming filesystems under zstor/img/vm, at one point in the process of some filesystem shuffling. I'd moved a set of filesystems under zstor/img/vm to zstor/img/vm.previous, and then moved a set of newly re-transferred datasets from zstor/img/vm2 to under zstor/img/vm/. Curiously, the volume snapshots under the vm2 and vm.previous prefixes never showed up in any of that dmesg output.
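The renames were roughly of this form; the child dataset named here is just the one from the logs above, and the full set of datasets that were moved isn't shown:
Code:
zfs rename zstor/img/vm/vdi_estragon zstor/img/vm.previous/vdi_estragon
zfs rename zstor/img/vm2/vdi_estragon zstor/img/vm/vdi_estragon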
I was using bash in tmux under mate-terminal, so I hadn't noticed the kernel messages on the console.
The shutdown for the reboot then hung for a little while. Eventually, it finished after all the buffers were synced to disk.
I've seen these 'make_dev_p() failed' messages only for snapshots of the volmode=geom volume that I'm now using with VirtualBox. After the main volume for that dataset was recently re-initialized with dedup=skein enabled, then backed up and re-written via zfs send/zfs receive under dedup=skein, I booted into the newly re-written VirtualBox virtual disk and it worked out quite well.
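The send/receive pass was roughly of this form; the snapshot name and the source and target dataset names here are illustrative rather than the exact ones I used:
Code:
zfs snapshot zstor/img/vm2/vdi_estragon/estragon01@xfer
zfs send zstor/img/vm2/vdi_estragon/estragon01@xfer | zfs receive zstor/img/vm/vdi_estragon/estragon01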
Of course, at one point it took around 24 hours to transfer 64 GB of data for that, after the initial gigabytes of ZFS filesystem data had transferred in a matter of minutes. I wasn't able to debug that very much, though. Eventually, the data transferred and I could move on with the rest of this configuration.
After rebooting - ultimately into single-user mode - I noticed that the 'zpool list' command was hanging. Here's an example, with some SIGINFO output added (from pressing Ctrl+T while it ran):
Code:
[root@ ~]# zpool list
load: 0.09 cmd: zpool 129 [hdr->b_l1hdr.b_cv] 4.51r 0.00u 0.49s 3% 3504k
load: 0.08 cmd: zpool 129 [tx->tx_sync_done_cv] 8.83r 0.00u 0.80s 4% 3504k
load: 0.08 cmd: zpool 129 [tq_adrain] 11.24r 0.00u 0.82s 3% 3504k
load: 0.24 cmd: zpool 129 [tq_adrain] 13.18r 0.00u 0.82s 2% 3504k
load: 0.24 cmd: zpool 129 [tq_adrain] 14.09r 0.00u 0.82s 2% 3504k
load: 0.24 cmd: zpool 129 [tq_adrain] 14.72r 0.00u 0.82s 2% 3504k
load: 0.24 cmd: zpool 129 [tq_adrain] 15.26r 0.00u 0.82s 2% 3504k
load: 0.24 cmd: zpool 129 [tq_adrain] 15.71r 0.00u 0.82s 1% 3504k
load: 0.30 cmd: zpool 129 [tq_adrain] 16.62r 0.00u 0.82s 1% 3504k
load: 0.33 cmd: zpool 129 [tq_adrain] 40.52r 0.00u 0.82s 0% 3504k
load: 0.25 cmd: zpool 129 [tq_adrain] 76.07r 0.00u 0.82s 0% 3504k
load: 0.59 cmd: zpool 129 [tq_adrain] 291.70r 0.00u 0.82s 0% 3504k
load: 0.33 cmd: zpool 129 [g_waitidle] 331.52r 0.00u 0.82s 0% 3504k
load: 0.30 cmd: zpool 129 [g_waitidle] 335.23r 0.00u 0.82s 0% 3504k
load: 0.30 cmd: zpool 129 [g_waitidle] 336.43r 0.00u 0.82s 0% 3504k
load: 0.30 cmd: zpool 129 [g_waitidle] 337.01r 0.00u 0.82s 0% 3504k
'sysctl -a' also hung at that point, and 'zfs list' was simply unusable during that time.
At some point, this showed up in the single-user dmesg output:
Code:
g_dev_taste: make_dev_p() failed (gp->name=zvol/zstor/img/vm/vdi_estragon/estragon01@vm-checkpoint-2022-01-26, error=63)
and then that disappeared from the dmesg output. Shortly afterward, it was replaced with this:
Code:
g_dev_taste: make_dev_p() failed (gp->name=zvol/zstor/img/vm/vdi_estragon/estragon01@vm-checkpoint-2022-01-26s1, error=63)
g_dev_taste: make_dev_p() failed (gp->name=zvol/zstor/img/vm/vdi_estragon/estragon01@vm-checkpoint-2022-01-26s2, error=63)
Assuming that ZFS may have been re-writing some data after the recent volume transfers and deletions, and looking at what was showing up in the SIGINFO output, I waited it out. Eventually, I was able to continue booting the machine.
Now that the machine has successfully booted, this is what gpart shows for the main active volume and its snapshots:
Code:
=>        63  134217665  zvol/zstor/img/vm/vdi_estragon/estragon01  MBR  (64G)
          63       1985      - free -  (993K)
        2048    1185792   1  ntfs  [active]  (579M)
     1187840  133025792   2  ntfs  (63G)
   134213632       4096      - free -  (2.0M)

=>        63  134217665  zvol/zstor/img/vm/vdi_estragon/estragon01@2022-01-28T1430  MBR  (64G)
          63       1985      - free -  (993K)
        2048    1185792   1  ntfs  [active]  (579M)
     1187840  133025792   2  ntfs  (63G)
   134213632       4096      - free -  (2.0M)

=>        63  134217665  zvol/zstor/img/vm/vdi_estragon/estragon01@vm-checkpoint-2022-01-26  MBR  (64G)
          63       1985      - free -  (993K)
        2048    1185792   1  ntfs  [active]  (579M)
     1187840  133025792   2  ntfs  (63G)
   134213632       4096      - free -  (2.0M)
Though there seems to be a workaround of some kind, I'm still concerned about those error messages, and about the delay that this introduced into the boot process.
Before rebooting this machine again, I plan on changing the volmode for those snapshots to volmode=none. Hopefully that will keep the boot from hanging on the device mapping (??) for those ZFS volume snapshots at any later time.
Considering that there may be something of a workaround for it, maybe this would be considered expected behavior of ZFS under one specific configuration? I thought it might be worth noting, however. I can try to provide further debug output, if possible. While the 'zpool list', 'sysctl -a', and 'zfs list' commands were hung, I did start a ktrace on each. I've not reviewed what that captured yet, however.
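For what it's worth, attaching ktrace to an already-running process and reviewing the trace later is roughly this; the PID is just the one from the SIGINFO output above:
Code:
ktrace -p 129                 # attach to the hung zpool process; writes ktrace.out in the current directory
kdump -f ktrace.out | less    # review the captured trace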
Otherwise, this ZFS configuration on an SSD in an old Toshiba laptop seems to be working out really well.
Update
It seems that it's not possible to set volmode=none on a snapshot. I'm afraid that those will always show up under /dev/zvol, and will always be accessed for device mapping, on this machine. Perhaps this is a bug?
Code:
$ zfs set volmode=none zstor/img/vm/vdi_estragon/estragon01@2022-01-28T1430
cannot set property for 'zstor/img/vm/vdi_estragon/estragon01@2022-01-28T1430': this property can not be modified for snapshots
$ find /dev/zvol -name '*@*'
/dev/zvol/zstor/img/vm/vdi_estragon/estragon01@2022-01-28T1430
/dev/zvol/zstor/img/vm/vdi_estragon/estragon01@2022-01-28T1430s1
/dev/zvol/zstor/img/vm/vdi_estragon/estragon01@2022-01-28T1430s2
This is the list of snapshots currently present for that volume. It seems that the device nodes for one of the two failed to appear, for some reason, during the initial pool activation.
Code:
$ zfs list -r -t snapshot zstor/img/vm/vdi_estragon/estragon01
NAME                                                             USED  AVAIL  REFER  MOUNTPOINT
zstor/img/vm/vdi_estragon/estragon01@vm-checkpoint-2022-01-26   8.61G      -  63.6G  -
zstor/img/vm/vdi_estragon/estragon01@2022-01-28T1430            1008M      -  64.1G  -
As a possible workaround, I'll try setting volmode=none when making any further snapshots of that volume, roughly as sketched below. Perhaps that will keep those snapshots from showing up under /dev/zvol/.
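This is the untested idea, as a sketch; whether a snapshot actually picks up the volmode that was in effect on the volume when the snapshot was taken is exactly the open question here, and the snapshot name is illustrative:
Code:
zfs set volmode=none zstor/img/vm/vdi_estragon/estragon01
zfs snapshot zstor/img/vm/vdi_estragon/estragon01@some-checkpoint
zfs set volmode=geom zstor/img/vm/vdi_estragon/estragon01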