I'm trying to create a ZFS pool on a sparse zvol. The 'zpool create' command is persistently failing with the message "<poolname>: no such pool or dataset" for any <poolname>. This is with a local build of FreeBSD 12.3:
Code:
# uname -a
FreeBSD riparian.cloud.thinkum.space 12.3-STABLE FreeBSD 12.3-STABLE stable/12-n1855-ce99de0241e RIPARIAN amd64
# zfs create -s -V 32G zroot/opt/builder/tmphd
# gpart create -s gpt /dev/zvol/zroot/opt/builder/tmphd
zvol/zroot/opt/builder/tmphd created
# gpart add -l some.label -t freebsd-zfs /dev/zvol/zroot/opt/builder/tmphd
zvol/zroot/opt/builder/tmphdp1 added
# zpool create tmp01 /dev/zvol/zroot/opt/builder/tmphdp1
cannot create 'tmp01': no such pool or dataset
In some light testing, I was able to run 'newfs' to add a UFS2 filesystem to tmphdp1. I ran 'zfs destroy zroot/opt/builder/tmphd' before recreating the volume as above, so the partition under the sparse volume does appear to be usable. I don't know why 'zpool create' is producing that error message.
Ideally, a ZFS pool on a sparse ZFS volume could be used for a sort of sequential file sharing with any ZFS-capable system running under vm-bhyve. Candidly, I'm mystified as to why the 'zpool create' command is failing. The host is booted with root on ZFS.
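For reference, the UFS2 test was along these lines (a rough sketch; the mount point is just an example):
Code:
# add a UFS2 filesystem to the partition on the sparse zvol, then a quick mount test
newfs -U /dev/zvol/zroot/opt/builder/tmphdp1
mount /dev/zvol/zroot/opt/builder/tmphdp1 /mnt
umount /mnt
# destroy and recreate the volume before retrying 'zpool create'
zfs destroy zroot/opt/builder/tmphd
zfs create -s -V 32G zroot/opt/builder/tmphd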
I'm seeing the same error message when the initial ZFS volume is created without the sparse flag '-s', beginning with the following:
Code:
zfs create -V 128M zroot/opt/builder/tmphd
After recreating the partition table, it still fails in the same way, sparse or not:
Code:
# zpool create tmp01 /dev/zvol/zroot/opt/builder/tmphdp1
cannot create 'tmp01': no such pool or dataset
I've read that the ZFS support in FreeBSD 13 might be worth taking a look at. I wonder whether this same sequence of commands would produce the same error condition there?
A geom exists on the sparse zvol, and yet 'zpool create' still produces that error message. I also ran the following, for some light testing:
Code:
# zpool create -f tmp01 zvol/zroot/opt/builder/tmphdp1
cannot create 'tmp01': no such pool or dataset
# zpool create -f tmp01 zroot/opt/builder/tmphdp1
cannot open 'zroot/opt/builder/tmphdp1': no such GEOM provider
must be a full path or shorthand device name
# zpool create -n tmp01 /dev/zvol/zroot/opt/builder/tmphdp1
would create 'tmp01' with the following layout:
tmp01
zvol/zroot/opt/builder/tmphdp1
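To confirm that a geom does exist for the partition on the sparse zvol, something like the following could be checked (a sketch; output omitted here):
Code:
# show the GPT layout on the zvol's geom
gpart show zvol/zroot/opt/builder/tmphd
# list the providers, including the freebsd-zfs partition and its label
gpart list zvol/zroot/opt/builder/tmphd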
Properties of the zvol's ZFS dataset:
Code:
# zfs destroy zroot/opt/builder/tmphd
# zfs create -s -V 32G zroot/opt/builder/tmphd
# zfs get all zroot/opt/builder/tmphd
NAME PROPERTY VALUE SOURCE
zroot/opt/builder/tmphd type volume -
zroot/opt/builder/tmphd creation Sun Mar 27 13:12 2022 -
zroot/opt/builder/tmphd used 56K -
zroot/opt/builder/tmphd available 184G -
zroot/opt/builder/tmphd referenced 56K -
zroot/opt/builder/tmphd compressratio 1.00x -
zroot/opt/builder/tmphd reservation none default
zroot/opt/builder/tmphd volsize 32G local
zroot/opt/builder/tmphd volblocksize 8K default
zroot/opt/builder/tmphd checksum skein inherited from zroot
zroot/opt/builder/tmphd compression lz4 inherited from zroot
zroot/opt/builder/tmphd readonly off default
zroot/opt/builder/tmphd createtxg 2463250 -
zroot/opt/builder/tmphd copies 1 default
zroot/opt/builder/tmphd refreservation none default
zroot/opt/builder/tmphd guid 16296258702259891628 -
zroot/opt/builder/tmphd primarycache all default
zroot/opt/builder/tmphd secondarycache all default
zroot/opt/builder/tmphd usedbysnapshots 0 -
zroot/opt/builder/tmphd usedbydataset 56K -
zroot/opt/builder/tmphd usedbychildren 0 -
zroot/opt/builder/tmphd usedbyrefreservation 0 -
zroot/opt/builder/tmphd logbias latency default
zroot/opt/builder/tmphd objsetid 1.78K -
zroot/opt/builder/tmphd dedup off default
zroot/opt/builder/tmphd mlslabel -
zroot/opt/builder/tmphd sync standard default
zroot/opt/builder/tmphd refcompressratio 1.00x -
zroot/opt/builder/tmphd written 56K -
zroot/opt/builder/tmphd logicalused 26K -
zroot/opt/builder/tmphd logicalreferenced 26K -
zroot/opt/builder/tmphd volmode default default
zroot/opt/builder/tmphd snapshot_limit none default
zroot/opt/builder/tmphd snapshot_count none default
zroot/opt/builder/tmphd redundant_metadata all default
Maybe it would work out differently if the sparse zvol were assigned to a FreeBSD guest running under bhyve, with the pool then created from within that guest using a vdev named `gpt/some.label`. Ideally, that GPT label would be visible to the guest while it's running, and available to the host when the zpool is not otherwise in use.
Even assuming that works out, it shouldn't be necessary to run an intermediate VM simply to create the zpool?
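If that route is taken, attaching the existing zvol to a vm-bhyve guest should be roughly along these lines (a sketch; the config path and disk index are placeholders):
Code:
# excerpt from the guest's vm-bhyve configuration, e.g. /zroot/vm/builder/builder.conf
disk1_type="virtio-blk"
disk1_dev="custom"
disk1_name="/dev/zvol/zroot/opt/builder/tmphd"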
Update
After adding the zvols as disks for an existing FreeBSD 12.3 installation under bhyve, then booting the VM with those zvols attached, I created a ZFS pool on each of the sparse zvols from within that VM, using the GPT label assigned to the primary partition of the GPT layout created under each zvol. After the pools were created, I exported each zvol's zpool before shutting the VM off.
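Within the VM, the steps were roughly as follows, shown here for the loam.pkgsrc pool (a sketch reconstructed from the zdb output below):
Code:
# inside the bhyve guest: create the pool on the labeled partition, then export it
zpool create loam.pkgsrc gpt/loam.tree
zpool export loam.pkgsrc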
The pools are then inaccessible to 'zpool import' on the host machine. On the bhyve host, I'm seeing output like the following for the pool on each zvol:
Code:
# zpool import
pool: loam.pkgsrc
id: 12943761211833745840
state: UNAVAIL
status: One or more devices are missing from the system.
action: The pool cannot be imported. Attach the missing
devices and try again.
see: http://illumos.org/msg/ZFS-8000-3C
config:
loam.pkgsrc UNAVAIL insufficient replicas
15653153680549169382 UNAVAIL cannot open
# zdb -l /dev/zvol/zroot/opt/builder/pkgsrc/loam/treep1
------------------------------------
LABEL 0
------------------------------------
version: 5000
name: 'loam.pkgsrc'
state: 1
txg: 66
pool_guid: 12943761211833745840
hostid: 3441953979
hostname: 'vm-a'
top_guid: 15653153680549169382
guid: 15653153680549169382
vdev_children: 1
vdev_tree:
type: 'disk'
id: 0
guid: 15653153680549169382
path: '/dev/gpt/loam.tree'
whole_disk: 1
metaslab_array: 67
metaslab_shift: 29
ashift: 12
asize: 34350825472
is_log: 0
create_txg: 4
features_for_read:
com.delphix:hole_birth
com.delphix:embedded_data
------------------------------------
LABEL 1
------------------------------------
version: 5000
name: 'loam.pkgsrc'
state: 1
txg: 66
pool_guid: 12943761211833745840
hostid: 3441953979
hostname: 'vm-a'
top_guid: 15653153680549169382
guid: 15653153680549169382
vdev_children: 1
vdev_tree:
type: 'disk'
id: 0
guid: 15653153680549169382
path: '/dev/gpt/loam.tree'
whole_disk: 1
metaslab_array: 67
metaslab_shift: 29
ashift: 12
asize: 34350825472
is_log: 0
create_txg: 4
features_for_read:
com.delphix:hole_birth
com.delphix:embedded_data
------------------------------------
LABEL 2
------------------------------------
version: 5000
name: 'loam.pkgsrc'
state: 1
txg: 66
pool_guid: 12943761211833745840
hostid: 3441953979
hostname: 'vm-a'
top_guid: 15653153680549169382
guid: 15653153680549169382
vdev_children: 1
vdev_tree:
type: 'disk'
id: 0
guid: 15653153680549169382
path: '/dev/gpt/loam.tree'
whole_disk: 1
metaslab_array: 67
metaslab_shift: 29
ashift: 12
asize: 34350825472
is_log: 0
create_txg: 4
features_for_read:
com.delphix:hole_birth
com.delphix:embedded_data
------------------------------------
LABEL 3
------------------------------------
version: 5000
name: 'loam.pkgsrc'
state: 1
txg: 66
pool_guid: 12943761211833745840
hostid: 3441953979
hostname: 'vm-a'
top_guid: 15653153680549169382
guid: 15653153680549169382
vdev_children: 1
vdev_tree:
type: 'disk'
id: 0
guid: 15653153680549169382
path: '/dev/gpt/loam.tree'
whole_disk: 1
metaslab_array: 67
metaslab_shift: 29
ashift: 12
asize: 34350825472
is_log: 0
create_txg: 4
features_for_read:
com.delphix:hole_birth
com.delphix:embedded_data
For the sole vdev in that zpool, the geom for its GPT label exists on the host, yet the pool seems inaccessible to 'zpool import':
Code:
# ls -l /dev/gpt/loam.tree
crw-r----- 1 root operator 0x179 Mar 27 14:41 /dev/gpt/loam.tree
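For what it's worth, 'zpool import' can also be pointed at a specific device directory with '-d'; a form like the following would search the GPT label and zvol device nodes directly (a sketch, not verified to change anything here):
Code:
# search specific device directories for importable pools
zpool import -d /dev/gpt
zpool import -d /dev/zvol/zroot/opt/builder/pkgsrc/loam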
An excerpt from 'glabel list' on the host:
Code:
Geom name: zvol/zroot/opt/builder/pkgsrc/loam/treep1
Providers:
1. Name: gpt/loam.tree
Mediasize: 34355544064 (32G)
Sectorsize: 512
Stripesize: 4096
Stripeoffset: 0
Mode: r0w0e0
secoffset: 0
offset: 0
seclength: 67100672
length: 34355544064
index: 0
Consumers:
1. Name: zvol/zroot/opt/builder/pkgsrc/loam/treep1
Mediasize: 34355544064 (32G)
Sectorsize: 512
Stripesize: 4096
Stripeoffset: 0
Mode: r0w0e0

Geom name: zvol/zroot/opt/builder/pkgsrc/loam/treep1
Providers:
1. Name: gptid/a9a96e93-adfd-11ec-95d4-c4346b48459d
Mediasize: 34355544064 (32G)
Sectorsize: 512
Stripesize: 4096
Stripeoffset: 0
Mode: r0w0e0
secoffset: 0
offset: 0
seclength: 67100672
length: 34355544064
index: 0
Consumers:
1. Name: zvol/zroot/opt/builder/pkgsrc/loam/treep1
Mediasize: 34355544064 (32G)
Sectorsize: 512
Stripesize: 4096
Stripeoffset: 0
Mode: r0w0e0
There's no gptid provider displayed for the same volume under the FreeBSD 12.3 VM in bhyve:
Code:
Geom name: vtbd1p1
Providers:
1. Name: gpt/loam.tree
Mediasize: 34355544064 (32G)
Sectorsize: 512
Stripesize: 4096
Stripeoffset: 0
Mode: r1w1e1
secoffset: 0
offset: 0
seclength: 67100672
length: 34355544064
index: 0
Consumers:
1. Name: vtbd1p1
Mediasize: 34355544064 (32G)
Sectorsize: 512
Stripesize: 4096
Stripeoffset: 0
Mode: r1w1e2
If the pool created under the bhyve VM is not accessible to the VM host, I suppose this won't work out for any file sharing with the VM environment, after all.
Ideally, this would have been used for interop with a Linux VM for building with pkgsrc, targeting an openSUSE Tumbleweed environment, with the builder managed by vm-bhyve under FreeBSD. Considering that ZFS is well supported on both FreeBSD and Linux, ZFS seemed like the ideal filesystem for this scenario.
I wonder if this could work out any differently if the zvol were provided to the bhyve VM via iSCSI? In my opinion, the ideal outcome would be a zpool on a zvol that can be used sequentially under the bhyve VM and on the bhyve host. Here, at least it's usable under the bhyve VM, and that can work out for 'zfs send' from within the VM environment.
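As a sketch of that usage, sending a snapshot out of the VM might look like the following (the host name and receiving dataset are placeholders):
Code:
# within the bhyve guest: snapshot the pool and replicate it to the host over ssh
zfs snapshot -r loam.pkgsrc@xfer1
zfs send -R loam.pkgsrc@xfer1 | ssh bhyve-host zfs receive -u zroot/opt/builder/loam.copy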