Solved VM sparse ZVOL

Can anyone post a working example of a VM configuration that boots from a ZVOL device as its primary disk? Say I create it like this: zfs create -sV 100G -o volmode=dev "mypool/vm/myvm/disk0". How do I expose it in my VM configuration file?

SOLVED:

The following lines must be added to the configuration file:
Code:
disk0_name="disk0"
disk0_dev="sparse-zvol"
disk0_type="virtio-blk"

With this configuration, vm(8) uses zfs(8) to create a sparse volume (volmode=dev) under your VM dataset and exposes it as a virtio(4) block device named disk0. The volume size can be adjusted afterwards by setting the volsize property.
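For example, growing the disk0 volume from the question above to 150G could look like this (a sketch; the guest still has to be told about the extra space, e.g. by growing its partitions):
Code:
# zfs set volsize=150G mypool/vm/myvm/disk0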
 
With sysutils/vm-bhyve, create a proper template. Then you can simply run vm create -d stor10k -t freebsd-zvol -c 2 -m 4096M freebsd-test and have it create the sparse ZVOL automatically. No need to do this by hand.

I also created different datastores:
Code:
# vm datastore list
NAME            TYPE        PATH                      ZFS DATASET
default         zfs         /vm                       zroot/DATA/vm
stor10k         zfs         /storage/vm               stor10k/DATA/vm
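For reference, an additional datastore like stor10k can be registered with vm datastore add; a sketch, assuming the dataset stor10k/DATA/vm already exists (vm-bhyve takes ZFS-backed datastores in the zfs:pool/dataset form):
Code:
# vm datastore add stor10k zfs:stor10k/DATA/vm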

The templates are stored in /vm/.templates; you can find examples in /usr/local/share/examples/vm-bhyve/.
Code:
# ll /vm/.templates/
total 55
-rw-r--r--  1 root  wheel  487 Jan  6  2018 centos7.conf
-rw-r--r--  1 root  wheel  172 Jan  7  2018 debian.conf
lrwxr-xr-x  1 root  wheel   17 Jan  6  2018 default.conf@ -> freebsd-zvol.conf
-rw-r--r--  1 root  wheel  177 Jan  7  2018 freebsd-uefi-zvol.conf
-rw-r--r--  1 root  wheel  248 Feb  4  2018 freebsd-zvol.conf
-rw-r--r--  1 root  wheel  131 Jan  6  2018 ubuntu.conf
-rw-r--r--  1 root  wheel  123 Jan  6  2018 windows.conf
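If your .templates directory is still empty, the shipped examples can be copied in as a starting point (adjust the destination to your own datastore path):
Code:
# cp /usr/local/share/examples/vm-bhyve/* /vm/.templates/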

(NB: this question was split off into its own thread; it had nothing to do with the old thread it was posted under.)
 
I'm sorry if this is a stupid question. I'm just trying to understand the zvol creation portion.

Bash:
➜  .templates vm datastore list
NAME            TYPE        PATH                      ZFS DATASET
default         zfs         /scrap/vm                 scrap/vm
➜  .templates vm create -d scrap/vm -t freebsd-zvol freebsd-test
/usr/local/sbin/vm: ERROR: unable to load datastore - 'scrap/vm'
➜  .templates vm create -d scrap -t freebsd-zvol freebsd-test
/usr/local/sbin/vm: ERROR: unable to load datastore - 'scrap'
➜  .templates zfs list scrap
NAME    USED  AVAIL     REFER  MOUNTPOINT
scrap  1.45G   448G      742M  /scrap
 
vm datastore list shows you only have a datastore called 'default'. So -d scrap points to a non-existent datastore.
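Since scrap/vm is already registered as the datastore named 'default', the simplest fix is to create into that one, for example:
Code:
# vm create -d default -t freebsd-zvol freebsd-test
Leaving out -d entirely does the same, as 'default' is used when no datastore is given.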
 
Can anyone list the steps, command by command, to create a sparse-ZVOL VM? I already have a zroot/bhyve ZFS dataset holding my VMs as standard disk0.img files with .conf files. To use a sparse volume, would I have to start over?

Thanks
 
Code:
root@hosaka:~ # cat /vm/.templates/freebsd-zvol.conf
utctime="yes"
loader="bhyveload"
cpu=1
memory=512M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0"
disk0_dev="sparse-zvol"
#zfs_dataset_opts="compress=off"
zfs_zvol_opts="volblocksize=8k compress=on"


I already have a zroot/bhyve ZFS dataset holding my VMs as standard disk0.img files with .conf files. To use a sparse volume, would I have to start over?
Yes, it will need to be recreated.
 
I think that, using a zvol of exactly the same size as the existing image file, it should be possible to copy the contents over with dd(1).
Never tried it but that should work, yes. It's just a lot simpler if you recreate the VM.
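A sketch of what that copy could look like, using hypothetical paths and with the VM shut down first; conv=sparse skips writing all-zero blocks, which should keep the new volume thin:
Code:
# dd if=/zroot/bhyve/myvm/disk0.img of=/dev/zvol/zroot/bhyve/myvm/disk0 bs=1M conv=sparse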
 
Never tried it but that should work, yes. It's just a lot simpler if you recreate the VM.
So I could run "zfs create -sV 50G -o volmode=dev path/to/dataset/zvol" pointing to a new zvol root dataset outside of my existing zroot, and then reference it in rc.conf as the new vmm directory?
 
I guess by "vmm directory" you mean the directory that contains your .img file?

You probably have other files in there as well, so create the zvol (it only replaces the .img file) as a child dataset there. If you use sysutils/vm-bhyve, you only have to change the configuration of the virtual hard disk to something like this:
Code:
disk0_type="virtio-blk"
disk0_name="disk0"
disk0_dev="sparse-zvol"
(of course adapt type and name to what you actually use)
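Creating that child zvol by hand could look like this (a sketch, assuming the VM lives under zroot/bhyve/winserver and the old image is 50G):
Code:
# zfs create -sV 50G -o volmode=dev zroot/bhyve/winserver/disk0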
 
Example:
Code:
root@hosaka:~ # env EDITOR=cat vm configure lady3jane
utctime="yes"
loader="bhyveload"
cpu="4"
memory="8192M"
network0_type="virtio-net"
network0_switch="servers"
disk0_type="virtio-blk"
disk0_name="disk0"
disk0_dev="sparse-zvol"
disk1_type="virtio-blk"
disk1_name="disk1"
disk1_dev="sparse-zvol"
#zfs_dataset_opts="compress=off"
zfs_zvol_opts="volblocksize=4k compress=off"
uuid="05f2fc77-580c-11eb-bcdc-002590f15838"
network0_mac="58:9c:fc:03:af:02"
disk0 and disk1 refer to these volumes:
Code:
root@hosaka:~ # zfs list -r zroot/DATA/vm/lady3jane
NAME                            USED  AVAIL     REFER  MOUNTPOINT
zroot/DATA/vm/lady3jane        64.8G  11.9G      148K  /vm/lady3jane
zroot/DATA/vm/lady3jane/disk0  19.5G  11.9G     19.5G  -
zroot/DATA/vm/lady3jane/disk1  45.3G  11.9G     45.3G  -

That /vm/lady3jane is also the directory where vm(8) stores the configuration of the VM, so it makes sense to store your zvols there too, to keep everything together.
Code:
root@hosaka:~ # ll /vm/lady3jane
total 76
-rw-r--r--  1 root  wheel     28 Feb 26 00:08 console
-rw-r--r--  1 root  wheel    398 Feb 14  2021 lady3jane.conf
-rw-r--r--  1 root  wheel     20 Feb 26 00:08 run.lock
-rw-r--r--  1 root  wheel  88598 Feb 26 00:38 vm-bhyve.log
It's running now, so there's a run.lock there. The *.conf file is what you edit with vm configure .... The console file is a reference to the connected nmdm(4) device, used for the (serial) console. The *.log is, ehm, well, it hopefully speaks for itself.
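Attaching to that serial console also goes through vm(8), for example:
Code:
# vm console lady3jane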
 
Well, I created a sparse zvol using zfs create -sV and initialized it as a GPT disk with gpart; it exists under /dev/zvol/pathtosparse. I used dd(1) to image my existing VM from /zroot/bhyve/winserver/disk0.img to the /dev/zvol path, and changed my existing winserver.conf to reference the /dev/zvol path with virtio-blk selected, but when I start the VM it boots to a blue screen with an "inaccessible boot device" error. So I attached the zvol as a second disk instead, booted the working Windows image, installed the virtio-blk driver, and used Macrium to image onto the new block storage device, but got the same error when I tried to boot. Not sure how to fix this.

Thanks
 
So I was finally able to fix my issues getting the VM image cloned onto the sparse zvol device, and I was
curious about the performance differences, so I ran the Anvil storage benchmark to compare. I was shocked
to find that "nvme" mode was significantly faster: the write speed in particular was nearly four times higher
than with virtio-blk!

Not sure why the virtio-blk write speed was so much slower, as the zvol should be spread across four RAIDZ1
drives. Maybe write amplification was at work, or double caching, or something. I will have to retest later,
once I figure out what optimizations to put in the conf file, if that would even help.
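For reference, switching a vm-bhyve disk to the NVMe emulation is just a change of the disk type, e.g.:
Code:
disk0_type="nvme"
disk0_name="disk0"
disk0_dev="sparse-zvol"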


Here is the virtio-blk device score:
[attachment: Anvil benchmark screenshot, virtio-blk]

NVMe setting score:
[attachment: Anvil benchmark screenshot, NVMe]
 