bhyve Enhanced version of "vm image list"

I created a standalone alternative to the "vm image list" command in order to:
  1. add a SIZE_MB column that prints the compressed image size
  2. sort entries by NAME,CREATED (the original command sorts by UUID only)
  3. make the script much faster than "vm image list" by sourcing a single file instead of making several sysrc calls, at the potential expense of security (each *.manifest file must be well formed to avoid a shell hijack and/or errors)
Here is an example of the output (redacted):
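Because the script sources each manifest with `.`, the file must contain only plain shell variable assignments. A minimal sketch of the expected format (the field names are the ones the script reads; the values and the /tmp path are hypothetical):

```shell
# hypothetical *.manifest contents; any command smuggled into such a file
# would be executed by the listing script, hence the security caveat above
cat > /tmp/demo.manifest <<'EOF'
name="alpine1"
created="Thu Jun 26 07:22:02 UTC 2025"
description="No description provided"
EOF
. /tmp/demo.manifest     # same sourcing mechanism the script uses
echo "$name"             # prints: alpine1
```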
Code:
$ /opt/sbin/vm-iml.sh

UUID                                  NAME               CREATED          SIZE_MB  DESCRIPTION
7de44d22-524d-11f0-89ca-4cd717a0f39c  alpine1            20250626-072202       59  No description provided
6e03ad04-445c-4bf6-8438-70b2173aa88d  alpine2-nat        20250627-093946       59  No description provided
4a8db133-5272-11f0-afbf-f8b1569b53fe  alpine3-priv       20250626-114527       80  Alpine in private network
1de4459e-613e-11f0-9490-4cd717a0f39c  arch-gcc           20250715-073946     2413  ArchLinux with gcc15,cmake4
1bb3a2d3-5e52-11f0-a8b8-4cd717a0f39c  deb12-gccbox       20250711-142519     6195  distros with ...
456a0daf-5c98-11f0-83a4-4cd717a0f39c  deb12-min          20250709-094231      506  clean install

Here is script code (called vm-iml.sh):
Bash:
#!/bin/sh
# list VM images with ZFS archive size and sorted by NAME,CREATED
# It is an "advanced" version of the "vm image list" command
# Requirements: installed and configured "vm-bhyve" package
# Copyright: many parts come from /usr/local/lib/vm-bhyve/vm-* scripts
set -euo pipefail

errx() {
    echo "ERROR: $*" >&2
    exit 1
}

# extract ZFS dataset for vm-bhyve "datastore" configuration
extract_zfs_ds() {
    local vm_dir vm_ds
    vm_dir="$(sysrc -n vm_dir)"
    [ -n "$vm_dir" ] || errx "No vm_dir variable defined in /etc/rc.conf"
    [ "${vm_dir%%:*}" = "zfs" ] || errx "vm_dir='$vm_dir' has no 'zfs:' prefix"
    vm_ds="${vm_dir#zfs:}"
    [ -n "$vm_ds" ] || errx "Unable to extract ZFS dataset name from vm_dir='$vm_dir'"
    echo "$vm_ds"
}

vm_ds=$( extract_zfs_ds )
vm_dir=$(mount | grep "^${vm_ds} " | cut -d' ' -f3)
[ -n "$vm_dir" ] || errx "Unable to find mount point for ZFS dataset '$vm_ds'"
[ "${vm_dir#/}" != "$vm_dir" ] || errx "Mount point '$vm_dir' does not start with '/'"
im_dir="$vm_dir/images"
[ -d "$im_dir" ] || errx "Unable to find Image dir '$im_dir' under '$vm_dir'"

_formath='%s^%s^%s^%s^%s\n'
_format='%s^%s^%s^%7d^%s\n'

# top level block to align '^' separated output to columns
{
    printf "${_formath}" "UUID" "NAME" "CREATED" "SIZE_MB" "DESCRIPTION"
    # nested block to properly sort output data by NAME,CREATED
    {
        ls -1 "${vm_dir}/images/" | \
        while read -r _file; do
            if [ "${_file##*.}" = "manifest" ]; then
                _uuid=${_file%.*}
                # NOTE: sourcing with '.' is much faster than several sysrc calls
                . "${vm_dir}/images/${_uuid}.manifest"
                # convert the date to an ASCII-sortable form
                sortable_created=$( date -j -f '%+' '+%Y%m%d-%H%M%S' "${created}" )
                # get the file size of the compressed ZFS dataset
                zfs_size=$( stat -f "%z" "${vm_dir}/images/${_uuid}.zfs.z" )
                zfs_size_mb=$(( zfs_size / 1024 / 1024 ))
                printf "${_format}" "${_uuid}" "${name}" "${sortable_created}" "${zfs_size_mb}" "${description}"
            fi
        done
    } | sort -t^ -k2,2 -k3,3
} | column -ts^

exit 0

Latest version can be found on my GitHub project: https://github.com/hpaluch-pil/freebsd-scripts/blob/master/vm-bhyve/vm-iml.sh

Disclaimer: the script works only with a regular ZFS dataset; tested with the /etc/rc.conf value vm_dir="zfs:zbsd/vm-bhyve"
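For reference, the sortable CREATED column comes from re-parsing the manifest's human-readable timestamp; on FreeBSD the script uses `date -j -f '%+' '+%Y%m%d-%H%M%S'`. A rough equivalent with GNU date (`-d` instead of `-j -f`), pinned to UTC so the output matches the input timestamp (the sample value is hypothetical):

```shell
# convert a human-readable timestamp to the script's sortable form;
# TZ=UTC keeps the output in the same zone as the input string
created='Thu Jun 26 07:22:02 UTC 2025'
TZ=UTC date -d "$created" '+%Y%m%d-%H%M%S'   # prints: 20250626-072202
```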

I'm curious if anybody else may find this script useful.
 
I'm curious if anybody else may find this script useful.

It doesn't work for me; we have a different layout, and there is no ${vm_dir}/images directory in my configuration.
In my case it goes like $vm_dir/vm_name/disk0.img: as you can see, each disk sits in a separate directory named after the VM, while for you they all look to be in the same place.

set -o pipefail doesn't work for sh.
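For readers unfamiliar with the option, pipefail changes which exit status a pipeline reports. A minimal demonstration of the default behaviour:

```shell
# without pipefail (the historical sh default), a pipeline reports the exit
# status of its LAST command, so the failing `false` on the left is masked:
false | true
echo "exit=$?"   # prints: exit=0
# bash, zsh and recent FreeBSD sh accept `set -o pipefail`, which makes the
# same pipeline report the non-zero status instead
```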

Personally I have 2 functions to help me figuring out what is the size of a VM's disk
Code:
~ > # vm_disk() { i=$(ls -lh /vm/$1/*img | awk '{print $5, $9}' | sed 's/disk0.img//g' | sed 's/vm//g' | sed 's#/##g') ; echo $i }
~ > # vm_disk_all() { i=$(ls -lh /vm/*/*img | awk '{print $5, $9}' | sed 's/disk0.img//g' | sed 's/vm//g' | sed 's#/##g') ; echo $i }
~ > 
~ > vm_disk fbsd14
40G fbsd14
~ > 
~ > vm_disk_all
40G fbsd13b
20G fbsd13d
40G fbsd14
....
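As a side note, those `sed 's/vm//g'` and `sed 's/disk0.img//g'` substitutions would also mangle VM names that happen to contain "vm" or "disk0.img". A hedged alternative sketch using only parameter expansion (the base directory is a parameter here purely so it can be exercised outside /vm; it defaults to /vm like the functions above):

```shell
# list each VM disk image's size and VM name without sed-mangling the names;
# like the ls -lh column above, this reports the APPARENT file size
vm_disk_all() {
    base=${1:-/vm}
    for img in "$base"/*/*.img; do
        [ -e "$img" ] || continue          # no match -> glob stays literal
        name=${img%/*}                     # drop the file name
        name=${name##*/}                   # keep only the VM directory name
        printf '%s %s\n' "$(ls -lh "$img" | awk '{print $5}')" "$name"
    done
}
```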
 
vm-bhyve is also just a collection of shell scripts, so why not add this to the vm list command (e.g. as '-v') and submit it upstream as a PR?

OTOH - are you sure this is for sysutils/vm-bhyve? I am using it on several hosts, but as gotnull already pointed out, there is no ${vm_dir}/images path and vm-bhyve also doesn't use UUIDs or manifest files...
 
The ${vm_dir}/images path only exists if you used the vm image ... commands to create and provision images. But you don't have to use it. So I don't have that directory either.

Code:
     vm image list
     vm image create [-d description] [-u] name
     vm image provision [-d datastore] uuid new-name
     vm image destroy uuid
vm(8).

And vm-bhyve definitely uses uuid.
Code:
root@chibacity:~ # grep uuid /vm/sdgame02/sdgame02.conf
uuid="201092b4-5dcf-11f0-9df5-0cc47a183b68"
 
And vm-bhyve definitely uses uuid.
You are right. I just never saw it output any UUID and never really looked at the config files after deployment (or just ignored the UUID it adds to them).
I just looked at the 'image' command(s) in the manpage and realized I never used that in all those years...
Sorry for the noise (but still: this might be worth a PR against the port or upstream project to add it as a feature)
 
SirDice
Thank you, I learned something new today. I guess this is what happens when you don't read the manpage enough.

To be honest, like sko, I had never seen this option before; there is a chance I don't need this feature, but it's good to know that it exists.
After reading the manpage and part of the `vm-zfs` script, please tell me if I am wrong: is `vm image` useful when one needs to make a customized VM available as a template? Is that its main goal?

In any case, thank you for sharing, hpnothp
 
After reading the manpage and part of the `vm-zfs` script, please tell me if I am wrong: is `vm image` useful when one needs to make a customized VM available as a template? Is that its main goal?
The general idea is that you create a custom image (basic OS install plus some provisioning tooling perhaps), then use those images to quickly deploy new VMs based on that image.
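A sketch of that workflow with the vm(8) subcommands quoted earlier in the thread (the VM name, description, and the UUID placeholder are illustrative; `vm image list` prints the real UUID to pass to provision):

```
# capture an existing, cleaned-up VM as a reusable image
vm image create -d "base install plus provisioning tooling" myvm
# list images to obtain the new image's UUID
vm image list
# deploy a fresh VM from that image under a new name
vm image provision <uuid-from-list> newvm1
```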
 
After learning more about that feature (new to me), I did a few tests, and it's probably a better way to do what I was doing before with clone and/or git.
So I was able to try the script, and it looks fine.
I've been playing with NetBSD these days, so I made 2 images from one VM to see how it's done:
Code:
~ > time ~/bin/vm-iml.sh
UUID                                  NAME     CREATED          SIZE_MB  DESCRIPTION
4867866d-6979-11f0-91c1-18c04d800381  nbsd10c  20250725-190327      106  netbsd_10_1_fresh_install
0cc4a2ae-6a3e-11f0-8fa7-18c04d800381  nbsd10c  20250726-183158      116  netbsd_10_1_configured
~/bin/vm-iml.sh  0,01s user 0,02s system 127% cpu 0,023 total
~ >
~ >
~ > time vm image list
UUID                                  NAME     CREATED                           DESCRIPTION
0cc4a2ae-6a3e-11f0-8fa7-18c04d800381  nbsd10c  sam. 26 juil. 2025 18:31:58 CEST  netbsd_10_1_configured
4867866d-6979-11f0-91c1-18c04d800381  nbsd10c  ven. 25 juil. 2025 19:03:27 CEST  netbsd_10_1_fresh_install
vm image list  0,12s user 0,02s system 124% cpu 0,110 total
~ >

* It's indeed quicker than the original solution.
* Seeing the size of the images directly is nice.

That being said, I agree with sko. hpnothp, you should try to submit a pull request to the main project and see what they say; it's a win-win situation for you and for the project.
 
You have to be aware of certain issues with images though, you need to delete the SSH host keys for example before creating the image, or else all your VMs will have the same SSH host keys. There are a few other things you need to clean up too.
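A minimal sketch of that cleanup for a FreeBSD guest (the function name and its directory parameter are illustrative; the parameter only exists so the deletion can be exercised outside /etc/ssh):

```shell
# remove per-instance SSH host keys before running `vm image create`;
# sshd's rc script regenerates them on the clone's first boot
scrub_host_keys() {
    dir=${1:-/etc/ssh}
    rm -f "$dir"/ssh_host_*_key "$dir"/ssh_host_*_key.pub
}
```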
 
You have to be aware of certain issues with images though, you need to delete the SSH host keys for example before creating the image, or else all your VMs will have the same SSH host keys. There are a few other things you need to clean up too.
Yep, I'll need to make a script to change a few things: passwords, hostname, SSH keys, IP, and probably a few other things (there is no real danger because everything stays on my local network).
If I remember correctly, I saw something like that in my past with Linux KVM; I think a virsh-related tool was doing that, if my memory serves me well.
Anyway, thank you for the heads up. I still can't believe I missed that feature when it was right there all this time...
 