Bhyve management script

I've written a simple(ish) bhyve management script that some people might find useful. It only supports FreeBSD guests at the moment, although that's easy enough to fix with a few spare hours. It's also my first shell script longer than about 10 lines (i.e. containing more than just a few rsync(1)/mysqldump(1) commands), so it's probably incredibly messy to a sh(1) expert.

I've looked at other management scripts (mainly iohyve & bhyveucl) but they didn't really suit me. iohyve is designed purely for ZFS, which limits its usefulness. It also uses ZFS properties to store configuration. This is OK, and as a "port" of sysutils/iocage its primary design feature is using ZFS & ZFS properties, but I find it gets messy very quickly. Even with simple functionality you end up with loads of ZFS properties, and my general view is that, one way or another, the way forward for bhyve is to have everything about the VM set in a config file. The command structure of iohyve is really nice (mine's similar), but the other niggle I have is that it creates a ZFS dataset for every ISO file(?).

With bhyveucl, I found the configuration really awkward. The sample files contain god knows how many dozens of lines just to configure networking. I wanted something that could be run with the bare minimum of config.

I did originally plan to use UCL (it's much nicer having stuff like interfaces in an array rather than network0="", network1="", etc.), but my awkward and terrible development setup left me using only what was in base. So I ended up making overzealous use of sysrc(8) and storing all config in rc-style files. In a way it's nice that nothing needs to be installed.
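For example, reading a value back out of a guest's config file is a one-liner with sysrc(8). A minimal sketch, assuming a guest called test whose config lives at /vm/test/test.conf (the exact path is an implementation detail of the script):
Code:
# sysrc -f /vm/test/test.conf -n cpu
1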

I'd never actually run bhyve before doing this (just an avid follower). After writing about 300-400 lines I moved the script to the only bhyve capable machine I can play with and tried it. I was quite surprised that I only had to change a line or two to get everything working, and that bhyve works incredibly well.

Anyway, on to the script and how to use it:
  1. Make sure vmm is not already loaded. Currently the script checks for vmm and does its own init if it isn't loaded, so my init won't get done if you load the module manually or through /boot/loader.conf.
  2. Install the script as /usr/local/sbin/vm and make it executable.
  3. Create a directory to house all vm data
    Code:
    mkdir /vm
    Use whatever directory you like, although I will use /vm throughout the examples, so substitute your own directory as needed.
  4. Add the following to /etc/rc.conf
    Code:
    vm_enable="YES" # enable vm
    vm_dir="/vm" # vm directory
    vm_list="" # space separated list of machines to start on boot
    vm_delay="5" # delay between starting machines
  5. Run the init command which will load all needed modules and create the rest of the directory structure
    Code:
    # vm init
  6. Create /vm/.templates/default.conf
    I've designed the system to use templates for the configuration. This way you can have multiple templates (e.g. freebsd-small.conf, freebsd-web.conf, etc.) and use the relevant template when creating virtual machines.

    Put the following in it:
    Code:
    cpu=1
    memory=256M
    network0_type="virtio-net"
    network0_switch="public"
    When a VM is created using the default template, it will have 1 CPU, 256MB of RAM and one network interface, connected to the public switch (more on that in a minute).
That's basically it for setting up the environment.
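If you want to sanity-check the environment at this point, something like the following should show the kernel modules that vm init loads (my assumption of the module list: vmm, nmdm, if_bridge and if_tap):
Code:
# kldstat | grep -E 'vmm|nmdm|if_bridge|if_tap'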
Now on to actually using it:
  1. I've taken a "vmware-style" approach to networking: you create virtual "switches", then tell each VM which switch to connect to. In the default template I specified a switch called "public", so let's create it:
    Code:
    # vm switch create public
    Let's say em0 is an interface on our host connected to our LAN, and we want to connect our virtual machines to that:
    Code:
    # vm switch add public em0
    Now list the switches to see the config:
    Code:
    # vm switch list
    NAME                IDENT                     VLAN PORTS
    public              bridge0                      - em0
    This has basically created a bridge and added em0 to it.
  2. Download a FreeBSD iso to install from
    Code:
    # vm iso ftp://ftp.freebsd.org/pub/FreeBSD/releases/ISO-IMAGES/10.1/FreeBSD-10.1-RELEASE-amd64-disc1.iso
  3. Create our virtual machine
    Code:
    # vm create test [OR]
    # vm create -t default -s 20G test
    Both of those commands do the same thing. Without specifying a template, the default one is used, and a 20G disk is created if no size is given.
  4. Install the OS
    Code:
    # vm iso
    FILENAME
    FreeBSD-10.1-RELEASE-amd64-disc1.iso
    # vm install test FreeBSD-10.1-RELEASE-amd64-disc1.iso
    (I only use vm iso there to list the ISO name for easy copy/paste).
    This will wait for bhyveload to make sure it runs correctly, then exit once bhyve actually starts. You can now connect to the console to complete the install:
    Code:
    # vm console test
    (This uses cu(1), so press ~ followed by Ctrl-D to exit)

    Once the OS is installed and the machine reboots, it should boot straight into FreeBSD. The script takes care of restarting bhyve after a guest reboot.
  5. To shutdown, either connect to the VM and shut it down normally, or run one of the following from the host:
    Code:
    # vm stop test
    # vm stopall
    The first will send the shutdown command to the guest and exit immediately; the second will wait for the machines to stop. (The second is used in my rc script, just in case FreeBSD needs the script to actually wait until bhyve has finished before exiting.)
  6. You can start a machine once it's installed by just running
    Code:
    # vm start test
    Once it's running, use the same console command as above to connect to it. (A quick way to check what's running is shown just below.)
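To see the state of all machines, use vm list. The output below is illustrative; the exact columns and values may differ between versions:
Code:
# vm list
NAME            GUEST           CPU    MEMORY    STATE
test            freebsd         1      256M      Running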
Let me know how it goes if you try it or have any questions. I may have forgotten something. It works pretty well on my dev machine but hasn't had a massive amount of testing. I also have an rc script for boot/shutdown but I'll leave that until I actually get a chance to test it myself first.

Disclaimer: use at your own risk. All testing was done on 10.1-RELEASE. It shouldn't do anything harmful to your computer but, as mentioned, it's my first "serious" shell script and was mainly thrown together over a couple of days.
 

Attachments

  • vm.txt (20.7 KB)
I've attached a new version of the script with some tidying/small fixes and basic support for Ubuntu/CentOS guests.

Also attached are some sample templates for the three supported systems (FreeBSD/Ubuntu/CentOS), which now contain some extra required configuration (mainly disk location/type). The script now supports multiple hard disks, with all disks (including the original first disk) specified in the configuration file; the old version was hard-coded to look for disk0.img. Any existing virtual machines will need the following added to their config file:
Code:
disk0_type="virtio-blk"
disk0_name="disk0.img"
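As a sketch of the multi-disk support, I'd expect a second disk to simply follow the same numbering scheme (my assumption, based on the networkN_* convention; check the sample templates for the exact keys):
Code:
disk1_type="virtio-blk"
disk1_name="disk1.img"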

CentOS was a bit of a pain. The kernel to load needs to be passed to the boot loader. To allow for this, I have specified the kernel version in the template; this is then passed to grub-bhyve to load the kernel. In the attached template I have specified:
Code:
linux_kernel="3.10.0-229.el7.x86_64"
This works for the version of CentOS I tested, CentOS-7-x86_64-Minimal-1503-01.iso, but this may obviously be different for other versions of CentOS. I'd be interested if anyone knows a better way of handling this.
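For reference, on an already-installed CentOS guest the exact version string can be read off the kernel files in /boot (illustrative output from the release mentioned above):
Code:
$ ls /boot/vmlinuz-*
/boot/vmlinuz-3.10.0-229.el7.x86_64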

The virtual switches also support VLANs. If you have a VM that wants to communicate privately via VLAN 10, over physical interface em0, you can do the following:
Code:
# vm switch create private
# vm switch vlan private 10
# vm switch add private em0

Then put the following in the vm configuration file to add a second interface on this switch:
Code:
network1_type="virtio-net"
network1_switch="private"
(If the VM only needs access to this network you could just change network0_switch and leave it with one interface)
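With both switches configured, the switch list should look roughly like this (the private switch's bridge name is whatever gets allocated next; bridge1 is illustrative):
Code:
# vm switch list
NAME                IDENT                     VLAN PORTS
public              bridge0                      - em0
private             bridge1                     10 em0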

I've also attached an rc.d script for anyone who wants to try it. Install it as /usr/local/etc/rc.d/vm. On shutdown it should wait for all machines to fully stop, although I'm not sure whether that's necessary (or whether FreeBSD would wait anyway).
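Once it's installed and vm_enable="YES" is set in /etc/rc.conf, the usual service(8) commands should work:
Code:
# service vm start
# service vm stop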
 

Attachments

  • vm.txt (26.5 KB)
  • default.conf.txt (138 bytes)
  • ubuntu.conf.txt (137 bytes)
  • centos.conf.txt (175 bytes)
  • vm.rc.txt (402 bytes)
I've now moved the code for this onto GitHub. It's easier to maintain and can be viewed/downloaded without needing a forum account. I also started to need some way to keep track of changes.

https://github.com/churchers/vm-bhyve

Not expecting anything big to come from it (for a start, the method of running bhyve will likely change in the near future when EFI support arrives), but I think it has some neat ideas such as the switch commands and rc.conf/rc.d integration.
 
Just to update this, I've done quite a bit of work on vm-bhyve recently and am quite happy with the current feature set:
  • Out-of-the-box support for FreeBSD/NetBSD/OpenBSD* and Linux (Alpine/CentOS*/Ubuntu/Debian)
  • Virtual switches support both VLAN & NAT, enabled with a simple option (see the sketch after this list).
    NAT automatically assigns a private network range for the guests, enables pf NAT forwarding, and uses dnsmasq to provide DHCP on the virtual switch.
  • ZFS support can be enabled/disabled. If enabled, a ZVOL can be used for the disk device if desired.
  • rc.d integration with configurable guest list / start-up ordering
  • Ability to package a machine into an image file, then create new machines from that image (if using ZFS)
  • Log file inside guest directory, detailing what vm-bhyve is doing when guests start/restart/stop.
  • Ability to enable pass-through devices. (Adds bhyve options exactly as in the documentation, although I personally have no way of testing this)
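As a sketch of the NAT option mentioned in the list above (I'm writing the nat subcommand from memory, so treat the exact syntax as an assumption and check the command's usage output):
Code:
# vm switch create private
# vm switch nat private on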
I'm just waiting for the UEFI stuff to finally appear in the open. Hopefully that will help clean up the way non-FreeBSD guests are booted (see below for guests that are problematic), and provide support for even more guests, possibly even Windows.

*Without UEFI, OpenBSD & CentOS are a bit of a pain, as the commands I need to pass into the bootloader include version numbers. These numbers have to be specified in the guest configuration file. See the example templates for these guests.
 
Hello.

I tried sysutils/vm-bhyve, but I got stuck quite quickly.
I did the following.

Installed sysutils/vm-bhyve.
Created a ZFS dataset:
Code:
# zfs create storage/vm
Then I edited /etc/rc.conf and put the following two lines in there:
Code:
vm_enable="YES"
vm_dir="zfs:storage/vm"
Then I gave the init command:
# vm init
But then I get an error

Code:
root@srv-01:~ # vm init
cat: /storage/vm/.config/switch: No such file or directory

What am I missing?
 
Hi Sylhouette,
Unfortunately that's a small bug that wasn't noticed until the port was committed.
You can safely ignore that message. It will disappear as soon as you create at least one virtual switch:
Code:
vm switch create public
 
OK, I just gave it the command, and now the error is gone and the other dirs are created.

One more question: do I need the cloned_interfaces="bridge0 tap0" lines in /etc/rc.conf?
Or is the vm switch create public command taking care of that?

Thanks for your time.
 
You don't need to create any bridges or other interfaces manually.
The switch commands create bridge interfaces (and this is done automatically by vm-bhyve on boot). Tap devices are created dynamically when you start a virtual machine.
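While a guest is running you can see the result with ifconfig(8). Roughly (interface names and flags here are illustrative):
Code:
# ifconfig bridge0
bridge0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
        ...
        member: tap0 flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
        member: em0 flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>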
 
If you're using vm-bhyve, then once a VM is running you just run the following to connect to the guest console.
Code:
# vm console guestname
(This actually just uses cu(1) internally)

There is currently no VGA console support in bhyve, so there is no graphical console at all.

Windows is supported in bhyve, but it's a bit hacky. As there's no graphical output, you need to create an unattended installation ISO for Windows, then let that run and install Windows blind*. Once installed, the Windows serial console provides enough functionality to find out the guest's IP address, at which point you can RDP directly to the guest. (The bhyve Windows instructions provide information to create a Windows install that has RDP enabled by default and gets an IP address via DHCP.)

*I say blind but you can actually see a lot of what is happening during install via the Windows serial console.
 
Hi there. Thank you for answering. I'm trying to install SmartOS inside a zpool that I created for it. Is that possible, and if so, how?
 
I'm always encountering this problem
Code:
hazz% sudo vm console smart
/usr/local/sbin/vm: ERROR: smart doesn't appear to be a running virtual machine
hazz% sudo vm list
NAME            GUEST           CPU    MEMORY    AUTOSTART    STATE            
smart           smartos         2      2G        No           Stopped
 
SmartOS is a bit of an awkward one. Did you get the details to run it from this support issue?
https://github.com/churchers/vm-bhyve/issues/16

SmartOS always needs to be started in install mode, as it cannot be booted from hard disk - it always needs to boot off the ISO.
Does the VM actually start if you run the following?
Code:
# sudo vm install smart smartos-iso-filename.iso

If not, show what is being written to $vm_dir/smart/vm-bhyve.log.
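e.g. something like this, assuming vm_dir is /vm (substitute your own directory):
Code:
# tail /vm/smart/vm-bhyve.log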
 
It says unsupported guest type smartos.
I was able to run it with the normal program bhyve, but the console under FreeBSD is not defined.
 
I don't currently have a guest type called smartos. I'm trying to move away from building all the guests directly into vm-bhyve as it's impossible to support everything internally. I also think that, other than testing, it makes more sense for SmartOS to be used on raw hardware, so I've not been hugely concerned with adding full support for it.

If you look in the issue link from above, you need to use a template similar to windows.conf, with the following changes:
Code:
guest="generic"
uefi="csm"
hostbridge="none"
comports="com1 com2"
You then need to run it in install mode each time, as SmartOS is designed to always boot from ISO or network.
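In other words, start it with something like the following each time (the ISO filename here is just a placeholder for whichever release you downloaded):
Code:
# vm install smart smartos-latest.iso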
 