bhyve: The FreeBSD Hypervisor

FreeBSD has had varying degrees of support as a hypervisor host throughout its history. For a time during the mid-2000s, VMware Workstation 3.x could be made to run under FreeBSD's Linux emulation; QEMU was ported in 2004, followed by the kQEMU accelerator in 2005; and in 2009 a port of VirtualBox was introduced. All of these solutions suffered from the same problem: they were designed for a different operating system and then ported to FreeBSD, requiring constant maintenance.

Introducing FreeBSD’s bhyve


In the late 2000s, NetApp was investigating using a hypervisor to run additional services on top of its FreeBSD-based storage appliance. Later, in the fall of 2010 at the MeetBSD conference in California, Peter Grehan and Neel Natu (NetApp employees at the time) started asking around about the potential of a type-2 hypervisor for FreeBSD.

The idea drew a lot of interest, so NetApp moved forward with the project, providing hardware and other resources to a team of developers, with the goal of building a simple, high-performance hypervisor that took advantage of the hardware virtualization support offered by modern x86 CPUs.

bhyve was originally designed and implemented by Neel Natu, and a small team at NetApp grew the functionality over time. The project was eventually shelved at NetApp, but the code was open sourced and contributed to FreeBSD in May of 2011, and it has seen continuous development ever since.

The Early Days of bhyve


As originally introduced into FreeBSD, bhyve required reserved resources. The system would need to be booted with a loader.conf tunable telling the kernel to not use the top X GB of memory, and then that memory could be used by a virtual machine. CPUs were pinned to a guest, and no sharing or oversubscription was possible.

Over time, bhyve grew out of these limitations and expanded to support AMD’s hardware virtualization in addition to Intel’s. You can learn more about the early days of bhyve from the BSDNow.tv interview with developers Peter Grehan and Neel Natu.

The Legacy-Free Hypervisor


One thing that set bhyve apart was being a “legacy-free” hypervisor. What does this mean? Existing hypervisors like VMware, VirtualBox, and QEMU emulate older physical hardware that is well supported by almost every operating system. This has the advantage of working with older operating systems without requiring special drivers or modified versions of the guest, as early versions of the Xen hypervisor did.

bhyve took advantage of the new paravirtualized driver infrastructure known as virtio. These drivers, commonly found in modern operating systems like FreeBSD and Linux, eventually made their way into Windows as well. By avoiding the need to emulate physical hardware (especially older, lower-performance devices), bhyve contains a great deal less code and maintains high performance.
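To give a sense of why this is simpler than emulating real hardware, the following sketch shows the shared descriptor ring at the heart of the legacy (0.9) virtio layout. The guest places buffers into this ring and the host consumes them, so there is no register-level emulation of a physical NIC or disk controller. This is a simplified excerpt based on the virtio specification, for illustration only, not the FreeBSD source.

/*
 * Simplified excerpt of the legacy (virtio 0.9) ring layout, as defined
 * by the virtio specification. The guest driver and the bhyve backend
 * share these structures in guest memory: the guest fills in descriptors
 * pointing at its buffers and the host consumes them, with no need to
 * emulate the registers of real hardware.
 */
#include <stdint.h>

struct vring_desc {
	uint64_t addr;	/* guest-physical address of the buffer */
	uint32_t len;	/* length of the buffer in bytes */
	uint16_t flags;	/* e.g. buffer continues in the next descriptor */
	uint16_t next;	/* index of the next descriptor in the chain */
};

struct vring_avail {
	uint16_t flags;
	uint16_t idx;		/* next free slot the guest will fill */
	uint16_t ring[];	/* descriptor indices offered to the host */
};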

Another advantage of the legacy-free model is that bhyve avoided a number of high-profile vulnerabilities found in the legacy device emulation code of other hypervisors, such as the QEMU floppy driver vulnerability.

Taking Advantage of Para-virtualized Drivers


When bhyve was first created, the virtio specification was still in its early stages, but it was adopted for its minimal overhead and low complexity. FreeBSD currently supports version 0.9 of the virtio specification, originally for network and block interfaces, with support for SCSI, console, entropy, and memory ballooning devices added later.

Later versions of the virtio specification added new functionality to the virtio block interface, allowing blocks to be “discarded”, effectively implementing an analog of the TRIM command used by modern filesystems to notify SSDs and other flash-based media that certain blocks are no longer in use.

This allows the Flash Translation Layer (FTL) to better manage the flash memory and wear leveling. In the context of virtual machines, letting the guest operating system notify the hypervisor that a range of blocks on the virtual disk is no longer in use allows the host to reclaim that space on its underlying disks.

This can improve performance for VMs backed by SSDs, and it also allows more effective use of space on the host. Without this feature, when data is deleted inside the VM, the host is unaware that the space is no longer in use.

This functionality was not included in the virtio 0.9 specification that FreeBSD uses both as a guest and as a hypervisor. However, similar to ZFS, virtio uses a set of “feature flags” to communicate what functionality the host and guest each support. This is more expressive than a simple version number and allows individual features to be supported without needing to implement every change in newer versions of virtio.
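As an illustration of how that negotiation works, here is a minimal C sketch: the host offers a bitmask of features, the guest masks it against what it understands, and a specific bit such as VIRTIO_BLK_F_DISCARD (feature bit 13 in the virtio block specification) gates whether discard may be used. The bit value comes from the specification; the function itself is only a simplified illustration, not code from FreeBSD.

/*
 * Simplified illustration of virtio feature-flag negotiation (not code
 * from FreeBSD). The host offers a bitmask of supported features, the
 * guest driver acknowledges only the bits it also understands, and both
 * sides then use just the agreed-upon subset.
 */
#include <stdint.h>
#include <stdio.h>

/* VIRTIO_BLK_F_DISCARD is feature bit 13 in the virtio block specification. */
#define VIRTIO_BLK_F_DISCARD	(1ULL << 13)

static uint64_t
negotiate_features(uint64_t host_offered, uint64_t guest_supported)
{
	/* Only features supported by both sides may be used. */
	return (host_offered & guest_supported);
}

int
main(void)
{
	uint64_t host = VIRTIO_BLK_F_DISCARD;	/* host advertises discard */
	uint64_t guest = VIRTIO_BLK_F_DISCARD;	/* guest driver understands it */
	uint64_t agreed = negotiate_features(host, guest);

	printf("discard %s\n",
	    (agreed & VIRTIO_BLK_F_DISCARD) ? "enabled" : "disabled");
	return (0);
}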

Beyond bhyve’s Current Capabilities


In a recent commit to FreeBSD, Klara has extended the FreeBSD virtio driver and bhyve virtio backend to support the VIRTIO_BLK_T_DISCARD command. This change allows bhyve to accept TRIM commands from any guest OS that supports virtio discard. If the virtual disk is a block device, bhyve will translate them into TRIM or UNMAP commands for the underlying device.
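The sketch below shows roughly what that translation looks like for a disk backed by a block device. The request layout follows the virtio block specification, and DIOCGDELETE is the existing FreeBSD ioctl that issues a TRIM/UNMAP for a byte range on a disk device; the function itself is an illustration rather than the actual bhyve code, and it assumes 512-byte virtio sectors.

/*
 * Illustration (not the actual bhyve code) of how a guest discard
 * request can be translated into a TRIM/UNMAP on a FreeBSD block
 * device. The request layout follows the virtio block specification;
 * DIOCGDELETE is FreeBSD's ioctl for deleting a byte range on a disk.
 */
#include <sys/types.h>
#include <sys/disk.h>		/* DIOCGDELETE */
#include <sys/ioctl.h>
#include <stdint.h>

#define	SECTOR_SIZE	512	/* assumes 512-byte virtio sectors */

/* One discard segment, as laid out by the virtio block specification. */
struct virtio_blk_discard {
	uint64_t sector;	/* first sector to discard */
	uint32_t num_sectors;	/* number of sectors to discard */
	uint32_t flags;		/* reserved/unmap hint bits */
};

/* Translate a guest discard request into a delete on the backing device. */
int
handle_discard(int devfd, const struct virtio_blk_discard *req)
{
	off_t range[2];

	range[0] = (off_t)req->sector * SECTOR_SIZE;		/* byte offset */
	range[1] = (off_t)req->num_sectors * SECTOR_SIZE;	/* byte length */

	return (ioctl(devfd, DIOCGDELETE, range));		/* 0 on success */
}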

If the virtual disk is backed by a ZFS zvol, then the blocks will be freed, returning space to the pool. Future work remains to support freeing space when the virtual disk is backed by a file. This will require extending the FreeBSD VFS interface to support “hole punching”, or re-sparsing a file.

FreeBSD supports creating sparse files, which allow the user to create large files and write to them at different offsets without having to allocate space for the entire file while most of it remains unused. However, FreeBSD does not yet support freeing a range of an existing file, returning it to a sparse state after it has been written. Once this is implemented, the bhyve virtio backend can be extended to use this interface for file-backed virtual disks.
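To make the distinction concrete, the following sketch uses what FreeBSD already provides: ftruncate() creates a large file without allocating blocks for it, and lseek() with SEEK_HOLE reports where the unallocated ranges are. What is still missing, as described above, is the inverse operation of punching a hole back into a range that has already been written. The file name here is just an example.

/*
 * Sketch of FreeBSD's existing sparse-file support. ftruncate() creates
 * a large file without allocating space for it, and lseek() with
 * SEEK_HOLE reports where the unallocated ranges are. The missing piece
 * described in the article is the reverse: an interface to deallocate
 * ("punch a hole in") a range that has already been written.
 */
#include <sys/types.h>
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

int
main(void)
{
	/* Example file name for a file-backed virtual disk. */
	int fd = open("disk.img", O_RDWR | O_CREAT, 0644);
	if (fd == -1)
		return (1);

	/* A 10 GB virtual disk that consumes almost no space on the host. */
	if (ftruncate(fd, 10LL * 1024 * 1024 * 1024) == -1)
		return (1);

	/* SEEK_HOLE finds the first hole at or after the given offset. */
	off_t hole = lseek(fd, 0, SEEK_HOLE);
	printf("first hole at byte offset %jd\n", (intmax_t)hole);

	close(fd);
	return (0);
}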

With this second commit, FreeBSD guests running under any hypervisor that supports virtio discard will advertise TRIM support via the emulated block device and notify the hypervisor about any blocks that are no longer in use.

Both UFS and ZFS allow you to configure whether and how they generate TRIM commands. This functionality improves the performance of FreeBSD in many public and private clouds that use virtio for storage.

What’s Next for bhyve?


There is consistent and active development of bhyve across the FreeBSD and illumos communities, including support for the 9P filesystem, save/restore, live migration, NVMe device emulation, and ARMv8 virtualization.

The virtio-9p driver allows a host directory to be shared with a guest, rather than presenting storage as a block device. The save/restore feature will allow a VM to save its state to disk and restore it later, allowing a guest to resume running after a reboot of the host. Building on save/restore, the live migration feature will allow a VM to move to a different host machine by saving and transferring its state, then progressively transferring the changes that have accumulated since the previous snapshot.

The Non-Volatile Memory Express (NVMe) interface is a modern replacement for the legacy disk interfaces designed for spinning hard drives. Emulating this higher-performance interface allows both the host and guest to make use of multiple submission and completion queues and to scale better across multiple CPU cores.

Lastly, work to extend bhyve to support the virtualization features of the newer ARMv8 CPU architecture will allow FreeBSD to host VMs on architectures beyond x86.

bhyve continues to gain new features and functionality, providing a high-performance platform for virtualizing a wide array of workloads on top of FreeBSD.

