random Bhyve rant

I post this rant here only as a more convenient form than using Twitter for it, as @bhyve_dev asked me to do so.

The rant (part1): https://mobile.twitter.com/vermaden/status/893093952643051522
BHYVE setup with various networking other than LAN is a PITA, using PF to NAT to wlan0 (WiFi) or tun0 (WWAN).

The rant (part2): https://mobile.twitter.com/vermaden/status/893559439365689344
About 'desktop' bhyve - non-UEFI boot of anything is also a PITA ... and UEFI booting is often broken for most things. Windows works thru ...

Now specifics ...

The FreeBSD Handbook (as good as it really is) does not cover how to connect bhyve VMs to the world when you use a WLAN (WiFi) or WWAN (3G/4G) connection, it only covers LAN:
https://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/virtualization-host-bhyve.html

Making networking for VMs work with bhyve over WLAN or WWAN requires using PF with NAT over those interfaces, and reloading these rules every time you change the connection ... I managed to get that working but it's a real PITA ...
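For the record, this is roughly the kind of setup I ended up with (a minimal sketch only; the bridge/tap names and the 10.0.0.0/24 addressing are just examples I picked):
Code:
# host side - tap0 is what the VM attaches to via -s X,virtio-net,tap0
ifconfig bridge0 create
ifconfig tap0 create
ifconfig bridge0 inet 10.0.0.1/24 addm tap0 up
sysctl net.inet.ip.forwarding=1

# /etc/pf.conf - NAT the VM network out of whatever uplink is active
ext_if = "wlan0"   # or "tun0" on WWAN - and this is exactly what has to
                   # be edited and reloaded every time the connection changes
vm_net = "10.0.0.0/24"
nat on $ext_if inet from $vm_net to any -> ($ext_if)
pass all

# then: pfctl -f /etc/pf.conf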

I would definitely prefer the VirtualBox approach, where I only have to put vboxnet_enable=YES into /etc/rc.conf and all 'NAT' cases are covered with this, on LAN, on WLAN and on WWAN. Some may say 'just use VirtualBox'; I would like to, but it's not stable on FreeBSD ...

The other 'feature' that keeps me away from using bhyve after trying it is its 'setup' for each system type. Bhyve would be great if I could just fire a command like in VirtualBox or QEMU, with options for how many cores, memory, disks and an ISO/CD-ROM. But with bhyve I need to 'mess' with some GRUB fork just to boot a Linux live CD, or use 'special' UEFI firmware just to install Windows, and only in a 'jerky' VNC window ... it gets better once Windows is installed and RDP can be used, but that is not the case for Illumos/Solaris/Linux systems ... Why not take the QEMU/KVM approach and just fire up a screen using SDL? Why only this 'jerky mouse' VNC? VNC may be great for 'cloud-like' setups where you can use VNC over the browser directly, and that is good, but for 'local' and 'install' purposes it's a real PITA.
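To illustrate what I mean, compare a typical QEMU one-liner with what a UEFI guest needs under bhyve (the bhyve line is only a sketch from memory; slot numbers and the firmware path from the uefi-edk2-bhyve port may differ on your system):
Code:
# QEMU: one command, a graphical window pops up, any ISO boots
qemu-system-x86_64 -smp 2 -m 2048 -hda disk.img -cdrom install.iso

# bhyve: UEFI firmware from a port plus a VNC framebuffer device
bhyve -c 2 -m 2048M -H -w \
  -s 0,hostbridge -s 1,lpc -l com1,stdio \
  -l bootrom,/usr/local/share/uefi-firmware/BHYVE_UEFI.fd \
  -s 3,ahci-cd,install.iso -s 4,virtio-blk,disk.img \
  -s 29,fbuf,tcp=127.0.0.1:5900,w=1024,h=768,wait \
  vm0
# ... and then connect a VNC client to 127.0.0.1:5900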

Regards,
vermaden
 
In all fairness bhyve is very new and isn't really positioned as a desktop virtualisation solution (at least at the moment, and if the devs want to try and support that sort of use there's a lot of work still to do...). VGA support has only recently appeared, and that's basically because it was the quickest way to allow users some way of getting to the graphical console so that stuff like Windows can be installed without jumping through hoops.

Also bhyve by itself is basically just a virtualisation engine, handling the raw hypervisor functionality. As the code matures hopefully we'll get better tools that run above bhyve to manage networking, etc. At the moment it's like complaining that playing and managing music is difficult when you're trying to just use mpg123 on the command line.
 
Yes, bhyve is young and I do not write these things as a demand like 'gimme that because others have it'; I say it to point out problems and inconveniences of the current bhyve state that need addressing in the future.

Also, while VirtualBox is definitely an 'old and mature' product, it still lacks full stability on FreeBSD and occasionally hangs.
 
Bhyve developers should also 'learn' from the OpenBSD folks, because while VMM in OpenBSD is about two years (?) younger it already has live migration support in its current very early state, and even in the 'ZFS naming scheme' of send and recv over SSH, which I admire very much. It's also a pity that two BSD-licensed operating systems create two separate BSD-licensed projects instead of joining forces for one standardized solution, but we know the world isn't perfect, so people will argue about many things and do them their own way.
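As far as I understand the vmd work, the usage is supposed to look more or less like this (a sketch only; it is very early code and not in an OpenBSD release yet, so treat the exact syntax as an assumption on my part):
Code:
# send a VM to another host, zfs send/recv style
vmctl send myvm | ssh otherhost vmctl receive myvm
# compare the ZFS idiom it mimics:
# zfs send tank/vm@snap | ssh otherhost zfs recv tank/vm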
 
Firstly; I'm all for criticism, no matter how direct, opinionated, or even plain wrong. At the end of the day, this is an open-source project and the success or failure of it is dictated by users wanting to help make it better. If you feel that bhyve doesn't meet your needs, there are plenty of fine alternatives.

To address your points -

>and UEFI booting is often broken for most things.

You didn't get into specifics on this one but I'll take a stab at it: some UEFI guests use non-standard boot paths (Ubuntu) and bhyve doesn't currently save the UEFI non-volatile vars recording this fact. There is a fix in progress to store the non-volatile vars.
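Until the non-volatile vars work lands, a common interim workaround (assuming the guest mounts its ESP at /boot/efi) is to copy the loader to the removable-media fallback path, which the firmware does look at on every boot:
Code:
# inside the Ubuntu guest, after installation
sudo mkdir -p /boot/efi/EFI/BOOT
sudo cp /boot/efi/EFI/ubuntu/grubx64.efi /boot/efi/EFI/BOOT/BOOTX64.EFI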

Please let me know what the list of "most things" is, and I will make sure they are fixed by the above work.

>Why only this 'jerky mouse' VNC?

Try VNC to any hypervisor implementation using a PS2 mouse (which reports relative coordinates) and you will see the same thing - it isn't limited to bhyve.

If a guest supports the XHCI tablet (reporting absolute coordinates), everything works fine.
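For reference, the tablet is just another emulated device on the bhyve command line (the slot number here is arbitrary):
Code:
# absolute-coordinate pointer; needs XHCI tablet support in the guest
bhyve ... -s 30,xhci,tablet ... vmname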

>Why not take QEMU/KVM approach and just fire screen using SDL?

SDL isn't in the base system. If someone wants to go to the effort of putting in an X front-end, all power to them - the underlying mechanics are available in bhyve.

>'special' UEFI firmware

All UEFI firmware is 'special'. KVM/Qemu and VirtualBox also have their own 'special' UEFI firmware.
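For reference, the firmware is simply handed to bhyve via the bootrom option; the two builds from the uefi-edk2-bhyve ports install to paths along these lines:
Code:
-l bootrom,/usr/local/share/uefi-firmware/BHYVE_UEFI.fd       # plain UEFI build
-l bootrom,/usr/local/share/uefi-firmware/BHYVE_UEFI_CSM.fd   # build with CSM (BIOS) support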

What bhyve doesn't quite have is a single version of UEFI that supports both BIOS boot with graphics, and also UEFI boot. The port has 2 separate builds, and they are yet to be integrated. I believe that once that work is done

>Bhyve developers should also 'learn' from OpenBSD folks because while VMM in OpenBSD is
>about two years (?) younger it already has live migration support

It doesn't support live migration - only static save/restore, and not in a release. There has been a bhyve project for save/restore that just requires integration, and it will be modified to support live migration.

Peter.
 
Yet more haste fixup:

>The port has 2 separate builds, and they are yet to be integrated. I believe that once that work is done

... once that work is done, you will have a Qemu-style user-experience where BIOS-based guests will fall back from EFI boot and come up in VGA text mode.

Peter.
 
Bhyve developers should also 'learn' from the OpenBSD folks, because while VMM in OpenBSD is about two years (?) younger it already has live migration support in its current very early state, and even in the 'ZFS naming scheme' of send and recv over SSH, which I admire very much. It's also a pity that two BSD-licensed operating systems create two separate BSD-licensed projects instead of joining forces for one standardized solution, but we know the world isn't perfect, so people will argue about many things and do them their own way.

I upvoted your original post as it is refreshing to see an opinion from a veteran open source guy like you. Unfortunately, this forum is full of noise. As somebody who has been familiar with the BSD ecosystem for as long as you, if not longer, I feel that I need to challenge some of your claims. I make no claim that mine are free of OpenBSD bias.

The BSD operating systems are at this point so far apart, with such different group chemistries, that even something as simple as cross-OS bug fixing is a challenge. Ilja van Sprundel gave a wonderful presentation at the recent DEF CON which speaks volumes:

https://media.defcon.org/DEF CON 25/DEF CON 25 presentations/DEFCON-25-Ilja-van-Sprundel-BSD-Kern-Vulns.pdf

Expecting that the FreeBSD and OpenBSD guys will work on a common thing is a bit naive. Frankly, I feel that both VMM and bhyve are ill-conceived projects.


As an avid OpenBSD user I felt that VMM, which is the younger of the two, is making my beloved OS unnecessarily complicated and cumbersome with minimal benefit. As a user I always felt that I would benefit far more from having sysjails than from full-blown virtualization. Kristaps Johnson taught us that Jails are not safe (it was later discovered that sysjails were not either, so they got killed)

http://www.nycbsdcon.org/2006/speakers.html#Johnson

although convenient, as I can attest as a consumer of FreeBSD jails. They suffer from the same network problems as bhyve in more realistic deployment scenarios:

https://savagedlight.me/2014/03/07/freebsd-jail-host-with-multiple-local-networks/

I still feel that there is some hope for an OpenBSD jail-like system, as we can read in BIND Broker by tedu:

https://www.tedunangst.com/flak/post/bind-broker

VMM is a reality whether I like it or not. I tried it and it feels very much Xen Dom0-like. For me that is a good thing. Xen Dom0 (Alpine Linux) is my favourite hypervisor. I think that one of the developers' motivations was that Qemu, even without kernel acceleration, is moving in a Linux-only direction.

I am very familiar with VirtualBox and KVM. VirtualBox is desktop virtualization. KVM is a more classical type 2 hypervisor. I would not run a server in VirtualBox, but I concur that it is very useful for a web developer who must test his product on multiple OSes and browsers. VirtualBox and Xen are as far apart as it gets, so VMM is not really useful for somebody who needs VirtualBox. FreeBSD is not an officially supported host for VirtualBox and my personal experience confirms that. I would not run VirtualBox on FreeBSD.

KVM is OK for server deployment but lacks hot migration compared to Xen, and even more so things like block device provisioning, where you can pass not just HDDs but also other things like GPU computing cards directly to a Xen guest. I think that Red Hat now requires a subscription for KVM Windows guests (please see the 7.4 release announcement below), which means that Xen will soon be my only option for running Windows Server as a virtual guest.


Unlike OpenBSD, I feel that the FreeBSD project has bet its entire future on a super cumbersome, patent-encumbered file system, ZFS, which required reimplementation of a large part of the Solaris kernel. FreeBSD is massively larger than OpenBSD, with many unfinished things (see my rant in the thread about what I would like to see done differently on FreeBSD). Things actually worked out for FreeBSD and I must admit that I am a heavy ZFS user, and most large-data people I know (I know quite a few) swear by Free (both BSD and NAS). FreeBSD's decision is further vindicated by the recent Red Hat admission that BTRFS is vaporware:

https://access.redhat.com/documenta...4_Release_Notes-Deprecated_Functionality.html

That officially confirms what many of us have known for quite some time now: that Linux has no modern file system (although the early-90s SGI creation XFS, with both hardware and software RAID, is super stable). That leaves Solaris, FreeBSD, and DragonFlyBSD as the only three legit storage OSes. Oracle can of course always pull the plug on FreeBSD, and DragonFly is minuscule (so much for your hoped-for cooperation between Open and Free, as FreeBSD kicked out one of its most charismatic developers, which was recently repeated with John Marino).

Why am I talking so much about ZFS when the topic is bhyve? Because, just like Jails, bhyve is infinitely more useful combined with ZFS underneath, even with all the network limitations you pointed out. Personally I have not given bhyve a try, as I am experimenting with various DomU options on Alpine Linux. As averse as Linux is to third-party kernel modules, ZFS kernel modules do exist for Linux, and Alpine Linux does support DomU installation on top of a ZFS pool. That seems to be a winner for me.
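To give an idea of what that looks like, here is a rough sketch of a zvol-backed DomU on such a dom0 (the pool, names and sizes are made up, and the guest boot method - PV kernel or HVM firmware - is omitted):
Code:
# on the Alpine dom0: carve a block device out of the pool
zfs create -V 20G tank/vm/guest0

# /etc/xen/guest0.cfg (fragment)
name   = "guest0"
memory = 2048
vcpus  = 2
disk   = ['phy:/dev/zvol/tank/vm/guest0,xvda,w']
vif    = ['bridge=br0']

# xl create /etc/xen/guest0.cfg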

Also, speaking from my extensive experience with Jails: Jails by themselves, even combined with ZFS, are not really practically useful without a tool like sysutils/iocell, which on the other hand is maintained outside of FreeBSD proper (in the ports tree) by a single developer.
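For those who have not used it, a typical iocell session looks roughly like this (written from memory, so take the exact property names as an approximation):
Code:
iocell fetch                                           # grab the base release
iocell create tag=www ip4_addr="em0|192.168.1.50/24" boot=on
iocell start www
iocell list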

In retrospect I think that all the BSDs were way too late to the virtualization party. FreeBSD was too late in part due to the interesting Jail concept, so much championed by Solaris Zones and poorly imitated by Linux containers (Docker is another laughable "breakthrough" of the Linux community). Maybe only NetBSD got it right by porting the mature Xen technology instead of developing its own hypervisor, but due to the current sorry state of that BSD (the headline for the incoming 8.0 release is support for USB 3.0) I am not sure how well maintained Xen on NetBSD is. One thing is for sure: I would not use NetBSD in production for anything at this time, when the future of the project is so uncertain.
 
>and UEFI booting is often broken for most things.

You didn't get into specifics on this one but I'll take a stab at it: some UEFI guests use non-standard boot paths (Ubuntu) and bhyve doesn't currently save the UEFI non-volatile vars recording this fact. There is a fix in progress to store the non-volatile vars.

Please let me know what the list of "most things" is, and I will make sure they are fixed by the above work.

I compare bhyve to KVM/QEMU and VirtualBox here. With them you just point at any ISO (QNX/Linux/Windows/ReactOS/...), you start the process with several cores and some memory, and it just starts to boot that other OS, with a graphical screen, without the need for VNC; you can try it, install it, or just close the window and kill that VM. With bhyve the graphical console is only available for UEFI, so even if you load a quite new system like Ubuntu Linux, after install it fails to boot; you need to mess with grub-bhyve or other things, and without UEFI there is no graphical console. It's just a PITA.
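For comparison, this is the extra dance a non-UEFI Linux guest needs (a sketch based on the Handbook-style invocation; the paths are made up):
Code:
# device.map for grub-bhyve
(hd0) /vm/ubuntu/disk.img
(cd0) /vm/ubuntu/ubuntu.iso

# load the guest kernel first ...
grub-bhyve -m device.map -r cd0 -M 1024M ubuntu
# ... then run the machine itself, on a serial console only
bhyve -c 2 -m 1024M -H -A -P \
  -s 0,hostbridge -s 1,lpc -l com1,stdio \
  -s 2,virtio-net,tap0 -s 3,virtio-blk,/vm/ubuntu/disk.img \
  -s 4,ahci-cd,/vm/ubuntu/ubuntu.iso \
  ubuntu
bhyvectl --destroy --vm=ubuntu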



>Why only this 'jerky mouse' VNC?

Try VNC to any hypervisor implementation using a PS2 mouse (which reports relative coordinates) and you will see the same thing - it isn't limited to bhyve.

If a guest supports the XHCI tablet (reporting absolute coordinates), everything works fine.

I doubt that the XHCI tablet would work on QNX or Windows XP or even OpenIndiana (Illumos) ... so it is still a problem.

Also, has anyone had success with Windows XP on bhyve?


>Why not take QEMU/KVM approach and just fire screen using SDL?

SDL isn't in the base system. If someone wants to go to the effort of putting in an X front-end, all power to them - the underlying mechanics are available in bhyve.

These are also not in the base system and that does not prevent you from using them ...
Code:
bhyve-firmware-1.0             Collection of Firmware for bhyve
uefi-edk2-bhyve-20160704_1     UEFI-EDK2 firmware for bhyve
uefi-edk2-bhyve-csm-20160704_1 UEFI-EDK2 firmware for bhyve with CSM

I do not insist on SDL, it can be something entirely new or something different; it's just an idea to have the possibility of a graphical window for a VM without 'jerky' VNC (not all guests will support XHCI).



>Bhyve developers should also 'learn' from OpenBSD folks because while VMM in OpenBSD is
>about two years (?) younger it already has live migration support

It doesn't support live migration - only static save/restore, and not in a release. There has been a bhyve project for save/restore that just requires integration, and it will be modified to support live migration.

Peter.
Let's call it Dead Migration then, but it is still a first step in migrating a VM from one host to another.
 
I upvoted your original post as it is refreshing to see an opinion from a veteran open source guy like you. Unfortunately, this forum is full of noise. As somebody who has been familiar with the BSD ecosystem for as long as you, if not longer, I feel that I need to challenge some of your claims. I make no claim that mine are free of OpenBSD bias.
I remember when I first started to use the (now non-existent) bsdforums.org, I also made a lot of noise ;)

The BSD operating systems are at this point so far apart, with such different group chemistries, that even something as simple as cross-OS bug fixing is a challenge. Ilja van Sprundel gave a wonderful presentation at the recent DEF CON which speaks volumes:

https://media.defcon.org/DEF CON 25/DEF CON 25 presentations/DEFCON-25-Ilja-van-Sprundel-BSD-Kern-Vulns.pdf

Expecting that the FreeBSD and OpenBSD guys will work on a common thing is a bit naive. Frankly, I feel that both VMM and bhyve are ill-conceived projects.
At least it's not yet illegal to dream.

As an avid OpenBSD user I felt that VMM, which is the younger of the two, is making my beloved OS unnecessarily complicated and cumbersome with minimal benefit. As a user I always felt that I would benefit far more from having sysjails than from full-blown virtualization. Kristaps Johnson taught us that Jails are not safe (it was later discovered that sysjails were not either, so they got killed)

http://www.nycbsdcon.org/2006/speakers.html#Johnson

although convenient, as I can attest as a consumer of FreeBSD jails. They suffer from the same network problems as bhyve in more realistic deployment scenarios:

https://savagedlight.me/2014/03/07/freebsd-jail-host-with-multiple-local-networks/

I still feel that there is some hope for an OpenBSD jail-like system, as we can read in BIND Broker by tedu:

https://www.tedunangst.com/flak/post/bind-broker

VMM is a reality whether I like it or not. I tried it and it feels very much Xen Dom0-like. For me that is a good thing. Xen Dom0 (Alpine Linux) is my favourite hypervisor. I think that one of the developers' motivations was that Qemu, even without kernel acceleration, is moving in a Linux-only direction.

Yes, Jails are also great; the only thing I miss in them is 'live migration' to other FreeBSD hosts. This is where Solaris Zones shine; SmartOS (an Illumos distribution) also has a nice (free) Solaris Zones implementation with CPU Overbursting and other features, described here in real-world usage: http://containersummit.io/events/sf-2015/videos/wolf-of-what-containers-on-wall-street
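Today the best you can do with Jails is a 'dead migration' with plain ZFS, something like this (dataset names are just an example):
Code:
zfs snapshot -r zroot/jails/www@migrate
zfs send -R zroot/jails/www@migrate | ssh host2 zfs recv -u zroot/jails/www
# stop the jail, send a final incremental snapshot, start it on host2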




I am very familiar with VirtualBox and KVM. VirtualBox is desktop virtualization. KVM is a more classical type 2 hypervisor. I would not run a server in VirtualBox, but I concur that it is very useful for a web developer who must test his product on multiple OSes and browsers. VirtualBox and Xen are as far apart as it gets, so VMM is not really useful for somebody who needs VirtualBox. FreeBSD is not an officially supported host for VirtualBox and my personal experience confirms that. I would not run VirtualBox on FreeBSD.
I would use VirtualBox on a server only if it were rock stable; unfortunately, on FreeBSD it is not. KVM is not bad, especially in the oVirt solution (the open source upstream project for RHV - Red Hat Virtualization).

There is experimental Xen Dom0 support on FreeBSD, but it's VERY experimental. If you want to run Windows or Linux or any other OS on your desktop without rebooting, then VirtualBox on FreeBSD seems to be the least-PITA solution.

There is also another product based on Xen Dom0 - Oracle VM for x86 - and it's FREE (which is strange in the Oracle world). It offers a separate Oracle VM Manager and features like live migration, cloning, snapshots, using SAN networks, etc. Kind of an open source VMware ESXi in many aspects. It also has a great 'feature' for Oracle Databases: it is treated by Oracle as a HARD PARTITIONING solution, which helps you cut costs on Oracle Database licensing. On VMware you have to license all hosts under all vCenter managers ...





KVM is OK for server deployment but lacks hot migration compared to Xen, and even more so things like block device provisioning, where you can pass not just HDDs but also other things like GPU computing cards directly to a Xen guest. I think that Red Hat now requires a subscription for KVM Windows guests (please see the 7.4 release announcement below), which means that Xen will soon be my only option for running Windows Server as a virtual guest.
KVM does support live migration; for RHV there is an open source project called oVirt and it's totally free, another 'open source VMware ESXi' product. You can also use KVM in an OpenStack solution, but that also takes ages to jump into (as it is a big project).
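With plain libvirt on top of KVM it is a one-liner (the hostnames are hypothetical):
Code:
virsh migrate --live vm0 qemu+ssh://host2/system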


Unlike OpenBSD, I feel that the FreeBSD project has bet its entire future on a super cumbersome, patent-encumbered file system, ZFS, which required reimplementation of a large part of the Solaris kernel. FreeBSD is massively larger than OpenBSD, with many unfinished things (see my rant in the thread about what I would like to see done differently on FreeBSD). Things actually worked out for FreeBSD and I must admit that I am a heavy ZFS user, and most large-data people I know (I know quite a few) swear by Free (both BSD and NAS).

Yes, the FreeBSD project imports all possible 'useful' software that is available under a mostly compatible license, also DTrace, which likewise comes from Solaris, but it's not as heavily used as ZFS for sure.




FreeBSD's decision is further vindicated by the recent Red Hat admission that BTRFS is vaporware:

https://access.redhat.com/documenta...4_Release_Notes-Deprecated_Functionality.html

That officially confirms what many of us have known for quite some time now: that Linux has no modern file system (although the early-90s SGI creation XFS, with both hardware and software RAID, is super stable). That leaves Solaris, FreeBSD, and DragonFlyBSD as the only three legit storage OSes. Oracle can of course always pull the plug on FreeBSD, and DragonFly is minuscule (so much for your hoped-for cooperation between Open and Free, as FreeBSD kicked out one of its most charismatic developers, which was recently repeated with John Marino).

Yes, since the 'Poetteringation' of the Red Hat ecosystem, everything in Red Hat Linux seems to go the wrong way, the illogical way, the bad way, and that decision (to deprecate BTRFS) is also in that taste.

As I dug through the Internet for why Red Hat made such a decision, it was that 'they already had their internal XFS developers while they had NO BTRFS developers' and that 'BTRFS is changing too quickly to be usable in a release-once-a-year product'. They also mention ZFS in their docs, but the only thing that keeps them away from ZFS is its license - the CDDL. I cannot understand how much Linux zealots cherish that bullshit GPL license above technology; fortunately, in the BSD world the license is the least interesting thing, outlined in two or three sentences, and BSD people can focus on technology without all this licensing bullshit.

After abandoning BTRFS, Red Hat will be 'doing' the Stratis project, outlined here:
https://stratis-storage.github.io/StratisSoftwareDesign.pdf

In short, they will use device-mapper and LVM2 with XFS on top of that, and write some utilities in Python to create 'pool'-like things over this. It feels so wrong and useless to me, but that is how enterprise works: you need features, no matter what garbage is under the hood.
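The layering they describe is basically what you can already do by hand with LVM thin pools and XFS; Stratis is meant to script roughly this (device names are made up):
Code:
pvcreate /dev/sdb /dev/sdc
vgcreate data /dev/sdb /dev/sdc
lvcreate --type thin-pool -l 100%FREE -n pool0 data
lvcreate -V 500G --thinpool data/pool0 -n vol0
mkfs.xfs /dev/data/vol0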

I also keep my fingers crossed for HAMMER2, and I regret that the people in the FreeBSD project could not talk enough to continue working together without additional forks like DragonFly BSD. But that's life.



Why am I talking so much about ZFS when the topic is bhyve? Because, just like Jails, bhyve is infinitely more useful combined with ZFS underneath, even with all the network limitations you pointed out. Personally I have not given bhyve a try, as I am experimenting with various DomU options on Alpine Linux. As averse as Linux is to third-party kernel modules, ZFS kernel modules do exist for Linux, and Alpine Linux does support DomU installation on top of a ZFS pool. That seems to be a winner for me.

I once checked out Alpine Linux, and as much as I like the goals and features of this project, it's still Linux, which keeps me away from it. I sometimes use ZFS even on Linux systems that officially do not want to have ANYTHING in common with it (like Red Hat Linux) and it works flawlessly.



In retrospect I think that all the BSDs were way too late to the virtualization party. FreeBSD was too late in part due to the interesting Jail concept, so much championed by Solaris Zones and poorly imitated by Linux containers (Docker is another laughable "breakthrough" of the Linux community). Maybe only NetBSD got it right by porting the mature Xen technology instead of developing its own hypervisor, but due to the current sorry state of that BSD (the headline for the incoming 8.0 release is support for USB 3.0) I am not sure how well maintained Xen on NetBSD is. One thing is for sure: I would not use NetBSD in production for anything at this time, when the future of the project is so uncertain.
Bhyve and VMM could and will change that, as nothing newer than KVM or VirtualBox has been invented, so it's only a matter of time to catch up ;)

Regards,
vermaden
 
>These are also not in the base system and that does not prevent you from using them ...
>
>bhyve-firmware-1.0 Collection of Firmware for bhyve
>uefi-edk2-bhyve-20160704_1 UEFI-EDK2 firmware for bhyve
>uefi-edk2-bhyve-csm-20160704_1 UEFI-EDK2 firmware for bhyve with CSM

That's correct - an SDL front-end would be done the same way, as a port. bhyve provides the mechanisms for graphical i/o, so something would have to be written to use those.

Peter.
 