OK, so maybe I'm not a complete noob, but a 20+ year hiatus means I definitely don't have any claim to a veteran card - I'll accept the noob label for now. I'm not writing this as a how-to for anything in particular - it's more intended as a list of gotchas and/or points of confusion I encounter during my cutover from a Windows-based homelab to FreeBSD in the closet + Linux clients. Maybe it'll help someone. Maybe not...
Starting point:
- Supermicro X11DPH-I w/ 128GB RAM & 10 enterprise NVMe drives: 6x 1.92TB and 4x 960GB.
- Windows Server 2019 file server, also running Hyper-V
- Hyper-V guests include a Windows Server 2016 DC and an Exchange Server 2013 machine, along with a couple of useless Windows VMs that are only good for testing purposes
- enough spare parts kicking around that I can cobble together another couple of machines, at need
- Supermicro-based server will gain an LSI SAS HBA (8i) and 8x 14TB HDDs
- FreeBSD will run on this hardware
- the new giant HDD pool (I'm thinking one big pool of raidz3) will allow me to re-rip all my Blu-ray media without compression or scaling compromises and store it in native format
- VMs will be retired, eventually, in favor of a set of jails running the services needed for a small OpenLDAP + Kerberos auth system, a DNS server, and Jellyfin.
- Maybe a small family mail server to replace the Exchange functionality I have, too, if I'm bored and feeling like a new adventure...
- A backup/transition server has been assembled using an Intel i7-7700 and Z270 motherboard, along with 1x 500GB NVMe drive and 4x 120GB Samsung 850 EVO SSDs.
- Long term, this will run secondary DNS, LDAP, and Kerberos jails...
- Short term, I'm getting the Windows VMs up and running on bhyve on this machine
Phase I: migrate Windows VMs so I can tear down and rebuild
- I installed FreeBSD on the backup machine using the single NVMe drive (only one M.2 slot on my MSI Z270-A Pro) - no redundancy is a risk for zroot, but I blew my budget on 8x 14TB HDDs. It'll have to do.
- I created a zpool using the 4 SATA SSDs, configured as a striped pair of mirrors - total reported storage is roughly 220GB, which should be sufficient for VM hosting.
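In zpool terms that's a stripe of two mirrors. A sketch, with ada0 through ada3 as assumed device names (geom disk list will show the real ones):

```
# two mirrored pairs, striped together: ~240GB raw, ~220GB reported
zpool create pool mirror ada0 ada1 mirror ada2 ada3
zpool list pool
```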
- I created a dataset called "pool/bhyve" to hold all the VMs under a single snapshottable umbrella.
- Problem #1: vm-bhyve couldn't find the mount point for pool/bhyve when I ran vm init.
- Noob issue, for sure... by destroying the pool and recreating it with canmount=off and mountpoint=none, and then creating the dataset pool/bhyve with an explicit mountpoint=/bhyve, the problem was resolved.
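Here's roughly what the fix amounted to, reusing the assumed device names from above (vm_dir="zfs:..." is how vm-bhyve gets pointed at a dataset):

```
# recreate the pool with nothing mounted at the root...
zpool destroy pool
zpool create -O canmount=off -O mountpoint=none pool mirror ada0 ada1 mirror ada2 ada3
# ...give the VM dataset an explicit mountpoint...
zfs create -o mountpoint=/bhyve pool/bhyve
# ...then point vm-bhyve at it and initialize
sysrc vm_enable="YES"
sysrc vm_dir="zfs:pool/bhyve"
vm init
```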
- Since I wasn't sure I'd have RDP access to ported VMs on first boot, I decided I needed a local VNC viewer. To do this, I needed a window manager. Which display server to use? Xorg is older than I am (actually no, that's a lie, but it's old). I thought Wayland might be fun to try.
- I followed the FreeBSD handbook for installing Wayland and Wayfire, along with the forum advice to make sure dbus_enable="YES" exists in /etc/rc.conf (the package steps are sketched below).
- Problem #2: don't overthink things. Messing around with the Wayfire config file was a bad idea. Merely uncommenting the [output] block caused Wayfire to launch with a background and nothing else. The handbook makes it look like you need this in wayfire.ini, but that's wrong.
- After a bit of grief, I decided to start Wayfire using an exact replica of wayfire.ini as it exists in /usr/local/share/examples/wayfire. NO changes made. It works!
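For reference, the install boils down to a handful of packages plus two rc.conf knobs. This is a sketch from memory rather than a transcript (seatd handles seat/session management in FreeBSD's Wayland stack, as I understand it; verify package names against the handbook):

```
pkg install wayland seatd wayfire
sysrc seatd_enable="YES"
sysrc dbus_enable="YES"
service seatd start
service dbus start
```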
- Problem #3: my USB mouse wasn't working... no button action, scrolling, or mouse pointer movement
- Some more digging, and I stumbled on the solution: delete any reference to moused from /etc/rc.conf. After doing this, the mouse works perfectly.
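If anyone else hits this, checking for and removing the entry is quick (moused_enable is my assumption for the exact variable that was set; sysrc -x deletes a variable from rc.conf):

```
# list non-default rc.conf settings and look for moused
sysrc -a | grep -i moused
# remove the offending variable, then log out and back in (or reboot)
sysrc -x moused_enable
```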
- With Wayfire now working, I installed Firefox and TigerVNC.
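Both came straight from packages (tigervnc-viewer is the client-only package name as I recall it; pkg search tigervnc will confirm):

```
pkg install firefox tigervnc-viewer
```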
- FreeBSD now had storage ready to go for VMs and a working window manager, so it was time to try moving over a VM.
- I decided to use vm-bhyve, since it seems pretty straightforward.
- I created a public bridge and attached my ethernet interface to it - 20+ years ago this would have been so much harder!
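For the curious, vm-bhyve reduces the whole thing to a couple of commands (em0 is an assumed NIC name; substitute whatever ifconfig shows):

```
# create a "public" virtual switch (a bridge under the hood) and attach the physical NIC
vm switch create public
vm switch add public em0
vm switch list
```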
- I created a copy of the Windows VM template and edited it for my needs (NVMe drive spec), making sure my template lined up with the how-to.
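The resulting guest config looked roughly like this (a sketch: the CPU/memory values are placeholders, though the option names are standard vm-bhyve ones; compare with the shipped templates in /usr/local/share/examples/vm-bhyve if something doesn't take):

```
loader="uefi"
cpu=2
memory=8G
network0_type="virtio-net"
network0_switch="public"
disk0_type="nvme"
disk0_name="disk0.img"
graphics="yes"
xhci_mouse="yes"
```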
- I installed the virtio-net drivers in one of my scratch VMs while it was still running on Hyper-V
- I shut it down, exported it, and copied the hard disk file over to the VM directory on my FreeBSD machine.
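The copy itself was plain file transfer; something like this (hostname and paths are placeholders):

```
scp scratchvm.vhdx me@freebsd-box:/bhyve/scratchvm/
```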
- Problem #4: Attempting to use qemu-img to convert from vhdx to raw failed - while the qemu docs indicate that vhdx is a supported file format, I found otherwise.
- The solution was to use the Hyper-V disk edit tool to convert the exported vhdx to vhd, and then copy the vhd to FreeBSD. qemu-img was able to successfully convert this to raw format.
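The working pipeline, roughly (filenames are placeholders; note that qemu-img's name for the VHD format is "vpc"):

```
# Hyper-V side: Edit Disk... -> Convert -> VHD (or PowerShell's Convert-VHD)
# FreeBSD side: convert the copied VHD to a raw image
qemu-img convert -f vpc -O raw scratchvm.vhd disk0.img
```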
- I deleted the default disk0.img file that vm create put in the dataset, and pointed the .conf file for the machine at the new raw file I'd just created, instead.
- Curiosity: while an ls -lsa command showed a reasonable file size for the .vhd file that was copied over, the raw file had an obnoxiously large file size.
- This caused a brief moment of panic - did I just use my entire SSD pool for a single file? No... zfs list showed that the dataset referred to a reasonable ~18GB of data. To check this I deleted the raw file, and the Refer column went down to ~9GB. Re-create the raw file, and it goes up to ~18GB. The ls command still shows a 128GB file size. WTH?
- Michael Lucas' books on ZFS warned about this, but I forgot... standard file/directory management tools can give wonky results on ZFS! I chose to believe zfs list's Refer column and moved on...
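For what it's worth, I believe this is ordinary sparse-file behavior rather than ZFS lying: ls reports the apparent size (the full 128GB virtual disk), while ZFS only allocates blocks that have actually been written. Comparing the numbers side by side makes it obvious (disk0.img is the assumed filename):

```
ls -ls disk0.img       # size field shows the apparent (virtual) size; first column shows blocks allocated
du -h disk0.img        # allocated size on disk
zfs list pool/bhyve    # USED/REFER for the dataset as a whole
```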
- OK, all set... I used vm start to fire up the machine, and watched the 4 piddly cores on my i7-7700 get some load for the first time in many years.
- After a minute or so I connected using TigerVNC, and it works perfectly! I'd venture to say that performance of the VM actually feels snappier running on bhyve than it ever did on Hyper-V on the Supermicro machine (dual Xeon 8156s, to be replaced with eBay-special dual Xeon 8260s when I rebuild).
- Pre-installing the virtio-net drivers was inspired (imho) - all I had to do was go into the network config and give the machine back its static IP, and everything works as it should.
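For completeness, the start-and-connect sequence (scratchvm is a placeholder VM name; vm-bhyve picks the VNC port, so trust the vm list output over my 5900 guess):

```
vm start scratchvm
vm list                     # the VNC column shows listen address:port once the guest is up
vncviewer 127.0.0.1:5900    # TigerVNC client; use the port vm list reports
```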
- That was surprisingly easy, all things considered.
- I migrated a second VM just because I didn't want to go to bed yet...
- Windows shares will be disabled during the rebuild, but that's ok because Windows clients are being replaced, too. I have enough external HDD storage to temporarily store everything important until I have NFS sharing up and running.
Next up:
- waiting on a new case to show up that'll fit all my new HDDs...
- rebuild clients with Linux variants
- decommission Windows VMs
- add jailed backup services to the i7-7700 machine (DNS, LDAP, Kerberos)