bhyve: How to shrink a Windows VM disk image?

Recently I migrated my Windows Server Essentials installation to a bhyve .img disk and it works great, but the image is far too big and I need to shrink it down. I'm not sure how, except that if I could attach a second blank disk to the VM, I could run Macrium Reflect and image the installation onto a smaller disk, then swap the new disk in as disk0.

Would that work, and if so, how do I create a blank disk and reference it in the conf file so it gets attached?

Thanks
 
Just wanted to update on what I'm trying to do. Last night I was able to attach a blank .img disk to the Windows Server VM and run Macrium's clone tool against it to downsize the installation.

Macrium reported success, though the next morning Windows had flagged the disk with a red exclamation mark!

I haven't been able to test whether it works or not, as I had to go to work, but I'll test it tonight.

I may run gpart recover to fix the disk if there are any geometry issues.

Will update this thread about my results later tonight.

Thanks.
 
In theory: Windows can shrink the filesystem and then the partition, so if the large partition is the last one, you can shrink it, then:

1. shut down the VM
2. truncate the image file to the new size
3. mdconfig the image file
4. repair the backup GPT
5. un-mdconfig
6. boot the VM

If your VM disk backing store is ZFS, make a snapshot first.
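A sketch of those steps as commands (paths, sizes, and the md unit number are assumptions; the truncate size must stay larger than the end of the last partition):

```shell
# optional but recommended when the image sits on ZFS: snapshot first (dataset name assumed)
zfs snapshot zroot/bhyve/winserver@pre-shrink

# with the VM shut down, shrink the image file
truncate -s 620G /zroot/bhyve/winserver/disk0.img

# attach it as a memory disk; mdconfig prints the device name, e.g. md0
mdconfig -a -t vnode -f /zroot/bhyve/winserver/disk0.img

# the backup GPT now sits at the wrong offset; let gpart rewrite it at the new end
gpart recover md0

# detach and boot the VM again
mdconfig -d -u md0
```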
 
Alternative suggestion, also assuming ZFS is used: why not create a "sparse" zvol of the same size? Copy over the contents (dd), boot the VM from the new zvol, then issue a forced TRIM from inside; this should shrink the zvol to the size actually occupied.
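A rough sketch of how that could look from the host side (pool/dataset names are placeholders, and the retrim command shown runs inside the Windows guest):

```shell
# create a sparse (-s) zvol with the same nominal size as the image (names assumed)
zfs create -s -V 1T zroot/bhyve/winserver-vol

# copy the image over; conv=sparse skips blocks of zeros so the zvol stays thin
dd if=/zroot/bhyve/winserver/disk0.img of=/dev/zvol/zroot/bhyve/winserver-vol bs=1m conv=sparse

# boot the VM from the zvol, then force a retrim from inside Windows, e.g. in PowerShell:
#   Optimize-Volume -DriveLetter C -ReTrim -Verbose
```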
 
I don't think ZFS was used for the disks themselves. Still learning, but my setup uses a RAID-Z1 array with a /Zpool/Bhyve folder holding the VM image folders, each with its own disk0.img and conf file.

I also looked at the partition map and did not find any partitions following the Windows partition, at least in Disk Management. I'll have to run diskpart in a command window to see more details.

Windows' shrink function wouldn't shrink it all the way down, unlike Macrium, which shrank it even further. But if I were to use Windows Disk Management to shrink it to, say, 600 GB, would I have to note the new size and run truncate with a slightly larger value?

Thanks.
 
Windows' shrink function wouldn't shrink it all the way down, unlike Macrium, which shrank it even further. But if I were to use Windows Disk Management to shrink it to, say, 600 GB, would I have to note the new size and run truncate with a slightly larger value?
Yes, something like that.
You can use mdconfig -t vnode -f /path/disk0.img and then inspect the resulting /dev/md0 with gpart show md0.
 
my setup uses a RAID-Z1 array with a /Zpool/Bhyve folder holding the VM image folders, each with its own disk0.img
Then, independent of this question, I'd really recommend using a zvol instead of an image file. That should perform a bit better, as ZFS knows the dataset is intended to be used as a block device.

A "sparse" zvol gives you the additional benefit that it will only occupy as much space on your pool as needed for its contents (most likely trading in a bit of performance). Of course, the host can only know about blocks no longer used if the guest system actually uses TRIM, but that's pretty much the standard nowadays (without it, the zvol would never shrink). I personally use sparse zvols for all my bhyve vms. You don't really have to worry much, just set a sane "upper bound" for the size.
 
Then, independent of this question, I'd really recommend using a zvol instead of an image file. That should perform a bit better, as ZFS knows the dataset is intended to be used as a block device.

A "sparse" zvol gives you the additional benefit that it will only occupy as much space on your pool as needed for its contents (most likely trading in a bit of performance). Of course, the host can only know about blocks no longer used if the guest system actually uses TRIM, but that's pretty much the standard nowadays (without it, the zvol would never shrink). I personally use sparse zvols for all my bhyve vms. You don't really have to worry much, just set a sane "upper bound" for the size.

So zvols perform better, you say. No TRIM for me, running off four WD Red Pro 7200 rpm drives. Is it difficult to convert a VM to a zvol? I'd have to research how, as I have no idea, but I enjoy the challenge. In any case, my Macrium method worked and I was able to boot the VM at a quarter of its previous size.

In any case, I'll get started on moving to a zvol. I want to back up my work first, though. I was also wondering: what's the best way to back up a VM? I just copied the whole folder to another location on the drive.

Thanks
 
So zvols perform better, you say.
At least in theory, you avoid the "file" layer.
No TRIM for me, running off four WD Red Pro 7200 rpm drives.
This has nothing to do with the drives; of course you can't use TRIM on them themselves. But you can use TRIM in your guest: the virtio-blk, ahci-hd and nvme emulations of bhyve support TRIM. When the backend is a sparse zvol, this is used to actually shrink the zvol when blocks are marked unused.
Is it difficult to convert your VM to a zvol?
I didn't try it. But if I understood correctly, you currently use a "plain" image file? Then just creating a zvol of the same size and using dd to copy the data over should work....
 
So right now I have a ZFS-formatted /ZRoot/Bhyve folder; this is the default. Creating a sparse zvol under that Bhyve dataset should not affect my other VMs?

Still researching how.

Thanks
 
Used truncate to create an empty disk image, and after attaching it to the VM I ran Macrium to clone onto it, which worked well. Thanks.
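For anyone searching later, the blank-disk part is a one-liner; the path and size below are just placeholders:

```shell
# create a sparse 300 GiB image; no blocks are allocated until something writes to it
truncate -s 300G /tmp/disk1.img

# size reported vs. space actually allocated on disk
ls -lh /tmp/disk1.img
du -h /tmp/disk1.img
```

The new image then just needs its own disk1 entry in the VM's conf file before Macrium can see it as a second drive.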
 
So I created a sparse ZFS zvol, which shows up as a device at /dev/zvol/zroot/bhyve/winsparse, as I called it.
Then I used gpart to create a GPT scheme on it, and ran: dd if=disk1.img of=/dev/zvol/zroot/bhyve/winsparse bs=1m

I set up the conf for virtio-blk mode with the custom device type and referenced the path of the zvol device.

Upon booting, Windows complains with a blue screen: INACCESSIBLE BOOT DEVICE.

Could it be crashing because I was previously running in nvme mode and did not have the virtio-blk driver installed?

Thanks

Edit: I destroyed the partitioning with gpart and recreated it. This time I attached the sparse zvol as a second disk to the VM, which required installing the virtio-blk storage driver for the system to see the drive. Then I used Macrium again, and once more, after reconfiguring the VM conf to boot from the sparse zvol, I get another INACCESSIBLE BOOT DEVICE!

How do I fix this??!

Thanks
 
Got it working. The trick is not to run gpart to create a GPT on the zvol first; just write to the volume directly after creating it with zfs. Also benchmarked it: nvme mode is 30-40 percent faster than virtio-blk mode. So much for that..
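For reference, a vm-bhyve style conf entry pointing at the zvol might look like this (assuming vm-bhyve is being used; names and paths are placeholders):

```shell
disk0_type="nvme"
disk0_dev="custom"
disk0_name="/dev/zvol/zroot/bhyve/winsparse"
```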
 
Found after benchmarking the nvme emulation against virtio-blk that nvme mode was about 30% faster overall, with write speeds 4x faster in nvme mode.

Not sure why.
 
Found after benchmarking the nvme emulation against virtio-blk that nvme mode was about 30% faster overall, with write speeds 4x faster in nvme mode.
Yes, found that as well meanwhile. Might just switch to nvme emulation then ...

Not sure why.
Probably the virtio-blk implementation isn't ideal 🤷‍♂️ In theory, not having to emulate real hardware should be the option that performs best.

Even worse than that, it seems benchmarks also show that plain image files perform better than zvols, another thing that should be the other way around. I'll still keep zvols because I really want that usage of (guest) TRIM to shrink sparse zvols....
 