Compiling NanoBSD under bhyve

I want to build FreeBSD NanoBSD images in a bhyve VM.
Can anyone speak to the speed hit I will take when compiling in a FreeBSD-CURRENT guest on a FreeBSD 11.1 bhyve host?
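
For context, the build I mean is the stock NanoBSD script in the source tree; something like this, where myconf.nano is a placeholder config:
Code:
cd /usr/src/tools/tools/nanobsd
sh nanobsd.sh -c myconf.nano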

What is disk throughput like in a VM? How much speed is lost relative to the host's disk throughput?

I currently use a separate box for FreeBSD-CURRENT and want to eliminate that extra machine.

Most bhyve tutorials use an img file to boot the VM. Can I do this like a jail, where the host can work on raw files inside it?
I really don't want to work inside an image file; I'd prefer a typical installed file structure, just like a jail.
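
For what it's worth, a zvol-backed guest at least avoids managing an image file, even though the host still can't safely touch the guest's filesystem while the VM runs. A minimal sketch, assuming a ZFS pool named zroot and the stock vmrun.sh helper (the zvol and VM names are placeholders):
Code:
# create a 20G zvol to serve as the guest disk instead of a file-backed image
zfs create -V 20G zroot/currentvm
# vmrun.sh accepts a disk device the same way it accepts an image file
sh /usr/share/examples/bhyve/vmrun.sh -d /dev/zvol/zroot/currentvm currentvm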
 
Well, I will answer my own question. My NVMe runs faster under -CURRENT in a bhyve guest than it does on the host.
I am blown away. I was expecting half the host speed.

On the VM:
Code:
root@freebsd:~ # diskinfo -t /dev/vtbd0
/dev/vtbd0
        512             # sectorsize
        22548644864     # mediasize in bytes (21G)
        44040322        # mediasize in sectors
        32768           # stripesize
        0               # stripeoffset
                        # Disk descr.
        BHYVE-B5E5-5DEA-422F    # Disk ident.
        No              # TRIM/UNMAP support
        Unknown         # Rotation rate in RPM

Seek times:
        Full stroke:      250 iter in   0.044692 sec =    0.179 msec
        Half stroke:      250 iter in   0.041967 sec =    0.168 msec
        Quarter stroke:   500 iter in   0.075965 sec =    0.152 msec
        Short forward:    400 iter in   0.063788 sec =    0.159 msec
        Short backward:   400 iter in   0.067375 sec =    0.168 msec
        Seq outer:       2048 iter in   0.269278 sec =    0.131 msec
        Seq inner:       2048 iter in   0.066651 sec =    0.033 msec

Transfer rates:
        outside:       102400 kbytes in   0.052263 sec =  1959321 kbytes/sec
        middle:        102400 kbytes in   0.052917 sec =  1935106 kbytes/sec
        inside:        102400 kbytes in   0.055158 sec =  1856485 kbytes/sec

Same device on the host:
Code:
root@gigabyte:~ # diskinfo -t /dev/nvd0
/dev/nvd0
        512             # sectorsize
        512110190592    # mediasize in bytes (477G)
        1000215216      # mediasize in sectors
        0               # stripesize
        0               # stripeoffset
        569S105MT5ZV    # Disk ident.

Seek times:
        Full stroke:      250 iter in   0.035000 sec =    0.140 msec
        Half stroke:      250 iter in   0.041280 sec =    0.165 msec
        Quarter stroke:   500 iter in   0.052264 sec =    0.105 msec
        Short forward:    400 iter in   0.024977 sec =    0.062 msec
        Short backward:   400 iter in   0.033970 sec =    0.085 msec
        Seq outer:       2048 iter in   0.124744 sec =    0.061 msec
        Seq inner:       2048 iter in   0.125237 sec =    0.061 msec

Transfer rates:
        outside:       102400 kbytes in   0.147543 sec =   694035 kbytes/sec
        middle:        102400 kbytes in   0.078621 sec =  1302451 kbytes/sec
        inside:        102400 kbytes in   0.082174 sec =  1246136 kbytes/sec
 
This isn't quite a fair comparison: the block device in bhyve is backed by a file on a caching filesystem, so there is a lot more opportunity for readahead. Imagine an NVMe device with multiple GB of intelligent read/write cache :)

Try diskinfo on a zvol on the nvme device on the host: that will most likely be faster than bhyve.
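
A quick way to try that, assuming a pool named zroot on the NVMe device (the zvol name is just an example):
Code:
# create a throwaway zvol on the NVMe-backed pool
zfs create -V 20G zroot/disktest
# benchmark the raw zvol from the host
diskinfo -t /dev/zvol/zroot/disktest
# clean up
zfs destroy zroot/disktest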
 