bhyve VMs Running Very Slowly

Hi everybody,

I have been noticing an issue with my bhyve virtual machines: they are all running very slowly. The VMs in question run Ubuntu 14.04 LTS with the LAMP stack. When I load web pages from these servers in my browser, they are often very slow to load (not a network speed issue; I've ruled that out). SSH access is slow as well: when I ssh into user@server, it may take 30 seconds to a minute before I'm prompted for a password, and another 30 seconds to a minute after that before I'm actually logged in to the VM.

The VMs are all very slow, yet they show low CPU, memory, network, and I/O utilization. The bhyve host itself has plenty of free resources, and SSH access to it is fine, so I don't believe the host is the problem.

I rebooted my VMs the other day and noticed that the speed issues went away briefly, but then a couple of hours later the machines slowed to a crawl once again (slow web and SSH access). Do any of you have a clue as to what might be causing this? I suspect bhyve itself is the problem, but I'm open to any and all suggestions. Thank you all in advance; this is a very knowledgeable community!
 
Do the VMs have reasonable ping times when performance appears to be slow? Is there any activity from the VM during the SSH login (e.g. DNS lookups)?

It may be worth doing some monitoring from within the guest to see if anything is happening there.

Also, if there is a lot of activity on the systems when the guests are idle, it is possible that guest memory could be paged out. Have a look at system swap stats to see if this is happening.
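From the host, the swap picture can be checked with swapinfo. As a quick illustration, here is a small sketch that flags any swap device more than 10% full; the swapinfo output is hard-coded sample data for demonstration, and on a real FreeBSD host you would pipe in swapinfo -h instead:

```shell
# Sample swapinfo-style output, hard-coded for illustration only.
sample_output='Device 1K-blocks Used Avail Capacity
/dev/gpt/swap0 4194304 551M 3.5G 13%
Total 4194304 551M 3.5G 13%'

# Skip the header and the Total line; warn on any device > 10% full.
echo "$sample_output" | awk 'NR > 1 && $1 != "Total" {
    cap = $5
    sub(/%/, "", cap)                  # strip the % sign
    if (cap + 0 > 10)
        print $1 " is " $5 " full - guest memory may be paged out"
}'
```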
 

Thank you, Grehan. Ping times are fine and DNS lookups aren't an issue. Upon reboot the VMs run well, but after a while they slow down again. I'll have a look at the paging and swap stats to see what's going on, although the host system has 16GB of memory and less than 3GB in total has been allocated to the VMs combined.
 
I've checked the paging and swap stats, and I don't think the memory the VMs are using is being swapped out. Here's the output of swapinfo:

Code:
[user]@[machine]:[path] # swapinfo -h
Device          1K-blocks     Used    Avail Capacity
/dev/gpt/swap0    4194304     551M     3.5G    13%
/dev/gpt/swap1    4194304     550M     3.5G    13%
/dev/gpt/swap2    4194304     550M     3.5G    13%
/dev/gpt/swap3    4194304     550M     3.5G    13%
Total            16777216     2.1G      14G    13%

There are four 2TB disks in the machine.

The output of ps shows all the VMs in state D+:

Code:
[user]@[machine]:[path] # ps |grep bhyve
94468  3  D+     8:40.97 bhyve: [vm name] (bhyve)

There are four VMs in total running on this bhyve host at the moment.
 
Having 2.1G of swap used does indicate there is some memory pressure on your machine.

It's quite possible that guest VMs have been swapped out. Since guest memory is demand-paged, they won't use all the 3G you mention has been allocated, but only the memory that has been touched. A quick experiment with a 4G Ubuntu 15.04 VM shows that it uses 1.5G on booting.

This can be seen with bhyvectl --get-stats --vm=vmname. Look for the 'Resident memory' stat, which is the number of pages currently in use by the guest. This will drop if pages are being swapped out.
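Since the stats output is plain "name   counter" text, a small helper can pull out a single counter. This is a hypothetical convenience, not part of bhyvectl itself; the stats text below is hard-coded sample data, and on the host you would feed it bhyvectl --get-stats --vm=vmname instead:

```shell
# Hard-coded sample of bhyvectl --get-stats style output, for illustration.
stats='vcpu total runtime        1696565517711
number of ticks vcpu was idle   506778957'

# get_stat NAME: read stats text on stdin, print the counter for the
# line that starts with NAME (the counter is the last field).
get_stat() {
    awk -v name="$1" 'index($0, name) == 1 { print $NF }'
}

echo "$stats" | get_stat "number of ticks vcpu was idle"
```

The same pattern would extract 'Resident memory' on a version of bhyve that reports it.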

The D+ process status also indicates that the guest may be reading in pages from swap.

Unfortunately there's no direct way to wire guest memory (e.g. with a bhyve option). It can be done indirectly by assigning a PCI passthru device, in which case all guest memory will be wired at startup time.
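A rough sketch of the passthru approach, as a configuration fragment rather than something to run verbatim. The PCI address 2/0/0 and the VM parameters are placeholders; pciconf -lv on the host will show real device addresses:

```shell
# /boot/loader.conf: reserve the PCI device for passthru at boot.
# 2/0/0 is bus/slot/function and is a placeholder here.
#   pptdevs="2/0/0"
#   vmm_load="YES"

# Then hand the device to the guest with a passthru slot; as a side
# effect, all guest memory is wired at startup. Slot 7 is arbitrary.
bhyve -c 2 -m 1G -H \
  -s 0:0,hostbridge \
  -s 7:0,passthru,2/0/0 \
  -s 31:0,lpc -l com1,stdio \
  vmname
```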
 
Thank you, Grehan! Here's the output of that command. I don't see anything regarding "Resident memory" (or anything memory-related in the output, actually). Here is what I got for one of my VMs:

Code:
[user]@[machine]:[path] # bhyvectl --get-stats --vm=vmname
vcpu0
vcpu migration across host cpus                 2324348
total number of vm exits                        85270267
vm exits due to external interrupt              829930
number of times hlt was intercepted             10649227
number of times %cr access was intercepted      1
number of times rdmsr was intercepted           21
number of times wrmsr was intercepted           8
number of monitor trap exits                    0
number of times pause was intercepted           594253
vm exits due to interrupt window opening        12599999
vm exits due to nmi window opening              0
number of times in/out was intercepted          14257475
number of times astpending at exit              12535
number of vm exits handled in userspace         71088414
vcpu total runtime                              1696565517711
number of ticks vcpu was idle                   506778957
timer interrupts generated by vlapic            9985251
ipis sent to vcpu[0]                            1787528
ipis sent to vcpu[1]                            0
ipis sent to vcpu[2]                            0
ipis sent to vcpu[3]                            0
ipis sent to vcpu[4]                            0
ipis sent to vcpu[5]                            0
ipis sent to vcpu[6]                            0
ipis sent to vcpu[7]                            0
ipis sent to vcpu[8]                            0
ipis sent to vcpu[9]                            0
ipis sent to vcpu[10]                           0
ipis sent to vcpu[11]                           0
ipis sent to vcpu[12]                           0
ipis sent to vcpu[13]                           0
ipis sent to vcpu[14]                           0
ipis sent to vcpu[15]                           0
number of NMIs delivered to vcpu                0
 
Yes, I'm running version 10.0. I want to update to 10.1, but I ran into some issues updating my jails with ezjail, and I didn't want the version running in the jails to be out of sync with what's running on the main "host" system, though this may be an irrational fear.


Anyway, I'm going to try disabling swap, rebooting the system, and seeing if I still have issues after that. The system has plenty of memory, so hopefully this resolves things.
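For reference, this is roughly what disabling swap looks like on FreeBSD; a configuration sketch, with the device names taken from the swapinfo output earlier and to be adjusted for the actual system:

```shell
# Release all swap devices currently configured in /etc/fstab.
swapoff -a

# To keep swap off across reboots, comment out the swap entries in
# /etc/fstab, e.g.:
#   #/dev/gpt/swap0   none   swap   sw   0   0
```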
 
Hey Grehan,

I disabled swap and rebooted the system, and so far the VMs are running much faster. I'll have to test a bit longer and see if the issue comes back, but for the moment it looks like disabling swap did the trick. Thank you for your help, Grehan!
 