FreeBSD acting badly under ESXi

I have just observed that all my FreeBSD machines running on ESXi 5.5 slow to a crawl when there is heavy I/O on the host machine.

However, the Linux machines are absolutely fine, as is the host itself.

I am also not just talking about a drop in peak performance: I cannot even log in to the machines over SSH due to massive packet loss.

Here is some data from 'esxtop'

Showing 4 VMs

Fang, Test, and HardenedBSD1 are all BSD machines, whilst Seed-Debian is Linux.

Notice how Seed-Debian has 0.00 in SWCUR.

Code:
 4:11:08pm up 404 days  5:23, 503 worlds, 4 VMs, 14 vCPUs; MEM overcommit avg: 0.00, 0.00, 0.00
PMEM  /MB: 16291   total: 1238     vmk,8633 other, 6419 free
VMKMEM/MB: 16212 managed:   651 minfree,  7761 rsvd,   8450 ursvd,  high state
PSHARE/MB:     760  shared,      55  common:     705 saving
SWAP  /MB:    1091    curr,     678 rclmtgt:                 0.00 r/s,   0.00 w/s
ZIP   /MB:      67  zipped,      43   saved
MEMCTL/MB:       0    curr,       0  target,    2571 max

     GID NAME               MEMSZ    GRANT    SZTGT     TCHD   TCHD_W    SWCUR    SWTGT   SWR/s   SWW/s  LLSWR/s  LLSWW/s   OVHDUW
 1732885 Fang             4096.00  3605.96  3951.73   573.44   573.44   492.84   138.11    0.00    0.00     0.00     0.00    19.37
   12009 Seed-Debian      4096.00  4096.00  3855.95   122.88    81.92     0.00     0.00    0.00    0.00     0.00     0.00     6.64
187709107 Test             1024.00   622.61   447.04    61.44    40.96   254.29   248.09    0.00    0.00     0.00     0.00     7.17
39964045 HardenedBSD1     1024.00   665.00   705.37    61.44    30.72   344.06   292.58    0.00    0.00     0.00     0.00     7.17
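
For anyone wanting to compare, the numbers above are from the memory view in interactive esxtop; something like the following (a rough sketch, flags as per the esxtop help) should capture the same counters over time for offline comparison:

Code:
# interactive: run esxtop and press 'm' for the memory screen
esxtop
# batch mode: 3 samples, 10 seconds apart, saved for later review
esxtop -b -d 10 -n 3 > /tmp/esxtop-mem.csv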

Also, in the main esxtop screen VMWAIT was about 500% for each BSD VM but 0% for Debian.

Apparently a high SWCUR figure means the VM currently has a lot of memory in host swap, but ironically two of the BSD machines have all their RAM reserved, meaning they should never be swapped, while Debian has zero reserved. So something odd is going on here, and I am wondering if anyone has seen this before or has any ideas.
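
For what it's worth, the reservation can be double-checked from the ESXi shell by looking at the sched.mem.* entries in the VM's .vmx file (the datastore path below is only an example; adjust for your layout):

Code:
# path is an example - substitute the real datastore and VM directory
grep sched.mem /vmfs/volumes/datastore1/Fang/Fang.vmx
# sched.mem.min is the reservation in MB; sched.mem.pin = "TRUE"
# corresponds to "Reserve all guest memory (All locked)"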

According to the performance graphs the BSD machines have no swapping, as "balloon" reports zero, and I understand balloon to be a measure of RAM used in the swap file. However, all the VMs report 0 for this, yet esxtop reports 1069 MB of swap with 654 MB current.
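
One thing I still need to verify (so treat this as a guess) is whether the VMware balloon driver is even loaded inside the FreeBSD guests, since the balloon figure only means anything if the guest module is present. Inside the guest, something like this should show it:

Code:
# inside the FreeBSD guest: vmmemctl is the balloon module shipped
# with the VMware guest tools (emulators/open-vm-tools port)
kldstat | grep -i vmmemctl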

I am guessing GRANT is the physical RAM actually assigned, so it would seem ESXi is favouring the Linux machine and ignoring the reservation settings completely, but I have no idea why.
 
You are right.
From the VMware documentation:
Q: Why is "GRANT" less than "MEMSZ"?
A: Some guest physical memory has never been used, or is reclaimed by balloon driver, or is swapped out to the VM swap file. Note that this kind of swap is host swap, not the guest swap by the guest OS.
What does top on the host tell you about its memory situation?
 
Sorry I didn't get back to you sooner.

top shows more than 2 GB of free memory. I don't think ESXi has the best memory management in the world; although it isn't documented, it seems to reserve about 4 GB of RAM for its own functions, so it is not a light hypervisor. I plan to use Proxmox on another host. (It also reserves almost a full core's worth of CPU MHz.)

I have now set all memory as locked ("Reserve all guest memory") on my FreeBSD machines.

After I locked the FreeBSD machines' memory so they cannot be swapped, the Debian machine had its grant reduced instead of utilising swap on the host, so I think ESXi has more memory-control functionality with Linux VMs.
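
As a side note, per-VM balloon and host-swap figures can also be pulled on the host with vim-cmd instead of the full esxtop screen; this is only a sketch from memory, so the exact field names may differ:

Code:
# list the VM IDs known to the host
vim-cmd vmsvc/getallvms
# pull the summary for one VM; quickStats should include
# balloonedMemory and swappedMemory (in MB)
vim-cmd vmsvc/get.summary <vmid>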
 
Thank you for reporting back!
Your findings confirm the suspicions I had when I tried VMware; it seems a real resource eater.
Maybe there are better "balloon drivers" for Linux than for FreeBSD (if there are any for the latter at all), which would explain why the Linux guest seems better "integrated"?
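
For what it's worth, FreeBSD does have VMware guest tools in ports (emulators/open-vm-tools), which include the vmmemctl balloon module; something along these lines should get it installed, though I have not verified it myself and the exact rc.conf knobs depend on the port version (check its pkg-message):

Code:
# inside the FreeBSD guest (untested suggestion)
pkg install open-vm-tools-nox11
# load the balloon module; module and rc.conf names may vary by version
kldload vmmemctl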
 
What build of ESXi 5.5 are you running? My esxtop -a shows a totally different set of columns, 29 of them in total.

Code:
~ # uname -a
VMkernel esxi01.something.net 5.5.0 #1 SMP Release build-3116895 Oct  2 2015 12:27:22 x86_64 GNU/Linux

Edit: that means it's ESXi 5.5 Update 3.
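
(For anyone checking their own host, vmware -v in the ESXi shell prints the version and build number directly, which is quicker than decoding the uname output.)

Code:
~ # vmware -v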

Intel G3220 processor, 16 GB RAM, with 11 FreeBSD 11.1-RELEASE VMs and 5 Ubuntu 14.04 VMs.

I do have a 240 GB SSD used as a read cache, which makes a HUGE difference to disk I/O performance, but overall the Linux VMs are the pigs compared to FreeBSD.
 