vmstat avm column and UEFI versus BIOS

Trying to learn how to track what memory is being used by what processes, and I'm a bit puzzled by vmstat's avm column.

On some machines it's HUGE (many times the size of the RAM), while on others it seems closer to reflecting "reality" and the numbers from top.

The only pattern I've tentatively found is that some of the machines (those reporting bigger numbers) are set up as UEFI, and the other machines with smaller numbers are BIOS/legacy.

Dell R430, BIOS, 32G RAM:
Code:
% vmstat -w 5
procs  memory       page                    disks     faults         cpu
r b w  avm   fre   flt  re  pi  po    fr   sr da0 da1   in    sy    cs us sy id
0 0 0 5.1G   26G    12   0   0   0    19    9   0   0   18   234   182  0  0 100
0 0 0 5.1G   26G     0   0   0   0     0   14   0   0   12    48   138  0  0 100

Dell R430, UEFI, 32G RAM:
Code:
% vmstat -w 5
procs  memory       page                    disks     faults         cpu
r b w  avm   fre   flt  re  pi  po    fr   sr da0 pa0   in    sy    cs us sy id
0 0 0 530G  1.4G   432  16   0   0   770 2089   0   0  171 23525  1867  1  0 99
0 0 0 530G  1.4G  1059   0   0   0   548 1303   1   0   79 15578  1365  0  0 100

The second machine is (a lot) busier, but I'm not looking at those numbers.

Why does that second machine think it's got 530G of virtual memory, when an almost identical machine is reporting 5.1G?

I don't think it's down to the machine being busier.
I've got a quiet Dell T330 with 48G of RAM (UEFI) reporting an avm of 515G, and an Intel NUC with 8G of RAM (UEFI) reporting 514G.

Every BIOS machine I look at has a lower avm.

Just wondering why?
 
It also seems to be common for it to show as 2TB under VMs, e.g.
Code:
% vmstat -w 5
procs  memory       page                    disks     faults         cpu
r b w  avm   fre   flt  re  pi  po    fr   sr da0 cd0   in    sy    cs us sy id
0 0 1 2.0T   62M    48  72   0   0     1   18   0   0   50    26    85  8  3 89
0 0 1 2.0T   56M   316   0   0   0    62  185   5   0  117 10200   490  7  2 91

Not stopping me doing anything, just very curious!

EDIT: the 2TB part has been asked about before, not that long ago: https://forums.freebsd.org/threads/huge-memory-usage.74748/

But I'm curious on the seeming UEFI/BIOS distinction.
 
By the way, vmstat(8) just grabs the values from the vm.vmtotal sysctl.
Just type sysctl vm.vmtotal and look at the “Virtual Memory” line; the second value in that line (labeled “Active”) is the one reported as “avm” by vmstat(8).
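
If you’d rather pull the number out programmatically, something like the following should work (a minimal sketch, assuming FreeBSD 12 or later, where the struct vmtotal counters are 64-bit and counted in pages):
Code:
/*
 * Minimal sketch: read vm.vmtotal directly and print the value that
 * vmstat(8) reports as "avm".  Assumes FreeBSD 12+, where the
 * struct vmtotal counters are 64-bit and counted in pages.
 */
#include <sys/types.h>
#include <sys/sysctl.h>
#include <sys/vmmeter.h>

#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

int
main(void)
{
	struct vmtotal vmt;
	size_t len = sizeof(vmt);
	uintmax_t pagesize = (uintmax_t)sysconf(_SC_PAGESIZE);

	if (sysctlbyname("vm.vmtotal", &vmt, &len, NULL, 0) == -1) {
		perror("sysctlbyname(vm.vmtotal)");
		return (1);
	}
	/* t_avm is counted in pages; convert to bytes and GiB. */
	printf("avm: %ju bytes (~%ju GiB)\n",
	    (uintmax_t)vmt.t_avm * pagesize,
	    ((uintmax_t)vmt.t_avm * pagesize) >> 30);
	return (0);
}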

I can confirm that the amount of total virtual memory reported for machines booted via UEFI is much larger than for machines booted via legacy BIOS or CSM.

If you can read a little bit of C code: The value is computed in the vmtotal() function in /sys/vm/vm_meter.c. The interesting part begins at the comment “Calculate object memory usage statistics” (around line 240 in stable/12), and the value in question is accumulated in the variable called total.t_avm. Basically, the code goes through all VM objects known to the kernel, and calculates the sum of their virtual sizes. As you can see from the code, it takes care to exclude certain objects from the sum, in particular device objects (like /dev/mem), mounted file systems (these are VM objects, too!) and similar things.
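
For the curious, the relevant part of that loop looks roughly like this (a condensed paraphrase, not the verbatim code; some of the skip conditions are omitted, so check /sys/vm/vm_meter.c for the full version):
Code:
	/*
	 * Condensed paraphrase of the accumulation loop in vmtotal().
	 * object->size and the t_* counters are in pages.
	 */
	TAILQ_FOREACH(object, &vm_object_list, object_list) {
		if ((object->flags & OBJ_FICTITIOUS) != 0)
			continue;	/* device objects, e.g. /dev/mem */
		if (object->ref_count == 0)
			continue;	/* unreferenced objects, e.g. vnodes of
					   mounted file systems with no contents */
		total.t_vm += object->size;
		total.t_rm += object->resident_page_count;
		if (vm_object_active(object)) {
			total.t_avm += object->size;	/* shown by vmstat as avm */
			total.t_arm += object->resident_page_count;
		}
	}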

My guess is: When the kernel boots via UEFI, it inherits certain data structures that are stored as VM objects, and the vmtotal() function hasn’t been adapted yet to exclude those from the calculation. Something similar probably happens in the case of a virtual machine, when the hypervisor or “virtual BIOS” passes certain data structures on to the kernel during boot.

It might be worth opening a PR on Bugzilla for this, so a developer familiar with the code in question may have a look at it. Note, however, that this is just a cosmetic problem. The calculated value is only used for display purposes; it does not have an impact on the kernel’s memory management.
 
It's a sum of all virtual memory pages. If you are looking for physical memory allocation, use systat -v or top.
 
It's a sum of all virtual memory pages. If you are looking for physical memory allocation, use systat -v or top.
Well, yes, of course, but that isn’t the question … The question is why the reported total value is roughly two orders of magnitude higher when you boot via UEFI, versus booting via BIOS/CSM.
 
Thanks, olli.

I did find things like this:

But that seems to be more to do with how to detect the physical memory of the machine on boot-up (but perhaps something here causes the numbers shown later on.)

I will have a read of the code (for education and entertainment) so thank you for pointing me in that direction.
 
But that seems to be more to do with how to detect the physical memory of the machine on boot-up (but perhaps something here causes the numbers shown later on.)
Physical memory (a.k.a. RAM) and virtual memory are entirely different things.

For example, every process has its own virtual address space. Some of it is mapped to physical memory, but not all of it. When a process allocates a large chunk of memory, that chunk is assigned within its virtual address space, but it’s not mapped to physical RAM yet. Only when the process actually accesses some location within that address space does the processor generate a “fault”, causing the kernel’s VM subsystem to map a page of physical RAM at that address.
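
You can watch this happen with a toy program (purely illustrative): reserve a large anonymous mapping but touch only one page of it, then look at the process in top(1): its SIZE jumps immediately while RES barely moves.
Code:
/*
 * Toy demonstration: reserve 8 GB of virtual address space, but
 * touch only a single page.  top(1) shows the full 8 GB under SIZE
 * right away, while RES stays tiny until pages are actually written.
 */
#include <sys/mman.h>

#include <stdio.h>
#include <unistd.h>

int
main(void)
{
	size_t len = (size_t)8 << 30;	/* 8 GB of virtual address space */
	char *p;

	p = mmap(NULL, len, PROT_READ | PROT_WRITE,
	    MAP_ANON | MAP_PRIVATE, -1, 0);
	if (p == MAP_FAILED) {
		perror("mmap");
		return (1);
	}
	p[0] = 1;	/* fault in one page; only now is any RAM mapped */
	printf("mapped %zu bytes, touched one page; check top(1)\n", len);
	pause();	/* keep the process around for inspection */
	return (0);
}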

Conversely, the same page of RAM can be mapped into the virtual address spaces of multiple processes at once. A typical example is a shared library like libc (/lib/libc*): its code is loaded into physical memory only once, but mapped into the virtual memory of many processes. So, if you sum up the virtual sizes of the processes’ code segments (which include their libraries), you’ll end up with a number that is much larger than the physical RAM actually used by that code.
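
The effect of sharing is just as easy to reproduce artificially (again only an illustration): a single MAP_SHARED region mapped into a parent and its child is counted in full in both processes’ virtual sizes, even though the touched pages exist in RAM only once.
Code:
/*
 * Illustration of sharing: one shared anonymous region, visible in
 * two processes.  Each process counts the full 1 GB in its virtual
 * size, but the underlying physical pages exist only once.
 */
#include <sys/mman.h>
#include <sys/wait.h>

#include <stdio.h>
#include <string.h>
#include <unistd.h>

int
main(void)
{
	size_t len = (size_t)1 << 30;	/* 1 GB shared region */
	char *p;

	p = mmap(NULL, len, PROT_READ | PROT_WRITE,
	    MAP_ANON | MAP_SHARED, -1, 0);
	if (p == MAP_FAILED) {
		perror("mmap");
		return (1);
	}
	memset(p, 1, len);	/* fault the pages in once, in the parent */

	if (fork() == 0) {
		/* Child: same physical pages, but another 1 GB of
		 * virtual size when the per-process sizes are summed. */
		sleep(60);
		_exit(0);
	}
	sleep(60);	/* inspect both processes with top(1) or procstat -v */
	wait(NULL);
	return (0);
}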

Another example is the X server (Xorg), in particular the display driver. It usually maps the display memory (video RAM) into its address space. It even does that several times, with different alignments, for better performance. That’s why the virtual size of the X server can grow huge, even though it doesn’t require that much main memory. By the way, UEFI also manages a graphical framebuffer which is passed to the kernel during boot – this shouldn’t account for a difference of 500 GB, though.
 