useless rabbit hole about swap usage / interpretation please ignore

I have a small VM, 1 CPU, 2GB, that I run Wordpress and a few other things on. The default installer created a 2G swap partition.
Once booted up, I tried to push the system hard by loading all the pages on the website, clicking on downloadable content, etc., to really work the disk and ARC. Swap usage reached 20%.

I'm trying to figure out if I need 2GB of RAM, so I tried to turn off the swap, and got a "cannot allocate memory" error.
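For anyone following along, this is roughly the sequence involved; a sketch, assuming a default FreeBSD install, with output and device names varying per system:

```shell
# Show configured swap devices and how much of each is in use
swapinfo -h

# Try to disable all swap devices listed in /etc/fstab.
# This is the step that can fail with "cannot allocate memory":
# everything currently paged out must fit back into free RAM first.
swapoff -a

# Check how much RAM is actually free before retrying
# (v_free_count is in pages, typically 4 KiB each on amd64)
sysctl vm.stats.vm.v_free_count
```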

But, the system is plenty usable without swap. So what is this telling me? I get that I made it use a bunch of memory handling the workload I threw at it, but then shouldn't the system be able to discard cached data when I tell it to swapoff?

I'm relatively new to FreeBSD, so it's not clear to me why the machine runs fine without swap but then, after some workload, can't live without it. Can anyone explain?
 
Several possible reasons. One is: memory usage can be very spiky and unpredictable. It could be that the tests you did after turning off swap just got unlucky.

Memory usage is also often (not always) cumulative: a program starts and allocates more and more. As long as it succeeds, it will often hang on to the memory. There is no mechanism for the OS to send a "back pressure" signal to applications that says "please release memory now, we're having a shortage".

The only thing the OS can do (for example, as a reaction to swapoff) is to remove pages from the internal buffer cache (which mostly holds things that have been read from / written to file systems). That might not have been enough in your case to save the system. That buffer cache usage is quite flexible, and when the system runs low on overall memory, it will often (not always) continue running, but slowly, with minimal caching.

If I remember right (big if!), in FreeBSD the swap usage only reports things that actually have to be in swap, not how much has been swapped out previously. If that's right, then the 20% swap usage means that your peak memory usage was 2 GB + 20% of 2 GB = 2.4 GB. It is possible that the extra 0.4 GB was just a little too much to handle.
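Plugging in the thread's numbers (the original post says 2 GB of RAM and a 2 GB swap partition, with 20% of swap in use), the back-of-the-envelope estimate works out like this:

```shell
# Back-of-the-envelope peak-memory estimate from the thread's numbers.
ram_mb=2048        # 2 GB RAM, per the original post
swap_mb=2048       # 2 GB swap partition
swap_used_pct=20   # reported swap usage

swap_used_mb=$((swap_mb * swap_used_pct / 100))
peak_mb=$((ram_mb + swap_used_mb))

echo "swap in use: ${swap_used_mb} MB"
echo "estimated peak footprint: ${peak_mb} MB"
```

That is, roughly 0.4 GB more than the machine's RAM at the worst moment, which matches the "just a little too much" reading above.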

My advice: if the system "feels good" with 2 GB + swap, leave it alone. Premature optimization is the root of all evil. Remember the old mantra about how to administer a computer: you need a man and a dog. The man has to feed the dog. The dog has to bite the man if he tries to mess with the computer.
 
why the machine runs fine without swap, but then after some workload, can't live without it. Can anyone explain?
Nothing to explain. You probably have a program hogging RAM.

My opinion is no swap at all. I run all SSD and lots of RAM.
If a program bombs out and you feed it more memory, are you really helping things?
It's going to fail anyway once swap is eaten up.

So really this is about malformed programs. They eat swap after devouring real memory.
I would monitor top to see where your problem is.
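For a quick look at who is eating the RAM, something like this works on FreeBSD (flags per top(1); a one-shot snapshot in batch mode, sorted by resident memory, showing the top 10 processes):

```shell
# One snapshot of the biggest resident-memory consumers (FreeBSD top)
top -b -o res 10
```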

I despise swap because I want to minimize disk writes to SSD. I just don't like the concept.

Buy more RAM or fix/replace the broken program is how I feel.
From my vantage point, FreeBSD uses very little RAM at all. So it's down to the programs you use.
They can sometimes act badly.


I'm relatively new to FreeBSD,
Welcome to the Forum. I have seen your handle at that other place...
 
If a program bombs out and you feed it more memory are you really helping things?
Depends.

Some programs are broken, eat memory wastefully, eat more and more, and if you give them infinite memory they will eat it all.

Other programs simply need more memory than is currently available. They often need memory only for a short period, and then they either release it or exit gracefully (which releases it). Note that with today's more complex software architectures, the "exit gracefully" case may happen internally to another program. If they need more memory than the machine has RAM, then swap may be able to get the whole system through the crisis without causing a big mess. And swapping a little bit out for a short period really isn't a big deal from an IO viewpoint.

Also remember: one program using nearly all of memory (whether because it actually needs it or because it is being wasteful) can have bad effects on other, innocent programs. The program that fails due to being out of memory is not always the one that uses the most memory, and in particular not always the one that uses memory inefficiently.

Also: the distinction between "programs that are wasteful or broken about using memory" and "programs that happen to need lots of memory" is not black and white. There are lots of shades of grey. And today the scarce resource is less hardware and more software-engineer time. So there are programs out there that are imperfect and a bit inefficient without being completely awful. Those really get helped by swap: when there is memory pressure, they get swapped out. The memory pages they really use get paged back in, while rarely used (or unused) pages remain in the swap file. So having some swap allows us to paper over cracks in programs that are not great, but good enough to use.

I despise swap because I want to minimize disk writes to SSD.
I doubt that, for common workload patterns, swap writes are a significant fraction of all writes to the SSD. Compare it to logging, for example, which is particularly bad once you take write amplification in the flash hardware into account.

Buy more RAM or fix/replace the broken program is how I feel.
Neither suggestion may be realistic. Buying more RAM is expensive, and if it requires replacing the system or motherboard, it may be prohibitively expensive. Software that is not perfect is a fact of life, and spending expensive, scarce human time fixing software for small systems may not be worthwhile overall.
 