Solved: Planning ZFS encrypted swap and clearing it later

My main questions are:
- What is the best way to clear a ZFS encrypted swap if it fills up?
- If we allocate swap space at the rate of twice the available RAM, then how will we have enough memory to clear the swap?

It seems like, when we intervene to maintain or clear swap, we need to be able to pull its contents back into RAM; or we need to be able to carve out a section of swap and either clear it or transplant it somewhere else so the system can continue. If we take the advice to establish swap at twice the RAM, are we creating a situation where clearing the swap is not feasible?

When I try the swapoff/swapon technique, it works well with an additional swap file that I build, but it doesn't help with the full encrypted swap partition: swapoff fails on the encrypted device with an error saying it can't allocate memory.
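For reference, the exact sequence I'm attempting looks like this (the device name is just an example; mine is the GELI-backed .eli device):
Code:
# swapinfo -h               # show how full each swap device is
# swapoff /dev/ada0p3.eli   # this is the step that fails with "cannot allocate memory"
# swapon /dev/ada0p3.eli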

I will probably try to expand the swap with ZFS commands, but that seems like a temporary solution. Part of the problem here is that the programs I am running just run without checking whether they have sufficient resources available. Is there a way to reserve a minimum amount or section of swap for a program using ZFS commands?
 
This doesn't make any sense. Swap should be on an empty partition, not one with a filesystem. This is especially true with a copy-on-write filesystem like ZFS, where it's going to put a ton of stress on the underlying disk. Resizing swap partitions is rather tricky and in most cases requires resizing the other partitions on the disk as well.
 
Swap should be on an empty partition, not one with a filesystem.



gladiola said:
If we allocate swap space at the rate of twice the available RAM, then how will we have enough memory to clear the swap?
You misunderstand what swap is and how it's used.

When selecting partition sizes, keep the space requirements in mind. Running out of space in one partition while barely using another can be a hassle.

As a rule of thumb, the swap partition should be about double the size of physical memory (RAM). Systems with minimal RAM may perform better with more swap. Configuring too little swap can lead to inefficiencies in the VM page scanning code and might create issues later if more memory is added.
https://www.freebsd.org/doc/en/books/handbook/bsdinstall-partitioning.html
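For example, on a machine with 8 GB of RAM that rule gives a 16 GB swap partition, which could be created at install time with something like this (the disk and label names are examples):
Code:
# gpart add -t freebsd-swap -s 16G -l swap0 ada0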

I'm not confused. I have an encrypted swap partition that fills up. swapon and swapoff won't clear it because I don't have enough memory to hold the contents. Notice, above, that the recommended practice was to allocate twice as much swap as physical RAM. Well, if we do that, then how will we clear it when it fills up?
 
I was able to fill up that swap at least three times in a row before my OP. I will do some more runs, fill it up again, and take some screenshots so that you all can see better. There's a "cannot allocate memory" error message that follows the .eli partition name after the swapoff/swapon. Maybe then you'll see what I'm talking about.

Still, it looks like there is going to be a need for advice on getting the swap cleared. If it's twice as big as the available RAM, and the swapoff procedure needs to hold the swap contents in RAM, it should not be a surprise that we can't clear it this way. What do people do to prevent that problem? Is there another way to clear it?
 
Swap is like an extension of RAM.
If an application asks the kernel for more RAM and there is no free RAM, the kernel moves some data from RAM into swap. The newly freed RAM can then be used by the application. If the data that went to swap needs to be accessed again, it is transferred back into RAM (and, if no RAM is free, some other data goes out to swap). That's the reason computers with too little RAM become so damn slow.

If your swap keeps filling up, your real issue is an application that is filling the RAM. This is a common problem called a memory leak. Instead of hunting for a swap issue, you have to track down and fix the application that is leaking.
 
The question is about how to clear the swap when it fills up in an encrypted partition and how to plan the disks to avoid that situation where swapoff/swapon can't clear it.
 
The question is about how to clear the swap when it fills up in an encrypted partition and how to plan the disks to avoid that situation where swapoff/swapon can't clear it.
Stop the processes that are eating away the memory. If stopping the processes frees enough memory you can try to remove the swap. If there is a memory leak that used memory won't be freed when the processes are stopped. And the only way to recover from that is to reboot.
 
I was able to add a swap file, which provides some swap. The processes were getting shut down automatically; there were several messages about PIDs being killed because there was no swap space.

Is there not a way to detach the partition and reattach it cleared? Or to plan ahead so that becomes possible?
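What I'm imagining is something along these lines, assuming the swap is a geli(8) onetime device (ada0p3 is an example name) and assuming swapoff can find enough free RAM to succeed:
Code:
# swapoff /dev/ada0p3.eli   # only works if the in-use pages fit back into RAM
# geli detach ada0p3.eli    # discards the one-time key, so the old contents are gone
#                           # (a device created with -d may already have detached itself)
# geli onetime -d ada0p3    # reattach with a fresh random key
# swapon /dev/ada0p3.eli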

Maybe it couldn't allocate memory because there was another process still running that was holding on to the swap. So the stopped PIDs may have been other, follow-on, failures.

We will see. I'll probably be able to hit the error messages again soon.
 
Deal with the problem, not the symptoms. Running out of swap is a symptom of an underlying problem, something or some process is eating away all memory. If you don't deal with the underlying problem you will continue to struggle with the symptoms. Stop focusing on your swap and find out why you keep running out of memory.
 
Yes, you are right about that. This is happening on my poudriere server. Some of the long builds seem to be hitting it with this. One of the llvms or rust builds. Since it's not a program I wrote, poring over the code to find a memory leak probably won't be fruitful.

So, I thought I could avoid the problem by running multiple scripts to build ports per jail. Like, do a preparatory build of the compilers and follow it with another build of the desired ports, with a break in between to clear the swap.
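Something like this (the jail name, ports tree, and list path are just examples from my setup):
Code:
#!/bin/sh
# preparatory pass: build the heavy toolchains first
poudriere bulk -j 121amd64 -p default devel/llvm10 lang/rust
# try to recycle the encrypted swap between passes
swapoff /dev/ada0p3.eli && swapon /dev/ada0p3.eli
# main pass: build the rest of the list
poudriere bulk -j 121amd64 -p default -f /usr/local/etc/poudriere.d/pkglist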

These same builds were running for several weeks with no trouble. So, I think the computer can basically do the job. I just don't like it that I'm in this position where I have to power cycle the computer because of a problem.

Like, I wonder if there is a way to assign a swap file to a build jail. So if you need swap when running in JID xxx, then use this swap file; everyone else, use the regular swap partition. Maybe something like that would give it a place to use swap without jeopardizing the assets used by the whole machine.
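I haven't found a per-jail swap file, but maybe rctl(8) gets close: it can cap a single jail's swap use so one runaway build can't drain the whole machine. A sketch (the jail name and the 8 GB cap are examples; racct has to be enabled first):
Code:
# echo 'kern.racct.enable=1' >> /boot/loader.conf   # rctl requires racct; takes effect after reboot
# rctl -a jail:121amd64:swapuse:deny=8g             # deny swap allocations beyond 8 GB for that jail
# rctl -u jail:121amd64                             # show the jail's current resource usage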

Last night's build has already run into trouble. Will BBL.
 

Yes. Not everything that technically can be done is feasible. Using a file for swap is technically possible, but should not be recommended - because of these and other problems.

What should be done is, on installation, reserve a piece of disk about twice the intended RAM size, and use that as a raw partition for (encrypted) swap.
It is quite normal for a system to use swap: there are many things in memory that are rarely used, and these can go to swap. Installing so much memory that swap will never be used might be a waste of money.
It is also normal for port builds to consume swap, depending on the system configuration and the number of parallel builds. This is not harmful; things may become slow if swap is not on SSD, in which case reduce the number of parallel builds or make more RAM available.
Swap will normally be attached at system boot, and it will not be "cleared", ever, because there is no need to do that.
 
This is happening on my poudriere server. Some of the long builds seem to be hitting it with this.
How much RAM does the machine have? And how many concurrent jobs are running? Mine is a Core i5 with 16GB RAM and 16GB swap, running 4 jobs. Never had issues running out of memory though.

One of the llvms or rust builds.
Yeah, I'm having regular problems with those too. My server often just reboots building those. No warning, not running out of memory, plenty of swap free, no panics, just an instant reboot.
 
The box has an i5 with 8GB RAM. Poudriere will build up to 4 at a time. Usually with llvm and these other longer ones, it'll quickly get down to 1 or 2 because of the dependency chains.
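I may also dial back the builder count in poudriere.conf (the path is the default install location; the value is just what I plan to try on an 8GB box):
Code:
# /usr/local/etc/poudriere.conf
PARALLEL_JOBS=2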

It started acting up last week. When mine jams, the elapsed time counter will just stop. Sometimes, it'll continue and report the failed and skipped builds. Here lately, the swap's been filling up.

By clearing the swap, I mean making it available for fresh reuse. The man page for swapon/swapoff mentions deallocating the metadata.

https://www.freebsd.org/cgi/man.cgi...opos=0&manpath=FreeBSD+12.1-RELEASE+and+Ports

For example, when both the swap partition and the swap file fill up, swapoff will clear the file, but I couldn't get it to clear the .eli partition for reuse.

I also have a broken secondary partition table on that disk. Didn't know what to do about that until this morning. I'm wondering if this is a contributing factor. Will try a gpart restore later.
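From what I've read, the subcommand for a damaged backup table is gpart recover, which rebuilds the secondary GPT from the primary (ada0 is an example disk name):
Code:
# gpart show ada0      # a damaged table is flagged CORRUPT
# gpart recover ada0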
 
The last few times I ran devel/poudriere, a swap file on the ZFS system was not really helpful. It provided swap space but also a lot of load, maybe because of the I/O traffic to ZFS. Building devel/llvm and the other big ones was possible on my system with a separate swap partition, but not with a swap file. I had to restrict the big ones to one job at a time. Even so, it was no fun with just 2GB of RAM. Currently I can live with the default options and use packages.
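For the record, restricting the big ports to one job can be done in poudriere's make.conf (the path is the default; the port globs and job count are just an example):
Code:
# /usr/local/etc/poudriere.d/make.conf
.if ${.CURDIR:M*/devel/llvm*} || ${.CURDIR:M*/lang/rust*}
MAKE_JOBS_NUMBER=1
.endif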
 
It worked on the next try. swapoff/swapon did clear the memory on the next run.

I was in a different place in a cascade of cronjob poudriere runs. So, there may have simply been better conditions at the time.

Before I tried to clear swap, I checked jls. This time there was something different: no jails running. Before this, I had to shut down stalled jails with
Code:
jail -r
to remove the jails. Perhaps there were other processes holding a lock; I don't know. But the expected commands worked, so I probably had some other background condition that I didn't realize at the time.

Thanks to everyone for your help.
 