Limit clamav memory usage

ClamAV operates as a user:
Code:
clamav:*:106:106:Clamav Antivirus:/nonexistent:/usr/sbin/nologin
Maybe you could limit memory to this user.
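On FreeBSD that would normally go through login.conf(5), by giving the clamav user its own resource class. A sketch of what that could look like (the class name and the limit values here are illustrative assumptions, not values from this thread):

```shell
# /etc/login.conf -- hypothetical resource class for the clamav user
clamav:\
        :memoryuse=1536M:\
        :vmemoryuse=2048M:\
        :tc=daemon:

# Rebuild the capability database and assign the class to the user:
# cap_mkdb /etc/login.conf
# pw usermod clamav -L clamav
```

Note the caveat discussed below: an rlimit like this makes allocations beyond the cap fail; it does not make clamav need less memory.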
 
Didn't work, says:
Code:
LibClamAV Error: mpool_malloc(): Can't allocate memory (16781312 bytes).
LibClamAV Error: mpool_malloc(): Can't allocate memory (16781312 bytes).
LibClamAV Error: mpool_malloc(): Can't allocate memory (16781312 bytes).
LibClamAV Error: mpool_malloc(): Can't allocate memory (16781312 bytes).
LibClamAV Error: mpool_malloc(): Can't allocate memory (16781312 bytes).
LibClamAV Error: mpool_malloc(): Can't allocate memory (16781312 bytes).
LibClamAV Error: mpool_malloc(): Can't allocate memory (16781312 bytes).
LibClamAV Error: mpool_malloc(): Can't allocate memory (16781312 bytes).
LibClamAV Error: mpool_malloc(): Can't allocate memory (16781312 bytes).
LibClamAV Error: mpool_malloc(): Can't allocate memory (16781312 bytes).
LibClamAV Error: mpool_malloc(): Can't allocate memory (16781312 bytes).
LibClamAV Error: mpool_malloc(): Can't allocate memory (16781312 bytes)
 
A sincere question: why?

Is there, for example, a problem with overall performance of the system with scanning at a particular time of day?
I'm not a FreeBSD user, but on Linux it's because clamav is usually scanning mail traffic (typically as part of a mail server, subordinate to amavisd, which orchestrates spam and virus detection). You can't "do it later" in most implementations.

Clamav has two modes of operation. In the first, it runs as a daemon, loads all its definitions into RAM, and scans every piece of mail it is asked to process using the rules/patterns/signatures in that in-RAM table. Putting any part of the table on disk, or even in swap, would make it too slow to be useful. In the second, it runs as a one-shot command, but it still has to load the whole definition set (as I understand it). In the implementations I'm familiar with, the daemon mode is used primarily and, if that fails, clamav automatically falls back to the one-shot mode.

Additionally, the default is that when a new set of definitions becomes available (several times daily), it keeps the old process active, spawns a new scanning process, runs a validity check on the new definition table, and only then switches the scanning to the new process. This means there are two complete sets of definitions in RAM at various points in the day, but scanning continues uninterrupted. This behavior can be disabled, but at the cost of delays in processing email.
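For what it's worth, recent ClamAV versions expose that reload behavior as a clamd.conf switch (ConcurrentDatabaseReload, added around ClamAV 0.103; check the documentation for your version). A sketch:

```shell
# /usr/local/etc/clamd.conf
# Default is yes: old and new signature databases coexist in RAM during
# a reload, so scanning never stops. Setting it to no reloads in place,
# roughly halving peak memory use, but clamd cannot scan while the
# reload is in progress.
ConcurrentDatabaseReload no
```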
 
Sorry, rereading the thread carefully, I suspect your question might mean "why limit the memory use" -- which would seem to have an obvious answer: (s)he's unable or unwilling to give clamav the memory it wants (or needs). I took it to mean "why does clamav require so much RAM", which is what I responded to. Apologies for any confusion!
 
A sincere question: why?

Is there, for example, a problem with overall performance of the system with scanning at a particular time of day?
Limiting the RAM from the OS will make the app fail when it reaches that limit.

It will not make the application work within those constraints.
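That failure mode is easy to reproduce in any shell: cap the address space with ulimit in a subshell, then request more than the cap. The allocation simply fails; the process does not shrink to fit. (python3 here is just a convenient way to ask for a large allocation; any program would do.)

```shell
# Cap virtual memory at ~1 GiB (ulimit -v takes kilobytes), then try to
# allocate a 2 GiB buffer. The allocation fails with an out-of-memory
# error instead of the process "working within" the limit.
( ulimit -v 1048576; python3 -c "buf = bytearray(2 * 1024**3)" ) 2>&1
```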
Why limit memory use? How about out-of-control growth of the amount of RAM the application requests, which leads to OOM killing of other processes and crashing the whole machine? This used to be an issue when machines had 16 MB of RAM back in the 1990s. Nowadays RAM is measured in gigabytes, so everybody thinks it is unlimited. A good discussion of why limiting an application's RAM usage is a good idea: PR 263436
 
Well, that doesn't change the fact that limiting the application to a certain amount of RAM will kill that app when it grows, not make it use less memory.

What you want for a survivable limit is for paging to set in once a certain amount is reached. You can only get that by stuffing the application into a virtual machine with its own kernel, limiting its RAM from the host, and adding some swap space. That would actually limit the size of the application without killing it. Performance might be bad, though.
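On FreeBSD there is a middle ground worth mentioning: rctl(8) can attach memory rules to a jail or user without a full VM. Whether a given rule denies allocations outright or merely reports them depends on the resource and action chosen, so read rctl(8) carefully. A sketch (the jail name "scanner" and the amounts are illustrative assumptions):

```shell
# /etc/rctl.conf -- hypothetical rules, syntax per rctl(8):
#   subject:subject-id:resource:action=amount
# Deny further allocations once the jail's resident memory hits 2 GB:
jail:scanner:memoryuse:deny=2g
# Or just log when the clamav user crosses a threshold, without denying:
user:clamav:memoryuse:log=1g
```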
 
Well, that doesn't change the fact that limiting the application to a certain amount of RAM will kill that app when it grows, not make it use less memory.

What you want for a survivable limit is for paging to set in once a certain amount is reached. You can only get that by stuffing the application into a virtual machine with its own kernel, limiting its RAM from the host, and adding some swap space. That would actually limit the size of the application without killing it. Performance might be bad, though.
Maybe it's time to revisit the UNIX philosophy of keeping applications simple? Just look at what we can pull off with sed and awk...
 