mlock support not being picked up by jailed application

Hey All,

I have been facing a problem while trying to run HashiCorp's Vault service, and I was hoping for some guidance.
I've created a jail and installed Vault (v1.8.7) into it via `pkg -j vault-jail install vault`. The Vault installation itself seems to be working fine;
however, when I try to start a `vault server` instance I am met with the following error:

Code:
Error initializing core: Failed to lock memory: cannot allocate memory
This usually means that the mlock syscall is not available.
Vault uses mlock to prevent memory from being swapped to
disk. This requires root privileges as well as a machine
that supports mlock. Please enable mlock on your system or
disable Vault from using it. To disable Vault from using it,
set the disable_mlock configuration option in your configuration
file.

Following this thread, I took these steps:
  • Set allow.mlock in vault's jail configuration
  • Upped the deamon class memory lock to 1024M
  • Confirmed the vault user is a deamon class member.
  • Confirmed that setting `disable_mlock` in Vault's configuration leads to Vault running without errors (as per the error message's suggestion).
In the following environment:
  • Vault v1.8.7
  • FreeBSD 13.1-RELEASE
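For reference, raising a login class's memory-lock limit is normally done in /etc/login.conf. A sketch of what the daemon class entry would look like with a 1024M limit, assuming the stock layout (remember to rebuild the database with cap_mkdb /etc/login.conf afterwards):

```
daemon:\
	:memorylocked=1024M:\
	:tc=default:
```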
It seems to me that for some reason the `allow.mlock` directive isn't being respected, but I am not sure how that could be the case.
My jail.conf is as follows:

Code:
exec.start    = "/bin/sh /etc/rc";
exec.stop = "/bin/sh /etc/rc.shutdown jail";
exec.clean;
mount.devfs;
path="/var/jails/$name";
mount.devfs;
exec.clean;
exec.start="sh /etc/rc";
exec.stop="sh /etc/rc.shutdown";
allow.raw_sockets=1;t

vault {
    ip4.addr="127.0.0.1";
    host.hostname="vault";
    allow.mlock;
}

My jail's `rc.conf`:
Code:
sshd_enable="YES"
# Set dumpdev to "AUTO" to enable crash dumps, "NO" to disable
dumpdev="AUTO"

vault_enable="yes"
vault_user="vault"
vault_group="vault"
vault_login_class="root"
vault_syslog_output_enable="yes"

Any suggestions would be greatly appreciated.
Thanks!
 
Code:
             allow.mlock
                     Locking or unlocking physical pages in memory are
                     normally not available within a jail.  When this
                     parameter is set, users may mlock(2) or munlock(2) memory
                     subject to security.bsd.unprivileged_mlock and resource
                     limits.
Did you perhaps disable security.bsd.unprivileged_mlock in the security options during installation?
 
A quick check with sysctl shows that security.bsd.unprivileged_mlock: 1 on the jail host and within the jail. This seems correct to me.
 
Code:
             allow.mlock
                     Locking or unlocking physical pages in memory are
                     normally not available within a jail.  When this
                     parameter is set, users may mlock(2) or munlock(2) memory
                     subject to security.bsd.unprivileged_mlock and resource
                     limits.
Did you perhaps disable security.bsd.unprivileged_mlock in the security options during installation?
Revisiting this problem. Do you perhaps have any other ideas?
 
I am also experiencing the same error whenever I run Vault outside of the jail.
Should running Vault as root on the host OS still be causing problems with mlock?
 
It's daemon (in case it's misspelled for you somewhere). Also, try truss'ing Vault to see just how much memory it tries to lock.
 
Not sure if this is relevant, but you have a typo around allow.raw_sockets in the config.
Ah, fortunately this was just an error in transcription. The error doesn't exist in the host's config.

It's daemon (in case it's misspelled for you somewhere). Also, try truss'ing Vault to see just how much memory it tries to lock.
Thanks, I don't think daemon comes up anywhere in any of the configurations I touch, but I will take another look around. As for the number of times mlock is called before failure, it is once:
mlockall(MCL_CURRENT|MCL_FUTURE) ERR#12 'Cannot allocate memory'

Otherwise, I am not seeing any errors in the system calls.
 
I tested this in my VM with a jail in it. I did a simple test and it is working for me (13.0-RELEASE-p11) when the jail config has `allow.mlock;` in it.
In the jail I tested it with:
Code:
#include <stdio.h>
#include <sys/mman.h>

int main(void) {
        char *map;

        /* Map one anonymous read/write/execute page. */
        map = mmap(0, 4096, PROT_READ | PROT_WRITE | PROT_EXEC,
            MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (map == MAP_FAILED) {
                perror("mmap");
                return 1;
        }
        printf("map: %p\n", map);

        /*
        if (mlock(map, 4096) == -1) {
                perror("mlock");
                return 2;
        }
        */

        /* The same call Vault makes on startup. */
        if (mlockall(MCL_CURRENT | MCL_FUTURE) == -1) {
                perror("mlockall");
                return 2;
        }

        return 0;
}
Can you share the config you use to start your Vault server? I'm not familiar with Vault itself.
 
If you are referring to my jail.conf, I will repost the configuration I am using from above:
Code:
# jail.conf
exec.start    = "/bin/sh /etc/rc";
exec.stop = "/bin/sh /etc/rc.shutdown jail";
exec.clean;
mount.devfs;
path="/var/jails/$name";
mount.devfs;
exec.clean;
exec.start="sh /etc/rc";
exec.stop="sh /etc/rc.shutdown";
allow.raw_sockets=1;

vault {
    ip4.addr="127.0.0.1";
    host.hostname="vault";
    allow.mlock;
}

If you are referring to Vault's configuration, here it is:
Code:
storage "raft" {
  path    = "/var/lib/vault/data"
  node_id = "node1"
}

listener "tcp" {
  address     = "127.0.0.1:8200"
  tls_disable = "true"
}

api_addr = "http://127.0.0.1:8200"
cluster_addr = "https://127.0.0.1:8201"
ui = true

I've since tried running Vault inside and outside of the jail, both as `root` and as a `vault` user who is defined as follows:
Code:
vault:*:471:471:Vault Daemon:/nonexistent:/usr/sbin/nologin
I have also tried multiple versions of Vault, which the community has noted resolve a previous mlock issue, but to no avail.
 
I was interested in the Vault config, thanks. Interestingly enough, FreeBSD is returning an error that is not described in the man page: EAGAIN, errno 35.
Code:
 2311: mlockall(MCL_CURRENT|MCL_FUTURE)         ERR#35 'Resource temporarily unavailable'
I'd say that Vault should test for this condition, but as I've mentioned, even the man page of mlock() doesn't mention it. Googling around revealed, though, that EAGAIN was used before to indicate that the amount of memory to lock exceeds the system-wide limit.
But raising vm.max_user_wired to insanely big values didn't help (while vm.stats.vm.v_user_wire_count was 0).
A quick check of sys/vm/vm_mmap.c shows that the default return value (not a success, not a resource issue) is actually EAGAIN (so I'd say it should be in the man page then).
Quite interesting ...

You could always use disable_mlock = true in the Vault config. Not sure how big of an issue that is in your situation, though (it depends on how much memory pressure your system is under).
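For completeness, a sketch of where that setting goes: it is a top-level key in Vault's HCL server configuration, alongside settings such as `api_addr`. The comment reflects the trade-off discussed here, not official guidance:

```hcl
# Top-level key in the server's .hcl config file.
# Trades away memory locking: secrets may be swapped to disk,
# so consider it only with swap disabled or encrypted.
disable_mlock = true
```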
 
A quick check of sys/vm/vm_mmap.c shows that the default return value (not a success, not a resource issue) is actually EAGAIN (so I'd say it should be in the man page then).
You are likely looking at the sys_mlockall() source, and EAGAIN is documented for the mlockall() syscall.
 
You could always use disable_mlock = true in the Vault config. Not sure how big of an issue that is in your situation, though (it depends on how much memory pressure your system is under).
Unfortunately this is not an option for me, as it is (intended to be) a production environment.
I am starting to think that I may have to fall back to Linux, as I have had success there before.

Unless anyone has any other ideas?
 
The interesting thing is that this is not a jail issue. I wanted to debug it further, but the Go dependencies made it too unattractive for me to dive in. The failure occurs in (relative to the pkg source root) work/vault-1.8.7/vault/core.go, in the mlock.LockMemory function, which makes the syscall from sdk/helper/mlock/mlock_unix.go. It may be worth checking further how EAGAIN is handled there.

Depending on the server's use this may not be a problem, though: even with locking enabled, if you hit actual memory pressure the server may go down anyway. While locking may help in certain situations, it's not a solution to all memory-pressure issues.
 
It is a rather interesting problem, though. Error codes are not dealt with in the Go code, meaning that if the syscall fails, it just fails. I found the link to all Vault versions and did a small test. I used FreeBSD 12.3 and 13.0 with a handful of Vault versions. The highest Vault version I was able to run was v1.9.7 (confirmed mlockall returned 0) on FreeBSD 12.3. None worked on 13.0.
 
I was having a similar issue, but with MongoDB in a jail: whenever I enabled TLS support, the server wouldn't start.

It was showing something like Failed to mlock: Cannot allocate locked memory. see: https://dochub.mongodb.org/... : Operation not permitted

Adding allow.mlock; to my /etc/jail.conf and restarting the jail resolved the issue and allowed MongoDB to start and accept TLS connections.
 