MySQL/MariaDB in jail on ARM64 fails

Right, I'm stumped. I've got a Raspberry Pi 4B with FreeBSD 14.3-RELEASE on it. I'm hoping to use it for a few workloads that I'd like to separate into their own jails. This works fine on my AMD64 server elsewhere, but not on the Pi:

Code:
test {
  host.hostname = "test.area536.com";
  allow.raw_sockets;
  allow.mlock;
  allow.sysvipc;
  allow.mount;
  allow.mount.devfs;
  allow.mount.nullfs;
  allow.mount.procfs;
  enforce_statfs = 0;
  vnet;
  devfs_ruleset=7;
  mount.devfs;
  exec.prestart = "ifconfig epair2 create up";
  exec.prestart += "ifconfig epair2a up";
  exec.prestart += "ifconfig bridge0 addm epair2a up";
  vnet.interface = "epair2b";
  path = "/j/test";
  exec.start = "/bin/sh /etc/rc";
  exec.stop = "/bin/sh /etc/rc.shutdown";
  exec.poststop = "ifconfig epair2a destroy";
}

Inside the jail, I tried both MySQL 8.0 and MariaDB 10.6. Both fail to start and throw permission errors regarding /var/db/mysql. No files appear there at all, ever, no matter what I do. The mysql user simply won't write anything there, even though it has every permission to do so.

When I install either of the two database applications directly onto the host, nothing is wrong and things just start up fine. Am I missing something critical in my jail configuration? I'm running off the standard Raspberry Pi image, so it's all one big UFS filesystem.
 
Could it be a securelevel thing? Try sysctl kern.securelevel to check its value. I believe a jail's securelevel is always at least as high as the host's, and a raised securelevel can block operations regardless of user permissions. Might be something to check.

It would also help if you could share the actual error message.
 
Securelevel on the host is -1. Starting things up gives me:

Code:
root@test:/ # /usr/local/etc/rc.d/mysql-server start
Starting mysql.
su: /bin/sh: Permission denied
/usr/local/etc/rc.d/mysql-server: WARNING: failed to start mysql
root@test:/ #

This is all default, no changes. I just installed MariaDB 10.6 in here, added mysql_enable="YES" to /etc/rc.conf in the jail and ran the above.
 
Both fail to start and throw permission errors regarding /var/db/mysql. No files appear there at all, ever, no matter what I do. The mysql user simply won't write anything there, even though it has every permission to do so.
What does ls -ld /var/db/mysql show from inside the jail?
 
Code:
root@test:/ # ls -ld /var/db/mysql
drwxr-xr-x  2 mysql mysql 512 Sep  4 13:53 /var/db/mysql
root@test:/ #
 
From the jail console, try:

Code:
su -m mysql
touch /var/db/mysql/testxx-$$ /var/tmp/testxx-$$
ls -l /var/db/mysql/testxx-$$ /var/tmp/testxx-$$
rm -f /var/db/mysql/testxx-$$ /var/tmp/testxx-$$
 
Code:
Installing MariaDB/MySQL system tables in '/var/db/mysql' ...
2025-09-04 13:54:20 0 [ERROR] mariadbd: Can't create/write to file '/var/db/mysql/aria_log_control' (Errcode: 13 "Permission denied")

hmm....
 
It can't create a log file in /var/log/mysql either, even though that directory actually exists and is valid. When I do the same thing from the host itself: smooth sailing.
 
Code:
drwxr-xr-x  2 mysql mysql 512 Sep  4 13:53 /var/db/mysql

If you change this to drwxrwxrwx (0777) and it works, it means that for some reason the mysql user in the jail is not the user actually running the process.

Maybe wiser minds can corroborate
 
Ok.. you win the internet for today!

Now I do still have to figure out where in the chain of directories the issue is, but I just chmodded 777 every directory from the host filesystem all the way up to the jail's /var/db/mysql and now things do work.
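Instead of 777-ing the whole chain, you can inspect each directory component of the path from the host down to the jail's datadir. A sketch of the idea (walk_perms is a made-up helper, not something from the ports tree; the /var/tmp example just shows the output shape, on the Pi you'd point it at the jail root and below):

```shell
#!/bin/sh
# walk_perms: print `ls -ld` for every directory component of a path,
# so you can spot which one lacks the execute (search) bit for the
# user that has to traverse it.
walk_perms() (
  acc=""
  old_ifs=$IFS
  IFS=/
  for comp in $1; do
    [ -n "$comp" ] || continue
    acc="$acc/$comp"
    ls -ld "$acc"
  done
  IFS=$old_ifs
)

# Example with a path that exists on most systems; on the host you
# would run something like: walk_perms /j/test/var/db/mysql
walk_perms /var/tmp
```

Any component missing the x bit for the relevant user blocks access to everything beneath it, which is exactly the "never any files, ever" symptom.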
 
That's a +1 for mankind.

Probably some default setting, which depends on another default that differs on your other install, changed the ownership chain you were expecting. At least now you know which needle needs to be extracted from the haystack; happy hunting.
 
It's not like I touched the whole box with this. Just the jail and the two directories leading up to it. The jail itself is toast anyway so that'll get recycled, and fortunately my brain is not too far gone yet to wrap around two directories above it.
 
Found the culprit. The jail's own root directory /j/test (seen from the host) was chmod 700, root:wheel. Once that gets set to 0755 things do work correctly.
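A minimal, portable illustration of what was going on (a generic sketch with made-up directory names, not the actual Pi): without the execute (search) bit on a parent directory, no other user can reach anything beneath it, no matter how open the inner permissions are.

```shell
#!/bin/sh
# Reproduce the 0700-jail-root situation in a scratch directory.
d=$(mktemp -d)
mkdir -p "$d/jailroot/var/db/mysql"
echo data > "$d/jailroot/var/db/mysql/f"

chmod 0700 "$d/jailroot"   # like /j/test was: only the owner may traverse
ls -ld "$d/jailroot"       # drwx------  -> non-owners get EACCES below here

chmod 0755 "$d/jailroot"   # the fix: owner-writable, world-traversable
ls -ld "$d/jailroot"       # drwxr-xr-x

rm -rf "$d"
```

Inside the jail, mariadbd drops privileges to the mysql user, and that user's access checks still walk through the host-side jail root, hence "Errcode: 13" on a directory that looks perfectly fine from inside.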
 
Found the culprit. The jail's own root directory /j/test (seen from the host) was chmod 700, root:wheel.
See. That's the solution. You had an error about permissions; setting various directories (or files) to 777 and finding that it works doesn't add new knowledge to diagnose the problem. It worked with 777, so it must be permission related, which you already knew from the "permission denied" error. It's therefore a useless "test": it doesn't provide any new information. Plus it adds a whole bunch of risk. Once things start working, those 777 permissions are often forgotten about, which can lead to serious security issues.
 
I get your point, thank you, and I disagree with it. Flipping the permissions to 777 for all directories leading up to my jail actually does lead to new information: it considerably narrowed the haystack down to simple file system permissions rather than issues with the jail itself or some weird difference between AMD64/ARM64 and any defaults that may differ between those platforms. Having that information, I nuked the whole thing from orbit and it's now running a clean slate with a new deployment from my Ansible playbook. The only difference? Jail root dirs now get 0755 instead of 0700, all traceable through source code version control with a nice comment and deployment logs.
 
I thank you for your help, the issue is resolved. Let's agree to disagree here about the security implications of what I did in the sandbox in my basement and leave it at that.
 