Solved zfs and jail hostid mismatch in FreeBSD 14.1

Hi,

and another really strange effect:

in jail:
Code:
[root@test ~]# zpool status -v
  pool: vsd
 state: ONLINE
status: Mismatch between pool hostid and system hostid on imported pool.
        This pool was previously imported into a system with a different hostid,
        and then was verbatim imported into this system.
action: Export this pool on all systems on which it is imported.
        Then import it to correct the mismatch.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-EY
  scan: scrub repaired 0B in 00:00:07 with 0 errors on Wed Apr 17 17:33:16 2024
config:


        NAME        STATE     READ WRITE CKSUM
        vsd         ONLINE       0     0     0
          xbd4      ONLINE       0     0     0


errors: No known data errors

in host:
Code:
[root@fbsd14clone1 ~]# zpool status -v
  pool: vsd
 state: ONLINE
  scan: scrub repaired 0B in 00:00:07 with 0 errors on Wed Apr 17 17:33:16 2024
config:


        NAME        STATE     READ WRITE CKSUM
        vsd         ONLINE       0     0     0
          xbd4      ONLINE       0     0     0


errors: No known data errors

I stopped the jail, exported and re-imported the pool from the host, and restarted the jail, but the same mismatch error still shows up in the jail.
 
Yes, especially because zpool status reports it as an error (not a clean status, and an action is advised -> error). Otherwise I wouldn't care.
 
How are you making the pool accessible to the jail?
Hi adorno,
what do you mean exactly?

It's created and mounted from the host - start params:
securelevel=3 devfs_ruleset=4 allow.quotas=1 allow.set_hostname=1 allow.raw_sockets=0 allow.chflags=0 allow.sysvipc=1 allow.socket_af=0 allow.mlock=1 enforce_statfs=1
 
devfs_ruleset=4 nodying enforce_statfs=1 host=new ip4=disable ip6=disable osreldate=1401000 osrelease=14.1-RELEASE-p1 parent=0 nopersist securelevel=3 sysvmsg=inherit sysvsem=inherit sysvshm=inherit vnet=inherit zfs=new allow.nochflags allow.mlock allow.mount allow.mount.nodevfs allow.mount.nofdescfs allow.mount.nonullfs allow.mount.noprocfs allow.mount.notmpfs allow.mount.nozfs allow.nonfsd allow.quotas allow.noraw_sockets allow.noread_msgbuf allow.reserved_ports allow.set_hostname allow.nosocket_af allow.suser allow.sysvipc allow.unprivileged_proc_debug children.cur=0 children.max=0 cpuset.id=3 host.domainname="" host.hostid=0 host.hostuuid=00000000-0000-0000-0000-000000000000 ip4.saddrsel ip6.addr= ip6.saddrsel zfs.mount_snapshot=0
 
In your first post it looks like you are managing the zpool from the host and from within the jail at the same time. If that's the case, it isn't surprising that zfs isn't too happy. I might have misunderstood things, but it could be helpful in any case if you could describe what you want to achieve, and how you're trying to achieve it.
 
🤷‍♂️
especially because zpool status reports it as an error (no clean status and action advised -> error)
You are not supposed to have administrative access to the pool from two machines (the host and the jail) at the same time.
Even though it's a single physical machine and a single kernel and a single instance of ZFS driver, you are still confusing ZFS by doing what you do.
No surprise (to me) that it confuses you in return.
 
Hi Andriy,
hmm, that was possible for years in ZFS before OpenZFS - that's why it confuses me, since OpenZFS was announced as a replacement :)

So maybe I'm confusing OpenZFS with this usage, not the old ZFS. At least I would have loved a release note etc. about that change, which I didn't find anywhere (sure, maybe I missed it).

In the end, can I ignore it? I would expect OpenZFS's zpool status on FreeBSD to include a note about this, since jails are common on that OS.

Thanks a lot in advance for bringing some light to me in that case.
Jimmy
 
People usually do not expose the pool to jails.
The pool is managed from the host.
Individual datasets can be "jailed".
Utility of a jail is greatly reduced if it can do everything the host can do (e.g., destroy the whole pool).
 
I understand that you are making assumptions here, but unfortunately that doesn't help in assessing whether it is an error or just a message that looks like one.
I have about 1,700 hosts here, each with its own jail - and yes, the 'management' is done by the host, but listings, quota information etc. are also needed in the jail. Reading something isn't the same as destroying it...

And I wouldn't consider this so unusual - but that wasn't the point of my question. My question remains: is it a problem, or does it just look like a problem...?

Again, many thanks for your helpful input!
 
Which assumptions did I make?

As I said, the proper way to let jails access datasets is to delegate / jail the datasets.
If you want a jail to have administrative access on the pool level, then you are on your own.

Regarding an error message vs a real error, for me it's this:
  • if something tells me that I have an error, then it is an error message
  • if I actually cannot do something (it fails, it's broken, etc), then it is an error
It's possible to have an error without a message, a message without an error, or both a message and an error.
See for yourself what you have.
I already tried to explain the reason behind the message.
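For reference, the delegation Andriy describes looks roughly like this on FreeBSD (a hedged sketch; the dataset name vsd/jaildata and the jail name test are placeholder examples, not from this thread):

```shell
# On the host: create a dataset, mark it jailable, and attach it to the jail.
# The jail needs allow.mount.zfs and a suitable enforce_statfs setting.
zfs create vsd/jaildata
zfs set jailed=on vsd/jaildata
zfs jail test vsd/jaildata

# Inside the jail: the delegated dataset can now be mounted and managed,
# but the jail has no administrative access to the pool itself.
zfs mount vsd/jaildata
```

With this setup, `zpool` remains a host-only concern, which is why the hostid mismatch message never comes up for delegated datasets.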
 
You made an assumption about 'what people usually do', nothing more.

And, as written, I don't want administrative access to the pool from inside the jail. I just want to know the 'state' of the pool, with a clean result from 'zpool status' when no corruption exists. One can send tons of read-only queries from the jail to the kernel and hardware without getting irritating messages.

After searching the OpenZFS code a bit, it turned out to be easy to solve: set the jail parameter host.hostid to the sysctl kern.hostid of the jail's host. By default it's simply not set. Now the query result is as expected (at least for me). And since the jail has no rights for pool manipulation, it feels right to me.
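For anyone hitting the same message, a sketch of that fix (the jail name test is a placeholder; the value must be your own host's kern.hostid):

```shell
# On the host, read its hostid:
sysctl -n kern.hostid

# Apply it to a running jail:
jail -m name=test host.hostid=$(sysctl -n kern.hostid)

# Or make it persistent in /etc/jail.conf:
#   test {
#       host.hostid = <value of kern.hostid>;
#       ...
#   }
```

Since jails default to host.hostid=0 (as visible in the parameter dump above), zpool status inside the jail compares the pool's stored hostid against 0 and reports the mismatch.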

Anyway - thanks a lot for your input!

As you obviously have more knowledge about ZFS than other people, maybe you have some valuable input for this question, too:
https://forums.freebsd.org/threads/...pool-dataset-permission-denied-in-jail.94187/

In all versions of ZFS before OpenZFS, it was not a problem to query (not manipulate) the quotas from the jail when the rights were set on the host. Since OpenZFS it's no longer possible. Just 'cannot get used/quota for pool/dataset: permission denied', as if no rights were set.

Thanks again!
 