ZFS disk resets to a previous state after some time (hours?)

Hello,

I have an issue where the content of my disk seems to reset to a certain state after some time (hours). For instance, if I create a file in my home, the next day the file will have disappeared. I first noticed this with a jail I configured but whose configuration disappears after a while, as I mention in the following thread: Thread jail-configuration-non-persistent.98685.
I don't really know when it started behaving like that. A week ago I was able to update to 14.3, and the system still reports that version, so maybe something went wrong during the update.

My intuition is that it comes from a misconfigured backup procedure. With ZFS, I take a daily snapshot of my main pool and send it to a backup pool. Could it be that the mounted filesystem is the backup and not the main pool? Although I didn't have any issues in the past.
 
Files don't magically disappear, with any filesystem. There's always a reason. Usually it's pilot error.
 
Yes, I agree. And because I don't understand what I could have done wrong, or where to look, I've come to seek help.
 
Yes, I agree. And because I don't understand what I could have done wrong, or where to look, I've come to seek help.
If you hope to receive any help with your problem, you have to provide much more information about your setup: what ZFS pools you have and how they are configured, what ZFS filesystems you have and how they are configured, what your backup procedure is, how it is triggered and how often, etc. If files disappear, you can discover when they disappear simply by checking every minute (with a script) whether the file is still where it should be.
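A minimal sketch of such a watcher, assuming a hypothetical test file /home/user/canary and a one-minute interval (adjust path, interval and log file to taste):

sh:
#!/bin/sh
# Log once a minute whether the test file is still present, so the exact
# time of the "reset" shows up in the log.
f=/home/user/canary
while :; do
    if [ -e "$f" ]; then
        echo "$(date '+%F %T') present"
    else
        echo "$(date '+%F %T') MISSING"
    fi
    sleep 60
done >> /var/log/canary-check.log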
 
Thank you.

I wasn't sure a ZFS misconfiguration would make sense, so I didn't want to flood my initial post with useless information.

I could reproduce the error. I'm still not sure what is misconfigured, but my backup datasets are actually mounted on the same mountpoints as my server pool, and I don't think that's normal.

Code:
# zfs list -r zroot
NAME                                           USED  AVAIL  REFER  MOUNTPOINT
zroot                                         12.0G   880G    96K  /zroot
zroot/ROOT                                    6.90G   880G    96K  none
zroot/ROOT/14.2-RELEASE-p4_2025-07-21_183053     8K   880G  4.98G  /
zroot/ROOT/14.2-RELEASE_2025-07-21_174635        8K   880G  4.47G  /
zroot/ROOT/14.3-RELEASE_2025-07-21_183541        8K   880G  5.05G  /
zroot/ROOT/default                            6.90G   880G  5.08G  /
zroot/home                                     216M   880G   108K  /home
zroot/home/user                                215M   880G   215M  /home/user
zroot/jails                                   4.76G   880G   104K  /usr/local/jails
zroot/jails/containers                        4.57G   880G  4.08G  /usr/local/jails/containers
zroot/jails/media                              197M   880G   197M  /usr/local/jails/media
zroot/jails/template                            96K   880G    96K  /usr/local/jails/template
zroot/usr                                      288K   880G    96K  /usr
zroot/usr/ports                                 96K   880G    96K  /usr/ports
zroot/usr/src                                   96K   880G    96K  /usr/src
zroot/var                                     26.0M   880G    96K  /var
zroot/var/audit                                 96K   880G    96K  /var/audit
zroot/var/crash                                 96K   880G    96K  /var/crash
zroot/var/log                                 14.0M   880G  1.57M  /var/log
zroot/var/mail                                11.5M   880G  10.3M  /var/mail
zroot/var/tmp                                  260K   880G   100K  /var/tmp
# zfs list -r pool1/backups/server
NAME                                                               USED  AVAIL  REFER  MOUNTPOINT
pool1/backups/server                                         13.0G  4.73T    96K  /zroot
pool1/backups/server/ROOT                                    6.49G  4.73T    96K  none
pool1/backups/server/ROOT/14.2-RELEASE-p4_2025-07-21_183053     0B  4.73T  4.90G  /
pool1/backups/server/ROOT/14.2-RELEASE_2025-07-21_174635        0B  4.73T  4.45G  /
pool1/backups/server/ROOT/14.3-RELEASE_2025-07-21_183541        0B  4.73T  4.95G  /
pool1/backups/server/ROOT/default                            6.49G  4.73T  4.85G  /
pool1/backups/server/home                                    2.47G  4.73T   108K  /home
pool1/backups/server/home/user                               2.47G  4.73T   215M  /home/user
pool1/backups/server/jails                                   4.01G  4.73T   104K  /usr/local/jails
pool1/backups/server/jails/containers                        3.82G  4.73T  3.42G  /usr/local/jails/containers
pool1/backups/server/jails/media                              197M  4.73T   197M  /usr/local/jails/media
pool1/backups/server/jails/template                            96K  4.73T    96K  /usr/local/jails/template
pool1/backups/server/tmp                                     2.98M  4.73T   388K  /tmp
pool1/backups/server/usr                                      288K  4.73T    96K  /usr
pool1/backups/server/usr/ports                                 96K  4.73T    96K  /usr/ports
pool1/backups/server/usr/src                                   96K  4.73T    96K  /usr/src
pool1/backups/server/var                                     23.4M  4.73T    96K  /var
pool1/backups/server/var/audit                                 96K  4.73T    96K  /var/audit
pool1/backups/server/var/crash                                 96K  4.73T    96K  /var/crash
pool1/backups/server/var/log                                 11.7M  4.73T  1.43M  /var/log
pool1/backups/server/var/mail                                11.2M  4.73T  10.0M  /var/mail
pool1/backups/server/var/tmp                                  228K  4.73T   100K  /var/tmp

And during my daily backup, sending the incremental snapshots to the backup dataset actually reverts the state of zroot. I also noticed that every snapshot had a size of 0 bytes.
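(I checked that with something like the command below; the USED column of a snapshot shows the space unique to that snapshot, so 0B everywhere is consistent with nothing having changed on zroot between snapshots.)

Code:
# zfs list -t snapshot -o name,used -s creation zroot/home/user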

Below is the backup script.

sh:
#!/usr/bin/env sh

# Must run as root to take snapshots and send/receive.
[ "$(id -u)" != 0 ] && echo "must be run as root" && exit 1

dataset=zroot
backup_dataset=pool1/backups/server

# Today's snapshot name, and the most recent existing snapshot
# (lexicographic sort works because the names are ISO dates).
snapshot="${dataset}@$(date -I)"
last_snapshot=$(zfs list -H -t snapshot -o name ${dataset} | sort | tail -1)

echo -n "Creating snapshot ${snapshot}... "
zfs snapshot -r ${snapshot}
echo "Done"

# Replicated incremental stream from the previous snapshot to today's,
# received with forced rollback (-F) and without mounting (-u).
echo -n "Sending incremental backup from ${last_snapshot} to ${backup_dataset}... "
zfs send --replicate -I ${last_snapshot} ${snapshot} | zfs receive -Fu ${backup_dataset}
echo "Done"

So first, my backup dataset is actually mounted on the same mountpoint as my server pool, both on /zroot. That's clearly an issue, isn't it? I would logically have it mounted on /pool1/backups/server (which doesn't exist, by the way), and I certainly didn't do that on purpose.

I also tried manually backing up zroot to another dataset on pool1, and after sending the snapshot its mountpoint also became /zroot.

Code:
# zfs snapshot -r zroot@test
# zfs send -R zroot@test | zfs receive pool1/test-backup
# zfs get mountpoint pool1/test-backup
NAME               PROPERTY    VALUE       SOURCE
pool1/test-backup  mountpoint  /zroot      received
 
Just going by what you describe ("the file will have disappeared", "backup procedure", "I take a daily snapshot"), I would guess the snapshots are being restored automatically. Check whether not only recent files are missing, but also whether edited files stay at (or fall back to) a former state. Maybe the backup procedure was upgraded recently and somehow a bug got into it, so it restores snapshots instead of creating new ones, or target and source were confused, or something like that...
Just a spontaneous idea of mine.
 
I suspect your 'backup' pool1/backups/server/jails/* is actually being mounted on top of the 'original' zroot/jails/* filesystems; they both have the same mountpoint.

Whenever you made a change in /usr/local/jails/*, it was actually done on pool1/backups/server/jails, and the backup procedure took the files from zroot/jails (which were hidden 'underneath' the pool1/backups/server/jails mount) and reverted your changes.
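One quick way to check which dataset actually backs a given path is df: the filesystem it reports is the topmost mount, so if it prints pool1/backups/server/jails instead of zroot/jails, the backup dataset is shadowing the original. For example:

Code:
# df -h /usr/local/jails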

What does zfs get canmount pool1/backups/server/jails and zfs get mounted pool1/backups/server/jails say? A plain mount should also show how the filesystems are overlapping.
 
Yes, and actually so is every other dataset from pool1/backups/server.
Code:
root@server /home/user # zfs get canmount pool1/backups/server/jails
NAME                             PROPERTY  VALUE     SOURCE
pool1/backups/server/jails  canmount  on        default
root@server /home/user # zfs get mounted pool1/backups/server/jails
NAME                             PROPERTY  VALUE    SOURCE
pool1/backups/server/jails  mounted   yes      -
root@server /home/user # mount
zroot/ROOT/default on / (zfs, local, noatime, nfsv4acls)
devfs on /dev (devfs)
/dev/gpt/efiboot0 on /boot/efi (msdosfs, local)
zroot/home on /home (zfs, local, noatime, nfsv4acls)
zroot/var/audit on /var/audit (zfs, local, noatime, noexec, nosuid, nfsv4acls)
zroot/usr/ports on /usr/ports (zfs, local, noatime, nosuid, nfsv4acls)
zroot/usr/src on /usr/src (zfs, local, noatime, nfsv4acls)
zroot/jails on /usr/local/jails (zfs, local, noatime, nfsv4acls)
zroot/var/mail on /var/mail (zfs, local, nfsv4acls)
zroot/var/tmp on /var/tmp (zfs, local, noatime, nosuid, nfsv4acls)
pool1/backups/server on /zroot (zfs, local, noatime, nfsv4acls)
zroot on /zroot (zfs, local, noatime, nfsv4acls)
pool1/backups/server/tmp on /tmp (zfs, local, noatime, nosuid, nfsv4acls)
pool1/backups/server/var/crash on /var/crash (zfs, local, noatime, noexec, nosuid, nfsv4acls)
zroot/var/crash on /var/crash (zfs, local, noatime, noexec, nosuid, nfsv4acls)
pool1/backups/server/usr/src on /usr/src (zfs, local, noatime, nfsv4acls)
pool1/backups/server/var/tmp on /var/tmp (zfs, local, noatime, nosuid, nfsv4acls)
pool1/backups/server/usr/ports on /usr/ports (zfs, local, noatime, nosuid, nfsv4acls)
pool1/backups/server/var/audit on /var/audit (zfs, local, noatime, noexec, nosuid, nfsv4acls)
pool1/backups/server/home on /home (zfs, local, noatime, nfsv4acls)
zroot/home/user on /home/user (zfs, local, noatime, nfsv4acls)
pool1/backups/server/var/mail on /var/mail (zfs, local, nfsv4acls)
pool1/backups/server/var/log on /var/log (zfs, local, noatime, noexec, nosuid, nfsv4acls)
zroot/var/log on /var/log (zfs, local, noatime, noexec, nosuid, nfsv4acls)
pool1/backups/server/home/user on /home/user (zfs, local, noatime, nfsv4acls)
pool1 on /pool1 (zfs, local, nfsv4acls)
pool1/backups/server/jails on /usr/local/jails (zfs, local, noatime, nfsv4acls)
zroot/jails/media on /usr/local/jails/media (zfs, local, noatime, nfsv4acls)
zroot/jails/template on /usr/local/jails/template (zfs, local, noatime, nfsv4acls)
pool1/backups/server/jails/media on /usr/local/jails/media (zfs, local, noatime, nfsv4acls)
pool1/backups/server/jails/template on /usr/local/jails/template (zfs, local, noatime, nfsv4acls)
pool1/backups on /pool1/backups (zfs, local, nfsv4acls)
pool1/backups/server/jails/containers on /usr/local/jails/containers (zfs, local, noatime, nfsv4acls)
zroot/jails/containers on /usr/local/jails/containers (zfs, local, noatime, nfsv4acls)
pool1/test-backup on /zroot (zfs, local, noatime, nfsv4acls)
pool1/videos on /pool1/videos (zfs, NFS exported, local, nfsv4acls)
pool1/books on /pool1/books (zfs, local, nfsv4acls)
devfs on /usr/local/jails/containers/blog/dev (devfs)
/pool1/videos on /usr/local/jails/containers/theater/videos (nullfs, local, noatime)
devfs on /usr/local/jails/containers/theater/dev (devfs)
devfs on /usr/local/jails/containers/adblocker/dev (devfs)

root@server /home/tomi # zfs get canmount  pool1/test-backup
NAME               PROPERTY  VALUE     SOURCE
pool1/test-backup  canmount  on        default
root@server /home/tomi # zfs get mounted  pool1/test-backup
NAME               PROPERTY  VALUE    SOURCE
pool1/test-backup  mounted   yes      -


And the test backup I did has its dataset mounted as well, even though I never mounted it. (As a reminder: I manually backed up zroot to another dataset on pool1, and after sending the snapshot its mountpoint also became /zroot.)

Code:
# zfs snapshot -r zroot@test
# zfs send -R zroot@test | zfs receive pool1/test-backup
# zfs get mountpoint pool1/test-backup
NAME               PROPERTY    VALUE       SOURCE
pool1/test-backup  mountpoint  /zroot      received

Code:
root@server /home/user # zfs get canmount  pool1/test-backup
NAME               PROPERTY  VALUE     SOURCE
pool1/test-backup  canmount  on        default
root@server /home/user # zfs get mounted  pool1/test-backup
NAME               PROPERTY  VALUE    SOURCE
pool1/test-backup  mounted   yes      -
root@server /home/user # mount | grep test
pool1/test-backup on /zroot (zfs, local, noatime, nfsv4acls)
 
Maybe the backup procedure was upgraded recently and somehow a bug got into it, so it restores snapshots instead of creating new ones, or target and source were confused, or something like that...
Thank you for sharing your idea. I looked at zfs-send(8) and the zfs send documentation to see if something had changed, but I could not spot anything.

What I'm wondering is why the datasets get mounted when receiving the snapshot stream. It doesn't sound like something that should happen (or at least not implicitly), but I may have misunderstood the objective of the procedure.
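For what it's worth, the mountpoints on the backup tree show up with SOURCE received (see the pool1/test-backup output above), i.e. the property travels inside the replication stream and is applied on receive. A quick way to confirm this for the backup datasets:

Code:
# zfs get -o name,property,value,source mountpoint pool1/backups/server pool1/backups/server/jails

zfs-receive(8) in recent OpenZFS also accepts -o property=value and -x property to override or drop such received properties at receive time; I haven't tried that yet, but it looks like the place to prevent this.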
 
Hello, I had some time today to try to fix the issue.

As a reminder: my server backup datasets, located on a different pool than my system, seem to be mounted on top of the server's datasets.

So I booted into a live environment, in order to have neither pool nor dataset mounted, and tweaked the backup dataset's ZFS properties to prevent it from mounting.

On the live system, after importing the backup pool, I did something like

Code:
# zfs set canmount=off pool1/backups/server

After rebooting into the system, the property was still set accordingly and the dataset was not mounted (according to zfs).

Code:
root@server /home/user # zfs get canmount pool1/backups/server
NAME                       PROPERTY  VALUE     SOURCE
pool1/backups/server  canmount  off       local

root@server /home/user # zfs get mounted pool1/backups/server
NAME                       PROPERTY  VALUE    SOURCE
pool1/backups/server  mounted   no       -

However, I could see with mount that the child datasets are mounted nonetheless.

Code:
# mount | grep pool1/backups/server
pool1/backups/server/jails on /usr/local/jails (zfs, local, noatime, nfsv4acls)
pool1/backups/server/jails/media on /usr/local/jails/media (zfs, local, noatime, nfsv4acls)
pool1/backups/server/jails/template on /usr/local/jails/template (zfs, local, noatime, nfsv4acls)
pool1/backups/server/var/mail on /var/mail (zfs, local, nfsv4acls)
pool1/backups/server/usr/ports on /usr/ports (zfs, local, noatime, nosuid, nfsv4acls)
pool1/backups/server/home on /home (zfs, local, noatime, nfsv4acls)
pool1/backups/server/tmp on /tmp (zfs, local, noatime, nosuid, nfsv4acls)
pool1/backups/server/var/crash on /var/crash (zfs, local, noatime, noexec, nosuid, nfsv4acls)
pool1/backups/server/var/tmp on /var/tmp (zfs, local, noatime, nosuid, nfsv4acls)
pool1/backups/server/var/audit on /var/audit (zfs, local, noatime, noexec, nosuid, nfsv4acls)
pool1/backups/server/usr/src on /usr/src (zfs, local, noatime, nfsv4acls)
pool1/backups/server/var/log on /var/log (zfs, local, noatime, noexec, nosuid, nfsv4acls)
pool1/backups/server/home/user2 on /home/user2 (zfs, local, noatime, nfsv4acls)
pool1/backups/server/jails/containers on /usr/local/jails/containers (zfs, local, noatime, nfsv4acls)
pool1/backups/server/home/user on /home/user (zfs, local, noatime, nfsv4acls)
pool1/backups/server/home/user3 on /home/user3 (zfs, local, noatime, nfsv4acls)
root@server /home/user #

Well, I'm clueless about what to do next. Do you have any idea that might help me?
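The only further idea I've found so far: according to zfsprops(7), canmount is not inherited, so setting it on pool1/backups/server alone leaves all the child datasets at their default canmount=on. Maybe it has to be switched off on every dataset of the backup tree, something like this (untested sketch):

sh:
# canmount is per-dataset, so walk the whole backup tree...
zfs list -H -o name -r pool1/backups/server | xargs -n 1 zfs set canmount=off
# ...then unmount whatever is currently mounted, deepest datasets first.
zfs list -H -o name -r pool1/backups/server | sort -r | xargs -n 1 zfs unmount 2>/dev/null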
 