ZFS zfs receive: "cannot receive incremental stream: destination ... has been modified"

cracauer@

Developer
I have a backup disk with one ZFS pool and a number of filesystems that should receive incremental snapshots.

For some but not all filesystems it refuses with "cannot receive incremental stream: destination ... has been modified". I am sure I made no such modification; this is purely a backup disk that has never been used to do any work. Has this happened to anybody else? What's behind this? Please tell me it is not something silly such as atime having been changed by a find(1) job.

Anyway, assuming this can't be fixed, my only alternative is to send a non-incremental snapshot, which requires removing the entire target filesystem first. Apart from being slow, this also has the disadvantage of losing all previously existing snapshots on the backup disk.
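Roughly what I mean, with placeholder pool/dataset/snapshot names:

# incremental update that fails with "destination ... has been modified":
zfs send -i tank/data@prev tank/data@today | zfs receive backup/data

# full-send fallback: destroys the destination and every snapshot on it first
zfs destroy -r backup/data
zfs send tank/data@today | zfs receive backup/data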

Any ideas what the best way to deal with this is?
 
For backup ZFS filesystems/volumes (created and updated via zfs recv), I always set the zfs readonly property to on, so that nothing can accidentally modify the filesystem (including metadata like atime). Additionally, if you don't need the backup mounted, you can just leave it unmounted for another layer of prevention. (Recv with -u, and additionally change the canmount property on the backup filesystems to noauto.)

You can do this because the zfs readonly property only prevents changes via the POSIX layer (i.e. via directories and files); it does not prevent updating the filesystem via a subsequent zfs recv. With this set, I've never run into "destination modified" issues. (Which I previously had experienced on filesystems that were mounted and accidentally received metadata updates.)
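A minimal sketch of that setup, with placeholder dataset names:

# one-time setup on the backup dataset: readonly blocks POSIX-layer changes,
# canmount=noauto keeps it from being mounted automatically
zfs set readonly=on backup/data
zfs set canmount=noauto backup/data

# receive without mounting (-u), so nothing on the backup host touches the files
zfs send -i tank/data@prev tank/data@today | zfs receive -u backup/data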

That said, you can use zfs-diff(8) to verify what it thinks has changed, and if it is truly a backup filesystem with no changes you wish to keep, use zfs-rollback(8) to return to the latest sent snapshot — the updating incremental send's starting snapshot. When I've run into this issue with accidental updates to a filesystem, this has successfully gotten the system back to a state where it is ready to receive the incremental updates.
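Roughly like this, assuming backup/data@prev is the latest snapshot both sides have (names are placeholders):

# show what ZFS thinks changed on the backup since that snapshot
zfs diff backup/data@prev

# if nothing there is worth keeping, discard it and return to the snapshot
# (a plain rollback works when @prev is the most recent snapshot on the backup)
zfs rollback backup/data@prev

# the incremental receive should now succeed
zfs send -i tank/data@prev tank/data@today | zfs receive -u backup/data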

N.B.: While there is a flag (-F) to use with zfs-recv(8) that will automatically do this rollback (to be able to perform the incremental receive), it has additional side effects, so I never recommend it. Far better to manually roll back the affected filesystems this time, and adjust (readonly=on) properties to prevent the issue, than to bring out the -F hammer.
 
Thank you, Eric. That all makes sense. I wonder why I didn't come across this problem with other arrays, maybe I had atime off.
 
I always set the zfs readonly property to on
+1
...and canmount="noauto" and *definitely* no imported mountpoints (i.e. they are inherited from the pool/path).
I usually set readonly and canmount at the root dataset that holds the backups, and the user (usually 'backup') that runs the zfs send|recv backup scripts can't change those (or most other) zfs properties.

(learned some of this the hard way...)
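Something along these lines, where 'backup' is the unprivileged user and bkpool is the pool holding the backups (both just examples):

# set the restrictive properties once, as root, at the top of the backup hierarchy
zfs set readonly=on bkpool
zfs set canmount=noauto bkpool

# delegate only what the 'backup' user needs to receive streams;
# the ability to change readonly/canmount is deliberately not granted
zfs allow backup create,mount,receive bkpool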
 
I've noticed that if a zfs dataset (eg. pri_zp/Z1) contains a "sub" zfs dataset (eg. pri_zp/Z1/Z00) for which replication (to `sec_zp/Z1`) has previously worked, then if you add (and `zfs send`) a fresh "sub" zfs dataset (eg. pri_zp/Z1/Z99), `sec_zp/Z1` will become pickled on SELinux.

cf. https://unix.stackexchange.com/ques...-zp-z1-z99-future-to-pri-zp-z1-and-resuming-r

I'm currently looking for a workaround.
 
pro tip: on newer (FreeBSD 13+) zfs-recv(8), you can zfs recv -o canmount=noauto ... to set it automatically during (initial) receives.
You can also use -x instead of -o; either can be undone with -b on the send that restores the data. Doing so lets you change settings on the initial source and on the backup independently of each other, and know that your next restore puts things back to how they were before the backup.
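A rough sketch of how those options fit together (dataset names are placeholders):

# set a property at receive time (needs a newer zfs-recv, e.g. FreeBSD 13+)
zfs send tank/data@today | zfs receive -u -o canmount=noauto backup/data

# or exclude a property so the backup side keeps its own/inherited value
zfs send -i tank/data@prev tank/data@today | zfs receive -u -x mountpoint backup/data

# when restoring, -b sends the properties as originally received,
# rather than any overrides made at or after receive time
zfs send -b backup/data@today | zfs receive tank/restored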

As for the initial problem I agree with setting readonly but you can have zfs recv force a rollback when you don't need any filesystem changes.
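If you do go that route, the forced variant looks like this (names again placeholders):

# -F rolls the destination back to its most recent snapshot before receiving;
# with a replication stream (zfs send -R), it can also destroy snapshots and
# filesystems that no longer exist on the sender, hence the caution above
zfs send -i tank/data@prev tank/data@today | zfs receive -F backup/data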
 
"cannot receive incremental stream: destination ... has been modified"
Just want to share an anecdote about this.
Not sure about now, but in the past it was possible to get a modified received filesystem by simply mounting it.
No further accesses were involved, just mounting was enough.
The culprit was the so-called unlinked list, where ZFS stored the IDs of files that had been unlinked but were still open.
If a system crashed (ungracefully rebooted, in general) then ZFS would know which files had to be removed because they were not linked into the filesystem tree.

So, it turned out, it was possible to receive a filesystem with a non-empty unlinked list if the list was non-empty on the send side at send time.
When the received filesystem got mounted, the unlinked list got processed and some files got removed.
Thus, the filesystem got modified, although logically it didn't diverge from the original.
 
Not sure about now, but in the past it was possible to get a modified received filesystem by simply mounting it.

That is still a thing. I don't think it needs an ungraceful shutdown; you can snapshot at the wrong time and then send a snapshot with the unlinked list included.

I suppose you can never do an incremental snapshot send/receive when that problem hits.
 
I got the same 'cannot receive incremental stream: destination... has been modified' error after restoring a snapshot yesterday that had been taken earlier in the day. Later snapshots were taken by Zrepl before I realised that I needed to roll back a filesystem.

I didn't roll back using send/recv as there were sufficient snapshots on the local machine. However, I have noticed this morning that the host that was successfully restored now cannot back up the same filesystem.

Do I have to destroy all of the snapshots on both sender and receiver to get this working again? I hope not, as I don't want to lose the snapshots.
 