ZFS: Confused about what is backed up by a ZFS snapshot

When I execute zfs list -r, I get:

Code:
(18:18)ROOT@anthem:/root# zfs list -r
NAME                 USED  AVAIL  REFER  MOUNTPOINT
zroot               22.1G   424G    88K  /zroot
zroot/ROOT          4.01G   424G    88K  none
zroot/ROOT/default  4.01G   424G  4.01G  /
zroot/tmp            180K   424G   180K  /tmp
zroot/usr           18.1G   424G    88K  /usr
zroot/usr/home       136K   424G   136K  /usr/home
zroot/usr/ports     17.4G   424G  17.4G  /usr/ports
zroot/usr/src        680M   424G   680M  /usr/src
zroot/var            740K   424G    88K  /var
zroot/var/audit       88K   424G    88K  /var/audit
zroot/var/crash       88K   424G    88K  /var/crash
zroot/var/log        268K   424G   268K  /var/log
zroot/var/mail       120K   424G   120K  /var/mail
zroot/var/tmp         88K   424G    88K  /var/tmp
(18:18)ROOT@anthem:/root# ls /
.cshrc .rnd bin dev etc lib media net rescue sbin tmp var
.profile COPYRIGHT boot entropy home libexec mnt proc root sys usr zroot
(18:18)ROOT@anthem:/root#
So why aren't the /bin, /etc, /lib, /sbin, /libexec, and /root directories listed by zfs list -r?

According to the manual (19.4.5.1, Creating Snapshots), the above command should "Create a recursive snapshot of the entire pool". So why aren't the directories /bin, /etc, /lib, /sbin, /libexec, and /root listed?

To further explore this, I took a snapshot. Then I created a file with touch /root/test.me. And to be sure, I also made a listing of the file system with dir >> dir.txt, which resulted in a 17 MB file. I copied both files to /var. I then restored the snapshot, and the files were still there. Shouldn't the files have been deleted by restoring the snapshot?

In confusion,

-JJ
 
This looks like the default file layout you get if you selected ZFS during install, as opposed to creating your own datasets.

As for the snapshots, you need to list by type snapshot, i.e.:
zfs list -t snapshot

The proper format looks like this:

Code:
titan/var/mail@2020-09-05_12.00.00--7d                                                               0      -    88K  -

Anything before the @ is the dataset, and anything after it is the actual snapshot name.
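For example, you could create and list one by hand like this (hypothetical dataset and snapshot names):

Code:
# dataset: titan/var/mail, snapshot name: mysnap
zfs snapshot titan/var/mail@mysnap
# list only that dataset's snapshots
zfs list -t snapshot titan/var/mail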

If you're looking for an easy, reliable snapshot tool:
pkg install zfsnap

Then add a crontab entry:
Code:
0       9       *       *       0-6     root    /usr/local/sbin/zfSnap -r -a 7d ZPOOLNAME

That will snapshot the entire pool every day at 9am and expire the snapshots after a week; you can change that to whatever you like.
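If I remember zfsnap correctly, -a 7d only tags each snapshot with a time-to-live; a second periodic run with -d is what actually deletes the expired ones. Something like this (from memory, so check zfSnap's man page):

Code:
# companion crontab line: delete snapshots whose TTL has passed
0       10      *       *       0-6     root    /usr/local/sbin/zfSnap -d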

As for your original question:

Code:
zroot/ROOT/default 4.01G 424G 4.01G /

Your root dataset is mounted at / and covers everything that is not covered by one of the other defined datasets. Thus /etc and /bin fall under zroot/ROOT/default.
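You can check this yourself; a quick sketch of what I'd expect df to report (output abridged):

Code:
# ask which file system /bin actually lives on
df /bin
# Filesystem          ...  Mounted on
# zroot/ROOT/default  ...  /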

Code:
zroot/tmp         180K  424G   180K  /tmp
zroot/usr        18.1G  424G    88K  /usr
zroot/usr/home    136K  424G   136K  /usr/home
zroot/usr/ports  17.4G  424G  17.4G  /usr/ports
zroot/usr/src     680M  424G   680M  /usr/src
zroot/var         740K  424G    88K  /var
zroot/var/audit    88K  424G    88K  /var/audit
zroot/var/crash    88K  424G    88K  /var/crash
zroot/var/log     268K  424G   268K  /var/log
zroot/var/mail    120K  424G   120K  /var/mail
zroot/var/tmp      88K  424G    88K  /var/tmp

This is the standard list of datasets and their mount points, as created at install time.

Someone can correct me, but I believe this is done so that you can easily replicate installs from one machine to another if needed. I do know the default dataset layout was carefully decided on by the ZFS leadership; as always, you're free to slice up the pool any way you like, though.

As for the content of a snapshot: at the highest level, it holds on to the literal blocks of data that changed between the time it was taken and the current time/date.

For example, you could have a 20 TB pool where a snapshot is only a few gigs; however, it contains all of the information needed to bring a second copy of the pool into line. This is VERY useful when doing offsite or machine-to-machine transfers, as you're not paying to resend 20 TB of data, just the changed blocks.

Typically you would install a pool, replicate the pool to another machine, and then create a zfs send/receive cron job to send and apply snapshots; this will keep the other machine updated with the new changes on a regular schedule.
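A minimal sketch of that idea, assuming a remote host named backuphost, a target dataset tank/anthem, and made-up snapshot names:

Code:
# one-time full replication of the whole pool hierarchy
zfs snapshot -r zroot@base
zfs send -R zroot@base | ssh backuphost zfs receive -u tank/anthem
# later (e.g. from cron): send only the blocks that changed since @base
zfs snapshot -r zroot@daily1
zfs send -R -i @base zroot@daily1 | ssh backuphost zfs receive -u -F tank/anthem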
 
I'm not sure the following reply will be useful in de-confusing you, but I'll try:

When I execute zfs list -r, I get:
...
So why aren't the /bin, /etc, /lib, /sbin, /libexec, and /root directories listed by zfs list -r?
Because they are just directories in the root directory, which is a ZFS dataset (called zroot/ROOT/default).

Are you familiar with traditional file systems? For the purpose of this discussion, ZFS datasets are sort of like file systems. There is a root file system, always mounted at "/", hence the name. Unless otherwise specified, every file system object (file or directory) underneath "/" will be in the root file system. Now, people usually create other file systems and put them on mount points. ZFS does the same thing. In your example above, there is a separate file system called zroot/tmp, which is mounted at /tmp, so all the files in the tmp directory (like /tmp/foo) will be in the zroot/tmp file system (or dataset, if you want to use ZFS names).

But if you look carefully, you will see that /bin, /etc, and all the others you listed above are not separate mount points, nor inside any of the other mount points (i.e., separate file systems or datasets). Therefore, they are part of the root file system.
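A small illustration (hypothetical dataset names): each zfs create makes a new "file system" with its own mount point, and anything not covered by one falls through to the root dataset:

Code:
# new dataset; its mountpoint (/usr/obj) is inherited from its position in the tree
zfs create zroot/usr/obj
# new dataset with an explicitly set mountpoint
zfs create -o mountpoint=/www zroot/www
# show which dataset is mounted where
zfs list -o name,mountpoint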

According to the manual (19.4.5.1, Creating Snapshots), the above command should "Create a recursive snapshot of the entire pool". So why aren't the directories /bin, /etc, /lib, /sbin, /libexec, and /root listed?
What command? And what was its output?

However, the explanation is likely very simple: /bin and so on are simply part of the root file system, and were captured when that file system was snapshotted.

To further explore this, I took a snapshot. Then I created ... I copied both files to /var. I then restored the snapshot, and the files were still there.
When you say you "restored the snapshot", what do you mean? If you had looked inside the snapshot after creating those two new files in /var, the files should not have appeared in it, since the snapshot was taken first. But I don't understand what you mean by "restore" here.
 
Terminology. I do understand that ZFS creates a snapshot of the file system. It does not back up the data / file system.

Well, I wrote a long explanation of what I'm trying to do. I read, re-read, re-read, and read some more. It came down to this: the snapshot represents the changed data. I was thinking the snapshot was the static picture.

So when I want to do some development work, I take a snapshot. If things go south, I just roll back the snapshot. Nice!
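As a sketch of that workflow (made-up snapshot name; note that rollback operates per dataset):

Code:
# checkpoint before the risky work
zfs snapshot -r zroot@before_dev
# ... hack away; if things go south ...
# discard everything done on the root dataset since the checkpoint
zfs rollback zroot/ROOT/default@before_dev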

Thank you all for your answers.

-JJ, not too soon to be a ZFS expert.
 
Yes, that's correct. It's good practice to take regular snapshots; they take a minimal amount of space and you can nuke them as needed.

If you're getting into ZFS and need some awesome resources, check out the ZFS Mastery books by MWL and AJ; the best $40 you can spend: https://mwl.io/nonfiction/os Awesome dudes, awesome books.
 
Okay, now for the snapshot restore portion of my question.

I got everything set up the way I wanted it and took a snapshot with zfs snapshot -r zroot@snapshot_090820_1457. Running zfs diff zroot/ROOT/default@snapshot_090820_1457 shows all the changes made while installing mythtv. It bombed, as before. So now I want to put my data back in the state it was in at 1457. I tried to execute zfs rollback zroot@snapshot_090820_1457 and nothing happened. So I did zfs diff zroot@snapshot_090820_1457: nothing. Remembering our conversation, I tried zfs diff zroot/ROOT/default@snapshot_090820_1457 and got lots of data showing my changes from installing mythtv.

So my question: to do the snapshot rollback, do I simply execute zfs rollback zroot/ROOT/default@snapshot_090820_1457?

Thank you again.

-JJ
 
So my question: to do the snapshot rollback, do I simply execute zfs rollback zroot/ROOT/default@snapshot_090820_1457?
That should do it. If you need to restore just a file, say one mistakenly deleted:
1) run zfs set snapdir=visible zroot/ROOT/default
2) look inside the .zfs/snapshot/snapshot_090820_1457 directory under the dataset's mountpoint (which should be / by default for zroot/ROOT/default).
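Concretely, a minimal example reusing the test file created earlier in the thread:

Code:
# make the .zfs directory visible in listings (it is accessible even when hidden)
zfs set snapdir=visible zroot/ROOT/default
# browse the snapshot's frozen view of the file system
ls /.zfs/snapshot/snapshot_090820_1457/root/
# copy a single file back out of the snapshot
cp /.zfs/snapshot/snapshot_090820_1457/root/test.me /root/test.me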
 
Next Question:

Still reading. I did get the FreeBSD Mastery: Advanced ZFS book. :) A little less confused. I also installed sysutils/zap; the zap man page has helped a lot.

I took a snapshot, zroot@snapshot_091120_1502. Everything I have done since taking the snapshot has worked out as required, so I no longer need it. Reading The Fine Manual, it says, "If the upgrade was successful, the snapshot can be deleted to free up space." So, zfs list -t snapshot gives me:
Code:
NAME                                      USED  AVAIL  REFER  MOUNTPOINT
zroot@snapshot_091120_1502                   0      -    96K  -
zroot/ROOT@snapshot_091120_1502              0      -    96K  -
zroot/ROOT/default@snapshot_091120_1502  73.1M      -  3.14G  -
zroot/tmp@snapshot_091120_1502            152K      -   172K  -
zroot/usr@snapshot_091120_1502               0      -    96K  -
(edited for brevity)

And, emphasis(!) on the "I think": I have figured out that zfs destroy zroot@snapshot_091120_1502 would simply remove the snapshot, and all the changed data would remain. Is that correct?

Thank you always.

-JJ
 
And, emphasis(!) on the "I think": I have figured out that zfs destroy zroot@snapshot_091120_1502 would simply remove the snapshot, and all the changed data would remain. Is that correct?
Yes.
And
Code:
zfs list -rt snapshot -s creation | xargs -n1 | zfs destroy -r
would delete all snapshots.
 
I got an error:

Code:
(18:16)ROOT@anthem:/root# zfs list -rt snapshot -s creation | xargs -n1 | zfs destroy -r
missing dataset argument
usage:
destroy [-fnpRrv] <filesystem|volume>
destroy [-dnpRrv] <filesystem|volume>@<snap>[%<snap>][,...]
destroy <filesystem|volume>#<bookmark>

For the property list, run: zfs set|get

For the delegated permission list, run: zfs allow|unallow
xargs: /bin/echo: terminated with signal 13; aborting
(18:19)ROOT@anthem:/root#

Do I need the pipe between xargs -n1 and zfs destroy -r?

-JJ
 
Try this:
Code:
zfs list -r -t snapshot -H -s creation | xargs -n 1 zfs destroy

There is no need for a pipe after xargs; and zfs list now refers to localhost
 
I got an error:

(18:16)ROOT@anthem:/root# zfs list -rt snapshot -s creation | xargs -n1 | zfs destroy -r
missing dataset argument

The output from zfs list -rt snapshot -s creation | xargs -n1 would have each word on a separate line like this:
Code:
zroot@snapshot_091120_1502
0
-
96K
-
zroot/ROOT@snapshot_091120_1502
0
-
96K
-
Each line would then be fed to zfs destroy, and the spurious lines cause your errors.
You need to output just the snapshot names, without the additional data, by using zfs list -r -t snapshot -H -o name -s creation | xargs -n1 zfs destroy -r

I would not have expected it to be necessary to sort the snapshots in creation order before destroying them, so I expect you could drop -s creation from the command.

Unless I'm missing something, your approach seems overcomplicated: the single command zfs destroy -r zroot@snapshot_091120_1502 should achieve what you need by deleting zroot@snapshot_091120_1502 and the corresponding snapshots of all the file systems underneath zroot.
 
Unless I'm missing something, your approach seems overcomplicated: the single command zfs destroy -r zroot@snapshot_091120_1502 should achieve what you need...
It does the job and equally shows how much space is being freed. It comes in very useful when one has GBs of snapshots to purge.
 
Try this:
Code:
zfs list -r -t snapshot -H -s creation | xargs -n 1 zfs destroy

Well, it got the task done:
Code:
(17:39)ROOT@anthem:/var/log# zfs list -r -t snapshot -H -s creation
zroot@snapshot_091120_1502                   0  -    96K  -
zroot/ROOT@snapshot_091120_1502              0  -    96K  -
zroot/ROOT/default@snapshot_091120_1502  73.1M  -  3.14G  -
zroot/tmp@snapshot_091120_1502            152K  -   172K  -
zroot/usr@snapshot_091120_1502               0  -    96K  -
zroot/usr/home@snapshot_091120_1502       340K  -  5.79M  -
zroot/usr/ports@snapshot_091120_1502         0  -   712M  -
zroot/usr/src@snapshot_091120_1502           0  -   726M  -
zroot/var@snapshot_091120_1502               0  -    96K  -
zroot/var/audit@snapshot_091120_1502         0  -    96K  -
zroot/var/crash@snapshot_091120_1502         0  -    96K  -
zroot/var/log@snapshot_091120_1502        272K  -   572K  -
zroot/var/mail@snapshot_091120_1502       160K  -   192K  -
zroot/var/tmp@snapshot_091120_1502         64K  -    96K  -
(17:42)ROOT@anthem:/var/log# zfs list -r -t snapshot -H -s creation | xargs -n 1 zfs destroy
cannot open '0': dataset does not exist
cannot open '-': dataset does not exist
cannot open '96K': dataset does not exist
--- edited for brevity ---
However, it did get the job done.

Code:
(17:42)ROOT@anthem:/var/log# zfs list -t snapshot
no datasets available
(17:45)ROOT@anthem:/var/log#
I will adopt your suggestion for naming my snapshots with the pattern YYMMDD-HHMM, great idea.

Thank you.

-JJ
 
Well, it got the task done:

But you've ended up using a large sledgehammer to crack a small nut, resulting in a lot of harmless but confusing error messages.

The output from zfs list -r -t snapshot -H -s creation | xargs -n 1 will be passed to zfs destroy one 'word' at a time. So for the first snapshot line, 'zroot@snapshot_091120_1502 0 - 96K -', five items will be sent to individual invocations of zfs destroy:

'zroot@snapshot_091120_1502' The snapshot will be deleted
'0' This results in the error message "cannot open '0': dataset does not exist"
'-' and the error message "cannot open '-': dataset does not exist"
'96K' and the error message "cannot open '96K': dataset does not exist"
'-' and the error message "cannot open '-': dataset does not exist"

... and so on for each snapshot.

To recursively delete the entire named snapshot, it's much simpler to use zfs destroy -r zroot@snapshot_091120_1502, as in the following example:

Code:
curlew:/root# zfs snapshot -r home@test

curlew:/root# zfs list -r -t snapshot home | grep @test

home@test                                                   0      -    31K  -
home/DATA@test                                              0      -    34K  -
home/DATA/home@test                                      672K      -  97.2G  -
home/DATA/home/camera@test                                  0      -  2.41G  -
home/DATA/home/db@test                                      0      -    23K  -
home/DATA/home/photos@test                                  0      -  10.4G  -
home/DATA/root@test                                     16.5K      -   319M  -
home/DATA/var@test                                          0      -    31K  -
home/DATA/var/cache@test                                    0      -    31K  -
home/DATA/var/cache/pkg@test                                0      -   263K  -
home/DATA/var/db@test                                       0      -    31K  -
home/DATA/var/db/mysql@test                                 0      -  1.31G  -
home/DATA/var/log@test                                  39.5K      -   260M  -
home/NOBACKUP@test                                          0      -    31K  -
home/NOBACKUP/nobackup@test                                 0      -   201G  -
home/NOBACKUP/usr@test                                      0      -    31K  -
home/NOBACKUP/usr/ports@test                                0      -    31K  -
home/NOBACKUP/usr/ports/distfiles@test                      0      -   292M  -
home/NOBACKUP/usr/ports/packages@test                       0      -  21.3M  -
home/NOBACKUP/vm@test                                       0      -  12.8G  -
home/NOBACKUP/vm/win10@test                                 0      -  8.90G  -
home/NOBACKUP/vm/win10a@test                                0      -  9.66G  -
home/NOBACKUP/vm/win7@test                                  0      -  69.7M  -
home/pot@test                                               0      -    23K  -
home/pot/bases@test                                         0      -    23K  -
home/pot/cache@test                                         0      -    23K  -
home/pot/fscomp@test                                        0      -    23K  -
home/pot/jails@test                                         0      -    23K  -

curlew:/root# zfs destroy -r home@test

curlew:/root# zfs list -r -t snapshot home | grep @test

curlew:/root#

On the other hand, if you wanted to delete every snapshot in all your filesystems then you could use zfs list -t snapshot -H -o name | xargs -n 1 zfs destroy. Note the use of -o name to make zfs list output just the name without any of the other properties.
 
Note: always invoke such commands with the -n (dry-run) switch first. Carefully watch out for cloned datasets, which must be promoted before their parent can be destroyed.
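For example, a dry run of the recursive destroy discussed above; -n makes zfs destroy a no-op while -v reports what would be destroyed:

Code:
# prints "would destroy ..." for each snapshot plus the space that would be reclaimed
zfs destroy -rnv zroot@snapshot_091120_1502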
 
I managed to kill the computer; I got it back with a hard reboot. Oops. I decided to upgrade my hard disk to a 1G SSD. I like the idea of doing a dry run. There is a disadvantage to using the "Big Hammer", LOL; I will chalk this up as one of those "never use that hammer" lessons. Kinda like running that rm command in the root directory with the -rf switch. Live and learn.

-JJ
 
But you've ended up using a large sledgehammer to crack a small nut, resulting in a lot of harmless but confusing error messages.
Isn't the boot process on a Unix machine confusing to a Windows user?

What makes the command given earlier (-r with -s creation) different from yours (-o name)? An option, an attribute, an argument. Yet it's the same command: zfs list...

We live in a fearmongering era, so I'm not surprised one approach gets called a sledgehammer. It all depends on what we individually use our machines for; it's all personal choice. Some people have datasets so huge that it takes close to a second or more for each one to be deleted; they see 10 GB and more per dataset. Hence, using -s creation shows them every piece of information they need, including each size. And so does -r, which is required in our case.
 
What makes the command given earlier (-r with -s creation) different from yours (-o name)? An option, an attribute, an argument. Yet it's the same command: zfs list...
Considering the arguments applied to zfs list -r -t snapshot -H -s creation, we have:

-t snapshot This selects every snapshot on the system unless further information is supplied to restrict it.
-r Recursively display any children of the dataset. Since we are already selecting all snapshots, this option is redundant and can be omitted.
-H Do not print headers, and separate all fields by a single tab instead of arbitrary white space. Useful when piping the results into another command.
-s creation Sort the list of snapshots in order of creation date. This is not needed here; the order in which snapshots are destroyed is unimportant.

The above command does indeed produce a list of snapshots to be passed to zfs destroy but, as I've explained earlier, it also passes the additional information about space used, space available, space referenced, and mountpoint, which zfs destroy reports as errors.

My suggested command zfs list -t snapshot -H -o name omits the unnecessary -r and -s arguments and includes -o name to instruct zfs list to display only the snapshot name instead of the default list of properties.
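To make the difference concrete, here is a sketch of what each pipeline hands to xargs (snapshot names as earlier in the thread; add -n to zfs destroy first if you want a dry run):

Code:
# default columns (NAME USED AVAIL REFER MOUNTPOINT): xargs -n 1 splits every field
zfs list -t snapshot -H -s creation
# name only: exactly one valid argument per invocation of zfs destroy
zfs list -t snapshot -H -o name
# the full, safe pipeline
zfs list -t snapshot -H -o name | xargs -n 1 zfs destroy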
 