ZFS Snapshot progress check

Not sure exactly what you mean, but if you are talking about issuing the "zfs snapshot" command, the snapshot should be created when the command returns.
If you are talking about progress during a zfs send, try adding "-v" or "--verbose".
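For example (pool, dataset, and file names here are just placeholders):
Code:
zfs snapshot tank/home@backup
zfs send -v tank/home@backup > /backup/home.zfs
The per-second progress report from -v is written to stderr, so it doesn't mix with the stream on stdout.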
 
Guys, is there a way to check the progress of a snapshot? Or any other way to know when it is done?
As mer stated: not sure exactly what you mean. I'm guessing you have a different idea of how the creation of a snapshot is implemented (a snapshot deletion is definitely different). Perhaps have a look at:
 
zfs-concepts(7):
Code:
 Snapshots
     A snapshot is a read-only copy of a file system or volume.  Snapshots can
     be created extremely quickly, [...]
So, I'd think when you get a success return value, you're done.
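For instance (a minimal sketch; the pool/dataset names are placeholders):
Code:
zfs snapshot pool/dataset@mysnap && echo "snapshot created and complete"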

So something like this should work as expected?
Bash:
#!/usr/bin/env bash

# Vars
_date=`date "+%Y-%m-%d"`
_disk_dest="/tmp/bkp"
_bkp_dest="$_disk_dest/$_date"
_pool="tank1"
_snap="zhome@`hostname`_$_date"
_hbase="tank1/zhome@base-FreeBaSeD-T430_2023-06-30"

# Create directory destination
mkdir -p "${_bkp_dest}/zhome"

# Functions
bkp_files(){

    zfs snapshot "${_pool}"/"${_snap}" && \
    zfs send -I "${_hbase}" \
    tank1/"${_snap}" | \
    xz -9 > /mnt/bkp/"${_date}"/zhome.xz

}

...
 
You're taking a snapshot of your home dataset and using zfs send to write it to a file (piping through xz)? If you add "-v" or "--verbose" to the xzf send you can get the progress of how much has been sent to the pipe.
Without dragging out reference stuff, I think it should work as long as permissions are correct or you are running as root.
I would try the commands by hand first before tossing them into a script.
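For instance, a by-hand test might look like this (the @test snapshot name is made up; the base snapshot is the one from your script):
Code:
zfs snapshot tank1/zhome@test
zfs send -v -I tank1/zhome@base-FreeBaSeD-T430_2023-06-30 tank1/zhome@test | xz -9 > /tmp/test.xz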
 
So something like this should work as expected?
Bash:
#!/usr/bin/env bash

# Vars
_date=`date "+%Y-%m-%d"`
_disk_dest="/tmp/bkp"
_bkp_dest="$_disk_dest/$_date"
_pool="tank1"
_snap="home@`hostname`_$_date"
_hbase="tank1/zhome@base-FreeBaSeD-T430_2023-06-30"

# Create directory destination
mkdir -p "${_bkp_dest}/zhome"

# Functions
bkp_files(){

    zfs snapshot "${_pool}"/"${_snap}" && \
    zfs send -I "${_hbase}" \
    tank1/"${_snap}" | \
    xz -9 > /mnt/bkp/"${_date}"/zhome.xz

}

...
Try running it with bash -x to review the built commands, but it looks like your send will be -I tank1/zhome@base… tank1/home@.. (where zhome != home, and the send will fail.)
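For example, assuming the script is saved as bkp.sh (the name is hypothetical):
Code:
bash -x ./bkp.sh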
 
You mean xz?

Try running it with bash -x to review the built commands, but it looks like your send will be -I tank1/zhome@base… tank1/home@.. (where zhome != home, and the send will fail.)
Thank you, I fixed the var. But my question is regarding the &&: would the zfs send wait for the zfs snapshot to finish in the background first? And I say background because after I run the snapshot command, I usually check with zfs list -t snapshot and the size of the BE usually takes a few minutes to reach the real size and stops growing.
 
[...] But my question is regarding the &&: would the zfs send wait for the zfs snapshot to finish in the background first? And I say background because after I run the snapshot command, I usually check with bectl list and the size of the BE usually takes a few minutes to reach the real size and stops growing.
Based on the Bash Reference Manual, 3.2.4 Lists of Commands:
Rich (BB code):
An AND list has the form

     command1 && command2

command2 is executed if, and only if, command1 returns an exit status of zero (success).
I (not a bash expert) would say that is exactly as intended.
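A quick way to convince yourself, with nothing ZFS-specific involved:
Code:
true && echo "this runs"    # first command exits 0, so the second runs
false && echo "never runs"  # first command exits non-zero, so the second is skipped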

I don't quite understand your "background" remark. After a snapshot creation, bectl list only lists BEs, unless used in conjunction with the -s option; see bectl(8). What is your exact bectl command?
 
Sorry, I meant zfs list -t snapshot; bectl is part of the second half of the script (out of the scope of this thread).
 
Thank you, I fixed the var. But my question is regarding the &&: would the zfs send wait for the zfs snapshot to finish in the background first? And I say background because after I run the snapshot command, I usually check with zfs list -t snapshot and the size of the BE usually takes a few minutes to reach the real size and stops growing.

When the snapshot command returns, the snapshot is created and available. It is a very small (fast; not “backgrounded”) operation to create a snapshot in ZFS.
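You can verify this by timing the command (the dataset name is a placeholder):
Code:
time zfs snapshot pool/dataset@quick
Even on a large dataset this normally returns in well under a second, since only metadata is written.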
 
As rootbert suggested, pv is probably the best way to monitor progress of zfs send.

And as Eric A. Borisch posted while I composed this, the snapshot is available immediately. No waiting.

Looking at your script, that xz -9 is going to slow things down a lot. Unless absolute maximum compression is required in real time, I'd consider a different approach (write the output to a file, then either nohup and background xz, or scan for compression candidates from cron at 2 am).
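A rough sketch of the first alternative (paths and names made up):
Code:
# send uncompressed first; compress later, outside the critical path
zfs send tank/home@snap > /backup/home.zfs
nohup xz -9 /backup/home.zfs &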

Here's a fragment of the code I use to send a snapshot of my tank to external storage:
Code:
EXTSN1=$(GetDiskSerialNumber $EXTDISK1 12)
gpart destroy -F $EXTDISK1
gpart create -s gpt $EXTDISK1
gpart add -t freebsd-zfs -l X0:$EXTSN1 $EXTDISK1
zpool create -f offsite /dev/gpt/X0:$EXTSN1
zfs set compression=lz4 offsite
zfs snapshot -r tank@replica
zfs unmount offsite
size=$(zfs send -nP -R tank@replica | grep "^size" | sed -e "s/size[ $TAB]*//")
IsPosNZint "$size" || Barf "bad send size for tank@replica1: \"$size\""
zfs send -R tank@replica | pv -s $size -ptebarT | zfs receive -Fdu offsite
zfs destroy -r tank@replica
zfs list -t snapshot
zpool export offsite
There are a few external references, but I think they are fairly self-explanatory.

It illustrates how you can use zfs send -nP to get an estimate of the volume of data to be sent, and then pv to report on actual progress.
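Stripped of the surrounding setup (and using awk in place of the grep/sed pair), the estimate-then-monitor pattern is essentially:
Code:
size=$(zfs send -nP -R tank@replica | awk '/^size/ {print $2}')
zfs send -R tank@replica | pv -s "$size" | zfs receive -Fdu offsite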
 
"And I say background because after I run the snapshot command, I usually check with zfs list -t snapshot and the size of the be usually takes a few minutes to reach the real size and stops grow."

I find this odd/interesting. The only reason a snapshot's size should change is that blocks are migrating to it because the source is changing; at least in my understanding, that follows from the "Copy On Write" behavior.

Snapshot something that has, say, four 4 MB files in it, a total of 16 MB. The size of the snapshot is relatively small, likely mostly metadata pointing to the original blocks.
If you delete one of those files, the size of the snapshot increases by 4 MB (more or less), and the current source now has 12 MB. Create a new 4 MB file and the current source has 16 MB again, while the snapshot still has 4 MB plus metadata pointing to the other 12 MB that existed when the snapshot was created.
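A hypothetical experiment along these lines (the dataset name is made up; assumes the pool's default mountpoint):
Code:
zfs create tank1/demo
dd if=/dev/random of=/tank1/demo/f1 bs=1m count=4     # one 4 MB file
zfs snapshot tank1/demo@before                        # snapshot USED starts near zero
rm /tank1/demo/f1                                     # those blocks are now referenced only by the snapshot
zfs list -t snapshot -o name,used,refer tank1/demo@before   # USED has grown by roughly 4 MB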

One of the best ways to see this is with Boot Environments. If you have a bunch due to freebsd-update upgrades, do bectl list, then start doing bectl destroy -o, then bectl list again, and you can see the space migrate around and eventually get reclaimed.

It would be interesting to do the snapshot command on its own and put it in an if. That way, if the zfs snapshot command is successful, you are sure the command has completed and everything inside the if gets executed; you could also add an else to echo an error message. (See the sketch below.)
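A minimal sketch of that structure, reusing the variable names from the script earlier in the thread:
Bash:
if zfs snapshot "${_pool}/${_snap}"; then
    # the snapshot exists and is complete once we get here
    zfs send -I "${_hbase}" "${_pool}/${_snap}" | xz -9 > /mnt/bkp/"${_date}"/zhome.xz
else
    echo "zfs snapshot ${_pool}/${_snap} failed" >&2
    exit 1
fi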
 
The 'USED' parameter of a snapshot shows the size of the blocks it uniquely references; this will change as the (active) filesystem has blocks modified or freed. See zfsprops(7) and zfsconcepts(7). If no blocks have been modified or released since the snapshot was created, used == 0. As blocks are modified or freed in the live filesystem, if this is the only snapshot that captures that state, the used parameter will grow. So it's kind of a hard metric to pin down, as actions on other things (the live filesystem, destruction of other snapshots) influence its value.

In terms of the size of data on disk that the snapshot refers to, that's found in the 'REFER' column of zfs list -t snapshot; you'll note that size never changes after creation.
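For example (the dataset name is hypothetical):
Code:
zfs list -d 1 -t snapshot -o name,used,refer tank1/zhome
USED will drift over time; REFER is fixed at creation.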

edit: clarification that it is the size of the blocks, not the number of blocks.
 
I'm thinking about making a video showing what I'm talking about. It will give better insight.
 
I'm thinking about making a video showing what I'm talking about. It will give better insight.
I encourage you to review the replies above.
Your original question was effectively how long a snapshot takes. The answer is near-instantaneous.

Snapshots are ready for ‘use’ (e.g. zfs send) as soon as the zfs snapshot command returns. I leverage this fact hundreds of times a week at my $DAYJOB. (Create snapshot; immediately send snapshot.)
The ‘used’ parameter will fluctuate, even though the snapshot itself is static, because it relates to the on-disk space required to keep that particular snapshot (saved point-in-time state) available. Shared blocks (with the active filesystem or other snapshots) are not counted against ‘used’. As such, watching the ‘used’ value stabilize tells you more about I/O behavior on the active filesystem, and precisely nothing about how “complete” the snapshot is. (The ZFS snapshot (including recursion) process is atomic. Snapshots either exist in a completed state, or not at all.)
 
Your original question was effectively how long a snapshot takes. The answer is near-instantaneous.
To be fair, I've seen it take several seconds on a very busy (and/or slow storage) pool. But still, once the `zfs snapshot` command returns with exit status 0, you're good to go. (I suppose it can fail if the pool is really full, or in read-only mode, but that should be a giant clue to go fix something.)
 
To be fair, I've seen it take several seconds on a very busy (and/or slow storage) pool. But still, once the `zfs snapshot` command returns with exit status 0, you're good to go.

True; how soon it completes (in wall time) depends on system load and complexity, but there is no “in process” state where things can be modified on the filesystem during the snapshot. If you've asked for the snapshot, and the command returns 0, the snapshot exists and is immutable.

As zfs-snapshot(8) states:
All previous modifications by successful system calls to the file system are part of the snapshots.

Or: “all completed IO” is either before the snapshot call or after the snapshot completes. (Unlike, for example, an rsync traversal of the filesystem, which can't make those guarantees from userspace.)
 