Setting up FreeBSD with Auto ZFS snapshots

jalla said:
Hardcoding this stuff in the script is not the way to go. You should factor out all the variables and keep the script itself generic IMO. As to simplicity, a single line in a config file to control all periodic snapshots, which filesystems, what time to snapshot, how many to keep, that's what I call simple.

When you decide to reinvent the wheel, you should opt to improve it. No offense, but your "wheel" in this case looks distinctly square to me.
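By way of illustration, one such config line per filesystem might look like this (hypothetical syntax, made up for illustration; not the actual sysutils/freebsd-snapshot format):
Code:
# filesystem   weekly-keep  daily-keep  hourly-times
tank/home      4            7           10,14,16
tank/var       0            7           none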

Check out the latest version in the repo:
http://hg.bsdroot.lv/aldis/zfSnap/
 
Well, with my current setup I keep 4 weekly snapshots (taken Sunday night), 7 nightly snapshots (taken at midnight), and a number of hourly snaps (@10, @14, @16).
Tell me how I can use your script to do something similar.
 
jalla said:
Well, with my current setup I keep 4 weekly snapshots (taken Sunday night), 7 nightly snapshots (taken at midnight), and a number of hourly snaps (@10, @14, @16).
Tell me how I can use your script to do something similar.

Oh... last night a good idea hit me :) I will finish it today, and then I will tell you [zfSnap.sh will be largely rewritten].
 
Finished rewriting my script.

jalla said:
Well, with my current setup I keep 4 weekly snapshots (taken Sunday night), 7 nightly snapshots (taken at midnight), and a number of hourly snaps (@10, @14, @16).
Tell me how I can use your script to do something similar.

Every 2 hours, take recursive snapshots of zpool/zfs1 and zpool/zfs2, and non-recursive snapshots of zpool/zfs3 and zpool/zfs4. Keep these snapshots for 1 week:
Code:
0 */2 * * * root /usr/local/bin/zfSnap.sh -a 1w -r zpool/zfs1 zpool/zfs2 -R zpool/zfs3 zpool/zfs4

Same as above, except take monthly snapshots, and keep zpool/zfs1, zpool/zfs2, and zpool/zfs3 snapshots for one and a half years. Keep zpool/zfs4 snapshots for one year:
Code:
0 0 1 * * root /usr/local/bin/zfSnap.sh -a 1y6m -r zpool/zfs1 zpool/zfs2 -R zpool/zfs3 -a 1y zpool/zfs4

Delete old snapshots every night at 2:00 am:
Code:
0 2 * * * root /usr/local/bin/zfSnap.sh -d
I think making one entry for deleting old snapshots is better than adding -d to the entries above, because deleting snapshots is slower than creating them... and who cares if some snapshots stay available a few hours longer :)

So what do you think?

P.S.
For more info on how to use /etc/crontab, see crontab(5).
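For reference, the fields in the /etc/crontab entries above are, in order: minute, hour, day of month, month, day of week, user, and command. So the first entry reads:
Code:
# minute  hour  mday  month  wday  user  command
# 0       */2   *     *      *     root  /usr/local/bin/zfSnap.sh ...
# => at minute 0 of every 2nd hour, run zfSnap.sh as root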
 
Haven't tested the script, but as far as I can tell it should work as advertised. With a number of crontab entries it seems to support differing hourly/daily/weekly/whatever schedules. The idea of coding the TTL into the snapshot name and using that for recycling is smart.
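To make the idea concrete: as I understand it, the script embeds the creation time and TTL in the snapshot name, something like the names below, so a cleanup run can parse the name back out and destroy anything past its expiry. The exact format here is my guess:
Code:
# Hypothetical snapshot names with the TTL baked in:
zpool/zfs1@2010-08-03_14.00.00--1w   # taken 2010-08-03 14:00, keep 1 week
zpool/zfs4@2010-08-01_00.00.00--1y   # taken 2010-08-01 00:00, keep 1 year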

Personally I use snapshots both for ufs and zfs so sysutils/freebsd-snapshot still works better for me.
 
jalla said:
Haven't tested the script, but as far as I can tell it should work as advertised. With a number of crontab entries it seems to support differing hourly/daily/weekly/whatever schedules. The idea of coding the TTL into the snapshot name and using that for recycling is smart.

Personally I use snapshots both for ufs and zfs so sysutils/freebsd-snapshot still works better for me.

Thanks for the feedback... :)

P.S.
I've submitted a port: http://www.freebsd.org/cgi/query-pr.cgi?pr=149188
I'm also writing a wiki page now: http://wiki.bsdroot.lv/zfsnap
 
tdb@ said:
Feature suggestion: a -n flag to show what it would do, without actually doing it.

Committed to HEAD. Check if you like it, and how it could be improved (then I will submit an update PR, lol).
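Usage would presumably combine -n with the normal flags, e.g.:
Code:
# Print the zfs commands that would run, without executing them:
/usr/local/bin/zfSnap.sh -n -a 1w -r zpool/zfs1 zpool/zfs2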
 
killasmurf86 said:
Committed to HEAD. Check if you like it, and how it could be improved (then I will submit an update PR, lol).
http://aldis.git.bsdroot.lv/zfSnap/tree/zfSnap.sh

It works :)

But I'd probably have implemented it a bit differently. By having the command in both the dry_run block and the non-dry_run block, you run the risk of changing one but not the other. Maybe use a variable so it's set in one place? A minor niggle, though.
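Something like this minimal sketch is what I mean (variable names here are made up, not the script's actual ones):
Code:
# Build the command once, so the dry run and the real run can't drift apart:
cmd="zfs snapshot $flags ${fs}@${snap_name}"
if [ "$dry_run" = 'true' ]; then
    echo "$cmd"
else
    $cmd
fi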

The other thing is your check for a valid ZFS filesystem. Would running "zfs list -H $1" and checking the output be more efficient? I have quite a few filesystems and snapshots, so "zfs list -H" takes a while to run. Again, a minor niggle since it only happens when doing a dry run.
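A sketch of the kind of check I mean (exact error handling is up to you):
Code:
# Verify a single filesystem instead of listing everything:
if ! zfs list -H -o name "$1" > /dev/null 2>&1; then
    echo "ERROR: '$1' is not a valid ZFS filesystem" >&2
fi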

Tim.
 
tdb@ said:
But I'd probably have implemented it a bit differently. By having the command in both the dry_run block and the non-dry_run block, you run the risk of changing one but not the other. Maybe use a variable so it's set in one place? A minor niggle, though.
Committed to HEAD (won't make it into v1.1.7). Thanks for the tip.

tdb@ said:
The other thing is your check for a valid ZFS filesystem. Would running "zfs list -H $1" and checking the output be more efficient? I have quite a few filesystems and snapshots, so "zfs list -H" takes a while to run. Again, a minor niggle since it only happens when doing a dry run.

Hmm, I think if you have many ZFS filesystems (for example, on some advanced server), doing zfs list -H once will be much faster.

P.S.
Mod... do you think the last few posts should be split into a new thread?
 
killasmurf86 said:
Hmm, I think if you have many ZFS filesystems (for example, on some advanced server), doing zfs list -H once will be much faster.

Code:
# /usr/bin/time zfs list -H
pool0   92.5G   39.4G   18K     legacy
...
        2.62 real         0.02 user         0.02 sys
Code:
# /usr/bin/time zfs list -H pool0
pool0   92.5G   39.4G   18K     legacy
        0.09 real         0.00 user         0.00 sys

Given you're only actually checking those given on the command line, it seems like it'd be quicker to check them individually.

But, I'd not worry since it's only done on a dry run, which you only do once or twice to test.
 
I've been running this script for a couple of days now and it's working great for me.

But I wonder, is there any way to make the deletes recursive? It takes some time to walk over all my filesystems deleting snapshots, when they were all made quickly using recursion at the top level.

The only way I can think of to do it is by adding optional arguments to the delete flag which work the same as the normal usage. So I could run:

Code:
zfSnap -d -r pool

What do you think? Maybe you have a better idea?
 
tdb@ said:
I've been running this script for a couple of days now and it's working great for me.

But I wonder, is there any way to make the deletes recursive? It takes some time to walk over all my filesystems deleting snapshots, when they were all made quickly using recursion at the top level.

The only way I can think of to do it is by adding optional arguments to the delete flag which work the same as the normal usage. So I could run:

Code:
zfSnap -d -r pool

What do you think? Maybe you have a better idea?

Good idea... I will try it.
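Presumably this would boil down to zfs destroy -r, which removes a snapshot and all same-named snapshots of descendant filesystems in one call (the snapshot name here is illustrative):
Code:
# Destroy a snapshot recursively, mirroring how it was created with -r:
zfs destroy -r zpool/zfs1@2010-08-03_14.00.00--1w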
 
Implemented. Anyone who is interested in testing:
Code:
$ git clone -b zfs-destroy-recursive-snapshots http://aldis.git.bsdroot.lv/zfSnap

EDIT:
The branch has been deleted; the port (sysutils/zfsnap) has been updated to the latest version.
 