ZFS snapshot management

Not sure if this should go in a different location - please move if necessary!

I'm using ZFS (obviously) and set up zfs-snapshot-mgmt. My config is as follows:

Code:
snapshot_prefix: auto-
filesystems:
  tank/Backups:
    recursive: true
    creation_rule:
      at_multiple: 1440
      offset: 120
    preservation_rules:
      - { for_minutes: 47520, at_multiple: 1440, offset:  120 }

Based on the man page, this should create snapshots once a day (every 1440 minutes) at 2 AM, i.e. with a 120-minute offset from midnight. After letting this run for a couple of days, this is what I get when I list snapshots:

Code:
tank/Backups@auto-2011-08-14_20.00
tank/Backups@auto-2011-08-15_20.00

I can't make out any pattern that makes sense from that. 20:00 isn't 120 minutes from midnight or any other obvious time. Can anyone explain what's happening?
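For reference, this is the arithmetic I was expecting, as a rough Ruby-style sketch of my reading of the man page (the variable names are mine, not the tool's):

Code:
at_multiple = 1440              # one full day, in minutes
offset      = 120               # 120 minutes past midnight
# my assumption: a snapshot fires whenever
#   (minute_of_day - offset) % at_multiple == 0
# which with these values is minute 120 of each day, i.e. 02:00 local time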
 
Hey,

I have actually noticed something similar. My config looks like this:
Code:
snapshot_prefix: auto-
filesystems:
  pool1/root:
    # Create snapshots recursively for all filesystems mounted under this one
    recursive: true
    # Create snapshots every 60 minutes, starting at midnight
    creation_rule:
      at_multiple: 60
      offset: 0
    preservation_rules:
      - { for_minutes:   300, at_multiple:    60, offset: 0 }
      - { for_minutes: 10080, at_multiple:  1440, offset: 0 }
      - { for_minutes: 40320, at_multiple: 10080, offset: 0 }

So the daily and weekly snapshots should be created at 00:00, but:
Code:
# zfs list -t snapshot
NAME                                                           USED  AVAIL  REFER  MOUNTPOINT
pool1/root@auto-2011-08-04_02.00                              3.84M      -   303M  -
pool1/root@auto-2011-08-10_02.00                               287K      -   300M  -
pool1/root@auto-2011-08-11_02.00                               267K      -   300M  -
pool1/root@auto-2011-08-12_02.00                               258K      -   300M  -
pool1/root@auto-2011-08-13_02.00                               256K      -   300M  -
pool1/root@auto-2011-08-14_02.00                               240K      -   300M  -
pool1/root@auto-2011-08-15_02.00                               242K      -   300M  -
pool1/root@auto-2011-08-16_02.00                               137K      -   300M  -
pool1/root@auto-2011-08-16_03.00                               137K      -   300M  -
pool1/root@auto-2011-08-16_04.00                              65.1K      -   300M  -
pool1/root@auto-2011-08-16_05.00                               128K      -   300M  -
pool1/root@auto-2011-08-16_06.00                               137K      -   300M  -
pool1/root@auto-2011-08-16_07.00                               137K      -   300M  -
Instead, they are created at 02:00. I thought I was doing something wrong at first, but this worked as intended before upgrading to ZFS v28. Also, the date and time are correct, so it's not that.

/Sebulon
 
poh-poh,

I think you might be right. At first I wasn't seeing how the time matched up with a UTC offset, but if I'm doing the numbers right, it does make sense.

I'm in the North American MDT time zone, which is -06:00 from UTC. Ignoring my zfs-snapshot-mgmt offset of 120 minutes, my snapshots would be taken at midnight UTC, which is 18:00 local time. Add 2 hours for my offset and you have the 20:00.

Is there a different Ruby function that would return local time instead? I don't mind putting in a 360-minute offset to work around my issue, but what happens with daylight saving time? How difficult would it be to add a config option for the timezone? I'm not very familiar with Ruby.
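A quick sanity check of that reasoning in irb (assuming a UTC-06:00 zone like MDT; Time#utc_offset is in seconds):

Code:
Time.now.utc_offset / 60        # => -360 here
# Minutes since the Unix epoch are counted from a UTC midnight, so with my
# offset of 120 the rule matches at 02:00 UTC, which is 20:00 MDT -
# exactly the timestamps I'm seeing.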
 
Sylgeist said:
How difficult would it be to put a config option in with timezone? I'm not very familiar with Ruby.
Neither am I. The timezone can be specified via TZ or /etc/localtime; see tzset(3).
Code:
Index: sysutils/zfs-snapshot-mgmt/files/patch-zfs-snapshot-mgmt
===================================================================
RCS file: /a/.csup/ports/sysutils/zfs-snapshot-mgmt/files/patch-zfs-snapshot-mgmt,v
retrieving revision 1.1
diff -u -p -r1.1 patch-zfs-snapshot-mgmt
--- sysutils/zfs-snapshot-mgmt/files/patch-zfs-snapshot-mgmt	11 Jan 2010 03:42:14 -0000	1.1
+++ sysutils/zfs-snapshot-mgmt/files/patch-zfs-snapshot-mgmt	17 Aug 2011 18:13:47 -0000
@@ -1,5 +1,14 @@
---- zfs-snapshot-mgmt~
-+++ zfs-snapshot-mgmt
+--- zfs-snapshot-mgmt.orig	2011-08-17 00:15:42.801630764 +0400
++++ zfs-snapshot-mgmt	2011-08-17 00:20:17.751629671 +0400
+@@ -44,7 +44,7 @@ class Rule
+   def initialize(args = {})
+     args = { 'offset' => 0 }.merge(args)
+     @at_multiple = args['at_multiple'].to_i
+-    @offset = args['offset'].to_i
++    @offset = args['offset'].to_i - $utc_offset
+   end
+ 
+   def condition_met?(time_minutes)
 @@ -154,7 +154,11 @@ class FSInfo
    end
  
@@ -12,3 +21,11 @@
    end
  
  private
+@@ -194,6 +198,7 @@ class Config
+ 
+ end
+ 
++$utc_offset = Time.now.utc_offset / 60
+ config_yaml = File.open(CONFIG_FILE_NAME).read(CONFIG_SIZE_MAX)
+ die "Config file too long" if config_yaml.nil?
+ config = Config.new(YAML::load(config_yaml))
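In other words, a rough check of what the change does, assuming a UTC-06:00 zone like MDT:

Code:
$utc_offset = -6 * 60               # Time.now.utc_offset / 60 in MDT => -360
offset      = 120 - $utc_offset     # patched Rule#initialize        => 480
# 480 minutes past 00:00 UTC is 08:00 UTC, i.e. 02:00 MDT -
# the time the original config asked for.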
 
@poh-poh

Do you mean that we should paste that into e.g. ~/zfs-snapshot-mgmt.patch and then:
[cmd=]# patch /usr/local/bin/zfs-snapshot-mgmt ~/zfs-snapshot-mgmt.patch[/cmd]
?

And shouldn't this:
Code:
+@@ -199,6 +203,7 @@ die "Config file too long" if config_yam
really say this instead:
Code:
+@@ -199,6 +203,7 @@ die "Config file too long" if config_yam[B]1[/b]
?

/Sebulon
 
@poh-poh

Here's what cron has been telling me all night :)
Code:
/usr/local/bin/zfs-snapshot-mgmt:47:in `+': nil can't be coerced into Fixnum (TypeError)
       from /usr/local/bin/zfs-snapshot-mgmt:47:in `initialize'
       from /usr/local/bin/zfs-snapshot-mgmt:118:in `new'
       from /usr/local/bin/zfs-snapshot-mgmt:118:in `initialize'
       from /usr/local/bin/zfs-snapshot-mgmt:188:in `new'
       from /usr/local/bin/zfs-snapshot-mgmt:188:in `initialize'
       from /usr/local/bin/zfs-snapshot-mgmt:25:in `map'
       from /usr/local/bin/zfs-snapshot-mgmt:187:in `each'
       from /usr/local/bin/zfs-snapshot-mgmt:187:in `map'
       from /usr/local/bin/zfs-snapshot-mgmt:187:in `initialize'
       from /usr/local/bin/zfs-snapshot-mgmt:203:in `new'
       from /usr/local/bin/zfs-snapshot-mgmt:203
So that was a no-go. I've reverted it for now.
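If I read that traceback right (just a guess), an integer arithmetic on line 47 ran with nil as one operand, which in Ruby 1.8/1.9 raises exactly that error:

Code:
irb> 1 + nil
TypeError: nil can't be coerced into Fixnum

That would be consistent with $utc_offset (or whatever ended up on line 47) still being nil, i.e. only part of the patch having made it into the installed script.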

/Sebulon
 
@poh-poh

It's quite alright, I don't mind.
But I can't follow the link you posted. I lack the necessary privilege, apparently.

/Sebulon
 
@poh-poh

I'm stuck. What am I missing?

Code:
# cd /usr/ports/sysutils/zfs-snapshot-mgmt
# rm -rf /files/pa*
# make deinstall distclean
# cd
# patch -d /usr/ports -i ~/zfs-snapshot-mgmt.patch
Hmm...  Looks like a unified diff to me...
The text leading up to this was:
--------------------------
|Index: sysutils/zfs-snapshot-mgmt/files/patch-zfs-snapshot-mgmt
|===================================================================
|RCS file: /a/.csup/ports/sysutils/zfs-snapshot-mgmt/files/patch-zfs-snapshot-mgmt,v
|retrieving revision 1.1
|diff -u -p -r1.1 patch-zfs-snapshot-mgmt
|--- sysutils/zfs-snapshot-mgmt/files/patch-zfs-snapshot-mgmt	11 Jan 2010 03:42:14 -0000	1.1
|+++ sysutils/zfs-snapshot-mgmt/files/patch-zfs-snapshot-mgmt	17 Aug 2011 18:13:47 -0000
--------------------------
File to patch:
Or:
Code:
# cd /usr/ports/sysutils/zfs-snapshot-mgmt
# rm -rf files/pa*
# make deinstall distclean [B]install[/B]
# cd
# patch -d /usr/ports -i ~/zfs-snapshot-mgmt.patch
Hmm...  Looks like a unified diff to me...
The text leading up to this was:
--------------------------
|Index: sysutils/zfs-snapshot-mgmt/files/patch-zfs-snapshot-mgmt
|===================================================================
|RCS file: /a/.csup/ports/sysutils/zfs-snapshot-mgmt/files/patch-zfs-snapshot-mgmt,v
|retrieving revision 1.1
|diff -u -p -r1.1 patch-zfs-snapshot-mgmt
|--- sysutils/zfs-snapshot-mgmt/files/patch-zfs-snapshot-mgmt	11 Jan 2010 03:42:14 -0000	1.1
|+++ sysutils/zfs-snapshot-mgmt/files/patch-zfs-snapshot-mgmt	17 Aug 2011 18:13:47 -0000
--------------------------
File to patch:
Same same.

/Sebulon
 
@poh-poh

I'm sorry, did I have to save the original patch as well? I thought you meant that I should start over with the updated patch instead.

Code:
# make deinstall distclean
# touch files/patch-zfs-snapshot-mgmt
# cd
# patch -d /usr/ports -i ~/zfs-snapshot-mgmt.patch
# cd /usr/ports/sysutils/zfs-snapshot-mgmt
# make install
===>  License check disabled, port has not defined LICENSE
=> zfs-snapshot-mgmt-20090201.tar.gz doesn't seem to exist in /usr/ports/distfiles/.
=> Attempting to fetch http://marcin.studio4plus.com/files/zfs-snapshot-mgmt-20090201.tar.gz
zfs-snapshot-mgmt-20090201.tar.gz             100% of 4903  B   93 kBps
===>  Extracting for zfs-snapshot-mgmt-20090201_1
=> SHA256 Checksum OK for zfs-snapshot-mgmt-20090201.tar.gz.
===>  Patching for zfs-snapshot-mgmt-20090201_1
===>  Applying FreeBSD patches for zfs-snapshot-mgmt-20090201_1
No file to patch.  Skipping...
1 out of 1 hunks ignored--saving rejects to Oops.rej
=> Patch patch-zfs-snapshot-mgmt failed to apply cleanly.
*** Error code 1

Stop in /usr/ports/sysutils/zfs-snapshot-mgmt.
/usr/ports/sysutils/zfs-snapshot-mgmt/work/zfs-snapshot-mgmt-20090201/Oops.rej
Code:
***************
*** 194,199 ****
  
  end
  
  config_yaml = File.open(CONFIG_FILE_NAME).read(CONFIG_SIZE_MAX)
  die "Config file too long" if config_yaml.nil?
  config = Config.new(YAML::load(config_yaml))
--- 198,204 ----
  
  end
  
+ $utc_offset = Time.now.utc_offset / 60
  config_yaml = File.open(CONFIG_FILE_NAME).read(CONFIG_SIZE_MAX)
  die "Config file too long" if config_yaml.nil?
  config = Config.new(YAML::load(config_yaml))

/Sebulon
 
Sebulon said:
# touch files/patch-zfs-snapshot-mgmt
No, the same way you update ports, e.g. via tar.gz/portsnap/csup/cvs up/git/etc., or just grab the file from cvsweb.

This may not be obvious, but you're supposed to take backups while testing: the ports tree before applying the patch,
$ zfs snapshot tank/usr/ports@before_test
and the automatic snapshots themselves,
Code:
$ zfs list -Ht snapshot -o name |
sed -n '/@auto-/ { p; s|/|_|g; p; }' |
while read snapshot; read file; do
    zfs send -p $snapshot >/var/backups/$file
done
as they could be destroyed by wrong preservation rules.
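If I read that loop right, the sed prints each matching snapshot name twice: once as-is for zfs send and once with slashes turned into underscores to use as a file name, so the loop reads them in pairs. Restoring one of those dumps later would then be roughly like this (the target dataset name is just an example and must not already exist):

Code:
$ zfs receive pool1/restored < /var/backups/pool1_root@auto-2011-08-15_02.00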
 
@poh-poh

It works, it works, it flippin' works, look:
Code:
# zfs list -t snapshot
NAME                                                           USED  AVAIL  REFER  MOUNTPOINT
pool1/root@auto-2011-08-19_00.00                               126K      -   300M  -
pool1/root@auto-2011-08-19_03.00                               126K      -   300M  -
pool1/root@auto-2011-08-19_04.00                                  0      -   300M  -
pool1/root@auto-2011-08-19_05.00                               117K      -   300M  -
pool1/root@auto-2011-08-19_06.00                               126K      -   300M  -
pool1/root@auto-2011-08-19_07.00                               126K      -   300M  -
Do you think your patch could be merged into an update, or do I have to save and apply it every time there is an update to the application? Perhaps there are more people with this problem who could benefit from it?

/Sebulon
 
Just to inform others about my experience with this issue: the patch by poh-poh isn't complete, because it doesn't take daylight saving time into consideration.

The situation I bumped into that had me chasing this bug was that I wanted to keep a snapshot from February while running the tool in June. In the Netherlands, June uses DST and February doesn't. When running in June, the global $utc_offset as proposed by poh-poh sets the offset to 2 hours, while the offset for February should be 1 hour.

This patch calculates the offset for the day the snapshot was created, not for the day the tool was executed, which matters when removing snapshots. Additionally, it stores the creation timestamp as early as possible: when creating a huge number of snapshots, a second could pass between the initial call of zfs-snapshot-mgmt and the actual creation of a snapshot, producing snapshots that are off by one second or more. The creation timestamp is now stored at the beginning and reused throughout the program.
Code:
--- /usr/local/bin/zfs-snapshot-mgmt    2017-04-30 11:45:53.000000000 +0200
+++ /usr/local/bin/zfs-snapshot-mgmt    2017-06-04 09:12:16.838555000 +0200
@@ -24,18 +24,20 @@
 require 'yaml'
 require 'time'

+$now_timestamp = Time.now;
+
 CONFIG_FILE_NAME = '/usr/local/etc/zfs-snapshot-mgmt.conf'
 CONFIG_SIZE_MAX = 64 * 1024     # just a safety limit

 def encode_time(time)
-  time.strftime('%Y-%m-%d_%H.%M')
+  time.strftime('%Y.%m.%d-%H.%M.%S')
 end

 def decode_time(time_string)
-  date, time = time_string.split('_')
-  year, month, day = date.split('-')
-  hour, minute = time.split('.')
-  Time.mktime(year, month, day, hour, minute)
+  date, time = time_string.split('-')
+  year, month, day = date.split('.')
+  hour, minute, second  = time.split('.')
+  Time.mktime(year, month, day, hour, minute, second)
 end

 class Rule
@@ -49,7 +51,7 @@

   def condition_met?(time_minutes)
     divisor = @at_multiple
-    (divisor == 0) or ((time_minutes - @offset) % divisor) == 0
+    (divisor == 0) or (((time_minutes+(Time.at(time_minutes*60).utc_offset / 60)) - @offset) % divisor) == 0
   end
 end

@@ -83,7 +85,7 @@
   end

   def self.new_snapshot(fs_name, snapshot_prefix)
-    name = snapshot_prefix + encode_time(Time.now)
+    name = snapshot_prefix + encode_time($now_timestamp)
     SnapshotInfo.new(name, fs_name, snapshot_prefix)
   end

@@ -202,7 +204,7 @@
 die "Config file too long" if config_yaml.nil?
 config = ZConfig.new(YAML::load(config_yaml))

-now_minutes = Time.now.to_i / 60
+now_minutes = $now_timestamp.to_i / 60

 # A simple way of avoiding interaction with zpool scrubbing
 busy_pools = config.busy_pools
I'm storing snapshots in the format GMT-2017.06.04-10.54.00, so the prefix I use is GMT-.
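To make the DST point concrete, a rough irb check (assuming the Dutch Europe/Amsterdam zone from the example above; Time#utc_offset is in seconds):

Code:
Time.mktime(2017, 2, 15).utc_offset / 60   # => 60   (CET,  winter)
Time.mktime(2017, 6, 15).utc_offset / 60   # => 120  (CEST, summer)
# Time.at(time_minutes * 60).utc_offset therefore returns the offset in force
# when the snapshot was taken, not when the tool happens to run.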
 