Creating a snapshot of my new FreeBSD install

I have followed one of the various root-on-ZFS guides, so I can supposedly make a snapshot.

Reading the man page for creating snapshots got me a bit confused.

as the data on the active dataset changes, the snapshot consumes disk space that was previously shared with the active dataset

Reading that, it seems that to make a backup of a newly installed system I need to take two snapshots, because if I only take one snapshot (let's call it fresh), any changes I make will be reflected in fresh (that is, my snapshot grows in size as well).

Can someone please explain? Thank you.
 
Assuming that you are using a root-on-ZFS installation, all you need to do is create a recursive snapshot of your datasets and send it to a different backup machine.

[CMD=""]# zfs snapshot -r zroot@backup[/CMD]
[CMD=""]# zfs send -R zroot@backup | ssh user@otherhost "cat > zroot_backup.snap"[/CMD]

That's all it takes.
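Should you ever need it back, the saved stream can be fed into zfs receive again; a rough sketch (newpool is just a placeholder for whatever pool you restore into):

[CMD=""]# ssh user@otherhost "cat zroot_backup.snap" | zfs receive -F newpool[/CMD]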
 
Can I then destroy the snapshot after the send command? I already made my backup of the system, so I don't see much point in keeping the snapshot around.

Thanks again. :)
 
I will use an example to show how zfs snapshot works.

After installing a shiny new FreeBSD 9.0 onto my pool named myzfs, I have the following:

Code:
$ zfs list -t all
NAME                           USED  AVAIL  REFER  MOUNTPOINT
myzfs/root                     390M  2.80T   390M  none

Now I save the whole thing by taking a snapshot.

# zfs snapshot myzfs/root@20120529

Code:
$ zfs list -t all
NAME                           USED  AVAIL  REFER  MOUNTPOINT
myzfs/root                     390M  2.80T   390M  none
myzfs/root@20120529               0      -   390M  -

Notice that the snapshot myzfs/root@20120529 is initially empty (indicated by the USED column). I make a modification to the system by editing /boot/loader.conf and saving the changes. As ZFS is a copy-on-write (COW) filesystem, the pool now looks like this:

Code:
$ zfs list -t all
NAME                           USED  AVAIL  REFER  MOUNTPOINT
myzfs/root                     391M  2.80T   390M  none
myzfs/root@20120529              1M      -   390M  -

Now you see that the snapshot has grown from 0 MB to 1 MB, and the total space used by myzfs/root has increased by 1 MB. (Okay, the numbers are made up here.) As more and more files are changed, the space used by myzfs/root@20120529 keeps increasing.
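If you want to see exactly how much space the snapshots are holding on to, you can also look at the usedbysnapshots property:

# zfs get usedbydataset,usedbysnapshots myzfs/root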

If you messed up badly, you could always go back to the beginning by

# zfs rollback myzfs/root@20120529

Voila, now we are back to square one.

Code:
$ zfs list -t all
NAME                           USED  AVAIL  REFER  MOUNTPOINT
myzfs/root                     390M  2.80T   390M  none
myzfs/root@20120529               0      -   390M  -

By combining various ZFS commands ingeniously, you can build many useful features. One of them is boot environments.
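For instance, a very rough sketch of a manual boot environment (names made up, and details like mountpoints and loader configuration left out):

Code:
# zfs snapshot myzfs/root@pre-upgrade
# zfs clone myzfs/root@pre-upgrade myzfs/root-backup
# zpool set bootfs=myzfs/root-backup myzfs

The clone gives you a bootable copy of the system as it was at the snapshot, which you can fall back to if an upgrade goes wrong.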

So you should not delete the snapshot unless you are short on space or you have a better reference point.
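If you do eventually want to get rid of it, removing a snapshot is a one-liner:

# zfs destroy myzfs/root@20120529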
 
Note that the data created with zfs send should not be stored in a file; it is only meant to be fed into a zfs receive process. If there is even one flipped bit in the file, for whatever reason, the whole stream could be unusable; there is no redundancy like the commonly used archive file formats have.
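A sketch of the safer alternative, assuming the other host has a pool (say, tank) you can receive into:

[CMD=""]# zfs send -R zroot@backup | ssh user@otherhost "zfs receive -F tank/zroot_backup"[/CMD]

That way any corruption in the stream is caught immediately by the receiving side, and you end up with real datasets instead of an opaque file.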
 
I decided to install FreeBSD into a ZFS pool. My system has the following setup:
  1. sys (mounted as legacy)
  2. sys/tmp (mounted as /tmp and the rest would be similar)
  3. sys/usr
  4. sys/var
  5. sys/home
On first boot, I then decided to run [cmd=]# zfs snapshot sys@new[/cmd]

I then did the following steps:
  1. install cvsup-without-gui via packages
  2. download /usr/src (I didn't install these during installation)
  3. download /usr/ports (same reason as no. 2)
The disk usage increased by 800 to 900 MB. I decided to play with rollback, so I ran [cmd=]# zfs rollback sys@new[/cmd]

Well, maybe my understanding is wrong, but /usr/src is not empty like I expected it to be after the rollback.

Maybe if I wanted /usr to be rolled back I should've taken a snapshot of sys/usr?

Thank you. :)
 
I guess I was under the impression that if I created a snapshot of the "parent", it would include the "children" too. :)
 
papelboyl1 said:
I guess I was under the impression that if I created a snapshot of the "parent", it would include the "children" too. :)

It can, but you have to be specific about it:
# zfs snapshot [b]-r[/b] poolname/fsname@snapname
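With your layout, that would give you something like:

Code:
# zfs snapshot -r sys@new
# zfs list -t snapshot -o name
NAME
sys@new
sys/home@new
sys/tmp@new
sys/usr@new
sys/var@new

Every child dataset gets its own @new snapshot.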

/Sebulon
 
Sebulon said:
It can, but you have to be specific about it:
# zfs snapshot [b]-r[/b] poolname/fsname@snapname

/Sebulon

Sorry to be asking again :)

But in my setup, if I want a snapshot of sys and everything underneath it (e.g. sys/tmp, sys/usr, etc.), I would have to run [cmd=]# zfs snapshot -r sys@new[/cmd]

How about rolling it back? Would a basic rollback command work (basic as in no options passed)? I'm looking at the manpage for zfs rollback and I don't see any option similar to the -r for zfs snapshot.

Thanks again.
 
papelboyl1 said:
How about rolling it back? Would a basic rollback command work (basic as in no options passed)? I'm looking at the manpage for zfs rollback and I don't see any option similar to the -r for zfs snapshot.

There is currently no recursive option for zfs rollback, so you have to roll them back one by one. That isn't as tedious as it sounds, even if you have thousands of filesystems, because you can go something like:
# zfs list -t snapshot -o name | grep @snapshotname | sed 's/^/zfs rollback [i](-rRf) [/i]/' > rollbacklist; chmod 700 rollbacklist; ./rollbacklist; rm rollbacklist

And have them all rolled back in one fell swoop.
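The same idea as a plain sh loop, if you prefer (add -r, -R or -f to the rollback as needed):

# for fs in $(zfs list -H -t snapshot -o name | grep @snapshotname); do zfs rollback $fs; done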

/Sebulon
 