ZFS send -R but keep snapshots at receiving side

Hello all

I use the script found here, altered to run hourly:

http://www.aisecure.net/2012/01/11/automated-zfs-incremental-backups-over-ssh/

This works quite well: it creates, sends and destroys the snapshots on the sending side, just as I would do it myself.
I do not want to keep a lot of snapshots on the NAS itself.
We have a lot of data manipulation and snapshots grow fast.
But it also destroys the snapshots on the receiving side, and that is something I do not want.
I want to keep the snapshots on the backup server.

I have searched, but cannot find how to do this.
I believe it is the -R flag which destroys the snapshots on the remote side.

I know about zfs hold, but I want to keep this as simple as possible, and I would like to avoid scripting on the backup server to place holds on the snapshots after a receive.
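Just to illustrate the hold-based approach I am trying to avoid: after every receive, the backup server would need something like the following (dataset and snapshot names are from my setup, the tag name "keep" is arbitrary):

```shell
# After each receive, place a recursive hold on the new snapshot so a
# later destroy cannot remove it on the backup server.
zfs hold -r keep nasbck/samba@hourly-2012-03-07-02

# Later, when the snapshot may really be removed, release the hold first:
zfs release -r keep nasbck/samba@hourly-2012-03-07-02
```

That is exactly the extra per-snapshot bookkeeping on the backup server I would rather not maintain.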

I use it for my Samba ZFS datasets.
And because every user has its own /usr/home/user dataset, I need the children of /usr/home, which is nasstore/samba/home, to be sent.

The final zfs command is:
Code:
zfs send -R -i nasstore/samba@hourly-2012-03-07-01 nasstore/samba@hourly-2012-03-07-02 | ssh zfsmaster@zfs.backupserv.local zfs receive -Fuv nasbck/samba

Code:
******************************************************************************
attempting destroy nasbck/samba@hourly-2012-03-07-00
success
attempting destroy nasbck/samba/profiles@hourly-2012-03-07-00
success
attempting destroy nasbck/samba/data@hourly-2012-03-07-00
success
attempting destroy nasbck/samba/home@hourly-2012-03-07-00
success
attempting destroy nasbck/samba/home/user1@hourly-2012-03-07-00
success
attempting destroy nasbck/samba/home/user2@hourly-2012-03-07-00
success
attempting destroy nasbck/samba/home/user3@hourly-2012-03-07-00
success


receiving incremental stream of nasstore/samba@hourly-2012-03-07-02 into nasbck/samba@hourly-2012-03-07-02
received 312B stream in 1 seconds (312B/sec)
receiving incremental stream of nasstore/samba/profiles@hourly-2012-03-07-02 into nasbck/samba/profiles@hourly-2012-03-07-02
received 312B stream in 1 seconds (312B/sec)
receiving incremental stream of nasstore/samba/home@hourly-2012-03-07-02 into nasbck/samba/home@hourly-2012-03-07-02
received 312B stream in 2 seconds (156B/sec)
receiving incremental stream of nasstore/samba/home/user1@hourly-2012-03-07-02 into nasbck/samba/home/user1@hourly-2012-03-07-02
received 312B stream in 1 seconds (312B/sec)
receiving incremental stream of nasstore/samba/home/user2@hourly-2012-03-07-02 into nasbck/samba/home/user2@hourly-2012-03-07-02
received 312B stream in 1 seconds (312B/sec)
receiving incremental stream of nasstore/samba/home/user3@hourly-2012-03-07-02 into nasbck/samba/home/user3@hourly-2012-03-07-02
received 312B stream in 1 seconds (312B/sec)

Thanks for your time.


regards
Johan
 
I think it's -F that removes them on the receiving side.
-R just means send recursively.
So
Code:
zfs send -R something@snap
Would send something@snap AND something/else@snap
 
Thanks for your answer.
I did re-read the man page, and -F is indeed what causes the snapshot removal on the receiving side.

regards
Johan
 
And now I know why I used -F on the receiving side.
The receiving side does nothing itself, it just accepts snapshots.
But with one send I got the following:

Code:
receiving incremental stream of nasstorage/samba@hourly-2012-03-08-10 into nasbck/samba@hourly-2012-03-08-10
received 312B stream in 3 seconds (104B/sec)
receiving incremental stream of nasstorage/samba/profiles@hourly-2012-03-08-10 into nasbck/samba/profiles@hourly-2012-03-08-10
cannot receive incremental stream: destination nasbck/samba/profiles has been modified
since most recent snapshot
warning: cannot send 'nasstorage/samba/home/user1@hourly-2012-03-08-10': Broken pipe

How can I overcome this without destroying old snapshots?

Maybe I am doing this wrong with the hourly snapshots.
Maybe it is better to use rsync, and take the snapshot on the receiving side after each rsync run.

But why use other tools when ZFS already has this built in?
I would also lose the ACLs on the files and directories.

regards
Johan
 
I think what is happening is that you have atime set on the destination dataset, and if you make a directory listing of the dataset then ZFS will treat it as changed compared to the last snapshot because of the updated atime timestamps. I would turn atime off and try again.
 
Thanks I will try to set atime to off on the receiving side!
I think it is the atime, because nothing else happens on that system.

Thanks again all.

regards
Johan
 
I don't believe that it destroys "old" snapshots on the backup target. I've used send and recv for a long time, and never once saw it deleting snapshots that I didn't expect.

I think it is deleting "new" ones... ones created on the backup system that do not exist on the source system, and did not originate there. Do you have some automatic snapshot scripts running on the backup server?
 
man zfs said:
-F

Force a rollback of the file system to the most recent snapshot before performing the receive operation. If receiving an incremental replication stream (for example, one generated by zfs send -R -[iI]), destroy snapshots and file systems that do not exist on the sending side.
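So, going by that man page text, dropping -F from the receive should leave snapshots that exist only on the backup side alone. A sketch pieced together from the command earlier in this thread (untested here; without -F the receive will fail rather than roll back if the destination has diverged):

```shell
# Incremental recursive replication with -F removed from the receive.
# Snapshots and filesystems that exist only on nasbck are not destroyed;
# -u leaves the received filesystems unmounted, -v prints progress.
zfs send -R -i nasstore/samba@hourly-2012-03-07-01 nasstore/samba@hourly-2012-03-07-02 | \
  ssh zfsmaster@zfs.backupserv.local zfs receive -uv nasbck/samba
```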

Another possibility is that you deleted the snapshots on the sending side.

And thinking about my previous post: although I've used send and recv a lot, I never actually wanted to keep very old snapshots on the backup that I don't have on the source. I have scripts that automatically delete them on both sides. (Snapshots eat up huge amounts of space when you have hundreds of GB flowing in and out every day instead of just accumulating.)
 
I want to keep the NAS itself clean of snapshots. So yes, I create them, send them and destroy them as soon as they are sent. So my NAS has just two snapshots for each periodic interval (hourly, daily, weekly and monthly). On the backup server we have some big 3 TB drives, which give us the room to hold these snapshots.

But as you stated, -F on the receiving side behaves differently when -R is used on the sending side: it will delete old snapshots that are not on the sending side. I have one ZFS dataset that has no children, and that dataset gave me no problem with -F on the receiving side. So I was a little confused about why the recursive dataset removed the old snapshots, and that is why I blamed -R.

Setting atime to off solved the message that the destination has been modified.
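For anyone finding this thread later, the commands in question might look like this (a sketch using the nasbck/samba dataset from this thread; child datasets inherit the property unless they override it):

```shell
# Stop access-time updates on the backup datasets, so merely listing
# files no longer dirties the filesystem between received snapshots.
zfs set atime=off nasbck/samba

# Verify the property on the dataset and all its children:
zfs get -r atime nasbck/samba
```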

Thanks again
regards
Johan
 
@Sylhouette,

I have been really busy these days and just saw this now! If you don't use -F, your snapshots will not be destroyed on the receiving host. I should have mentioned that in my blog!

George
 