ZFS: zfs send overwriting files on the receiving server

Hi,

I have two servers that I want to back up to another server outside the datacentre.
Could you please tell me what the best option is to do so safely?

So far I have created the following on the backup server:
zfs create zroot/trinity
zfs create zroot/r610

PS: the backup ZFS pool is also called zroot.

When I zfs send the data from server 1 (trinity), everything is OK.
When I zfs send the data from server 2 (r610), the backup server's home directory gets overwritten by the home directory of the r610 server.

Could anyone please tell me why or help me understand?

When I do zfs list -t snapshot, I can see that both zfs send streams were received successfully.

We keep losing remote access via SSH to the backup server, as the user is no longer there.

Script for trinity
Code:
#!/bin/sh
export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin
pool="zroot"
destination="zroot/trinity"
host="82.27.xxx.xxx"

if [ -f /tmp/backupscript.lock ]; then
        logger -p local5.notice "Backup did not complete yesterday FAILED"
    echo  "Backup did not complete yesterday FAILED" | /usr/bin/mail -s "Backup Report" root
        exit 1
else
        touch /tmp/backupscript.lock
fi

today=$(date +"%Y-%m-%d")
yesterday=$(date -v -1d +"%Y-%m-%d")
day=$(date -v -30d +"%Y-%m-%d")

# today's snapshot name
snapshot_today="$pool@$today"

# look for a snapshot with this name
if zfs list -H -o name -t snapshot | sort | grep "$snapshot_today$" > /dev/null; then
        logger -p local5.notice "snapshot, $snapshot_today, already exists; skipping"
else
        logger -p local5.notice "Taking today's snapshot, $snapshot_today"
        zfs snapshot -r $snapshot_today
fi

# look for yesterday snapshot
snapshot_yesterday="$pool@$yesterday"
if zfs list -H -o name -t snapshot | sort | grep "$snapshot_yesterday$" > /dev/null; then

        # the if tests the pipeline's exit status directly; the stray "> 0"
        # only redirected stdout into a file named "0"
        if zfs send -R -i "$snapshot_yesterday" "$snapshot_today" | mbuffer -q -v 0 -s 128k -m 1G | ssh root@$host "mbuffer -s 128k -m 1G | zfs receive -Fdu $destination"; then
                logger -p local5.notice "Backup OK"
        echo  "Backup OK" | /usr/bin/mail -s "Backup Report" root
        else
                logger -p local5.error "Backup FAILED"
        echo  "Backup FAILED" | /usr/bin/mail -s "Backup Report" root
        exit 1
        fi
        rm /tmp/backupscript.lock
        zfs destroy -r "$pool@$day"
        exit 0
else
        logger -p local5.error "missing yesterday's snapshot, backup FAILED"
    exit 1
fi
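Tangential to the main problem: the lock file above is checked and created in two separate steps, so two overlapping runs can both get past the check. If that ever matters, a race-free sketch using mkdir (the directory name here is made up) could look like this:
Code:
```shell
#!/bin/sh
# mkdir is atomic: it either creates the directory (lock acquired)
# or fails because it already exists (another run holds the lock).
lockdir=/tmp/backupscript.lock.d   # hypothetical name; adjust to taste

if mkdir "$lockdir" 2>/dev/null; then
    trap 'rmdir "$lockdir"' EXIT   # released even if the script errors out
    echo "lock acquired"
else
    echo "previous run still active"
    exit 1
fi
```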
Script for r610
Code:
#!/bin/sh
export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin
pool="zroot"
destination="zroot/r610"
host="82.27.xxx.xxx"

if [ -f /tmp/backupscript.lock ]; then
        logger -p local5.notice "Backup did not complete yesterday FAILED"
    echo  "Backup did not complete yesterday FAILED" | /usr/bin/mail -s "Backup Report" root
        exit 1
else
        touch /tmp/backupscript.lock
fi

today=$(date +"%Y-%m-%d")
yesterday=$(date -v -1d +"%Y-%m-%d")
day=$(date -v -30d +"%Y-%m-%d")

# today's snapshot name
snapshot_today="$pool@$today"

# look for a snapshot with this name
if zfs list -H -o name -t snapshot | sort | grep "$snapshot_today$" > /dev/null; then
        logger -p local5.notice "snapshot, $snapshot_today, already exists; skipping"
else
        logger -p local5.notice "Taking today's snapshot, $snapshot_today"
        zfs snapshot -r $snapshot_today
fi

# look for yesterday snapshot
snapshot_yesterday="$pool@$yesterday"
if zfs list -H -o name -t snapshot | sort | grep "$snapshot_yesterday$" > /dev/null; then

        # the if tests the pipeline's exit status directly; the stray "> 0"
        # only redirected stdout into a file named "0"
        if zfs send -R -i "$snapshot_yesterday" "$snapshot_today" | mbuffer -q -v 0 -s 128k -m 1G | ssh root@$host "mbuffer -s 128k -m 1G | zfs receive -Fdu $destination"; then
                logger -p local5.notice "Backup OK"
        echo  "Backup OK" | /usr/bin/mail -s "Backup Report" root
        else
                logger -p local5.error "Backup FAILED"
        echo  "Backup FAILED" | /usr/bin/mail -s "Backup Report" root
        exit 1
        fi
        rm /tmp/backupscript.lock
        zfs destroy -r "$pool@$day"
        exit 0
else
        logger -p local5.error "missing yesterday's snapshot, backup FAILED"
    exit 1
fi
 
Is it actually overwriting the home dataset on the backup server, or is it writing to zroot/r610/path/to/home?
 
getopt
Are you referring to /zroot?
Would that be better?
zfs create trinity
zfs create r610

Then ... zfs receive -Fdu trinity ... zfs receive -Fdu r610

usdmatt It is overwriting the home dataset on the backup server.
 
I have been trying to understand the output of that command above...
Code:
on  /usr/home                                                                                   yes  zroot/r610/usr/home
Does this mean that the home directory of the r610 server is getting mounted on /usr/home of the backup server, and is that the reason my home directory keeps getting messed up?
How can I fix it, and why is this happening?
 
If you provided the output, I could likely provide a better answer, but here goes.

My guess is that you did not change the received canmount or mountpoint values after your initial receive. Even though you received with the unmounted flag (-u), that does not change the values of the received properties. Once the pool is reimported, for example during a reboot, all of the flags get evaluated, and mountpoints inevitably land on top of each other if you're not careful when backing up systems with similar mount points.
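
To check whether that is the case here, the property sources can be inspected on the backup server; an untested sketch using the dataset names from this thread:

zfs get -r -s received,local canmount,mountpoint zroot/trinity zroot/r610

Anything whose SOURCE column still says "received" is carrying the sending server's value.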
 
If you provided the output, I could likely provide a better answer, but here goes.
Eric A. Borisch I provided the output in my post; sorry if it's not clear.
The file is too big to be pasted here, so I uploaded it to Google Drive; it can be accessed here:
https://drive.google.com/open?id=0B4-7cV6bkX_NelFmS01vSmFVZEE

My guess is that you did not change the received canmount or mountpoint values after your initial receive
Could you please tell me how to do that?

Thank you
 
I would guess Eric A. Borisch is on the right track. The dataset you are replicating has a mount point identical to the backup server's home directory and is getting mounted on top of the home directories. You could set canmount=noauto for the replicated dataset on the remote server.

Added: Hopefully none of these trigger changes that prevent the next replication.
zfs set canmount=noauto zroot/trinity
zfs set mountpoint=/trinity zroot/trinity
zfs set readonly=on zroot/trinity

Code:
if zfs list -H -o name -t snapshot | sort | grep "$snapshot_today$" > /dev/null; then...
Tangential point: Could the line below replace the line above?
Code:
if zfs list -H -o name -tsnap -d1 -S creation "$pool" | grep -q "^$snapshot_today$"; then...
or if you specifically want to verify that it is the youngest remote snapshot, you could use
Code:
zfs list -H -o name -t snap -d1 -S creation "$pool" | head -n 1 | grep -q "^$snapshot_today$"
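For what it's worth, the trailing-anchor-only pattern in the original script is looser than the ^...$ versions above; a quick illustration with a made-up snapshot list:
Code:
```shell
# Made-up names; only the first is the snapshot we actually want.
snaps='zroot@2017-05-01
zroot/var@2017-05-01
zroot@2017-05-011'

# Unanchored, the pattern also matches the name with the extra trailing digit:
printf '%s\n' "$snaps" | grep -c "zroot@2017-05-01"      # prints 2
# Anchored with ^ and $, only the exact name matches:
printf '%s\n' "$snaps" | grep -c "^zroot@2017-05-01$"    # prints 1
```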
 
I think it didn’t render quite right on my phone.

Yes, it looks like you have conflicting mountpoints; this is a case where being able to filter properties during receive will be nice. ZFS on Linux already has that, I think.

You'll need to do something like jrm@ mentioned; altering the properties shouldn't require a resend, and they will stick ("local" trumps "received") on your next update. I would avoid receiving with -F after your initial seed; too much can go sideways if something (especially when scripted) isn't just right.
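
Dropping -F would make the receive line from the scripts earlier in the thread look something like this (untested sketch, mbuffer omitted for brevity; same variables as in those scripts):

zfs send -R -i $snapshot_yesterday $snapshot_today | ssh root@$host "zfs receive -du $destination"

Without -F, an incremental receive refuses to run if the destination has been modified since its latest snapshot, instead of silently rolling the destination back.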

Good luck!
 
Hi
Thank you very much for all your help..
So to correct the problem, I need to run the following commands for both datasets?
zfs set canmount=noauto zroot/trinity
zfs set mountpoint=/trinity zroot/trinity
zfs set readonly=on zroot/trinity
---
zfs set canmount=noauto zroot/r610
zfs set mountpoint=/r610 zroot/r610
zfs set readonly=on zroot/r610

Is that correct?
 
Hi
Thank you very much for all your help..
So to correct the problem, I need to run the following commands for both datasets?
zfs set canmount=noauto zroot/trinity
zfs set mountpoint=/trinity zroot/trinity
zfs set readonly=on zroot/trinity
---
zfs set canmount=noauto zroot/r610
zfs set mountpoint=/r610 zroot/r610
zfs set readonly=on zroot/r610

Is that correct?

You'll want to focus on any mountpoint where canmount=on and the mountpoint is set to something that conflicts. The readonly property can likely just be set in the two locations above (the roots of the zfs recv destinations), but any mountpoints below that which are also not "inherited" may need attention, either with canmount=noauto or by changing the mountpoint.

zfs get -rt filesystem -s received mountpoint zroot

might be useful for finding the datasets with mountpoints that need attention.
 