Solved VM Migration (ZAP destroy & ZFS destroy/umount don't work)

Lamia

Well-Known Member

Reaction score: 50
Messages: 318

This thread is a follow-up to the thread here.


Anyone know why zap destroy || zfs destroy fails on some filesystems such as docker, iocage, etc.? I created backups with both core zfs and zap, and transported the zap backups [using zap] between PCs.

I later tried "zap destroy" & "zfs destroy (-r) SNAPSHOT", which are meant to destroy all expired snapshots and all existing snapshots/backups respectively, but no luck. All [supposedly destroyed] snapshots are still available after restarting the PC.

Code:
# zfs list
zroot/docker                                                                   20.9G  8.27G  1.10G  /usr/docker
zroot/docker/53e834dd0531caf69a79cb0552074b55c0b12d3281d9d176a04280d0678c76c4   346M  8.27G   346M  legacy
zroot/iocage/base/11.0-RELEASE/root/lib                                       5.98M  8.27G  5.98M  /iocage/base/11.0-RELEASE/root/lib

I have deleted all files in the directories and unmounted all the filesystems (iocage, docker), yet they all come back mounted after restarting the PC. I also changed the filesystem properties, e.g. 'zfs set canmount=off', but no luck.

I need a lot of free space, so I want all those filesystems/mountpoints gone.
 

rigoletto@

Daemon
Developer

Reaction score: 941
Messages: 1,927

I do not know what the exact situation is, but zap destroy will only destroy snapshots created by sysutils/zap. It does not touch any snapshot not created by zap itself.

Also, is there anything useful in /var/log/messages?
 
OP
Lamia

Thank you lebarondemerde. Nothing valuable in the messages log other than pf-related entries - kern.crit, ospf no match, etc.

And yes, I mentioned that zap supposedly destroys snapshots made by it, yet whenever I rerun the command, it shows that the snapshots still exist and are being destroyed again. I need to get rid of the mounts & filesystems.
 

rigoletto@

I don't know about docker, but are there any iocage jails, or have you already destroyed all of them? I mean, is iocage running?
There may be something preventing the snapshots from being destroyed. Just a guess.

You may also look at those filesystem properties to see if there is anything relevant.

What if you destroy the filesystems?

If you do not mind removing all the snapshots you have, you can try THIS script.
 

Datapanic

Well-Known Member

Reaction score: 178
Messages: 370

Do the datasets you are trying to destroy have zap:snap=on set? Use zfs get zap:snap to find out. I think you then make a snapshot with zap snap -v 1d (this would give it a one-day life), then destroy it with zap destroy -v. zfs list -H -t snap -o name can be used to list the snapshots.
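Read as discrete steps, that suggestion would look something like the sketch below. It assumes sysutils/zap is installed; the dataset name is only an example from this thread, and the guard keeps the script inert on a machine without zfs:

```shell
#!/bin/sh
# Sketch of the zap check/snapshot/destroy cycle described above.
dataset=zroot/docker
if command -v zap >/dev/null 2>&1; then
    zfs get zap:snap "$dataset"     # is zap managing this dataset?
    zap snap -v 1d                  # snapshot all zap:snap=on datasets, 1-day TTL
    zfs list -H -t snap -o name     # list snapshot names, including zap's
    zap destroy -v                  # destroy expired zap-created snapshots
fi
```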
 

ShelLuser

Son of Beastie

Reaction score: 1,726
Messages: 3,546

Anyone know why zap destroy || zfs destroy fails on some filesystems such as docker, iocage, etc.? I created backups with both core zfs and zap, and transported the zap backups [using zap] between PCs.

I later tried "zap destroy" & "zfs destroy (-r) SNAPSHOT", which are meant to destroy all expired snapshots
Hardly enough information to go on. What is the exact zfs command you used? You mention SNAPSHOT, but that doesn't tell me anything; please be more specific.

Also: what output does the zfs command give you after you issued the destroy command?

(edit)

I don't know about zap (I tend not to rely on 3rd party software), but the zfs destroy -r command does not necessarily destroy expired snapshots. All it does is destroy the specified snapshot, and when -r is used it will recurse through the child ZFS filesystems to find further matching snapshots.
 
OP
Lamia

I don't know about docker, but are there any iocage jails, or have you already destroyed all of them?
I have destroyed all of them. I wanted to use iocage some years back. I gave up and later joined the ezjail bandwagon.

You may also look at those filesystem properties to see if there is anything relevant.
I have reset the canmount and I can't imagine what else to do.
What if you destroy the filesystems?
I am interested in whatever would do that. May I add that I have already deleted all files in the mounted directories; I ran chflags and rm -rf. But the filesystems keep getting mounted, perhaps because zap created its own snapshots in them, and all attempts to destroy them have been futile.
If you do not mind removing all the snapshots you have, you can try THIS script.
I am trying it out now. And it has done a superb job so far - over 150GB freed & still counting :):)
 
OP
Lamia

I don't know about zap (I tend not to rely on 3rd party software), but the zfs destroy -r command does not necessarily destroy expired snapshots. All it does is destroy the specified snapshot, and when -r is used it will recurse through the child ZFS filesystems to find further matching snapshots.
Thanks ShelLuser! The bottom line is that I needed to include more arguments in the zfs command. The loop in the script now helps get rid of all the hidden snapshots and other filesystems.
 
OP
Lamia

While the script suggested by lebarondemerde removed the ZFS snapshots and freed over 150GB of space, the filesystems are still there and unmounting still does not work.

Code:
# zfs get zap:snap
zroot/docker/0d045ff5df700cf9b0b53c0ca258927e8bda0d5ff62e61c74450e82cdfaf69bc            zap:snap  off       inherited from zroot/docker
zroot/docker/0d045ff5df700cf9b0b53c0ca258927e8bda0d5ff62e61c74450e82cdfaf69bc@135478799  zap:snap  off       inherited from zroot/docker

zroot/iocage/jails                                                                       zap:snap  off       inherited from zroot/iocage
zroot/iocage/releases                                                                    zap:snap  off       inherited from zroot/iocage
zroot/iocage/releases/11.0-RELEASE                                                       zap:snap  off       inherited from zroot/iocage
zroot/iocage/releases/11.0-RELEASE/root                                                  zap:snap  off       inherited from zroot/iocage
zroot/poudriere                                                                          zap:snap  off       local
zroot/poudriere/data                                                                     zap:snap  off       inherited from zroot/poudriere
zroot/poudriere/data/.m                                                                  zap:snap  off       inherited from zroot/poudriere
zroot/poudriere/data/cache                                                               zap:snap  off       inherited from zroot/poudriere
zroot/poudriere/data/logs                                                                zap:snap  off       inherited from zroot/poudriere


zroot/var/audit                                                                          zap:snap  on        inherited from zroot/var
zroot/var/crash                                                                          zap:snap  on        inherited from zroot/var
zroot/var/log                                                                            zap:snap  on        inherited from zroot/var
I need zroot/var but not the others.


Code:
# zfs get canmount
zroot/docker/fd4ee20bc5a53c292baa854b666c8b966c6e1c48a00dca6a84f0084fa531d1df            canmount  on        default
zroot/docker/fd4ee20bc5a53c292baa854b666c8b966c6e1c48a00dca6a84f0084fa531d1df@665974155  canmount  -         -
zroot/iocage                                                                             canmount  noauto    local
zroot/iocage/.defaults                                                                   canmount  on        default
zroot/iocage/releases/11.0-RELEASE                                                       canmount  on        default
zroot/iocage/releases/11.0-RELEASE/root                                                  canmount  on        default
zroot/poudriere                                                                          canmount  off       local
zroot/poudriere/data                                                                     canmount  on        default
I did change canmount for these datasets, but they all get reset back to 'on'.
I want all the mounts/filesystems/data in those directories gone.
 

jrm@

Daemon
Developer

Reaction score: 473
Messages: 1,205

Here are two reasons why zap would not destroy snapshots. First, as lebarondemerde said, if they were not created by zap [1]. Second, if they originated from a host other than the local host and you did not specify zap destroy -v <snapshot_origin_host> [2].

If you think snapshots created by zap [3] are not being destroyed, please show the output of zfs list -t snap, just to show a few of the snapshot names you think should be destroyed. Please also show the output of zap destroy -v.

Other than the names, there is nothing special about the snapshots created by zap. For example, if you want to destroy all snapshots, try # zfs list -H -o name -t snapshot | xargs -n1 zfs destroy.

[1] The zap:snap property is only used to determine which datasets to snapshot. To determine which snapshots to destroy, pattern matching on the snapshot name is used.
[2] From the man page: By default, only snapshots originating from the local host are destroyed. If a comma separated list of hosts are specified, then only destroy snapshots originating from those hosts.
[3] The snapshot names look like <dataset>@ZAP_<hostname>_<timestamp>--<expiration>.
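Given the naming scheme in [3], the zap-created snapshots can be isolated by name before anything is destroyed. A sketch: the zap_snaps helper and its regex are mine, approximated from that name format, and are not part of zap itself:

```shell
#!/bin/sh
# zap_snaps: filter snapshot names on stdin down to zap-style names,
# i.e. <dataset>@ZAP_<hostname>_<timestamp>--<expiration>.
zap_snaps() {
    grep -E '@ZAP_[^@]+--[0-9]+[smhdwy]$'
}

# Intended usage on a live system (review the list before piping to destroy):
# zfs list -H -o name -t snapshot | zap_snaps
# zfs list -H -o name -t snapshot | zap_snaps | xargs -n1 zfs destroy
```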
 
OP
Lamia

Other than the names, there is nothing special about the snapshots created by zap. For example, if you want to destroy all snapshots, try # zfs list -H -o name -t snapshot | xargs -n1 zfs destroy
Thanks jrm for your input. I was going to update this thread before the new contributions from other members. I would beg to differ: the above command should be changed to
Code:
# zfs list -H -o name -t snapshot | xargs -n1 zfs destroy -rR
Alternatively, the script should be updated to:
Code:
#!/bin/bash
for snapshot in $(zfs list -H -t snapshot | cut -f 1)
do
    zfs destroy -rR "$snapshot"
done
It was only after I used the '-rR' arguments that the nested/child filesystems & snapshots were all gone.

That is, I ran the following for each dataset:
# zfs destroy -rR zroot/docker
# zfs destroy -rR zroot/iocage
......
And I could then remove/delete/unmount the directories.
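For anyone repeating this, zfs(8) documents -n (dry run) and -v (verbose) for zfs destroy, so the recursive destroy can be previewed before anything is actually removed. A cautious sketch, with the dataset name taken from this thread as an example and a guard so it is inert without zfs:

```shell
#!/bin/sh
# Preview, then perform, a recursive destroy. -n is a dry run, -v prints
# what would go; -r covers children and snapshots, -R also clones/dependents.
ds=zroot/docker
if command -v zfs >/dev/null 2>&1; then
    zfs destroy -nrv "$ds"      # preview only; nothing is removed
    # zfs destroy -rR "$ds"     # run for real once the preview looks right
fi
```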
 
OP
Lamia

Please advise on how to proceed with the below:

Capacity on the source VM executing "zap rep -v":
Code:
# zpool iostat
               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
zroot       11.7G  1.96T     75     27   376K   657K
zstorage    980G  1.02T     26     25   674K  1.22M
----------  -----  -----  -----  -----  -----  -----


# zfs list
NAME                                   USED  AVAIL  REFER  MOUNTPOINT
zroot                                 11.7G  1.90T  68.5K  /zroot
zroot/ROOT                            5.53G  1.90T    96K  none
zroot/ROOT/default                    5.53G  1.90T  5.14G  /

Capacity on the backup VM before running "zap rep -v" at the source VM:
Code:
 # zpool iostat
               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
zroot       1.68G  16.2G      0      0    634  4.80K
zstorage     125M  1.81T      0      0  8.49K  9.76K
----------  -----  -----  -----  -----  -----  -----
Capacity on backup VM after running "zap rep -v" at the source VM:
Code:
# zpool iostat
               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
zroot       1.68G  16.2G      0      0    603  5.15K
zstorage    21.1G  1.79T      0      3  7.98K   270K
----------  -----  -----  -----  -----  -----  -----
The zroot@source (i.e. the base OS) is on an expanded pool (not a mirror, cache or anything similar) of two disks (180GB SSD & 2TB HDD). I now want to change the disks to 20GB only, where the 20GB disk serves the base OS. In other words, I want to remove the 2TB from the zroot pool and resize the main disk (with root/swap/boot) from 180GB to 20GB. A separate 2TB SSD with its own pool 'zstorage' will serve the jails & data (this task is pretty much completed).

I will be withdrawing the 2TB HDD from the zroot pool; I don't need it there, even though I could use it for a mirror in zstorage. Hardware maintenance is the responsibility of my hosting provider, so I will leave that to them. And I am not even sure that using the 2TB HDD as a mirror for a 2TB SSD would be good practice even if I were running my own hardware. I do, however, want to use it in the second VM. It is an HDD, hence cheaper to use for backups than the SSD currently in the second VM.

I have completed a backup with zap on the second VM and I am about to change the disk size and re-install the base OS. After that, I will run 'zap rep -v' on the backup VM to transfer data back to the source VM and mount the filesystems to their directories as appropriate.

Question 1: Will this resizing procedure work without a system failure and loss of data/applications? As soon as a fresh install of FreeBSD is achieved, I will use zap to get all files/configs back on the source VM.
Question 2: I can imagine that the zap snapshots were many; hence they doubled the allocated size on the source VM. The zap backup has used over 21GB of space, and I want to downsize the source VM's disk to 20GB; after all, zroot is using 11.7GB and "ncdu -1xo- /" shows a total disk usage of 5.1 GB. How do I use zap "rollback" and ensure that the base/main disk usage is not more than 20GB?
Question 3: Are there other things/pre-cautions to take note of?
 
OP
Lamia

Are you running FreeBSD?
Yes, I am. And that script did work.
/me thinking, I must have many other scripts that are running and started with #!/bin/bash.

(Edit)
I am not sure that script is the subject of discussion now, based on the current developments. No offence :):). I unintentionally and interchangeably communicate in different tones, versions and types of English.
 

jrm@

Yes, I am. And that script did work.
/me thinking, I must have many other scripts that are running and started with #!/bin/bash.
I was just asking, because I haven't done much testing of zap on Linux. I guessed you were running Linux, because /bin/bash does not exist (by default) on FreeBSD. Maybe I looked too quickly, but I don't see the port, shells/bash, creating a link in /bin either.
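Since FreeBSD ships /bin/sh and not /bin/bash, the snapshot-deletion loop can be written as plain POSIX sh. A sketch: the destroy_all_snaps helper is my own, and it takes the command to apply as an argument so the destructive part stays explicit at the call site:

```shell
#!/bin/sh
# destroy_all_snaps: read snapshot names from stdin and run the given
# command on each one. Nothing here is bash-specific.
destroy_all_snaps() {
    while IFS= read -r snap; do
        $1 "$snap"      # $1 intentionally unquoted so "zfs destroy -rR" splits
    done
}

# Intended live usage (commented out; destroys every snapshot on the system):
# zfs list -H -o name -t snapshot | destroy_all_snaps 'zfs destroy -rR'
```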
 
OP
Lamia

Question 1: Will this resizing procedure work without a system failure and loss of data/applications?
Question 2: How do I use zap "rollback" and ensure that the base/main disk usage is not more than 20GB?
Question 3: Are there other things/pre-cautions to take note of?
All went well!!! There was no need for zap, just rsync.
Below are the steps taken; perhaps someone someday might find them useful. I have added instructions to migrate jails to another disk & zpool.
Code:
# Migrate jails from the base OS/host to another disk & zpool
% zpool create -m /zstorage zstorage /dev/vtbd2p1
% rsync -aAXv --exclude-from '/tmp/exclude-list.txt' /usr/jails/ /zstorage/jails

++++++++++++
The exclude-list.txt contains:
/dev/*
/proc/*
/sys/*
/tmp/*
/mnt/*
/media/*
/lost+found
/usr/ports/*
/zstorage/*
++++++++++++++++


# Back up the base OS in compressed form for future reference
% rsync -aAXv --exclude-from '/tmp/exclude-list.txt' / /zstorage/backupbaseos
% cd /zstorage/YOUR_CHOICE_DIR
% tar cvzf - /zstorage/backupbaseos | split -b 50000m - backupbaseos.tar.gz.

# Downsize the VM OR change systems and do a fresh FreeBSD installation using AutoZFS**

# Overwrite files in root "/" with the backup - note the use of the additional "I" argument
% rsync -aAXIv --exclude-from '/tmp/exclude-list.txt' /zstorage/backupbaseos/ /

###MAY NOT BE NECESSARY - BEGINS###
**I can't remember updating loader.conf after rsyncing backupbaseos to /, BUT I did when I mistakenly rsynced to /zroot/.
**zroot, swap0, etc. from the previous VM will all replace the new ones in the new VM.**
**Set your root mountpoint in loader.conf; otherwise, you will be completely locked out - i.e. both old and new ssh logins won't work.**
% vi /boot/loader.conf
vfs.root.mountfrom="zfs:zroot"
###MAY NOT BE NECESSARY - ENDS###

## Restart and you should be able to log in again with the old ssh credentials, and all old VM pkgs should be working again
 

jrm@

Glad you got things working. In case you have to do this again in the future, plain zfs snapshot / zfs send / zfs recv has advantages for dataset migration. Something like:
Code:
zfs snapshot -r zroot@migration1
zfs send -RvecL zroot@migration1 | zfs recv -svu zstorage/backup
zfs set mountpoint=/backup zstorage/backup
zfs snapshot -r zroot@migration2
zfs send -RvecL -I @migration1 zroot@migration2 | zfs recv -svu zstorage/backup
Unless there are lots of changes happening on zroot, the incrementals should be small. If not, you could repeat the last two steps as needed.
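A side benefit of the -s flag on zfs recv above: if a large transfer is interrupted, the partially received state is kept and exposed as a receive_resume_token property, which zfs send -t can pick up to continue instead of restarting. A sketch, with dataset names following the example above and a guard so it is inert without zfs:

```shell
#!/bin/sh
# Resume an interrupted "zfs recv -s" using the stored resume token.
backup_ds=zstorage/backup
if command -v zfs >/dev/null 2>&1; then
    token=$(zfs get -H -o value receive_resume_token "$backup_ds")
    if [ -n "$token" ] && [ "$token" != "-" ]; then
        zfs send -t "$token" | zfs recv -svu "$backup_ds"
    fi
fi
```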
 