[Solved] Issue with cloning a working system with ZFS

Hello,

Any help with this issue is greatly appreciated.

I am trying to clone a small working system with zfs send.

I have created a recursive snapshot with zfs snapshot -r flash@transfer (the working pool is named flash).

Then I have written the datasets to a file like this: zfs send -R flash/usr@transfer > flash.zfs

But the resulting file flash.zfs seems to be smaller than the total allocated space on the ZFS pool, and when I try to restore the datasets on a new pool, only a short list of the datasets is created:
zfs receive -Fvu hdd_sys < flash.zfs
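For completeness, the full sequence of commands is:
Code:
# zfs snapshot -r flash@transfer
# zfs send -R flash/usr@transfer > flash.zfs
# zfs receive -Fvu hdd_sys < flash.zfs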

There are no error messages. I have searched the web and read several posts, but found no help.

My question is how to proceed and where to look.
 
I recently copied a snapshot from one PC to a second one. It was not a complete working system but a jail. The zfs send was similar to yours, including the -R option.

On the receiving system I ran cat flash.zfs | zfs receive hdd_sys. This replicated the jail, including all previous snapshots, to hdd_sys/jailname. I did not use the -u option because I wanted the dataset to be mounted.

Disclaimer: I am not 100% sure whether I missed anything regarding the options. Unfortunately, the commands have not found their way into the command history :-(.
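Put together, the sequence was roughly like this (reconstructed from memory, so the source pool and dataset names are placeholders):
Code:
# zfs snapshot -r pool/jailname@transfer
# zfs send -R pool/jailname@transfer > flash.zfs
# cat flash.zfs | zfs receive hdd_sys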
 
I have done that before, but I am running into an issue this time. It seems that zfs send -R flash/usr@transfer > flash.zfs does not write all the datasets into the file. The file seems to be too small, but no errors are logged. zfs list -t snapshot shows that all the snapshots have been created.

I do not have an idea how to proceed. I have deleted and recreated the snapshots and tried multiple times.

Code:
# ls -l flash.zfs
-rw-r--r--  1 root  wheel  50699515896 Nov 27 11:54 flash.zfs

But over 100G are allocated on the pool.
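To compare the expected stream size with the file, a dry run can be used; if I read zfs-send(8) correctly, -n performs a dry run and -v prints the estimated stream size:
Code:
# zfs send -nvR flash/usr@transfer
# ls -l flash.zfs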
 
I have run an experiment as described below. My PC has two zpools: the root pool is named troot, and there is a data pool named tank. The data pool has a dataset called Dokumente with a few snapshots.
Code:
# zfs list -t snapshot | grep tank | grep Dokumente
tank/data/Dokumente@2023-09-24              47K      -  77.8M  -
tank/data/Dokumente@2023-10-04              39K      -  78.1M  -
tank/data/Dokumente@2023-10-15            85.5K      -  78.6M  -
tank/data/Dokumente@2023-11-04            96.5K      -  80.1M  -
tank/data/Dokumente@2023-11-21            96.5K      -  81.8M  -
tank/data/Dokumente@2023-11-27               0B      -  82.1M  -
The size is documented as below.
Code:
# zfs get all tank/data/Dokumente|less
NAME                 PROPERTY              VALUE                            SOURCE
tank/data/Dokumente  type                  filesystem                       -
tank/data/Dokumente  creation              Fri Sep 15 11:44 2023            -
tank/data/Dokumente  used                  82.5M                            -
tank/data/Dokumente  available             58.8G                            -
tank/data/Dokumente  referenced            82.1M                            -
tank/data/Dokumente  compressratio         1.31x                            -
tank/data/Dokumente  usedbysnapshots       417K                             -
tank/data/Dokumente  usedbydataset         82.1M                            -
tank/data/Dokumente  usedbychildren        0B                               -
...
tank/data/Dokumente  refcompressratio      1.31x                            -
tank/data/Dokumente  written               0                                -
tank/data/Dokumente  logicalused           108M                             -
tank/data/Dokumente  logicalreferenced     108M                             -
The size matches what du(1) reports:
Code:
~/.tank/Dokumente> du -hs
 82M    .
Now I generate the intermediate file with zfs send -R tank/data/Dokumente@2023-11-27 > docs.zfs.
Its size matches the logicalused value above.
Code:
# ls -lh docs.zfs
-rw-r--r--  1 root wheel  109M Nov 27 12:31 docs.zfs
The data is transferred to the root pool troot with cat docs.zfs | zfs receive -u troot/docs; it is not yet mounted. Then the mountpoint is changed with zfs set mountpoint=/mnt troot/docs, after which the dataset is mounted automatically.
Code:
# zfs list
NAME                                           USED  AVAIL  REFER  MOUNTPOINT
tank                                          2.77G  58.8G    24K  none
tank/data                                     2.77G  58.8G  87.6M  /usr/home/chris/.tank
tank/data/Archiv                               808M  58.8G   807M  /usr/home/chris/.tank/Archiv
tank/data/Dokumente                           82.5M  58.8G  82.1M  /usr/home/chris/.tank/Dokumente
...
troot/docs                                    85.2M  92.7G  84.4M  /mnt
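The receive and remount steps described above, spelled out:
Code:
# cat docs.zfs | zfs receive -u troot/docs
# zfs set mountpoint=/mnt troot/docs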
The REFER value is slightly different, as is the compression ratio. A comparison of the directories using mtree(8) shows that the content matches. The snapshots are there as well.
Code:
# zfs list -t snapshot
NAME                                       USED  AVAIL  REFER  MOUNTPOINT
...
tank/data/Dokumente@2023-09-24              47K      -  77.8M  -
...
troot/docs@2023-09-24                      112K      -  79.9M  -
troot/docs@2023-10-04                       96K      -  80.3M  -
troot/docs@2023-10-15                      144K      -  80.8M  -
troot/docs@2023-11-04                      168K      -  82.4M  -
troot/docs@2023-11-21                      168K      -  84.0M  -
troot/docs@2023-11-27                        0B      -  84.4M  -
I hope this example is helpful. Regarding sizes, zfs get all pool/dataset gives useful information.
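For a quick check without paging through all properties, zfs get also accepts a comma-separated list of properties (the property names are the ones shown in the output above):
Code:
# zfs get used,referenced,logicalused,compressratio tank/data/Dokumente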
 