Solved: Not enough space anymore to copy the deleted files back to a previously created zpool backup.

Hello to everyone.

ZFS is a never-ending challenge for me: it always presents complex situations to solve and understand. What I'm trying to do is copy back all the files that were present inside the zpool "31-03-2022-b" before I upgraded 13.1 to 14-CURRENT. Since the upgrade didn't work, and in any case I don't need 14-CURRENT anymore, I would like to delete all its files and copy back the 13.1 files that I had previously saved to another disk. To my surprise, I see that the (dataset or zpool?) "31-03-2022-b" no longer has the free space it had at the beginning.

Instead of performing a new installation, I'm trying to manually remove every file and copy back the previously backed-up files belonging to FreeBSD 13.1, which was installed on that same zpool before. It seems to me that the more files I remove, the less space remains in the zpool; and when I copy back the same files that I deleted, the space doesn't increase again. These are the commands I issue to access the zpool and to copy the files from the external disk to it.

Code:
zpool import -f -R /mnt/zroot2 zroot2    # external disk / zpool
zpool import -f -R /mnt/zroot zroot
zfs mount zroot/ROOT/31-03-2022-b        # original zpool
cd /mnt/zroot
rm -r usr
cd /mnt/zroot2/zroot2/Freebsd-13         # directory where I copied the FreeBSD 13.1 files
rsync -avxHAX usr --exclude '*.core' /mnt/zroot    # copy the 13.1 files from the external disk back to the source zpool

Unfortunately I'm not able to copy the same files back onto the original zpool: it says that there is not enough space available. Now it reports: 17.9 GiB of 45.3 GiB free (60% used); I'm sure that at the beginning this zpool had more than 45 GB.

This is what happens some time after I start copying the usr directory from the external disk to the source disk (I repeat: the very same directory fit perfectly on the source disk at the beginning):

Code:
.....
usr/local/www/apache24/
rsync: [receiver] mkstemp "/mnt/zroot/usr/local/share/zenity/.zenity.ui.ODPCn1" failed: No space left on device (28)
rsync: [generator] recv_generator: mkdir "/mnt/zroot/usr/local/www/nginx-dist" failed: No space left on device (28)
*** Skipping any contents from this failed directory ***
rsync: [generator] recv_generator: mkdir "/mnt/zroot/usr/local/www/wordpress" failed: No space left on device (28)
*** Skipping any contents from this failed directory ***
rsync: [generator] failed to set times on "/mnt/zroot/usr/local/x86_64-portbld-freebsd13.1": No space left on device (28)
rsync: [generator] recv_generator: mkdir "/mnt/zroot/usr/local/x86_64-portbld-freebsd13.1/bin" failed: No space left on device (28)
*** Skipping any contents from this failed directory ***
rsync: [generator] recv_generator: mkdir "/mnt/zroot/usr/local/x86_64-portbld-freebsd13.1/lib" failed: No space left on device (28)
*** Skipping any contents from this failed directory ***
usr/local/www/nginx-dist/
usr/local/www/wordpress/
usr/local/x86_64-portbld-freebsd13.1/
usr/local/x86_64-portbld-freebsd13.1/bin/
usr/local/x86_64-portbld-freebsd13.1/lib/
rsync: [generator] failed to set times on "/mnt/zroot/usr/no": No space left on device (28)
usr/no/
rsync: [generator] recv_generator: mkdir "/mnt/zroot/usr/no/sbin-n" failed: No space left on device (28)
*** Skipping any contents from this failed directory ***
rsync: [generator] recv_generator: mkdir "/mnt/zroot/usr/no/sbin-si" failed: No space left on device (28)
*** Skipping any contents from this failed directory ***
rsync: [generator] recv_generator: mkdir "/mnt/zroot/usr/no/sbin_" failed: No space left on device (28)
*** Skipping any contents from this failed directory ***
usr/no/sbin-n/
rsync: [generator] recv_generator: mkdir "/mnt/zroot/usr/no/sr" failed: No space left on device (28)
*** Skipping any contents from this failed directory ***
usr/no/sbin-si/
usr/no/sbin_/
usr/no/sr/
rsync: [generator] recv_generator: mkdir "/mnt/zroot/usr/no/src-" failed: No space left on device (28)
*** Skipping any contents from this failed directory ***
usr/no/src-/
rsync: [generator] recv_generator: mkdir "/mnt/zroot/usr/no/src-si" failed: No space left on device (28)
*** Skipping any contents from this failed directory ***
usr/no/src-si/
rsync: [generator] failed to set times on "/mnt/zroot/usr/obj": No space left on device (28)
rsync: [generator] recv_generator: mkdir "/mnt/zroot/usr/obj/usr" failed: No space left on device (28)
 
To be able to answer the question, more information is needed.
In the future, try to provide data that allows the question to be answered.
Code:
zpool list -v
zfs list -o space
might provide some insight.
Also, for the copying you can use "clone".
 
Code:
# zpool list -v

NAME         SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT

zroot        460G   446G  14.2G        -         -    76%    96%  1.00x    ONLINE  /mnt/zroot
  gpt/zfs0   460G   446G  14.2G        -         -    76%  96.9%      -    ONLINE
zroot2       928G   776G   152G        -         -     4%    83%  1.00x    ONLINE  /mnt/zroot2
  da2p4      928G   776G   152G        -         -     4%  83.6%      -    ONLINE

# zfs list -o space

NAME                                          AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD

zroot                                            0B   446G        0B     96K             0B       446G
zroot/ROOT                                       0B   434G        0B     96K             0B       434G
zroot/ROOT/13.1-RELEASE-p2_2022-11-11_174736     0B   272K        0B    272K             0B         0B
zroot/ROOT/13.1-RELEASE-p3_2022-11-17_193042     0B   452K        0B    452K             0B         0B
zroot/ROOT/13.1-RELEASE_2022-09-01_041825        0B  1.09M        0B   1.09M             0B         0B
zroot/ROOT/13.1-RELEASE_2022-12-18_110524        0B     8K        0B      8K             0B         0B
zroot/ROOT/31-03-2022-a                          0B   524K        0B    524K             0B         0B
zroot/ROOT/31-03-2022-b                          0B   434G      389G   45.3G             0B         0B
zroot/tmp                                        0B  15.0M        0B   15.0M             0B         0B
zroot/usr                                        0B  8.71G        0B    120K             0B      8.71G
zroot/usr/home                                   0B   192K        0B    192K             0B         0B
zroot/usr/ports                                  0B  8.71G        0B   8.71G             0B         0B
zroot/usr/src-                                   0B    96K        0B     96K             0B         0B
zroot/var                                        0B  2.46G        0B    136K             0B      2.46G
zroot/var/audit                                  0B    96K        0B     96K             0B         0B
zroot/var/crash                                  0B  1.11G        0B   1.11G             0B         0B
zroot/var/log                                    0B  4.09M        0B   4.09M             0B         0B
zroot/var/mail                                   0B  1.33G        0B   1.33G             0B         0B
zroot/var/tmp                                    0B  18.1M        0B   18.1M             0B         0B

zroot2                                         124G   776G        0B    776G             0B      19.6M
zroot2/ROOT                                    124G   280K        0B     96K             0B       184K
zroot2/ROOT/default                            124G   184K        0B    184K             0B         0B
zroot2/tmp                                     124G    96K        0B     96K             0B         0B
zroot2/usr                                     124G   384K        0B     96K             0B       288K
zroot2/usr/home                                124G    96K        0B     96K             0B         0B
zroot2/usr/ports                               124G    96K        0B     96K             0B         0B
zroot2/usr/src                                 124G    96K        0B     96K             0B         0B
zroot2/var                                     124G   576K        0B     96K             0B       480K
zroot2/var/audit                               124G    96K        0B     96K             0B         0B
zroot2/var/crash                               124G    96K        0B     96K             0B         0B
zroot2/var/log                                 124G    96K        0B     96K             0B         0B
zroot2/var/mail                                124G    96K        0B     96K             0B         0B
zroot2/var/tmp                                 124G    96K        0B     96K             0B         0B
 
I don't know your history, so I could be missing some details, but it seems to me you are experimenting with FreeBSD and ZFS; you can learn a lot from that. When you are in need of nitty-gritty details, it is generally useful to have an understanding of the important properties and fundamentals of ZFS.

ZFS is a COW (Copy On Write) file system, and this has big implications for disk space management. Disk space management is also about how the use of snapshots affects disk space usage. Snapshots initially take up practically zero space; that changes with the passage of time as you experiment with additional installations of FreeBSD versions and take snapshots. A snapshot freezes the state of your files; that means deleting files after the snapshot has been taken removes them from your current view, but they are still contained in the snapshot: the size of the snapshot has now increased because of those file deletions. If you don't know that, I can imagine you thinking: "It seems to me that the more files I remove, the less space remains in the zpool."

When you are aware of the effects that the use of ZFS and snapshots or clones (a clone is a writable version of a snapshot) can have on disk space consumption, you'll have an idea where to look. Understanding the ZFS tools* at your disposal, you'll know how to get more insight to find answers to your problem by yourself or provide useful data for others to help you.

Rich (BB code):
# zfs list -o space
NAME                                          AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
[...]
zroot/ROOT/31-03-2022-b                          0B   434G      389G   45.3G             0B         0B
Highlighted you see the high snapshot disk space usage. For snapshots and their management, have a look at zfs-snapshot(8) and zfs-destroy(8).
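A hedged sketch of how one could inspect and reclaim that space (the snapshot name below is a placeholder; run the list first, and note that ZFS refuses to destroy a snapshot that still has a dependent clone, such as a boot environment created from it):

Code:
# show every snapshot under the dataset and the space each one holds
zfs list -t snapshot -r zroot/ROOT/31-03-2022-b
# dry run: -n -v only reports what would be freed
zfs destroy -n -v zroot/ROOT/31-03-2022-b@some-old-snapshot
# when you are sure you no longer need it, destroy it for real
zfs destroy zroot/ROOT/31-03-2022-b@some-old-snapshot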
For ZFS (including ZFS disk space management and snapshots) from a user and admin point of view, I recommend you have a look at the two ZFS (e)books mentioned in FreeBSD Development: Books, Papers, Slides. I have found these two books a most worthwhile investment of time and money; they contain a wealth of practical information whose equivalent would have cost me much more to find from reliable sources on the internet.

___
* like zfs-list(8): zfs list -o space and zfs list -t snapshot
 
I'm trying to learn ZFS, but I find it tricky. I'm not sure I need the features it offers. Maybe UFS is enough for me.
 
At 96% your pool is nearly overflowing.
For any filesystem - especially ZFS - it's recommended not to fill more than about 60...70% of the capacity,
because beyond that the filesystem has no air left to breathe, never mind room to write additional data.
That's also why your pool shows a fragmentation rate of 76% - the system barely gets a chance to clean up anymore
(I bet your drives are working continuously like mad 😁)

However,
you are using ZFS.
Use its benefits.
You may add additional partitions/disks to the pool(s) to enlarge them (see the sketch after the list below).

(And of course:
  • check for useless data
  • check if something fills up your pools
  • delete useless data (obsolete snapshots, see post above from Erichans)
  • do backups first!
)
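As a hedged sketch of that "enlarge" option (gpt/zfs1 is a hypothetical label for a new disk or partition):

Code:
# stripe an additional device into the pool: more space, but no redundancy;
# think twice, because shrinking a pool again is much harder than growing it
zpool add zroot gpt/zfs1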

Of course it's your decision whether you really need/want ZFS, since UFS is a very good fs too, anyway.
And you're right that ZFS is not as easy as FAT32 😁 - but it's no rocket science either. 😎

Its snapshot feature is really a very nice thing to have.
But since in my eyes it may be the only usable feature of ZFS on a single-partition pool, and snapshots are also provided by other filesystems (UFS does, I think), I don't really see any actual use for ZFS on single-partition pools.
To me the largest benefits of ZFS lie in combining partitions into pools,
thus creating data storage security through redundancy - of course there are also ways of having RAID on other filesystems -
and in enlarging pools by adding disks.

A couple of weeks ago I decided my "working storage" was too small at 256G and I wanted to enlarge it to 1TB.
So I attached two new 1TB disks to the mirrored pool of two disks (one partition only each), resilvered, and removed the old, smaller ones.
Voilà, the pool moved from 2x256G to 2x1TB.
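A hedged reconstruction of that procedure, with hypothetical pool and label names (check zpool status between the steps):

Code:
zpool set autoexpand=on wpool            # let the pool grow to the new disk size
zpool attach wpool gpt/old0 gpt/new0     # resilver onto the first 1TB disk
zpool attach wpool gpt/old1 gpt/new1     # and onto the second
# wait until 'zpool status wpool' reports the resilver has finished, then:
zpool detach wpool gpt/old0
zpool detach wpool gpt/old1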

Very useful tip I learned recently:
Use GPT labels!
You may follow the link jbo posted.
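A minimal sketch of what that looks like (device, partition index, and label are hypothetical):

Code:
# give partition 4 on da2 a stable, meaningful GPT label ...
gpart modify -i 4 -l backup0 da2
# ... and from then on refer to it as /dev/gpt/backup0 instead of da2p4;
# the label stays the same even if the disk is renumbered across reboots
ls /dev/gpt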

Anyway:
ZFS is worth a try.
If you have some spare hardware - another old machine and a couple of old HDDs -
experiment!
You may try out insane things without losing valuable stuff, learn a lot, and become confident in what you're doing.

Happy holidays and best wishes!
 
What I'm trying to do is copy back all the files that were present inside the zpool "31-03-2022-b" before I upgraded 13.1 to 14-CURRENT. Since the upgrade didn't work, and in any case I don't need 14-CURRENT anymore, I would like to delete all its files and copy back the 13.1 files that I had previously saved to another disk,
You are complicating things unnecessarily.

I'm trying to learn ZFS, but I find it tricky. I'm not sure I need the features it offers.
One of the advantages of ZFS on FreeBSD is to create boot environments with bectl(8).
Rich (BB code):
DESCRIPTION
     The bectl command is used to setup and interact with ZFS boot
     environments, which are bootable clones of datasets.

     Boot environments allow the system to be upgraded, while preserving the
     old system environment in a separate ZFS dataset.

Create a new boot environment, activate it, reboot the system, upgrade the system.

If the upgraded system is not to your liking, activate the old boot environment (or, after booting, choose the old BE at the boot menu under 'Options: 8. Boot Environments'), boot the system, and destroy the upgraded boot environment, all with bectl(8).

It's that easy. No copying files back and forth to external disks.
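A hedged sketch of that workflow (the BE name 14-test is hypothetical; check bectl list for the real names on your system):

Code:
bectl create 14-test        # clone the running system into a new boot environment
bectl activate 14-test      # it will be booted on the next reboot
shutdown -r now
# ... perform the upgrade inside the new BE; if it turns out badly:
bectl activate default      # re-activate the old BE
shutdown -r now
bectl destroy -o 14-test    # discard the failed upgrade and its origin snapshot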

Also, it's much better to practice on a virtual machine beforehand than on real hardware, rather than having to clean up the messed-up system you now have.

If I'm not mistaken you run the bhyve(8) hypervisor. There are management systems like sysutils/vm-bhyve which can create a clone from an existing VM. On that cloned VM you can test as you wish without worrying about messing things up. If it gets messed up, destroy the VM and clone a new one.

Easy cloning of VMs with sysutils/vm-bhyve is another advantage of ZFS.

vm(8)
Rich (BB code):
 clone name[@snapshot] new-name
             Create a clone of the virtual machine name, as long as it is
             currently powered off.  The new machine will be called new-name,
             and will be ready to boot with a newly assigned UUID and empty
             log file.

             If no snapshot name is given, a new snapshot will be taken of the
             guest and any descendant datasets or ZVOLs.  If you wish to use
             an existing snapshot as the source for the clone, please make
             sure the snapshot exists for the guest and any child ZVOLs,
             otherwise the clone will fail.

             Please note that this function requires ZFS.
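For example (the guest name is hypothetical):

Code:
# clone a powered-off guest and boot the copy for experiments
vm clone freebsd13 freebsd13-test
vm start freebsd13-test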
 
A lot of you guys gave me good advice, but it is abstract and general. It requires a lot of study and effort on my part to understand how to do what I'm trying to do. I don't need or want to study every aspect of ZFS, or even many of them: only how to free the necessary space on the zpool to copy back the FreeBSD files that were already there. I would like some direct suggestions (and practical examples) about how to do this specific task only. And I have another wish: not to be overwhelmed with a lot of technical lectures. Thanks.
 
Not so easy, I think:

Code:
# clone
clone: Command not found.

---> If I'm not mistaken you run the bhyve(8) hypervisor. There are management systems like sysutils/vm-bhyve which can create a clone from an existing VM.

The damaged FreeBSD installation is not a VM; it's a physical one. Anyway, I don't use that wrapper.
 
I'm trying to learn ZFS, but I find it tricky. I'm not sure I need the features it offers. Maybe UFS is enough for me.

Far be it from me to discourage anybody from learning any new tricks, and as you see you'll get lots of helpful and cluey hand-holding here, however ...

Every day that I see people struggling with ZFS - not only 'newbies' either - is another day I'm reminded that I've never had a real problem with UFS in, um, 24 years, although ...

These days I'm just running a couple of laptops and a phone. Were I still running servers I'm certain I'd think differently. As always, YMMV.
 
I always tend to omit that I use the PC as a desktop / home user / hobbyist and only a little as a server / for production, but this is a fundamental difference. I love to learn new things, but only if I don't feel stressed. As far as ZFS is concerned, I'm split between two emotional states: 1) take a step back to UFS, instead of spending so much time and energy on something tricky that always seems to behave differently and that offers techniques to learn but also situations to swear at; but 2) I like challenges...
 
ZFS has the advantage that you can easily take snapshots and revert to a previous state.
UFS has the advantage that it is simple and rock-stable.
Some knowledge of ZFS does not hurt even when you use UFS.
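A minimal sketch of that snapshot/revert cycle (dataset and snapshot names are hypothetical):

Code:
zfs snapshot zroot/usr/home@before-experiment
# ... change or delete files under /usr/home ...
zfs rollback zroot/usr/home@before-experiment   # everything is back
zfs destroy zroot/usr/home@before-experiment    # drop it when done, freeing the space it held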
 
So, could this command work as expected?

Code:
clone /mnt/zroot2/zroot2/Freebsd-13 /mnt/zroot

where :

/mnt/zroot2/zroot2/Freebsd-13 : the SOURCE directory now, where I have copied all the files of FreeBSD 13.1 using the command:

Code:
rsync -avxHAX * --exclude '*.core' /mnt/zroot2/zroot2/Freebsd-13

/mnt/zroot is the zpool (the DEST directory now) where the FreeBSD 13 files were originally stored, but now the files stored there are mixed: some of them come from 13, some others from 14. I suppose I should remove every file from the DEST directory before running the clone command? I always import / mount the zpool (with the snapshot inside?) with these commands:

Code:
zpool import -f -R /mnt/zroot zroot
zfs mount zroot/ROOT/31-03-2022-b
 
Looks good.
You can use:

Code:
clone -y -d -v 1 /mnt/source /mnt/destination

This command will delete the destination before cloning.
 

What I don't understand is why the clone command should work when the rsync command says there is not enough space available. What's so special about the clone command that rsync doesn't have?
 
Unfortunately it didn't work; exactly the same happened as with rsync.

Code:
# clone -y -d -v 1 /mnt/zroot2/zroot2/Freebsd-13 /mnt/zroot
......
File /mnt/zroot/.cshrc could not be opened for writing: No space left on device.
Destination directory /mnt/zroot/data could not be created: No space left on device.
Destination directory /mnt/zroot/bin could not be created: No space left on device.
Destination directory /mnt/zroot/usr could not be created: No space left on device.
File /mnt/zroot/.profile could not be opened for writing: No space left on device.
438029 items copied, 43190.0 MB in 3938.48 s -- 11.0 MB/s
Leaked memory: 0 bytes
1495 errors occured.

Where have the files been copied? I don't see them in the destination directory...

Code:
/mnt/zroot # ls
Backup    etc_old    media    opt    var
 
---> To make space, in case you don't need this data:

I don't know if I need this data. I mean, I don't actually know what data remains inside it. It seems that the clone command removed almost every directory inside it and copied the previously backed-up files somewhere, but I'm not able to see where. My question is whether I will still be able to boot this system if I remove the "31-03-2022-b" snapshot and copy the old files back onto the same (ZFS) disk.

I have copied the files that I extracted from the snapshot called "31-03-2022-b" to another disk, "formatted" with the UFS file system, because I wanted to check whether they would be able to boot (in fact they belong to a FreeBSD 13.1 installation), but they didn't. It seems that those installation files are bound to a ZFS-style disk. Is there a method to convert them, to eventually turn a ZFS FreeBSD 13.1 installation into a UFS FreeBSD 13.1 installation? The error given is that it can't find the boot entry, or something like that.
 
Code:
# zfs list -o space

NAME                                          AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD

zroot                                          152K   446G        0B     96K             0B       446G
zroot/ROOT                                     152K   434G        0B     96K             0B       434G
zroot/ROOT/13.1-RELEASE-p2_2022-11-11_174736   152K   272K        0B    272K             0B         0B
zroot/ROOT/13.1-RELEASE-p3_2022-11-17_193042   152K   452K        0B    452K             0B         0B
zroot/ROOT/13.1-RELEASE_2022-09-01_041825      152K  1.09M        0B   1.09M             0B         0B
zroot/ROOT/13.1-RELEASE_2022-12-18_110524      152K     8K        0B      8K             0B         0B
zroot/ROOT/31-03-2022-a                        152K   524K        0B    524K             0B         0B
zroot/ROOT/31-03-2022-b                        152K   434G      409G   25.0G             0B         0B
zroot/tmp                                      152K  15.0M        0B   15.0M             0B         0B
zroot/usr                                      152K  8.71G        0B    120K             0B      8.71G
zroot/usr/home                                 152K   192K        0B    192K             0B         0B
zroot/usr/ports                                152K  8.71G        0B   8.71G             0B         0B
zroot/usr/src-                                 152K    96K        0B     96K             0B         0B
zroot/var                                      152K  2.46G        0B    136K             0B      2.46G
zroot/var/audit                                152K    96K        0B     96K             0B         0B
zroot/var/crash                                152K  1.11G        0B   1.11G             0B         0B
zroot/var/log                                  152K  4.09M        0B   4.09M             0B         0B
zroot/var/mail                                 152K  1.33G        0B   1.33G             0B         0B
zroot/var/tmp                                  152K  18.1M        0B   18.1M             0B         0B

# zfs list -t snapshot

NAME                                            USED  AVAIL     REFER  MOUNTPOINT

zroot/ROOT/31-03-2022-b@2022-03-31-01:40:42-0     0B      -     43.4G  -
zroot/ROOT/31-03-2022-b@2022-03-31-01:41:55-0     0B      -     43.4G  -
zroot/ROOT/31-03-2022-b@2022-09-01-04:18:25-0  8.17G      -      308G  -
zroot/ROOT/31-03-2022-b@2022-11-11-17:47:36-0  2.41G      -      321G  -
zroot/ROOT/31-03-2022-b@2022-11-17-19:30:42-0  7.79G      -      343G  -
zroot/ROOT/31-03-2022-b@2022-12-18-11:05:24-0  25.6G      -      326G  -
 
Code:
# zpool import -f -R /mnt/zroot zroot
# zfs mount zroot/ROOT/31-03-2022-b
# bectl list
libbe_init("") failed.
bectl isn’t going to work on an altroot-imported filesystem

1) Delete an old boot environment you know you don’t need. Perhaps zroot/ROOT/13.1-RELEASE-p2_2022-11-11_174736
2) set the zpool bootfs property (on zroot) to point to the one you want.
3) make sure all the boot environments (zroot/ROOT/*) are set to canmount=noauto and mountpoint=/
4) reboot
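A hedged sketch of steps 1) to 3); the names are taken from the listings earlier in the thread, so verify them with zfs list before destroying anything:

Code:
# 1) delete a boot environment you no longer need
zfs destroy -r zroot/ROOT/13.1-RELEASE-p2_2022-11-11_174736
# 2) point the pool at the boot environment you want to boot
zpool set bootfs=zroot/ROOT/31-03-2022-b zroot
# 3) every BE should only mount when selected at boot
zfs set canmount=noauto zroot/ROOT/31-03-2022-b
zfs set mountpoint=/ zroot/ROOT/31-03-2022-b
# 4) reboot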
 