Solved: Not enough space anymore to copy the deleted files from / back to a previously created zpool backup.

bectl isn't going to work on an altroot-imported pool, so the steps have to be done by hand:

1) Delete an old boot environment you know you don't need, for example zroot/ROOT/13.1-RELEASE-p2_2022-11-11_174736.
2) Set the zpool bootfs property (on zroot) to point to the one you want.
3) Make sure all the boot environments (zroot/ROOT/*) are set to canmount=noauto and mountpoint=/.
4) Reboot. (The commands are sketched below.)
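
A rough sketch of those commands, using the dataset names that appear later in this thread (adjust them to your own setup; this assumes the pool is already imported):

Code:
# 1) remove a boot environment you no longer need (add -r if it still has snapshots)
zfs destroy zroot/ROOT/13.1-RELEASE-p2_2022-11-11_174736

# 2) tell the pool which boot environment to boot from
zpool set bootfs=zroot/ROOT/31-03-2022-b zroot

# 3) boot environments should only mount when asked, and at /
zfs set canmount=noauto zroot/ROOT/31-03-2022-b
zfs set mountpoint=/ zroot/ROOT/31-03-2022-b

# 4) reboot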

You might be missing the point of the problem here. There isn't any problem with the booting mechanism. The problem is that I'm not able to copy the files needed to boot back into the snapshot "31-03-2022-b", because there is no space there, even though I have already removed almost every file inside of it (both I and the clone command did that). I don't know how I can free the space necessary to copy back the files that were within that snapshot from the beginning. I don't think it's useful to remove the other snapshots, because they are almost empty. Are you really sure that I can free space within the snapshot "31-03-2022-b" if I remove some almost empty snapshots?
 
If you don't need them, they are worth removing. Otherwise, look at zfs get -r used zroot/ROOT and look for snapshots taking up space that you don't need.
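
For a finer breakdown, zfs list -o space is also worth a look (a sketch, assuming the same pool name; USEDSNAP is the space pinned by snapshots, USEDDS the space used by the live files of each dataset):

Code:
zfs list -o space -r zroot/ROOT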
 
Code:
# zfs get -r used zroot/ROOT

NAME                                           PROPERTY  VALUE  SOURCE

zroot/ROOT                                     used      434G   -
zroot/ROOT/13.1-RELEASE-p2_2022-11-11_174736   used      272K   -
zroot/ROOT/13.1-RELEASE-p3_2022-11-17_193042   used      452K   -
zroot/ROOT/13.1-RELEASE_2022-09-01_041825      used      1.09M  -
zroot/ROOT/13.1-RELEASE_2022-12-18_110524      used      8K     -
zroot/ROOT/31-03-2022-a                        used      524K   -
zroot/ROOT/31-03-2022-b                        used      434G   -
zroot/ROOT/31-03-2022-b@2022-03-31-01:40:42-0  used      0B     -
zroot/ROOT/31-03-2022-b@2022-03-31-01:41:55-0  used      0B     -
zroot/ROOT/31-03-2022-b@2022-09-01-04:18:25-0  used      8.17G  -
zroot/ROOT/31-03-2022-b@2022-11-11-17:47:36-0  used      2.41G  -
zroot/ROOT/31-03-2022-b@2022-11-17-19:30:42-0  used      7.79G  -
zroot/ROOT/31-03-2022-b@2022-12-18-11:05:24-0  used      25.6G  -

As you can see, the only one that takes up almost all the space is 31-03-2022-b.
Despite this, I have removed almost everything from it, and when I mount it
on the folder /mnt/zroot, as you can see below, you see almost nothing.
I don't know which kind of data still fills it and how I can remove the useless data.
Is there a command to defrag the snapshot? Maybe it needs to be cleaned.

Code:
# zpool import -f -R /mnt/zroot zroot
# zfs mount zroot/ROOT/31-03-2022-b
# ls /mnt/zroot

Backup    etc_old    media    opt    var

I suppose that if I remove everything from that snapshot, it will still remain full. Is my guess true?
 
Space available = only 25 GB, but I have removed every file from the snapshot. I want to know which kind of data is inside and how I can free it.

Screenshot_2022-12-25_19-06-52.png
 
Code:
# zfs get -r used zroot/ROOT

NAME                                           PROPERTY  VALUE  SOURCE

zroot/ROOT                                     used      326G   -
zroot/ROOT/31-03-2022-b                        used      326G   -
zroot/ROOT/31-03-2022-b@2022-12-18-11:05:24-0  used      326G   -

# zfs mount zroot/ROOT/31-03-2022-b@2022-12-18-11:05:24-0

cannot open 'zroot/ROOT/31-03-2022-b@2022-12-18-11:05:24-0': snapshot delimiter '@' is not expected here
 
I suppose that if I remove everything from that snapshot, it will still remain full. Is my guess true?
You can't modify the snapshot. It refers to what was on the filesystem at the time of the snapshot. You can either keep it (and the ability to access that version of the filesystem) or delete it and free the space that is no longer referenced.

Note that snapshots are also used as the basis for zfs clones. (Clones will have the origin property set to their, well, origin.) While you can delete all the files within a clone, the data required for the snapshot is still immutable, so you aren't freeing that space.
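
A quick way to see which datasets are clones and which snapshot they originate from (a sketch against the zroot pool from this thread; non-clones show "-"):

Code:
zfs get -r origin zroot/ROOT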

If there are snapshots or clones you can identify that aren't needed, delete those, and then set the bootfs property for the one you want. Once you manage to reboot into a working boot environment, you can use bectl or beadm from ports to manage them.

You can use zfs promote to decide which dataset owns the storage, and eventually (through removal of snapshots and any clones relying on them) free old space that is no longer used.
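
As a sketch, with purely hypothetical names (oldBE is the origin of the snapshot that newBE was cloned from):

Code:
# move ownership of the shared snapshots from oldBE to the clone
zfs promote zroot/ROOT/newBE
# oldBE (and any snapshots still attached to it) can then be destroyed
zfs destroy -r zroot/ROOT/oldBE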
 
I have removed all the snapshots except "31-03-2022-b", copied the previously deleted files back to it, and then tried to boot FreeBSD:

WhatsApp Image 2022-12-26 at 13.05.20.jpeg


That is:

Code:
unable to remount devfs under /dev (error 2)
unable to unlink dev/dev (error 2)


Is there still something that I can do to boot it again?
 
Try to drop into a shell by pressing a function key.
If you can't, boot from a USB stick and drop into a shell there.
Try zpool import XXX
zfs mount -a
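
Roughly like this, assuming the pool is named zroot as earlier in this thread:

Code:
zpool import -f -R /mnt zroot
zfs mount zroot/ROOT/31-03-2022-b    # boot environments are canmount=noauto, so mount explicitly
zfs mount -a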
 
Code:
# zpool import -f -R /mnt/zroot zroot

# zfs list

NAME                      USED  AVAIL     REFER  MOUNTPOINT

zroot                     311G   135G       96K  /mnt/zroot/zroot
zroot/ROOT                300G   135G       96K  none
zroot/ROOT/31-03-2022-b   300G   135G      300G  /mnt/zroot
zroot/tmp                15.0M   135G     15.0M  /mnt/zroot/tmp
zroot/usr                8.71G   135G      120K  /mnt/zroot/usr
zroot/usr/home            192K   135G      192K  /mnt/zroot/usr/home
zroot/usr/ports          8.71G   135G     8.71G  /mnt/zroot/usr/ports
zroot/usr/src-             96K   135G       96K  /mnt/zroot/usr/src-
zroot/var                2.46G   135G      136K  /mnt/zroot/var
zroot/var/audit            96K   135G       96K  /mnt/zroot/var/audit
zroot/var/crash          1.11G   135G     1.11G  /mnt/zroot/var/crash
zroot/var/log            4.09M   135G     4.09M  /mnt/zroot/var/log
zroot/var/mail           1.33G   135G     1.33G  /mnt/zroot/var/mail
zroot/var/tmp            18.1M   135G     18.1M  /mnt/zroot/var/tmp

# zfs mount zroot/ROOT/31-03-2022-b

# cd /mnt/zroot

# ls

2022-12-18-11:05:24-0    etc_old            mnt            share
Backup            home            net            sys
bin            home-backup        opt            tmp
boot            lib            proc            usr
compat            lib64            rescue            vms
data            libexec            root            zroot
etc            media            sbin            zroot2

# zfs mount -a
 
Normally you should have something like:
zfs list
Code:
myzpool                                         127G   192G    96K  none
myzpool/ROOT                                   53.6G   192G    96K  none
myzpool/ROOT/default                           53.6G   192G  53.6G  legacy
 
I'm using FreeBSD installed on a UFS partition. I have imported the zpool from there. Is this correct?
 
Note: I use the bootloader on a UFS partition to boot a ZFS-on-root kernel & filesystem.
You must make a choice:
- where the kernel you want to boot resides, UFS or ZFS
- which root filesystem you want to use, the one on UFS or the one on ZFS.
And configure /boot/loader.conf on UFS accordingly.
Relevant fields in loader.conf could be:
Code:
### BOOT-DEVICE
#loaddev="disk2p9:"
### Selects the default device to load the kernel from
currdev="zfs:ZT/ROOT/default:"
### Specify the root partition to mount
vfs.root.mountfrom="zfs:ZT/ROOT/default"
 
Unfortunately it didn't work, either with those lines added or without them:

Code:
currdev="zfs:zroot/ROOT/31-03-2022-b"
vfs.root.mountfrom="zfs:zroot/ROOT/31-03-2022-b"

I have also detached every USB disk from the PC, but that error is still there. What can I do now?

Taking into consideration that my first two disks are the following:

Code:
# gpart show

=>        34  1953525101  nvd0  GPT  (932G)
          34        2014        - free -  (1.0M)
        2048     1748992     1  efi  (854M)
     1751040   921985024     2  ms-basic-data  (440G)
   923736064       32768     3  ms-reserved  (16M)
   923768832   191490048        - free -  (91G)
  1115258880   833185547     4  ms-basic-data  (397G)
  1948444427         245        - free -  (123K)
  1948444672     1318912     5  ms-recovery  (644M)
  1949763584        2048        - free -  (1.0M)
  1949765632     1310720     6  ms-recovery  (640M)
  1951076352        2048        - free -  (1.0M)
  1951078400     1265657     7  ms-basic-data  (618M)
  1952344057           7        - free -  (3.5K)
  1952344064     1179641     8  ms-basic-data  (576M)
  1953523705        1430        - free -  (715K)

=>       40  976773095  ada0  GPT  (466G)
         40     532480     1  efi  (260M)
     532520       1024     2  freebsd-boot  (512K)
     533544        984        - free -  (492K)
     534528    4194304     3  freebsd-swap  (2.0G)
    4728832  972044288     4  freebsd-zfs  (464G)
  976773120         15        - free -  (7.5K)

and that FreeBSD is installed on the disk ada0, maybe it works if I add:

Code:
loaddev="disk2p1:"

to /boot/loader.conf?
 
Same error, even booting in single-user mode. What modifications should I make to boot that system as if it were installed on a UFS disk?
 
I didn't expect to bring the old FreeBSD installation (almost) back to life. This was a complicated scenario. But anyway, FreeBSD is built in such an old / traditional way that it's still possible to fix it using relatively easy procedures. This does not always happen with Linux.
 
I didn't fix it by myself alone. You gave me a lot of help, as always. Yes, I think it is the architecture, which is kept simple, that helps a lot. Linux evolved into something more complicated because industry asked for more and more features to be added to it, while they aren't so demanding with the FreeBSD developers. This is the most important place to ask and find useful answers. If this place ever shuts down, I think the closure will damage FreeBSD, because the number of geeks who want to try the OS will decrease a lot.
 