Weird disk usage/free space in ZFS?

Greetings.

From the root directory "/", I run du -hsx * | sort -rh | head -10 and get:


Code:
# cd /
# du -hsx * | sort -rh | head -10
476G    root
 84G    usr
3.1G    var
119M    boot
8.7M    lib
8.3M    rescue
4.3M    sbin
3.9M    tmp
2.2M    etc
960K    bin

which totals about 563.1 GB of disk usage on my system. However, the zpool command reports:

Code:
# zpool list
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
zroot  1.80T  1.35T   453G        -         -    12%    75%  1.00x    ONLINE  -

So it seems 1.35 TB is in use and only 453 GB is left? How? What am I missing here?

P.S.: I have 2 TB of disk space in total (2x 1 TB NVMe disks in a ZFS stripe).
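As a rough sanity check (all figures above are rounded, so this is only an estimate), the gap between zpool's ALLOC and the du total can be computed:

```shell
# Rough arithmetic with the numbers reported above:
# zpool ALLOC is 1.35T, while du only saw about 563.1G.
awk 'BEGIN {
    alloc_g = 1.35 * 1024   # 1.35T expressed in G
    du_g    = 563.1         # total from du -hsx
    printf "space du cannot see: %.1fG\n", alloc_g - du_g
}'
```

That is a lot of space du has no visibility into (du only counts files in mounted filesystems, not snapshot or clone data).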

Thanks.
 
So it seems I have a huge snapshot; that's probably what's taking the free space?

Code:
# zfs list -t snapshot
NAME                                       USED  AVAIL     REFER  MOUNTPOINT
zroot/ROOT/default@2022-08-10-01:21:08-0   823G      -     1007G  -

But the thing is, I didn't take any snapshot, and I don't remember enabling anything like that. The snapshot name carries a recent date (..2022-08-10-01:21:08); is it possible to find out how, and by what, it was created?

Would freebsd-update fetch install trigger that?
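(The timestamp is actually embedded in the snapshot name itself, and zfs get creation on the snapshot should confirm it. A quick sketch pulling the name apart:)

```shell
# bectl/freebsd-update name these snapshots dataset@YYYY-MM-DD-HH:MM:SS-N,
# so the creation time can be read straight out of the name.
snap='zroot/ROOT/default@2022-08-10-01:21:08-0'
echo "${snap#*@}" | awk -F- '{ printf "created %s-%s-%s at %s\n", $1, $2, $3, $4 }'
```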
 
bectl list reports:
Code:
# bectl list
BE                             Active Mountpoint Space Created
13.1-RELEASE_2022-08-10_012108 -      -          823G  2022-08-10 01:21
default                        NR     /          1.35T 2022-02-08 01:43

so it seems freebsd-update triggered a snapshot that is 823 GB in size?!
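(As a side note, the inactive boot environment can be picked out of that listing mechanically; a sketch using the output above as literal input:)

```shell
# Print boot environments whose Active column is "-", i.e. neither
# active now (N) nor active on the next boot (R).
awk 'NR > 1 && $2 == "-" { print $1 }' <<'EOF'
BE                             Active Mountpoint Space Created
13.1-RELEASE_2022-08-10_012108 -      -          823G  2022-08-10 01:21
default                        NR     /          1.35T 2022-02-08 01:43
EOF
```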

And when I run zfs destroy -nv zroot/ROOT/default@2022-08-10-01:20:24-0 (a dry run), it says:
Code:
cannot destroy 'zroot/ROOT/default@2022-08-10-01:20:24-0': snapshot has dependent clones
use '-R' to destroy the following datasets: zroot/ROOT/13.1-RELEASE_2022-08-10_012024
 
I can suggest a small program of mine that has a special function (zfspurge) to make deleting snapshots easier.
But I have learned that it is better not to :)
If you are curious, the free space (on BSD) is just an estimate; it is computed roughly like this:
Code:
#include <sys/param.h>
#include <sys/mount.h>   /* statfs(2) */
#include <stdlib.h>      /* getbsize(3) */
#include <string>

/* Note: fsbtoblk() is a macro from FreeBSD's df(1) sources, not libc. */
long long free_space(const std::string &i_path)
{
    struct statfs stat;
    if (statfs(i_path.c_str(), &stat) != 0)
        return 0;
    static long blocksize = 0;
    int dummy;
    if (blocksize == 0)
        getbsize(&dummy, &blocksize);   /* honors the BLOCKSIZE env var */
    return fsbtoblk(stat.f_bavail, stat.f_bsize, blocksize) * 1024;
}
 
Ahh, you updated to 13.1 release from something before that, perhaps a 12.x?
Boot Environments (BEs) are clones of snapshots.
Right now you are booted into a BE named "default" and it's set for your next boot.
If that is the one you want, then run bectl destroy -o 13.1-RELEASE_2022-08-10_012108 to get rid of the other boot environment and its clones/snapshots.
 

Thanks.

Well, not really 12; it was from 13.0-R.

So bectl destroy -o 13.1-RELEASE_2022-08-10_012108 would also remove the snapshot shown in the zfs list -t snapshot output, I assume, right?
 
It should. The snapshot/clone is basically the difference between 13.0-R and the current 13.1-R (I'm assuming that's what's in default at the moment), which would be mostly 13.0-R.
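Back-of-the-envelope, destroying that BE should return its Space to the pool (sizes taken from the listings earlier in the thread, all rounded):

```shell
awk 'BEGIN {
    free_g = 453    # current FREE from zpool list
    be_g   = 823    # Space held by the old boot environment
    printf "free after destroy: ~%dG (~%.2fT)\n", free_g + be_g, (free_g + be_g) / 1024
}'
```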
 
Starting with the upgrade to FreeBSD 13.1-RELEASE-p1, the freebsd-update(8) procedure automatically creates a backup of the initial boot environment.

There have been complaints about this "feature", not because it's a bad idea, but because it's an undocumented surprise!
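For anyone who wants to opt out: newer freebsd-update versions expose a knob for this in freebsd-update.conf. Check freebsd-update.conf(5) on your release first; the option name below is an assumption to verify, since it may not exist on older releases:

```
# /etc/freebsd-update.conf
# Disable automatic boot environment creation during updates.
# (Verify this option exists in freebsd-update.conf(5) on your release.)
CreateBootEnv no
```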
 
I registered just to reply and say this "feature" bit me hard today. Still a noob to FreeBSD (a convert from RHEL), but this is not the kind of unannounced change I expect from the FreeBSD community, and I'm disappointed.

Creating snapshots, even small ones, in environments that run tight on disk space is a *really* bad idea. I understand why they're doing it, but failing to document it well enough that users run into this situation is a poor experience.

Most of my FreeBSD servers run Usenet services, but those use cyclic buffers on the filesystem that the news server rolls over, so it never runs out of disk space and I never have to look at it. I generally don't keep even a few GB of free space because I don't need it. After updating yesterday and rebooting, several servers filled up sometime this morning, and it took me a few hours to figure out where the space had gone that I knew I had not consumed.
 