ZFS Full Filesystem

After a crash of my server (swap was full) I'm seeing strange behavior in the storage system.
The system reports that the filesystem is full...

Code:
# df -h
Filesystem            Size    Used   Avail Capacity  Mounted on
zroot/ROOT/default     81G     81G      0B   100%    /
devfs                 1.0K    1.0K      0B   100%    /dev
tmpfs                  20M    4.0K     20M     0%    /tmp
zroot                  96K     96K      0B   100%    /zroot
zroot/var/crash        96K     96K      0B   100%    /var/crash
zroot/usr/src          96K     96K      0B   100%    /usr/src
zroot/var/audit        96K     96K      0B   100%    /var/audit
zroot/tmp             376K    376K      0B   100%    /tmp
zroot/usr/ports        96K     96K      0B   100%    /usr/ports
zroot/usr/home         96K     96K      0B   100%    /usr/home
zroot/var/mail        172K    172K      0B   100%    /var/mail
zroot/var/tmp         112K    112K      0B   100%    /var/tmp
zroot/var/log          71M     71M      0B   100%    /var/log
tmpfs                  32M    228K     32M     1%    /var

but I don't see any files in / that add up to that size

Code:
# du -h -d 1 /
512B    /net
2.2M    /etc
512B    /proc
8.6M    /root
960K    /bin
7.0G    /usr
8.7M    /lib
102M    /boot
512B    /media
4.3M    /sbin
512B    /mnt
210K    /tmp
512B    /zroot
185K    /libexec
8.3M    /rescue
3.5K    /dev
204K    /var
7.1G    /

Can anybody tell me how to "fix" this? Is this a swap problem or a ZFS problem?
 
Swap isn't stored in the filesystem; it's on a separate partition. A full swap won't crash the server either (the kernel would start killing random processes instead). So something else is going on.
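If you want to double-check that, swapinfo(8) shows the swap devices and gpart(8) the partition layout (device names will differ per system):
Code:
# swapinfo -h
# gpart show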
 
Code:
# zfs list -o space
NAME                AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
zroot                  0B  80.9G        0B     96K             0B      80.9G
zroot/ROOT             0B  80.8G        0B     96K             0B      80.8G
zroot/ROOT/default     0B  80.8G        0B   80.8G             0B         0B
zroot/tmp              0B   376K        0B    376K             0B         0B
zroot/usr              0B   384K        0B     96K             0B       288K
zroot/usr/home         0B    96K        0B     96K             0B         0B
zroot/usr/ports        0B    96K        0B     96K             0B         0B
zroot/usr/src          0B    96K        0B     96K             0B         0B
zroot/var              0B  71.3M        0B     96K             0B      71.2M
zroot/var/audit        0B    96K        0B     96K             0B         0B
zroot/var/crash        0B    96K        0B     96K             0B         0B
zroot/var/log          0B  70.8M        0B   70.8M             0B         0B
zroot/var/mail         0B   172K        0B    172K             0B         0B
zroot/var/tmp          0B   112K        0B    112K             0B         0B


and

Code:
# zpool  list -v
NAME        SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
zroot      83.5G  80.9G  2.59G        -         -    86%    96%  1.00x    ONLINE  -
  ada0p3   83.5G  80.9G  2.59G        -         -    86%  96.9%      -    ONLINE
 
Your root dataset / boot environment is full.
Code:
zroot/ROOT/default     0B  80.8G        0B   80.8G
You can reclaim a little bit of space with
Code:
pkg clean -a
What's the output of:
Code:
du -hs /usr/local/ /usr/home/*
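Old boot environments can also pin space under zroot/ROOT, so it may be worth a look there too (bectl is in the 13.x base system; the -n below makes pkg clean a dry run so you can see what it would remove first):
Code:
# bectl list
# pkg clean -an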
 
Things that don't have an explicit dataset usually wind up under /, so something like /var/db actually lives on / (the root dataset).
Start with pkg clean, see if that clears up anything, then start looking for snapshots: zfs list -t snapshot

You say "Server", what is it serving? Is there a database of some sort that could be using all the space?
 
Yes, I know that it is full, but I don't see where the "big" file is:

Code:
# du -hs /*
4.5K    /COPYRIGHT
960K    /bin
102M    /boot
3.5K    /dev
2.2M    /etc
8.7M    /lib
185K    /libexec
512B    /media
512B    /mnt
512B    /net
512B    /proc
8.3M    /rescue
8.6M    /root
4.3M    /sbin
512B    /sys
210K    /tmp
7.0G    /usr
228K    /var
512B    /zroot
 
What version of FreeBSD is this running?
I'm assuming you've rebooted the system, is that correct?
Have you rebooted, dropped to single-user mode, and run zpool scrub?
What is the device, an SSD or a spinning disk? If an SSD, have you tried booting to single-user mode and running zpool trim?
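For reference, that would be something like this, assuming the pool is named zroot as in your output:
Code:
# zpool scrub zroot
# zpool status zroot   # watch scrub progress and errors here
# zpool trim zroot     # only relevant for SSDs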
 
Version 13.2. Yes, I rebooted the system. I will check the single-user suggestions... It's an SSD on a virtualized VMware pool.
 
[This might take some time.]

You need to find out where the disk space is being eaten on the "zroot/ROOT/default" dataset.
Code:
root@cacti-isam-2:~ # du -hs /
7.1G /
root@cacti-isam-2:~ # du -hs /usr
7.0G /usr
root@cacti-isam-2:~ # du -hs /var
204K /var
root@cacti-isam-2:~ # du -hs /root
8.6M /root
root@cacti-isam-2:~ # du -hs /usr/local/
6.2G /usr/local/
root@cacti-isam-2:~ # du -hs /usr/home*
512B /usr/home
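Since a plain du / descends into the other mounted datasets, adding -x (stay on one filesystem) would limit the count to what zroot/ROOT/default itself holds; might be worth a try:
Code:
# du -hxd 1 /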
 
Probably snapshots or a checkpoint... What does zfs list -t snap say? Any checkpoints? What does zpool status zroot say?
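That is, something along these lines (zpool status also shows an active checkpoint, if any):
Code:
# zfs list -t snap
# zpool status zroot
# zpool get checkpoint zroot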
 
SirDice thanks, I thought we had seen a similar symptom recently. The other thread has the zpool created on vtbd0p2. Is this a virtualized device?
If it is, the OP's answer up in #13 may be a common point. My understanding of the OP's answer there is "FreeBSD is running under VMWare". I could be wrong, but I just don't know if both cases are related to virtualization.
 
Thanks; just to make sure I'm thinking correctly: FreeBSD is running under VMware?
Yes. It's a virtual machine on a VMware cluster, on SSD disks.
 
Code:
NAME                PROPERTY              VALUE                  SOURCE
zroot/ROOT/default  type                  filesystem             -
zroot/ROOT/default  creation              Mon Feb 13  7:19 2023  -
zroot/ROOT/default  used                  80.8G                  -
zroot/ROOT/default  available             0B                     -
zroot/ROOT/default  referenced            80.8G                  -
zroot/ROOT/default  compressratio         4.38x                  -
zroot/ROOT/default  mounted               yes                    -
zroot/ROOT/default  quota                 none                   default
zroot/ROOT/default  reservation           none                   default
zroot/ROOT/default  recordsize            128K                   default
zroot/ROOT/default  mountpoint            /                      local
zroot/ROOT/default  sharenfs              off                    default
zroot/ROOT/default  checksum              on                     default
zroot/ROOT/default  compression           lz4                    inherited from zroot
zroot/ROOT/default  atime                 off                    inherited from zroot
zroot/ROOT/default  devices               on                     default
zroot/ROOT/default  exec                  on                     default
zroot/ROOT/default  setuid                on                     default
zroot/ROOT/default  readonly              off                    default
zroot/ROOT/default  jailed                off                    default
zroot/ROOT/default  snapdir               hidden                 default
zroot/ROOT/default  aclmode               discard                default
zroot/ROOT/default  aclinherit            restricted             default
zroot/ROOT/default  createtxg             8                      -
zroot/ROOT/default  canmount              noauto                 local
zroot/ROOT/default  xattr                 on                     default
zroot/ROOT/default  copies                1                      default
zroot/ROOT/default  version               5                      -
zroot/ROOT/default  utf8only              off                    -
zroot/ROOT/default  normalization         none                   -
zroot/ROOT/default  casesensitivity       sensitive              -
zroot/ROOT/default  vscan                 off                    default
zroot/ROOT/default  nbmand                off                    default
zroot/ROOT/default  sharesmb              off                    default
zroot/ROOT/default  refquota              none                   default
zroot/ROOT/default  refreservation        none                   default
zroot/ROOT/default  guid                  11657120308078321219   -
zroot/ROOT/default  primarycache          all                    default
zroot/ROOT/default  secondarycache        all                    default
zroot/ROOT/default  usedbysnapshots       0B                     -
zroot/ROOT/default  usedbydataset         80.8G                  -
zroot/ROOT/default  usedbychildren        0B                     -
zroot/ROOT/default  usedbyrefreservation  0B                     -
zroot/ROOT/default  logbias               latency                default
zroot/ROOT/default  objsetid              267                    -
zroot/ROOT/default  dedup                 off                    default
zroot/ROOT/default  mlslabel              none                   default
zroot/ROOT/default  sync                  standard               default
zroot/ROOT/default  dnodesize             legacy                 default
zroot/ROOT/default  refcompressratio      4.38x                  -
zroot/ROOT/default  written               80.8G                  -
zroot/ROOT/default  logicalused           348G                   -
zroot/ROOT/default  logicalreferenced     348G                   -
zroot/ROOT/default  volmode               default                default
zroot/ROOT/default  filesystem_limit      none                   default
zroot/ROOT/default  snapshot_limit        none                   default
zroot/ROOT/default  filesystem_count      none                   default
zroot/ROOT/default  snapshot_count        none                   default
zroot/ROOT/default  snapdev               hidden                 default
zroot/ROOT/default  acltype               nfsv4                  default
zroot/ROOT/default  context               none                   default
zroot/ROOT/default  fscontext             none                   default
zroot/ROOT/default  defcontext            none                   default
zroot/ROOT/default  rootcontext           none                   default
zroot/ROOT/default  relatime              off                    default
zroot/ROOT/default  redundant_metadata    all                    default
zroot/ROOT/default  overlay               on                     default
zroot/ROOT/default  encryption            off                    default
zroot/ROOT/default  keylocation           none                   default
zroot/ROOT/default  keyformat             none                   default
zroot/ROOT/default  pbkdf2iters           0                      default
zroot/ROOT/default  special_small_blocks  0                      default
 
This is getting more interesting. The actual data size is 348G without compression; at the reported 4.38x compressratio that works out to roughly 80G on disk (348G / 4.38 ≈ 79.5G), which matches the 80.8G used. Can you unhide the snapshot directory and see if there's anything in it? You can use zfs set snapdir=visible zroot/ROOT/default to unhide it. This will make the snapshot directory (.zfs/snapshot) visible, so you can see whether anything is in it even though ZFS reports no snapshot usage (usedbysnapshots 0B).
Also, if you were previously using send/receive, there may be an unfinished receive (a receive_resume_token), which can take up space in this dataset.
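A quick way to check for a dangling receive, assuming the pool name zroot (the property shows - when there is none):
Code:
# zfs get -r receive_resume_token zroot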
 
Code:
# zfs set snapdir=visible zroot/ROOT/default
# cd /.zfs
/.zfs # ls -l
total 0
dr-xr-xr-x+ 2 root  wheel  2 Jan  1  1970 snapshot
/.zfs # cd snapshot/
root@cacti-isam-2:/.zfs/snapshot # ls -l
total 0
 