[Solved] What's eating the space

Hi,

I might sound like a noob, and indeed I am :)
I have installed FreeBSD 10.2-STABLE on a file storage machine. The storage configuration I created is the following:
Code:
NAME              STATE     READ WRITE CKSUM
    zroot             ONLINE       0     0     0
      mirror-0        ONLINE       0     0     0
        mfisyspd12p3  ONLINE       0     0     0
        mfisyspd13p3  ONLINE       0     0     0
This is the main pool where the base system lives (100 GB SSD mirror). Then:
Code:
    NAME           STATE     READ WRITE CKSUM
    vault1         ONLINE       0     0     0
      raidz2-0     ONLINE       0     0     0
        mfisyspd5  ONLINE       0     0     0
        mfisyspd4  ONLINE       0     0     0
        mfisyspd3  ONLINE       0     0     0
        mfisyspd2  ONLINE       0     0     0
        mfisyspd1  ONLINE       0     0     0
        mfisyspd0  ONLINE       0     0     0
    cache
      nvd0         ONLINE       0     0     0
This is the pool for file sharing and the database (14 TB). And:
Code:
    NAME            STATE     READ WRITE CKSUM
    vault2          ONLINE       0     0     0
      raidz2-0      ONLINE       0     0     0
        mfisyspd11  ONLINE       0     0     0
        mfisyspd10  ONLINE       0     0     0
        mfisyspd9   ONLINE       0     0     0
        mfisyspd8   ONLINE       0     0     0
        mfisyspd7   ONLINE       0     0     0
        mfisyspd6   ONLINE       0     0     0
This is the pool for measurement-system data storage/backup (14 TB).

At the moment the system has 2 jails, one for Active Directory (Samba 4) and one for CIFS (Samba 3.6, as Samba 4 had limitations elsewhere). Both jails run on the vault1 zpool and have their own ZFS datasets (the jails do not seem to be growing). For jail management I'm using Warden.

If I run zfs list I get the following:
Code:
root@freebsd1:/var/log # zfs list
NAME                                               USED  AVAIL  REFER  MOUNTPOINT
vault1                                            43.6G  14.0T  49.0K  /vault1
vault1/domain                                     41.4G  14.0T  32.0K  /vault1/domain
vault1/domain/share                               27.2G  3.97T  27.1G  /vault1/domain/share
vault1/domain/temp                                8.38G   492G  8.38G  /vault1/domain/temp
vault1/domain/users                               5.83G  14.0T  34.0K  /vault1/domain/users
vault1/domain/users/andreri                        121M  3.88G   121M  /vault1/domain/users/andreri
vault1/domain/users/bolsavi                       32.0K  4.00G  32.0K  /vault1/domain/users/bolsavi
vault1/domain/users/bruksda                        306M  3.70G   306M  /vault1/domain/users/bruksda
vault1/domain/users/ch-sk-kvd                     2.78G  7.22G  2.78G  /vault1/domain/users/ch-sk-kvd
vault1/domain/users/gcmsms01                      32.0K  4.00G  32.0K  /vault1/domain/users/gcmsms01
vault1/domain/users/godlira                       36.6M  9.96G  36.6M  /vault1/domain/users/godlira
vault1/domain/users/jankaar                       59.9K  4.00G  59.9K  /vault1/domain/users/jankaar
vault1/domain/users/kaliako                       2.16M  2.00G  2.16M  /vault1/domain/users/kaliako
vault1/domain/users/karalge                       71.9K  4.00G  71.9K  /vault1/domain/users/karalge
vault1/domain/users/mateini                       32.0K  4.00G  32.0K  /vault1/domain/users/mateini
vault1/domain/users/miliabi                       32.0K  4.00G  32.0K  /vault1/domain/users/miliabi
vault1/domain/users/paskaer                        416K  4.00G   416K  /vault1/domain/users/paskaer
vault1/domain/users/pockeke                       32.0K  4.00G  32.0K  /vault1/domain/users/pockeke
vault1/domain/users/rimosma                       2.15G  1.85G  2.15G  /vault1/domain/users/rimosma
vault1/domain/users/sataija                        403M  3.61G   403M  /vault1/domain/users/sataija
vault1/domain/users/savicin                       15.5M  15.0G  15.5M  /vault1/domain/users/savicin
vault1/domain/users/stalado                       32.0K  4.00G  32.0K  /vault1/domain/users/stalado
vault1/domain/users/vaicivi                       32.0K  4.00G  32.0K  /vault1/domain/users/vaicivi
vault1/domain/users/voiteas                       13.9M  3.99G  13.9M  /vault1/domain/users/voiteas
vault1/domain/users/zdanovi                       12.1M  3.99G  12.1M  /vault1/domain/users/zdanovi
vault1/jails                                      2.24G  14.0T   237M  /vault1/jails
vault1/jails/.warden-template-10.2-RELEASE-amd64   200M  14.0T   197M  /vault1/jails/.warden-template-10.2-RELEASE-amd64
vault1/jails/.warden-template-10.2-STABLE-amd64    197M  14.0T   196M  /vault1/jails/.warden-template-10.2-STABLE-amd64
vault1/jails/.warden-template-fbsd10.0amd64        183M  14.0T   182M  /vault1/jails/.warden-template-fbsd10.0amd64
vault1/jails/filesrv                               348M  14.0T   487M  /vault1/jails/filesrv
vault1/jails/kestucio                              351M  14.0T   547M  /vault1/jails/kestucio
vault1/jails/samba                                 775M  14.0T   729M  /vault1/jails/samba
vault2                                             169G  13.9T  34.0K  /vault2
vault2/agilent                                    4.08G  13.9T  4.08G  /vault2/agilent
vault2/api_sciex                                  34.0K  13.9T  34.0K  /vault2/api_sciex
vault2/other                                      36.0K  13.9T  36.0K  /vault2/other
vault2/shimadzu_gc                                 139G  13.9T   139G  /vault2/shimadzu_gc
vault2/shimadzu_lc                                16.8G  13.9T  16.8G  /vault2/shimadzu_lc
vault2/waters                                     8.70G  13.9T  8.70G  /vault2/waters
zroot                                              106G      0    96K  none
zroot/ROOT                                        4.14G      0    96K  none
zroot/ROOT/default                                4.14G      0  4.14G  /
zroot/tmp                                          168K      0   168K  /tmp
zroot/usr                                         2.61G      0    96K  /usr
zroot/usr/home                                     192K      0   192K  /usr/home
zroot/usr/ports                                   1.51G      0  1.51G  /usr/ports
zroot/usr/src                                     1.10G      0  1.10G  /usr/src
zroot/var                                         98.8G      0    96K  /var
zroot/var/crash                                     96K      0    96K  /var/crash
zroot/var/log                                     98.8G      0  98.8G  /var/log
zroot/var/mail                                     284K      0   284K  /var/mail
zroot/var/tmp                                       96K      0    96K  /var/tmp

While running du -h on /var and its subdirectories, I cannot figure out what's eating the space. I have noticed I have plenty of portsnap snapshot files (about 1000) in the /var/db/portsnap/files directory.
Snapshots are off for all datasets on the zroot pool.
As the disk is full, copy-on-write prevents me from deleting files...
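A couple of generic checks can help narrow this down (a sketch only; the flags below are the common ones on FreeBSD and GNU userlands, exact output will differ). Note that du(1) only counts files that still have directory entries, so a deleted-but-still-open file will never show up in it:

```shell
# Per-directory usage under /var, one level deep, largest last.
# -x stays on one filesystem so the vault pools are not counted twice.
du -xhd 1 /var | sort -h

# How many portsnap patch files have piled up:
ls /var/db/portsnap/files | wc -l
```

If du's total is far below what zfs list shows as REFER for zroot/var/log, the missing space is almost certainly held by unlinked-but-open files, as discussed below in the thread.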

Please help me sort things out...

Kind regards,
Vytautas
 
Hopefully someone can fix the formatting on your post; it's really hard to read.

You have 98.8 GB in /var/log, which is the majority of the 106 GB on your root pool. If it's not showing up with other tools, it's probably some file(s) that have been removed but are still held open by an application. Try restarting the system and see if the space is still used.
 
Hi,

I'm afraid the system might not start afterwards due to the lack of disk space, or is this not an issue?

KR

Vytautas
 
Well, I obviously can't say with 100% certainty that the system will reboot fine with a full pool, but I don't think I've heard of that happening yet.
 
Let's put this on hold until the weekend. As there are users working on the machine, I only have a time slot then in which I can risk a reinstallation. I shall come back with the result.
Thanks again,

Vytautas
 
Just as another thought, are you aware of anything that has been creating huge logs in /var/log, or is there anything in there you've deleted to try to free space? If you can identify what created the files taking up all that space, just restarting that service may be enough to release it.
 
Running procstat -af | less and looking for a huge OFFSET might reveal the guilty party.

Juha

Edit: fstat -m | sort -n +7 is better; it catches those real nasties too :)
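The "deleted but still open" situation those commands hunt for can be reproduced in a few lines (a generic sketch; the path and sizes are made up). The space only comes back when the holding process closes the descriptor, which is why restarting the offending service works:

```shell
# Create a sizeable file, hold it open with a reader, then unlink it.
f=/tmp/ghost.$$
dd if=/dev/zero of="$f" bs=1024 count=2048 2>/dev/null
tail -f "$f" >/dev/null 2>&1 &
pid=$!
rm "$f"
# The name is gone, but the filesystem still counts the blocks.
# On FreeBSD, fstat -p $pid (or procstat -f $pid) would still show the
# open descriptor; killing or restarting the process releases the space.
kill "$pid"
```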
 
Hi Juha, usdmatt,

Indeed it helped. I identified some Perl scripts, initiated by Webmin, that were creating ghosts with an enormous offset but no "NAME" in the procstat -af | less output.
Restarting Webmin solved the issue.
 