ZFS: Where did the space go?

Hello everyone!

I use ZFS as the root file system on a FreeBSD machine. Here is how it looks:

Code:
blackbird# uname -srm
FreeBSD 8.2-RELEASE-p1 amd64
blackbird# zpool list zfsroot
NAME      SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
zfsroot   460G   453G  7.19G    98%  ONLINE  -
blackbird# zpool status -v zfsroot
  pool: zfsroot
 state: ONLINE
 scrub: scrub completed after 8h0m with 0 errors on Sat Nov 19 00:42:04 2011
config:

	NAME           STATE     READ WRITE CKSUM
	zfsroot        ONLINE       0     0     0
	  gpt/zfsroot  ONLINE       0     0     0

errors: No known data errors
blackbird# gpart show              
=>       34  976770988  ad4  GPT  (466G)
         34        128    1  freebsd-boot  (64K)
        162    8388608    2  freebsd-swap  (4.0G)
    8388770  968382252    3  freebsd-zfs  (462G)

blackbird# zfs list -r zfsroot
NAME                  USED  AVAIL  REFER  MOUNTPOINT
zfsroot               453G      0  29.4G  legacy
zfsroot/backup        125G      0    25K  /backup
zfsroot/backup/mail   125G      0   125G  /backup/mail
zfsroot/tmp          10.7M      0  10.7M  /tmp
zfsroot/usr           298G      0   298G  /usr
zfsroot/usr/home      232M      0   232M  /usr/home
zfsroot/var           171M      0   710K  /var
zfsroot/var/db        170M      0   170M  /var/db
blackbird# zpool upgrade -v
This system is currently running ZFS pool version 14.

As you can see above, the pool named zfsroot is 460G in size, is given over entirely to the main system, and is almost completely full. It also shows that /usr uses 298G. But here is what du says:

Code:
blackbird# du -h -d 1 /usr
 21M	/usr/libexec
 83K	/usr/libdata
 67M	/usr/lib
 43M	/usr/share
547M	/usr/src
4.8G	/usr/local
 36M	/usr/bin
 55M	/usr/lib32
474M	/usr/ports
232M	/usr/home
106M	/usr/doc
 17M	/usr/include
1.5K	/usr/pool
262K	/usr/games
 21M	/usr/sbin
6.4G	/usr

Yet according to du, /usr actually uses only 6.4G. Here is the full set of properties for zfsroot/usr:

Code:
blackbird# zfs get all zfsroot/usr
NAME              PROPERTY              VALUE                                                           SOURCE
zfsroot/usr       type                  filesystem                                                      -
zfsroot/usr       creation              Tue Dec  1 22:35 2009                                           -
zfsroot/usr       used                  298G                                                            -
zfsroot/usr       available             0                                                               -
zfsroot/usr       referenced            298G                                                            -
zfsroot/usr       compressratio         1.00x                                                           -
zfsroot/usr       mounted               yes                                                             -
zfsroot/usr       quota                 none                                                            default
zfsroot/usr       reservation           none                                                            default
zfsroot/usr       recordsize            128K                                                            default
zfsroot/usr       mountpoint            /usr                                                            local
zfsroot/usr       sharenfs              -alldirs -maproot=root -network 10.110.0.0 -mask=255.255.255.0  local
zfsroot/usr       checksum              on                                                              default
zfsroot/usr       compression           off                                                             default
zfsroot/usr       atime                 on                                                              default
zfsroot/usr       devices               on                                                              default
zfsroot/usr       exec                  on                                                              default
zfsroot/usr       setuid                on                                                              default
zfsroot/usr       readonly              off                                                             inherited from zfsroot
zfsroot/usr       jailed                off                                                             default
zfsroot/usr       snapdir               hidden                                                          default
zfsroot/usr       aclmode               groupmask                                                       default
zfsroot/usr       aclinherit            restricted                                                      default
zfsroot/usr       canmount              on                                                              default
zfsroot/usr       shareiscsi            off                                                             default
zfsroot/usr       xattr                 off                                                             temporary
zfsroot/usr       copies                1                                                               default
zfsroot/usr       version               3                                                               -
zfsroot/usr       utf8only              off                                                             -
zfsroot/usr       normalization         none                                                            -
zfsroot/usr       casesensitivity       sensitive                                                       -
zfsroot/usr       vscan                 off                                                             default
zfsroot/usr       nbmand                off                                                             default
zfsroot/usr       sharesmb              off                                                             default
zfsroot/usr       refquota              none                                                            default
zfsroot/usr       refreservation        none                                                            default
zfsroot/usr       primarycache          all                                                             default
zfsroot/usr       secondarycache        all                                                             default
zfsroot/usr       usedbysnapshots       0                                                               -
zfsroot/usr       usedbydataset         298G                                                            -
zfsroot/usr       usedbychildren        232M                                                            -
zfsroot/usr       usedbyrefreservation  0                                                               -

Does anyone have any idea where the space could have gone?
 
What do you mean by the space "going" somewhere? The question is not quite clear.
 
DutchDaemon said:
What do you mean by the space "going" somewhere? The question is not quite clear.

Sorry, English is not my first language. I meant that space in the zfsroot/usr dataset has disappeared somewhere.
 
Do you mean that zfs list says there is 298G in use on zfsroot/usr,

Code:
blackbird# zfs list -r zfsroot
NAME                  USED  AVAIL  REFER  MOUNTPOINT
zfsroot               453G      0  29.4G  legacy
zfsroot/backup        125G      0    25K  /backup
zfsroot/backup/mail   125G      0   125G  /backup/mail
zfsroot/tmp          10.7M      0  10.7M  /tmp
zfsroot/usr           298G      0   298G  /usr
zfsroot/usr/home      232M      0   232M  /usr/home
zfsroot/var           171M      0   710K  /var
zfsroot/var/db        170M      0   170M  /var/db

but du tells you that /usr is only taking 6.4G?

Code:
blackbird# du -h -d 1 /usr
 21M	/usr/libexec
 83K	/usr/libdata
 67M	/usr/lib
 43M	/usr/share
547M	/usr/src
4.8G	/usr/local
 36M	/usr/bin
 55M	/usr/lib32
474M	/usr/ports
232M	/usr/home
106M	/usr/doc
 17M	/usr/include
1.5K	/usr/pool
262K	/usr/games
 21M	/usr/sbin
6.4G	/usr

Could it be that you have a lot of snapshots? Your zfs properties show that the snapshot directory is not visible, so du does not see that directory and will not count the data in it.

Code:
zfsroot/usr       snapdir               hidden                                                          default

If you set snapdir to visible with the following command, du will count .zfs as well. I did not test it, but I believe that is the expected behaviour.

Code:
zfs set snapdir=visible zfsroot/usr

Then du should be able to see the .zfs directory and count that space too.

Or try to go into /usr/.zfs and see what is in there!
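Even while snapdir is hidden, the .zfs directory can still be reached by its explicit path, so something like this should list the snapshots without changing any properties:

Code:
ls /usr/.zfs/snapshot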

regards
Johan Hendriks
 
Setting the snapdir property to visible is usually a bad idea, since you risk traversing the entire filesystem several times. Instead of going about with du(1) to find out the disk usage of snapshots, use the proper zfs(8) command.

% zfs list -t snapshot

Use the following to delete snapshots you don't need.

% zfs destroy <snapshot>
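
Since your pool already supports the usedby* properties (they appear in your zfs get all output above), you can also ask ZFS directly how much space snapshots consume per dataset, with something like:

% zfs list -r -o name,used,usedbysnapshots zfsroot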
 
None of the datasets in the zfsroot pool have snapshots:

Code:
blackbird# zfs list -r -t snapshot zfsroot
no datasets available
blackbird# zfs list -r -t volume zfsroot 
no datasets available
 
t1066 said:
Could /usr/home be mounted over an older /usr/home?

No.

Code:
blackbird# zfs umount zfsroot/usr/home
blackbird# zfs mount
zfsroot                         /
zfsroot/backup                  /backup
zfsroot/backup/mail             /backup/mail
zfsroot/tmp                     /tmp
zfsroot/usr                     /usr
zfsroot/var                     /var
zfsroot/var/db                  /var/db
blackbird# du -sh /usr/home 
1.5K	/usr/home
 
Could it be a sparse ("hole") file?
I don't know whether this still applies:

On UFS, the du command reports the size of the data blocks within the file. On ZFS, du reports the actual size of the file as stored on disk. This size includes metadata as well as compression. This reporting really helps answer the question of "how much more space will I get if I remove this file?" So, even when compression is off, you will still see different results between ZFS and UFS.

So could it be that compression is enabled? Or deduplication? It is a huge difference, but it could be...
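
If you want to see the ls/du difference for yourself, a quick test with a sparse file should reproduce it (truncate(1) is part of the FreeBSD base system):

Code:
truncate -s 1G /tmp/sparse   # apparent size 1G, but no data blocks are written
ls -lh /tmp/sparse           # ls reports the apparent size (1.0G)
du -h /tmp/sparse            # du reports only the blocks actually allocated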
 
fluca1978 said:
Could it be a sparse ("hole") file?
I don't know whether this still applies:

So could it be that compression is enabled? Or deduplication? It is a huge difference, but it could be...

You were right. Exploring the file system, I discovered the following:

Code:
blackbird# ls -l /usr/local/fsbackup/cache
total 7081530
-rw-r--r--  1 root  wheel  17597015261184 Nov 22 02:00 .hash
-rw-r--r--  1 root  wheel           16384 Jun  6 02:02 .hash.last
-rw-r--r--  1 root  wheel            4096 Nov 22 13:00 .hash.swp
-rw-r--r--  1 root  wheel               0 Oct  3 02:01 bb_csu_ru.del
-rw-r--r--  1 root  wheel          155648 Nov 22 01:56 bb_csu_ru.dir
-rw-r--r--  1 root  wheel         1302528 Nov 22 02:00 bb_csu_ru.list
-rw-r--r--  1 root  wheel         1404928 Nov 22 01:59 bb_csu_ru.lsize
blackbird# ls -lh /usr/local/fsbackup/cache
total 7081530
-rw-r--r--  1 root  wheel    16T Nov 22 02:00 .hash
-rw-r--r--  1 root  wheel    16K Jun  6 02:02 .hash.last
-rw-r--r--  1 root  wheel   4.0K Nov 22 13:00 .hash.swp
-rw-r--r--  1 root  wheel     0B Oct  3 02:01 bb_csu_ru.del
-rw-r--r--  1 root  wheel   152K Nov 22 01:56 bb_csu_ru.dir
-rw-r--r--  1 root  wheel   1.2M Nov 22 02:00 bb_csu_ru.list
-rw-r--r--  1 root  wheel   1.3M Nov 22 01:59 bb_csu_ru.lsize
blackbird# cat /usr/local/fsbackup/cache/.hash | xxd | less
0000000: 0000 0000 0000 0000 0000 0000 0000 0000  ................
0000010: 0000 0000 0000 0000 0000 0000 0000 0000  ................
0000020: 0000 0000 0000 0000 0000 0000 0000 0000  ................
0000030: 0000 0000 0000 0000 0000 0000 0000 0000  ................
0000040: 0000 0000 0000 0000 0000 0000 0000 0000  ................
0000050: 0000 0000 0000 0000 0000 0000 0000 0000  ................
0000060: 0000 0000 0000 0000 0000 0000 0000 0000  ................
0000070: 0000 0000 0000 0000 0000 0000 0000 0000  ................
0000080: 0000 0000 0000 0000 0000 0000 0000 0000  ................
0000090: 0000 0000 0000 0000 0000 0000 0000 0000  ................
00000a0: 0000 0000 0000 0000 0000 0000 0000 0000  ................
00000b0: 0000 0000 0000 0000 0000 0000 0000 0000  ................
00000c0: 0000 0000 0000 0000 0000 0000 0000 0000  ................
00000d0: 0000 0000 0000 0000 0000 0000 0000 0000  ................
00000e0: 0000 0000 0000 0000 0000 0000 0000 0000  ................
00000f0: 0000 0000 0000 0000 0000 0000 0000 0000  ................
0000100: 0000 0000 0000 0000 0000 0000 0000 0000  ................
0000110: 0000 0000 0000 0000 0000 0000 0000 0000  ................
0000120: 0000 0000 0000 0000 0000 0000 0000 0000  ................
0000130: 0000 0000 0000 0000 0000 0000 0000 0000  ................
0000140: 0000 0000 0000 0000 0000 0000 0000 0000  ................
blackbird# du -sh /usr/local/fsbackup/cache
6.8G   /usr/local/fsbackup/cache

As you can see, the .hash file contains zeros, and according to ls its size is 16TB, while du reports 6.8G. I deleted this file, but the situation has not changed:

Code:
blackbird# zfs list -r zfsroot
NAME                  USED  AVAIL  REFER  MOUNTPOINT
zfsroot               434G  18.3G  5.37G  legacy
zfsroot/backup        129G  18.3G    25K  /backup
zfsroot/backup/mail   129G  18.3G   129G  /backup/mail
zfsroot/tmp          10.7M  18.3G  10.7M  /tmp
zfsroot/usr           300G  18.3G   300G  /usr
zfsroot/usr/home      140M  18.3G   140M  /usr/home
zfsroot/var           171M  18.3G   770K  /var
zfsroot/var/db        170M  18.3G   170M  /var/db
blackbird# du -h -d 1 /usr
21M   /usr/libexec
83K   /usr/libdata
67M   /usr/lib
43M   /usr/share
547M   /usr/src
112M   /usr/local
36M   /usr/bin
55M   /usr/lib32
485M   /usr/ports
140M   /usr/home
106M   /usr/doc
17M   /usr/include
262K   /usr/games
21M   /usr/sbin
1.6G   /usr

I will investigate further.

P.S. I do not use deduplication or compression on any dataset in the zfsroot pool.
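
In case more files with a runaway apparent size turn up, find(1) can filter on the apparent size directly; something like this should work (the 1G threshold is just an example, and if your find lacks the G suffix you can give the size in 512-byte blocks instead):

Code:
blackbird# find /usr -type f -size +1G -exec ls -lh {} \;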
 
I see that your /usr is still growing, from 298G to 300G, so even deleting the hole file did not improve the situation. If you are absolutely sure there are no snapshots, no compression, and no deduplication, then I have no more ideas. I would do a find on the filesystem to see if there are strange links not correctly counted by du, but I don't think that is your case. Is there any chance you can export the filesystem and reimport it on other media, just to see whether it is really that huge?
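
For the links, something like this (untested) should list files with more than one hard link:

Code:
find /usr -type f -links +1 -exec ls -li {} \;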
 
A maybe silly question: did you reboot the machine recently?
It could be that some files were deleted but are still open, so their space is not reclaimed as long as some daemon holds a handle to them. If you have not done so already, please try a reboot, or try lsof and check whether something is holding handles to already-deleted files.
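
With lsof from ports, something like the following should show files that are unlinked but still held open; fstat(1) from the base system can serve a similar purpose:

Code:
lsof +L1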
 
Crivens said:
A maybe silly question: did you reboot the machine recently?
It could be that some files were deleted but are still open, so their space is not reclaimed as long as some daemon holds a handle to them. If you have not done so already, please try a reboot, or try lsof and check whether something is holding handles to already-deleted files.

Yes, I rebooted the system into single-user mode, but nothing changed:

Code:
blackbird# zfs mount -a
blackbird# zfs list -r zfsroot
NAME                  USED  AVAIL  REFER  MOUNTPOINT
zfsroot               436G  16.7G  5.38G  legacy
zfsroot/backup        130G  16.7G    25K  /backup
zfsroot/backup/mail   130G  16.7G   130G  /backup/mail
zfsroot/tmp          10.7M  16.7G  10.7M  /tmp
zfsroot/usr           300G  16.7G   300G  /usr
zfsroot/usr/home      140M  16.7G   140M  /usr/home
zfsroot/var           172M  16.7G  1.48M  /var
zfsroot/var/db        170M  16.7G   170M  /var/db
blackbird# du -h -d 0 /usr
4.6G	/usr

fluca1978 said:
I see that your /usr is still growing, from 298G to 300G, so even deleting the hole file did not improve the situation. If you are absolutely sure there are no snapshots, no compression, and no deduplication, then I have no more ideas. I would do a find on the filesystem to see if there are strange links not correctly counted by du, but I don't think that is your case. Is there any chance you can export the filesystem and reimport it on other media, just to see whether it is really that huge?

I will try to copy the contents of /usr elsewhere, destroy and re-create the zfsroot/usr dataset, and then copy everything back. Let's see what changes.
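
Roughly like this (only a sketch; the staging location is arbitrary, and it has to be done from single-user mode, falling back to /rescue/tar while /usr is being replaced):

Code:
tar -C /usr -cf /backup/usr.tar .           # stage a copy outside /usr
zfs destroy -r zfsroot/usr                  # also destroys zfsroot/usr/home
zfs create -o mountpoint=/usr zfsroot/usr
/rescue/tar -C /usr -xpf /backup/usr.tar    # restore with the rescue tools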
 
In the end I never figured out the reason for the disappearing space, but I have dealt with the consequences:

Code:
[gilgamesh@blackbird ~]$ zfs list -r zfsroot
NAME                  USED  AVAIL  REFER  MOUNTPOINT
zfsroot               138G   315G  7.08G  legacy
zfsroot/backup        130G   315G    25K  /backup
zfsroot/backup/mail   130G   315G   130G  /backup/mail
zfsroot/tmp          10.7M   315G  10.7M  /tmp
zfsroot/var           172M   315G  1.38M  /var
zfsroot/var/db        170M   315G   170M  /var/db

/usr now lives on the root dataset zfsroot; zfsroot/usr and zfsroot/usr/home were destroyed. The lost 300G is back. I suspect that fsbackup is at least partly responsible for the disappearing space, so I will run a corresponding test.
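
For that test it should be enough to watch the fsbackup cache after each backup run and compare the apparent and allocated sizes, e.g.:

Code:
ls -lh /usr/local/fsbackup/cache
du -sh /usr/local/fsbackup/cache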
 