zfs (v28) not updating free space

OK, I am testing v28 on a VM and just deleted about 1-2 GB of data as I didn't have enough space to compile gcc. On both 'df' and 'zfs list' the reported used space has dropped, but the free space has stayed static. Well, not completely static, it seems to move up and down a bit, but it seems to have taken no account of what I have deleted. Does this sound like a bug, or have I missed something about how ZFS works? The deleted files won't be linked to running processes; they are some world src files and port-related files.
 
Not sure if it's the case here, but it might be caused by deduplication. When several files share the same data, the data itself isn't removed when you delete one of those files; the other files still refer to that data, so it isn't freed.
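If you want to rule that out, something like this should show whether dedup is actually enabled anywhere (assuming your pool is called tank):

Code:
# show the dedup setting for the pool and all datasets
zfs get -r dedup tank

# show the pool-wide dedup ratio (1.00x means no dedup savings)
zpool get dedupratio tank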
 
Yeah, I think you mean that REFER column that's new to v28.

Here is some output:
Code:
root@vm ~ # zfs list
NAME                 USED  AVAIL  REFER  MOUNTPOINT
tank                6.75G   126M    22K  none
tank/root           6.74G   126M   277M  /
tank/root/tmp        235K   126M    37K  /tmp
tank/root/usr       6.36G   126M  2.19G  /usr
tank/root/usr/obj   1.50G   126M  1.47G  /usr/obj
tank/root/usr/src    577M   126M    31K  /usr/src
tank/root/usr/src1  1.26G   126M  1.13G  /usr/src1
tank/root/var        113M   126M  92.0M  /var

Adding up the REFER values falls short.
And:
Code:
root@vm ~ # df -h
Filesystem            Size    Used   Avail Capacity  Mounted on
tank/root             403M    277M    126M    69%    /
devfs                 1.0K    1.0K      0B   100%    /dev
tank/root/tmp         126M     37K    126M     0%    /tmp
tank/root/usr         2.3G    2.2G    126M    95%    /usr
tank/root/usr/obj     1.6G    1.5G    126M    92%    /usr/obj
tank/root/usr/src     126M     31K    126M     0%    /usr/src
tank/root/usr/src1    1.3G    1.1G    126M    90%    /usr/src1
tank/root/var         218M     92M    126M    42%    /var

root@vm ~ # zpool list
NAME   SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
tank  6.98G  6.76G   234M    96%  1.00x  ONLINE  -

I have already rebooted, no change. Currently running a scrub.

On every dataset this is listed:

Code:
tank  dedup                 off                    default

And on the pool:

Code:
tank  dedupditto     0           default
tank  dedupratio     1.00x       -
 
OK, I solved it. It was down to snapshots; it seems that for snapshots to work they must hold on to space or something. When I deleted all my snapshots I went up to 1.7 GB free.
 
Snapshots keep data no matter how much you delete on the filesystem. Only after the file and all snapshots that contain parts of that file are deleted is the disk space released to the pool.

Otherwise you wouldn't be able to restore precious data from snapshots when something goes wrong.
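If it helps, here is a rough way to see how much space snapshots are holding and to prune old ones (the dataset and snapshot names are only examples):

Code:
# list every snapshot and the space each one holds exclusively
zfs list -t snapshot

# space held by all snapshots of a given dataset
zfs get usedbysnapshots tank/root/usr

# destroy a snapshot you no longer need (example name)
zfs destroy tank/root/usr@2010-12-20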
 
Understood. I have now reduced my snapshot history from 14 days to 3, since this VM has a very small disk.
 
I am using STABLE code that's about 1 week newer than when the patch was made available.

Code:
FreeBSD 8.2-PRERELEASE #0 r216674M: Fri Dec 24 00:24:34 UTC 2010

I had a panic whilst compiling gcc45, though nothing was put in /var/crash. From the console (which I couldn't scroll to see the full message) it was g_event related. But bear in mind this VM currently only has 2 GB of RAM assigned to it, even though I have compensated in loader.conf on kmem etc. I am below the recommended RAM for ZFS.
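By loader.conf tuning I mean something along these lines; the values here are just illustrative for a 2 GB VM, not recommendations:

Code:
# /boot/loader.conf - example values for a small VM only
vm.kmem_size="1024M"
vm.kmem_size_max="1024M"
vfs.zfs.arc_max="512M"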

[Attachment: panic1.png]
 
chrcol said:
OK, I am testing v28 on a VM and just deleted about 1-2 GB of data as I didn't have enough space to compile gcc, and on both 'df' and 'zfs list' the reported used space has dropped but the free space has been static.
Although you tagged this [Solved], I think I can still contribute something. I have some systems with huge amounts of RAM dedicated to ZFS (32 GB on up to 128 GB+). If I delete files, I see the same thing as you: free space doesn't increase. After scratching my head for a while, I remembered "sync - do it twice to make it nice" from the very early 4BSD days. Sure enough, after a couple of sync commands, free space started going back up. It took a while as I'd deleted something like 20 TB.
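In other words, something along these lines; the free space should creep back up over the next few transaction groups:

Code:
# flush outstanding writes, then watch the free space recover
sync; sync
zfs list
zpool list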
 
If my memory serves well, you are supposed to execute sync three times. :)

ZFS frees space asynchronously. There is also the issue of the pool and the filesystem reporting different free space:

Code:
$ zfs list
NAME                            USED  AVAIL  REFER  MOUNTPOINT
storage                        3.18T   400G  42.5K  /storage

$ zpool list
NAME      SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
storage  3.62T  3.18T   458G    87%  ONLINE  -
Usually at creation time the free space is more or less the same.
 
zpool list shows the total amount of raw storage available in the pool. Every byte from every disk is shown. A pool with a single 6-disk raidz2 vdev using 500 GB drives will show 3 TB.

zfs list shows the total amount of usable storage available for filesystems, after all the redundancies are taken into account. A pool with a single 6-disk raidz2 vdev using 500 GB drives will show 2 TB (2 disks used for parity).

If you use compression and/or dedupe on a filesystem, then things can get even more confusing as some tools will show the amount of disk space used (after compression) while others will show the amount of logical storage used (before compression).
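As a quick sanity check you can compare those views for a single dataset, e.g. (the dataset name is just an example):

Code:
# physical space consumed (after compression) and the compression ratio
zfs get used,referenced,compressratio tank/root/usr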
 