ZFS: df shows data usage when filesystem is empty

Hello,

First of all, my apologies if this has been discussed before, but I couldn't find anything relevant in a search.

Here is the problem: I have a ZFS filesystem that is empty, but ZFS seems to think there is data on it.

Code:
Filesystem             1K-blocks      Used     Avail Capacity  Mounted on
zpool                  809418831  78342692 731076139    10%    /zpool

mcj@ark /zpool % du
2       .
mcj@ark /zpool % find .
.
mcj@ark /zpool % ls -lart
total 4
drwxr-xr-x   2 root  wheel    2 May 21 06:59 ./
drwxr-xr-x  22 root  wheel  512 May 21 18:01 ../
mcj@ark /zpool % zfs list /zpool
NAME    USED  AVAIL  REFER  MOUNTPOINT
zpool   677G   697G  74.7G  /zpool

Now for the background information. This is on RELENG_8 (amd64), cvsup'd and buildworlded on Tuesday, May 18. The pool is currently running on a single disk, although I plan to mirror it; there is already another drive in the system, but I want to get this sorted out before attaching it (the attach I have in mind is sketched after the status output below).

Code:
mcj@ark /zpool % zpool status
  pool: zpool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        zpool       ONLINE       0     0     0
          ad6       ONLINE       0     0     0

errors: No known data errors
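
For what it's worth, the mirroring step I'm holding off on would just be a plain zpool attach, something like this (assuming the spare shows up as ad8; I haven't confirmed the device name yet):

Code:
sudo zpool attach zpool ad6 ad8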

The interesting thing is how I got here. I have smartd running, and it reported a bad sector. I ran a scrub and found which file was corrupted. I was able to remove the file and restore it from backup, but zfs still showed an error on the filesystem, no longer associated with any file. So I did some reading and found suggestions to basically overwrite the sector with zeroes and let the drive remap it as bad. This is a fairly new disk, so I'm not really concerned about an overall failure; if any more bad sectors pop up in the near future, I'll replace the drive. Anyway, I used mkfile to create an 800GB file of zeroes in order to fill up the drive.
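
For the record, the fill itself was a single mkfile invocation, something along these lines (reconstructed from memory, so the exact size argument may not be word-for-word):

Code:
sudo mkfile 800g /zpool/huge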

The good news is that it seems to have worked; a scrub reports no errors, and a "long" smartctl self-test now completes cleanly:

Code:
mcj@ark /zpool % sudo smartctl -l selftest /dev/ad6
smartctl 5.39.1 2010-01-28 r3054 [FreeBSD 8.1-PRERELEASE amd64] (local build)
Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net

=== START OF READ SMART DATA SECTION ===
SMART Self-test log structure revision number 1
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Extended offline    Completed without error       00%      1207         -
# 2  Short offline       Completed without error       00%      1189         -
# 3  Conveyance offline  Completed without error       00%      1176         -
# 4  Extended offline    Completed: read failure       90%      1172         18547528
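
I also plan to keep an eye on the reallocation counters to confirm the drive really did remap the sector; my understanding is that the relevant attributes show up in the standard attribute dump, something like this (attribute names vary a bit by drive, so treat it as approximate):

Code:
sudo smartctl -A /dev/ad6 | egrep -i 'realloc|pending'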

The file I created was called "/zpool/huge". It was created successfully and seemed to rm successfully, but as you can see, df still reports about 75GB used even though the filesystem is empty.
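
I assume the usual suspect here would be a snapshot or clone pinning the space; if I understand the space-accounting tools correctly, something like the following should show any snapshots plus a breakdown of where the usage is charged (I can post the output if it helps):

Code:
zfs list -t snapshot -r zpool
zfs get usedbysnapshots,usedbydataset,usedbychildren zpool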

So, has anyone seen this before? I'm fairly sure I could use the disk I have set aside as a mirror device to create a new pool, copy everything over, destroy the original pool, and then have the original disk mirror the new one, but frankly, that's a bit of a pain. :)
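
If it comes to that, my rough plan would look something like the following (completely untested; ad8 and the pool name zpool2 are placeholders):

Code:
sudo zpool create zpool2 ad8
sudo zfs snapshot zpool@migrate
sudo zfs send zpool@migrate | sudo zfs recv -F zpool2
sudo zpool destroy zpool
sudo zpool attach zpool2 ad8 ad6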

One more curious thing - I found this problem addressed on the OpenSolaris Forums (but no solution was given), and one of the suggested steps was to run "# zdb -dddd <name of your pool>/<name of your fs>". When I run that, I get the following result:

Code:
mcj@ark /zpool % sudo zdb -dddd zpool/
Assertion failed: (next[0] != '\0'), file /usr/src/cddl/lib/libzpool/../../../sys/cddl/contrib/opensolaris/uts/common/fs/zfs/dsl_dir.c, line 341.
zsh: abort (core dumped)  sudo zdb -dddd zpool/

If I just run "zdb -dddd zpool", it starts listing debug information for every single file in the pool, of which there are many, so that output is basically useless to me.
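
In case it helps anyone reproduce this, my reading of the zdb man page is that each extra -d just bumps the verbosity, so a dataset-level summary without the per-file dump should be something along these lines (not verified):

Code:
sudo zdb -d zpool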

My apologies for this extremely long post, but I was hoping to give enough information the first time around. Any advice would be greatly appreciated.