Files lost but storage not released in ZFS...

Hi!

I am new to FreeBSD and ZFS. I have installed FreeBSD 8.2 amd64 on a machine with 8 GB RAM and one 130 GB boot disk (I used only a 16 GB swap partition and a single / partition for everything else).

I created a ZFS pool "tank" (four 2 TB disks configured as two mirrored pairs). I created a tank/ports filesystem that I mounted under /usr (after filling it with the ports data). I managed to build Samba, Emacs, etc. with decent performance and no problems.

I managed to copy some data from my Windows boxes (using Samba) to the BSD box. To check that things were stable and to see what performance I would get, I did some stress testing by copying some fairly large files and directories within the ZFS system, with no apparent problems.

Today when I booted up the system, all the files in "tank" were missing (including the ports filesystem, which is also empty). There is no indication that any disks are offline, or indeed that any data has been lost, and the space occupied by the files is still in use (I remember the sizes from yesterday). I tried a scrub just in case, to see if it would find anything, but nothing.

Attached are the outputs from list and status commands.

Luckily I had no important data (the ports can be downloaded again), but before I trust the FreeBSD port of ZFS with "real data" I would like to find out what may have happened, so I can avoid it in the future.

/Trist
 

Attachments

How exactly did you mount your ZFS datasets?
Also, please show the output of:

[CMD=""]#mount[/CMD]

[CMD=""]#zfs list[/CMD]
 
More info on lost files...

Thanks for fast reply!

For mounting I used the automatic ZFS mount for "tank" (as /tank), and I explicitly mounted ports (using zfs set mountpoint=) under /usr.
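For reference, the setup described above would roughly correspond to commands like the following. This is a hypothetical reconstruction, not the exact commands originally run: the device names (ada1..ada4) and the /usr/ports mountpoint are my own placeholders.

```shell
# Hypothetical reconstruction of the setup described in this thread;
# disk device names are placeholders.
zpool create tank mirror ada1 ada2 mirror ada3 ada4   # two mirrored pairs
zfs create tank/ports
zfs set mountpoint=/usr/ports tank/ports              # mount ports under /usr
```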

After the "failure" the ports filesystem seems gone as well (zfs mount returns nothing). The regular mount command shows only my root disk and the dev filesystems in their regular places (expected, since I did not use "legacy" mounting for any of the ZFS stuff).

The output of the list and status commands was attached as files to the last post.

/Trist
 
Problem solved - kind of....

After re-installing FreeBSD and performing a zpool import tank, the files (and the ports filesystem) are back.

Is it possible that building ports (in my case Emacs and Samba) may rebuild something that ZFS depends on, with new configuration parameters or versions, or does ZFS rely only on "kernel stuff"?

That is anyhow the only thing I can remember doing between the time I had zfs working and not working ...

/Trist
 
Building Samba 3.5 seems to cause ZFS to fail...

I can now confirm that building the samba35 port (I used the command line make -DBATCH clean install) causes the problem I reported in this thread (after rebooting). I doubt this has anything to do with performing the build in a directory structure located on a ZFS filesystem; more likely it is some library version issue, etc.

To come to this conclusion I did a fresh minimal FreeBSD install, imported my ZFS data (that worked fine), built the samba35 port from source, tested that ZFS still worked, and finally rebooted. When logging in again, the files are once again "invisible" in ZFS.

Perhaps somebody more knowledgeable about FreeBSD and the ZFS port can explain what may be going on here and how I can best avoid this issue.

Best Regards
Trist
 
@tristpost,

This is not related to the samba35 port, and it should also be irrelevant to ZFS. That is why I asked earlier how you created/mounted your pool/datasets. If you installed FreeBSD from sysinstall you probably already have a /usr/ports directory, therefore it is important to know exactly the steps you took for:

1) Installation (sysinstall, custom, etc)

2) Creation of tank and how you mounted it.

3) Output of:

[CMD=""]#zfs list [/CMD]
[CMD=""]#zpool get all tank[/CMD]
[CMD=""]#mount[/CMD]

Before you reinstall, try issuing those commands. Also see whether exporting and importing the pool takes care of your problem.
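Exporting and re-importing would look like this (the pool name is the one from this thread):

```shell
# Export the pool, then import it again; the import re-reads the pool
# metadata and remounts every dataset at its configured mountpoint.
zpool export tank
zpool import tank
zfs mount          # verify all datasets are mounted again
```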

Also, if your ZFS filesystems are to be mounted at boot time, you must have
Code:
zfs_load="YES"
in your /boot/loader.conf
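For completeness, there are two separate knobs involved on FreeBSD: the loader variable loads the ZFS kernel module, while an rc.conf setting makes the rc system mount all datasets at boot. A minimal config fragment:

```shell
# /boot/loader.conf -- load the ZFS kernel module at boot
zfs_load="YES"

# /etc/rc.conf -- have the zfs rc script run "zfs mount -a" at boot
zfs_enable="YES"
```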

Please don't use attachments; you can paste the results and use code(#) tags.
 
Thanks a lot gkontos for your suggestions!

The problem was indeed with the mounting at boot - now I will continue testing my ZFS with more data!

Best Regards
Trist
 
ZFS dataset gets automatically unmounted after heavy use

Hi,

Sorry to dig up this thread, but the same behavior occurs regularly on my system, and I have some data to add for future readers of this thread.

Specifically, I have created a dataset tank/.backupdir on a ZFS filesystem. All the cron/periodic backup jobs with rdiff-backup and tar write into this directory, which means it sees heavy use. After some days of uptime, and after a large number of reads/writes, the filesystem loses the directory tree that holds all the rdiff backups. The space is still occupied, but the files are nowhere to be found.

I have figured out that this is because under heavy load the ZFS dataset/mountpoint gets unmounted (only this one, not every ZFS dataset on my system).

I have found two solutions:
  • reboot the server; everything is back to normal
  • manually run zfs mount tank/.backupdir
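The manual remount could be automated with a small cron job. A minimal sketch, assuming the dataset name from this post (the script itself, and running it from cron, are my own suggestion, not something from this thread):

```shell
#!/bin/sh
# remount-backupdir.sh - remount tank/.backupdir if it has been
# unmounted; intended to run periodically from cron.
DATASET="tank/.backupdir"

# "zfs mount" with no arguments lists currently mounted datasets,
# one per line, dataset name first; remount if ours is not listed.
if ! zfs mount | grep -q "^${DATASET}[[:space:]]"; then
    zfs mount "${DATASET}"
fi
```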

I am attaching relevant info:
Code:
bigb5#    zfs list -t all -r
NAME              USED  AVAIL  REFER  MOUNTPOINT
tank              556G  19.0G  51.5G  /tank
tank/.backupdir   187G  19.0G   187G  /tank/.backupdir
tank/_Programs   28.9G  19.0G  28.9G  /tank/_Programs
tank/jails       1.01G  19.0G  1.01G  /tank/jails
tank/tmp          137G  19.0G   137G  /tank/tmp
tank/virtualbox   151G  19.0G   151G  /tank/virtualbox
The tank/.backupdir is the folder with the missing files.
If I run du -h /tank/.backupdir, it reports only 9 MB, not 187 GB.
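This is consistent with the dataset simply being unmounted: the path /tank/.backupdir is then just an ordinary directory on the parent tank dataset, so du only sees whatever was written there while the dataset was unmounted. A quick way to check:

```shell
zfs get mounted tank/.backupdir   # "no" means the dataset is not mounted
df /tank/.backupdir               # shows which filesystem backs the path
```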

This is my zfs pool:
Code:
bigb5#  zpool get all tank
NAME  PROPERTY       VALUE       SOURCE
tank  size           584G        -
tank  used           556G        -
tank  available      28.2G       -
tank  capacity       95%         -
tank  altroot        -           default
tank  health         ONLINE      -
tank  guid           5683958145617742488  default
tank  version        15          default
tank  bootfs         -           default
tank  delegation     on          default
tank  autoreplace    off         default
tank  cachefile      -           default
tank  failmode       wait        default
tank  listsnapshots  off         default
I do not use snapshots.
Code:
bigb5# zfs list -t snapshot
no datasets available

These are my mount points for zfs:
Code:
bigb5# zfs mount
tank                            /tank
tank/_Programs                  /tank/_Programs
tank/jails                      /tank/jails
tank/tmp                        /tank/tmp
tank/virtualbox                 /tank/virtualbox
Note that tank/.backupdir is no longer mounted, even though it was there some hours ago.

When I discover this problem I have to run [cmd=]zfs mount tank/.backupdir[/cmd] to explicitly mount this dataset (perhaps some script automation will help me here), and then all my files are back:
Code:
bigb5# zfs mount
tank                            /tank
tank/_Programs                  /tank/_Programs
tank/jails                      /tank/jails
tank/tmp                        /tank/tmp
tank/virtualbox                 /tank/virtualbox
tank/.backupdir                 /tank/.backupdir

My system is:
Code:
bigb5# uname -a
FreeBSD XXXXXXXXX 8.2-STABLE FreeBSD 8.2-STABLE #2: Sun Mar 27 00:39:12 EET 2011     root@XXXXXXXXX:/tank/tmp/obj/usr/src/sys/bigb5  amd64

I know that I am referring to an old ZFS version, but I just wanted to add my own experience to the FreeBSD forums.

Do you know if this is fixed in newer versions?
 