Why did I need "zfs mount -a" to see my pool?

When I reboot my machine (SPARC), my files are not there in my zpool, even though zpool status comes back fine. I can get my data back if I run zfs mount -a, and then everything seems to work.

This is an older Fibre Channel array that can only be turned on after FreeBSD has booted; otherwise, FreeBSD will crash.

Is there some way to tell FreeBSD to automount the zpool once the device has been detected?

As I write this it seems unlikely.

Ideas?

Bill
 
Hmm, that's an awkward one.

When you import a pool manually, it will automatically mount all of its filesystems. However, it behaves a bit differently if the pool is already imported and you reboot. The pool is found by the kernel module during boot, but of course it can't mount filesystems at that point, as it's too early in the boot process. At the point where local filesystems can be mounted, the ZFS rc script, which is controlled by
Code:
zfs_enable="YES"
in /etc/rc.conf, runs zfs mount -a to mount the filesystems.

This is why the obvious first response to your question was to check rc.conf: it's a very common mistake, and users get caught out because everything works at first (when they create the pool or mount it manually), but it all disappears when they reboot.
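
As an aside, you can verify both pieces on a running system. Something like the following shows whether the rc hook is enabled and re-runs the same mount step the boot script performs (sysrc ships in the base system on reasonably recent FreeBSD; otherwise just grep /etc/rc.conf):
Code:
sysrc zfs_enable     # should print: zfs_enable: YES
service zfs start    # essentially repeats the boot-time step, i.e. zfs mount -a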

In your case the system probably attempts to import the pool on boot (as it will be in the zpool.cache file), but it ends up FAULTED because the disks are missing. When the disks become available the pool comes back, but the zfs mount -a never gets run again.

Obviously the best solution would be to find out why FreeBSD crashes with the array switched on; FreeBSD really shouldn't crash at all, regardless of how old the array is. On the other hand, if a pool is FAULTED on boot and the disks are reconnected, bringing it ONLINE, it should probably bring the filesystems online as well. If it doesn't, it may be worth raising a PR and seeing what the devs think.
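
If the crash can't be fixed, one possible workaround (untested, and the match strings below are only a guess; adapt them to however your FC LUNs actually appear, and check devd.conf(5)) is a small devd(8) rule that re-runs the mount when the disks show up, e.g. dropped into /etc/devd/zfs-latemount.conf:
Code:
# Hypothetical rule: when a new da device node appears, try mounting any
# ZFS filesystems that missed the boot-time mount. Adjust "da[0-9]+" to
# match the device names your Fibre Channel array presents.
notify 100 {
	match "system"		"DEVFS";
	match "subsystem"	"CDEV";
	match "type"		"CREATE";
	match "cdev"		"da[0-9]+";
	action "/sbin/zfs mount -a";
};
followed by a service devd restart. If the pool needs a moment to recover, a short sleep (or a zpool clear) in the action may be necessary.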
 
Interesting coincidence: today we had a power outage at home. Nothing bad happened: the UPS kept the server up for ten minutes, and then the server shut down cleanly. Many hours later, the server booted correctly. All the internal disks (which include one ZFS file system) came up just fine.

The funny thing is the external backup disk, which constitutes the second ZFS file system (its pool consists solely of that one external disk). It sits in a small enclosure, connected via eSATA. What I didn't know: the enclosure doesn't power itself up when you apply power! The way I found out was by looking at zpool status, which told me that the pool was unavailable. No problem: I hit the power button, and by the time I had run back up the staircase, the pool was online again.

But: the corresponding file system didn't auto-mount. Should it? I think it would be nice if it did. If ZFS is smart enough to recognize the pools when the block device comes online, shouldn't it be smart enough to also mount it? Is this something settable?

For information: I obviously have
Code:
zfs_enable="YES"
in /etc/rc.conf, but the two ZFS file systems are not mentioned in /etc/fstab (nor do I think they should be). zpool status, zpool list and zfs list all look boring and normal.
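
For now the manual fix, once the enclosure is powered up and the pool shows ONLINE again, is just to mount by hand (the pool name backup below is only a placeholder for whatever yours is called):
Code:
zpool status backup                          # pool should be ONLINE once the disk has power
zfs get mounted,canmount,mountpoint backup   # confirm the dataset is allowed to mount
zfs mount -a                                 # mount whatever missed the boot-time mount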
 