ZFS - mountpoint disappearing after reboot

Let me start off by saying I'm a FreeBSD / NFS / ZFS n00b, but am glad that I'm finally diving into this - my prior experience is mostly CentOS based. So it may very well be that, in all my googling over the last few days, I missed something very basic that's causing this problem.

I have a 3-way mirror zpool called tank, and just one filesystem called storage under that (tank/storage). I have a very simple exports file at the moment:

Code:
/tank/storage -network 10.16.23.0/24
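
For completeness, serving that export also needs the NFS daemons enabled on the FreeBSD side. My /etc/rc.conf has roughly the following (the exact set of knobs may vary by setup, so treat this as a sketch):

Code:
# /etc/rc.conf (server) - NFS bits
rpcbind_enable="YES"
nfs_server_enable="YES"
mountd_enable="YES"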

I'm using CentOS 6.3 as the client. When I mount the NFS share it only comes up read-only (a different problem I'll try to figure out myself later), but at least it's there (the FreeBSD server is 10.16.23.110):

df -h (on client, after mount)
Code:
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vg_shadow-lv_root
                       50G  4.9G   42G  11% /
tmpfs                 1.8G  1.8M  1.8G   1% /dev/shm
/dev/sda1             485M   64M  396M  14% /boot
/dev/mapper/vg_shadow-lv_home
                      534G  425G   83G  84% /home
10.16.23.110:/tank/storage
                      914G     0  914G   0% /home/justin/tank-share

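For reference, the client-side mount is nothing fancy - roughly the following (the mount point is just a directory in my home dir). I suspect the read-only behaviour comes down to export options or root mapping (exports(5) has things like -maproot), but that's a guess I'll chase later:

Code:
# on the CentOS 6.3 client
mount -t nfs 10.16.23.110:/tank/storage /home/justin/tank-share
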
To test things out I created a 1 GB file on the server using dd(1). When I run df -h again on the client it correctly shows the disk usage, and I can see the file from the client side:

Code:
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vg_shadow-lv_root
                       50G  4.9G   42G  11% /
tmpfs                 1.8G  1.8M  1.8G   1% /dev/shm
/dev/sda1             485M   64M  396M  14% /boot
/dev/mapper/vg_shadow-lv_home
                      534G  425G   83G  84% /home
10.16.23.110:/tank/storage
                      914G  1.1G  913G   1% /home/justin/tank-share

When I restart the server, though, things start getting weird. Here is df -h after the server reboot and a re-mount of the share:

Code:
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vg_shadow-lv_root
                       50G  4.9G   42G  11% /
tmpfs                 1.8G  1.8M  1.8G   1% /dev/shm
/dev/sda1             485M   64M  396M  14% /boot
/dev/mapper/vg_shadow-lv_home
                      534G  425G   83G  84% /home
10.16.23.110:/tank/storage
                      285G  2.4G  260G   1% /home/justin/tank-share

On the server side, when I ls /tank/storage, it no longer shows the 1 GB file I created (zpool list still shows the same 1 GB of space in use, though). Not sure what the correct terminology would be, but it seems that once the server reboots, ZFS never "remounts" the dataset onto /tank/storage? I've been searching for a few days and I'm really hoping I'm just missing something simple here.
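
In case anyone wants to sanity-check along with me, this is roughly what I've been poking at on the server after the reboot (output omitted):

Code:
# is the dataset actually mounted, and where does ZFS think it should go?
zfs mount
zfs get mounted,mountpoint tank/storage
mount | grep tank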
 
Decided to do a quick sanity check - I destroyed the zpool, reinstalled FreeBSD and re-created the zpool, and hit the exact same issue. When I recreated it (tank/storage) I made a file in it (dd if=/dev/zero of=/tank/storage/testing bs=1M count=100) and confirmed via zpool list that space had been allocated in the pool. Restarted the server and... no more file under /tank/storage, and anything else I create there now is saved to the OS disk instead of to the zpool... not sure what I'm missing.
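
For the record, the recreation went roughly like this - the device names below are placeholders, mine differ:

Code:
# 3-way mirror pool plus one filesystem (illustrative device names)
zpool create tank mirror da1 da2 da3
zfs create tank/storage
dd if=/dev/zero of=/tank/storage/testing bs=1M count=100
zpool list tank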

Also, almost forgot the obligatory uname -a:

Code:
FreeBSD tank.deathstar.com 9.1-RELEASE FreeBSD 9.1-RELEASE #0 r243825: Tue Dec  4 09:23:10 UTC 2012     root@farrell.cse.buffalo.edu:/usr/obj/usr/src/sys/GENERIC  amd64
 
*sigh*

Well, I just feel silly. Turns out my /etc/rc.conf was missing zfs_enable="YES" - fixed that and now all is well.
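
For anyone who lands here later, the whole fix boils down to this (as far as I can tell, the zfs mount -a step is enough to get the datasets back without a reboot):

Code:
# /etc/rc.conf
zfs_enable="YES"

# then either reboot, or mount all ZFS datasets by hand:
zfs mount -a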

Please excuse me while I go fashion myself a dunce-cap ;)
 
Thanks! Read your post just in time. I get that FreeBSD doesn't hold your hand the way Windows does, but I find it weird that ZFS doesn't work at all without that line in rc.conf. It would be clearer if it did.

Oh well, remembering stuff like this will make an admin look like a guru when s/he helps a poor soul who's all stuck like this :).
 