Setting a ZFS mountpoint in my /tank results in an empty directory!

I'm exploring the wonderful world of ZFS, but I have yet to get it working. My current setup is a Sony VAIO VGN-SZ430N laptop with four slices: a 6.37G Windows recovery slice (ad4s1), a 27.7G Windows Vista slice (ad4s2), a 20G FreeBSD slice holding all of my UFS partitions (ad4s3), and a 94G unformatted slice (though I did set its type to "165: freebsd" in cfdisk during the install) (ad4s4). I ran the following command to create a zpool:

# zpool create tank ad4s4

This left me with a /tank, hopefully being my fourth slice formatted as zfs. I then created some datasets:

# zfs create tank/usr
# zfs create tank/usr/home

At this point I wanted to try ZFS out by sticking my home directory on it (I just installed and got my system up to date, so I don't have personal information to lose):

# zfs set mountpoint=/usr/home tank/usr/home

That seemed to be successful, but when I tried to cd into /usr/home, I found there was nothing there. I have one normal user, "agi", and that directory was missing. Fortunately, all I had to do was run:

# zfs set mountpoint=none tank/usr/home

and my home directory was safe and sound on my UFS partition (I could successfully cd to /home/agi or /usr/home/agi and find the contents).

But what happened? Why was there nothing there even after setting the mountpoint?

One thing I found from the zfs man page was to try running

# zfs mount -a

after setting all the mountpoints, but that did nothing :(. I've tried this with /usr/ports and /usr/ports/distfiles as well, and neither of those worked.
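For diagnosing this kind of thing, ZFS itself can report what it thinks is mounted where. A sketch, using the dataset names from this thread (the `mounted` property shows yes/no per dataset):

```
# zfs list -r -o name,mountpoint,mounted tank
# zfs get mountpoint,mounted tank/usr/home
```

If `mounted` says yes but the directory looks empty, the dataset is mounted over a directory and is simply shadowing whatever was underneath it on the old filesystem.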

Just recently, I saw a reference in the ZFS Administration Guide and an online guide to setting the mountpoint to /export/home instead of /home. What exactly would that do? I'm not on my FreeBSD laptop right now, so I can't test it. Is this the solution I'm looking for‽


Thanks in advance. I hope I didn't screw this up from the start with my slicing (I used the advice I got from this thread: http://forums.freebsd.org/showthread.php?t=10237 )
 
Honestly? That was the problem? Lol, it's so obvious, no wonder I didn't see it in any documentation!

Alright, now I need to find the best options for the rsync command to copy my data. rsync -rv looks pretty good from the man page, but are there more options I should pass to get the file attributes or anything else right?
 
Yeah, the best way to do this would be to create the datasets, leave them mounted under /tank/, and then use piped tars to copy everything over (or cpio).

Once you have it all copied over, change the mountpoints to what they need to be.
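The piped-tar idiom looks like this (a sketch; the paths are the ones from this thread, so adjust as needed):

```shell
# Pack on one side, unpack on the other. -C changes directory first,
# so only the *contents* of /usr/home are copied; -p on the extract
# side preserves permissions.
tar -cf - -C /usr/home . | tar -xpf - -C /tank/usr/home
```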
 
It looks like the best command for the job is
# rsync -av <source> <destination>
I'm doing that right now and it's working. Let's see if it all mounts and works correctly in the end.
 
Agi93 said:
It looks like the best command for the job is
# rsync -av <source> <destination>
I'm doing that right now and it's working. Let's see if it all mounts and works correctly in the end.

Installing extra software for this is overkill. In your case it should be as simple as
Code:
cp -pR /src /dst
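One caveat worth knowing about with plain cp, shown on a scratch directory (a sketch, not the thread's actual paths): -p keeps modes, ownership and timestamps, but hard links are not preserved.

```shell
# cp -pR preserves modes/ownership/timestamps, but a file with two
# names (hard links) becomes two independent copies on the destination.
mkdir -p src
echo data > src/a
ln src/a src/b        # src/a and src/b share one inode
cp -pR src dst        # dst/a and dst/b are now separate files
```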
 
I already ran rsync -av, and ZFS won't let me unmount var or tmp because they're apparently in use, even when I take noauto off the UFS partitions. Is it harmful that I didn't include the -H and -x options for rsync? Also, according to the rsync man page, the -x option does not allow the copy to cross filesystems, and I wanted to transfer my data from UFS to ZFS.
 
Use sysutils/lsof or fstat(8) to find out who is still sitting in the partition you're trying to umount. If you know what you are doing, use # umount -f. Moving "system" partitions like /usr and /var should be done in single-user mode, or better, using fixit.

# rsync -H preserves hard links. If you do not preserve them, you'll waste space on the destination and maybe break integrity (depending on the actual use of the hard links). This is also the reason why # cp -pR is a no-go (there are some other reasons, too).

# rsync -x (--one-file-system) does not mean source and destination have to be on the same filesystem. It means the recursive search never leaves the filesystem where it started.

There's a huge difference between these two:

# rsync -aHxv / /usr /backup/root
# rsync -aHv / /usr /backup/root

The second command will fail so hard. :)
 
Yes! I was able to properly unmount my ZFS datasets, destroy the pool, recreate it along with all of my datasets, and I just ran the long rsync -aHxv command to copy my data from each desired UFS directory to its corresponding one in /tank. It's still running now, so hopefully this will work correctly. I'll post back if I have any problems or just want to let you know it worked.

Thank you!
 
Agi93 said:
Honestly? That was the problem? Lol it's so obvious no wonder I didn't see it in any documentation!

Alright, now I need to find the best options for the rsync command to copy my data. rsync -rv looks pretty good from the man page, but are there more options I should pass to get the file attributes or anything else right?

# rsync --verbose --stats --archive --hard-links --numeric-ids --inplace /path/to/source/ /path/to/dest/

--verbose shows all the files being copied

--stats shows you stats at the end on what was copied

--archive copies ownership and permissions correctly

--hard-links preserves hard links, so you don't end up with multiple independent copies of the same file (*VERY* important on FreeBSD systems if copying any bin directories); symlinks are already handled by --archive

--numeric-ids copies ownership using UID/GID instead of doing name lookups in /etc/passwd and /etc/group all the time (not really needed on local copies, but when copying between systems, it's very important)

--inplace isn't needed on an initial copy into a blank ZFS filesystem, but it will make future copies into ZFS much faster, as it plays better with the Copy-on-Write semantics of ZFS
 