FreeBSD 9 ZFS the easy way

JakkFrosted said:
How frequent of a problem is data corruption?

With multi-terabyte disks, corruption is inevitable. See this report from CERN, for example. Or this one from NEC. Or this more mainstream one. I could google more for you, but perhaps these will be enlightening.

JakkFrosted said:
I have never had to deal with it in my 7 years of server adminning. I host simple websites on my servers and a few other server programs here and there.

That you know of. When you deal with tens of thousands of disks and petabytes of data, this sort of data corruption is patently obvious. And that's with enterprise class drives, excellent environmentals and sophisticated monitoring. We probably shred more failed disks a month than you've used total in seven years.

That experience is why I use ZFS on any data I really care about: with it, I can calculate statistically how long it will take before data becomes unrecoverable.
 
I'm still testing the install on 9.2. The following line isn't required anymore in /boot/loader.conf:
Code:
vfs.root.mountfrom="zfs:zroot"
zpool.cache also is not required anymore.
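If I'm not mistaken, the kernel module still has to be loaded for a ZFS root, so this line stays in /boot/loader.conf (double-check me on that):
Code:
zfs_load="YES"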

I don't think the following error message is a problem, since we export the zpool and import it back afterwards.
Code:
# zpool create -o altroot=/mnt -o cachefile=/var/tmp/zpool.cache zroot /dev/gpt/disk0
cannot mount '/mnt/zroot': failed to create mountpoint
All those changes will simplify the install. I'm planning to write a new "How-to" for 9.2 once I finish testing. I will still continue testing because I get some inconsistent results when I try to install with beadm and would like to have the new setup ready for it.
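For reference, the export/import dance mentioned above would go something like this (a sketch; the mountpoint=none step borrows from the layout later in the thread):
Code:
zpool create -o altroot=/mnt -o cachefile=/var/tmp/zpool.cache zroot /dev/gpt/disk0
# ^ prints "failed to create mountpoint", but the pool itself is created fine
zfs set mountpoint=none zroot       # stop the top-level dataset from wanting /mnt/zroot
zpool export zroot
zpool import -o altroot=/mnt -o cachefile=/var/tmp/zpool.cache zroot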
 
After testing I can confirm that the install works without zpool.cache and without the line
Code:
vfs.root.mountfrom="zfs:zroot"
in /boot/loader.conf.

I still need to find an elegant way of creating the zpool.
 
srivo said:
I still need to find an elegant way of creating the zpool!

What part of the creation are you referring to here? Creating a pool is quite straightforward unless the disks are those Advanced Format 4 KB/sector disks.
 
In the install process, when I exit to the shell to create my zpool, I have trouble finding an elegant way to mount it on /mnt. I succeeded through trial and error with export/import; sometimes it works, sometimes it fails.
 
You can set the altroot property on creation of the pool so the mountpoints get rooted under the same directory during the installation no matter what the mountpoint properties are set for the datasets. Something like this:

zpool create -R /mnt tank mirror ada0 ada1 ...

This would mount the tank pool initially under /mnt/tank. You would then create the additional datasets and set mountpoints so that the root dataset gets a mountpoint of / (so it appears directly under /mnt), and remove the mountpoints of the top-level datasets that you don't want mounted; see the sketch below.

Remember the altroot property is not saved and is reset to empty on next import or boot.
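Putting that together, something like this (a sketch; the dataset layout is just an example):

Code:
zpool create -m none -R /mnt tank mirror ada0 ada1   # -m none: don't mount the pool dataset at all
zfs create -o mountpoint=/ tank/root                 # the system root; lands on /mnt via the altroot
zfs create tank/root/usr                             # inherits /usr, mounted at /mnt/usr
zfs create tank/root/var                             # inherits /var, mounted at /mnt/var
zpool set bootfs=tank/root tank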
 
The problem is that when I issue that command, instead of mounting on /mnt it mounts on /mnt/tank, with an error message "can't mount...". I don't get it.
 
That's odd. Can you post the output of the mount command from the shell you're using to create the pool? Mounting under /mnt/tank initially is expected because of the altroot property and the mountpoint of the first-created tank dataset. I should probably write a HOWTO or something, but I'm not using ZFS myself at the moment.
 
Uhm guys, not to be blunt here, but did you both already forget my previous message where I explained that the zpool behaviour was changed in FreeBSD 9.2? ;)

I even got a PR to prove it!
 
Hmm I should probably set up a virtual machine with 9.2 to test this...

Edit: Yeah, I forgot. The root filesystem of the install environment is read-only, which makes it impossible to create directories under /mnt. I have to think about a way to first create the pool with altroot set to /tmp (which is on a read-write mdmfs(8)) and then switch things around so that the root dataset of the system being installed ends up under /mnt.

Edit 2: Your PR is unfortunately invalid, because it makes no difference whether zpool(8) tries to create the /tank or the /mnt/tank directory on a read-only filesystem.
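Something along these lines might do it (untested, just a sketch):
Code:
zpool create -R /tmp zroot /dev/gpt/disk0       # /tmp is writable, so /tmp/zroot can be created
zfs set mountpoint=none zroot                   # unmount the top-level dataset again
zpool export zroot
zpool import -o altroot=/mnt zroot              # nothing mounts, so nothing needs mkdir on /mnt
zfs create zroot/ROOT                           # inherits mountpoint=none, stays unmounted
zfs create -o mountpoint=/ zroot/ROOT/default   # mounts on /mnt itself, which exists and is empty
zpool set bootfs=zroot/ROOT/default zroot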
 
@ShelLuser, I saw your post but since I was getting an error when I was trying to create the zpool your way, I thought you made an error in your post.

No offence!
 
The most elegant way I found so far is this way:
Code:
zpool create zroot /dev/gpt/disk0
zfs set mountpoint=none zroot
zfs create zroot/ROOT
zfs create -o mountpoint=/mnt zroot/ROOT/default
zpool set bootfs=zroot/ROOT/default zroot
This way the system is ready for beadm. Everything works, but there is still one thing I don't like: when I type zfs list, the output tells me that the mountpoint of zroot/ROOT/default is /mnt. Of course this is normal, but I don't find it clean.
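One way around that might be to keep the layout but lean on altroot, so the stored mountpoint can be / while everything still lands under /mnt during the install (a sketch, untested):
Code:
zpool create -m none -R /mnt zroot /dev/gpt/disk0
zfs create zroot/ROOT
zfs create -o mountpoint=/ zroot/ROOT/default   # mounted on /mnt via the altroot; the stored
                                                # property is /, so zfs list is clean after reboot
zpool set bootfs=zroot/ROOT/default zroot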
 
srivo said:
ShelLuser, I saw your post but since I was getting an error when I was trying to create the zpool your way, I thought you made an error in your post!

No offence!
None taken whatsoever; I was teasing a little bit up there (hence the smileys).

But I noticed that your example didn't seem to set the mountpoint. So, my command would be: # zpool create -m / -R /mnt zroot /dev/gpt/disk0. Notice the -m parameter: it sets the mountpoint to the root directory so that zpool won't try to create a new mountpoint of its own.

It can be a bit confusing, but -m sets the real mountpoint whereas -R sets the alternate root. zpool used to set the mountpoint automatically to root (/), but with the upcoming release it will set it automatically to /pool (where 'pool' is the name of the ZFS pool you're creating).

Hope this can help.
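A quick way to see what each flag did (a sketch; exact output may vary):
Code:
# zpool create -m / -R /mnt zroot /dev/gpt/disk0
# zpool get altroot zroot    # the temporary alternate root (/mnt); reset on the next import
# zfs get mountpoint zroot   # the stored mountpoint property (/), though the altroot may be
                             # prepended in the output while the pool is imported this way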
 
Because of an issue with zdb(8) I still prefer to use a cachefile. See FreeBSD 9.2 zpool.cache

For a 4K aligned pool, I do it this way:
Code:
# zpool history
History for 'kontos':
2013-09-11.23:07:49 zpool create -f -m none -o cachefile=/tmp/zpool.cache kontos \
                    mirror /dev/gpt/sys_1.nop /dev/gpt/sys_2.nop
2013-09-11.23:07:59 zpool export kontos
2013-09-11.23:08:46 zpool import -o altroot=/mnt -o cachefile=/tmp/zpool.cache -d /dev/gpt kontos
2013-09-11.23:09:25 zpool set bootfs=kontos kontos
2013-09-11.23:09:33 zfs set checksum=fletcher4 kontos
2013-09-11.23:09:33 zfs set atime=off kontos
2013-09-11.23:09:38 zfs set mountpoint=/ kontos
2013-09-11.23:12:14 zfs create kontos/usr
2013-09-11.23:12:15 zfs create kontos/usr/home
2013-09-11.23:12:15 zfs create kontos/var
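The .nop devices in that history are gnop(8) providers, the usual trick to force 4K sectors (ashift=12) on drives that report 512-byte sectors. A sketch of how they would be set up and swapped out again, assuming the labels above:
Code:
gnop create -S 4096 /dev/gpt/sys_1   # creates /dev/gpt/sys_1.nop with a 4096-byte sector size
gnop create -S 4096 /dev/gpt/sys_2
# ...create the pool on the .nop providers as in the history above, then:
zpool export kontos
gnop destroy /dev/gpt/sys_1.nop /dev/gpt/sys_2.nop
zpool import -o altroot=/mnt -o cachefile=/tmp/zpool.cache -d /dev/gpt kontos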
 
Wait! @J65nko is right, we should avoid getting rid of zpool.cache for now. I just tried to create another boot environment with beadm and discovered that beadm uses the zpool.cache file. If you have an install without the zpool.cache file, just issue the following command to generate it: zpool set cachefile=/boot/zfs/zpool.cache zroot
 
I still wonder if
Code:
zfs_enable="YES"
is required in /etc/rc.conf. I just installed without it and everything seems to work fine.
 
Well, the answer is: yes, we should still keep
Code:
zfs_enable="YES"
in our /etc/rc.conf. It starts the ZFS daemon that takes care of our ZFS drive!
 
Not quite. There are no user-space daemons for ZFS on FreeBSD. What the setting in rc.conf(5) does is set some jail-related ZFS parameters and make sure /etc/zfs/exports exists, in case some of the datasets are shared over NFS using the sharenfs property.
 
The rc script also does a zfs mount -a, which is fairly important :). It mounts all ZFS file systems (those that have canmount=on, at least). Without it, no ZFS file system will be mounted unless it's your root file system, or you've added it to /etc/fstab (which you shouldn't).
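In other words, without that rc script you would have to do the equivalent by hand after every boot, roughly this (a sketch):
Code:
zfs mount -a    # mount every dataset with canmount=on and a regular mountpoint
zfs share -a    # re-share any datasets that have the sharenfs property set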
 
Hi all,

I have followed the tutorial and when I typed # zpool create -m / -o altroot=/mnt -o cachefile=/var/tmp/zpool.cache zroot /dev/gpt/disk0 I get the following error:
Code:
mountpoint '/' exists and is not empty.
use '-m' option to provide a different default

Does anyone know why I get that message?

Thank you.
 
Hi @ShelLuser,

The version I was using was FreeBSD 9.2; I didn't check if /mnt existed, and I deleted the VM. I'll have to double-check next time. While I've got your attention: in the post you mentioned that you wanted to split /usr and /var.

Code:
zfs create zroot/usr
zfs create zroot/usr/home
zfs create zroot/var
zfs create -o compression=lzjb -o exec=off -o setuid=off zroot/var/log
zfs create -o compression=lzjb -o exec=off -o setuid=off zroot/usr/src
zfs set copies=1 zroot/usr/src
zfs create -o compression=lzjb -o setuid=off zroot/usr/ports
zfs set copies=1 zroot/usr/ports

Will my code above be the same as yours?
Code:
    (before the above 'zpool export zroot')

    mkdir /mnt/usr
    zfs create -o compression=lzjb -o setuid=off -o mountpoint=/usr/ports zroot/ports
    zfs create -o compression=off -o exec=off -o setuid=off zroot/ports/distfiles
    zfs create -o compression=off -o exec=off -o setuid=off zroot/ports/packages

    zfs create -o mountpoint=/usr/local zroot/local

    zfs create zroot/var
    zfs create -o exec=off -o setuid=off zroot/var/db
    zfs create -o compression=lzjb -o exec=on -o setuid=off zroot/var/db/pkg
    zfs create -o compression=on -o exec=off -o setuid=off zroot/var/mail
    zfs create -o compression=lzjb -o exec=off -o setuid=off zroot/var/log
    zfs create -o exec=off -o setuid=off zroot/var/run
    zfs create -o compression=lzjb -o exec=on -o setuid=off zroot/var/tmp
    chmod 1777 /mnt/var/tmp

    zfs create zroot/home
    zfs create zroot/tmp
    chmod 1777 /mnt/tmp

Fred
 