Booting from ZFS no longer requires zpool.cache in 9-STABLE

Andriy Gapon just committed to 9-STABLE the last parts needed to boot from a ZFS pool without a zpool.cache file. The change makes the kernel probe the disks using GEOM instead of relying on the zpool.cache file.

Read this before trying it out; in some cases there may be problems from leftover ZFS labels on re-used disks:

http://lists.freebsd.org/pipermail/freebsd-stable/2012-November/070883.html
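
If you hit that, a rough sketch of checking for and clearing stale labels on a re-used disk might look like this (/dev/ada1 is only an example, and labelclear is destructive, so double-check the device name first):

# Check whether a disk still carries an old ZFS label.
zdb -l /dev/ada1

# Wipe the leftover vdev labels (ZFS keeps copies at both the start
# and the end of the device).
# WARNING: destructive; make sure /dev/ada1 is really the disk you mean.
zpool labelclear -f /dev/ada1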

My system is at SVN revision r244633; update to at least that revision to get the changes.
 
You can call it either way. It's called 9-STABLE because it's the development branch in the SVN repository at stable/9. At the same time it's called 9.1-STABLE because it's the development branch for the next release after 9.1.

Anyway, you have to be tracking the stable/9 branch of the SVN repository to get these changes.
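
For anyone not already tracking it, the usual procedure looks roughly like this (the standard source upgrade, nothing specific to this change):

# Check out the stable/9 source tree (or run "svn update /usr/src"
# on an existing checkout).
svn checkout svn://svn.freebsd.org/base/stable/9 /usr/src

# Rebuild and install world and kernel as usual.
cd /usr/src
make buildworld buildkernel
make installkernel
# reboot, then: make installworld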
 
kpa said:
Anyway, you have to be tracking the stable/9 branch of the SVN repository to get these changes.

Will do shortly on one of my own servers. Actually, after an X.1-RELEASE my desktop usually goes to CURRENT.
 
I also noticed that there's no need to have a vfs.root.mountfrom line in loader.conf(5) anymore as long as you have the bootfs property set on the pool. All of this makes creating a bootable ZFS installation a lot easier.
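
For reference, a minimal sketch; the pool and dataset names (zroot, zroot/ROOT/default) are just examples:

# Tell the loader which dataset to boot from; with this set, no
# vfs.root.mountfrom line in loader.conf(5) is needed anymore.
zpool set bootfs=zroot/ROOT/default zroot

# Verify:
zpool get bootfs zroot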
 
kpa said:
I also noticed that there's no need to have a vfs.root.mountfrom line in loader.conf(5) anymore as long as you have the bootfs property set on the pool. All of this makes creating a bootable ZFS installation a lot easier.

Very much indeed. Also, switching boot environments (BEs) would actually be much easier.
 
There are some utilities that need zpool.cache even though the kernel no longer requires it. One of them is zdb(8). However, its manual page warns that it should not be used on a live pool, meaning that if you want to use zdb(8) on the root pool, you should boot from recovery media and import the pool there so that a temporary zpool.cache is created.
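
In practice that could look roughly like this from the recovery shell (the pool name and paths are examples; the read-only import is just an extra precaution):

# From a livefs/recovery boot: import the root pool read-only under an
# altroot, writing a temporary cache file for zdb(8) to use.
zpool import -o cachefile=/tmp/zpool.cache -o readonly=on -R /mnt zroot

# Point zdb(8) at the temporary cache file.
zdb -U /tmp/zpool.cache zroot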
 
I just hope this doesn't make the problem of labels disappearing from zpools even worse. I'm pretty sure I'm not the only person who wants a simple way to fix disk device names without having half a dozen dev entries all pointing to the same device. It's a bit of a pain when you carefully label every disk only to find [a]da entries randomly starting to appear in the pool.
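
For context, by labelling I mean something like this with glabel(8), done before the pool is created (the label names are made up):

# Give each disk a stable, human-readable name; the label survives the
# disk moving to a different port and getting renumbered.
glabel label disk-bay0 /dev/ada0
glabel label disk-bay1 /dev/ada1

# The labelled devices then show up under /dev/label/:
ls /dev/label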
 
I know this change doesn't affect labels directly, but there are many examples on the forums where users have found that ZFS has started using the direct devices rather than their labels. The main reason it didn't do this more often is that the cache file includes a list of the devices that make up the pool. This is really what the cache file is for: it saves ZFS from having to scan all the disks. If ZFS now scans the disks rather than using the cache file, it's bound to find the ada/da devices first.

For example, this will make it much easier to install to ZFS, as you don't have to do all the messing around to get the correct cache file in the right place. But if you label all your disks, build the pool and then install without the cache file, there's a good chance it'll boot up without the labels.
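
If that happens, a commonly suggested workaround is to re-import the pool while restricting where ZFS looks for devices, roughly like this (the pool name is an example):

# Re-import using only the glabel(8) devices so the pool members are
# recorded by label rather than by raw ada/da name.
zpool export tank
zpool import -d /dev/label tank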
 
In my case it was using the glabel(8) labels I had on the disks even after deleting the zpool.cache. Is it really such a big deal? Importing the pool will succeed no matter what the devices are called, because the devices are detected solely from the on-disk metadata. I do understand that it can be a bit problematic to identify a single disk out of a big array of disks for replacement, but even that can be done without labels.
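
To be concrete, a rough way to do that is via the vdev GUIDs in the on-disk metadata (the partition names below are examples):

# zpool status prints the numeric GUID of a missing vdev in place of
# its device name. Read the labels of the disks that are still present
# and match their GUIDs; the disk whose GUID never shows up is the
# one to replace.
for d in /dev/ada0p3 /dev/ada1p3 /dev/ada2p3; do
    echo "== $d =="
    zdb -l $d | grep -w guid
done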
 
kpa said:
In my case it was using the glabel(8) labels I had on the disks even after deleting the zpool.cache. Is it really such a big deal? Importing the pool will succeed no matter what the devices are called, because the devices are detected solely from the on-disk metadata. I do understand that it can be a bit problematic to identify a single disk out of a big array of disks for replacement, but even that can be done without labels.

Yes, that is a big deal. Once a disk dies or goes offline, you wouldn't have any way of finding the label of that disk. AFAIK, you'd then have to start a process of elimination: find the labels of every other disk. This is time consuming, which is bad.
 