Solved: Installing to ZFS root, how to create the pool using gptid

The bsdinstall script works perfectly for me using Root-on-ZFS Automatic Partitioning, but the resulting pool uses default device names such as /dev/ada2. I have a target system where I'm installing onto the second SSD, but I anticipate that in the future the first SSD (which is in a hot-swap bay) may be removed. The pool would stop working when ada2 became ada1.

I know that I could avoid future problems by using gptid device naming. If I were issuing the zpool command myself, it would be something like
Code:
zpool create zroot gptid/<gptid-of-the-root-partition>
or, if it were a pool other than the boot pool, I could later export it and re-import it using gptid device naming:
Code:
zpool export zroot
zpool import -d /dev/gptid zroot
I can't use gptid device naming in the first place because the zfs-root guided partitioning doesn't support that (?), and I can't revise it later (can I?) because I can't unmount (export) the root partition. Is there a way around this dilemma that I (relative newbie) would understand?
 
ZFS doesn't care what the device names are; it only cares about the GUIDs. The names are for us users. If all drives have boot code written to them, you should be able to shuffle them around at will.

Some reading on it:
http://utcc.utoronto.ca/~cks/space/blog/solaris/ZFSGuids

It seems that my fear is unfounded and all will be well. I had used zdb -C to dump the zpool configuration data for a couple of pools that I use and had tried to understand how ZFS keeps track of which host owns a pool and which disks are part of it. It is quite a good data structure -- see the note at the link provided by junovitch.
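
For anyone wanting to look at the same thing, the command is just zdb(8) with the pool name (zroot here is only an example; the guid and path fields in the output will of course differ per pool):
Code:
zdb -C zroot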

A word of caution to users of Linux: Linux may behave differently. I created a 2-disk mirror pool under Linux using ZFS on Linux version 0.6.3. Following this guide, the pool was created with
Code:
zpool create -o ashift=12 data mirror /dev/sdb /dev/sdc
When I later moved the drive at sdb to another port, /dev/sdd, the pool could not be mounted or imported; it was faulted. I resolved the problem by temporarily moving the disk back to sdb, exporting the pool, and immediately re-importing it with zpool import -d /dev/disk/by-partuuid. The drive could then be moved successfully. From that experience it seemed that the pool had to be created with UUIDs in the first place in order to have relocatable drives; indeed, it may be that way under Linux.
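
Spelled out, the recovery steps were roughly the following, assuming the pool name data from above (on some distributions the stable names live under /dev/disk/by-id instead):
Code:
zpool export data
zpool import -d /dev/disk/by-partuuid data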

For now I am assuming that FreeBSD handles ZFS calls into the kernel using GUIDs no matter how the pool was created, and that disks can always be moved between physical ports. To quote from junovitch's reference:

Although ZFS commands like 'zpool status' will generally not tell you this, ZFS does not actually identify pool devices by their device name. Well, mostly it doesn't. The real situation is somewhat complex. ZFS likes identifying pool-related objects with what it calls a 'GUID', a random and theoretically unique 64-bit identifier. Pools have GUIDs, vdevs have GUIDs, and, specifically, disks (in pool configurations) have GUIDs. ZFS internally uses the GUID for most operations; for instance, almost all of the kernel interfaces that zpool uses actually take GUIDs to identify what to change, instead of device names.
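
If you just want to see a pool's own GUID, one way on FreeBSD (using zroot as an example pool name) is the read-only guid pool property:
Code:
zpool get guid zroot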
 
Before I posted I did shuffle a bunch of disks around to different SATA ports in a VirtualBox VM I was using for some ZFS testing. FreeBSD didn't think twice about it and booted from the pool just fine. Beforehand some of the disks used the ada# notation, and afterwards they showed up under the much uglier diskid/DISK-VB076d0fd5-5403939fp3 style of name.

I looked at another VirtualBox image I had for some ZFS testing, and that one used the GPT labels gpt/zfs0 and gpt/zfs1 from a fresh install rather than the bare ada0p3 and ada1p3 device names.
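
If you want to check which labels your own install ended up with, something like the following should show them (ada0 is just an example device):
Code:
gpart show -l ada0
glabel status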

In case you're wondering, I used to have UFS on an SSD on my home server plus a 3-drive RAIDZ. The SSD died and I didn't want to lose any data, so I tested out converting an existing system in the VirtualBox systems before doing it for real. There are tons of manual Root-on-ZFS guides out there, and once the layout is all in place it's just a matter of using tar(1) to unpack the FreeBSD distribution tarballs, as in the sketch below.
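
The final step in those guides boils down to something like this (the paths here are only an example; adjust the source directory and the mountpoint of the new pool to your setup):
Code:
tar -xpf /usr/freebsd-dist/base.txz -C /mnt
tar -xpf /usr/freebsd-dist/kernel.txz -C /mnt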
 