ZFS Moving drives to different bays

Hi,
I have a root on ZFS install with the following pool.
Code:
root@:~ # zpool status
  pool: zroot
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        zroot       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            da2p2   ONLINE       0     0     0
            da3p2   ONLINE       0     0     0

errors: No known data errors
The drives are in bay 3 and bay 4 of the server, but I want to move them to bay 1 and bay 2.
Can I just power off the server and move the drives? Drive in bay 3 to bay 1 and drive in bay 4 to bay 2?
 
ZFS might initially have a bit of a problem: it's possible your disks' device names will change when you move them. But this shouldn't matter to ZFS, it's going to find the drives regardless, though it might need some hand-holding the first time you start it with the disks moved.
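
If you want to confirm what names the drives end up with after the move, something along these lines (purely illustrative, run once the system is back up) would show it:

Code:
root@:~ # camcontrol devlist    # each disk listed with the daX name it got this boot
root@:~ # zpool status zroot    # the mirror members should simply follow the disks to their new names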
 
My practical experience is that after a cable swap that changes the enumeration order of the drives, ZFS looks at what's on the disks, and, providing that nothing else has changed, has no problem understanding and using what's there.

This experience was with a tank, and not with a root pool (zroot).

I would therefore proceed with caution, boot single user, and examine the system.
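
As a rough idea of what "examine the system" could look like from single user mode (the device names are only a guess at how the drives may enumerate after the move):

Code:
root@:~ # gpart show          # confirm which da devices now carry the freebsd-zfs partitions
root@:~ # zpool status zroot  # the pool should be ONLINE, showing the new device names
root@:~ # zpool import        # only if something is missing: lists any pools that did not come up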

Lucas and Jude recommend using GPT labels on each drive partition to indicate the hard drive’s physical location and serial number, as explained by Chris Cammack. It's a great idea, especially when it comes to replacing failed drives (where pulling the wrong drive can be calamitous).
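
A rough sketch of that labelling idea (the label text, partition index, and da numbers below are placeholders, not taken from the pool above):

Code:
root@:~ # gpart modify -i 2 -l bay1-SERIAL123 da2    # tag the ZFS partition with bay and serial
root@:~ # gpart modify -i 2 -l bay2-SERIAL456 da3
root@:~ # gpart show -l                              # verify; the labels appear under /dev/gpt/

A pool built on /dev/gpt/bay1-SERIAL123 and friends then reports those labels in zpool status, no matter which bay or controller port the disk lands in.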
 
That's the fun part about ZFS. We know that gpart can label partitions (hence the Lucas/Jude recommendation), but ZFS also puts its own "metadata" about pools and vdevs and such on the device. If the device is recognized as bootable by the BIOS (Legacy or UEFI), it shouldn't matter where it's plugged in. The loader step that understands ZFS "tastes" the devices (that's the way it's described in the GEOM documentation) to figure out where the pools are, then you get pushed over to the kernel, and by the time you hit single user mode all the devices that make up your root should be up and running. That's why it's important to update everything in a mirrored boot configuration.
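
"Update everything" on a two-disk mirror boils down to writing the boot bits onto both disks. A sketch, assuming GPT with the boot/ESP partition at index 1 (check gpart show first, your indexes and the path on the ESP may differ):

Code:
root@:~ # gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 da2   # legacy BIOS boot blocks, first disk
root@:~ # gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 da3   # and the same on the second disk
root@:~ # mount_msdosfs /dev/da2p1 /mnt                               # UEFI instead: refresh the loader on each ESP
root@:~ # cp /boot/loader.efi /mnt/efi/boot/bootx64.efi
root@:~ # umount /mnt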

(gpw928: there's really no difference to ZFS between your tank and zroot pools)

But caution is always good.
 
Note the globally unique identifiers (GUIDs):

Code:
root@mowa219-gjp4-8570p-freebsd:~ # zdb -C august

MOS Configuration:
        version: 5000
        name: 'august'
        state: 0
        txg: 1466991
        pool_guid: 1913339710710793892
        errata: 0
        hostname: ''
        com.delphix:has_per_vdev_zaps
        vdev_children: 1
        vdev_tree:
            type: 'root'
            id: 0
            guid: 1913339710710793892
            create_txg: 4
            children[0]:
                type: 'disk'
                id: 0
                guid: 18223107463529918695
                path: '/dev/ada0p3.eli'
                phys_path: 'id1,enc@n3061686369656d30/type@0/slot@1/elmdesc@Slot_00/p3/eli'
                whole_disk: 1
                metaslab_array: 256
                metaslab_shift: 33
                ashift: 12
                asize: 982745612288
                is_log: 0
                DTL: 951
                create_txg: 4
                com.delphix:vdev_zap_leaf: 129
                com.delphix:vdev_zap_top: 130
        features_for_read:
            com.delphix:hole_birth
            com.delphix:embedded_data
root@mowa219-gjp4-8570p-freebsd:~ # zfs --version
zfs-2.1.99-FreeBSD_g269b5dadc
zfs-kmod-2.1.99-FreeBSD_g269b5dadc
root@mowa219-gjp4-8570p-freebsd:~ # uname -KU
1400043 1400043
root@mowa219-gjp4-8570p-freebsd:~ #

The vdev labels and L2ARC header:

Code:
root@mowa219-gjp4-8570p-freebsd:~ # zdb -l /dev/ada0p3.eli
------------------------------------
LABEL 0
------------------------------------
    version: 5000
    name: 'august'
    state: 0
    txg: 1466991
    pool_guid: 1913339710710793892
    errata: 0
    hostname: ''
    top_guid: 18223107463529918695
    guid: 18223107463529918695
    vdev_children: 1
    vdev_tree:
        type: 'disk'
        id: 0
        guid: 18223107463529918695
        path: '/dev/ada0p3.eli'
        phys_path: 'id1,enc@n3061686369656d30/type@0/slot@1/elmdesc@Slot_00/p3/eli'
        whole_disk: 1
        metaslab_array: 256
        metaslab_shift: 33
        ashift: 12
        asize: 982745612288
        is_log: 0
        DTL: 951
        create_txg: 4
    features_for_read:
        com.delphix:hole_birth
        com.delphix:embedded_data
    labels = 0 1 2 3
root@mowa219-gjp4-8570p-freebsd:~ #

– plus label space usage stats:

Code:
root@mowa219-gjp4-8570p-freebsd:~ # zdb -ll /dev/ada0p3.eli
------------------------------------
LABEL 0
------------------------------------
    version: 5000
    name: 'august'
    state: 0
    txg: 1466991
    pool_guid: 1913339710710793892
    errata: 0
    hostname: ''
    top_guid: 18223107463529918695
    guid: 18223107463529918695
    vdev_children: 1
    vdev_tree:
        type: 'disk'
        id: 0
        guid: 18223107463529918695
        path: '/dev/ada0p3.eli'
        phys_path: 'id1,enc@n3061686369656d30/type@0/slot@1/elmdesc@Slot_00/p3/eli'
        whole_disk: 1
        metaslab_array: 256
        metaslab_shift: 33
        ashift: 12
        asize: 982745612288
        is_log: 0
        DTL: 951
        create_txg: 4
    features_for_read:
        com.delphix:hole_birth
        com.delphix:embedded_data
    labels = 0 1 2 3


ZFS Label NVList Config Stats:
  1124 bytes used, 113524 bytes free (using  1.0%)

   integers:   18    664 bytes (59.07%)
    strings:    5    244 bytes (21.71%)
   booleans:    2     92 bytes ( 8.19%)
    nvlists:    3    124 bytes (11.03%)


root@mowa219-gjp4-8570p-freebsd:~ #

<https://openzfs.github.io/openzfs-docs/man/8/zdb.8.html>
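
Those GUIDs are also what you would fall back on if a pool ever had to be imported by hand after the device names changed; for example (pool name and GUID taken from the output above):

Code:
root@mowa219-gjp4-8570p-freebsd:~ # zpool status -g august            # show vdev GUIDs instead of device names
root@mowa219-gjp4-8570p-freebsd:~ # zpool import 1913339710710793892  # from a rescue environment, import by pool GUID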
 