ZFS pool not using GPT labels after forcing 4k sector size with gnop

Hi!

I'm prototyping a new file server setup and have to deal with drives that have 4k physical sectors but report 512B sectors. As this server will have 15 HDDs for its storage pools, I use GPT labels indicating which drive bay each drive is located in.

I've used my google-fu and found a way to force ZFS to recognize the 4k sector size using gnop. The problem is that when I export/import the pool (or restart the machine), the pool references the drive partitions directly, not their GPT labels.

Is there any way to force ZFS to refer to a drive's GPT labels in this situation?

Verbose details
I've applied my steps to a single-drive pool for the purpose of this post. The commands are as follows (assuming ada1 already has a GPT partition scheme; if not, run gpart create -s gpt ada1 first):
# gpart add -a 1m -t freebsd-zfs -l Bay1.1 ada1
# gnop create -S 4k gpt/Bay1.1
# zpool create tank gpt/Bay1.1.nop
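To confirm the gnop trick actually took effect, the ashift recorded in the pool configuration can be checked with zdb (ashift: 12 corresponds to 2^12 = 4096-byte sectors; ashift: 9 would mean the pool is still using 512B sectors):

```shell
# Print the cached pool configuration and look for the vdev's ashift.
# Expect "ashift: 12" if the 4k sector size was picked up from the
# .nop device at creation time.
zdb -C tank | grep ashift
```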

That creates the following pool:
# zpool status
Code:
  pool: tank
 state: ONLINE
 scan: none requested
config:

        NAME              STATE     READ WRITE CKSUM
        tank              ONLINE       0     0     0
          gpt/Bay1.1.nop  ONLINE       0     0     0

errors: No known data errors

I then proceed to export the pool, destroy the gnop device, and import the pool again:
# zpool export tank
# gnop destroy gpt/Bay1.1.nop
# zpool import tank

And the pool looks as such:

# zpool status
Code:
  pool: tank
 state: ONLINE
 scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          ada1p1    ONLINE       0     0     0

errors: No known data errors
 
This is a known problem. On import zpool(8) defaults to searching /dev for devices and ignores /dev/gpt if the devices that make up the pool are found in /dev. This is the recommended workaround:

# zpool import -d /dev/gpt poolname
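Since the device paths are rewritten into the pool configuration on import, one export/import cycle with -d should be enough; the GPT-label paths then persist across reboots via the cache file. A minimal sketch, using the single-drive pool from above:

```shell
# One-time fix: re-import with the search directory restricted to
# /dev/gpt so the label paths are stored in the pool configuration.
zpool export tank
zpool import -d /dev/gpt tank

# Subsequent status output should now show gpt/Bay1.1 instead of ada1p1.
zpool status tank
```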
 
kpa said:
This is a known problem. On import zpool(8) defaults to searching /dev for devices and ignores /dev/gpt if the devices that make up the pool are found in /dev. This is the recommended workaround:

# zpool import -d /dev/gpt poolname

Thank you! That solved the immediate problem. The data drives and the ZIL device are now recognized by their GPT labels.

When applying this to the prototype pool, the cache device is still not referred to by its GPT label. Further investigation shows there's no entry for the cache device in the zdb output; why is that?

# zpool status
Code:
  pool: testpool
 state: ONLINE
 scan: none requested
config:

        NAME                STATE     READ WRITE CKSUM
        testpool            ONLINE       0     0     0
          raidz1-0          ONLINE       0     0     0
            gpt/Bay1.1      ONLINE       0     0     0
            gpt/Bay1.2      ONLINE       0     0     0
            gpt/Bay1.3      ONLINE       0     0     0
        logs
          gpt/zfslog        ONLINE       0     0     0
        cache
          ada4s5            ONLINE       0     0     0
 
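One way around this: cache (L2ARC) devices hold only evictable copies of data, so they can be removed and re-added without risk to the pool. A sketch, assuming the cache disk can be repartitioned under GPT (the label name zfscache is illustrative, not from the thread):

```shell
# L2ARC devices store no persistent pool data, so dropping and
# re-adding one is safe.
zpool remove testpool ada4s5

# ...repartition the disk with a GPT scheme and a label here
# (e.g. gpart add -t freebsd-zfs -l zfscache ...)...

# Re-add the cache device by its GPT label.
zpool add testpool cache gpt/zfscache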