I have a problem when physically moving a ZFS pool disk from an ada device to a da device.
ZFS doesn't detect the disk any more.
The pool was created like this:
$ gpart create -s GPT ada1
ada1 created
$ gpart show ada1
=>        34  7814037101  ada1  GPT  (3.7T)
          34  7814037101        - free -  (3.7T)
$ gpart add -b 2048 -s 7813627501 -t freebsd-zfs -l disk10 ada1
ada1p1 added
$ gpart show ada1
=>        34  7814037101  ada1  GPT  (3.7T)
          34        2014        - free -  (1M)
        2048  7813627501     1  freebsd-zfs  (3.7T)
  7813629549      407586        - free -  (199M)
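For reference, at this point the GPT label also shows up as a GEOM provider, which is what the pool is built on (listing reproduced from memory):

$ ls /dev/gpt
disk10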
disk11 and disk12 were created the same way, on ada2 and ada3, and the pool was built on the GPT labels:
$ zpool create tank2 raidz gpt/disk10 gpt/disk11 gpt/disk12
If I then export the pool (zpool export tank2) and check zpool import, everything looks fine:
$ zpool import
   pool: tank2
     id: 5667179852108998157
  state: ONLINE
 status: One or more devices were configured to use a non-native block size.
         Expect reduced performance.
 action: The pool can be imported using its name or numeric identifier.
 config:

        tank2           ONLINE
          raidz1-0      ONLINE
            gpt/disk10  ONLINE
            gpt/disk11  ONLINE
            gpt/disk12  ONLINE
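(Side note: as far as I know, the non-native block size warning only means the vdevs were created with an ashift smaller than the drives' native sector size. On FreeBSD the minimum ashift for new vdevs is steered via sysctl, shown here just for completeness; it seems unrelated to the detection problem:

$ sysctl vfs.zfs.min_auto_ashift=12
vfs.zfs.min_auto_ashift: 9 -> 12
)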
$ gpart show -l
=>        34  7814037101  ada1  GPT  (3.6T)
          34        2014        - free -  (1.0M)
        2048  7813627501     1  disk10  (3.6T)
  7813629549      407586        - free -  (199M)

=>        34  7814037101  ada2  GPT  (3.6T)
          34        2014        - free -  (1.0M)
        2048  7813627501     1  disk11  (3.6T)
  7813629549      407586        - free -  (199M)

=>        34  7814037101  ada3  GPT  (3.6T)
          34        2014        - free -  (1.0M)
        2048  7813627501     1  disk12  (3.6T)
  7813629549      407586        - free -  (199M)

=>        34  7814037101  diskid/DISK-WD-WMC1F0784422  GPT  (3.6T)
          34        2014                               - free -  (1.0M)
        2048  7813627501                            1  disk10  (3.6T)
  7813629549      407586                               - free -  (199M)

=>        34  7814037101  diskid/DISK-WD-WMC1F0712526  GPT  (3.6T)
          34        2014                               - free -  (1.0M)
        2048  7813627501                            1  disk11  (3.6T)
  7813629549      407586                               - free -  (199M)

=>        34  7814037101  diskid/DISK-WD-WMC1F0634396  GPT  (3.6T)
          34        2014                               - free -  (1.0M)
        2048  7813627501                            1  disk12  (3.6T)
  7813629549      407586                               - free -  (199M)
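Each disk is listed twice above, once as adaN and once under diskid/, because both the GPT and disk-ident GEOM label classes are enabled; that can be checked with sysctl (the values shown are what I'd expect given the output above):

$ sysctl kern.geom.label.gpt.enable kern.geom.label.disk_ident.enable
kern.geom.label.gpt.enable: 1
kern.geom.label.disk_ident.enable: 1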
But if I then physically move disk11 so that it attaches as a da device instead:
$ zpool import
   pool: tank2
     id: 5667179852108998157
  state: DEGRADED
 status: One or more devices are missing from the system.
 action: The pool can be imported despite missing or damaged devices.  The
         fault tolerance of the pool may be compromised if imported.
    see: http://illumos.org/msg/ZFS-8000-2Q
 config:

        tank2                     DEGRADED
          raidz1-0                DEGRADED
            gpt/disk10            ONLINE
            16291655659778260829  UNAVAIL  cannot open
            gpt/disk12            ONLINE
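The drive itself does attach on the new bus; camcontrol lists it as a da device (the name da0 and the bus/target numbers below are assumptions on my part, output abbreviated):

$ camcontrol devlist
...
<WDC ...>                          at scbus8 target 0 lun 0 (pass8,da0)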
The disk no longer shows up at all when checking for GPTs:
$ gpart show -l
=>        34  7814037101  ada1  GPT  (3.6T)
          34        2014        - free -  (1.0M)
        2048  7813627501     1  disk10  (3.6T)
  7813629549      407586        - free -  (199M)

=>        34  7814037101  ada3  GPT  (3.6T)
          34        2014        - free -  (1.0M)
        2048  7813627501     1  disk12  (3.6T)
  7813629549      407586        - free -  (199M)

=>        34  7814037101  diskid/DISK-WD-WMC1F0784422  GPT  (3.6T)
          34        2014                               - free -  (1.0M)
        2048  7813627501                            1  disk10  (3.6T)
  7813629549      407586                               - free -  (199M)

=>        34  7814037101  diskid/DISK-WD-WMC1F0634396  GPT  (3.6T)
          34        2014                               - free -  (1.0M)
        2048  7813627501                            1  disk12  (3.6T)
  7813629549      407586                               - free -  (199M)
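Querying the new device node directly gives the same answer, no GPT at all (again assuming the disk attached as da0):

$ gpart show da0
gpart: No such geom: da0.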
Did I screw something up when creating this?
I also have some disks that were not partitioned with gpart but just carry a glabel. Those are no problem to move back and forth between ada and da devices.
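Those were labeled directly on the whole disk, roughly like this (the label disk20, device da2, and pool tank3 are placeholders from memory):

$ glabel label -v disk20 da2
Metadata value stored on da2.
Done.
$ zpool create tank3 label/disk20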