I noticed that the zdata pool doesn't contain a cache device but does have a ZIL. I wanted to add a cache mirror, so I created GPT partitions labeled cache0 on ada0 and cache1 on ada1.
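For reference, this is roughly how the partitions were created (a reconstruction from memory; the exact size and alignment flags may have differed slightly):
Code:
root@backup:/ # gpart add -t freebsd-zfs -l cache0 -s 1G ada0
root@backup:/ # gpart add -t freebsd-zfs -l cache1 -s 1G ada1
Here is the resulting partition layout: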
Code:
root@backup:/ # gpart show -l
=> 34 5860533101 mfisyspd0 GPT (2.7T)
34 2014 - free - (1.0M)
2048 5860531080 1 data_disk0 (2.7T)
5860533128 7 - free - (3.5K)
<..snip..>
=> 34 5860533101 mfisyspd9 GPT (2.7T)
34 2014 - free - (1.0M)
2048 5860531080 1 data_disk9 (2.7T)
5860533128 7 - free - (3.5K)
=> 34 500118125 ada0 GPT (238G)
34 94 1 bootcode0 (47K)
128 2097152 2 swapada0 (1.0G)
2097280 165150720 3 disk0 (79G)
167248000 67108864 4 log0 (32G)
234356864 2097152 5 cache0 (1.0G)
236454016 263664143 - free - (126G)
=> 34 500118125 ada1 GPT (238G)
34 94 1 bootcode1 (47K)
128 2097152 2 swapada1 (1.0G)
2097280 165150720 3 disk1 (79G)
167248000 67108864 4 log1 (32G)
234356864 2097152 5 cache1 (1.0G)
236454016 263664143 - free - (126G)
Let's check the status of all the ZFS pools:
Code:
root@backup:/ # zpool status
  pool: zdata
 state: ONLINE
  scan: scrub repaired 0 in 15h32m with 0 errors on Wed Jul 6 23:12:39 2016
config:

        NAME                STATE     READ WRITE CKSUM
        zdata               ONLINE       0     0     0
          raidz3-0          ONLINE       0     0     0
            gpt/data_disk0  ONLINE       0     0     0
            gpt/data_disk1  ONLINE       0     0     0
            gpt/data_disk2  ONLINE       0     0     0
            gpt/data_disk3  ONLINE       0     0     0
            gpt/data_disk4  ONLINE       0     0     0
            gpt/data_disk5  ONLINE       0     0     0
            gpt/data_disk6  ONLINE       0     0     0
            gpt/data_disk7  ONLINE       0     0     0
            gpt/data_disk8  ONLINE       0     0     0
            gpt/data_disk9  ONLINE       0     0     0
        logs
          mirror-1          ONLINE       0     0     0
            gpt/log0        ONLINE       0     0     0
            gpt/log1        ONLINE       0     0     0

errors: No known data errors

  pool: zroot
 state: ONLINE
  scan: scrub repaired 0 in 0h2m with 0 errors on Wed Jul 6 07:12:24 2016
config:

        NAME           STATE     READ WRITE CKSUM
        zroot          ONLINE       0     0     0
          mirror-0     ONLINE       0     0     0
            gpt/disk0  ONLINE       0     0     0
            gpt/disk1  ONLINE       0     0     0

errors: No known data errors
Let's add the cache devices to the zdata pool:
Code:
root@backup:/ # zpool add zdata cache gpt/cache0 gpt/cache1
invalid vdev specification
use '-f' to override the following errors:
/dev/gpt/cache0 is part of potentially active pool 'cache'
/dev/gpt/cache1 is part of potentially active pool 'cache'
Uh oh, there appears to be an existing ZFS pool named 'cache'. Let's see if we can import it:
Code:
root@backup:/ # zpool import
   pool: cache
     id: 1461940675727504241
  state: UNAVAIL
 status: One or more devices contains corrupted data.
 action: The pool cannot be imported due to damaged devices or data.
    see: http://illumos.org/msg/ZFS-8000-5E
 config:

        cache                     UNAVAIL  insufficient replicas
          mirror-0                UNAVAIL  insufficient replicas
            11994887483163950582  UNAVAIL  corrupted data
            1506924040515699145   UNAVAIL  corrupted data
Oh, it looks like someone tried to create 'cache' as a separate ZFS pool, and it is now corrupt. Let's try to remove it:
Code:
root@backup:/ # zpool export cache
cannot open 'cache': no such pool
root@backup:/ # zpool destroy -F cache
cannot open 'cache': no such pool
root@backup:/ #zpool clear cache
cannot open 'cache': no such pool
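Presumably these fail because the 'cache' pool was never imported on this system, so there is nothing for zpool export/destroy/clear to operate on. The next thing I plan to try (untested so far, just a sketch) is reading the labels straight off the new partitions with zdb -l, to confirm that the old pool's labels are what zpool add is complaining about:
Code:
root@backup:/ # zdb -l /dev/gpt/cache0
root@backup:/ # zdb -l /dev/gpt/cache1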
Let's find out what ZFS thinks the disk structure is:
Code:
root@backup:/ # zdb -C
zdata:
    version: 5000
    name: 'zdata'
    state: 0
    txg: 14915347
    pool_guid: 939299116
    hostid: 1679191605
    hostname: 'backup.dawnsign.com'
    vdev_children: 2
    vdev_tree:
        type: 'root'
        id: 0
        guid: 939299116
        create_txg: 4
        children[0]:
            type: 'raidz'
            id: 0
            guid: 647953515
            nparity: 3
            metaslab_array: 33
            metaslab_shift: 38
            ashift: 9
            asize: 30005869936640
            is_log: 0
            create_txg: 4
            children[0]:
                type: 'disk'
                id: 0
                guid: 265152668
                path: '/dev/gpt/data_disk0'
                phys_path: '/dev/gpt/data_disk0'
                whole_disk: 1
                DTL: 356
                create_txg: 4
            children[1]:
                type: 'disk'
                id: 1
                guid: 392323551
                path: '/dev/gpt/data_disk1'
                phys_path: '/dev/gpt/data_disk1'
                whole_disk: 1
                DTL: 355
                create_txg: 4
            <..snip..>
            children[8]:
                type: 'disk'
                id: 8
                guid: 1908467374
                path: '/dev/gpt/data_disk8'
                phys_path: '/dev/gpt/data_disk8'
                whole_disk: 1
                DTL: 348
                create_txg: 4
            children[9]:
                type: 'disk'
                id: 9
                guid: 795403226
                path: '/dev/gpt/data_disk9'
                phys_path: '/dev/gpt/data_disk9'
                whole_disk: 1
                DTL: 347
                create_txg: 4
        children[1]:
            type: 'mirror'
            id: 1
            guid: 15434186492787759721
            metaslab_array: 164
            metaslab_shift: 28
            ashift: 9
            asize: 34355019776
            is_log: 1
            create_txg: 14915345
            children[0]:
                type: 'disk'
                id: 0
                guid: 11399782678873612021
                path: '/dev/gpt/log0'
                phys_path: '/dev/gpt/log0'
                whole_disk: 1
                create_txg: 14915345
            children[1]:
                type: 'disk'
                id: 1
                guid: 3690602554018923392
                path: '/dev/gpt/log1'
                phys_path: '/dev/gpt/log1'
                whole_disk: 1
                create_txg: 14915345
    features_for_read:
        com.delphix:hole_birth
zroot:
    version: 5000
    name: 'zroot'
    state: 0
    txg: 15003274
    pool_guid: 500578219
    hostid: 1679191605
    hostname: ''
    vdev_children: 1
    vdev_tree:
        type: 'root'
        id: 0
        guid: 500578219
        children[0]:
            type: 'mirror'
            id: 0
            guid: 1524681434
            metaslab_array: 33
            metaslab_shift: 29
            ashift: 12
            asize: 84552450048
            is_log: 0
            create_txg: 4
            children[0]:
                type: 'disk'
                id: 0
                guid: 1545985234
                path: '/dev/gpt/disk0'
                phys_path: '/dev/gpt/disk0'
                whole_disk: 1
                DTL: 403
                create_txg: 4
            children[1]:
                type: 'disk'
                id: 1
                guid: 969182785
                path: '/dev/gpt/disk1'
                phys_path: '/dev/gpt/disk1'
                whole_disk: 1
                DTL: 402
                create_txg: 4
    features_for_read:
        com.delphix:hole_birth
root@backup:/ #
How does one remove the 'cache' pool? How does one clean up a ZFS pool when it no longer has a corresponding gpart structure? It looks like /dev/ada0p[5-6] and /dev/ada1p[5-6] were destroyed some time ago, before the ZFS pool was removed, which left the 'cache' pool corrupted.
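One approach I'm considering (untested, and based only on my reading of zpool(8), so please correct me if this is a bad idea) is to wipe the stale ZFS labels off the new partitions with zpool labelclear and then retry the add:
Code:
root@backup:/ # zpool labelclear -f /dev/gpt/cache0
root@backup:/ # zpool labelclear -f /dev/gpt/cache1
root@backup:/ # zpool add zdata cache gpt/cache0 gpt/cache1
Is that safe here, or is there a cleaner way to make the phantom 'cache' pool go away?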
~Doug