Just for anyone interested in a bit more information.
The labels on each disk contain information about the other devices in the same vdev, as well as about their parent container. So in a simple three-disk RAID-Z, each disk's label describes the disk itself and its two 'neighbors', as well as the pool itself (their parent is the pool). You can see this clearly by running a
zdb -l /dev/somezfsdisk
which will show the labels on the disk. During an update, labels 1 and 3 are written first (one at the beginning of the disk and one at the end), then labels 2 and 4, to minimize the chance of a failure part-way through leaving all four labels unusable.
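For reference, each of the four labels is 256 KiB: two at the front of the device and two at the very end. A minimal sketch of where they sit, assuming the usable device size is already aligned to the label size (the 128 MiB size and the dd line are illustrative, not from the test below):

```shell
# Each ZFS vdev label is 256 KiB; L0/L1 sit at the front, L2/L3 at the end.
LABEL_SIZE=$((256 * 1024))
DEV_SIZE=$((128 * 1024 * 1024))   # hypothetical 128 MiB device

L0=0
L1=$LABEL_SIZE
L2=$((DEV_SIZE - 2 * LABEL_SIZE))
L3=$((DEV_SIZE - LABEL_SIZE))

echo "L0=$L0 L1=$L1 L2=$L2 L3=$L3"
# A raw copy of the first label could then be taken with something like:
#   dd if=/dev/somezfsdisk of=label0.bin bs=$LABEL_SIZE count=1
```

Putting copies at both ends is what lets ZFS survive damage to the start (or end) of a disk.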
Because a device's label only contains information about the disks in the same vdev, if you have a multi-vdev pool and lose all the disks in one vdev, ZFS can't reconstruct the original pool layout. A simple test shows this:
Pool with two mirror vdevs:
Code:
# zpool status test
  pool: test
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        test        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            md0     ONLINE       0     0     0
            md1     ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            md2     ONLINE       0     0     0
            md3     ONLINE       0     0     0

errors: No known data errors
The label from one device shows all the pool info, but only describes itself and the other disk in its mirror (this actually outputs all four labels, but they're identical):
Code:
# zdb -l /dev/md0
--------------------------------------------
LABEL 0
--------------------------------------------
    version: 28
    name: 'test'
    state: 1
    txg: 9
    pool_guid: 1748106379667507035
    hostid: 3533697201
    hostname: 'host'
    top_guid: 7204611594387503165
    guid: 15665966071343249231
    vdev_children: 2
    vdev_tree:
        type: 'mirror'
        id: 0
        guid: 7204611594387503165
        metaslab_array: 33
        metaslab_shift: 20
        ashift: 9
        asize: 129499136
        is_log: 0
        create_txg: 4
        children[0]:
            type: 'disk'
            id: 0
            guid: 15665966071343249231
            path: '/dev/md0'
            phys_path: '/dev/md0'
            whole_disk: 1
            create_txg: 4
        children[1]:
            type: 'disk'
            id: 1
            guid: 18376971707402152791
            path: '/dev/md1'
            phys_path: '/dev/md1'
            whole_disk: 1
            create_txg: 4
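Since zdb prints the labels as plain text, individual fields are easy to pull out with awk. A quick sketch (the here-document stands in for real, truncated `zdb -l /dev/md0` output; on a live system you'd pipe zdb straight into awk):

```shell
# The here-document stands in for real `zdb -l /dev/md0` output (truncated).
cat > /tmp/label.txt <<'EOF'
    version: 28
    pool_guid: 1748106379667507035
    vdev_children: 2
EOF

# With a real device this would be: zdb -l /dev/md0 | awk ...
CHILDREN=$(awk '/vdev_children:/ { print $2; exit }' /tmp/label.txt)
echo "top-level vdevs: $CHILDREN"   # prints: top-level vdevs: 2
```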
If you destroy one entire mirror, ZFS can't figure out what's missing:
Code:
# zpool export test
# mdconfig -d -u 3
# mdconfig -d -u 2
# zpool import
  pool: test
    id: 1748106379667507035
 state: UNAVAIL
status: One or more devices are missing from the system.
action: The pool cannot be imported. Attach the missing
        devices and try again.
   see: http://illumos.org/msg/ZFS-8000-6X
config:

        test        UNAVAIL  missing device
          mirror-0  ONLINE
            md0     ONLINE
            md1     ONLINE

        Additional devices are known to be part of this pool, though their
        exact configuration cannot be determined.
I suspect it knows that there are other devices because of the vdev_children value at the top of the label, which in this case indicates that the pool should have two top-level vdevs.
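That guess can be sketched as a trivial comparison (the numbers are taken from the output above; the logic is my assumption about what import does, not anything from the ZFS source):

```shell
# Guess at the import-time check: a surviving label says the pool should have
# vdev_children top-level vdevs, but scanning the remaining disks only turns
# up some of them.
EXPECTED=2   # vdev_children from md0's label
FOUND=1      # only mirror-0 could be reconstructed after md2/md3 were destroyed

MISSING=$((EXPECTED - FOUND))
if [ "$MISSING" -gt 0 ]; then
    echo "cannot import: $MISSING top-level vdev(s) missing"
fi
```

The surviving labels can say *how many* vdevs are missing, but since those labels never described the missing vdev's disks, they can't say *what* is missing, which matches the "exact configuration cannot be determined" message.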