There is a lot of advice out there on replacing failed ZFS devices, but the details are just different enough from my configuration to give me pause. I have noted the admonishments that the mirror and RAIDZ procedures differ, and before I do something stupid that blows this system away and forces me to start over, I wanted to check with the collective wisdom on best practices.
For example, it appears that these instructions are for mirrors and are not fully appropriate to RAIDZ arrays, as are these from Oracle.
The advice from 19.3.6. Dealing with Failed Devices seems good for the zroot array, but I'm worried it might complicate or break the rebuild of the bootpool array.
The physical drive has already been replaced. It sits on a hardware RAID controller but is presented as JBOD. I reformatted the disk from the controller BIOS without errors, initialized it, and created a single-disk "array" that is presented to the OS (the rest of the drives are configured the same way).
I'm not completely clear on whether commands along the lines of 19.3.6's
Code:
# zpool replace zroot 9632703966287330955 aacd5p4.eli
# zpool replace bootpool 13374215198732904044 gpt/boot5
are sufficient, or if I need to manually partition the disk first, then use zpool replace (or zpool attach for the bootpool mirror) and reinstall the boot blocks.
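For reference, here is my current best guess at the fully manual route. The boot5 GPT label and the GELI keyfile path are assumptions based on how the surviving drives appear to be set up, so please correct anything that looks off:

```shell
# gpart backup aacd0 | gpart restore -F aacd5     (copy the GPT layout from a healthy member)
# gpart modify -i 2 -l boot5 aacd5                (give the bootpool partition its GPT label)
# gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 aacd5   (reinstall the boot blocks)
# geli init -b -K /boot/encryption.key aacd5p4    (keyfile path and options assumed)
# geli attach -k /boot/encryption.key aacd5p4     (should make aacd5p4.eli appear)
# zpool replace bootpool 13374215198732904044 gpt/boot5
# zpool replace zroot 9632703966287330955 aacd5p4.eli
```

Is any of that redundant, or does zpool replace on a freshly partitioned disk take care of some of it on its own?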
Code:
# zpool status
  pool: bootpool
 state: DEGRADED
status: One or more devices could not be opened.  Sufficient replicas exist for
        the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
   see: http://illumos.org/msg/ZFS-8000-2Q
  scan: resilvered 368K in 0h0m with 0 errors on Fri Mar 3 20:50:35 2017
config:

        NAME                      STATE     READ WRITE CKSUM
        bootpool                  DEGRADED     0     0     0
          mirror-0                DEGRADED     0     0     0
            gpt/boot0             ONLINE       0     0     0
            gpt/boot1             ONLINE       0     0     0
            gpt/boot2             ONLINE       0     0     0
            gpt/boot3             ONLINE       0     0     0
            gpt/boot4             ONLINE       0     0     0
            13374215198732904044  UNAVAIL      0     0     0  was /dev/gpt/boot5
            gpt/boot6             ONLINE       0     0     0
            gpt/boot7             ONLINE       0     0     0

errors: No known data errors

  pool: zroot
 state: DEGRADED
status: One or more devices could not be opened.  Sufficient replicas exist for
        the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
   see: http://illumos.org/msg/ZFS-8000-2Q
  scan: resilvered 784K in 0h0m with 0 errors on Mon Mar 5 12:56:11 2018
config:

        NAME                     STATE     READ WRITE CKSUM
        zroot                    DEGRADED     0     0     0
          raidz2-0               DEGRADED     0     0     0
            aacd0p4.eli          ONLINE       0     0     0
            aacd1p4.eli          ONLINE       0     0     0
            aacd2p4.eli          ONLINE       0     0     0
            aacd3p4.eli          ONLINE       0     0     0
            aacd4p4.eli          ONLINE       0     0     0
            9632703966287330955  UNAVAIL      0     0     0  was /dev/aacd5p4.eli
            aacd5p4.eli          ONLINE       0     0     0
            aacd6p4.eli          ONLINE       0     0     0

errors: No known data errors
Code:
# gpart show
=>       34  143155133  aacd0  GPT  (68G)
         34       1024      1  freebsd-boot  (512K)
       1058    4194304      2  freebsd-zfs  (2.0G)
    4195362    4194304      3  freebsd-swap  (2.0G)
    8389666  134765501      4  freebsd-zfs  (64G)

=>       34  143155133  aacd1  GPT  (68G)
         34       1024      1  freebsd-boot  (512K)
       1058    4194304      2  freebsd-zfs  (2.0G)
    4195362    4194304      3  freebsd-swap  (2.0G)
    8389666  134765501      4  freebsd-zfs  (64G)

=>       34  143155133  aacd2  GPT  (68G)
         34       1024      1  freebsd-boot  (512K)
       1058    4194304      2  freebsd-zfs  (2.0G)
    4195362    4194304      3  freebsd-swap  (2.0G)
    8389666  134765501      4  freebsd-zfs  (64G)

=>       34  143155133  aacd3  GPT  (68G)
         34       1024      1  freebsd-boot  (512K)
       1058    4194304      2  freebsd-zfs  (2.0G)
    4195362    4194304      3  freebsd-swap  (2.0G)
    8389666  134765501      4  freebsd-zfs  (64G)

=>       34  143155133  aacd4  GPT  (68G)
         34       1024      1  freebsd-boot  (512K)
       1058    4194304      2  freebsd-zfs  (2.0G)
    4195362    4194304      3  freebsd-swap  (2.0G)
    8389666  134765501      4  freebsd-zfs  (64G)

=>       34  143155133  aacd5  GPT  (68G)
         34       1024      1  freebsd-boot  (512K)
       1058    4194304      2  freebsd-zfs  (2.0G)
    4195362    4194304      3  freebsd-swap  (2.0G)
    8389666  134765501      4  freebsd-zfs  (64G)

=>       34  143155133  aacd6  GPT  (68G)
         34       1024      1  freebsd-boot  (512K)
       1058    4194304      2  freebsd-zfs  (2.0G)
    4195362    4194304      3  freebsd-swap  (2.0G)
    8389666  134765501      4  freebsd-zfs  (64G)
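Since gpart show already reports a full table on aacd5, I was planning a non-destructive sanity check that its layout really matches the others before touching the pools (the /tmp paths are just scratch files):

```shell
# gpart backup aacd0 > /tmp/aacd0.gpt
# gpart backup aacd5 > /tmp/aacd5.gpt
# diff /tmp/aacd0.gpt /tmp/aacd5.gpt
```

No output from diff would mean the two partition tables are identical.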
And the replacement drive (aacd5) is visible in dmesg along with the rest:
Code:
# egrep 'da[0-9]|cd[0-9]' /var/run/dmesg.boot
aacd0 on aac0
aacd0: 69900MB (143155200 sectors)
aacd1 on aac0
aacd1: 69900MB (143155200 sectors)
aacd2 on aac0
aacd2: 69900MB (143155200 sectors)
aacd3 on aac0
aacd3: 69900MB (143155200 sectors)
aacd4 on aac0
aacd4: 69900MB (143155200 sectors)
aacd5 on aac0
aacd5: 69900MB (143155200 sectors)
aacd6 on aac0
aacd6: 69900MB (143155200 sectors)
aacd7 on aac0
aacd7: 69900MB (143155200 sectors)