ZFS zpool in DEGRADED state

Hi,

After a hardware failure (motherboard replacement) the server was reassembled. For the record, I didn't pay any attention to disk order in the disk shelves when placing the 4 drives back.
I have noticed that my raidz pool is in a degraded state.
This pool was built with 4 drives, not 5, and it seems like it might have something to do with the drive order having changed.
Any idea how to fix that? camcontrol identifies all the devices with no issue.


[Attachment: Screenshot 2023-12-14 at 7.23.21.png]


Thanks
 
The order or "name" (i.e. the visual representation the OS currently uses) of the drives is irrelevant; ZFS uses the on-disk metadata to assemble a pool.
You can always change the visual representation ("name") of the disks in a system by enabling/disabling the various kern.geom.label sysctls. E.g. setting .gptid.enable and .disk_ident.enable to 0 will show drives by their GPT labels in zpool status. But again: this is completely irrelevant for ZFS when it assembles a pool at import.
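If you do want zpool status to show GPT labels instead of gptids/diskids, the usual way is via loader tunables (a minimal sketch; set in /boot/loader.conf and applied on the next boot):

Bash:
# /boot/loader.conf - prefer GPT labels over gptid/diskid in device listings
kern.geom.label.disk_ident.enable="0"
kern.geom.label.gptid.enable="0"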

Regarding your pool issues:
RaidZ vdevs should *always* be set up with a multiple-of-two number of data disks plus the number of parity disks, i.e. raidz1 will always have an uneven number of disks (2n+1). Are you sure that pool wasn't created with 5 disks (4+1)? zpool history maxpool | grep create will show you the command the pool was created with.
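For example, to pull the original creation command out of the pool history (the command is taken straight from the paragraph above):

Bash:
# print only the 'zpool create ...' line from the pool's history
zpool history maxpool | grep create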

Can you give the output of zdb -C maxpool and zdb -h maxpool | grep -vE 'destroy|snapshot|send'? The output of the second command in particular might be quite long, but you can pipe it directly to termbin.com and share the link ( [...] | nc termbin.com 9999)
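Spelled out, the whole pipeline would look something like this (assuming maxpool is imported; nc is part of the FreeBSD base system):

Bash:
# pool configuration
zdb -C maxpool | nc termbin.com 9999
# pool history, filtered down to the interesting commands
zdb -h maxpool | grep -vE 'destroy|snapshot|send' | nc termbin.com 9999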


edit:
zdb -h maxpool | grep 'zpool' should also be sufficient as a starting point
 
Apparently the zpool history showed that the pool was created with 5 drives (it was a long time ago, I didn't remember).

Bash:
zpool create -f -o ashift=12 maxpool raidz f6c86638-cb24-4753-b5be-39981ebd5dab dcb008fb-6ecd-49fd-9980-14e81a9920c6 d2b4099d-4801-4b9d-96f4-15f6bb9f2ca6 661c4e8b-3991-4297-ba05-95c5c9532692 1b7ac158-6b25-4b98-9d20-5e313b783dde

Forgot the 5th drive on the temporary server when replacing the MB :(

[Attachment: Screenshot 2023-12-14 at 11.01.35.png]

Thank you for the quick help
 
RaidZ vdevs should *always* be set up with a multiple-of-two number of data disks plus the number of parity disks, i.e. raidz1 will always have an uneven number of disks (2n+1).
I realise that is the conventional wisdom, but I was under the impression that compression interfered with the ideal (2n+1 for RAIDZ1) striping, and for that reason any number of disks was considered OK for any RAIDZ.
 
I realise that is the conventional wisdom, but I was under the impression that compression interfered with the ideal (2n+1 for RAIDZ1) striping, and for that reason any number of disks was considered OK for any RAIDZ.
For (mostly) incompressible data you will still lose a lot of space to padding. With an increasing number of drives this factor gets smaller, but especially for very small pools raidZ is usually not very efficient, and if only a single vdev is used it is also quite slow - on spinning rust even worse than on flash. Resilvering of such pools/vdevs can easily take multiple days. IMHO raidZ on small pools (i.e. anything <10 drives) should *only* be used if you absolutely can't use mirrors for some reason (e.g. physical space limitations). Mirrors are always more flexible and provide much better "allround-performance", especially if you want to run e.g. VMs or databases off that pool.
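To illustrate the mirror suggestion, a four-disk pool built as two striped 2-way mirrors would be created roughly like this (pool name and device names are placeholders, not the OP's actual disks):

Bash:
# two 2-way mirror vdevs striped together - placeholder pool/device names
zpool create -o ashift=12 tank mirror da0 da1 mirror da2 da3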
 