ZFS: Can't import pool after hard drive failure

In desperate need of help please! Can someone please point me in the right direction? I cannot import my pool after a hard drive failure. I replaced the failed drive and waited for it to rebuild, hoping that would solve the problem. But it didn't. I am new at this, so any advice would be greatly appreciated. Thank you!

Code:
 pool: Bigarray
     id: 16818539054092279979
  state: ONLINE
 status: Some supported features are not enabled on the pool.
 action: The pool can be imported using its name or numeric identifier, though
        some features will not be available without an explicit 'zpool upgrade'.
 config:

        Bigarray                                      ONLINE
          gptid/0d877c5a-1fbe-11e7-8ed1-a0369fbd6614  ONLINE



cannot import 'Bigarray': one or more devices is currently unavailable
This is the message I get when trying to import the pool.
Code:
Solaris: WARNING: can't open objset 49, error 122
Solaris: WARNING: can't open objset 73, error 122
Solaris: WARNING: can't open objset 73, error 122
Solaris: WARNING: can't open objset 73, error 5
Solaris: WARNING: can't open objset 136, error 5
Solaris: WARNING: can't open objset 49, error 122
Solaris: WARNING: can't open objset 136, error 5
Solaris: WARNING: can't open objset 73, error 5
Solaris: WARNING: can't open objset 49, error 122
Solaris: WARNING: can't open objset 136, error 122
Solaris: WARNING: can't open objset 49, error 122
Solaris: WARNING: can't open objset 73, error 122
 
What version of FreeBSD are you using? And how many disks did the Bigarray pool have, and in what configuration (raid-z, mirror, etc.)?
 
Can you tell us a bit more about the pool? How many devices is it supposed to have?
This was built before I started here; I was kind of thrown into this position, as I was just on the help desk before all this. So I am learning as we go. I hope this answers your question.
Vendor: HP
Product: RAID 5
Revision: OK
User Capacity: 20,003,767,017,472 bytes [20.0 TB]
Logical block size: 512 bytes
Physical block size: 4096 bytes
Logical Unit id: 0x600508b1001c87d88bc4e10f45a2064a
Serial number: 50014380139A7FE0
Device type: disk
Local Time is: Tue May 23 09:01:49 2023 CDT
SMART support is: Unavailable - device lacks SMART capability.
 
I believe it is this. r4594 [FreeBSD 11.2-STABLE amd64]
Right. You know that version has been EoL for quite some time? The entire 11 branch has been end-of-life since September 2021.
 
Product: RAID 5
So, this is a pool that's on a hardware RAID 5? I hope you have some backups. There's no data redundancy within the ZFS pool, so there's nothing ZFS can do to 'fix' the issues.
 
Yeah, running ZFS on top of hardware RAID is a bad idea.

DISCLAIMER: don't do this:
Check the -F and -X options in the zpool import man page for your version.
DISCLAIMER: as above!
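If it comes to that, a last-resort sketch (assuming the pool name from the output above; -n only does a dry run, and a read-only import avoids writing anything further to the pool):
Code:
# dry run: would discarding the last few transactions make the pool importable?
zpool import -F -n Bigarray
# if that looks promising, try a read-only rewind import first
zpool import -o readonly=on -F Bigarray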
 
So, this is a pool that's on a hardware RAID 5? I hope you have some backups. There's no data redundancy within the ZFS pool, so there's nothing ZFS can do to 'fix' the issues.
Unfortunately, I am at a loss here. I guess I will just take my loss and learn from this.
 
If you have an HP ciss(4) controller you can run smartctl as
smartctl -d cciss,$i /dev/ciss0 -a
Replace $i with the actual drive number (probably 0, 1, 2, 3, ...).
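For example, to query a few drives in one go (a sketch assuming the drives sit behind /dev/ciss0 and are numbered from 0):
Code:
for i in 0 1 2 3; do
    smartctl -d cciss,$i /dev/ciss0 -a
done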
 
and learn from this.
The lesson to learn here is that it's generally a bad idea to put ZFS on a hardware RAID. The best configuration is to let ZFS handle the individual disks and use an HBA or (some RAID cards allow this) a JBOD configuration. If this was configured with a JBOD/HBA and ZFS's own RAID-Z, then you could leverage not only the error detection of ZFS but also its error correction.
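For example, with the disks exposed individually it would look something like this (hypothetical FreeBSD device names; raidz2 keeps the pool intact through two simultaneous disk failures):
Code:
zpool create Bigarray raidz2 da0 da1 da2 da3 da4 da5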
 
If the controller was used in RAID mode then you should be able to get semi-useful information about the array from the controller's BIOS screen.

We still don't know why this thing is unavailable after only one known failed disk.
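A quick way to confirm what the OS actually sees (assuming this is still the FreeBSD box):
Code:
camcontrol devlist
glabel status
The gptid from the pool config above should appear in the glabel output; if it doesn't, the device node really is missing.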
 