ZFS RAIDZ2 pool FAULTED / UNAVAIL, cannot open

Hello

I tried a fresh install of TrueNAS SCALE (Debian-based) and imported the pool from TrueNAS CORE (FreeBSD-based), and it somehow broke the pool's disks.

It's a raidz2 configuration with 6 disks.

The 2 drives that show as UNAVAIL here in CORE (FreeBSD) ARE AVAILABLE when I boot into TrueNAS SCALE (Debian), but some of the others are not.
In CORE I get 4 online and 2 unavailable.
In SCALE I get 3 online, 1 unavailable and 2 with label problems.

Is there some way to bring the 2 unavailable drives back to life in CORE (FreeBSD)?

zpool import output on the FreeBSD-based OS:
root@truenas[~]# zpool import
pool: tank
id: 1482594721782283518
state: FAULTED
status: The pool was last accessed by another system.
action: The pool cannot be imported due to damaged devices or data.
The pool may be active on another system, but can be imported using
the '-f' flag.
see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-EY
config:
tank FAULTED corrupted data
raidz2-0 DEGRADED
disk/by-partuuid/c43b21c4-0805-11ec-8d17-74d4350b5e08 UNAVAIL cannot open
gptid/c4c0c266-0805-11ec-8d17-74d4350b5e08 ONLINE
gptid/c4f2ee7f-0805-11ec-8d17-74d4350b5e08 ONLINE
gptid/c4ce501d-0805-11ec-8d17-74d4350b5e08 ONLINE
disk/by-partuuid/c4fc1ca1-0805-11ec-8d17-74d4350b5e08 UNAVAIL cannot open
gptid/c50fa01f-0805-11ec-8d17-74d4350b5e08 ONLINE

thank you for your help!
 
The command below, which should give more info,
Code:
zpool status -vsP
doesn't even show the pool; only the boot pool shows up.

Code:
zpool import -f tank
gives: cannot import 'tank': I/O error

There are 4 online disks, and they aren't enough to import the pool.

Replacing a disk doesn't work, since the pool isn't imported.
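
Two import variants may still be worth trying before anything more drastic (a sketch, not a guaranteed fix; -o readonly=on, -F and -n are standard OpenZFS import options, and tank is the pool name from the output above):
Code:
# try a read-only import first; nothing gets written to the pool
zpool import -f -o readonly=on tank

# dry run of recovery mode: reports whether rolling back the last few
# transactions would make the pool importable, without changing anything
zpool import -fFn tank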
 
Do you suspect a hard drive error, i.e. a damaged device?
Or are you trying to import "a disk with a higher zfs version" into "an OS with a lower zfs version"?
 
I don't think it's a hardware error; I think it was a failed update/upgrade. I had TrueNAS CORE installed and then decided to try TrueNAS SCALE on another SSD to see how it worked. I imported the pool into SCALE and nothing worked after that.

SCALE, which runs on Debian, must have done something to some of the drives, because some are only accessible from Debian and some are only accessible from FreeBSD.

From my first post you can see 4 disks listed by gptid; those can be seen in FreeBSD, while the other two show up online in SCALE.

It's all weird, hehe.
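
One way to pin down the gptid/partuuid confusion is to look at the raw ZFS labels from each OS (a sketch; the device paths below are taken from the zpool import output above, and zdb -l only reads the labels, it changes nothing):
Code:
# On SCALE (Debian): map disks to their partition UUIDs
lsblk -o NAME,SIZE,PARTUUID

# dump the ZFS label of a member partition
zdb -l /dev/disk/by-partuuid/c43b21c4-0805-11ec-8d17-74d4350b5e08

# On CORE (FreeBSD): the equivalent views
glabel status
zdb -l /dev/gptid/c4c0c266-0805-11ec-8d17-74d4350b5e08

# point the import scan at a specific device directory
zpool import -d /dev/disk/by-partuuid    # on SCALE
zpool import -d /dev/gptid               # on CORE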
 
When switching disks between different operating systems, it can be a good idea to check the zpool/zfs versions.
For the version of zfs on the disk:
Code:
zpool get version myzpool
zfs get version myzpool
zpool get all myzpool | grep feature

For the version of zfs on the OS (both the source and the destination operating system):
Code:
zpool version

Obviously there is backward compatibility, but only limited forward compatibility.
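
For example, one way to compare what each OS supports against what the pool actually uses (a sketch; zpool upgrade -v and the feature@ properties are standard OpenZFS, and the first two commands should be run on both CORE and SCALE):
Code:
# feature flags the running OS supports
zpool upgrade -v

# userland and kernel module zfs versions on this OS
zpool version

# once the pool is importable somewhere: features it has enabled/active
zpool get all tank | grep feature@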
 
Debugging suggestion: Go back to the OS that works (the Debian-based one), and make sure all required disks are functioning. Then identify the disks. For example, write down "the physical disk that contains ZFS volume Adam is physically a Seagate model 12345, it is the leftmost one, and it has hardware serial number 98765. The next one contains volume Bob, is a Hitachi ABCDE with serial number 31415, and is in the external case with the orange activity light". And so on. Write down how many physical disks you should have, and exactly their models/serial numbers. If you have spare time, also write down their partition tables, in particular the sizes and types of all partitions.

Then reboot into the OS that does not work (the FreeBSD-based TrueNAS CORE), and look for the disks. Make sure you get the correct number of disks (in FreeBSD, I would look at the dmesg output, and then use "camcontrol identify" or "camcontrol inquiry" to identify them). Look at the partition tables. If you don't know which disk is which, check serial numbers and manufacturers.

If all disks are really there, then try reading them. Actually, try reading the partitions containing ZFS data: "dd if=/dev/daXpY bs=1048576 ...", and make sure you get reasonable throughput and no errors.

If you get this far and everything looks good, but ZFS refuses to cooperate, the problem is internal to ZFS. If you don't get this far, you have found a problem, and you are most of the way to diagnosing it.
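
A minimal sketch of those checks on the FreeBSD side (da0 and the partition index are placeholders; use whatever dmesg and gpart actually report for your disks):
Code:
# list the disks the kernel sees, with model and serial number
camcontrol devlist
geom disk list

# show partition tables; the ZFS member is normally the freebsd-zfs partition
gpart show
glabel status

# read-test a ZFS member partition (da0p2 is a placeholder); writes nothing
dd if=/dev/da0p2 of=/dev/null bs=1m count=1024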
 