[FreeNAS] Need help restoring a zpool from existing drives

Greetings everyone,

My situation is this:
  1. I'm running FreeNAS with two RAID-Z2 volumes.
  2. I found one day that I was not able to access any of my volumes.
  3. When I looked at the monitor attached to the server it said there was a kernel panic.
  4. I rebooted the server and a message hung at "loading operating system".
  5. I discovered that the USB drive was completely dead.
  6. I installed a fresh copy of FreeNAS 9.1.1 on a new USB drive.
  7. Auto-Import was able to find one of my volumes but not the second.

I've also posted on the FreeNAS forums and am awaiting responses there.

After combing through the forums I've seen scenarios where people have destroyed zpools, had one corrupt disk, had degraded or unavailable zpools, and so on, but my situation doesn't match any of those.

The drives, as far as I can tell, are in perfect working order. The data should be intact.

The problem (I think) is that FreeNAS Auto-Import can't find the zpool that was used to create the ZFS volume across the second set of drives (six in total). The zpool also does not appear when I run # zpool list.

So my question is: can anyone tell me if there is a way to recover a zpool from a known healthy set of drives? Or is it possible to create a zpool "place holder" and manually attach drives to it?

As for my technical expertise: I'm quite comfortable poking around inside Linux directory structures, installing packages, and configuring services, but I'm still very much a novice at troubleshooting and very new to ZFS and FreeNAS. All this is to say I'm willing and able to follow any set of instructions to get my ZFS volume back again.

I know threads like this one have appeared in many flavours on many forums, but I'm really hoping that this situation is different and recoverable.

Thanks for reading my thread, all suggestions and feedback are welcome.

~ R

For reference here is some output from my system:

Code:
[root@freenas] /mnt# camcontrol devlist
<WDC WD20EARS-22MVWB0 51.0AB51>    at scbus0 target 0 lun 0 (ada0,pass0)
<ST3000DM001-1CH166 CC24>          at scbus1 target 0 lun 0 (ada1,pass1)
<WDC WD20EARS-22MVWB0 51.0AB51>    at scbus2 target 0 lun 0 (ada2,pass2)
<WDC WD20EARS-00MVWB0 50.0AB50>    at scbus3 target 0 lun 0 (ada3,pass3)
<Marvell Console 1.01>            at scbus7 target 0 lun 0 (pass4)
<ST3000DM001-1CH166 CC24>          at scbus8 target 0 lun 0 (ada4,pass5)
<ST3000DM001-1CH166 CC26>          at scbus10 target 0 lun 0 (ada5,pass6)
<Marvell Console 1.01>            at scbus15 target 0 lun 0 (pass7)
<SAMSUNG HD204UI 1AQ10001>        at scbus16 target 0 lun 0 (ada6,pass8)
<ST3000DM001-1CH166 CC24>          at scbus17 target 0 lun 0 (ada7,pass9)
<ST3000DM001-1CH166 CC24>          at scbus18 target 0 lun 0 (ada8,pass10)
<ST3000DM001-1CH166 CC26>          at scbus19 target 0 lun 0 (ada9,pass11)
<WDC WD20EARS-00J2GB0 80.00A80>    at scbus20 target 0 lun 0 (ada10,pass12)
<WDC WD10EALX-009BA0 15.01H15>    at scbus22 target 0 lun 0 (ada11,pass13)
< Patriot Memory PMAP>            at scbus30 target 0 lun 0 (pass14,da0)

Code:
[root@freenas] /mnt# gpart show
=>        34  3907029101  ada0  GPT  (1.8T)
          34          94        - free -  (47k)
        128    4194304    1  freebsd-swap  (2.0G)
    4194432  3902834703    2  freebsd-zfs  (1.8T)
 
=>        34  5860533101  ada1  GPT  (2.7T)
          34          94        - free -  (47k)
        128    4194304    1  freebsd-swap  (2.0G)
    4194432  5856338696    2  freebsd-zfs  (2.7T)
  5860533128          7        - free -  (3.5k)
 
=>        34  3907029101  ada2  GPT  (1.8T)
          34          94        - free -  (47k)
        128    4194304    1  freebsd-swap  (2.0G)
    4194432  3902834703    2  freebsd-zfs  (1.8T)
 
=>        34  3907029101  ada3  GPT  (1.8T)
          34          94        - free -  (47k)
        128    4194304    1  freebsd-swap  (2.0G)
    4194432  3902834703    2  freebsd-zfs  (1.8T)
 
=>        34  5860533101  ada4  GPT  (2.7T)
          34          94        - free -  (47k)
        128    4194304    1  freebsd-swap  (2.0G)
    4194432  5856338696    2  freebsd-zfs  (2.7T)
  5860533128          7        - free -  (3.5k)
 
=>        34  5860533101  ada5  GPT  (2.7T)
          34          94        - free -  (47k)
        128    4194304    1  freebsd-swap  (2.0G)
    4194432  5856338696    2  freebsd-zfs  (2.7T)
  5860533128          7        - free -  (3.5k)
 
=>        34  3907029101  ada6  GPT  (1.8T)
          34          94        - free -  (47k)
        128    4194304    1  freebsd-swap  (2.0G)
    4194432  3902834703    2  freebsd-zfs  (1.8T)
 
=>        34  5860533101  ada7  GPT  (2.7T)
          34          94        - free -  (47k)
        128    4194304    1  freebsd-swap  (2.0G)
    4194432  5856338696    2  freebsd-zfs  (2.7T)
  5860533128          7        - free -  (3.5k)
 
=>        34  5860533101  ada8  GPT  (2.7T)
          34          94        - free -  (47k)
        128    4194304    1  freebsd-swap  (2.0G)
    4194432  5856338696    2  freebsd-zfs  (2.7T)
  5860533128          7        - free -  (3.5k)
 
=>        34  5860533101  ada9  GPT  (2.7T)
          34          94        - free -  (47k)
        128    4194304    1  freebsd-swap  (2.0G)
    4194432  5856338696    2  freebsd-zfs  (2.7T)
  5860533128          7        - free -  (3.5k)
 
=>        34  3907029101  ada10  GPT  (1.8T)
          34          94        - free -  (47k)
        128    4194304      1  freebsd-swap  (2.0G)
    4194432  3902834703      2  freebsd-zfs  (1.8T)
 
=>        34  1953525101  ada11  GPT  (931G)
          34          94        - free -  (47k)
        128    4194304      1  freebsd-swap  (2.0G)
    4194432  1949330703      2  freebsd-ufs  (929G)
 
=>      63  15515585  da0  MBR  (7.4G)
        63  1930257    1  freebsd  [active]  (942M)
  1930320        63      - free -  (31k)
  1930383  1930257    2  freebsd  (942M)
  3860640      3024    3  freebsd  (1.5M)
  3863664    41328    4  freebsd  (20M)
  3904992  11610656      - free -  (5.5G)
 
=>      0  1930257  da0s1  BSD  (942M)
        0      16        - free -  (8.0k)
      16  1930241      1  !0  (942M)

Code:
[root@freenas] /mnt# glabel status
                                      Name  Status  Components
gptid/b011fd83-c0a2-11e0-897e-00e018e48620    N/A  ada0p2
gptid/862bdfcf-c9a0-11e2-bad5-50e549c7ad3f    N/A  ada1p2
gptid/ae94e17b-c0a2-11e0-897e-00e018e48620    N/A  ada2p2
gptid/af78266b-c0a2-11e0-897e-00e018e48620    N/A  ada3p2
gptid/879db021-c9a0-11e2-bad5-50e549c7ad3f    N/A  ada4p2
gptid/880590f8-c9a0-11e2-bad5-50e549c7ad3f    N/A  ada5p1
gptid/881d609c-c9a0-11e2-bad5-50e549c7ad3f    N/A  ada5p2
                            gpt/swap-ada4    N/A  ada6p1
gptid/b082960b-c0a2-11e0-897e-00e018e48620    N/A  ada6p1
gptid/b08e3a3a-c0a2-11e0-897e-00e018e48620    N/A  ada6p2
gptid/8707ecde-c9a0-11e2-bad5-50e549c7ad3f    N/A  ada7p1
gptid/872106d2-c9a0-11e2-bad5-50e549c7ad3f    N/A  ada7p2
gptid/868caa9f-c9a0-11e2-bad5-50e549c7ad3f    N/A  ada8p1
gptid/86a578d5-c9a0-11e2-bad5-50e549c7ad3f    N/A  ada8p2
gptid/8595363a-c9a0-11e2-bad5-50e549c7ad3f    N/A  ada9p1
gptid/85b02854-c9a0-11e2-bad5-50e549c7ad3f    N/A  ada9p2
                            gpt/swap-ada0    N/A  ada10p1
gptid/adced8e5-c0a2-11e0-897e-00e018e48620    N/A  ada10p1
gptid/ade611e2-c0a2-11e0-897e-00e018e48620    N/A  ada10p2
gptid/7463b438-dda3-11e2-b371-50e549c7ad3f    N/A  ada11p1
                    ufsid/51c9a8f78ba54025    N/A  ada11p2
                            ufs/Downloads    N/A  ada11p2
gptid/74720125-dda3-11e2-b371-50e549c7ad3f    N/A  ada11p2
                            ufs/FreeNASs3    N/A  da0s3
                            ufs/FreeNASs4    N/A  da0s4
                            ufs/FreeNASs1a    N/A  da0s1a
 
First: be careful about confusing FreeBSD with Linux. While many things are similar, many are different, and it can be dangerous to assume they work the same.
 
@wblock@ - I do understand that FreeBSD is not the same as Linux; I just wanted to let people know I'm not afraid to get my hands a little dirty :)

@fonz - I understand this is a FreeBSD forum; I'm hoping the FreeNAS community can come through for me, but if I get good advice from the folks here I won't turn it away.

Cheers.
 
Hi @J65nko,

That's just the problem: that command returns nothing now; it just drops back to the command prompt.
 
I was doing some more research tonight and it seems that the ZFS drives themselves should contain metadata describing the zpool they belong to, which should allow the pool to be reassembled on import.

Clearly this was the case for the first volume which was automatically imported, but not so for the second volume. Perhaps there's a way to repair the metadata on the drives?

Just thinking out loud here...
 
rhing said:
Or is it possible to create a zpool "place holder" and manually attach drives to it?

I would not recommend attempting that unless you have backups.

zpool import -c could be a thing to try if you have access to your original zpool.cache file and haven't moved any drives around, but I'm assuming that is on your dead USB drive (it lives in /boot/zfs).
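
If the old stick (or at least that file) can ever be read again, the command would look roughly like this. This is only a sketch: the path is wherever you end up copying the recovered cache file to, and the pool name is whatever the missing volume was called.
Code:
# zpool import -c /path/to/recovered/zpool.cache yourpoolname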
 
@bthomson Thanks for the info; along those lines I'm getting the impression from other forums and posts that my only chance to get my ZFS volume back is to somehow recover the data from my USB drive.

I'll take it into a shop and see if they can pull the memory chip and read it. If I'm lucky the only thing that failed was the NAND controller and the flash memory itself is still intact.
 
rhing said:
I installed a fresh copy of FreeNAS 9.1.1 on a a new USB drive.

Is that the same version of FreeNAS that you were running before?

rhing said:
So my question is can any one tell me if there is a way to recover a zpool from a known healthy set of drives? Or is it possible to create a zpool "place holder" and manually attach drives to it?

It isn't known that the drives are healthy. The placeholder idea would destroy any existing pool on the drives.

Have you run any other commands that you neglected to mention?

rhing said:
For reference here is some output from my system:

Which drives belong to which pools?

# zpool status

will show the drives that are part of the imported pool. What is the deal with the GPT labels on ada6p1 and ada10p1? It would also be instructive to see a label from a disk in the missing pool.

# zdb -l /dev/adaXp2

Change adaX to the appropriate disk. Only post the final label, but check that the labels are self-consistent with each other.
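
As a rough sketch of how you might compare them (the disk names here are placeholders; substitute whichever adaX devices belong to the missing pool):
Code:
# zdb -l /dev/ada1p2 | grep -E 'name|guid|state|txg'
# zdb -l /dev/ada4p2 | grep -E 'name|guid|state|txg'
(repeat for each of the six disks in the missing pool)

The pool name and pool_guid should be identical on all six disks, and the txg values should match, or at least be very close, if the labels are intact.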
 
According to zpool(8), you can use either a cachefile or a directory specification as a hint:
Code:
zpool import [-d dir | -c cachefile] [-D]

	 Lists pools available to import. If the -d option is not specified,
	 this command searches for devices in "/dev".  The -d option can be
	 specified multiple times, and all directories are searched. If the
	 device appears to be part of an exported pool, this command displays
	 a summary of the pool with the name of the pool, a numeric identi-
	 fier, as well as the vdev layout and current health of the device for
	 each device or file.  Destroyed pools, pools that were previously
	 destroyed with the "zpool destroy" command, are not listed unless the
	 -D option is specified.

	 The numeric identifier is unique, and can be used instead of the pool
	 name when multiple exported pools of the same name are available.

	 -c cachefile
		 Reads configuration from the given cachefile that was created
		 with the "cachefile" pool property. This cachefile is used
		 instead of searching for devices.

	 -d dir  Searches for devices or files in dir.	The -d option can be
		 specified multiple times.

	 -D	 Lists destroyed pools only.

Code:
[cmd=#]zpool import[/cmd]
   pool: ypool
     id: 5414113771999807301
  state: ONLINE
 action: The pool can be imported using its name or numeric identifier.
 config:

        ypool       ONLINE
          raidz1-0  ONLINE
            md1p2   ONLINE
            md2p2   ONLINE
            md3p2   ONLINE
Although the pool was created with labelled partitions, the listing shows the raw partition device names.
The labels:
Code:
[cmd]ls -l /dev/gpt[/cmd]
crw-r-----  1 root  operator  0xa6 Sep 20 19:43 mdboot1
crw-r-----  1 root  operator  0xac Sep 20 19:43 mdboot2
crw-r-----  1 root  operator  0xb2 Sep 20 19:43 mdboot3
crw-r-----  1 root  operator  0xa4 Sep 20 19:43 mdisk_1
crw-r-----  1 root  operator  0xaa Sep 20 19:43 mdisk_2
crw-r-----  1 root  operator  0xb0 Sep 20 19:43 mdisk_3
To use these labels, I specify the label directory with the -d option:
Code:
[cmd=#]zpool import -d /dev/gpt[/cmd]
   pool: ypool
     id: 5414113771999807301
  state: ONLINE
 action: The pool can be imported using its name or numeric identifier.
 config:

        ypool            ONLINE
          raidz1-0       ONLINE
            gpt/mdisk_1  ONLINE
            gpt/mdisk_2  ONLINE
            gpt/mdisk_3  ONLINE
You could also try the -D option to check whether the missing pool was accidentally destroyed.
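
Since the glabel status output earlier in the thread shows the FreeNAS partitions under gptid labels rather than plain gpt labels, the equivalent searches on that box would presumably be something like this (an untested sketch; neither command imports anything by itself, they only list what ZFS can find):
Code:
# zpool import -d /dev/gptid
# zpool import -D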
 