ZFS NAS4Free Lost Pool recovery

Hello All, I hope this is the correct spot for this. I first sought help at the nas4free forum as per a requirement I read somewhere on here, http://forums.nas4free.org/viewtopic.php?f=57&t=9631

The issue I am having: can I somehow get a ZFS pool working when it says the pool is unknown? In the nas4free GUI I can see the pool and my drives, none of which have gone bad. I did, however, screw up while trying to add two new drives to increase the pool size, and that is what led me down this path of recovery. I have also tried zeroing the two new drives I added, but that did nothing.

As I am typing this, I am installing FreeBSD 10.2-STABLE.

What I would like to be able to do, is recover the data from the pool.

Attached is the current state.

Thank you in advance for any help that could be offered.
 

Attachments

  • storage3.JPG (34.8 KB)
Seeing the output of zpool import in base FreeBSD (this is possible from the Live CD) would be useful.

Screwing up adding new disks and then zeroing them sounds worrying. If you used zpool add, the drives actually became part of the pool, and you then zeroed them, the pool will be unusable. The output of the import command should clear up exactly what state the pool is in.
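For reference, booted from a stock FreeBSD Live CD, something along these lines should show what state the pool is in (the device directory is only needed if the default scan finds nothing):

Code:
# List all pools visible on attached disks, without importing anything
zpool import

# If nothing shows up, point the scan at a specific device directory
zpool import -d /dev

The first form is read-only as far as the pool is concerned, so it is safe to run repeatedly while troubleshooting.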
 
Code:
  pool: Pool2SAN
    id: 4459461971523339288
 state: UNAVAIL
status: One or more devices are missing from the system.
action: The pool cannot be imported. Attach the missing devices and try again.
config:

	Pool2SAN                         UNAVAIL  missing device
	  mirror-0                       ONLINE
	    diskid/DISK-WD-WCAT12730289  ONLINE
	    diskid/DISK-WD-WCAT10717389  ONLINE

	Additional devices are known to be part of this pool, though their
	exact configuration cannot be determined.
 
I left the FreeBSD zroot pool off the list, because it only came up as part of the FreeBSD setup.
 
Additional devices are known to be part of this pool, though their exact configuration cannot be determined.

A ZFS pool is made up of one or more 'top level vdevs'. Each top level vdev can be a mirror, raidz or a single disk. When you have more than one top level vdev, the data is striped across the vdevs similar to RAID0. As such you can't lose any top level vdev.
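To illustrate the difference, here is roughly what the two relevant commands look like; the device names are made up, and zpool add is the dangerous one because (at least in the ZFS of this era) a top level vdev can never be removed again:

Code:
# SAFE: attach a new disk to the existing mirror vdev,
# turning a 2-way mirror into a 3-way mirror
zpool attach Pool2SAN diskid/DISK-WD-WCAT12730289 ada4

# DANGEROUS: add the new disks as a second top level vdev;
# data is then striped across both vdevs, and the new vdev
# cannot be removed from the pool
zpool add Pool2SAN mirror ada4 ada5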

The message above says that there is at least one top level vdev in the pool that is completely missing.

I strongly suspect you've done what I described in the last message: added an additional top level vdev to the pool (using zpool add). Wiping that device (or devices) was a bad idea; if ZFS thinks a top level vdev is missing, it will flatly list the pool as unavailable and refuse to import it. As far as the official line goes, your pool is done for. Some data may be recoverable with zdb, but that is mainly a developer tool and has little to no documentation.
 
I will say that I did not zero it until well after this problem started. If it is gone, I am unhappy. I will try the zdb command. I can still FTP to the system and browse through folders.
 
I just tried zdb and it says
Code:
zdb
cannot open '/boot/zfs/zpool.cache': No such file or directory.
 
Obviously I can't comment on what happened up until the point we're at now. It's quite clear at the moment, though, that the pool has a functioning mirror, but additional disks have been added to the pool that are now missing. After adding the devices, but before wiping them, I have no reason to believe the pool wouldn't have been in a workable state unless the disks you added were faulty. Ideally we'd have booted stock FreeBSD and checked the output of zpool import before they were wiped. You can't beat seeing (and understanding) the basic pool status given by the zpool status and zpool import commands.

When I say zdb is a developer tool I really mean it. You won't be able to use it without help, and it's beyond me. It would require using that tool with various (mostly undocumented) options to pull raw ZFS records off the disk for some important files, then piecing those files back together by hand. If you had one or a couple of important files you wanted back and help from an expert that might be feasible, other than that probably not. I'm guessing you don't have a backup copy of the data? That would be far easier.
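For completeness: zdb normally reads /boot/zfs/zpool.cache (hence the error you got), but for a pool that was never imported on this install it can usually be told to scan the disks instead. A rough sketch, with no promise the options behave identically on your version:

Code:
# -e: operate on an exported pool (scan devices, ignore the cache file)
# -C: print the pool configuration as zdb sees it
zdb -e -C Pool2SAN

# Dump the on-disk ZFS labels of a single device,
# to see what pool and vdev that disk thinks it belongs to
zdb -l /dev/ada0

That at least gives read-only information; actually pulling file data back out is the expert-level part.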
 
No, unfortunately one of the downsides of ZFS is how big and complex it is. It knows there are missing disks, and that data 'could' be striped across them, so it can't guarantee that your data is intact and refuses to import, even though your original mirror is still there and you may not have actually written any data after adding the new disks.
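It probably won't help with a whole top level vdev gone, but a forced read-only import is harmless to try, something like (again, no promises):

Code:
# -f: force import even though the pool was last used on another system
# -o readonly=on: never write to the disks, so nothing can be made worse
zpool import -f -o readonly=on Pool2SAN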
 
Darn, I wish there was a file browser for it.

No such thing exists, because it's impossible to make heads or tails of damaged data structures that are potentially missing large parts of what would normally make up the filesystem metadata and the actual file contents. That's why recovering a damaged filesystem, any filesystem, can be a tough task. ZFS is a special case, being even harder to recover because it is also a volume manager, not just a simple filesystem.
 
That is the issue: I am not missing anything. I tried to grow the pool by adding two new drives in their own virtual device; it did not work, and I am left with a good mirror that is inaccessible. I am getting to the point where I may consider paying someone to do it. I messed this up, but I do not believe I will be trying ZFS again, because I set this up as a mirror to avoid losing data, and yet here I am.
 
It looks like I will be able to recover my data using some recovery software. My presumption is that it found the VM's NTFS partition. I will report back once complete.
 