[FreeNAS] Replacing a dead drive in zfs

It was taking forever to connect to my FreeNAS, but SMART never showed that any drive had an issue. Going by the sound of the drives, I picked one and replaced it via the GUI. It all seemed to go smoothly, but then I noticed the pool was still degraded. I next ran zpool status -v. Here is the output:

Code:
[root@tank] /boot# zpool status -v
  pool: Tank
 state: DEGRADED
status: One or more devices has experienced an error resulting in data
        corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
        entire pool from backup.
   see: http://www.sun.com/msg/ZFS-8000-8A
 scrub: none requested
config:

        NAME                                              STATE     READ WRITE CKSUM
        Tank                                              DEGRADED     0     0     0
          raidz1                                          DEGRADED     0     0     0
            gptid/a9cdc49f-0c39-11e2-9903-001676d6a98b    ONLINE       0     0     0
            gptid/aa64809a-0c39-11e2-9903-001676d6a98b    ONLINE       0     0     0
            replacing                                     DEGRADED     0     0     2
              gptid/aafd2bc1-0c39-11e2-9903-001676d6a98b  OFFLINE      0     0     0
              gptid/03f81b29-5549-11e2-baec-001676d6a98b  ONLINE       0     0     0
            gptid/ab929715-0c39-11e2-9903-001676d6a98b    ONLINE       0     0     0
            gptid/ac251f8d-0c39-11e2-9903-001676d6a98b    ONLINE       0     0     0

errors: Permanent errors have been detected in the following files:

This is on a FreeNAS 8.2 box with five 2 TB drives in a ZFS raidz. The replacement drive resilvered properly from what I can tell. After replacing this drive I find that ada3 is the real problem.
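
For reference, a suspect drive can also be checked from the shell; this is a generic sketch assuming smartmontools' smartctl is available and that ada3 is the device in question:

Code:
# smartctl -a /dev/ada3         # dump SMART attributes; watch Reallocated_Sector_Ct and Current_Pending_Sector
# smartctl -t short /dev/ada3   # start a short self-test, then re-check with -a once it finishes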

What is my next course of action? I need to get the pool healthy again, then take out drive ada3 and replace it with another I have here.

Thanks in advance.
 
Code:
# zpool detach Tank gptid/aafd2bc1-0c39-11e2-9903-001676d6a98b
# zpool scrub Tank

You may have to delete any files that it says are corrupt, as well as any snapshots that reference those files. (Then restore them from backup if required/possible.)
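
A rough sketch of that cleanup, with a hypothetical file and snapshot name (substitute whatever zpool status -v actually lists for your pool):

Code:
# rm /Tank/Videos/corrupt-file.avi            # hypothetical: remove a file reported as permanently damaged
# zfs list -t snapshot -r Tank                # list snapshots that may still reference the damaged blocks
# zfs destroy Tank/Videos@hypothetical-snap   # hypothetical: destroy a snapshot holding those blocks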
 
Code:
[root@tank] /boot# zpool detach Tank gptid/aafd2bc1-0c39-11e2-9903-001676d6a98b
cannot detach gptid/aafd2bc1-0c39-11e2-9903-001676d6a98b: no valid replicas
 
Interesting, it apparently refuses to detach a drive that is already offline, claiming there are no valid replicas...
ZFS used to have a problem with offlining/removing disks even if there was enough redundancy in place, although I thought this was fixed before 8.2-RELEASE.

I personally would recommend attempting to import and fix the pool on a standalone FreeBSD 8.3 install (or possibly even a FreeNAS 8.3 install).
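
A minimal sketch of the import on the fresh install (assuming the pool was not cleanly exported from the old system, which is why -f may be needed):

Code:
# zpool import              # with no arguments, lists pools available for import
# zpool import -f Tank      # force the import if the pool wasn't cleanly exported
# zpool status -v Tank      # verify pool state before attempting any repair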
 
I am down for anything at this point. If moving over to 8.3 to import and repair the pool is the way to go, I'm all ears. How difficult is it to import and fix? Should I just overwrite the 8.2 install on my USB stick with 8.3, or would you do an upgrade?

Thanks.
 
I would install a new copy of 8.3 to the USB stick.

Technically, you should just be able to import the pool in the normal way, although we don't know much about FreeNAS here; we use straight FreeBSD, hence the warning from DutchDaemon above.

Code:
# zpool import Tank
 
Code:
[root@freenas] ~# zpool import Tank
cannot mount '/Tank': failed to create mountpoint
cannot mount '/Tank/FTP': failed to create mountpoint
cannot mount '/Tank/Music': failed to create mountpoint
cannot mount '/Tank/Pictures': failed to create mountpoint
cannot mount '/Tank/Software': failed to create mountpoint
cannot mount '/Tank/Training': failed to create mountpoint
cannot mount '/Tank/VideoTest': failed to create mountpoint
cannot mount '/Tank/Videos': failed to create mountpoint
[root@freenas] ~#
 
This is exactly why any advice given here should not be considered accurate or reliable. The zpool import should have worked if you were on stock FreeBSD, where the root filesystem would have been read/write. On FreeNAS such assumptions may not hold true.
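
To illustrate: the mountpoint creation fails because / is not writable on the appliance. A quick check, and one possible workaround (remounting the FreeNAS root read/write is at your own risk):

Code:
# mount                     # the line for / will show whether it is mounted read-only
# mount -uw /               # remount / read/write (use with care on an appliance like FreeNAS)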
 
Well, it looks to have imported the pool at least; it just can't create mountpoints for the file systems. You can use the -R option to zpool import to mount them somewhere writeable, for example:
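
Something along these lines should do it (a sketch; the pool must be exported first since it is already imported, and /mnt is assumed to be writable):

Code:
# zpool export Tank           # export so it can be re-imported with an alternate root
# zpool import -R /mnt Tank   # mount all of the pool's file systems under /mnt instead of /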

Anyway, what does a zpool status say at this point?
 