Help with replacing a dead drive in ZFS

I had a drive go bad in my pool. I replaced the drive and resilvered, but I'm not sure what's going on now.
It still says I'm in a degraded state. Can anyone make sense of this zpool status? I'm stumped.

Code:
[root@freenas] ~# zpool status
  pool: tank
 state: DEGRADED
  scan: resilvered 12.2G in 1h26m with 0 errors on Tue Jan 29 18:26:05 2013
config:

	NAME                                              STATE     READ WRITE CKSUM
	tank                                              DEGRADED     0     0     0
	  raidz1-0                                        ONLINE       0     0     0
	    da0                                           ONLINE       0     0     0
	    da1                                           ONLINE       0     0     0
	    da2                                           ONLINE       0     0     0
	    da3                                           ONLINE       0     0     0
	    da4                                           ONLINE       0     0     0
	    da5                                           ONLINE       0     0     0
	    da6                                           ONLINE       0     0     0
	  raidz1-1                                        DEGRADED     0     0     0
	    da7                                           ONLINE       0     0     0
	    da8                                           ONLINE       0     0     0
	    da9                                           ONLINE       0     0     0
	    replacing-3                                   DEGRADED     0     0     0
	      spare-0                                     DEGRADED     0     0     0
	        9518521840593257708                       OFFLINE      0     0     0  was /dev/da10
	        da14                                      ONLINE       0     0     0
	      gptid/1d76c1da-6996-11e2-9521-003048d690a0  ONLINE       0     0     0
	    da11                                          ONLINE       0     0     0
	    da12                                          ONLINE       0     0     0
	    da13                                          ONLINE       0     0     0
	spares
	  4568280119287525875                             INUSE     was /dev/dsk/da14

errors: No known data errors
 
ZFS has a habit of leaving replaced drives stuck in the pool, never coming out of DEGRADED even after the resilver completes. I don't know if this is FreeBSD-specific or a general ZFS problem; I thought it had been sorted.

Try removing that offline device first with detach, then see where that gets you.

Code:
# zpool detach tank 9518521840593257708
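
If the spare still shows as INUSE after that, you may also need to detach it so it returns to the spares list. That part is a guess on my end; da14 is just the name from your status output:

Code:
# zpool detach tank da14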
 
Awesome! That did the trick. Thanks!
Code:
[root@freenas] ~# zpool status
  pool: tank
 state: ONLINE
  scan: resilvered 12.2G in 1h26m with 0 errors on Tue Jan 29 18:26:05 2013
config:

	NAME                                            STATE     READ WRITE CKSUM
	tank                                            ONLINE       0     0     0
	  raidz1-0                                      ONLINE       0     0     0
	    da0                                         ONLINE       0     0     0
	    da1                                         ONLINE       0     0     0
	    da2                                         ONLINE       0     0     0
	    da3                                         ONLINE       0     0     0
	    da4                                         ONLINE       0     0     0
	    da5                                         ONLINE       0     0     0
	    da6                                         ONLINE       0     0     0
	  raidz1-1                                      ONLINE       0     0     0
	    da7                                         ONLINE       0     0     0
	    da8                                         ONLINE       0     0     0
	    da9                                         ONLINE       0     0     0
	    gptid/1d76c1da-6996-11e2-9521-003048d690a0  ONLINE       0     0     0
	    da11                                        ONLINE       0     0     0
	    da12                                        ONLINE       0     0     0
	    da13                                        ONLINE       0     0     0

This is probably dumb and pretty minor, but is there a way to change
gptid/1d76c1da-6996-11e2-9521-003048d690a0
to
da10?
 
Welcome to the glorious, Linux-y world of FreeBSD disk device naming, where every release seems to add yet another way to access the same device.

Assuming there's nothing actively using your pool (the export will unmount all of its ZFS file systems), a quick export and import may do the trick.

Code:
# zpool export tank
# zpool import tank
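
While you're at it, glabel status will list the alternative names GEOM has generated for each disk (gptid and friends), which is handy for confirming which physical device that gptid actually maps to:

Code:
# glabel status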

If it's still adamant about using the gptid, you can explicitly turn off /dev/gptid/ devices by adding the following to /boot/loader.conf and rebooting.**

Code:
kern.geom.label.gptid.enable=0
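
You can verify the tunable took effect after the reboot; it defaults to 1, so it should read 0 once the loader.conf change is live:

Code:
# sysctl kern.geom.label.gptid.enable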

See http://freebsd.1045724.n5.nabble.com/ZFS-disk-names-and-gptid-under-8-2-RELEASE-td4029559.html, http://forums.freenas.org/archive/index.php/t-1072.html, http://forums.freenas.org/showthread.php?1075-Disable-gpt-gptid-labels

**Note: this may cause issues with the FreeNAS GUI, according to the last page linked above. It may be worth asking on the FreeNAS forums about this; we can't support that here.
 
I'm using the pool for CCTV cameras now, so exporting and importing isn't really an option. Oh well, thanks for the info!
 
nsdtech said:
This is probably dumb and pretty minor, but is there a way to change
gptid/1d76c1da-6996-11e2-9521-003048d690a0
to da10?

Add the following to /boot/loader.conf and reboot:
Code:
kern.geom.label.gptid.enable="0"                # Disable the auto-generated GPT UUIDs for disks
kern.geom.label.ufsid.enable="0"                # Disable the auto-generated UFS UUIDs for filesystems

If you can't reboot, you should be able to "zpool offline" the drive, and then "zpool replace" it with the correct device name. If that works, it will update the zpool.cache file, which should prevent this from happening on the next zpool import:
Code:
# zpool offline tank gptid/1d76c1da-6996-11e2-9521-003048d690a0
# zpool replace tank gptid/1d76c1da-6996-11e2-9521-003048d690a0 /dev/da10
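
If the replace is accepted, it should kick off a quick resilver onto the same disk; you can watch it finish and confirm the new device name with:

Code:
# zpool status tank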
 