Solved: Did I screw up? raidz3 replacement drive ended up as da2 instead of da2p3

If anyone knows how to remedy this I would be grateful. I am sure it is something easy for you guys.

This is a remote machine in a datacenter on the other side of the US, so I have no way to physically be there. It is running FreeBSD 11.1; I will upgrade to 11.2 once this pool is corrected. It uses raidz3 as the root pool. This is an 8-slot SAS chassis with an LSI 9811-8i. There are no spare slots or drives inside.

Basically, a drive went unavailable. I removed it, replaced it with a new drive, and offlined the old one in the pool.
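The offline itself was the standard command, something along these lines (by device name here; it may have to be the vdev GUID if the device node is already gone):

Code:
# take the failed member out of service (device name illustrative)
zpool offline tank da2p3

After that, "zpool status" showed this.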

Code:
[root@fbsd1 ~]# zpool status
  pool: tank
 state: DEGRADED
status: One or more devices has been taken offline by the administrator.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Online the device using 'zpool online' or replace the device with
        'zpool replace'.
  scan: resilvered 1.37M in 0h0m with 0 errors on Thu Dec 27 10:55:44 2018
config:

        NAME                     STATE     READ WRITE CKSUM
        tank                     DEGRADED     0     0     0
          raidz3-0               DEGRADED     0     0     0
            da0p3                ONLINE       0     0     0
            da1p3                ONLINE       0     0     0
            XXXXXXXXXXXXXXXXXX69 OFFLINE      0     0     0  was /dev/da2p3
            da3p3                ONLINE       0     0     0
            da4p3                ONLINE       0     0     0
            da5p3                ONLINE       0     0     0
            da6p3                ONLINE       0     0     0
            da7p3                ONLINE       0     0     0

I copied over the partition information to the new drive.

gpart backup da0 | gpart restore -F da2

gpart show displays this.

Code:
...
=>       40  585937424  da7  GPT  (279G)
         40       1024    1  freebsd-boot  (512K)
       1064        984       - free -  (492K)
       2048    4194304    2  freebsd-swap  (2.0G)
    4196352  581740544    3  freebsd-zfs  (277G)
  585936896        568       - free -  (284K)

=>       40  585937416  da2  GPT  (279G)
         40       1024    1  freebsd-boot  (512K)
       1064        984       - free -  (492K)
       2048    4194304    2  freebsd-swap  (2.0G)
    4196352  581740544    3  freebsd-zfs  (277G)
  585936896        560       - free -  (280K)
...
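Since this is a root pool, the freebsd-boot partition on the replacement disk presumably needs the boot code written back as well; a sketch, assuming the standard GPT/ZFS boot blocks:

Code:
# reinstall the protective MBR and gptzfsboot into partition 1 of the new disk
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 da2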

I tried a "zpool online" but it didn't work. It said I had to use a replace. I did this.

zpool replace tank XXXXXXXXXXXXXXXXXX69 da2

It started to resilver and when it was done, "zpool status" showed this.

Code:
[root@fbsd1 ~]# zpool status
  pool: tank
 state: DEGRADED
status: One or more devices has been taken offline by the administrator.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Online the device using 'zpool online' or replace the device with
        'zpool replace'.
  scan: resilvered 1.37M in 0h0m with 0 errors on Thu Dec 27 10:55:44 2018
config:

        NAME                     STATE     READ WRITE CKSUM
        tank                     ONLINE       0     0     0
          raidz3-0               ONLINE       0     0     0
            da0p3                ONLINE       0     0     0
            da1p3                ONLINE       0     0     0
            da2                  ONLINE       0     0     0 
            da3p3                ONLINE       0     0     0
            da4p3                ONLINE       0     0     0
            da5p3                ONLINE       0     0     0
            da6p3                ONLINE       0     0     0
            da7p3                ONLINE       0     0     0

When I do a gpart show, da2 does not show up in the list. I felt this config was wrong, so I offlined da2 and did a "gpart destroy -F da2".
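In other words:

Code:
# take the whole-disk vdev offline, then wipe the partition table I had restored earlier
zpool offline tank da2
gpart destroy -F da2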

I then redid the partition copy and tried to add the drive properly (or what I thought was properly).

zpool replace tank 9163403643081195644 da2p3

It responds.

Code:
[root@fbsd1 ~]# zpool replace tank 9163403643081195644 da2p3
invalid vdev specification
use '-f' to override the following errors:
/dev/da2p3 is part of active pool 'tank'

Using the override switch.

zpool replace -f tank 9163403643081195644 da2p3

Code:
[root@fbsd1 ~]# zpool replace -f tank 9163403643081195644 da2p3
invalid vdev specification
the following errors must be manually repaired:
/dev/da2p3 is part of active pool 'tank'

Here is where I am stuck. I tried to re-zero the entire drive, as I thought there could be leftover ZFS metadata on it or something, but that didn't work.
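The re-zero was nothing fancy, roughly this (illustrative only; on a 279G disk it takes a while):

Code:
# zero the whole disk from the beginning (sketch of the attempt)
dd if=/dev/zero of=/dev/da2 bs=1m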

If anyone knows how I can get this new drive set up the way it is supposed to be, I would appreciate it. I could have the drive swapped for yet another drive, but I don't believe that should be necessary; there has to be a way to use the new drive that is already in there.

Thanks again all.
 
I think where you went wrong is
Code:
zpool replace tank  XXXXXXXXXXXXXXXXXX69 da2

That takes the whole disk, destroying the partitioning.
What to do? Remove the disk from the pool, repartition it, and add it back as a partition.
Disclaimer: I did not try this, it is just an idea.
Code:
zpool remove tank da2
# pool now becomes degraded
gpart backup da0 | gpart restore -F da2
# partitioning redone, you might want to check this
gpart show da2
# if all's well add the partition
zpool replace tank 9163403643081195644 da2p3
# pool will start resilver
 
I think remove is for cache devices or hot spares, and detach is used for mirrors, I believe. This is a raidz3. Maybe I will try it on a test machine.
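Something like this is what I would try on the test box first, using offline instead of remove (untested sketch, same device names as above):

Code:
# offline the whole-disk vdev instead of removing it (raidz members cannot be removed)
zpool offline tank da2
# wipe and redo the partitioning on that disk
gpart destroy -F da2
gpart backup da0 | gpart restore -F da2
# replace the offlined whole-disk vdev with the zfs partition
zpool replace tank da2 da2p3
# (this may still complain that da2p3 is part of an active pool until the old labels are cleared)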
 
OK, I got it to work! I had to clear the last bit of ZFS metadata off the drive (at the end of it).

I used:

Code:
# zero the last 4 MB of da2, which is where the leftover ZFS labels from the whole-disk vdev were sitting
dd if=/dev/zero of=/dev/da2 bs=1m oseek=`diskinfo da2 | awk '{print int($3 / (1024*1024)) - 4;}'`

I found this tucked into a FreeNAS thread. Thanks so much everyone!
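For anyone hitting the same thing: ZFS writes two of its four label copies at the end of a vdev device, which is why zeroing just the tail was enough to make the "part of active pool" complaint go away. The other two labels live at the front of the device, so if the front had not already been overwritten, it would presumably need the same treatment before redoing the partitioning, e.g.:

Code:
# zero the first 4 MB as well, where the other two ZFS labels live (do this before re-running gpart restore)
dd if=/dev/zero of=/dev/da2 bs=1m count=4

zpool labelclear is apparently the more targeted tool for this, but I did not try it here.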
 