[Solved] Misconfigured zpool

Hi,
I made a major error here and can’t find the solution. I have a misconfigured zpool: I accidentally added a drive in parallel with the existing raidz2 vdev and now I can’t remove it.

How can I fix this?

Code:
        NAME                        STATE     READ WRITE CKSUM
        Tank                        DEGRADED     0     0     0
          raidz2-0                  DEGRADED     0     0     0
            gpt/disk0               ONLINE       0     0     0
            gpt/disk6               ONLINE       0     0    43
            gpt/disk3               ONLINE       0     0     0
            gpt/disk4               ONLINE       0     0     0
            gpt/disk5               ONLINE       0     0     0
          gpt/disk8                 ONLINE       0     0     0
 
Explain "can't remove it". Which command did you use? What happens when you try? What is the error message?

In theory it should be removable. What you have here is a mirror, with one side of the mirror being a single disk gpt/disk8, and the other side being a RAIDZ2 of 5 disks. But there is a problem with that theory: the RAIDZ2 half of the mirror is degraded, and I can't see a reason for that. Can you give us some background on what happened to it, and why it might be degraded? I don't know whether you can remove one half of a mirror if the other half is degraded.

I also wonder why one of the disks has 43 checksum errors. That's kind of suspicious. Anything about that in the log files, or an idea what might have caused it?
 
Hi,
I made a major error here and can’t find the solution. I have a misconfigured zpool: I accidentally added a drive in parallel with the existing raidz2 vdev and now I can’t remove it.

How can I fix this?

I fear, not at all. If you add a disk in such a way, ZFS will happily treat the pool as having grown bigger and start distributing data to both vdevs. So, unless I missed some innovation, the solution is: recreate the pool and restore from backup.
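To spell out what that means in commands (the disk names are taken from the status output above; the exact command that was run is an assumption on my part):

Code:
        # Roughly what adds a lone disk as a SECOND top-level vdev, striped
        # next to raidz2-0. zpool normally complains about the mismatched
        # replication level, so -f was probably needed:
        zpool add -f Tank gpt/disk8

Once that has happened, new writes get distributed across both vdevs, so the lone disk cannot simply be pulled back out.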

In theory it should be removable. What you have here is a mirror
Not really. :(
 
Darn it! Thank you PMc, you are right. ZFS took the new disk and simply added it to the pool as another non-redundant disk. If it had been mirrored, the output of "zpool status" would have had a line for "mirror" in it. Sorry about being asleep.
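For comparison, an actual mirror shows up in zpool status as its own line, roughly like this (gpt/disk9 is a hypothetical second disk):

Code:
          mirror-1                  ONLINE       0     0     0
            gpt/disk8               ONLINE       0     0     0
            gpt/disk9               ONLINE       0     0     0

There is no such line here, just the bare gpt/disk8 sitting next to raidz2-0.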
 
You could try using zpool remove: zpool(8)
Removes the specified device from the pool. This command currently
only supports removing hot spares, cache, log devices and mirrored
top-level vdevs (mirror of leaf devices); but not raidz.

I myself have not yet successfully used the command; when I tried zpool remove it did not work for me, although the man page says it should work for "leaf devices", which I believe is your case.
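For the record, the attempt would look something like this; on ZFS versions that cannot remove ordinary top-level vdevs, the command simply refuses with an error instead of doing anything:

Code:
        # Try to remove the stray top-level vdev
        zpool remove Tank gpt/disk8

        # See whether the layout actually changed
        zpool status Tank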

If zpool remove does not work, then you're out of luck. You should back up and recreate the pool, then restore the data from the backup.
Trying to reverse the "add" command is major hacking and very risky, unless you are a ZFS guru (but judging by your posting here, you're probably not).
 
One must learn the hard way I guess.... This is the only way I saw as well; re-create the pool.
In the meantime I will mirror the drive so I get some redundancy until I can take the system down.
I always thought that mistakes could be easily fixed but I was wrong.... I really should have re-read my command before issuing it.

The checksum errors (CKSUM) are due to repeated attempts to get the pool online while resilvering. They will be corrected during a scrub, which I will force.
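In commands, the interim plan would look something like this (gpt/disk9 stands in for whichever spare disk gets used):

Code:
        # Attach a second disk so the lone vdev becomes a mirror
        zpool attach Tank gpt/disk8 gpt/disk9

        # After the resilver completes, force the scrub
        zpool scrub Tank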
 
One must learn the hard way I guess.... This is the only way I saw as well; re-create the pool.
In the meantime I will mirror the drive so I get some redundancy until I can take the system down.
I always thought that mistakes could be easily fixed but I was wrong.... I really should have re-read my command before issuing it.
Yeah. Most commands don't do irreversible harm, but when you're playing with zpool, definitely type slowly and read twice!
Destroying datasets also cannot be reversed, so definitely be careful there too.

The checksum errors (CKSUM) are due to repeated attempts to get the pool online while resilvering. They will be corrected during a scrub, which I will force.
It's probably a bad idea to do a scrub in this state. First, a scrub would be unnecessary, because the data gets checked anyway when you copy it to the new ZFS pool. Second, your pool is striped across the second vdev, which has no redundancy whatsoever. This means that if gpt/disk8 goes belly up, your whole pool is lost.
In your shoes I would do the following:
  1. Create a second ZFS pool big enough to receive all the data (either a backup drive or your next production devices). Consider the necessary redundancy level, for example mirror it or use RAIDZ.
  2. Do a zfs send | zfs receive to copy your ZFS datasets to the second pool (a sketch follows below this list).
  3. Either use the second pool in production, or recreate the pool on the original drives and send the datasets back.
Tinkering with the pool in its current state is risky, because the second vdev is not redundant.
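A rough sketch of steps 1 and 2, assuming the new pool is called Tank2 and using placeholder GPT labels for the new disks:

Code:
        # Step 1: create the target pool with some redundancy
        zpool create Tank2 mirror gpt/new0 gpt/new1

        # Step 2: snapshot everything recursively and send it over;
        # -F lets the stream overwrite the freshly created, still-empty Tank2 root
        zfs snapshot -r Tank@migrate
        zfs send -R Tank@migrate | zfs receive -Fdu Tank2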
 
The drive has been mirrored.
I intend to transfer the data to another mirror before re-creating the raid, which should be fine. We do not have that much data but, like any data, it is vital.
 
Then make sure all vdevs have enough redundancy (gpt/disk8 especially needs the mirroring), and you should be just fine.
 
Just to close this thread: I have rebuilt the array to correct the problem. My misconfigured Z2 array is back to being a Z2 array.
 