Solved: ZFS has grabbed a raw disk

I have shot myself in the foot just now. My daily machine is running 12.0 on a zfs pool that has just one disk. It was originally installed with 11.1, and the installer partitioned the drive such that the zpool is on /dev/ada0p3. I chose 'zfs on root' during the install. Not sure about p1, but p2 is swap.

I picked up a couple SSDs to add to the mirror with one being a spare. To test I added one of them with
zpool add zroot /dev/ada1
and it added the entire disk, listing it as /dev/ada1, unlike /dev/ada0p3, the other half of the vdev. From the web it appears I cannot pull ada1 out of the mirror, and detach does indeed do nothing. What might be a fix here? I could resilver onto ada1, plug it in as ada0, and then add the old ada0 to the mirror. I'm mainly afraid of borking the original install setup and that p1-p2-p3 partition arrangement. I really don't want to reinstall.
 
You can certainly remove it from the mirror with zpool detach. It is no surprise it used the whole drive, given that is what you told it to do.

If you want to replicate your partitions on the new drive (likely what you want such that if ada0 dies, you have bootcode that will work off of ada1) you will need to create it with gpart(8). I believe you can use the backup and restore commands within gpart to “backup” the layout of ada0, and “restore” it to ada1. You will need to separately install the bootcode on ada1p1.
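A minimal sketch of that sequence, assuming ada1 is the new, blank disk and a BIOS-boot layout like the installer's default (the exact bootcode files depend on how the machine boots):

```shell
# Copy ada0's GPT layout onto ada1 (this destroys anything on ada1)
gpart backup ada0 | gpart restore -F ada1

# Write the protective MBR and the ZFS-aware gptzfsboot loader
# into partition 1 (the freebsd-boot partition) of ada1
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada1
```
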

Before doing anything, posting “zpool status -v” and “gpart show” outputs can help people give more complete suggestions.
 
Thank you for replying. Below is the output requested. I was just going to reinstall and be done with it, since it will not boot with ada0 unplugged; probably because ada1 has no bootcode. But I think I'll try to fix this instead; it will be educational.
zpool status -v

  pool: zroot
 state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
        still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(7) for details.
  scan: scrub repaired 0 in 0 days 00:06:37 with 0 errors on Mon Aug 13 20:42:57 2018
config:

        NAME          STATE     READ WRITE CKSUM
        zroot         ONLINE       0     0     0
          ada0p3.eli  ONLINE       0     0     0
          ada1        ONLINE       0     0     0

errors: No known data errors

gpart show

=>       40  488397088  ada0  GPT  (233G)
         40       1024     1  freebsd-boot  (512K)
       1064        984        - free -  (492K)
       2048    4194304     2  freebsd-swap  (2.0G)
    4196352  484200448     3  freebsd-zfs  (231G)
  488396800        328        - free -  (164K)
 
I was wondering if I had misread something, but I hadn't: you didn't create a mirror but a stripe, which could be tricky. Did you write any massive amounts of data onto this pool?

The problem is that the detach command is only valid for a mirror, and that's not what you have. What you could try is: # zpool remove zroot ada1.
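To make the distinction concrete, here is a sketch of the two commands against this pool (the exact wording of any error message varies by version):

```shell
# detach only applies to a member of a mirror (or a replacing vdev),
# so against the striped ada1 it should fail with an error
zpool detach zroot ada1

# remove targets a top-level vdev, which is what ada1 is here
zpool remove zroot ada1
```
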
 
You're probably right; that second disk makes it JBOD. Everything is backed up. Everything. When I'm done tonight I'll try the detach, but my enthusiasm is waning.
Thanks!
 
You're right. I didn't even notice that the entry for mirror-0 is missing from the output. I'll try the detach; I've written nothing to the system yet.
 
If you’re feeling adventurous, since you are on 12, you can upgrade your pool (to support the new feature) and then use zpool remove to remove the top-level vdev (stripe member).

[edit: typo]
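A sketch of that sequence, assuming the pool's feature flags are behind and the running zpool supports top-level vdev removal:

```shell
zpool upgrade zroot       # enable all supported features (one-way operation!)
zpool remove zroot ada1   # evacuate the data and remove the striped vdev
zpool status zroot        # shows the removal/evacuation progress
```
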
 
Not sure about p1 but p2 is swap.
The p1 is usually a freebsd-boot or an efi partition; which one you have depends on how the machine is booted, a traditional BIOS boot or a UEFI boot.

If you replace a disk from the boot mirror, don't forget to create the boot partitions. You need it to boot the system should the primary disk fail.
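Assuming a GPT/BIOS layout like the one shown above, replacing (or adding) a disk to the boot mirror might look like this sketch, with ada1 as the new disk. Note that this thread's pool member is actually ada0p3.eli, so the geli layer would need to be set up on the new partition first; that step is omitted here:

```shell
# Replicate the partition table, then install bootcode on the new disk
gpart backup ada0 | gpart restore -F ada1
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada1

# Attach the partition (not the raw disk!) to form the mirror
zpool attach zroot ada0p3 ada1p3
```
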
 
Thanks to everyone for their posts.

Since I bought two SSDs, I just put them both in and did a fresh install of 12.0. Everything is restored from backup and I'm pretty much 100%. This leaves me one SSD to experiment with.

I was having trouble getting the install process to boot into the install screen, because it kept running into the geli password business from my first try. This appeared to be impassable until I loaded a Slackware CD and dd'ed a bit of /dev/zero onto the SSDs. That did it.

Second, I tried too many times to do a 'BIOS+UEFI' install. They would complete but then not boot to a login screen. I guess my machine is not ready for the combination, though its BIOS does offer UEFI choices (HP 8200 Elite, Core i7-2600), so who knows. So I chose just BIOS in the installer, and all then went very well.

At boot I get the 'Failed to read Pad2 area of Primary vdev' 'error', but the web says that's noise, and it hasn't interfered with the machine's operation.

I'm just going to let everything settle in for a while. Next step is to do an install on the remaining SSD so it duplicates the partition layout of the current mirror, delete all the files on it, and set it aside with instructions to remember to add /dev/adaXp3 and not /dev/adaX. Or learn to create the boot partition manually, certainly the better choice. Exciting times. Thanks again for the education.

  pool: zroot
 state: ONLINE
  scan: none requested
config:

        NAME            STATE     READ WRITE CKSUM
        zroot           ONLINE       0     0     0
          mirror-0      ONLINE       0     0     0
            ada0p3.eli  ONLINE       0     0     0
            ada1p3.eli  ONLINE       0     0     0

errors: No known data errors
 
The p1 is usually a freebsd-boot or efi partition and which one you have depends on the way the machine is booted, a traditional BIOS boot or an UEFI boot.

If you replace a disk from the boot mirror, don't forget to create the boot partitions. You need it to boot the system should the primary disk fail.
Just to be clear here: I assume that both disks in a mirror will have boot code installed on them by the initial installation process, in that first 512K partition. Is that right, or is it only on, say, ada0 and not on ada1? I wondered about this because during my install carnival I could boot the SSD by booting an install CD, but not without one. I might be mistaken; I was sometimes rather frazzled. Thanks.
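Rather than guessing, one way to check is to look at both partition tables and, if in doubt, simply (re)write the bootcode, which is harmless on a disk that already has it. A sketch for a BIOS/gptzfsboot setup:

```shell
# Both disks should show a freebsd-boot partition as p1
gpart show ada0 ada1

# (Re)install the bootcode on the second disk, just in case
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada1
```
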
 
That's a rake lots of people step on :D
Especially if you have been working with geom mirror before.
Please mind the difference between zpool add/remove and zpool attach/detach.
What zpool attach does is what gmirror add actually does, so it can be confusing. However, geom mirror doesn't have such a thing as attach.
 
What zpool attach does is what gmirror add actually does, so it can be confusing.
Not by definition; it depends heavily on context (= the ZFS pool you're working with). gmirror will always add a disk to a mirror, but zpool attach only adds the disk to a pool, no matter whether that pool happens to be a mirror, raidz, or even a stripe.

So within that perspective these commands definitely do not perform the same actions.
 
What zpool add does is add a vdev to the pool. Adding a vdev expands the pool's capacity by striping. You must know that within a ZFS pool there is no redundancy between vdevs; redundancy exists only inside a vdev.
What zpool attach does is mirror one provider onto another inside a vdev.
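The contrast, sketched for a single-disk pool like the one at the start of this thread (writing ada0p3 for brevity; the pool here actually uses ada0p3.eli):

```shell
# attach: turns the existing ada0p3 vdev into a two-way mirror (redundancy)
zpool attach zroot ada0p3 ada1p3

# add: appends ada1 as a second top-level vdev, striping data (no redundancy)
zpool add zroot ada1
```
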

I think that in Oracle Solaris 11.4 ZFS there's the possibility to remove a top-level vdev from a pool, but for many years it was not possible.
I'm not sure whether it can now be done in OpenZFS. If it's possible, you could have done this without destroying the pool.
 