ZFS Unable to attach disk to pool (zfs mirror)

Hello,

I was wondering if someone could help me attach a disk to a zpool. I'm confused and seem to be going around in circles.

Bash:
# zpool status
    NAME        STATE     READ WRITE CKSUM
    storage     ONLINE       0     0     0
      da1       ONLINE       0     0     0

Code:
# geom disk list
Geom name: da1
Providers:
1. Name: da1
   Mediasize: 4000752599040 (3.6T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r1w1e1
   descr: WD Elements 2620
   lunname: WD      Elements 2620   WX72DA0E7JA6
   lunid: WD      Elements 2620   WX72DA0E7JA6
   ident: 5758373244413045374A4136
   rotationrate: 5400
   fwsectors: 63
   fwheads: 255

Geom name: da2
Providers:
1. Name: da2
   Mediasize: 4000752599040 (3.6T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r0w0e0
   descr: WD Elements 2620
   lunname: WD      Elements 2620   WXW2AB037037
   lunid: WD      Elements 2620   WXW2AB037037
   ident: 575857324142303337303337
   rotationrate: 5400
   fwsectors: 63
   fwheads: 255

Yep. They're two USB drives running on a Raspberry Pi. I know it's not ideal, but I wanted to see if it could be done. It ran fine for a while, then the disk (/dev/da2) started removing itself from the pool 'storage'.

Code:
# zpool attach storage 575857324142303337303337 da2
cannot attach da2 to 575857324142303337303337: no such device in pool

I'm not sure where to progress from here. Does anyone know?
 
The GEOM ident has nothing to do with ZFS. Just run the following command to create a mirror of da1 and da2:

Code:
zpool attach storage da1 da2

You can use the ZFS disk guid, which appears in the output of zdb -l <device>, but there's usually no point. When attaching disks, just use whatever device name appears in the status output. (If a disk has actually been physically removed, ZFS may show its guid in the status output instead, since the guid no longer maps to an active device name. That's when you'd have to resort to using the guid in zpool commands.)
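For reference, the guid lives in the vdev label that zdb -l prints. A quick way to pull it out (the label text below is an invented excerpt for illustration, not from this pool):

```shell
# Invented excerpt of `zdb -l /dev/da2` output, for illustration only
label="    txg: 4
    pool_guid: 16683119790918990178
    guid: 1234567890123456789
    state: 0"

# Pull out the vdev guid line; zpool commands accept this number
# in place of a device name (e.g. `zpool attach storage <guid> da2`)
vdev_guid=$(printf '%s\n' "$label" | awk '$1 == "guid:" {print $2}')
echo "$vdev_guid"
```

On the real box you'd pipe the actual command instead: zdb -l /dev/da2 | awk '$1 == "guid:" {print $2}'.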

It was running for a while, then the disk (/dev/da2) started removing itself from the pool 'storage'.

This seems strange. I've never come across ZFS just losing disks. If it thinks a disk is inaccessible, it will usually appear in the status output as FAULTED or missing. The pool layout is stored on both disks, so a disk simply 'disappearing' should be impossible.
 
Thank you. I'll wait for it to resilver and keep an eye on it in case its status changes to REMOVED again.

Code:
NAME        STATE     READ WRITE CKSUM
    storage     ONLINE       0     0     0
      mirror-0  ONLINE       0     0     0
        da1     ONLINE       0     0     0
        da2     ONLINE       0     0     0

I think I'll brush up on the GEOM framework as well.
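If you want to keep an eye on it without re-running zpool status by hand, zpool status -x prints a short healthy/unhealthy summary that's easy to script. A rough cron-able sketch (the exact "is healthy" wording is my assumption from memory of zpool's output):

```shell
# Classify the output of `zpool status -x <pool>` (sketch; the "is healthy"
# strings are what zpool prints for a healthy pool, to the best of my knowledge)
check_pool() {
    case "$1" in
        *"is healthy"*|*"all pools are healthy"*) echo OK ;;
        *) echo ALERT ;;
    esac
}

check_pool "pool 'storage' is healthy"
check_pool "status: One or more devices has been removed by the administrator."
```

From cron you'd feed it the real output, e.g. check_pool "$(zpool status -x storage)", and mail yourself whenever it says ALERT.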
 
Nope, went back to REMOVED after a while

Code:
NAME        STATE     READ WRITE CKSUM
    storage     DEGRADED     0     0     0
      mirror-0  DEGRADED     0     0     0
        da1     ONLINE       0     0     0
        da2     REMOVED      0     0     0  (resilvering)

These two disks are connected to a USB hub that is powered externally. They're not connected to the Pi's own USB ports (there isn't enough power to drive two USB drives).

I wonder if either da2 is faulty or there's something it doesn't like with the hub.
 
I can't find product information for Western Digital Elements 2620. Can you link to a page for the product?

USB hub that is powered externally

USB 2.0?

Share a probe, if you like:
  1. pkg install sysutils/hw-probe sysutils/hwstat sysutils/lsblk sysutils/pciutils sysutils/usbutils
  2. hw-probe -all -upload

From the opening post:

Bash:
# zpool status
    NAME        STATE     READ WRITE CKSUM
    storage     ONLINE       0     0     0
      da1       ONLINE       0     0     0

– no mirror-0 there; no REMOVED.

If the device in your case once disappeared without removal, this might be of interest: L2ARC: inexplicable disappearance, without removal, of cache device · Discussion #12519 · openzfs/zfs
 
Nope, went back to REMOVED after a while

Code:
NAME        STATE     READ WRITE CKSUM
    storage     DEGRADED     0     0     0
      mirror-0  DEGRADED     0     0     0
        da1     ONLINE       0     0     0
        da2     REMOVED      0     0     0  (resilvering)

These two disks are connected to a USB hub that is powered externally. They're not connected to the Pi's own USB ports (there isn't enough power to drive two USB drives).

I wonder if either da2 is faulty or there's something it doesn't like with the hub.
Check the output of dmesg and the contents of /var/log/messages to see why the USB device got removed. My guess is that it drew more power than the Pi could supply.
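Something along these lines should show the detach events. The sample log lines below are invented to show the general shape of a FreeBSD USB disconnect, not taken from the OP's machine:

```shell
# Invented /var/log/messages excerpt showing what a USB disconnect looks like
log="kernel: ugen0.4: <Western Digital Elements 2620> at usbus0 (disconnected, ignored)
kernel: da2 at umass-sim1 bus 1 scbus2 target 0 lun 0
kernel: da2: <WD Elements 2620> detached
kernel: (da2:umass-sim1:1:0:0): Periph destroyed"

# On the real box: dmesg | grep -E 'da2|umass'   or   grep da2 /var/log/messages
printf '%s\n' "$log" | grep -c 'da2.*detached'
```

Repeated disconnect/reattach cycles in the log would point at the hub or its power supply rather than the disk itself.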
My experience with the Raspberry Pi has been bad so far. I had two of them burn out in a matter of months, even without any USB devices attached.
I don't think it handles power very well, so attaching USB disks to it is not a good idea in my opinion, except for a quick test. The Pi is more of a toy for tinkering with hardware or developing your own device.
If you want to use it seriously, look into devices like a Zotac or an Intel NUC. I have been using NUCs for years and they have been very reliable.

By the way, if it got removed and you attach it again, you can bring it back online like so: zpool online storage da2
 