Solved: Help needed with removing an idle drive

I have 4 drives in this system, one SSD (ada3) not in use.

When I physically disconnect this drive (ada3), the boot ends with an error:

[error.jpg: screenshot of the boot error]


Physically reconnecting this drive brings the system back to normal.

I am clearly misunderstanding something here.

Any ideas?
 
Connect the drive.
Then the basic questions: the output of zpool list -v, the contents of /etc/fstab, and
Code:
cat /boot/loader.conf | grep -i zfs
I have only ada0, ada1 and ada2 in use. ada3 is idle, but I cannot physically remove it. The error looks nasty.

Code:
root@Rhodium ~# zpool list -v
NAME         SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
kelder      1.80T   616G  1.20T        -         -    18%    33%  1.00x    ONLINE  -
  mirror    1.80T   616G  1.20T        -         -    18%  33.4%      -  ONLINE
    ada1p3      -      -      -        -         -      -      -      -  ONLINE
    ada0p3      -      -      -        -         -      -      -      -  ONLINE
logs            -      -      -        -         -      -      -      -  -
  ada2p4    15.5G  27.6M  15.5G        -         -     0%  0.17%      -  ONLINE
cache           -      -      -        -         -      -      -      -  -
  ada2p3    64.0G  63.6G   415M        -         -     0%  99.4%      -  ONLINE

fstab is nearly empty; only procfs is active:

Code:
root@Rhodium ~# cat /etc/fstab
# Device                Mountpoint      FStype  Options         Dump    Pass#
## /dev/ada3p2          none            swap    sw              0       0
## /dev/ada2p2          none            swap    sw              0       0
proc                    /proc           procfs  rw              0       0


Code:
geom disk list
Geom name: ada0
Providers:
1. Name: ada0
   Mediasize: 2000398934016 (1.8T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r1w1e2
   descr: ST2000DM008-2FR102
   lunid: 5000c500c9759a2f
   ident: ZFL414RC
   rotationrate: 7200
   fwsectors: 63
   fwheads: 16

Geom name: ada1
Providers:
1. Name: ada1
   Mediasize: 2000398934016 (1.8T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r1w1e2
   descr: TOSHIBA DT01ACA200
   lunid: 5000039ff3e7a699
   ident: Y3RU709KS
   rotationrate: 7200
   fwsectors: 63
   fwheads: 16

Geom name: ada2
Providers:
1. Name: ada2
   Mediasize: 128035676160 (119G)
   Sectorsize: 512
   Mode: r2w2e4
   descr: Apacer AS350 128GB
   ident: EB8F079A1B2F00367398
   rotationrate: 0
   fwsectors: 63
   fwheads: 16

Geom name: ada3
Providers:
1. Name: ada3
   Mediasize: 128035676160 (119G)
   Sectorsize: 512
   Mode: r0w0e0
   descr: Apacer AS350 128GB
   ident: 149907061A0100064235
   rotationrate: 0
   fwsectors: 63
   fwheads: 16

Geom name: da0
Providers:
1. Name: da0
   Mediasize: 0 (0B)
   Sectorsize: 512
   Mode: r0w0e0
   descr: Multiple Card  Reader
   ident: 058F63666433
   rotationrate: unknown
   fwsectors: 0
   fwheads: 0


Code:
cat /boot/loader.conf | grep -i zfs
## zfs_load="YES"
openzfs_load="YES"
vfs.zfs.prefetch_disable="1"
 
Try setting

vfs.root.mountfrom="zfs:xxxpool/xxxdataset" in loader.conf

If that does not help, try removing the ZFS cache file and recreating it:
zpool set cachefile=/boot/zfs/zpool.cache your_pool
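A minimal sketch of the two suggestions above, using the pool name "kelder" from the zpool list output earlier in the thread (the dataset name ROOT/default is an assumption -- substitute your actual boot environment):

```shell
# In /boot/loader.conf, point the loader explicitly at the root dataset.
# "kelder/ROOT/default" is an assumed example; use your own pool/dataset:
#   vfs.root.mountfrom="zfs:kelder/ROOT/default"

# Remove the stale pool cache file and let ZFS regenerate it,
# so it no longer references devices that are gone:
rm /boot/zfs/zpool.cache
zpool set cachefile=/boot/zfs/zpool.cache kelder
```

Setting the cachefile property on the pool rewrites the cache file from the pool's current device list.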
 
In loader.conf you can hint where to load the kernel from:
Code:
### Selects the default device to load the kernel from
currdev="zfs:myzpool/ROOT/default:"
 
I did several things at the same time and it is OK now:
  1. recreated /boot/zfs/zpool.cache
  2. removed the log device from the pool
After that I was able to remove the physical disk. I still don't have an explanation for it, but it is OK now.
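For reference, the two steps above would correspond to commands along these lines; the pool name "kelder" and log vdev "ada2p4" are taken from the zpool list -v output earlier in the thread (run with care, these modify the pool):

```shell
# Remove the log (SLOG) vdev from the pool.
# "kelder" and "ada2p4" are the pool and log device shown earlier.
zpool remove kelder ada2p4

# Drop the old cache file and regenerate it from the pool's
# current (now smaller) device list.
rm /boot/zfs/zpool.cache
zpool set cachefile=/boot/zfs/zpool.cache kelder
```

Log vdevs are one of the few vdev types that can be removed from a pool online with zpool remove.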
 