How to destroy a graid3 array completely?

Hi folks,

I have created (labeled) a graid3 array of three disks exactly like in the examples section of the graid3(8) man page. Now I want to destroy the array, but neither stop nor clear does anything useful. The array still shows up in graid3 status as well as under /dev/raid3/data.
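For reference, the creation was roughly along these lines (the component names here are placeholders, not necessarily the ones I used):

Code:
# graid3 label -v data ada1 ada2 ada3
# newfs /dev/raid3/data
# mount /dev/raid3/data /mnt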

How can I properly get rid of this array and create another one?
 
Did you definitely run the clear command on the three disks that were in the array?
I've just tested on my machine and it works as expected:

Code:
# kldload geom_raid3
# graid3 status
          Name    Status  Components
raid3/testraid  COMPLETE  md3 (ACTIVE)
                          md2 (ACTIVE)
                          md1 (ACTIVE)
# graid3 stop testraid
# graid3 status
# graid3 clear md1 md2 md3
# kldunload geom_raid3
# kldload geom_raid3
# graid3 status
#

The alternative would be to clear the array metadata manually. It's bound to be at the start or end of the disk so clearing a few MB at each end will probably do it.
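Something along these lines should do it; just a sketch, the device name is a placeholder for each of your components, and writing to a raw disk with dd is obviously destructive:

Code:
# dd if=/dev/zero of=/dev/ada0 bs=1m count=4
# dd if=/dev/zero of=/dev/ada0 bs=1m seek=$(( $(diskinfo ada0 | awk '{print $3}') / 1048576 - 4 ))

The first command zeroes the first 4 MB, the second seeks to roughly 4 MB before the end (diskinfo's third field is the media size in bytes) and runs until dd hits the end of the disk, where it stops with a harmless error. Note that this only works if nothing has the disk open for writing; if the array is still tasted and running you will get the same "Operation not permitted" as with graid3 clear, unless you override GEOM's write protection with sysctl kern.geom.debugflags=16.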
 
I first ran stop and then clear, which fails with "Operation not permitted". The log file says this about stop:

Code:
GEOM_RAID3: Device data: provider raid3/data destroyed.
GEOM_RAID3: Device data destroyed.
GEOM_RAID3: Device raid3/data launched (3/3).

Somehow the provider is immediately restarted after stop. graid3 unload says "Device is busy" even though data is not mounted at all. All that stop seems to do is replace ada0 through ada2 in the status output with their corresponding disk IDs. Is there a way to debug what is actually happening?
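I am thinking of raising the GEOM_RAID3 debug level and watching the log while stopping, assuming kern.geom.raid3.debug from graid3(8) is the right knob:

Code:
# sysctl kern.geom.raid3.debug=2
# graid3 stop data
# tail -n 20 /var/log/messages
# sysctl kern.geom.raid3.debug=0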
 
I have tried to reproduce this in a virtual machine:

Code:
root@freebsd-mika:~ # graid3 load
root@freebsd-mika:~ # graid3 status
      Name    Status  Components
raid3/data  COMPLETE  ada5 (ACTIVE)
                      ada4 (ACTIVE)
                      ada3 (ACTIVE)

So far, so good.

Code:
root@freebsd-mika:~ # graid3 stop data
root@freebsd-mika:~ # cat /var/log/messages

Oct  4 11:43:44 freebsd-mika kernel: GEOM_RAID3: Device data: provider raid3/data destroyed.
Oct  4 11:43:44 freebsd-mika kernel: GEOM_RAID3: Device data destroyed.
Oct  4 11:43:44 freebsd-mika kernel: GEOM_RAID3: Device raid3/data launched (3/3).
root@freebsd-mika:~ # graid3 clear ada3 ada4 ada5
Can't clear metadata on ada3: Operation not permitted.
Can't clear metadata on ada4: Operation not permitted.
Can't clear metadata on ada5: Operation not permitted.
graid3: Not fully done.
root@freebsd-mika:~ # graid3 status
      Name    Status  Components
raid3/data  COMPLETE  diskid/DISK-VB964bfeac-58db8d25 (ACTIVE)
                      diskid/DISK-VB6d86b1d8-71eecdea (ACTIVE)
                      diskid/DISK-VBdd1d7659-c06fae71 (ACTIVE)

This is exactly what happens on my home machine. At the moment I am lost.
 
It looks like it's attaching the array again immediately after you stop it. Have you unmounted any filesystems on the array before trying to stop it?

You could also try booting without the geom_raid3 module loaded (unless you compiled it in) and see if it will let you clear the metadata then.
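If the module comes from /boot/loader.conf (the usual setup), the rough sequence would be something like this; sysrc can flip the knob, or just edit the file by hand. The component names below are the ones from your VM:

Code:
# sysrc -f /boot/loader.conf geom_raid3_load="NO"
# shutdown -r now

After the reboot, with geom_raid3 not loaded, nothing should be holding the components open, and graid3 clear ada3 ada4 ada5 (or a plain dd over the disks) should go through. Flip geom_raid3_load back to "YES" afterwards if you still want the class at boot.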
 
Yes, all filesystems have been unmounted:

Code:
root@freebsd-mika:~ # mount
/dev/ada0p2 on / (ufs, local, journaled soft-updates)
devfs on /dev (devfs, local, multilabel)
fdescfs on /dev/fd (fdescfs)
procfs on /proc (procfs, local)
root@freebsd-mika:~ # graid3 status
      Name    Status  Components
raid3/data  COMPLETE  diskid/DISK-VB964bfeac-58db8d25 (ACTIVE)
                      diskid/DISK-VB6d86b1d8-71eecdea (ACTIVE)
                      diskid/DISK-VBdd1d7659-c06fae71 (ACTIVE)
root@freebsd-mika:~ # graid3 stop data
root@freebsd-mika:~ # graid3 clear ada3 ada4 ada5
Can't clear metadata on ada3: Operation not permitted.
Can't clear metadata on ada4: Operation not permitted.
Can't clear metadata on ada5: Operation not permitted.

After rebooting the server, I see that data is immediately started by graid3 load. Stop is impossible. Another reboot, and then clear finally did it. Still, this does not seem right to me: restarting a production machine just to disable an array is wrong.

The funny thing is that it happens with gmirror too: after a stop, the device is automatically relaunched. This is somewhat ridiculous.
Both the virtual machine and my home machine are i386. I will retry in a 64-bit VM.

Edit: I have now quickly installed 10.3-RELEASE 64-bit in a virtual machine. Same issue: the device is automatically relaunched, and graid3 unload still says the device is busy.
Edit 2: Same behavior on 11.0-RC3 64-bit.
 
Another note: I tried gstripe, gconcat and gvinum RAID5, and it was easily possible to destroy/stop them. However, except for gconcat, they all had abysmal write performance: it dropped from 50-60 MB/s to 100 kB/s (!).
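For the record, tearing those down was just the usual stop and clear, e.g. roughly this for gstripe (array name and disk names from memory):

Code:
# gstripe stop st0
# gstripe clear ada3 ada4 ada5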

I will perform write tests on the raw disks; depending on the result, either my IDE controller, the driver, or the GEOM classes are broken.
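Probably something simple like this, on one of the disks that is getting scrapped anyway (diskinfo -t gives a quick read benchmark, the dd a rough write figure; the dd is destructive, and ada3 is just an example name):

Code:
# diskinfo -t ada3
# dd if=/dev/zero of=/dev/ada3 bs=1m count=1024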
 
Sorry to dig up an old topic, but I have exactly the same problem with graid3 on 11.2. I cannot stop or clear the array:

Code:
# graid3 status
      Name    Status  Components
raid3/suur  COMPLETE  ada5 (ACTIVE)
                      ada4 (ACTIVE)
                      ada3 (ACTIVE)
                      ada2 (ACTIVE)
                      ada1 (ACTIVE)
# graid3 stop suur
# graid3 status
      Name    Status  Components
raid3/suur  COMPLETE  diskid/DISK-944GMUEGS (ACTIVE)
                      diskid/DISK-WD-WMC4N1205869 (ACTIVE)
                      diskid/DISK-14KMEVBGS (ACTIVE)
                      diskid/DISK-56S7N1YAS (ACTIVE)
                      diskid/DISK-X4VTL61GS (ACTIVE)
# graid3 stop suur
# graid3 status
      Name    Status  Components
raid3/suur  COMPLETE  ada1 (ACTIVE)
                      ada2 (ACTIVE)
                      ada3 (ACTIVE)
                      ada4 (ACTIVE)
                      ada5 (ACTIVE)

Stopping the array immediately launches it again. Clearing the array gives device busy.

At the moment I am trying the workaround of booting without geom_raid3_load="YES" in /boot/loader.conf, and I am now wiping all 5 disks with dd.
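The wipe itself is nothing fancy; zeroing just the first and last few MB of each component would probably be enough to kill the metadata, but I am simply going over the whole disks, roughly like this:

Code:
# for d in ada1 ada2 ada3 ada4 ada5; do dd if=/dev/zero of=/dev/$d bs=1m; done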
 
I still had the same problem in 2020, with 12.2.

I could remove one disk from the graid3 array, but after that no more. So I went the dd route again...
 