Other gmultipath state degraded

Hi guys

I'm running some tests labeling da* and md* devices, but I don't understand why they all end up in a DEGRADED state.

The disks are completely blank, without any partition scheme. I run the following to label each one with its serial number:

camcontrol inquiry da0 -S | xargs -J % -n1 gmultipath label % /dev/da0

The label is added without any error, but when I check the list and status with gmultipath it shows up as DEGRADED; the same happens with md devices:

Code:
Name    Status  Components
multipath/E0D55E6D6638F39  DEGRADED  da0 (ACTIVE)

Code:
Geom name: E0D55E6D6638F39
Type: AUTOMATIC
Mode: Active/Passive
UUID: 52066dd4-e1b5-11ed-820d-28d2443cb776
State: DEGRADED
Providers:
1. Name: multipath/E0D55E6D6638F39
   Mediasize: 30943995392 (29G)
   Sectorsize: 512
   Mode: r0w0e0
   State: DEGRADED
Consumers:
1. Name: da0
   Mediasize: 30943995904 (29G)
   Sectorsize: 512
   Mode: r1w1e1
   State: ACTIVE

I don't know what I did wrong.

Thanks.
 
I've never used gmultipath on FreeBSD, but on other operating systems having only one path to a multipathed device is reported as degraded as well (example below). Have you tried adding more paths?

Code:
genunix: sd2 online
genunix: sd2: multipath status: degraded: path 3 mpt_sas35/disk@w5000c500a052d6b1,0 is online
...
genunix: sd2: multipath status: optimal: path 26 mpt_sas36/disk@w5000c500a052d6b2,0 is online; load balancing: logical-block, region-size: 18
 
I tried what you suggested. I was only able to add more disks manually, not automatically, and you're right: the DEGRADED state disappeared.
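
For reference, the manual way I found to add another device to the existing label looks roughly like this; the second device name (da1) is only an example of another path to the same disk:

Code:
# add another provider (path) to an existing multipath geom
gmultipath add E0D55E6D6638F39 /dev/da1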

But I don't need more than one disk per label. I'm doing ZFS exercises and I use md virtual disks for them. One of the exercises I set myself is to assign labels to the md disks, since I've read that labeling disks is recommended.

Let me show you my procedure.

Code:
### Create the backing files
truncate -s 1g disk{0,1,2,3,4}

### Create md devices on top of the backing files
ls disk* | xargs -J % -n1 mdconfig -a -t vnode -f %

### Create a manual multipath label
gmultipath create labeltest /dev/md{0,1,2,3,4}

OK, I get the following:

Code:
Geom name: labeltest
Type: MANUAL
Mode: Active/Passive
UUID: (null)
State: OPTIMAL
Providers:
1. Name: multipath/labeltest
   Mediasize: 1073741824 (1.0G)
   Sectorsize: 512
   Mode: r1w1e1
   State: OPTIMAL
Consumers:
1. Name: md1
   Mediasize: 1073741824 (1.0G)
   Sectorsize: 512
   Mode: r2w2e2
   State: PASSIVE
2. Name: md2
   Mediasize: 1073741824 (1.0G)
   Sectorsize: 512
   Mode: r2w2e2
   State: PASSIVE
3. Name: md3
   Mediasize: 1073741824 (1.0G)
   Sectorsize: 512
   Mode: r2w2e2
   State: PASSIVE
4. Name: md4
   Mediasize: 1073741824 (1.0G)
   Sectorsize: 512
   Mode: r2w2e2
   State: ACTIVE

I create a pool on top of the multipath provider:

zpool create mdpool /dev/multipath/labeltest

Code:
  pool: mdpool
 state: ONLINE
config:

    NAME                   STATE     READ WRITE CKSUM
    mdpool                 ONLINE       0     0     0
      multipath/labeltest  ONLINE       0     0     0

errors: No known data errors

  pool: zroot
 state: ONLINE
config:

    NAME        STATE     READ WRITE CKSUM
    zroot       ONLINE       0     0     0
      ada0p3    ONLINE       0     0     0

errors: No known data errors

Now I try to simulate a disk failure, or pulling the disk:

mdconfig -d -u md0 -o force

I get the following kernel messages on my main tty:

Code:
GEOM_MULTIPATH: md0 in labeltest was disconnected
GEOM_MULTIPATH: md4 is now active path in labeltest
GEOM_MULTIPATH: md0 removed from labeltest

And this is the status of gmultipath:

Code:
Name   Status  Components
multipath/labeltest  OPTIMAL  md1 (PASSIVE)
                              md2 (PASSIVE)
                              md3 (PASSIVE)
                              md4 (ACTIVE)
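
To undo the simulated failure, I assume something like the following would work: re-attach the backing file to the same unit and add the path back by hand, since labeltest is a manual geom:

Code:
# re-attach the backing file as md0 again
mdconfig -a -t vnode -f disk0 -u 0

# manual geom: the path has to be re-added explicitly
gmultipath add labeltest /dev/md0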

Well, I can understand that the DEGRADED state is because there is no other path to that disk, right?

But I don't need everything I did above; I just want one label pointing at each disk. I suppose that in automatic mode, as in my first post, that is correct, even if it shows as DEGRADED?
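
As a quick check of which mode a label is in, I assume the Type field in gmultipath list is what distinguishes them; for example, for the automatic label from my first post:

Code:
# AUTOMATIC: created with "gmultipath label", metadata kept on the disk
# MANUAL:    created with "gmultipath create", no on-disk metadata
gmultipath list E0D55E6D6638F39 | grep -E 'Type|State'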

Thanks.
 
This is how I currently have the configuration:

rc.conf

Code:
mdconfig_md0="-t vnode -f /root/disk0"
mdconfig_md1="-t vnode -f /root/disk1"
mdconfig_md2="-t vnode -f /root/disk2"
mdconfig_md3="-t vnode -f /root/disk3"
mdconfig_md4="-t vnode -f /root/disk4"
mdconfig_md5="-t vnode -f /root/disk5"
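
For completeness, the labels and the pool were created more or less like this (a sketch reconstructed from the status output below, not a copy of my exact session):

Code:
# one automatic label per md device (gmultipath keeps its metadata
# in the last sector of each provider)
for i in 0 1 2 3 4 5; do
    gmultipath label m$i /dev/md$i
done

# two mirrors on the multipath providers, plus two log devices
zpool create xmz0 \
    mirror multipath/m0 multipath/m1 \
    mirror multipath/m2 multipath/m3 \
    log multipath/m4 multipath/m5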

zpool status

Code:
  pool: xmz0
 state: ONLINE
config:

    NAME                   STATE     READ WRITE CKSUM
    xmz0                   ONLINE       0     0     0
      mirror-0             ONLINE       0     0     0
        multipath/m0       ONLINE       0     0     0
        multipath/m1       ONLINE       0     0     0
      mirror-1             ONLINE       0     0     0
        multipath/m2       ONLINE       0     0     0
        multipath/m3       ONLINE       0     0     0
    logs
      multipath/m4         ONLINE       0     0     0
      multipath/m5         ONLINE       0     0     0

errors: No known data errors

gmultipath status

Code:
Name          Status    Components
multipath/m0  DEGRADED  md0 (ACTIVE)
multipath/m1  DEGRADED  md1 (ACTIVE)
multipath/m2  DEGRADED  md2 (ACTIVE)
multipath/m3  DEGRADED  md3 (ACTIVE)
multipath/m4  DEGRADED  md4 (ACTIVE)
multipath/m5  DEGRADED  md5 (ACTIVE)

As I said in my previous comment, I only need each label to point to a single disk, even if that seems contradictory given the very name of the command, gmultipath.

I suppose that, since the label has no additional paths, it stays in a DEGRADED state. I assume I'm not doing anything wrong and that the procedure is correct; I'm also going to read more about labeling.

Thanks guys.
 