Corrupted GPT on a RAID array.

My apologies if this topic has been covered elsewhere; however, I have done some extensive research to no avail.

I'm trying to format a RAID6 array so that I can start using it. The first of my problems is that when I run newfs according to the handbook, I receive the following kernel message:

Code:
May 26 09:45:39 cookiemonster kernel: GEOM: aacd0: the secondary GPT table is corrupt or invalid.
May 26 09:45:39 cookiemonster kernel: GEOM: aacd0: using the primary only -- recovery suggested.
May 26 09:45:39 cookiemonster kernel: GEOM: ufsid/53826774fcc99d95: the secondary GPT table is corrupt or invalid.
May 26 09:45:39 cookiemonster kernel: GEOM: ufsid/53826774fcc99d95: using the primary only -- recovery suggested.

Now, from what I can gather, this is somewhat normal, since GPT's secondary table is stored at the end of the disk, which on a RAID array may confuse things. After I run gpart recover as suggested, all seems fine. However, this leads me on to my second issue:

Code:
root@cookiemonster:~ # gpart show
=>       34  488397101  ada0  GPT  (233G)
         34       1024     1  freebsd-boot  (512K)
       1058    8388608     2  freebsd-swap  (4.0G)
    8389666  480007469     3  freebsd-zfs  (229G)

=>         34  23404195773  aacd0  GPT  (11T)
           34          478         - free -  (239K)
          512  23404194816      1  freebsd-ufs  (11T)
  23404195328          479         - free -  (240K)

=>         34  23404195773  ufsid/53826774fcc99d95  GPT  (11T)
           34          478                          - free -  (239K)
          512  23404194816                       1  freebsd-ufs  (11T)
  23404195328          479                          - free -  (240K)

root@cookiemonster:~ #

The partition appears to be shown twice. Is this normal, or is it because of some kind of corruption? I'm reluctant to start putting data onto this array until I'm sure that it's usable.

For the record, I'm using FreeBSD 10; the disks are connected to an Adaptec 5805 RAID card and are all Seagate 2TB Barracudas.

Any help is appreciated.
 
The double showing is just an artifact, and (I think) will go away when either of the two is mounted.

The GPT problem is more serious. Somehow the filesystem managed to overlay the secondary table, and that should not have happened. However, it depends on the command given to newfs(8). You're right, this should be fixed before using that space. Please show the exact command used.

The RAID hardware should hide any reserved space it uses for metadata on those disks, so that's not likely to be the problem.
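
For reference, diskinfo(8) shows exactly how much space the controller is presenting to the OS, which makes it easy to compare against the array size reported by the controller's own tools:
Code:
# diskinfo -v aacd0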

Currently, the recommendation is to use hardware RAID controllers in JBOD mode and let ZFS deal with the disks directly. Otherwise, the hardware can hide disk problems from ZFS, which really is better at handling them. Think of it not so much as wasting an expensive RAID card as turning the whole computer into a very good, larger hardware RAID.
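
For example, once the controller exposes the raw drives, the whole array could be rebuilt as a raidz2 pool (roughly the equivalent of RAID6). A minimal sketch, assuming the eight drives show up as da0 through da7 and using "tank" only as a placeholder pool name:
Code:
# zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7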
 
I had used the following commands to create the partitions:

Code:
gpart create -s GPT aacd0
gpart add -t freebsd-ufs aacd0
newfs -U /dev/aacd0p1

After newfs finishes, the messages about the partition table appear on the console.

I had intended to use ZFS; however, when I set the disks to JBOD mode, not only does the controller still insist on writing some metadata to them, they simply won't appear in /dev. So unless I create 8 separate single-disk arrays, I don't see a way around it.
 
The corruption message would be from newfs(8) overwriting the secondary table. That could easily happen if it was given the drive rather than the partition:
newfs -U /dev/aacd0
rather than
newfs -U /dev/aacd0p1
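
As an aside, there is a read-only way to check whether the backup header is still intact: it lives in the very last sector of the device and starts with the "EFI PART" signature. A rough sketch, using the sector count reported by diskinfo(8):
Code:
# last=$(( $(diskinfo aacd0 | awk '{print $4}') - 1 ))
# dd if=/dev/aacd0 bs=512 skip=$last count=1 2>/dev/null | hexdump -C | head -2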

For now, let's verify that by reformatting and repartitioning:
Code:
# gpart destroy -F aacd0
# gpart create -s GPT aacd0
# gpart add -t freebsd-ufs -a4k -b1m aacd0
# newfs -U /dev/aacd0p1
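
Afterwards, check that the backup table survived; gpart should not flag the table as [CORRUPT], and no new GEOM warnings should appear on the console:
Code:
# gpart show aacd0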

If that still overwrites the secondary GPT, it would be due to something the controller is doing. A workaround would be leaving a small amount of unused space at the end of the disk to protect the secondary GPT. 1M would be enough. So instead of the third step above,
Code:
# gpart add -t freebsd-ufs -a4k -b1m -s11534335m aacd0
That should be 11T minus 1M, if I did the math right. The secondary GPT is actually only 17K. The exact numbers should be worked out from that array. It would do very little harm to leave more space, and might be easier to calculate.
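One way to work the numbers out on the machine itself, as a sketch (it leaves a full 1M unused at the end, taking the sector count from diskinfo(8)):
Code:
# total=$(diskinfo aacd0 | awk '{print $4}')    # mediasize in 512-byte sectors
# size=$((total - 2048 - 2048))                 # minus 1M at the front (-b1m) and 1M at the end
# gpart add -t freebsd-ufs -a4k -b1m -s $size aacd0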

I have not used that controller, but have heard of others doing the workaround of making each disk an array. Maybe someone else knows an easier way.
 
wblock@ said:
For now, let's verify that by reformatting and repartitioning:
Code:
# gpart destroy -F aacd0
# gpart create -s GPT aacd0
# gpart add -t freebsd-ufs -a4k -b1m aacd0
# newfs -U /dev/aacd0p1

That appears to have done the trick, thank you. I did not originally use the -a4k and -b1m arguments, so I assume they have had some effect? Unless of course I had run newfs on aacd0 instead of aacd0p1, but I was sure I hadn't.

Once again many thanks.
 
-a4k means "align to 4K blocks", which helps write speed on drives with 4K blocks.

-b1m means "start the partition at 1M", which is a semi-standard location that is compatible with other operating systems and some RAID system metadata.

Be sure to reboot and make sure there are no other warnings.

Personally, I would really want ZFS checksums on an 11T volume. Well, that and hardware independence. And snapshots. And compression.
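
Those come nearly for free once the pool exists; a minimal sketch, again assuming a placeholder pool named "tank":
Code:
# zfs set compression=lz4 tank
# zfs snapshot tank@pristine
# zpool scrub tank    # walks the pool and verifies every block against its checksum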
 
Hello, we have exactly the same problems with Adaptec 7 series controllers. We have many brand new servers with Adaptec ASR 71685, 71605, and 78165 controllers. Of course, we use the latest firmware version on each of them, and FreeBSD 10.0-RELEASE. We have also set the new "SETCONTROLLERMODE" option on the Adaptec controllers to "RAID: Hide RAW".

After system installation we get in the dmesg.boot log the following message:
Code:
aacraid0: <Adaptec RAID Controller> port 0xf000-0xf0ff mem 0xfbd00000-0xfbdfffff,0xfbe80000-0xfbe803ff irq 50 at device 0.0 on pci129
aacraid0: Enable Raw I/O
aacraid0: Enable 64-bit array
aacraid0: New comm. interface type2 enabled
aacraid0: ASR78165, aacraid driver 3.1.1-1
aacraidp0 on aacraid0
aacraidp1 on aacraid0
aacraidp2 on aacraid0
aacraidp3 on aacraid0
It looks like everything is fine: we are using driver 3.1.1-1 integrated into the system, and we can see aacraid0 attach the devices. There are no official drivers for FreeBSD 10 on the vendor's site, only for 9.2 at the latest, which is horrible. The next thing we see in dmesg.boot is the following:
Code:
da0 at aacraidp0 bus 0 scbus8 target 0 lun 0
da0: <Adaptec Array V1.0> Fixed Direct Access SCSI-4 device
da0: Serial Number 0000000000
da0: 300.000MB/s transfers
da0: Command Queueing enabled
da0: 1048585MB (2147502080 512 byte sectors: 255H 63S/T 133675C)
da1 at aacraidp1 bus 0 scbus9 target 24 lun 0
da1: <HGST HUS724040ALS640 A1C4> Fixed Direct Access SCSI-6 device (offline)
da1: Serial Number         PCH29B5X
da1: 300.000MB/s transfers
da1: Command Queueing enabled
da1: 3815447MB (7814037168 512 byte sectors: 255H 63S/T 486401C)
da2 ... etc., up to the last drive in the array, da10.
This raises the first question: why does the driver expose the individual disks of an array at all? The controller mode is "RAID: Hide RAW", and the other modes ("RAID: Expose RAW", "Auto Volume Mode", "HBA Mode", "Simple Volume Mode") have also been tested. On FreeBSD 9 everything was fine: we could not see the individual disks attached to arrays, which is normal.

The second question is about the following messages in dmesg.boot: we have configured two LUNs and can see them as da devices in the system, but the system also sees all the disks in the array, so we get:
Code:
GEOM: da1: the secondary GPT table is corrupt or invalid.
GEOM: da1: using the primary only -- recovery suggested.
GEOM: da2: the secondary GPT table is corrupt or invalid.
GEOM: da2: using the primary only -- recovery suggested.
uhub1: 2 ports with 2 removable, self powered
uhub0: 2 ports with 2 removable, self powered
GEOM: diskid/DISK-%20%20%20%20%20%20%20%20PCH29B5X: the secondary GPT table is corrupt or invalid.
GEOM: diskid/DISK-%20%20%20%20%20%20%20%20PCH29B5X: using the primary only -- recovery suggested.
GEOM: diskid/DISK-%20%20%20%20%20%20%20%20PCH25ZGX: the secondary GPT table is corrupt or invalid.
GEOM: diskid/DISK-%20%20%20%20%20%20%20%20PCH25ZGX: using the primary only -- recovery suggested.
Code:
root@bl:/ # gpart show
=>        34  2147502013  da0  GPT  (1.0T)
          34         128    1  freebsd-boot  (64K)
         162         350       - free -  (175K)
         512    67108864    2  freebsd-ufs  (32G)
    67109376   536870912    3  freebsd-ufs  (256G)
   603980288    67108864    4  freebsd-ufs  (32G)
   671089152  1073741824    5  freebsd-ufs  (512G)
  1744830976   134217728    6  freebsd-swap  (64G)
  1879048704   268452864    7  freebsd-ufs  (128G)
  2147501568         479       - free -  (240K)

=>        34  2147502013  da1  GPT  (3.6T) [CORRUPT]
          34         128    1  freebsd-boot  (64K)
         162         350       - free -  (175K)
         512    67108864    2  freebsd-ufs  (32G)
    67109376   536870912    3  freebsd-ufs  (256G)
   603980288    67108864    4  freebsd-ufs  (32G)
   671089152  1073741824    5  freebsd-ufs  (512G)
  1744830976   134217728    6  freebsd-swap  (64G)
  1879048704   268452864    7  freebsd-ufs  (128G)
  2147501568         479       - free -  (240K)

=>        34  2147502013  da2  GPT  (3.6T) [CORRUPT]
          34         128    1  freebsd-boot  (64K)
         162         350       - free -  (175K)
         512    67108864    2  freebsd-ufs  (32G)
    67109376   536870912    3  freebsd-ufs  (256G)
   603980288    67108864    4  freebsd-ufs  (32G)
   671089152  1073741824    5  freebsd-ufs  (512G)
  1744830976   134217728    6  freebsd-swap  (64G)
  1879048704   268452864    7  freebsd-ufs  (128G)
  2147501568         479       - free -  (240K)
All the configured servers have the same symptoms and the same RAID controller behaviour. To suppress the gptid/diskid entries and the duplicate information in gpart, we set the following in loader.conf:
Code:
kern.geom.label.gptid.enable=0
kern.geom.label.gpt.enable=0
kern.geom.label.disk_ident.enable=0
 
Is there any solution to my problem with the Adaptec 7 series controllers? Or was the problem fixed in FreeBSD 10.1?
 
I encountered a new problem on the production system, and this one is really awful. When I try to expand an existing logical drive, the system freezes and all existing da filesystems go down. The killing command is:

arcconf modify 1 from 2 to 262144 10 0 24 0 25 0 26 0 27 0 28 0 29 0 30 0 31 0 32 0 33

LogicalDrive 2 is not even mounted yet, but aacraid0 hits a huge timeout and the server console shows the following:

Code:
aacraid0: COMMAND 0xfffffe0001076878 TIMEOUT AFTER 122 SECONDS
aacraid0: COMMAND 0xfffffe0001076b48 TIMEOUT AFTER 127 SECONDS
aacraid0: Warning! Controller is no longer running! code = 0xbcb00100

And the system fully crashes!
 
As above, possibly the disks were set up so that the RAID controller used hidden metadata at the end of the disk. The backup GPT was written to the end of the disk reported by the controller.

Then the controller mode was changed to show the raw disk, and now the backup GPT is not at the end of the disk.

If that was the problem, the solution is to repartition the disks in the raw mode.
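
If the disks are going to be used directly rather than as members of a controller-managed array, the repartitioning would be the same sequence shown earlier in the thread, after backing up anything important (da1 here is only a placeholder device name):
Code:
# gpart destroy -F da1
# gpart create -s GPT da1
# gpart add -t freebsd-ufs -a4k -b1m da1
# newfs -U /dev/da1p1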
 
Thanks for the fast reply. Yes, this is a very big problem for me, because I have several production servers already configured and running this way.

On the system above I just tested creating a new logical drive and then expanding it, but the whole system crashes. I need to expand an existing LUN to get more space.
I know this would not be a problem on a Windows system.

Even after the system above was rebooted and dropped into single-user mode for disk checks, I get the following on the console after it sits idle:
Code:
aacraid0: COMMAND 0xfffffe000107b990 TIMEOUT AFTER 396 SECONDS

Trying the same thing again:
Code:
arcconf create 1 logicaldrive 524288 10 0 24 0 25 0 26 0 27 0 28 0 29 0 30 0 31 0 32 0 3
arcconf MODIFY 1 FROM 2 TO 1048576 10 0 24 0 25 0 26 0 27 0 28 0 29 0 30 0 31 0 32 0 33

Reconfiguration of a logical device is a long process.  Are you sure you want to continue?
Press y, then ENTER to continue or press ENTER to abort: y

Reconfiguring logical device: LogicalDrv 2

System freezes instantly!

After that I could not delete the new LUN any more with the arcconf command, and in the controller BIOS it shows as:

Code:
RECONFIGURING - not running
The only thing I can do is delete the logical drive from the Adaptec BIOS.

I'm really afraid of losing data, but expanding the existing LUN is necessary for me.
 
It's a strange situation. On the official Adaptec site there are no drivers for FreeBSD 10.0, only for 8 and 9. I already asked official support about this, and they assured me that the latest drivers are included in the FreeBSD 10.0 system.
 
The Adaptec controllers (ASR7805) in my servers are also destroying the GPT backup table. I think the controller firmware stores some metadata there, which is quite dumb in my opinion (especially when you don't want RAID, it should not write any metadata there at all). The solution is simply not to touch anything. The system can run without the GPT backup; at least for me it has been running for some years without any data corruption.

When I look at all the mess Adaptec is making in their firmware, next time I won't buy from them any more. I really didn't want the RAID functionality they offer, but needed a cheap way to get 8 fast SATA channels. The problem is that Adaptec loves their RAID too much and does not offer proper unmanaged devices. For example, it hides all drives except the boot hard drive during startup and prevents me from booting from a ZFS RAID.

Adaptec seems to have many usability problems and is slightly dangerous to use, judging by what everyone in this thread has run into.
 
When a RAID controller stores metadata on a disk, it is supposed to hide that space. For example, if the controller uses 1M of space at the end of the disk for metadata, the computer with a 512M drive will only see it as 511M.

If the controller does not hide that space, it can be damaged or overwritten by applications. So either this is an incorrect setting, or a controller firmware bug.

If you have to work around it, use MBR partitioning and leave the last part of the drive where the metadata is stored unpartitioned, or create an empty partition there to protect it. GPT could also be used if you can ignore the warning about the corrupted backup table.
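
A rough sketch of that MBR layout, with a placeholder size that would need to be worked out from diskinfo(8) so the partition stops short of the metadata area at the end of the drive:
Code:
# gpart create -s MBR da1
# gpart add -t freebsd -a4k -s 3725G da1    # placeholder size, leaves the tail of the drive untouched
# gpart create -s BSD da1s1
# gpart add -t freebsd-ufs da1s1
# newfs -U /dev/da1s1a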
 
Nevertheless, this is exactly an Adaptec driver problem. I did a test with a cleanly installed FreeBSD 9.2 system and the driver from the vendor's site, and everything works fine!

Driver version Ver. 7.5.0 Build 32028

All disks on the system show as aacdXX devices. Everything is fine with the GPT tables.

For a system with 7 physical HITACHI HUS156030VLS600 HDDs combined into 3 LUNs:
Code:
root@ospine:/ # ll /dev/aac*
crw-r-----  1 root  operator  0x24 Nov  2 12:11 /dev/aac0
crw-r-----  1 root  operator  0x50 Nov  2 12:11 /dev/aacd0
crw-r-----  1 root  operator  0x5e Nov  2 12:11 /dev/aacd0p1
crw-r-----  1 root  operator  0x5f Nov  2 14:11 /dev/aacd0p2
crw-r-----  1 root  operator  0x60 Nov  2 14:11 /dev/aacd0p3
crw-r-----  1 root  operator  0x61 Nov  2 14:11 /dev/aacd0p4
crw-r-----  1 root  operator  0x62 Nov  2 12:11 /dev/aacd0p5
crw-r-----  1 root  operator  0x5c Nov  2 12:11 /dev/aacd1
crw-r-----  1 root  operator  0x63 Nov  2 14:11 /dev/aacd1p1
crw-r-----  1 root  operator  0x5d Nov  2 12:11 /dev/aacd2
crw-r-----  1 root  operator  0x64 Nov  2 14:11 /dev/aacd2p1
In dmesg.boot all disks show as:
Code:
ses0 at aacp2 bus 0 scbus2 target 0 lun 0
ses0: <ADAPTEC Virtual SGPIO  0 0001> Fixed Enclosure Services SCSI-5 device
ses0: 3.300MB/s transfers
ses0: SCSI-3 ENC Device
ses1 at aacp2 bus 0 scbus2 target 1 lun 0
ses1: <ADAPTEC Virtual SGPIO  1 0001> Fixed Enclosure Services SCSI-5 device
ses1: 3.300MB/s transfers
ses1: SCSI-3 ENC Device
pass0 at aacp0 bus 0 scbus0 target 0 lun 0
pass0: <HITACHI HUS156030VLS600 A5D0> Fixed Uninstalled SCSI-6 device
pass0: 3.300MB/s transfers
All information about the array's metadata is hidden from the system and cannot be seen in any way. Regarding LUN sizes, there are internal alignment mechanisms, so when I create a LUN of 256 GB, the controller always corrects it to:

Code:
Logical device number 1
Parity space                             : 262143 MB ( 255,999 GB )

But gpart shows it as:

Code:
Providers:
1. Name: aacd1p1
    Mediasize: 274876824064 (256G)
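
(Those numbers are actually consistent: 274876824064 bytes is roughly 262143 MB, i.e. about 255.999 GB, which gpart simply rounds up and displays as 256G.)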

But with SMP architectures and many CPU cores (up to 12 in our case), FreeBSD 10.0 shows up to 4x more performance than 9.2, so we must use FreeBSD 10.0.
 
For comparison, here is an analogous system with FreeBSD 10 installed.

A system with 10 physical HITACHI HUS724040ALS640 HDDs combined into 4 LUNs.

dmesg.boot:
Code:
da0 at aacraidp0 bus 0 scbus8 target 0 lun 0
da0: <Adaptec Array V1.0> Fixed Direct Access SCSI-4 device
da0: Serial Number 0000000000
da0: 300.000MB/s transfers
da0: Command Queueing enabled
da0: 1048585MB (2147502080 512 byte sectors: 255H 63S/T 133675C)
da1 at aacraidp0 bus 0 scbus8 target 1 lun 0
da1: <Adaptec Array V1.0> Fixed Direct Access SCSI-4 device
da1: Serial Number 0000000101
da1: 300.000MB/s transfers
da1: Command Queueing enabled
da1: 4194315MB (8589957120 512 byte sectors: 255H 63S/T 534700C)
da2 at aacraidp0 bus 0 scbus8 target 2 lun 0
da2: <Adaptec Array V1.0> Fixed Direct Access SCSI-4 device
da2: Serial Number 0000000202
da2: 300.000MB/s transfers
da2: Command Queueing enabled
da2: 6291455MB (12884899840 512 byte sectors: 255H 63S/T 802047C)
da3 at aacraidp0 bus 0 scbus8 target 3 lun 0
da3: <Adaptec Array V1.0> Fixed Direct Access SCSI-4 device
da3: Serial Number 0000000303
da3: 300.000MB/s transfers
da3: Command Queueing enabled
da3: 7537645MB (15437096960 512 byte sectors: 255H 63S/T 960914C)
da4 at aacraidp1 bus 0 scbus9 target 24 lun 0
da4: <HGST HUS724040ALS640 A1C4> Fixed Direct Access SCSI-6 device (offline)
da4: Serial Number         PCH29B5X
da4: 300.000MB/s transfers
da4: Command Queueing enabled
da4: 3815447MB (7814037168 512 byte sectors: 255H 63S/T 486401C)
da5 at aacraidp1 bus 0 scbus9 target 25 lun 0
da5: <HGST HUS724040ALS640 A1C4> Fixed Direct Access SCSI-6 device (offline)
da5: Serial Number         PCH25ZGX
da5: 300.000MB/s transfers
da5: Command Queueing enabled
da5: 3815447MB (7814037168 512 byte sectors: 255H 63S/T 486401C)
da6 at aacraidp1 bus 0 scbus9 target 26 lun 0
da6: <HGST HUS724040ALS640 A1C4> Fixed Direct Access SCSI-6 device (offline)
da6: Serial Number         PCH264DX
da6: 300.000MB/s transfers
da6: Command Queueing enabled
da6: 3815447MB (7814037168 512 byte sectors: 255H 63S/T 486401C)
da7 at aacraidp1 bus 0 scbus9 target 27 lun 0
da7: <HGST HUS724040ALS640 A1C4> Fixed Direct Access SCSI-6 device (offline)
da7: Serial Number         PCH2984X
da7: 300.000MB/s transfers
da7: Command Queueing enabled
da7: 3815447MB (7814037168 512 byte sectors: 255H 63S/T 486401C)
da8 at aacraidp1 bus 0 scbus9 target 28 lun 0
da8: <HGST HUS724040ALS640 A1C4> Fixed Direct Access SCSI-6 device (offline)
da8: Serial Number         PCH25YXX
da8: 300.000MB/s transfers
da8: Command Queueing enabled
da8: 3815447MB (7814037168 512 byte sectors: 255H 63S/T 486401C)
da9 at aacraidp1 bus 0 scbus9 target 29 lun 0
da9: <HGST HUS724040ALS640 A1C4> Fixed Direct Access SCSI-6 device (offline)
da9: Serial Number         PCH1TWHX
da9: 300.000MB/s transfers
da9: Command Queueing enabled
da9: 3815447MB (7814037168 512 byte sectors: 255H 63S/T 486401C)

Can you see this? Disks da0 to da3 reflect the LUNs created on the array. But what are disks da4 to da9, and why are there only six of them when 10 drives are physically installed? And the total number of devices shown in dmesg is exactly 10: da0 to da9.

But some of them are "Adaptec Array V1.0" devices, while others are "HGST HUSXX" drives marked as "offline".
 