ZFS GEOM: mfid1: corrupt or invalid GPT detected

I have a Dell PowerEdge 2900 III with a Dell PERC 6 MegaRAID SAS controller (driver ver. 3.00).

On a RAID1 (bootable) volume I have FreeBSD 7.1, the latest stable version, with the default partition scheme and UFS, and on the RAID5 volume I have a '/tank' partition made with ZFS.

The system works fine; the only problem is that I keep getting these messages:

Code:
GEOM: mfid1: corrupt or invalid GPT detected.
GEOM: mfid1: GPT rejected -- may not be recoverable.
Any idea how to solve this, or at least stop logging that message?

regards.
 
[Solved] Remove MBR from disk

Removing the MBR from the disk worked. What I did was rebuild the array and then only ran:

Code:
zpool create tank /dev/mfid1
zfs create tank/jails

The first time I tried using gpt(8):
# gpt create -f /dev/mfid1
but for some reason it had problems, and the whole time I kept getting the mfid1: corrupt or invalid GPT detected message.

Removing the MBR/PMBR and then just recreating the ZFS pool worked.
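
In case it helps anyone trying to reproduce this, here is roughly what removing the MBR can look like. The device name /dev/mfid1 and the 512-byte sector size are assumptions, and this wipes whatever partition table is on the disk, so only do it on a disk you are about to hand to ZFS anyway.

Code:
# let dd write to the raw provider (may not be needed on a freshly built array)
sysctl kern.geom.debugflags=0x10
# sector 0 holds the MBR/PMBR; zeroing it removes the partition table signature
dd if=/dev/zero of=/dev/mfid1 bs=512 count=1
# restore the default
sysctl kern.geom.debugflags=0
# then give the raw disk to ZFS as above
zpool create tank /dev/mfid1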
 
Reviving this ol' thread, as I have a very similar problem and would like to try your solution if it could be described in more detail.

I have a fresh install of 7.2 and have imported the ZFS pool I made on the previous FreeBSD 7.1 installation.

I have three 1.5 TB drives connected to the mainboard and made a pool with RAIDZ running across all three drives. The system is on a separate HD.

After importing, I get these errors on all the drives at bootup, when it is mounting the pool.

Code:
GEOM: adX: corrupt or invalid GPT detected.
GEOM: adX: GPT rejected -- may not be recoverable.

and sometimes

Code:
GEOM: adX: the primary GPT table is corrupt or invalid.
GEOM: adX: using the secondary only -- recovery suggested.

I have tried to delete the GPT and rewrite it on all disks, to no avail. This might not have been a wise approach - I don't know.

I would prefer not to have to back up the data and restore it after creating a new pool. Although the important stuff is always backed up, there are quite a few hundred gigs that are not, and backing up/restoring is tedious.

Thanks in advance for any help, and don't hesitate to ask for more information if I didn't provide enough.
 
ZFS doesn't always play nicely with MBR or GPT partition tables when you use the entire disk for ZFS. There's some overlap between the ZFS on-disk format and the partition table. This is a known issue with FreeBSD (see the -stable and -current mailing lists for more details).

When using the entire disk for ZFS, don't put any kind of partition table on it, whatsoever. Just let ZFS have the entire, raw, empty disk to use.

You can remove the disk from the pool, zero out the entire disk, and then re-add it to the pool, and this "error" will disappear. You can also just zero out the partition table to achieve the same result.
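
For what it's worth, GPT keeps a second copy of the table at the end of the disk, so "zero out the partition table" means clearing both ends, not just sector 0. A rough sketch of that, assuming a disk called /dev/ada1 with 512-byte sectors (these are not commands from this thread, and they destroy any MBR/GPT on the disk):

Code:
# allow writes to a provider GEOM has open
sysctl kern.geom.debugflags=0x10
# primary copy: PMBR in sector 0, GPT header in sector 1
dd if=/dev/zero of=/dev/ada1 bs=512 count=2
# backup copy: the GPT header sits in the very last sector of the disk
last=$(( $(diskinfo /dev/ada1 | awk '{print $4}') - 1 ))
dd if=/dev/zero of=/dev/ada1 bs=512 seek=$last count=1
# restore the default
sysctl kern.geom.debugflags=0

The whole-disk ZFS label leaves its first 8 KB unused, so a couple of sectors at the start should not touch pool data, but treat this as destructive and have a backup.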
 
phoenix said:
You can also just zero out the partition table to achieve the same result.
Thanks.
Would "gpt destroy" accomplish this? After reading the mailing list, it seemes this is also something I could just ignore?
 
I'm getting the same problem, but I'm not understanding the solution.
I'm getting it by importing an already existing mirror from OpenSolaris which was given the whole disk (v13 ZFS).
Should I see the mailing list?
 
Does anyone else have a better solution than zeroing out the disk? I can't zero it out, because if I do, my HighPoint 2680 RAID card doesn't pass it through to BSD; it only lets me add it to a RAID array. Since I use ZFS, I need to format it in another computer for it to show up as "legacy" in the RAID card, so that BSD can see it.

It only happened on the last 4 disks I added to a 12 disk (3x (raidz1 x4)) pool.


It gives me the
Code:
GEOM: adX: the primary GPT table is corrupt or invalid.
GEOM: adX: using the secondary only -- recovery suggested.
message, and it works fine, but it warns me on both export and import. If recovery is "suggested", how do you actually "recover"?
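
For what it's worth, the "recovery" that message hints at is presumably the recover verb in gpt(8) (or in gpart(8) on newer releases), which rebuilds the broken copy of the table from the surviving one. I haven't tried it on a setup like this, and for a whole-disk ZFS member the advice above is to get rid of the table rather than repair it. A sketch, assuming one of the disks is ad10:

Code:
gpt recover ad10      # gpt(8) on 7.x
gpart recover ad10    # gpart(8) on newer releases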
 
My layout:
Code:
=>        34  2930277101  ada0  GPT  (1.4T)
          34         128     1  freebsd-boot  (64K)
         162        1886        - free -  (943K)
        2048     1048576     2  freebsd-swap  (512M)
     1050624  2929226511     3  freebsd-zfs  (1.4T)

I started getting the "primary GPT table corrupt, using secondary; recovery advised" error.
I run a gpt-partitioned zfs-only installation.
The error kept on coming through several reboots (so it wasn't a random fluke).

Then I "re-installed" the bootcode on the affected harddrive:
# gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 DEVICENAME

rebooted, and the error didn't show.
 
Here is a basic HOWTO of what exactly you do, since the above posts are too high-level for FreeBSD beginners.

This is the URL I went to in order to figure out how far I really need to write, since writing zeros to a whole 3 TB disk would take many hours.
http://www.linuxquestions.org/questions/linux-newbie-8/using-dd-to-zero-the-mbr-query-606489/

Before you start, you need to tell the kernel not to stop you from potentially destroying your data. ;) Without doing this, your dd command will result in an "Operation not permitted" message.
# sysctl kern.geom.debugflags=0x10
And the output of that command:
Code:
        kern.geom.debugflags: 0 -> 16


Then (I'm not sure whether this is necessary) I exported my pool (and observed the warning messages in /var/log/messages).
# zpool export tank

(Skip this step and do the next one instead.) On my not-yet-production server, I did the following as suggested by that page, expecting my ZFS to end up corrupt (beware, maybe it will be):
# dd if=/dev/zero of=/dev/label/spare1 bs=446 count=1

But I got some warning messages (in /var/log/messages), so I then did the following instead (to all disks, not just spares):
# dd if=/dev/zero of=/dev/label/spare1 bs=512 count=1

And I didn't get any warning messages. So I reimported my pool:
# zpool import tank

At this point, my ZFS pool does not appear to be corrupt. I ran a scrub, and zpool status still comes up clear of errors. (Although I can't say for sure that there is no hidden problem ...)
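
For completeness, the scrub check was just the usual (pool name tank as above):

Code:
zpool scrub tank
zpool status tank    # once the scrub finishes, look for "No known data errors"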

Now let's undo that kernel parameter change:
# sysctl kern.geom.debugflags=0x0
 