ZFS GPT table corrupt

I have a NAS running FreeBSD 10.1 with 3 disks: ada2 is the boot device, ada0 and ada1 are a ZFS mirror.

dmesg shows this for the ZFS mirror:
Code:
GEOM: ada0: the primary GPT table is corrupt or invalid.
GEOM: ada0: using the secondary instead -- recovery strongly advised.

GEOM: ada1: the primary GPT table is corrupt or invalid.
GEOM: ada1: using the secondary instead -- recovery strongly advised.
I tried to recover the GPT tables with gpart recover but gpart(8) won't find the ZFS mirror.

gpart show result (finds only the boot disk):
Code:
=>       34  625142381  ada2  GPT  (298G)
         34       1024     1  freebsd-boot  (512K)
       1058    8388608     2  freebsd-swap  (4.0G)
    8389666  616752749     3  freebsd-zfs  (294G)
gpart list result:
Code:
Geom name: ada2
modified: false
state: OK
fwheads: 16
fwsectors: 63
last: 625142414
first: 34
entries: 128
scheme: GPT
Providers:
1. Name: ada2p1
  Mediasize: 524288 (512K)
  Sectorsize: 512
  Stripesize: 0
  Stripeoffset: 17408
  Mode: r0w0e0
  rawuuid: 781290a4-8fb8-11e4-b9cd-9cb65404591a
  rawtype: 83bd6b9d-7f41-11dc-be0b-001560b84f0f
  label: gptboot0
  length: 524288
  offset: 17408
  type: freebsd-boot
  index: 1
  end: 1057
  start: 34
2. Name: ada2p2
  Mediasize: 4294967296 (4.0G)
  Sectorsize: 512
  Stripesize: 0
  Stripeoffset: 541696
  Mode: r1w1e0
  rawuuid: 782b183f-8fb8-11e4-b9cd-9cb65404591a
  rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b
  label: swap0
  length: 4294967296
  offset: 541696
  type: freebsd-swap
  index: 2
  end: 8389665
  start: 1058
3. Name: ada2p3
  Mediasize: 315777407488 (294G)
  Sectorsize: 512
  Stripesize: 0
  Stripeoffset: 541696
  Mode: r1w1e2
  rawuuid: 78436d97-8fb8-11e4-b9cd-9cb65404591a
  rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
  label: zfs0
  length: 315777407488
  offset: 4295508992
  type: freebsd-zfs
  index: 3
  end: 625142414
  start: 8389666
Consumers:
1. Name: ada2
  Mediasize: 320072933376 (298G)
  Sectorsize: 512
  Mode: r2w2e4
How can I recover the primary GPT tables for ada0 and ada1?
 
Please post the output of zpool status. If ZFS uses the whole disks there won't be a GPT table (or any other partition scheme) at all.
 
zpool status
Code:
  pool: tank
 state: ONLINE
  scan: scrub repaired 0 in 6h9m with 0 errors on Fri Jun 19 05:54:05 2015
config:


        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            ada0    ONLINE       0     0     0
            ada1    ONLINE       0     0     0


errors: No known data errors
And yes, ZFS uses the whole disks.
 
Should I just ignore those error messages about the GPT tables then? And not even the secondary GPT tables are actually being used, correct?
 
You created the pool directly on the devices ada0 and ada1. That means the pool was created on the entire disks (ignoring partitions) rather than on partitions, which you would need if the disks were also supposed to hold other things such as boot code, and of course the old GPT information was overwritten.

You could have created the ZFS pool on partitions instead:
(striped) zpool create tank ada0p3 ada1p3
(mirror) zpool create tank mirror ada0p3 ada1p3
If you had set labels with gpart add ... -l zdisk0 ada0, then:
(mirror) zpool create tank mirror /dev/gpt/zdisk0 /dev/gpt/zdisk1
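For reference, a minimal sketch of preparing such labelled partitions before building the pool (the single freebsd-zfs partition per disk and the zdisk0/zdisk1 label names are only examples; a real boot disk would also need freebsd-boot and freebsd-swap partitions):
Code:
gpart create -s gpt ada0
gpart add -t freebsd-zfs -l zdisk0 ada0
gpart create -s gpt ada1
gpart add -t freebsd-zfs -l zdisk1 ada1
zpool create tank mirror /dev/gpt/zdisk0 /dev/gpt/zdisk1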

In your case, zpool status -v would then look something like this:

Code:
zpool status -v
  pool: tank
state: ONLINE
  scan: scrub repaired 0 in 1h11m with 0 errors on Tue Jun 23 11:30:01 2015
config:

        NAME          STATE     READ WRITE CKSUM
        tank          ONLINE       0     0     0
          mirror-0    ONLINE       0     0     0
            ada0p3    ONLINE       0     0     0
            ada1p3    ONLINE       0     0     0

errors: No known data errors
Or, if you used labels:
Code:
zpool status -v
  pool: tank
state: ONLINE
  scan: scrub repaired 0 in 1h11m with 0 errors on Tue Jun 23 11:30:01 2015
config:

        NAME            STATE     READ WRITE CKSUM
        tank            ONLINE       0     0     0
          mirror-0      ONLINE       0     0     0
            gpt/zdisk0  ONLINE       0     0     0
            gpt/zdisk1  ONLINE       0     0     0

errors: No known data errors
Good luck.
 
Because disks can vary in exact size, ZFS leaves some space unused at the end of the disk. (I don't know of an easy way to find out how much. Somebody pointed me at the source once, but I can't find that now. I think it would have to be at least a megabyte to allow for disk variance, but that is an estimate.)

The backup copy of the GPT is stored at the very end of the disk. The boot code tries to verify GPT tables, and is likely finding that leftover backup GPT at the end of the disk.

The trick is clearing that backup GPT without damaging the ZFS data. Do not attempt to do that without a full, verified backup of that ZFS mirror. After that, use diskinfo -v ada0 to get the mediasize in sectors. The standard backup GPT is 33 blocks long, so erasing the last 33 blocks on the disk with dd(1) should be enough to avoid the error without interfering with the ZFS data. dd(1) does not have a way to say "the last n blocks", so the seek= option has to be used to seek to (mediasize in blocks - 33).

WARNING: make a full, verified backup of everything on the disk first!
Example with a 250G drive:

Code:
% diskinfo -v ada0 | grep 'mediasize in sectors'
   500118192     # mediasize in sectors
500118192 - 33 = 500118159
Code:
dd if=/dev/zero of=/dev/ada0 bs=512 count=33 seek=500118159
33+0 records in
33+0 records out
16896 bytes transferred in 0.064181 secs (263255 bytes/sec)

Repeat the procedure for ada1. Do not just reuse the same dd(1) command because the two disks might not have identical block counts.
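For example, if ada1 happened to report a slightly different sector count (the number below is made up for illustration), the calculation and command would be:
Code:
% diskinfo -v ada1 | grep 'mediasize in sectors'
    500118191     # mediasize in sectors
500118191 - 33 = 500118158
dd if=/dev/zero of=/dev/ada1 bs=512 count=33 seek=500118158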

In the future, the easy way is to erase GPT metadata before reusing the disk. That can be done with gpart destroy (see gpart(8)).
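For example (assuming the disk to be reused shows up as ada0 and is not part of an active pool; -F forces the destroy even if the table still contains partitions):
Code:
gpart destroy -F ada0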
 
OK, I get it now.
I have guides (unfortunately in Russian, but you can see the commands) on how to install the OS on a ZFS pool with GPT:
http://wcsn.livejournal.com/#post-wcsn-1832
and how to do it without GPT:
http://wcsn.livejournal.com/#post-wcsn-2246
All the commands are in this script (I use it to install the OS from scratch):
http://wcsn.livejournal.com/#post-wcsn-2442
Adapt it to your situation and that's it. :)
(Copy-pasting from LiveJournal can distort the text of the script and insert artifacts.)

Good luck. :)
 
Thanks for the detailed information, it helped me understand the whole issue.

What is the correct way (which commands) to add a new drive to the existing mirror? I want to resilver onto the new drive, then detach ada0 and ada1 from the pool so I can fix the GPT on them, or even create a new ZFS filesystem.
 
Ideally you want to zero the whole disk, but with large disks that is going to take forever. At the very least, make sure there is no partition scheme on the disks before adding them. If there is, gpart destroy adaX should clear it.
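A minimal sketch of the attach/resilver/detach sequence, assuming the new disk shows up as ada3 (a hypothetical name) and ada0 is swapped out first:
Code:
zpool attach tank ada0 ada3   # adds ada3 to the existing mirror and starts a resilver
zpool status tank             # wait here until the resilver has completed
zpool detach tank ada0        # then remove the old disk from the mirror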
 
The trick is clearing that backup GPT without damaging the ZFS data. Do not attempt to do that without a full, verified backup of that ZFS mirror. After that, use diskinfo -v ada0 to get the mediasize in sectors. The standard backup GPT is 33 blocks long, so erasing the last 33 blocks on the disk with dd(1) should be enough to avoid the error without interfering with the ZFS data. dd(1) does not have a way to say "the last n blocks", so the seek= option has to be used to seek to (mediasize in blocks - 33).
....
Code:
% diskinfo -v ada0 | grep 'mediasize in sectors'
   500118192     # mediasize in sectors
500118192 - 33 = 500118159
Code:
dd if=/dev/zero of=/dev/ada0 bs=512 count=33 seek=500118159
33+0 records in
33+0 records out
16896 bytes transferred in 0.064181 secs (263255 bytes/sec)
Old thread, but still relevant.
Before zeroing, I dumped that 33-sector tail to a file with dd(1) and then inspected it with hexedit. I found one specific block containing the "EFI" string, while everything else was already filled with zeros; that narrowed it down to a single block, which I then zeroed with count=1 in dd(1).
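Roughly like this, using hexdump(1) from the base system instead of hexedit (the skip value is just the example number calculated earlier with diskinfo; use your own disk's value, and note that "EFI PART" is the GPT header signature):
Code:
# copy the 33-sector tail to a file for inspection
dd if=/dev/ada0 of=/tmp/gpt-tail.bin bs=512 skip=500118159 count=33
# look for the GPT header signature to find the block that still holds it
hexdump -C /tmp/gpt-tail.bin | grep 'EFI PART'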
 