Hi guys... recently I created a new pool on my home server...
Code:
zpool create mynewpool mirror /dev/da0 /dev/da1
Everything worked like a charm. I also created datasets and put the data in place. Unfortunately, after a reboot I got this error:
Code:
GEOM: da0: the primary GPT table is corrupt or invalid.
GEOM: da0: using the secondary instead -- recovery strongly advised.
Then I went back to the Handbook, and it looks like there is also a way to create a pool by first creating GPT tables and partitions.
So which is the correct way of doing it? In case of a disk failure, is there any benefit to having GPT partitions instead of putting ZFS directly on the whole disk?
Thank you!
Later edit:
I found a way to get rid of that error.

ZFS - GPT table corrupt (forums.FreeBSD.org)
I have a NAS running FreeBSD 10.1 with 3 disks: ada2 is the boot device, ada0 and ada1 are a ZFS mirror. dmesg shows this for the ZFS mirror: GEOM: ada0: the primary GPT table is corrupt or invalid. GEOM: ada0: using the secondary instead -- recovery strongly advised. GEOM: ada1: the primary...
The standard backup GPT is 33 blocks long, so erasing the last 33 blocks on the disk with dd(1) should be enough to avoid the error without interfering with the ZFS data. dd(1) does not have a way to say "the last n blocks", so the seek= option has to be used to seek to (mediasize in blocks - 33).
Code:
diskinfo -v ada0 | grep 'mediasize in sectors'
500118192 # mediasize in sectors
500118192 - 33 = 500118159
Code:
dd if=/dev/zero of=/dev/ada0 bs=512 count=33 seek=500118159
33+0 records in
33+0 records out
16896 bytes transferred in 0.064181 secs (263255 bytes/sec)
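To avoid doing the subtraction by hand, the two steps above can be combined in a small script. This is only a sketch under the assumptions in this thread (device ada0, 512-byte sectors, and `diskinfo`'s default output putting "mediasize in sectors" in the fourth field; double-check with `diskinfo -v` on your system). It prints the dd command instead of running it, so you can inspect it before wiping anything:

```shell
#!/bin/sh
# Sketch: compute the dd seek offset for wiping the backup GPT.
# Assumes /dev/ada0 and 512-byte sectors; verify before running the printed command.
disk=/dev/ada0
sectors=$(diskinfo "$disk" | awk '{print $4}')  # field 4 = mediasize in sectors
seek=$((sectors - 33))                          # backup GPT is the last 33 blocks
echo "dd if=/dev/zero of=$disk bs=512 count=33 seek=${seek}"
```

For the disk in this thread (500118192 sectors), the printed seek value is 500118159, matching the manual calculation above.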