The correct way to create a ZFS mirror pool

Hi guys... recently I created a new pool on my home server...
Code:
zpool create mynewpool mirror /dev/da0 /dev/da1

everything worked like a charm... I also created datasets and put the data in place... unfortunately, after a reboot I got this error

Code:
GEOM: da0: the primary GPT table is corrupt or invalid.
GEOM: da0: using the secondary instead -- recovery strongly advised.

then I went back to the handbook... it looks like there is also a way of creating a pool by first creating the GPT tables and partitions.
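If I read the handbook right, that partitioned variant looks roughly like the sketch below (the label names are illustrative, not from the handbook):

Code:
gpart create -s gpt da0
gpart create -s gpt da1
gpart add -t freebsd-zfs -l disk0 da0
gpart add -t freebsd-zfs -l disk1 da1
zpool create mynewpool mirror gpt/disk0 gpt/disk1

The pool is then built on /dev/gpt/disk0 and /dev/gpt/disk1, so gpart show or ls /dev/gpt later tells you what each disk holds.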

So which one is the correct way of doing it? In case of a disk failure, is there any benefit to having GPT partitions instead of putting ZFS directly on the raw disk?

Thank you!

Later edit:

I found a way to get rid of that error.


The standard backup GPT is 33 blocks long, so erasing the last 33 blocks on the disk with dd(1) should be enough to avoid the error without interfering with the ZFS data. dd(1) does not have a way to say "the last n blocks", so the seek= option has to be used to seek to (mediasize in blocks - 33).

Code:
diskinfo -v ada0 | grep 'mediasize in sectors'
   500118192     # mediasize in sectors

500118192 - 33 = 500118159

Code:
dd if=/dev/zero of=/dev/ada0 bs=512 count=33 seek=500118159
33+0 records in
33+0 records out
16896 bytes transferred in 0.064181 secs (263255 bytes/sec)
 
I had problems creating a ZFS mirror with the FreeBSD installer; it would not allow two NVMe drives to form a mirror, even when the smaller drive (500 GB) was listed first and the 512 GB one second. Therefore, I created a single-disk (striped) pool with the smaller drive and followed these instructions. It worked and rebooted fine.
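For anyone in the same spot: turning a single-disk pool into a mirror is normally done with zpool attach(8). A minimal sketch, with placeholder pool and device names (yours will differ):

Code:
# attach a second device to the existing single-disk vdev to form a mirror
zpool attach zroot nda0p3 nda1p3
zpool status zroot    # watch the resilver progress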
 

Attachments

  • zfsmirror.txt
So which one is the correct way of doing it?
They both are, with or without partition tables. But if you are going to use the whole disk with ZFS, you'd better make sure the old partition table on that disk is completely removed ( gpart destroy ....). Otherwise some of the old metadata will get picked up, which triggers the "the primary GPT table is corrupt or invalid." error.
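A minimal sketch, assuming the disk is da0 and is not in use by any pool:

Code:
gpart destroy -F da0           # remove both the primary and the backup GPT
zpool labelclear -f /dev/da0   # also clear any stale ZFS labels (errors harmlessly if none)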
 
and in case of a disaster... which one is more useful? using a GPT table with slices or the entire disk as ZFS?
 
I ran across this not long after it was written. It made sense to me, so I've been following it since then.
Not sure if it's better or worse than whole disk, but given the way some SSDs may want to reallocate blocks and such without you knowing, it can help to partition and leave some free space that the device can use, along the lines of the sketch below.
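A sketch of that idea; the device names, sizes, and labels are made up for illustration:

Code:
# leave part of the SSD unpartitioned as extra headroom for block remapping
gpart create -s gpt ada0
gpart add -t freebsd-zfs -s 440g -l ssd0 ada0
# repeat for ada1 with label ssd1, then build the mirror on the partitions:
zpool create mypool mirror gpt/ssd0 gpt/ssd1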

 
and in case of a disaster... which one is more useful?
When the proverbial feces hits the fan it's not going to matter much. It's mostly useful for us humans: having a partition makes identifying what's on the disk easier if you happen to put that disk in a system that doesn't support ZFS. Without a partition table you (or the system you put it in) might think it's empty.
using a GPT table with slices
GPT only has partitions, MBR has slices and partitions.
 
Well... putting those disks in a system without ZFS would not be the case.
I'm thinking more of the case where one of the disks fails and I need to replace it... and perhaps I can't find the same brand/model, so I put in another brand/model whose capacity is a couple of bytes smaller than the old one... is that doable? Also, if at some point I want to replace the disks with larger ones, is that doable with zpool autoexpand=on?
 
I put in another brand/model whose capacity is a couple of bytes smaller than the old one... is that doable?
If it's too small, no.
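One common mitigation, sketched below with illustrative numbers, is to build the mirror on partitions sized a bit below the disk capacity from day one, so a marginally smaller replacement still fits. This only helps if the pool was created on such partitions from the start:

Code:
# a round 465g partition on a "500 GB" disk leaves slack for size variance
gpart create -s gpt da0 && gpart create -s gpt da1
gpart add -t freebsd-zfs -s 465g -l mirror-0 da0
gpart add -t freebsd-zfs -s 465g -l mirror-1 da1
zpool create mynewpool mirror gpt/mirror-0 gpt/mirror-1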

if at some point I want to replace the disks with larger ones, is that doable with zpool autoexpand=on?
Yes. It's not going to matter whether there's a partition table with a partition or not. The partition on the new, bigger disk does need to span the entire disk, obviously.
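A sketch of the grow-by-replacing workflow; the pool name follows the earlier example and da2/da3 are hypothetical larger disks:

Code:
zpool set autoexpand=on mynewpool
zpool replace mynewpool da0 da2    # swap in the first larger disk, wait for resilver
zpool replace mynewpool da1 da3    # then the second
zpool online -e mynewpool da2      # expand manually if it didn't happen automatically
zpool online -e mynewpool da3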
 
GEOM: da0: the primary GPT table is corrupt or invalid.
GEOM: da0: using the secondary instead -- recovery strongly advised.
Looks like the disk had GPT before you used it for ZFS (in whole disk / unpartitioned mode).
ZFS overwrote the primary table at the beginning of the disk but the secondary table at the end survived.
FreeBSD sees the secondary table and warns you about the situation.
It's always a good idea to zero a few megs at the start and end of a disk before re-using it for something completely different.
That would clean up any previous metadata from partitioning schemes, RAID controllers, etc.
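A sketch of that cleanup, assuming da0 with 512-byte sectors (destructive, so double-check the device name first):

Code:
# mediasize in sectors is the fourth field of plain diskinfo(8) output
sectors=$(diskinfo da0 | awk '{print $4}')
dd if=/dev/zero of=/dev/da0 bs=512 count=8192                            # first 4 MiB
dd if=/dev/zero of=/dev/da0 bs=512 count=8192 seek=$(( sectors - 8192 )) # last 4 MiB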
 