Installing to mirrored CF cards (GPT and gmirror)

I'm doing my first install of FreeBSD soon and my plan is to boot my first FreeBSD system off mirrored CF cards.

My understanding is that the current install utility is not well suited to doing complicated things at install time. I'm OK with taking my install and mirroring it later.

It looks like gmirror is the preferred way to do mirroring and GPT is the preferred way to do partitioning. However, I have read that they can conflict, because they both try to use the same area at the end of the disk.

Is it possible to set up gmirror first and then set GPT to only take up the first X blocks? Should I maybe just mirror my root partition? What happens with boot and swap data then? Am I better off just not using GPT and/or gmirror? What about VVM?

Thanks!

PS: I did consider using ZFS for root; have read some guides on how to do it, but it looks like that would just be more complex!
 
I take it that those CF cards are very small? You could just as well use MBR partitioning on them; GPT won't buy you anything but a slightly less complicated partitioning setup.
 
ctengel said:
I'm doing my first install of FreeBSD soon and my plan is to boot my first FreeBSD system off mirrored CF cards.

My understanding is that the current install utility is not well suited to doing complicated things at install time. I'm OK with taking my install and mirroring it later.

It looks like gmirror is the preferred way to do mirroring and GPT is the preferred way to do partitioning. However, I have read that they can conflict, because they both try to use the same area at the end of the disk.

Is it possible to set up gmirror first and then set GPT to only take up the first X blocks?

No, that's the configuration that doesn't work. The other way around works: create GPT partitions on both drives, then mirror those partitions instead of the whole drive.
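As a rough sketch of that working order (the device names ada0/ada1, the partition sizes, and the label name gm0root are illustrative assumptions, not from the thread):

```shell
# Partition both disks identically with GPT first.
gpart create -s gpt ada0
gpart add -t freebsd-boot -s 64k ada0
gpart add -t freebsd-swap -s 512m ada0
gpart add -t freebsd-ufs ada0
gpart bootcode -b /boot/pmbr -p /boot/gptboot -i 1 ada0
# ...repeat the same commands for ada1...

# Then mirror the partitions, not the raw disks; gmirror's metadata
# lands in the last block of each partition, clear of the backup GPT.
gmirror label -v gm0root ada0p3 ada1p3
newfs -U /dev/mirror/gm0root
```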

Should I maybe just mirror my root partition? What happens with boot and swap data then? Am I better off just not using GPT and/or gmirror? What about VVM?

It depends. The simplest and most compatible setup is creating a mirror and partitioning it with MBR. (Don't use the Handbook mirror procedure; it means well, but it creates a broken layout.) GPT works with gmirror if you mirror partitions instead of disk devices; see gmirror With Disk Partitions. It's not necessary to create all those partitions; a 9.0-style layout with just a boot partition, the / filesystem, and swap would work.
 
Use gmirror to mirror the entire CF disks, creating /dev/mirror/gm0.

Then create a standard MBR slice on the mirror device, creating /dev/mirror/gm0s1.

Then create a standard BSD label inside the MBR slice, and create all your normal UFS/swap partitions, creating /dev/mirror/gm0s1a etc.
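Put together as commands, the three steps above might look like this (a sketch assuming ada0/ada1 device names and gpart-style slicing; adjust the sizes before use):

```shell
# Step 1: mirror the raw disks, creating /dev/mirror/gm0.
gmirror label -v gm0 ada0 ada1

# Step 2: a standard MBR slice on the mirror -> /dev/mirror/gm0s1.
gpart create -s mbr mirror/gm0
gpart add -t freebsd mirror/gm0
gpart bootcode -b /boot/mbr mirror/gm0

# Step 3: a BSD label inside the slice -> /dev/mirror/gm0s1a etc.
gpart create -s bsd mirror/gm0s1
gpart add -t freebsd-ufs -s 3g mirror/gm0s1    # gm0s1a: /
gpart add -t freebsd-swap mirror/gm0s1         # gm0s1b: swap
gpart bootcode -b /boot/boot mirror/gm0s1
newfs -U /dev/mirror/gm0s1a
```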
 
@kpa
The CF cards are 4GB. I guess that is small. I've never had any issues with DOS fdisk/MBR at that size. Is the general push towards GPT these days more to do with size or number of partitions or something else? I don't plan on doing much of anything fancy with many partitions, unless there is some benefit.

@wblock@
Thanks for the info. I take it that then if I had gmirrored partitions within GPT, a typical disk layout would go something like: (with only 2 as opposed to 3 for simplicity's sake)
-partition 1 data
-partition 1 gmirror superblock
-partition 2 data
-partition 2 gmirror superblock
-GPT info with 2 partition layout

And then with MBR (which I am far more familiar with; I don't think I even have any GPT systems at this time):
-MBR (first 512 bytes)
-partition 1 data
-partition 1 gmirror superblock
-partition 2 data
-partition 2 gmirror superblock

Do I understand these correctly?

@phoenix:
I guess you are suggesting this way because MBR allows the whole disk to be included in the mirror without conflicting metadata?
Is it harder to boot that way (a BSD partition within an MBR slice)?
And once again, to make sure I understand the on-disk layout (admittedly I know nothing of BSD labels/partitions, so I just lumped all of that together in the middle):
-MBR
-(BSD label and partitions, within MBR "slice"/partition 1)
-gmirror info for whole disk

Looks like I have a few methods to think about and choose from, thanks for the ideas all!

PS: I take back what I said about knowing nothing of BSD disklabels; some brief research indicates they seem similar to Solaris VTOCs/SMI labels, which I guess probably date back to the BSD-like SunOS days. Wild guess, though. What confuses me most is the terminology: BSD people seem to call MBR partitions "slices" and then have BSD "partitions", yet in Solaris VTOC talk, those same things are called slices!
 
ctengel said:
@kpa
The CF cards are 4GB. I guess that is small. I've never had any issues with DOS fdisk/MBR at that size. Is the general push towards GPT these days more to do with size or number of partitions or something else? I don't plan on doing much of anything fancy with many partitions, unless there is some benefit.

GPT is a cleaner partitioning setup. However, the only time you need to use it is if you want to create partitions over 2 TB in size, or located across the 2 TB boundary. MBR partitions cannot be larger than 2 TB, nor can they cross the 2 TB boundary.



@phoenix:
I guess you are suggesting this way because MBR allows the whole disk to be included in the mirror without conflicting metadata?

Correct. It's much simpler this way. You mirror the entire disk, then treat the mirror device like any other normal disk: slice, partition, format, carry on.

Is it harder to boot that way (a BSD partition within an MBR slice)?

That's the normal way to boot a FreeBSD system. :)

And once again, to make sure I understand the on-disk layout (admittedly I know nothing of BSD labels/partitions, so I just lumped all of that together in the middle):
-MBR
-(BSD label and partitions, within MBR "slice"/partition 1)
-gmirror info for whole disk

Yeah, simple as that.
 
ctengel said:
I take it that then if I had gmirrored partitions within GPT, a typical disk layout would go something like: (with only 2 as opposed to 3 for simplicity's sake)
-partition 1 data
-partition 1 gmirror superblock
-partition 2 data
-partition 2 gmirror superblock
-GPT info with 2 partition layout

Not quite:

GPT primary partition table (and PMBR)
freebsd-boot
freebsd-swap
freebsd-ufs (last block used by gmirror metadata)
GPT backup partition table

The boot partition can easily be recreated, and the swap partition contents are recreated by the system, so mirroring those is optional. The UFS partition is the filesystem. When it is mirrored, gmirror reserves the last block of the partition for its metadata. The mirror provider then reports a size of n-1 blocks, so the metadata won't be overwritten.
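One way to see that reservation for yourself (hypothetical device names; the reported sizes will vary with the actual hardware):

```shell
# Label a partition, then compare sizes: the mirror provider reports
# one sector less than the raw partition, the sector gmirror keeps
# for its own metadata.
gmirror label -v gm0 ada0p3
diskinfo /dev/ada0p3          # raw partition size
diskinfo /dev/mirror/gm0      # one sector smaller
```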

And then with MBR (which I am far more familiar with; I don't think I even have any GPT systems at this time):
-MBR (first 512 bytes)
-partition 1 data
-partition 1 gmirror superblock
-partition 2 data
-partition 2 gmirror superblock

Do I understand these correctly?

MBR and gmirror are traditionally used to mirror the whole drive:

MBR
partition 1
partition 2
gmirror metadata for the drive
 
phoenix said:
GPT is a cleaner partitioning setup. However, the only time you need to use it is if you want to create partitions over 2 TB in size, or located across the 2 TB boundary. MBR partitions cannot be larger than 2 TB, nor can they cross the 2 TB boundary.

GPT also makes using more than four partitions possible without resorting to unpleasantness like extended partitions or BSD disklabels.
 
Mirroring aside, is my understanding correct that the norm is shifting (or has shifted) from having a BSD disklabel (with all the partitions you need: boot, root, swap, etc.) inside a single MBR "slice", with some sort of bootloader in the MBR pointing to the boot slice in the disklabel, to a GPT setup? (which is generally encapsulated within a "protective MBR")

What I'm still a bit confused about is what happens when a gmirror is created on an existing UFS filesystem. Is it destructively shrunk automatically to make room for the metadata?

(and then as a follow up to that, does the "gm0" "provider" include that metadata, or is it just the disk up to that?)
 
ctengel said:
Mirroring aside, is my understanding correct that the norm is shifting (or has shifted) from having a BSD disklabel (with all the partitions you need: boot, root, swap, etc) inside of a single MBR "slice" (with some sort of bootloader in the MBR pointing to the boot slice in the disklabel) to a GPT setup?

Yes.

(which generally is encapsulated within a "protective MBR")

No, the PMBR doesn't really encapsulate anything (although it appears as a one-slice MBR of at most 2 TB); it's just a backwards-compatibility measure that lets a GPT disk boot on a standard BIOS.

What I'm still a bit confused about is what happens when a gmirror is created on an existing UFS filesystem. Is it destructively shrunk automatically to make room for the metadata?

A mirror is a container, with the filesystem inside it. The mirror is created first, then the filesystem is created inside that container. Unfortunately, the Handbook procedure takes a shortcut that overwrites the last block of an existing filesystem with the mirror metadata. That's a quick and well-intentioned hack. A better way to do that: http://lists.freebsd.org/pipermail/freebsd-doc/2012-January/019449.html

(and then as a follow up to that, does the "gm0" "provider" include that metadata, or is it just the disk up to that?)

It's a container. The metadata is outside it ("meta"). Compare the size in sectors of the mirror with the size in sectors of the raw device.
 
Thanks. Not quite sure why that idea didn't occur to me initially (create the second disk as a single-disk "mirror", dump everything over, then add the first disk to the mirror).
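That migration might be sketched like this (hedged: the device names, the gm0 label, and the dump/restore copy are assumptions; take a backup first):

```shell
# 1. Create a one-disk "mirror" on the new second disk.
gmirror load                       # or geom_mirror_load="YES" in
                                   # /boot/loader.conf for boot time
gmirror label -v gm0 ada1

# 2. Slice/partition/newfs the mirror, then copy the live system
#    over, e.g. with dump/restore:
mount /dev/mirror/gm0s1a /mnt
dump -0aL -f - / | (cd /mnt && restore -rf -)
# (edit /mnt/etc/fstab to mount /dev/mirror/gm0s1a as /)

# 3. Reboot from the mirror, then add the original disk to it:
gmirror insert gm0 ada0
```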
 
This page is also much nicer than the Handbook pages on gmirror. And the scripts provided make it super simple to play around with, and to copy configurations around between systems. I used it for the first 3 storage boxes I made (UFS on gmirror on CF disks).
 