gmirror With GPT on boot volume

I'm installing a new 13.2-RELEASE system with dual M.2 Samsung EVO 990 PRO 1TB SSDs. I'm attempting to make the second SSD a gmirror array, but it fails with:

gmirror: Can't store metadata on /dev/nvd0: Operation not permitted.

From my extensive searching, this appears to be because both GPT and gmirror store their metadata at the end of the disk. Unfortunately, I can't set the disk up as MBR because the motherboard will only boot GPT partitions. And all of the Google and FreeBSD forum results I've found are either 8+ years old or are examples of gmirror on disks that are not the boot volume.

Is there an easier solution to this than complex partitioning? Ideally I'd like to convert the existing system in place to a gmirror array, but a reinstall is OK as well, since this is a fresh install. Another option would be to dd the first disk to the second, then just mount it and rsync to it every night - that would work, but a volume that boots automatically in the case of an SSD failure would be preferable to having to tell the BIOS to boot off the other disk.

My data is on a completely different set of drives, so that's not a consideration or concern. It's only the boot volume and OS.

Thanks!

-->Neil
 
You are correct that both GPT and gmirror store their metadata at the end of the disk, so they are incompatible when gmirror (or glabel) is applied to a whole disk.

I believe that you can't use a software mirror on a whole disk with an EFI partition because the firmware can write to the EFI partition while the operating system is not running, thus corrupting the O/S specific mirror.

Are you using ZFS? If so, you may have an easy way to mirror. For UFS, your solution may lie in labeling partitions (with tunefs -L) and mirroring those partitions. Warren Block shows how to do it (and warns against it if you have more than one partition because of head contention on mirror rebuilding).

But first, show us the output of gpart show.
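
If you do go the UFS route, a minimal sketch of partition-level mirroring might look like the following (device names, partition sizes and the mirror label are assumptions; adjust to your actual layout):

Code:
# Give the second disk the same GPT layout as the first (example sizes).
gpart create -s gpt nvd1
gpart add -t efi -s 200m nvd1
gpart add -t freebsd-ufs nvd1

# Mirror only the UFS partitions, not the whole disks, so the backup GPT
# header in each disk's last sector is left alone.
gmirror label -v root /dev/nvd0p2 /dev/nvd1p2
newfs -U /dev/mirror/root

# The EFI system partitions are not mirrored; each one has to be
# populated with the loader separately so either disk can boot.

Note this sketch assumes a fresh setup: labelling a partition that already holds a filesystem overwrites its last sector, so I wouldn't do that to a live root without reading up first.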
 
My understanding is that when you boot the 'UEFI OS' entry, UEFI points at the gmirror'd EFI partition via its UUID, not at the individual disks.


Code:
root@x9srl:/home/firewall # gmirror status
      Name    Status  Components
mirror/gm0  COMPLETE  ada0 (ACTIVE)
                      ada1 (ACTIVE)
root@x9srl:/home/firewall # gpart show /dev/mirror/gm0
=>      40  31276976  mirror/gm0  GPT  (15G)
        40    409600           1  efi  (200M)
    409640  30867376           2  freebsd-ufs  (15G)

Code:
root@x9srl:/home/firewall # efibootmgr
Boot to FW : false
BootCurrent: 0007
Timeout    : 1 seconds
BootOrder  : 0007, 0008, 0004
 Boot0008    UEFI: FiD 2.5 SATA10000
 Boot0004    UEFI: Built-in EFI Shell
+Boot0007*   UEFI OS

Code:
UEFI OS HD(1,GPT,51cc9663-1b06-11ec-89b9-00e0ed5d51ce,0x28,0x64000)/File(\EFI\BOOT\BOOTX64.EFI)
                      mirror/gm0p1:/EFI/BOOT/BOOTX64.EFI
 
because the firmware can write to the EFI partition
This is only true if it is mounted. Check out my how-to; I don't mount the EFI partition.
There really is no need to unless you are running efibootmgr to change something or updating the EFI loader file.
The latter could be a security concern and a good reason not to have it auto-mounted.
echo "/dev/mirror/gm0p2 / ufs rw 1 1" >> /tmp/bsdinstall_etc/fstab
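
For the rare occasions when the loader file does need updating, the ESP can be mounted temporarily and unmounted again (a sketch using the layout from my gpart output above):

Code:
# Mount the mirrored ESP only while updating the loader, then unmount it.
mount -t msdosfs /dev/mirror/gm0p1 /mnt
cp /boot/loader.efi /mnt/EFI/BOOT/BOOTX64.EFI
umount /mnt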
 
I believe that you can't use a software mirror on a whole disk with an EFI partition because the firmware can write to the EFI partition while the operating system is not running, thus corrupting the O/S specific mirror.
I am not trying to pick a fight here but I don't believe this is the case at all.

Because I have studied Intel's EDK2 toolkit, I have learned a few things.

The firmware (EDK2), aka the BIOS, does not write to the EFI partition that I know of.

Here is an example: where are the efibootmgr settings stored when you change the boot order?
Not in the EFI partition (alongside the EFI loader file), but in EFI variables stored with the firmware.

I don't think the BIOS/firmware ever touches the filesystem. /EFI/BOOT/BOOTX64.EFI is just the first-stage loader.
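
As a quick illustration (the boot entry numbers are just examples, and flags are per efibootmgr(8)/efivar(8)), changing the boot order only writes an NVRAM variable; the ESP doesn't even need to be mounted:

Code:
# Reorder the boot entries; this updates the BootOrder EFI variable in NVRAM.
efibootmgr -o 0007,0008,0004

# The variables live in the firmware's NVRAM, not in the msdosfs on the ESP.
efivar -l | grep -i boot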
 
I have to admit that I have no specific experience with EFI on FreeBSD.

I have used it on Linux, and my understanding comes from those experiences.

I will have to do some research to support my assertion that EFI partitions can be modified completely outside the scope and control of the operating system (and are thus not candidates for a software mirror), but my belief is that's the case.
 
It's just not worth it. You'll wind up with two disks that are only partially mirrored at best. That's likely to cause problems. I'd go the ZFS mirror route like gpw928 suggests.
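
For the ZFS route, turning a fresh single-disk zroot install into a mirror in place is roughly the following (pool name and partition indexes are assumptions; check zpool status and gpart show for your real layout first):

Code:
# Copy the partition table from the first disk onto the second.
gpart backup nvd0 | gpart restore -F nvd1

# Copy the EFI system partition so either disk can boot on its own.
dd if=/dev/nvd0p1 of=/dev/nvd1p1 bs=1m

# Attach the second disk's ZFS partition; zroot becomes a mirror and resilvers.
zpool attach zroot nvd0p3 nvd1p3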
 
Code:
[21:00:14] [buko!titus]~$ gpart list
Geom name: raid/r0
modified: false
state: OK
fwheads: 255
fwsectors: 63
last: 976773126
first: 34
entries: 128
scheme: GPT
Providers:
1. Name: raid/r0p1
   Mediasize: 65536 (64K)
   Sectorsize: 512
   Stripesize: 4096
Works for me. I think I created the RAID before the partition scheme, from the installer's shell.
 
I think I created the RAID before the partition scheme, from the installer's shell.
Definitely another way to do it. With that method you may not have to manually build fstab etc...
There are many ways I am sure...

I searched all the gmirror PRs and I don't see anything. Nothing in the gmirror manpage either.

I am coming up on 4 years using this setup. I have many different machines with it.

I actually use 3 disks on my firewall for a cold disk backup. I can assure you all three disks can work by themselves.

Machines with older versions of UEFI may not work well with 'UEFI OS' as the boot entry. That is the absolute worst problem I found.
No data loss, just a need for manual intervention at boot to point the firmware at the remaining disk when testing with a 'failed' disk.

Test it out yourself. I did.
I don't think the handbook example is very up to date.
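
For what it's worth, rotating a cold-backup disk in and out of the mirror is just member management in gmirror(8) (the disk name here is an example):

Code:
# Add the cold-backup disk as a third member and let it synchronise.
gmirror insert gm0 ada2
gmirror status gm0      # wait until ada2 shows ACTIVE

# Detach it again before putting it back on the shelf.
gmirror remove gm0 ada2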
 
This seems to be a reference to the supposed problem.
I'm just not seeing any problems.
This is basically what it ends up doing already. You make a GPT covering the whole disk and the GPT uses the last sector for the backup. You then make a gmirror inside this GPT container and it uses the last sector of the container (second-to-last sector of the drive) for its gmirror stored metadata.
Notice my approach is totally backwards from this.
I make a gmirror container and stuff a GPT inside it.
The backup GPT header then ends up in the second-to-last sector of each physical disk, and the gmirror metadata keeps the last sector to itself.
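
In sketch form (run from the installer's shell; disk names and sizes are examples, not a recipe copied verbatim from my how-to):

Code:
# Create the mirror on the whole disks first; its metadata takes the
# last sector of each physical disk.
gmirror load
gmirror label -v gm0 nvd0 nvd1

# Now put the GPT inside the mirror.  Its backup header lands in the last
# sector of mirror/gm0, i.e. the second-to-last sector of each disk.
gpart create -s gpt mirror/gm0
gpart add -t efi -s 200m mirror/gm0
gpart add -t freebsd-ufs mirror/gm0

# File systems, loader and fstab.
newfs_msdos -F 32 -c 1 /dev/mirror/gm0p1
newfs -U /dev/mirror/gm0p2
mount -t msdosfs /dev/mirror/gm0p1 /mnt
mkdir -p /mnt/EFI/BOOT
cp /boot/loader.efi /mnt/EFI/BOOT/BOOTX64.EFI
umount /mnt
echo "/dev/mirror/gm0p2 / ufs rw 1 1" >> /tmp/bsdinstall_etc/fstab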
 
I will have to do some research to support my assertion that EFI partitions can be modified completely outside the scope and control of the operating system (and are thus not candidates for a software mirror), but my belief is that's the case.
Occasionally I build an appliance that boots from a USB stick, in which case I keep a dd copy of the whole stick, and can re-make it at any time, either with dd (to an identically sized device) or with a loop-back mount and copying the file systems.
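
(For example, with the stick on da0 and the image kept on another filesystem; the device and path are only illustrative:)

Code:
# Image the whole stick while it is not mounted.
dd if=/dev/da0 of=/backup/boot-stick.img bs=1m

# Later, re-create a replacement stick from the image.
dd if=/backup/boot-stick.img of=/dev/da0 bs=1m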

Otherwise I always mirror Unix roots. "disks" are cheap, and the work to recover a lost disk represents grief I don't need.

I have not used EFI with FreeBSD, but I gave up trying to mirror EFI partitions under Linux. The installers go to great lengths to prevent it. Ubuntu even has special features to support multiple identical EFI partitions. One problem is that Linux RAID signatures can prevent the firmware from correctly identifying the EFI partition (this is surmountable by changing the signature format). However...

I knew I had seen multiple references to EFI partitions being modified by firmware. Here is one example: "HOWEVER there is one nasty risk with this setup: if UEFI writes anything to one of the drives (which this firmware did when it wrote out a “boot variable cache” file), it may lead to corrupted results".

I don't believe that EFI partitions get routinely modified by firmware. And there are many instances of people working around the impediments to get them mirrored -- without apparent ill effect. But the fact that they can be modified is enough to scare me off.

Of course, everything you read on the Internet might be worth what you paid for it. Perhaps more, perhaps less. I'd be very happy to be proved wrong, because software mirroring a whole disk is what I really want!
 
Those are software RAIDs; they do nothing in hardware, everything is in the driver.
They write some metadata at the end of the disk and that's it.
You can use them without BIOS support too.
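
For instance, creating one of these firmware-format soft-RAID volumes from within FreeBSD is just graid(8) (metadata format, label and disk names here are examples):

Code:
# Load the GEOM RAID class and build an Intel-metadata RAID1 volume
# out of two plain disks; it appears as /dev/raid/r0.
graid load
graid label Intel raid1 RAID1 ada0 ada1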
 
Does anyone know if these integrated RAID controllers expose the individual physical disks for access by SMART?
Yes you have access to individual disks.

Code:
root@ftp:/dev # camcontrol devlist
<Hitachi HDT721010SLA360 ST6OA3AA>  at scbus1 target 0 lun 0 (ada0,pass0)
<Hitachi HDS721010CLA332 JP4OA3MA>  at scbus2 target 0 lun 0 (ada1,pass1)
<AHCI SGPIO Enclosure 2.00 0001>    at scbus3 target 0 lun 0 (ses0,pass2)
root@ftp:/dev # graid status
   Name    Status  Components
raid/r0  OPTIMAL  ada0 (ACTIVE (ACTIVE))
                  ada1 (ACTIVE (ACTIVE))


Code:
root@ftp:/dev # graid list
Geom name: Intel-6f98a068
State: OPTIMAL
Metadata: Intel
Providers:
1. Name: raid/r0
   Mediasize: 993211187200 (925G)
   Sectorsize: 512
   Mode: r2w2e3
   Subdisks: ada0 (ACTIVE), ada1 (ACTIVE)
   Dirty: No
   State: OPTIMAL
   Strip: 65536
   Components: 2
   Transformation: RAID1
   RAIDLevel: RAID1
   Label: raid1
   descr: Intel RAID1 volume
Consumers:
1. Name: ada0
   Mediasize: 1000204886016 (932G)
   Sectorsize: 512
   Mode: r1w1e1
   ReadErrors: 0
   Subdisks: r0(raid1):0@0
   State: ACTIVE (ACTIVE)
2. Name: ada1
   Mediasize: 1000204886016 (932G)
   Sectorsize: 512
   Mode: r1w1e1
   ReadErrors: 0
   Subdisks: r0(raid1):1@0
   State: ACTIVE (ACTIVE)
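
Since the member disks are still visible as ada0 and ada1, SMART queries can go straight to them; a sketch assuming sysutils/smartmontools is installed:

Code:
# Query each physical member of the graid volume directly.
smartctl -a /dev/ada0
smartctl -a /dev/ada1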
 
Those are software RAIDs; they do nothing in hardware, everything is in the driver.
They write some metadata at the end of the disk and that's it.
You can use them without BIOS support too.
I have never looked closely at VROC before, but Intel's documentation says that it's possible to boot from RAID, suggesting that there is RAID support outside the operating system.
 