Solved Need help on RAID1 modification

Updated:
After shutdown -r now, the system rebooted successfully! Should I give more space to the efi partition, or stop here? (even though Zirias said it's enough for 13, but what about future versions?)
P/S: perhaps I will not upgrade to 13 until 12 reaches its end of life, because Asians don't like the number 13 :) :) If we delay 13 now, could the next release be published as 14? Like Windows skipping 9 and jumping from 8 to 10 :) :)

I have a question: if we delete the "bios legacy" boot, we will lose the opportunity to rescue the HDDs if the mainboard fails and we only have a legacy-boot-only replacement mainboard, even though that chance is rare.
I think I should not delete the legacy partition; instead I should shrink the swap partition and create a larger efi partition. Right?
 
Should I give more space to the efi partition, or stop here? (even though Zirias said it's enough for 13, but what about future versions?)
I wouldn't do that unless there's a need. My server is running on 13-RC3 right now with just 1MB efi partitions. The 800k partitions a FreeBSD-11 installer created back then were too small on 13, so I increased partition size…
I have a question: if we delete the "bios legacy" boot, we will lose the opportunity to rescue the HDDs if the mainboard fails and we only have a legacy-boot-only replacement mainboard, even though that chance is rare.
It's pretty unlikely to come across a mainboard without working UEFI, and it will become even more unlikely in the future. Having a mainboard that insists on a FAT-32 ESP is unlikely as well but, as already quoted in this thread, it can happen.

Still, if your mainboard dies, you have other problems, so I think creating some bootable USB stick to fix the boot problem wouldn't be too much additional hassle. But it's of course up to you to decide. If you want to play it entirely safe, yes, create a 200MB ESP, format it with FAT-32, additionally create a 512k freebsd-boot partition and install legacy bootcode with gpart.
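A rough sketch of what that could look like with gpart(8) and newfs_msdos(8); the device name, labels and the partition index are only placeholders here, adjust them to your actual layout:
Code:
# gpart add -t efi -s 200M -a 4k -l efi0 ada0
# newfs_msdos -F 32 -c 1 /dev/gpt/efi0
# mount -t msdosfs /dev/gpt/efi0 /mnt
# mkdir -p /mnt/efi/boot && cp /boot/loader.efi /mnt/efi/boot/bootx64.efi
# umount /mnt
# gpart add -t freebsd-boot -s 512k -l gptboot0 ada0
# gpart bootcode -b /boot/pmbr -p /boot/gptboot -i <index of the freebsd-boot partition> ada0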
 
I'll stop increasing the efi partition then. Indeed, I have no need for it. Much later, if needed, I will enlarge the efi partition or re-install the OS.
Following the suggestion from SirDice:

# swapoff -a
# gmirror destroy swap
# gmirror destroy swap2
# nano /etc/fstab
# swapon -a

I now have 4x4 GB of swap space with no mirroring and no striping.
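For reference, the swap entries in /etc/fstab now look roughly like this (device names as in the gpart output further down):
Code:
/dev/ada0p3   none   swap   sw   0   0
/dev/ada1p3   none   swap   sw   0   0
/dev/ada2p3   none   swap   sw   0   0
/dev/ada3p3   none   swap   sw   0   0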

From this thread I have learnt about zpool and gmirror, thanks to Mjölnir, Zirias and SirDice.
 
The EFI simple file system protocol will read FAT12, FAT16 and FAT32, as pointed out in the UEFI spec, so you will never face a motherboard following the UEFI spec that is unable to read that partition. The problem here is the actual sector size of the hard disk, which for normal SATA disks is 512 bytes emulated (512e) on Advanced Format media, and for SAS disks 4Kn (4096 bytes native). So when you format the EFI partition, the cluster size must be aligned to the physical sector size; that's why the UEFI specification recommends FAT32 for non-removable disks. If you are going to create the EFI partition by hand, you need to leave the proper gap after the partition so that the next partition starts exactly on a 4K sector boundary, otherwise you will suffer from low disk performance. For systems that still use CHS, there's a jumper on the hard disk that makes the firmware of the AF disk add +1 LBA to every request from the controller, so that the actual alignment on the disk is correct.
Anyway, your freebsd-zfs partition is OK, so there is no need to do anything else.
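If you create the partitions by hand, gpart(8)'s -a option can take care of that alignment for you; a small sketch (the sizes are only examples):
Code:
# gpart add -t efi -a 1m -s 200m -l efi0 ada0
# gpart add -t freebsd-swap -a 1m -s 4g -l swap0 ada0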
 
The EFI simple file system protocol will read FAT12, FAT16 and FAT32, as pointed out in the UEFI spec, so you will never face a motherboard following the UEFI spec that is unable to read that partition. The problem here is the actual sector size of the hard disk, which for normal SATA disks is 512 bytes emulated (512e) on Advanced Format media, and for SAS disks 4Kn (4096 bytes native). So when you format the EFI partition, the cluster size must be aligned to the physical sector size; that's why the UEFI specification recommends FAT32 for non-removable disks. If you are going to create the EFI partition by hand, you need to leave the proper gap after the partition so that the next partition starts exactly on a 4K sector boundary, otherwise you will suffer from low disk performance. For systems that still use CHS, there's a jumper on the hard disk that makes the firmware of the AF disk add +1 LBA to every request from the controller, so that the actual alignment on the disk is correct.
Anyway, your freebsd-zfs partition is OK, so there is no need to do anything else.
Mine is SATA. Tomorrow I will repeat the steps to re-create the efi partitions, then. In the past I didn't know why I sometimes saw some free space between two partitions (sorry, I am not an IT professional). Now I know.
 
You don't have to re-create the efi partition. It's easy in your case, as the next partition is your swap, and you can always shrink the swap partition if it's needed in the future.
 
Following the suggestion from SirDice:
IMHO this is a misunderstanding & SirDice did not suggest this:

# swapoff -a
# gmirror destroy swap
# gmirror destroy swap2
# nano /etc/fstab
# swapon -a

I now have 4x4 GB of swap space with no mirroring and no striping.
  1. The swap devices are interleaved (striped) by default.
  2. Obviously it's much safer to have a mirrored swap. Just add two gmirror(8)s. It would be silly not to use that advantage when you can easily have it for free.
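Re-creating the two swap mirrors could look roughly like this (just a sketch; names and partitions as in the layout shown further down, and the geom_mirror module must be loaded, e.g. geom_mirror_load="YES" in /boot/loader.conf):
Code:
# swapoff -a
# gmirror label -b load swap1 ada0p3 ada1p3
# gmirror label -b load swap2 ada2p3 ada3p3
Then point /etc/fstab at /dev/mirror/swap1 and /dev/mirror/swap2 and run swapon -a again.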
 
I think everything is done.
Here is the updated current status of my machine:
# gpart show
Code:
=>        40  3907029088  ada0  GPT  (1.8T)
          40      409600     1  efi  (200M)
      409640        4096        - free -  (2.0M)
      413736     7780312     3  freebsd-swap  (3.7G)
     8194048        2048        - free -  (1.0M)
     8196096  3898832896     4  freebsd-zfs  (1.8T)
  3907028992         136        - free -  (68K)

=>        40  3907029088  ada1  GPT  (1.8T)
          40      409600     1  efi  (200M)
      409640        4096        - free -  (2.0M)
      413736     7780312     3  freebsd-swap  (3.7G)
     8194048        2048        - free -  (1.0M)
     8196096  3898832896     4  freebsd-zfs  (1.8T)
  3907028992         136        - free -  (68K)

=>        40  3907029088  ada2  GPT  (1.8T)
          40      409600     1  efi  (200M)
      409640        4096        - free -  (2.0M)
      413736     7780312     3  freebsd-swap  (3.7G)
     8194048        2048        - free -  (1.0M)
     8196096  3898832896     4  freebsd-zfs  (1.8T)
  3907028992         136        - free -  (68K)

=>        40  3907029088  ada3  GPT  (1.8T)
          40      409600     1  efi  (200M)
      409640        4096        - free -  (2.0M)
      413736     7780312     3  freebsd-swap  (3.7G)
     8194048        2048        - free -  (1.0M)
     8196096  3898832896     4  freebsd-zfs  (1.8T)
  3907028992         136        - free -  (68K)

# swapinfo -h
Code:
Device          1K-blocks     Used    Avail Capacity
/dev/mirror/swap1   3890152       0B     3.7G     0%
/dev/mirror/swap2   3890152       0B     3.7G     0%
Total             7780304       0B     7.4G     0%

# gmirror status
Code:
        Name    Status  Components
mirror/swap1  COMPLETE  ada0p3 (ACTIVE)
                        ada1p3 (ACTIVE)
mirror/swap2  COMPLETE  ada2p3 (ACTIVE)
                        ada3p3 (ACTIVE)

# gmirror list
Code:
Geom name: swap1
State: COMPLETE
Components: 2
Balance: load
Slice: 4096
Flags: NOAUTOSYNC
GenID: 0
SyncID: 1
ID: 141508579
Type: AUTOMATIC
Providers:
1. Name: mirror/swap1
   Mediasize: 3983519232 (3.7G)
   Sectorsize: 512
   Mode: r1w1e0
Consumers:
1. Name: ada0p3
   Mediasize: 3983519744 (3.7G)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 211832832
   Mode: r1w1e1
   State: ACTIVE
   Priority: 0
   Flags: NONE
   GenID: 0
   SyncID: 1
   ID: 2471190919
2. Name: ada1p3
   Mediasize: 3983519744 (3.7G)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 211832832
   Mode: r1w1e1
   State: ACTIVE
   Priority: 1
   Flags: NONE
   GenID: 0
   SyncID: 1
   ID: 654687576

Geom name: swap2
State: COMPLETE
Components: 2
Balance: load
Slice: 4096
Flags: NOAUTOSYNC
GenID: 0
SyncID: 1
ID: 236687650
Type: AUTOMATIC
Providers:
1. Name: mirror/swap2
   Mediasize: 3983519232 (3.7G)
   Sectorsize: 512
   Mode: r1w1e0
Consumers:
1. Name: ada2p3
   Mediasize: 3983519744 (3.7G)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 211832832
   Mode: r1w1e1
   State: ACTIVE
   Priority: 0
   Flags: NONE
   GenID: 0
   SyncID: 1
   ID: 560262868
2. Name: ada3p3
   Mediasize: 3983519744 (3.7G)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 211832832
   Mode: r1w1e1
   State: ACTIVE
   Priority: 1
   Flags: NONE
   GenID: 0
   SyncID: 1
   ID: 2961912930

# zpool status
Code:
pool: zroot
state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
    still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
    the pool may no longer be accessible by software that does not support
    the features. See zpool-features(7) for details.
  scan: none requested
config:

    NAME        STATE     READ WRITE CKSUM
    zroot       ONLINE       0     0     0
      mirror-0  ONLINE       0     0     0
        ada0p4  ONLINE       0     0     0
        ada1p4  ONLINE       0     0     0
      mirror-1  ONLINE       0     0     0
        ada2p4  ONLINE       0     0     0
        ada3p4  ONLINE       0     0     0

errors: No known data errors
 
Obviously it's much safer to have a mirrored swap.
Maybe in the hope that the running system survives a disk failure without a reboot, even while it is currently swapping. But the downside is even further reduced swapping performance. I personally think striping is the better choice here – everyone should make their own informed decision ;)
 
Maybe in the hope that the running system survives a disk failure without a reboot, even while it is currently swapping. But the downside is even further reduced swapping performance. I personally think striping is the better choice here – everyone should make their own informed decision ;)
It's not only about device failure, but about bit flip errors, too. Unfortunately, gmirror(8) has no built-in checksums like ZFS, so we only gain roughly a 1/2 cut in probability (because it depends on the amount of data read); better than nothing. Read performance will (nearly) not suffer at all, and writing to swap will not suffer significantly if the mirrored disks have separate data paths. Since the OP is a medic, not an IT guy, s/he has to rely on our recommendations. If my recommendation has flaws, please convince me that I'm wrong.
Switch2BSD, IMHO your setup is fine now. Maybe there are arguments to prefer the split balance algorithm with a slice value of 64 kB or 128 kB, or round-robin; I have no experience with these. The default of load should be sufficient; maybe you could gain a small but noticeable performance improvement.
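If you want to experiment, the balance algorithm and slice size can be changed on a live mirror with gmirror(8); the values below are only examples:
Code:
# gmirror configure -b split -s 131072 swap1
# gmirror configure -b round-robin swap2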
 
It's not only about device failure, but about bit flip errors, too.
These typically take some time to happen, and swap is typically short-term storage (best if not used at all), so I wouldn't be interested in that kind of protection. But again, it's about making your own informed decision :)
Read performance will (nearly) not suffer at all, and writing to swap will not suffer significantly if the mirrored disks have separate data paths.
Both will (potentially) suffer. A striped setup can do I/O on all devices in parallel, if the situation and hardware setup permit it.
If my recommendation has flaws, please convince me that I'm wrong.
Not exactly flaws. I just think the very low probability of running into problems because of swap, combined with the expected "damage" (which might be a system crash and losing some data that isn't persisted), for me(!) doesn't justify using swap on mirrored devices.
 
I have read somewhere about striping swap (RAID0) to improve the disks' write performance, as Zirias says. That's why I deleted the gmirror of swap yesterday.
However, great papa Mjölnir said that it is safer to use mirrors (+ stripe automagically) (RAID10). That's why I corrected it today. This reminds me of something like ECC RAM, but it is not applicable here because gmirror is not ZFS (https://wiki.freebsd.org/RootOnZFS#ZFS_Swap_Volume).
Both ideas seem reasonable to me. As a medic, I don't know which is the right choice. However, the swap partitions are easy to switch between the two setups. Any suggestion on how to test both methods?
In addition, what I do not understand yet is why a 2-way mirror is less safe than a 4-way one. Failure of both disks on one side?
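Perhaps watching the devices with gstat(8) while the machine is actively swapping would show the difference; the filter below is only my guess matching my device names:
Code:
# gstat -f 'mirror/swap|ada[0-3]p3'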
 
In addition, what I do not understand yet is why a 2-way mirror is less safe than a 4-way one. Failure of both disks on one side?
Yes. Look at this RAID-10: mirror A1-A2 striped with mirror B1-B2: if both A1 and A2 fail, the virtual device is broken. In a 4-way mirror A1-A2-A3-A4, when one device has failed, any 2 of the 3 remaining devices can still fail. In a 2-way RAID-10, after one device has failed, only one more disk may fail, and only if it belongs to the other (still intact) mirror; if that one specific disk (the other half of the degraded mirror) fails, the RAID-10 fails completely.
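Purely for illustration (don't run this against a pool that holds data), the two layouts would be created like this:
Code:
striped pair of 2-way mirrors (RAID-10):
# zpool create zroot mirror ada0p4 ada1p4 mirror ada2p4 ada3p4
single 4-way mirror:
# zpool create zroot mirror ada0p4 ada1p4 ada2p4 ada3p4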
 
Yes. Look at this RAID-10: mirror A1-A2 striped with mirror B1-B2: if both A1 and A2 fail, the virtual device is broken. In a 4-way mirror A1-A2-A3-A4, when one device has failed, any 2 of the 3 remaining devices can still fail. In a 2-way RAID-10, after one device has failed, only one more disk may fail, and only if it belongs to the other (still intact) mirror; if that one specific disk (the other half of the degraded mirror) fails, the RAID-10 fails completely.
In the hypothetical case of a multi-disk failure, are there any statistics giving a cutoff, i.e. the minimum number of disks that assures the highest availability? I mean, in a cost-effective way.
 
Please be aware that using 2-way mirrors is commonly considered dangerous in a professional environment; for use @home and/or for non-vital data, it's ok if the disks are small enough, i.e. < 9TB.
Using 3-way mirrors (or RAID with double parity, i.e. two parity disks/slices) with one (automagically sync'ing*) hot spare is common practice in professional environments. For disks >9TB, some decide to go with 4-way mirrors (or triple-parity RAID) for mission-critical data or servers.
* Once in a while, the friendly service staff shoves a box full of replacement disks through the data centre and replaces the failed devices.
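A sketch of such a layout with ZFS (pool and device names are only placeholders): a 3-way mirror plus one hot spare, with zfsd(8) enabled so the spare is pulled in automatically when a member disk fails:
Code:
# zpool create tank mirror da0 da1 da2 spare da3
# sysrc zfsd_enable="YES"
# service zfsd start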
 
I have tried to remove one mirror from the stripe (of the RAID10), but it's impossible. I read that I should destroy the zpool (zroot) and re-create zroot as a 3-way RAID1, with the fourth disk as a hot spare.
However, I think a 4-way RAID1 and a 3-way RAID1 with the 4th disk as hot spare are the same in terms of usable disk space. So now I have set up a 4-way RAID1 on my machine.
I really walked a full circle, but I have learnt many things along the way. My problem could be set as solved. Many thanks.
 
I have tried to remove one mirror from the stripe (of the RAID10), but it's impossible. I read that I should destroy the zpool (zroot) and re-create zroot as a 3-way RAID1, with the fourth disk as a hot spare.
However, I think a 4-way RAID1 and a 3-way RAID1 with the 4th disk as hot spare are the same in terms of usable disk space. So now I have set up a 4-way RAID1 on my machine.
Of course you can not take one mirror out of a striped RAID-10 (mirror A + mirror B). Think about it: any sufficiently large chunk of data is split 50/50 between the two halves A & B. When you take one side away, 50% of the data can not be reconstructed, since that 50% lives entirely on that side. But you can take out up to (N-1) sides of each N-way mirror. The main difference between a 3-way mirror + hot spare and a 4-way mirror is that the hot spare does not age as much until it's actually used.
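For instance, taking one side out of an N-way ZFS mirror and later putting it back could look like this (a sketch, assuming the pool is a single N-way mirror with the device names from above):
Code:
# zpool detach zroot ada3p4
# zpool attach zroot ada0p4 ada3p4
The attach resilvers ada3p4 back in as an additional side of the mirror that already contains ada0p4.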
I really walked a full circle, but I have learnt many things along the way. My problem could be set as solved. Many thanks.
Only the OP can set the thread to "solved". On the very 1st post, click "..." -> "edit thread" -> "prefix: solved".
 