Multiple OS's

I have four 1 TB hard drives in a ZFS RAID and really don't need that much storage for a single personal PC. So I'm asking for advice on how to reload FreeBSD 13.0 with ZFS on two of those disks, Ubuntu on one, and Windows on the remaining one. My thought was to erase all four disks (after backing up my files), disconnect two of them, and reload FreeBSD 13.0 with ZFS. Then disconnect those two drives, hook up a single drive and load Ubuntu, then disconnect that drive, connect the last drive and load Windows. My remaining unknown is how to simply boot into any of those OSes. Will grub2-efi or grub2-pcbsd work? I've never used GRUB.

I hope this is the correct area to post, and I welcome any and all thoughts. You're correct in assuming I'm simply looking for different ways of using my PC since I'm retired and interested in exploring different OSes. I've been with FreeBSD since 4.9 and absolutely love it, but want to expand my PC experiences now.
 
If your RAID is fault tolerant, then disconnect a drive, create a new pool on the 'freed' drive, and send the original pool to it.
When that is done, attach one of the remaining three original drives to the new pool.
Now you have a mirrored RAID with the original data.
Install the other OSes on the remaining two drives.
You can probably use rEFInd (an EFI boot manager).
 
erase all four disks

Depending on your use case, I might give:
  • the first disk to the operating systems (FreeBSD, Linux and Windows)
  • the other three to a single ZFS pool for user data, compatible with both FreeBSD and Linux (rough sketch below).
That said, I have no experience with multi-boot partitioning where FreeBSD is in the mix.
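A rough sketch of the data-pool part of that layout; the pool name data and the device names ada1-ada3 are placeholders, and raidz1 across the three disks is just one possible choice:

Code:
# One raidz1 pool across the three data disks, importable from both
# FreeBSD and Linux (device names are placeholders).
zpool create data raidz ada1 ada2 ada3

# Export before rebooting into the other OS, and be careful with
# 'zpool upgrade': enabling feature flags the other OS's OpenZFS
# doesn't support yet can make the pool unimportable there.
zpool export data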
 
To summarize my experience with multi-booting:
1) For multi-booting I successfully use GRUB2, but one of the recent versions built from Git; I'm not sure which version is in the ports now. I use a hand-made port for myself, but it was never accepted into the ports tree (something there failed to build, and no one was willing to fix it).
In short, GRUB2 currently works fine for me to multi-boot FreeBSD / OpenBSD / Linux / Windows (a chainloading grub.cfg sketch is at the end of this post).

2) Multi-booting with Windows: you don't want to install it on the first drive, but then you'll have to watch out that Windows doesn't put its boot files on the first drive during updates or whatever. Normally it won't, because on each boot it will believe it IS installed on the first drive... Still, be careful, and be ready to restore your GRUB2 boot code to the first drive.

3) Since FreeBSD now uses newer OpenZFS (ZoL) code than the Linux distros, you won't be able to boot a Linux installed into the same ZFS pool as FreeBSD. Before the move to OpenZFS this worked, but now... you must check which OpenZFS version is available in those distros.

4) If you consider using ZFS encryption in your zpool, GRUB2 will have problems booting directly from it; you'll need a non-encrypted /boot partition with the kernel and modules. You can use full-disk ZFS on one of your disks... though honestly, I've never noticed much of an advantage in it, so I eventually switched back to a GPT scheme. Then again, your /boot partition can live anywhere (except on the Windows partition, LOL).
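For illustration, a minimal grub.cfg sketch of the chainloading setup, assuming UEFI boot and that each OS keeps its own loader on the EFI system partition. XXXX-XXXX is a placeholder for the ESP's filesystem UUID, and the loader paths shown are the usual defaults, so adjust both to your setup:

Code:
# grub.cfg fragment -- chainload each OS's own EFI loader from the ESP

menuentry "FreeBSD" {
    insmod part_gpt
    insmod fat
    search --no-floppy --fs-uuid --set=root XXXX-XXXX
    chainloader /EFI/freebsd/loader.efi
}

menuentry "Ubuntu" {
    insmod part_gpt
    insmod fat
    search --no-floppy --fs-uuid --set=root XXXX-XXXX
    chainloader /EFI/ubuntu/shimx64.efi
}

menuentry "Windows" {
    insmod part_gpt
    insmod fat
    search --no-floppy --fs-uuid --set=root XXXX-XXXX
    chainloader /EFI/Microsoft/Boot/bootmgfw.efi
}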
 
If your RAID is fault tolerant, then disconnect a drive, create a new pool on the 'freed' drive, and send the original pool to it. ...
Thanks for your suggestion. I'm not at all experienced enough to know how to carry it out. If you have the time, maybe you could offer the correct syntax for the task.
 
Here is a log of my experiment: moving pool orig to pool newone.
Pool orig is initially raidz; newone will be raid1 (a mirror).

Code:
root@hosta:/usr/home/oper # zpool status orig
  pool: orig
 state: ONLINE
config:

    NAME        STATE     READ WRITE CKSUM
    orig        ONLINE       0     0     0
      raidz1-0  ONLINE       0     0     0
        md1     ONLINE       0     0     0
        md2     ONLINE       0     0     0
        md3     ONLINE       0     0     0
        md4     ONLINE       0     0     0

errors: No known data errors

root@hosta:/usr/home/oper # zfs create orig/a
root@hosta:/usr/home/oper # zfs create orig/h
root@hosta:/usr/home/oper # zfs create orig/h/dd
root@hosta:/usr/home/oper # zfs snapshot -r orig@preback
root@hosta:/usr/home/oper # zfs list -r orig
NAME        USED  AVAIL     REFER  MOUNTPOINT
orig        317K   446M     34.4K  /orig
orig/a     32.9K   446M     32.9K  /orig/a
orig/h     65.8K   446M     32.9K  /orig/h
orig/h/dd  32.9K   446M     32.9K  /orig/h/dd

root@hosta:/usr/home/oper # zfs list -t snapshot -r orig
NAME                USED  AVAIL     REFER  MOUNTPOINT
orig@preback          0B      -     34.4K  -
orig/a@preback        0B      -     32.9K  -
orig/h@preback        0B      -     32.9K  -
orig/h/dd@preback     0B      -     32.9K  -

root@hosta:/usr/home/oper # zpool offline orig md2

root@hosta:/usr/home/oper # zpool status orig
  pool: orig
 state: DEGRADED
status: One or more devices has been taken offline by the administrator.
    Sufficient replicas exist for the pool to continue functioning in a
    degraded state.
action: Online the device using 'zpool online' or replace the device with
    'zpool replace'.
config:

    NAME        STATE     READ WRITE CKSUM
    orig        DEGRADED     0     0     0
      raidz1-0  DEGRADED     0     0     0
        md1     ONLINE       0     0     0
        md2     OFFLINE      0     0     0
        md3     ONLINE       0     0     0
        md4     ONLINE       0     0     0

errors: No known data errors
root@hosta:/usr/home/oper # zpool export orig
root@hosta:/usr/home/oper # zpool create -f newone md2
root@hosta:/usr/home/oper # zpool list
NAME     SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
newone   192M   108K   192M        -         -     5%     0%  1.00x    ONLINE  -
zroot    928G   335G   593G        -         -     1%    36%  1.00x    ONLINE  -
root@hosta:/usr/home/oper # zpool import orig
root@hosta:/usr/home/oper # zpool list
NAME     SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
newone   192M   108K   192M        -         -     5%     0%  1.00x    ONLINE  -
orig     768M   478K   768M        -         -     0%     0%  1.00x  DEGRADED  -
zroot    928G   335G   593G        -         -     1%    36%  1.00x    ONLINE  -
root@hosta:/usr/home/oper # zpool status orig
  pool: orig
 state: DEGRADED
status: One or more devices has been taken offline by the administrator.
    Sufficient replicas exist for the pool to continue functioning in a
    degraded state.
action: Online the device using 'zpool online' or replace the device with
    'zpool replace'.
config:

    NAME        STATE     READ WRITE CKSUM
    orig        DEGRADED     0     0     0
      raidz1-0  DEGRADED     0     0     0
        md1     ONLINE       0     0     0
        md2     OFFLINE      0     0     0
        md3     ONLINE       0     0     0
        md4     ONLINE       0     0     0

errors: No known data errors

root@hosta:/usr/home/oper # zfs send -R orig@preback | zfs receive -F newone

root@hosta:/usr/home/oper # zfs list -r newone
NAME          USED  AVAIL     REFER  MOUNTPOINT
newone        256K  95.8M       25K  /newone
newone/a       24K  95.8M       24K  /newone/a
newone/h       48K  95.8M       24K  /newone/h
newone/h/dd    24K  95.8M       24K  /newone/h/dd
root@hosta:/usr/home/oper # zfs list -r newone -t snapshot
NAME                  USED  AVAIL     REFER  MOUNTPOINT
newone@preback          0B      -       25K  -
newone/a@preback        0B      -       24K  -
newone/h@preback        0B      -       24K  -
newone/h/dd@preback     0B      -       24K  -

root@hosta:/usr/home/oper # zpool destroy orig

root@hosta:/usr/home/oper # zpool attach -f newone md2 md1

root@hosta:/usr/home/oper # zpool status newone
  pool: newone
 state: ONLINE
  scan: resilvered 468K in 00:00:09 with 0 errors on Thu Dec 23 23:33:58 2021
config:

    NAME        STATE     READ WRITE CKSUM
    newone      ONLINE       0     0     0
      mirror-0  ONLINE       0     0     0
        md2     ONLINE       0     0     0
        md1     ONLINE       0     0     0

errors: No known data errors
 
Thank you so very much!!
 
If you choose this path, be aware that you will end up with a pool with a new name, which might not be ideal,
because some utilities (bectl?) might expect the default zroot naming convention.
You can still boot from external media, export the pool and import it with a new name (zroot).
I'm not sure how important it is to have the root pool named zroot, but just in case.
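Roughly, the rename from a live/install image could look like this; the dataset name zroot/ROOT/default is only the installer's default and an assumption here:

Code:
# booted from a FreeBSD install or live image, on-disk pool not imported
zpool import -f -N newone zroot        # import under the new name, don't mount
zpool set bootfs=zroot/ROOT/default zroot
# also check whether /boot/loader.conf sets vfs.root.mountfrom to the old name
zpool export zroot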
 
...
2) Multi-booting with Windows: you don't want to install it on the first drive, ...
OK, to be more precise: at the time of installation it will HAVE to be the first drive, of course :) :) :) But after that it can be whatever drive you want it to be, just NOT the first... Although a modern BIOS lets you choose which device boots first, of course.
 