ZFS special device on zroot

I've installed FreeBSD on a RAID 1+0 of 6 HDDs and then created a mirrored ZFS special device on 2 SSDs. The creation succeeds and no errors appear, but when I reboot, the system no longer boots from the disks. I wasn't able to catch the error message. Any idea what the issue could be?
 
I've installed FreeBSD on a RAID 1+0 of 6 HDDs and then created a mirrored ZFS special device on 2 SSDs. The creation succeeds and no errors appear, but when I reboot, the system no longer boots from the disks. I wasn't able to catch the error message. Any idea what the issue could be?
Can you describe it in more detail?
What is the boot method?
What is the exact configuration?
If you run zpool status before the reboot, what do you see? Post it here.
Run gpart show. What is the output?
 
The "raid 1+0 of 6 HDDs" is that a ZFS raid configuration or is it a physical RAID controller in the system?
Is raid 1+0 of 6 hdds "a stripe of mirrors"? If so, is that even a bootable combination?
 
It boots via UEFI. Here are the details; I hope they help. This happens right after the special device is created and the system is rebooted.

zpool status:

Code:
# zpool status
  pool: zroot
 state: ONLINE
config:

    NAME        STATE     READ WRITE CKSUM
    zroot       ONLINE       0     0     0
      mirror-0  ONLINE       0     0     0
        da0p4   ONLINE       0     0     0
        da1p4   ONLINE       0     0     0
      mirror-1  ONLINE       0     0     0
        da2p4   ONLINE       0     0     0
        da3p4   ONLINE       0     0     0
      mirror-2  ONLINE       0     0     0
        da4p4   ONLINE       0     0     0
        da5p4   ONLINE       0     0     0
    special
      mirror-3  ONLINE       0     0     0
        da6     ONLINE       0     0     0
        da7     ONLINE       0     0     0

gpart show:

Code:
# gpart show
=>        40  7814037088  da0  GPT  (3.6T)
          40      532480    1  efi  (260M)
      532520        1024    2  freebsd-boot  (512K)
      533544         984       - free -  (492K)
      534528     4194304    3  freebsd-swap  (2.0G)
     4728832  7809306624    4  freebsd-zfs  (3.6T)
  7814035456        1672       - free -  (836K)

=>        40  7814037088  da1  GPT  (3.6T)
          40      532480    1  efi  (260M)
      532520        1024    2  freebsd-boot  (512K)
      533544         984       - free -  (492K)
      534528     4194304    3  freebsd-swap  (2.0G)
     4728832  7809306624    4  freebsd-zfs  (3.6T)
  7814035456        1672       - free -  (836K)

=>        40  7814037088  da2  GPT  (3.6T)
          40      532480    1  efi  (260M)
      532520        1024    2  freebsd-boot  (512K)
      533544         984       - free -  (492K)
      534528     4194304    3  freebsd-swap  (2.0G)
     4728832  7809306624    4  freebsd-zfs  (3.6T)
  7814035456        1672       - free -  (836K)

=>        40  7814037088  da3  GPT  (3.6T)
          40      532480    1  efi  (260M)
      532520        1024    2  freebsd-boot  (512K)
      533544         984       - free -  (492K)
      534528     4194304    3  freebsd-swap  (2.0G)
     4728832  7809306624    4  freebsd-zfs  (3.6T)
  7814035456        1672       - free -  (836K)

=>        40  7814037088  da4  GPT  (3.6T)
          40      532480    1  efi  (260M)
      532520        1024    2  freebsd-boot  (512K)
      533544         984       - free -  (492K)
      534528     4194304    3  freebsd-swap  (2.0G)
     4728832  7809306624    4  freebsd-zfs  (3.6T)
  7814035456        1672       - free -  (836K)

=>        40  7814037088  da5  GPT  (3.6T)
          40      532480    1  efi  (260M)
      532520        1024    2  freebsd-boot  (512K)
      533544         984       - free -  (492K)
      534528     4194304    3  freebsd-swap  (2.0G)
     4728832  7809306624    4  freebsd-zfs  (3.6T)
  7814035456        1672       - free -  (836K)

=>      3  2045011  da9  GPT  (15G) [CORRUPT]
        3       26    1  freebsd-boot  (13K)
       29       51       - free -  (26K)
       80     4096    2  efi  (2.0M)
     4176  2040838       - free -  (997M)

=>      3  2045011  iso9660/13_1_RELEASE_AMD64_CD  GPT  (15G) [CORRUPT]
        3       26                              1  freebsd-boot  (13K)
       29       51                                 - free -  (26K)
       80     4096                              2  efi  (2.0M)
     4176  2040838                                 - free -  (997M)
I just made a screen capture of the error in iLO:

Screenshot 2022-11-12 at 21.34.05.png
 
I'm not an expert in this area, but a few things to look at while you wait for others to come online...

What is your BIOS boot order?

Have you installed UEFI bootcode onto all the zroot VDEVs?
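If the bootcode is in doubt, here's a rough sketch of one common way to (re)install it on each disk, assuming the partition layout in the gpart show output above (ESP at index 1, freebsd-boot at index 2):
Code:
# mount the EFI system partition of the first disk
mount -t msdosfs /dev/da0p1 /mnt
# copy the UEFI loader to the removable-media fallback path
mkdir -p /mnt/EFI/BOOT
cp /boot/loader.efi /mnt/EFI/BOOT/BOOTX64.efi
umount /mnt
# refresh the legacy gptzfsboot code in the freebsd-boot partition as well
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 2 da0
# repeat for da1 through da5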
 
I'm not an expert in this area, but a few things to look at while you wait for others to come online...

What is your BIOS boot order?

Have you installed UEFI bootcode onto all the zroot VDEVs?
Looking at the result of the gpart show command, it seems I have. I am thinking the metadata mirror is not available when it boots, but maybe I'm wrong?
 
The "raid 1+0 of 6 HDDs" is that a ZFS raid configuration or is it a physical RAID controller in the system?
Is raid 1+0 of 6 hdds "a stripe of mirrors"? If so, is that even a bootable combination?
Without the special device it boots without any error, so yes, it's bootable. It stops booting once the special device is created.
 
All VDEVs in a pool (normal and special) should use the same ashift. You should check that.
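A quick sketch of how to check, assuming the pool is named zroot as shown above:
Code:
# ashift recorded for each vdev in the cached pool configuration
zdb -C zroot | grep ashift
# pool-level ashift property (0 means auto-detected at vdev creation)
zpool get ashift zroot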

The disks used for your zroot special devices need to be visible to the BIOS.

Putting all the metadata (and potentially small files) on a special device complicates the data structures of the zroot, and thus the boot process. To me, that's a risk that needs to be managed.

My very strong instinct is to keep the zroot small and on separate media, and put the bulk data in a separate pool (where I am quite sure that special devices work).
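For illustration only, a sketch of that layout; the pool name data and the disk/partition names here are hypothetical, not the ones from this thread:
Code:
# bulk data pool on its own disks, with the special mirror attached to it
zpool create data \
    mirror da10p1 da11p1 \
    special mirror da12p1 da13p1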
 
I did some testing in a VM (VirtualBox). The test system is 13.1-RELEASE-p3, UEFI, the storage controller is LsiLogic SAS.

I could reproduce the issue. Apparently the problem is with the special mirror devices da6 and da7 when they are added as whole disks, i.e. without partition tables.
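For reference, the whole-disk variant that reproduces the boot failure in my test was simply:
Code:
# special mirror added as raw disks, no GPT -- this is the failing case
zpool add zroot special mirror da6 da7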

When the two disks are added to the pool of the test system (RAID 1+0, 6 disks, installed first) as a special mirror with partition tables and freebsd-zfs partitions created on da6/da7, the system boots normally.
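The working variant, partitioning first (the 1m alignment is my habit, not a requirement):
Code:
# create GPT tables and freebsd-zfs partitions on both SSDs
gpart create -s gpt da6
gpart create -s gpt da7
gpart add -t freebsd-zfs -a 1m da6
gpart add -t freebsd-zfs -a 1m da7
# add the special mirror using the partitions instead of the whole disks
zpool add zroot special mirror da6p1 da7p1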

However, when the disks were added as whole disks, removed from the pool, partitioned, and added to the pool again, the system boots but reboots immediately or shortly after enumerating the disks, and a boot loop occurs.

I can't tell if this is a general occurrence or has something to do with the system being a VM. You should check on your system.
 
However, when the disks were added as whole disks and removed from the pool, created partitions and added again to the pool, [...] a boot loop occurs.
It would be interesting to know if there's a difference* between (noted: you tested on a VM):
  1. when the disks were added as whole disks and removed from the pool <no re-boot>, created partitions and added again to the pool
  2. when the disks were added as whole disks and removed from the pool <re-boot>, created partitions and added again to the pool
___
* my guess: no difference
 
What's the output of:
Code:
zpool list -v
How did you install the bootloader (boot0, boot1)?
What are currdev and vfs.root.mountfrom in loader.conf?
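For the last two, something like this shows the values (currdev is a loader variable; it can also be checked at the loader prompt with show currdev):
Code:
# entries configured in loader.conf, if any
grep -E 'currdev|vfs.root.mountfrom' /boot/loader.conf
# the value the running kernel actually booted with
kenv vfs.root.mountfrom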
 
It would be interesting to know if there's a difference* between (noted: you tested on a VM):
  1. when the disks were added as whole disks and removed from the pool <no re-boot>, created partitions and added again to the pool
  2. when the disks were added as whole disks and removed from the pool <re-boot>, created partitions and added again to the pool
___
* my guess: no difference
Good guess, it doesn't make any difference. Besides those test settings I also tried from a live CD; the result is always the same as described in post #11.
 