> Can you describe it in more detail?

I've installed FreeBSD on a RAID 1+0 of 6 HDDs and then created a mirrored special ZFS device on 2 SSD disks. The creation succeeds and no error appears. But when I reboot, it doesn't boot from the disks any more. I wasn't able to catch the error message. Any idea what the issue could be?
What do you see from zpool status before boot? Post it here. And gpart show — what is the output?

# zpool status
  pool: zroot
 state: ONLINE
config:

        NAME          STATE     READ WRITE CKSUM
        zroot         ONLINE       0     0     0
          mirror-0    ONLINE       0     0     0
            da0p4     ONLINE       0     0     0
            da1p4     ONLINE       0     0     0
          mirror-1    ONLINE       0     0     0
            da2p4     ONLINE       0     0     0
            da3p4     ONLINE       0     0     0
          mirror-2    ONLINE       0     0     0
            da4p4     ONLINE       0     0     0
            da5p4     ONLINE       0     0     0
        special
          mirror-3    ONLINE       0     0     0
            da6       ONLINE       0     0     0
            da7       ONLINE       0     0     0
# gpart show
=>          40  7814037088  da0  GPT  (3.6T)
            40      532480    1  efi  (260M)
        532520        1024    2  freebsd-boot  (512K)
        533544         984       - free -  (492K)
        534528     4194304    3  freebsd-swap  (2.0G)
       4728832  7809306624    4  freebsd-zfs  (3.6T)
    7814035456        1672       - free -  (836K)

=>          40  7814037088  da1  GPT  (3.6T)
            40      532480    1  efi  (260M)
        532520        1024    2  freebsd-boot  (512K)
        533544         984       - free -  (492K)
        534528     4194304    3  freebsd-swap  (2.0G)
       4728832  7809306624    4  freebsd-zfs  (3.6T)
    7814035456        1672       - free -  (836K)

=>          40  7814037088  da2  GPT  (3.6T)
            40      532480    1  efi  (260M)
        532520        1024    2  freebsd-boot  (512K)
        533544         984       - free -  (492K)
        534528     4194304    3  freebsd-swap  (2.0G)
       4728832  7809306624    4  freebsd-zfs  (3.6T)
    7814035456        1672       - free -  (836K)

=>          40  7814037088  da3  GPT  (3.6T)
            40      532480    1  efi  (260M)
        532520        1024    2  freebsd-boot  (512K)
        533544         984       - free -  (492K)
        534528     4194304    3  freebsd-swap  (2.0G)
       4728832  7809306624    4  freebsd-zfs  (3.6T)
    7814035456        1672       - free -  (836K)

=>          40  7814037088  da4  GPT  (3.6T)
            40      532480    1  efi  (260M)
        532520        1024    2  freebsd-boot  (512K)
        533544         984       - free -  (492K)
        534528     4194304    3  freebsd-swap  (2.0G)
       4728832  7809306624    4  freebsd-zfs  (3.6T)
    7814035456        1672       - free -  (836K)

=>          40  7814037088  da5  GPT  (3.6T)
            40      532480    1  efi  (260M)
        532520        1024    2  freebsd-boot  (512K)
        533544         984       - free -  (492K)
        534528     4194304    3  freebsd-swap  (2.0G)
       4728832  7809306624    4  freebsd-zfs  (3.6T)
    7814035456        1672       - free -  (836K)

=>       3  2045011  da9  GPT  (15G) [CORRUPT]
         3       26    1  freebsd-boot  (13K)
        29       51       - free -  (26K)
        80     4096    2  efi  (2.0M)
      4176  2040838       - free -  (997M)

=>       3  2045011  iso9660/13_1_RELEASE_AMD64_CD  GPT  (15G) [CORRUPT]
         3       26    1  freebsd-boot  (13K)
        29       51       - free -  (26K)
        80     4096    2  efi  (2.0M)
      4176  2040838       - free -  (997M)
> [..] created a mirrored special zfs device on 2 SSD disks.

What command(s) did you use to create this?

> What command(s) did you use to create this?

I used the following command line:

# zpool add zroot special mirror da6 da7
Looking at the result of the gpart show command, it seems it is [...]. I am thinking the metadata mirror is not available when it boots, but maybe I'm wrong?

I'm not an expert in this area, but a few things to look at while you wait for others to come online...
What is your BIOS boot order?
Have you installed UEFI bootcode onto all the zroot VDEVs?
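If the boot code has not been installed everywhere, refreshing it on each zroot disk would look roughly like the sketch below. This is only an illustration, not a command from this thread: it assumes the partition layout shown by gpart show above (p1 = efi, p2 = freebsd-boot), and it only *prints* the commands as a dry run — drop the `run` wrapper to execute them for real.

```shell
# Dry run: print the boot-code refresh commands for each zroot disk.
# Assumes p1 = efi and p2 = freebsd-boot, per the gpart show output above.
run() { echo "$@"; }   # remove this wrapper to actually run the commands

for disk in da0 da1 da2 da3 da4 da5; do
    # legacy/BIOS path: protective MBR + gptzfsboot into the freebsd-boot partition
    run gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 2 "$disk"
    # UEFI path: copy loader.efi into the EFI system partition
    run mount_msdosfs "/dev/${disk}p1" /mnt
    run cp /boot/loader.efi /mnt/efi/boot/bootx64.efi
    run umount /mnt
done
```

Running it as-is just lists 24 commands (4 per disk), which you can review before removing the wrapper.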
Without creating the special device it boots without any error, so yes, it's bootable. It does not boot once the special device is created.

The "raid 1+0 of 6 HDDs" — is that a ZFS RAID configuration, or is there a physical RAID controller in the system?
Is RAID 1+0 of 6 HDDs "a stripe of mirrors"? If so, is that even a bootable combination?
> However, when the disks were added as whole disks and removed from the pool, created partitions and added again to the pool, [...] a boot loop occurs.

It would be interesting to know if there's a difference* between (noted: you tested on a VM):

- when the disks were added as whole disks and removed from the pool <no re-boot>, created partitions and added again to the pool
- when the disks were added as whole disks and removed from the pool <re-boot>, created partitions and added again to the pool

* my guess: no difference

What is the output of zpool list -v?

> It would be interesting to know if there's a difference* between (noted: you tested on a VM): [...]

Good guess, it doesn't make any difference. Besides those test settings, I also tried from a Live CD; the result is always the same as described in post #11.
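For reference, the remove/repartition/re-add sequence from those scenarios could be sketched as below. This is hypothetical: the vdev name mirror-3 and the devices da6/da7 come from the zpool status output earlier in the thread, and the commands are only *printed* as a dry run — drop the `run` wrapper to execute them.

```shell
# Dry run: print the remove / repartition / re-add sequence for the
# special mirror instead of executing it.
run() { echo "$@"; }   # remove this wrapper to actually run the commands

run zpool remove zroot mirror-3          # detach the whole-disk special mirror
# (a reboot at this point distinguishes the two scenarios above)
for disk in da6 da7; do
    run gpart destroy -F "$disk"         # clear leftover metadata
    run gpart create -s gpt "$disk"
    run gpart add -t freebsd-zfs -a 1m "$disk"
done
run zpool add zroot special mirror da6p1 da7p1
```

Note that removing a top-level special vdev with zpool remove is only possible on pools like this one, built entirely from mirrors.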