During an upgrade from 11.1 to 11.2, upon first reboot, I get the following error:
Code:
ZFS: i/o error - all block copies unavailable
/boot/kernel/kernel text=0x1547d28 ZFS: i/o error - all block copies unavailable
elf64_loadimage: read failed
can't load 'kernel'
I have tried the following suggestions found online, and none of them worked.
1 - Re-write the boot loader code after booting into the Live CD option of the memstick image:
Code:
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada1
2 - Copy the boot directory over again to trigger "self-healing", after booting into the Live CD option of the memstick image:
Code:
zpool import -f -R /mnt zroot
mv /mnt/boot /mnt/boot.orig
mkdir /mnt/boot
cd /mnt/boot.orig && cp -R * /mnt/boot
From this post (https://forums.freebsd.org/threads/...m-zroot-after-applying-p25.54422/#post-308876) it appears that if the zpool used for ZFS root is striped across vdevs whose devices have different partition structures, you are basically screwed and the only way out is to rebuild the pool?
I don't have enough spare capacity to copy the data off, rebuild the pool and restore. So my questions are (please note I am not a FreeBSD nor ZFS expert, so apologies if my questions are conceptually confused):
1) Is this really the only way?
2) Could I remove the 2nd vdev, the one with the raw disk devices, and force all the data to be written to the first vdev? Then I could re-partition the 2 raw disks and add them back as a similarly formatted vdev. I have read from several low-quality sources that it's not possible to remove a vdev. Can I remove all the disks in a vdev instead and then add them back to the vdev? If space is an issue and not all the data can be copied over to the first vdev, which is what I suspect, can I remove the raw devices from the 2nd mirror vdev one at a time, re-partition them, and add them back one at a time to the 2nd mirrored vdev? (A rough command sketch of that one-at-a-time idea follows this question.) Would any of these approaches actually solve the problem?
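For the one-at-a-time idea, this is roughly the sequence I have in mind. The device names (ada2/ada3 for the raw disks in the 2nd vdev) and the partition sizes are only placeholders, and I have not tried any of this, so please correct me if the commands or the order are wrong:
Code:
# Pull one raw disk out of the 2nd mirror; the pool keeps running, just unmirrored there
zpool detach zroot ada2

# Re-partition that disk to match the layout of the disks in the first vdev
gpart destroy -F ada2
gpart create -s gpt ada2
gpart add -t freebsd-boot -s 512k ada2
gpart add -t freebsd-swap -s 4g ada2
gpart add -t freebsd-zfs ada2
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada2

# Attach the new ZFS partition back to the remaining raw disk and wait for the resilver.
# I assume this is refused if ada2p3 ends up smaller than ada3 (boot + swap eat space),
# which may be exactly the space problem I mentioned above.
zpool attach zroot ada3 ada2p3
zpool status zroot

# Then repeat the same steps for ada3, attaching ada3p3 to ada2p3
If zpool attach refuses the slightly smaller partition, I assume this whole approach is dead in the water, hence the question.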
To clarify:
My zpool has two mirror vdevs. The first has two disks which have been partitioned with a boot partition, a swap partition, and a partition dedicated to ZFS. The 2nd vdev is also a mirror, but its two disks were added as whole (raw) disks.
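Structurally, something like this (not real output, and the device names are made up for illustration):
Code:
zroot
  mirror-0
    ada0p3    <- disks with freebsd-boot + freebsd-swap + freebsd-zfs partitions
    ada1p3
  mirror-1
    ada2      <- whole disks handed to ZFS, no partition table or boot code
    ada3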