ZFS all block copies unavailable try-include not found

Good day everyone. I shut down my FreeBSD server for a few hours, and when I tried to boot it again it would not boot.

I was greeted by this error
Code:
FreeBSD/x86 ZFS enabled bootstrap loader, Revision 1.1
(Fri Jul 21 02:03:14 UTC 2017 root@releng2.nyi.freebsd.org)
ZFS: i/o error - all block copies unavailable
try-include not found
|
ZFS: i/o error - all block copies unavailable
Multiboot checksum failed, magic: 0x1badb002 flags: 0x83423c74 checksum: 0xc23904c1
ZFS: i/o error - all block copies unavailable
ZFS: i/o error - all block copies unavailable
ZFS: i/o error - all block copies unavailable
ZFS: i/o error - all block copies unavailable
can't load 'kernel'

Type '?' for a list of commands, 'help' for more detailed help.
OK _

I tried the fix from FreeBSD user pboehmer's thread, but to my dismay it didn't work.
Code:
boot usb drive
select Live CD

mount -u /
zpool import -o cachefile=/var/tmp/zpool.cache -f -R /mnt zroot
zfs umount -af
zfs set mountpoint=/ zroot
zfs mount -a

cp -rpv /boot/* /mnt/boot
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 mfid0
cp /var/tmp/zpool.cache /mnt/boot/zfs

zfs umount -af
zfs set mountpoint=legacy zroot
zpool export zroot
shutdown -r now

I put the bootcode on all my drives.

I can mount the pool using FreeBSD's Live CD and the pool is online, not degraded.

My zpool consists of three raidz3 vdevs of seven hard drives each.

Code:
zroot
    raidz3-0
        ada0p4
        ada2p4
        ada1p4
        da0p4
        da1p4
        da2p4
        da3p4
    raidz3-1
        ada5
        ada3
        ada4
        da4
        da5
        da6
        da7
    raidz3-2
        da8
        da9
        da10
        da11
        da12
        da13
        da14
errors: No known data errors

I use FreeBSD 11.1-RELEASE and all my data seems to be intact.
Can someone help me make it bootable again?

I also tried FreeBSD user frijsdijk's code, but it did not work for me either.
Code:
# mkdir /tmp/mnt
# zpool import -R /tmp/mnt -f zroot
# cd /tmp/mnt
# mv boot boot.orig
# mkdir boot
# cd boot.orig
# cp -Rp * /tmp/mnt/boot
# zpool export zroot
# reboot
 
First of all: those rescue procedures look quite bizarre to me, and I would even recommend against using them like that because there's no need. You don't need to have your setup available as / before you can bootstrap it. Heck, doing it like that can only cause more problems with the underlying system that's still active; theoretically this can go haywire.

Also: you don't necessarily need a read-write environment before you can mount your stuff.

Just boot your rescue environment, then:
  1. zpool import; this will load the ZFS drivers and then show the currently available ZFS pool(s).
  2. # zpool import -fR /mnt zroot; this will make your pool available within your current environment.
If you installed your ZFS pool automatically using the installer then you'll face a small problem: some of your ZFS filesystems, such as the root, won't be mounted automatically. This is highly inconvenient (and something I personally consider utterly stupid), but that's the way it is. You can however still check your filesystems using zfs list and mount the one you need with # zfs mount.
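Something along these lines, for example (a sketch only; zroot/ROOT/default is the installer's default root dataset name, so adjust it if your layout differs):
Code:
# zpool import
# zpool import -fR /mnt zroot
# zfs list
# zfs mount zroot/ROOT/default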

After doing this your bootcode will be available in /mnt/boot, from which you can access it and bootstrap the drives.

Then there's another thing I don't understand: when bootstrapping you used mfid0. What kind of device is that?

It looks odd to me considering that all your other devices are ada and da based, so where did this come from?

Could you share the output from gpart list so that we can get an impression of what we're dealing with here?

(edit)

More important questions:
  • What FreeBSD version are you using?
  • When you're in the boot menu (the ok prompt) what does lsdev tell you?
 
Hi ShelLuser, thank you. I imported my pool using the commands you gave me and ran gpart list.

Code:
    label: gptboot5
    length: 524288
    offset: 209735680
    type: freebsd-boot
    index: 2
    end: 410663
    start: 409640
3.  Name: da2p3
    Mediasize: 2147483648 (2.0G)
    Sectorsize: 512
    Stripesize: 0
    Stripeoffset: 210763776
    Mode: r0w0e0
    rawuuid: 767feca7-76e5-11e7-ab25-0cc47aab18e0
    rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b
    label: swap5
    length: 2147483648
    offset: 210763776
    type: freebsd-swap
    index: 3
    end: 4605951
    start: 411648
4.  Name: da2p4
    Mediasize: 2998234251264 (2.7T)
    Sectorsize: 512
    Stripesize: 0
    Stripeoffset: 2358247424
    Mode: r1w1e1
    rawuuid: 772276a8-76e5-11e7-ab25-0cc47aab18e0
    rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
    label: zfs5
    length: 2998234251264
    offset: 2358247424
    type: freebsd-zfs
    index: 4
    end: 5860532223
    start: 4605952
Consumers:
1.    Name: da2
    Mediasize: 3000592982016 (2.7T)
    Sectorsize: 512
    Mode: r1w1e2

Geom name: da3
modified: false
state: OK
fwheads: 255
fwsectors: 63
last: 5860533127
first: 40
entries: 152
scheme: GPT
Providers:
1.     Name: da3p1
    Mediasize: 209715200 (200M)
    Sectorsize: 512
    Stripesize: 0
    Stripeoffset: 20480
    Mode: r0w0e0
    rawuuid: 78bfbe36-76e5-11e7-ab25-0cc47aab18e0
    rawtype: c12a7328-f81f-11d2-ba4b-00a0c93ec93b
    label: efiboot6
    length: 209715200
    offset: 20480
    type: efi
    index: 1
    end: 409639
    start: 40
2.  Name: da3p2
    Mediasize: 524288 (512K)
    Sectorsize: 512
    Stripesize: 0
    Stripeoffset: 209735680
    Mode: r0w0e0
    rawuuid: 794d2df3-76e5-11e7-ab25-0cc47aab18e0
    rawtype: 83bd6b9d-7f41-11dc-be0b-001560b84f0f
    label: gptboot6
    length: 524288
    offset: 209735680
    type: freebsd-boot
    index: 2
    end: 410663
    start: 409640
3.  Name: da3p3
    Mediasize: 2147483648 (2.0G)
    Sectorsize: 512
    Stripesize: 0
    Stripeoffset: 210763776
    Mode: r0w0e0
    rawuuid: 7a0ce51b-76e5-11e7-ab25-0cc47aab18e0
    rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b
    label: swap6
    length: 2147483648
    offset: 210763776
    type: freebsd-swap
    index: 3
    end: 4605951
    start: 411648
4.  Name: da3p4
    Mediasize: 2998234251264 (2.7T)
    Sectorsize: 512
    Stripesize: 0
    Stripeoffset: 2358247424
    Mode: r1w1e1
    rawuuid: 7aacaa78-76e5-11e7-ab25-0cc47aab18e0
    rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
    label: zfs6
    length: 2998234251264
    offset: 2358247424
    type: freebsd-zfs
    index: 4
    end: 5860532223
    start: 4605952
Consumers:
1.  Name: da3
    Mediasize: 3000592982016 (2.7T)
    Sectorsize: 512
    Mode: r1w1e2
    
Geom name: ada0
modified: false
state: OK
fwheads: 16
fwsectors: 63
last: 5860533127
first: 40
entries: 152
scheme: GPT
Providers:
1.  Name: ada0p1
    Mediasize: 209715200 (200M)
    Sectorsize: 512
    Stripesize: 0
    Stripeoffset: 20480
    Mode: r0w0e0
    rawuuid: 6b868a03-76e5-11e7-ab25-0cc47aab18e0
    rawtype: c12a7328-f81f-11d2-ba4b-00a0c93ec93b
    label: efiboot0
    length: 209715200
    offset: 20480
    type: efi
    index: 1
    end: 409639
    start: 40
2.  Name: ada0p2
    Mediasize: 524288 (512K)
    Sectorsize: 512
    Stripesize: 0
    Stripeoffset: 209735680
    Mode: r0w0e0
    rawuuid: 6ba9caf0-76e5-11e7-ab25-0cc47aab18e0
    rawtype: 83bd6b9d-7f41-11dc-be0b-001560b84f0f
    label: gptboot0
    length: 524288
    offset: 209735680
    type: freebsd-boot
    index: 2
    end: 410663
    start: 409640
3.  Name: ada0p3
    Mediasize: 2147483648 (2.0G)
    Sectorsize: 512
    Stripesize: 0
    Stripeoffset: 210763776
    Mode: r0w0e0
    rawuuid: 6bcc89ce-76e5-11e7-ab25-0cc47aab18e0
    rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b
    label: swap0
    length: 2147483648
    offset: 210763776
    type: freebsd-swap
    index: 3
    end: 4605951
    start: 411648
4.  Name: ada0p4
    Mediasize: 2998234251264 (2.7T)
    Sectorsize: 512
    Stripesize: 0
    Stripeoffset: 2358247424
    Mode: r1w1e1
    rawuuid: 6be95f12-76e5-11e7-ab25-0cc47aab18e0
    rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
    label: zfs0
    length: 2998234251264
    offset: 2358247424
    type: freebsd-zfs
    index: 4
    end: 5860532223
    start: 4605952
Consumers:
1.  Name: ada0
    Mediasize: 3000592982016 (2.7T)
    Sectorsize: 512
    Mode: r1w1e2
    
Geom name: ada1
modified: false
state: OK
fwheads: 16
fwsectors: 63
last: 5860533127
first: 40
entries: 152
scheme: GPT
Providers:
1.  Name: ada1p1
    Mediasize: 209715200 (200M)
    Sectorsize: 512
    Stripesize: 0
    Stripeoffset: 20480
    Mode: r0w0e0
    rawuuid: 6cf97b5b-76e5-11e7-ab25-0cc47aab18e0
    rawtype: c12a7328-f81f-11d2-ba4b-00a0c93ec93b
    label: efiboot2
    length: 209715200
    offset: 20480
    type: efi
    index: 1
    end: 409639
    start: 40
2.  Name: ada1p2
    Mediasize: 524288 (512K)
    Sectorsize: 512
    Stripesize: 0
    Stripeoffset: 209735680
    Mode: r0w0e0
    rawuuid: 6d1ff4f8-76e5-11e7-ab25-0cc47aab18e0
    rawtype: 83bd6b9d-7f41-11dc-be0b-001560b84f0f
    label: gptboot2
    length: 524288
    offset: 209735680
    type: freebsd-boot
    index: 2
    end: 410663
    start: 409640
3.  Name: ada1p3
    Mediasize: 2147483648 (2.0G)
    Sectorsize: 512
    Stripesize: 0
    Stripeoffset: 210763776
    Mode: r0w0e0

I can't view the topmost lines of the output on my screen; that's the maximum amount of text I can see on my monitor, even using Pause/Break and Page Up on the keyboard.

Thank you

- Vincent
 
Two questions left unanswered though: where does mfid0 come from? And, less important at this time, what FreeBSD version are we dealing with here?

I did a bit of research and Google searching and found this guide:

https://forums.freebsd.org/threads/howto-freebsd-8-1-geli-zfs-large-disks.22602/

... which leads me to believe that you've used a hardware RAID setup to install ZFS onto, am I right?

If so then that is a very bad setup, and I can't help but speculate that it could be one of the reasons for your problems. Do not install ZFS onto a hardware RAID, especially not if you also let ZFS perform RAID actions of its own. That will seriously mess things up: you will suffer drastically in both performance and data security/consistency.

That bootstrap code you mentioned above? Wrong.

# gpart bootcode -b /mnt/boot/pmbr -p /mnt/boot/gptzfsboot -i 2 da2
# gpart bootcode -b /mnt/boot/pmbr -p /mnt/boot/gptzfsboot -i 2 da3

Note: this is based on the gpart output above, and you can repeat it for every HD which has a FreeBSD boot partition (which I don't understand: why so many?).
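If you want to cover them all in one go, something like the following would work (a sketch only; run it from sh rather than csh, and it assumes the freebsd-boot partition is index 2 on each of the partitioned drives from your first raidz3 vdev, as your gpart output shows):
Code:
for d in ada0 ada1 ada2 da0 da1 da2 da3; do
    gpart show $d | grep -q freebsd-boot && \
        gpart bootcode -b /mnt/boot/pmbr -p /mnt/boot/gptzfsboot -i 2 $d
done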

...at best, but that was before I noticed that you're using EFI, which is something I'm not fully familiar with other than /boot/boot1.efifat. See also this wiki page.

Bottom line though:
  • Do not install ZFS onto a hardware RAID.
  • Why so many FreeBSD boot partitions if you're only going to use a few to boot from (heck: why at all if you were booting from mfid0)?
  • If you need network support in a rescue environment, fire up /etc/netstart.
  • It might be better for performance/redundancy if you include your swap within your ZFS pool (see the sketch below).
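For that last point, something along these lines would do it (a minimal sketch; the 4G size and the zroot/swap name are only examples, and org.freebsd:swap=on lets rc(8) activate the swap zvol at boot):
Code:
# zfs create -V 4G -o org.freebsd:swap=on -o checksum=off -o compression=off zroot/swap
# swapon /dev/zvol/zroot/swap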
Hope this can help.
 
Version: FreeBSD-11.1-RELEASE-amd64-memstick.img
mfid0 is a device from a type of RAID or HBA card from what I know; I substituted mfid0 with my respective drives: ada0, ada1, ada2, ada3, ada4, ada5, da0, da1, da2, da3, da4, da5, da6 and so on and so forth.

The card I use is a 9211-8i flashed to IT mode.

So what I do is boot into the Live CD, import the pool, and enter this code?

Code:
# gpart bootcode -b /mnt/boot/pmbr -p /mnt/boot/gptzfsboot -i 2 da2
# gpart bootcode -b /mnt/boot/pmbr -p /mnt/boot/gptzfsboot -i 2 da3

And repeat it for all of my drives? I remember some of my drives wouldn't let me write the bootcode; is this normal?

Can I also add swap on my existing pool, or do I have to create a new pool for the swap?

I grew my pool slowly from one raidz3 (7-drive) vdev to three. I think adding the disks as simply as zpool add zroot raidz3 da1 da2 da3 da4 da5 da6 da7

was my mistake; I should have partitioned the disks with gpart for ZFS before adding them to my existing pool.
 
Version: FreeBSD-11.1-RELEASE-amd64-memstick.img
mfid0 is a device from a type of RAID or HBA card from what I know; I substituted mfid0 with my respective drives: ada0, ada1, ada2, ada3, ada4, ada5, da0, da1, da2, da3, da4, da5, da6 and so on and so forth.
So mfid0 does not apply to your situation then? OK, that's one (virtual) problem taken care of.

So what I do is boot into the Live CD, import the pool, and enter this code?
Only if applicable, of course: only if the drive actually has boot capabilities (which is usually determined by the existence of a freebsd-boot type partition). One important aspect here: be sure to use the bootcode from your installed OS and not the one from the live CD.


That's why I used /mnt/boot in my example: it makes sure that the bootcode version is fully on par with your current ZFS version. Sometimes even ZFS gets an upgrade, and when it does the bootcode in most cases also needs to be re-installed; the versions should obviously match (older bootcode can theoretically have problems with a newer ZFS pool).
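For example, after a pool upgrade the usual pattern looks roughly like this (a sketch; run it on the installed system and repeat the gpart line for every drive that has a freebsd-boot partition, adjusting the index and drive name to your layout):
Code:
# zpool upgrade zroot
# gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 2 ada0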

Hope this can help.
 
I tried putting the bootcode on all of my drives, since I found out on another site that when you expand your pool, "some BIOSes can shift the boot order for some reason".

I started from ada0; when I got to ada3 there was a problem.
Code:
root@:/mnt # gpart bootcode -b /mnt/boot/pmbr -p /mnt/boot/gptzfsboot -i 2 ada3
gpart: No such geom: ada3
Can I create a bootcode for ada3 without damaging the existing pool?
 
I started from ada0; when I got to ada3 there was a problem.
Yeah, that is a little peculiar; it also shows in the gpart output you shared earlier: ada3 isn't listed even though it's still part of your ZFS pool.

This is why I mentioned "if applicable". It's not entirely uncommon for a drive to be added to a ZFS pool in a "raw state" (that's what I like to call it). In other words: the drive is added fully as-is, without a partitioning scheme or any of that. Obviously this also means that you won't be able to bootstrap it, nor will you be able to boot from it, so trying to bootstrap those drives is pointless.
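You can see the difference with gpart show: a partitioned drive such as ada0 lists its GPT scheme and partitions, while a raw drive gives the same error you already saw from gpart bootcode:
Code:
# gpart show ada3
gpart: No such geom: ada3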
 
Then why mark the thread solved? I'm a little confused.

If you're in the boot menu (after the error messages), does the lsdev command show you anything useful (used at the 'OK' prompt)?

Also: which HD are you using to boot from? I assume that you configured something in the BIOS or in another system setting?
 
You can't add the bootcode to drives that don't have partitions.

Your first set of disks are partitioned, with p4 being used for ZFS. Those drives have boot partitions where you can install the boot code.

The rest of your disks are added to the ZFS pool as raw devices, meaning there are no partitions, there are no partition tables, and there's nowhere to copy the boot code to. You cannot boot from these drives! If your BIOS gets confused and re-numbers the drives in the system such that one of these drives is listed first, you will not be able to boot.

This is one of the reasons why using partitions on drives for ZFS is handy. The other is that you can use GPT labels for the partitions and add the label devices to your pool for better visibility into the hardware setup. And, if you need to add a boot partition down the road, or increase the size of it, you can do so (provided you created the first partition with -a 1M so that it starts at the 1 MB mark on the disk).
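For reference, a drive prepared that way would look roughly like this (a sketch only; da8 and the zfs8 label are placeholders, and note this can't be retrofitted onto the raw drives already in the pool without replacing them one by one):
Code:
# gpart create -s gpt da8
# gpart add -a 1m -s 512k -t freebsd-boot da8
# gpart add -a 1m -t freebsd-zfs -l zfs8 da8
# gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 da8
The ZFS partition can then be added to a pool via its label device, e.g. gpt/zfs8.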

I'd go into your BIOS and double-check the boot setup to make sure the correct drives are being listed as "bootable" and in the correct order.
 
Thank you phoenix! I will try that when I get home. In the Supermicro BIOS I only listed one bootable drive, ada0, and disabled the rest; does that make a difference in how ZFS sees the other drives as unavailable?
 
Can I have a new FreeBSD ZFS installation just for booting, and use my old pool (which contains all my files and data) to run my previously installed packages and services? I just had that thought but I don't know if it is possible; my new FreeBSD ZFS installation is on an 80 GB SSD used only for booting.
 
Can I have a new FreeBSD ZFS installation just for booting, and use my old pool (which contains all my files and data) to run my previously installed packages and services?
That is definitely possible. Just keep in mind that there might be a little more overhead where required resources (memory) are concerned, but overall you should be fine. Oh, and be sure to rename your current pool so that you don't get any overlap; you can use the # zpool import command for that.

For example: # zpool import zroot zdata would import the pool zroot but rename it to zdata.

Hope this can help.
 
I'm sorry for all the trouble, and thank you for helping me. Do I have to change any mountpoints? When I import my zdata pool it collides with my new installation of FreeBSD and I get "failed to initialize ZFS library". Should I just wait? Thank you! :)
 
Do I have to change any mountpoints?
Definitely. I mean, if this is your previous zroot then it will have at least one overlapping mountpoint which will be /, and that wouldn't go too well.

So you definitely need to start by getting all of that out of the way and make sure that things don't overlap. Also: if it's only booting you're after, you could also consider just using UFS, especially if you're using only one HD for this.
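For example, something along these lines keeps the old pool from mounting over your new root while you sort out the mountpoints (a rough sketch; /data is just a placeholder, and zdata/ROOT/default assumes the installer's default layout, so check zfs list -o name,mountpoint for any other datasets with an explicitly set mountpoint):
Code:
# zpool import -N zdata
# zfs set mountpoint=/data zdata
# zfs set mountpoint=/data/oldroot zdata/ROOT/default
# zfs mount -a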
 