
Cannot boot

stream

New Member

Thanks: 1
Messages: 11

#1
Hi,

I'm a relative newbie. I had FreeBSD 11 on my desktop with two ZFS pools: a two-way mirror on two SSDs, and an un-mirrored single-disk stripe on an NVMe SSD.

Things were generally OK until now. Recently I made some changes to /boot/loader.conf to get IPMI working in FreeBSD, following the instructions in https://polytarian.com/2016/09/21/setup-ipmi-in-freebsd-10/.

When I rebooted, the boot failed: FreeBSD drops to the mountroot> prompt and complains that the disks are corrupt.
I tried various things:

1) At the mountroot prompt, I typed zfs:/dev/ada0p2; it says "error 2: filesystem not found". I also tried other partitions, with the same error.

2) I went to the loader prompt, tried to unset the ipmi_load variable, and updated the IPMI variables in device.hints. Unfortunately, show at the loader prompt does not list any of the ipmi variables.

3) I booted into the live CD and tried geli attach /dev/.. (as suggested in https://serverfault.com/questions/8...-zfs-from-live-cd-and-find-the-root-partition). That didn't work either; it says metadata not found.

So basically I am stuck with no options. Can you please let me know what I can try? I don't mind re-installing FreeBSD if that is the only option. If it is, is it possible to create a restore point, so that in the future, if the system cannot boot, I can restore from a known-good copy? How would I do this?

Appreciate your help. Thank you in advance.
 

ShelLuser

Daemon

Thanks: 800
Messages: 2,008

#2
What does lsdev tell you at the loader prompt (the ok> prompt)?

Also: what does GELI have to do with anything? Did you encrypt your stuff?

I can't help but get the feeling that you're following instructions without being sure whether they actually apply to your situation. And that would be a bad thing.

So, what happens when you boot your live CD and issue # zpool import? What does that tell you?

Also, can you list the output of gpart show?
 

stream

New Member

Thanks: 1
Messages: 11

#3
Thank you for the suggestions; I appreciate your help.

Below, I've typed up the output from all the commands you suggested.

1)
lsdev:
part devices:
part0: 409600 blocks
part1: 1024 blocks
part2: 16777216 blocks
net devices:
zfs devices:
zfs:mypool

2)
geli: no, I didn't encrypt. I simply tried that option because someone posted it as a way to attach devices in live CD mode and edit the corrupted /boot/loader.conf.

3)
zpool import
pool: mypool
id:....
state: online
action: the pool can be imported using its name or numeric identifier
config:
mypool ONLINE
mirror-0 ONLINE
ada0p4 ONLINE
ada1p4 ONLINE

pool: mzpool
id: ....
state: ONLINE
status: the pool was last accessed by another system
action: the pool can be imported using its name or numeric identifier and the -f flag
see: http://illumos.org/msg/ZFS-8000-EY
config:
mzpool ONLINE
nvd0 ONLINE

4) gpart show
showing all the disks' info; in particular, it says nvd0 is corrupt.

=>nvd0 GPT (267G) [CORRUPT]
1 efi
2 freebsd-boot (512)
3 freebsd-swap
4 freebsd-zfs

=>diskid/DISK,,,, [CORRUPT]
same as nvd0

=>ada0 GPT (256G)
1 efi
2 freebsd-boot
3 freebsd-swap
4 freebsd-zfs

=>diskid/DISK-......
same as ada0

=>ada1 GPT (256G)
same as ada0

=>diskid/DISK ...
same as ada1

=>da0 GPT (971M)
flash drive info

=>diskid/DISK ...
same as da0


 

ShelLuser

Daemon

Thanks: 800
Messages: 2,008

#4
Those corrupt disks are most certainly a problem. It might be possible to fix them using gpart, but one should always be careful. So let's first focus on trying to access your data, so that you can back it up if you want to. Looking back, I should also have asked for zpool list output, but let's see how far we can take this.

It looks like you have two ZFS pools, called mypool and mzpool, and you normally boot from the first. The good news is that the pools are still recognized by the system, so with a little luck you should still be able to fully access all your stuff (when lsdev no longer shows any ZFS devices, that's usually a sign that something "bad" happened).

Boot using the live CD ("disc1"), log in, and then try this (these are all commands; I'm not going to bother with [cmd] here, yes, I am lazy sometimes :p):
  • mkdir /tmp/mypool
  • mkdir /tmp/mzpool
  • zpool import
  • zpool import -fR /tmp/mypool mypool
  • zpool import -fR /tmp/mzpool mzpool
My assumption is that once you have run these commands, zfs list will show you all your filesystems and you should be able to fully access them through /tmp. If for some reason this doesn't work and you're getting weird error messages about corruption or who knows what, then try this instead:
  • Use all the commands shown above except the last two 'import' commands
  • zpool import -o readonly=on -fNR /tmp/mypool mypool
This imports your ZFS pool read-only and without mounting any filesystems (so you'd see them listed by zfs list, but you won't be able to access them just yet). If this command succeeds, you can then follow up with zfs mount -a, after which you should be able to access your stuff again through the /tmp directory.
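As a sketch, the read-only fallback path combines into the following (pool name and altroot directory taken from the commands above):

```shell
# Read-only import without mounting, then mount everything by hand.
# "mypool" and /tmp/mypool are the names used earlier in this thread.
mkdir -p /tmp/mypool
zpool import -o readonly=on -fNR /tmp/mypool mypool
zfs list        # datasets should now be listed, but not yet mounted
zfs mount -a    # mount them under /tmp/mypool
```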

Note: in the rest of this message I'm assuming that you mounted the system without any errors.

So this would be a good time to back up your data if you need to. Look into zfs send (see zfs(8)) and/or tar(1), optionally combined with ssh(1). Small sidestep: # zfs send zroot/home | ssh peter@myhost "dd of=/opt/backups/server_home.zfs". This creates a ZFS stream which is sent to a remote server over SSH, where the data stream is processed by dd in order to actually store it as a file.
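One caveat with that sidestep: zfs send operates on snapshots, so in practice you would create one first. A hedged sketch, with the dataset name, remote host, and backup path all being placeholders:

```shell
# Sketch: snapshot a dataset, then stream it to a remote host over SSH.
# "mypool/usr/home", "peter@myhost" and the backup path are placeholders.
zfs snapshot mypool/usr/home@pre-reinstall
zfs send mypool/usr/home@pre-reinstall | \
    ssh peter@myhost "dd of=/opt/backups/home.zfs"
```

Such a stream can later be restored into any pool with zfs receive.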

Now for your booting problem...

I noticed that you're using EFI, and that changes a few things for me, because I'm not familiar with that process yet.

So what I suggest for now is to look into /tmp/mypool/boot/loader.conf and check for any 'weird' changes you made. If you made recent changes after which the system no longer booted: remove them. Note that you can use # to comment a line out. Basically, the only thing I would expect in there is zfs_load="YES", optionally followed by some statements required to boot using UEFI (but that's the part I'm not sure of).
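As a rough sketch, assuming a plain ZFS-on-root install, the file might contain little more than this (the ipmi line is the kind of entry to comment out; anything beyond zfs_load is an assumption):

```
# Minimal sketch of /boot/loader.conf for a ZFS-on-root system.
zfs_load="YES"
#ipmi_load="YES"   # suspect IPMI entry, commented out rather than deleted
```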

Once you have all of this out of the way, you could look into gpart recover (see gpart(8)) to see if it can somehow fix your corrupted partition scheme.
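For the disk from the gpart show output above, a hedged sketch would be (gpart recover rewrites the backup GPT metadata; double-check the device name before running it):

```shell
# Sketch: restore the secondary GPT header/table on the corrupt disk.
# nvd0 is the NVMe disk reported as [CORRUPT] earlier in this thread.
gpart show nvd0      # should currently report [CORRUPT]
gpart recover nvd0   # rewrite the backup GPT metadata
gpart show nvd0      # the [CORRUPT] flag should be gone
```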

Hope this can help!
 

stream

New Member

Thanks: 1
Messages: 11

#5
Thanks again, this is a great help. Even though I'm stuck, I gotta love FreeBSD; the forum support is fantastic.

To your question:
"zfs list" does not show anything.

I did get the pools through:
  • mkdir /tmp/mypool
  • mkdir /tmp/mzpool
  • zpool import
  • zpool import -fR /tmp/mypool mypool
  • zpool import -fR /tmp/mzpool mzpool
And then zfs mount -a

Unfortunately, I don't see the boot folder or loader.conf anywhere under /tmp and its subdirectories.

/tmp/mypool contains the following folders:
tmp usr var mypool

Also, zfs list now shows more info, but it has no mountpoint for ROOT; it does have my /usr/home etc. Perhaps that is the reason I can't access loader.conf?
....
mypool/ROOT # # # none
mypool/ROOT/default # # # /tmp/mypool
.....
 

stream

New Member

Thanks: 1
Messages: 11

#6
Finally got it working :) Thank you for the suggestions.

I went back to the link in the OP (https://serverfault.com/questions/8...-zfs-from-live-cd-and-find-the-root-partition).
As described there, you apparently have to run an extra command to mount the boot dataset (e.g. zfs mount zroot/ROOT/default).
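Since my pool is called mypool rather than zroot, the equivalent sketch for this thread would be (assuming the altroot import from the earlier posts):

```shell
# Sketch: the root dataset is not picked up by "zfs mount -a" because of
# its mountpoint/canmount settings, so mount it explicitly.
# "mypool" and /tmp/mypool are the names used earlier in this thread.
zpool import -fR /tmp/mypool mypool
zfs mount mypool/ROOT/default   # /tmp/mypool/boot/loader.conf is now reachable
```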

With that, I was able to get to loader.conf and deleted all the entries I had put in for IPMI. Got my machine to boot properly. Whew, what a relief!!

Thank you again so much for your help.