updated 9-STABLE with boot issues?

Looks like some added functionality has made my system start recognizing an old mirror that was set up some time ago, back when this system was on its previous motherboard.

Upon booting the new kernel I'm getting things like
Code:
ROOT MOUNT WAITING FOR GRAID-NVIDIA

It eventually fails. It mentions something about
Code:
GEOM_RAID NVIDIA-1 VOLUME STRIPE
and it dumps me out to a mount prompt.

I can boot my old kernel, but since I just did a buildworld, buildkernel, and installkernel and then rebooted, some things aren't working now because of version incompatibilities (ZFS, for example), so all my main storage is inaccessible at the moment.

So I just have my rootfs on a single SATA disk; I'm not doing any RAID or anything special for most of my system. The ZFS pool just stores data like multimedia, etc. The only system thing it's storing is /home; everything else is on my UFS-formatted SATA drive with labels, for example:

Code:
Filesystem              Size    Used   Avail Capacity  Mounted on
/dev/label/rootfs         2G    676M    1.1G    37%    /

Wondering how I can find this hidden Nvidia metadata and wipe it out, or else disable this Nvidia RAID? My FreeBSD 9 box used to be on an Nvidia board but is running on an Intel chipset now; there shouldn't be anything Nvidia showing up anywhere.

Any help would be greatly appreciated!

Thanks,
-Jerry
 
Also, apparently renaming the /boot/kernel.old/ directory to just kernel allows the ZFS modules to load properly. I don't know why they won't load when I manually specify booting kernel.old. So I'm back in business, I guess 100%, but unable to proceed with any upgrades until I hammer this out :|

I took some pics with my phone during bootup,
https://www.dropbox.com/s/ptkue1ecgbyk2y1/IMG_20120530_235129.jpg
https://www.dropbox.com/s/hik0vqlewto6amn/IMG_20120530_235159_1.jpg
https://www.dropbox.com/s/8llupgzhotoja8j/IMG_20120530_235201_1.jpg
 
mav@ described this a couple of days ago. graid(8) can deal with the RAID metadata even if the disk is not on that controller any more.
# graid list
to show the RAID, then graid delete -f name to erase the metadata. The danger is that it will wipe out some other data on that disk.
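
For reference, a minimal sketch of that sequence (NVIDIA-1 is the geom name that turns up later in this thread; substitute whatever graid list reports on your system):
Code:
# graid load -v              # load geom_raid if it is not compiled into the kernel
# graid list                 # note the "Geom name" of the stale array
# graid delete -f NVIDIA-1   # erase the metadata for that array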
 
Been reading all of those man pages, graid/glist/glabel. Looks like I had to run graid load -v and then graid list -a to have it show up:

Code:
Geom name: NVIDIA-1
Metadata: NVIDIA
Consumers:
1. Name: ada0
   Mediasize: 320072933376 (298G)
   Sectorsize: 512
   Mode: r0w0e0
   ReadErrors: 0
   Subdisks: (null)
   State: NONE

So since you mentioned a data loss issue, if I copy all the contents of this disk to another disk temporarily, then erase the metadata, and then copy the contents back, would that be safe? Perhaps a full tar backup of my ada0 partitions to my ZFS array? Or will it possibly erase something like partition tables or other lower-level information that I can't just back up with a tar command?

Thanks,
-J
 
jbeez said:
So since you mentioned a data loss issue, if I copy all the contents of this disk to another disk temporarily, then erase the metadata, and then copy the contents back, would that be safe? Perhaps a full tar backup of my ada0 partitions to my ZFS array? Or will it possibly erase something like partition tables or other lower-level information that I can't just back up with a tar command?

Yes, a full backup is the only way to be sure. Use dump(8)/restore(8), as kpa said, to back up the filesystems. That won't back up the partition table; see gpart(8) backup for that.

With that, even if graid(8) destroy wipes something out, you can repartition and restore. Or just erase the beginning and end of the disk with dd(1) to manually erase the metadata.
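
As a rough sketch of the backup side (the backup paths here are placeholders, and the label name is the one shown earlier in the thread):
Code:
# gpart backup ada0 > /path/to/backup/ada0.gpart              # save the partition table
# dump -0Lauf /path/to/backup/rootfs.dump /dev/label/rootfs   # repeat for each UFS filesystem
# gpart restore ada0 < /path/to/backup/ada0.gpart             # later, only if the table was wiped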

If the NVidia motherboard is still available, the BIOS menus might be able to destroy the RAID without wiping other data.
 
Hello,

Thank you all for the replies.

Is there a way for me to disable this automatic RAID functionality at boot time in the new kernel? I'd like to do that until I have a chance to meticulously back up the various bits of data and partitioning I have on this disk in a way that will make my restore smooth.

Thank you,
-Jerry
 
Only disks with leftover BIOS RAID metadata that hasn't been wiped out by an MBR at the start of the disk, or by a GPT at the start and end of the disk, are affected. There should not be a lot of them, because most will still be part of a RAID that can be used.

So: destroy the RAID from the BIOS before reusing disks from it.
 
Example screenshot of problem (and solution)

I had a similar problem installing FreeBSD 9.1-RC3 on an HP DL320 G3 which had previously run Windows Server. The hardware RAID was recognized during boot, but not enough to actually be used. The boot process stopped at "Root mount waiting for: GRAID" and then gave up after some time (see screenshot). The solution was to go into the hardware RAID config and remove the RAID. Having support for some common hardware RAIDs in FreeBSD would be great.
 

Attachments

  • graid.png
jbeez said:
Hello,

Thank you all for the replies.

Is there a way for me to disable this automatic RAID functionality at boot time in the new kernel? I'd like to do that until I have a chance to meticulously back up the various bits of data and partitioning I have on this disk in a way that will make my restore smooth.

Thank you,
-Jerry
I was wondering the same thing, as I'm having the same issue after upgrading to 9.1-RELEASE. I can't seem to get rid of the metadata with:
# graid delete -f NAME
Once that is done, it isn't shown by
# graid list -a
anymore, but the problem persists and the metadata reappears upon reboot. There is no option to destroy the array in the BIOS. (It's fake software RAID.) Presumably the metadata is somehow not actually deleted, or it is recreated by the BIOS even though the feature is disabled, so I can only assume it's a bug. If it is, it doesn't appear to be fixed in later versions of the BIOS. The mainboard is the ASUS M2N-LR/SATA.

If there is a way to disable the automatic RAID functionality, that would seem to be the best, if short-term, solution, rather than removing the disk and hoping a fresh install would fix it.

Thanks in advance.

Edit: Solved by the post below. Many thanks to Terry Kennedy.
 
Nulani said:
... but the problem persists and the metadata reappears upon reboot.
Back in "the old days" it was necessary to do:
Code:
sysctl kern.geom.debugflags=16
to let things do actual writes to disks the system thought were in use. Normally, the utilities would report an error rather than silently failing to update the data.

I also seem to recall cases where changes were overwritten by the prior data when the disks were unmounted (for example, during a system shutdown / reboot).
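
If that is what is happening here, something along these lines may be worth a try (a sketch only; NVIDIA-1 is the geom name graid list reported earlier in the thread, so substitute whatever shows up on your system):
Code:
# sysctl kern.geom.debugflags=16     # allow writes to providers the kernel thinks are in use
# graid delete -f NVIDIA-1           # erase the stale metadata again
# sysctl kern.geom.debugflags=0      # put the safety back afterwards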
 
The debugflags setting overrides a safety check, allowing writes to devices that are in use. If that were the problem (it almost never is), the user would see a message about the write not being allowed.

If it were me, I'd connect the disk through a different controller, one that will not recognize the metadata. An external USB adapter would work. Then erase the first and last megabyte of the disk with dd(1). It might be possible to turn off the RAID function in the BIOS and do the same thing.
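
As a sketch of the dd(1) part (assuming the disk shows up as da0 on the USB adapter and has 512-byte sectors; check the diskinfo(8) output and adjust, and remember that erasing the first megabyte also destroys the partition table, so back it up first):
Code:
# diskinfo da0        # second field: sector size, fourth field: size in sectors
# dd if=/dev/zero of=/dev/da0 bs=1m count=1          # erase the first megabyte
# dd if=/dev/zero of=/dev/da0 bs=512 count=2048 \
    seek=$(( $(diskinfo da0 | awk '{print $4}') - 2048 ))   # erase the last megabyte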
 