Solved: CentOS 5.5 in bhyve?

I've got some legacy proprietary software that refuses to install on anything other than CentOS 5.5, so I'm trying to get that to run under bhyve. However, I can't even get it to install; bhyve dies on me:
Code:
# vm install vnd COS-55_1.iso
# tail -n 6 vm-bhyve.log
Aug 19 22:27:56:  [bhyve console: -l com1,/dev/nmdm2A]
Aug 19 22:27:56:  [bhyve iso device: -s 3:0,ahci-cd,/vm/.iso/COS-55_1.iso]
Aug 19 22:27:56: starting bhyve (run 1)
Aug 19 22:27:56: bhyve exited with status 134
Aug 19 22:27:56: destroying network device tap2
Aug 19 22:27:56: stopped
The configuration for the VM is basically verbatim from the template for CentOS 6. While I realize I'll ultimately need to edit it to load the correct kernel name after it's installed, is there any reason why CentOS 5.5 can't run under bhyve?
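For reference, the config is roughly this (a sketch based on the stock CentOS 6 sample template; the cpu/memory values, switch name and disk name are just placeholders for whatever I actually set):
Code:
loader="grub"
cpu=1
memory=512M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0"
# install-time boot lines, straight from the CentOS 6 template
grub_install0="linux /isolinux/vmlinuz"
grub_install1="initrd /isolinux/initrd.img"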
 
Booting on the console seems to crash at the following point for me -

Code:
ACPI: Core revision 20060707
Abort trap

Try changing the install lines in the config file as below. I was able to get into the installer with this (basically adding acpi=off)

Code:
grub_install0="linux /isolinux/vmlinuz acpi=off"
grub_install1="initrd /isolinux/initrd.img"

Once installed, I'd suggest installing grub2 in the guest if possible and letting it boot using that, rather than putting the kernel/initrd load lines and version numbers directly into the guest config. Cleaner and easier in the long run.
 
I've never worked with bhyve, but I'm fairly familiar with many other virtualization solutions. The only thing I'd like to add is that disabling ACPI on the guest means the host cannot shut down the guest properly. Therefore, in the event of a host shutdown (for whatever reason, such as a power failure or routine maintenance), the guest VM will simply be killed. This can result in corruption and data loss.

If there's no way around disabling ACPI to make your guest run on the host, keep in mind that most virtualization hosts provide some sort of guest agent (e.g. qemu-guest-agent on QEMU or the VMware Tools on ESXi). This agent will usually take care of gracefully shutting down the guest without using ACPI. However, I have no idea whether bhyve supports something like that (or whether you could get it working on your CentOS 5.5 guest).
 
I'm not sure whether CentOS 5 already supported virtio devices. That might be why it's crashing; the CentOS 6 template uses virtio-net and virtio-blk. I would try changing the disk emulation to ahci-hd first.
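Something like this in the guest config, for example (disk0_name here is just a placeholder for whatever your disk image or zvol is called):
Code:
# ahci-hd emulation instead of virtio-blk for the disk
disk0_type="ahci-hd"
disk0_name="disk0"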
 
Booting on the console seems to crash at the following point for me
I'm curious, what's the best way to watch the console output immediately after starting? I'm guessing something like vm install vnd COS-55_1.iso && vm console vnd?

Try changing the install lines in the config file as below. I was able to get into the installer with this (basically adding acpi=off)
ACPI was indeed the issue. Thanks!

Once installed I'd suggest installing grub2 in the guest if possible and letting it boot using that
What does the configuration look like for vm-bhyve to have it use grub from within the guest? Just loader=""?

Edit: Hrm, it looks like it installs an older version of grub, but in theory I think you're supposed to use loader="bhyveload" to use the grub in the guest itself. I've given up trying to get it working using the guest's grub, but I did get it to boot by adapting the configuration the guest installed to the newer grub2 syntax, using this:
Code:
grub_run0="linux /vmlinuz-2.6.18-194.el5 ro root=/dev/VolGroup00/LogVol00 acpi=off"
grub_run1="initrd /initrd-2.6.18-194.el5.img"
Also, if the guest used grub2, it looks like you could still use bhyve's grub but pull the config from the guest with some grub_run_partition/grub_run_dir/grub_run_file magic.

...disabling ACPI on the guest means the host cannot shut down the guest properly. Therefore, in the event of a host shutdown (for whatever reason, such as a power failure or routine maintenance), the guest VM will simply be killed.
I'm aware it's not ideal. My ultimate goal (though I don't know if it will work) is to install the proprietary software, then update to a much newer CentOS. The nice thing about doing it in a VM is that it's trivial to roll back in case something doesn't work, just by reverting to a snapshot.
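The rollback itself is just a ZFS snapshot on the guest's dataset, roughly like this (the dataset name is hypothetical, it's wherever your vm datastore lives; guest stopped first, and this assumes the disk image lives in that dataset):
Code:
# take a snapshot before attempting the upgrade
zfs snapshot zroot/vm/vnd@pre-upgrade
# revert if it goes sideways (-r destroys any snapshots newer than the target)
zfs rollback -r zroot/vm/vnd@pre-upgrade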

I'm not sure whether CentOS 5 already supported virtio devices.
Both devices (disk and ethernet) are appearing during the installation, so I assume it supports them. Will update this post if for some reason they don't work post-install.
 
I'm curious, what's the best way to watch the console output immediately after starting?

Code:
vm -f start|install guest

The -f option runs the guest in foreground mode so everything outputs to stdout. The only downside is that, since it's attached directly to stdout, you can't get out of it without shutting down the guest.**

If you install & use tmux (vm set console=tmux), there's also a -i interactive mode that starts straight into a tmux session. You can then use Ctrl+B, D to detach.
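For the install in this thread that would be something like the below (guest name taken from the earlier posts; you should be able to re-attach to the tmux console later with vm console):
Code:
# interactive mode: drops straight into a tmux session attached to the console
vm -i install vnd COS-55_1.iso
# Ctrl+B, D detaches; re-attach later with
vm console vnd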

What does the configuration look like for vm-bhyve to have it use grub from within the guest?

I think you've already worked this out but you still need loader="grub" in the config file. By default this will look for /boot/grub/grub.cfg in the guest. If that file doesn't exist you basically have two options -

1) Specify the boot commands via the config file. The downside of this is that you have to work them out first, then possibly change them if you run an upgrade or make changes inside the guest.
2) Install grub2 in the guest and have it generate its own grub.cfg file. If this doesn't go into the exact path above, you can use the run_dir/run_file options to point to it.
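As a rough example, option 2 would end up looking something like this in the guest config (the partition/dir/file values here are only guesses for a typical layout; point them at wherever grub2 actually writes its grub.cfg in your guest):
Code:
loader="grub"
# only needed if grub.cfg isn't at the default /boot/grub/grub.cfg
grub_run_partition="1"
grub_run_dir="/boot/grub2"
grub_run_file="grub.cfg"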

(See https://github.com/churchers/vm-bhyve/wiki/Configuring-Grub-Guests for more info on using the grub2-bhyve loader)

As joel says, you do lose the ability to power off the VM without doing it from inside the guest. I did also try UEFI with & without CSM, but this is the only way I could get 5.5 to boot.

**In the most recent version the command should be vm start -f guest. The options have all moved to after the subcommand to make everything more consistent. This version isn't in ports yet though, so you won't be using it unless you're getting vm-bhyve straight from GitHub.
 
Why not simply use vm console <vmname>?

I assume this is aimed at me.

The foreground/interactive modes catch all the output from the guest, right from the beginning. Connecting to the console after starting can sometimes miss the start of the bootloader process, especially if it outputs errors and fails immediately. You get to see all of this output if you use -f/-i. In some cases the boot menu can also appear garbled or invisible if you connect after it's already been drawn to the screen (that was more of an issue with cu than tmux, as you only see output from after you connect).

I pretty much always use tmux now (always found the exit keys for cu temperamental), and find interactive mode by far the nicest way to handle booting into installers. Foreground works just as well for installs but as mentioned, you can't get the console back unless you shut the guest down.
 