How to include Xen as Host kernel (on FreeBSD) in my custom kernel

Hi people, please help me.
How do I include the Xen host kernel options in my custom kernel?
I installed XenAsHost (version 4.7) on FreeBSD 11.2-RELEASE from ports and want to add the option to my kernel. How do I do that?
I have a previous custom kernel (named "Core"). In "Core" I build the options I need (pf, IPv6 disabled, and so on). So, how do I integrate the options from "xen-kernel" into my "Core", or modify "xen-kernel" to add options?
Thank you.
 
I had Xen DOM0 up before moving to bhyve. I really didn't know what I was doing, which made it harder.
The one tip I can give you is to get libvirtd up and running before Xen, and then make a xen.xml for it.
There is a command console called virsh that is essential to getting all this running.
Once you do get it set up, you can use the VirtManager GUI to make VMs.
It worked well once I figured it out. I was seriously disappointed by the rudimentary I/O speeds, both net and disk.
I never tried passthrough, only the virtio drivers.

I won't say it was a waste of time but it was.
 
Indeed, did you consider bhyve instead? It's quite simple now using sysutils/vm-bhyve to manage your VMs. If there isn't a specific requirement that only Xen can fulfill, I'd strongly recommend bhyve.

I had a server running Xen with Linux and LVM-based storage for the VMs and completely replaced it with FreeBSD, jails and, where needed, bhyve with virtual disks in zvols.

There was just one little problem: Windows guests didn't work with virtio-blk disks, but a small patch fixes that. If someone needs it, I can post it here when I'm at home.
 
Looking back through the instructions SirDice posted, I can see I went a different route.
I went into virtualization wanting a GUI, and libvirt is required for Virt-Manager.
So I went that tortured route. Nothing wrong with it, as I liked the colored status bars for my VMs.

I did not know virsh came from KVM.
 
Just for clarity, there is no need to build a custom kernel for Xen. Download the ports and configure.

You do have to run in Legacy BIOS mode, not UEFI.
 
Handbook: 21.8. FreeBSD as a Xen™-Host

Handbook: Chapter 8. Configuring the FreeBSD Kernel

Note that Xen on FreeBSD is still highly experimental and unstable. It also has a whole bunch of caveats.
Also note that Xen 4.7 is deprecated:
Code:
DEPRECATED: This port is about to be removed, please update to a newer Xen version
Thank you.
Yes, I read those, but I am not asking about deploying XenAsHost.
I am asking how to include another option in the xen-kernel:
Code:
#options        INET6                   # IPv6 communications protocols
Because after a restart, to apply the settings, Xen will be used as the kernel:
Code:
# sysrc -f /boot/loader.conf xen_kernel="/boot/xen"
This xen-kernel has "INET6" support; how do I disable it?

Do you understand me? Please forgive my bad English (I am studying).
 
Indeed, did you consider bhyve instead? It's quite simple now using sysutils/vm-bhyve to manage your VMs. If there isn't a specific requirement that only Xen can fulfill, I'd strongly recommend bhyve.

I had a server running Xen with Linux and LVM-based storage for the VMs and completely replaced it with FreeBSD, jails and, where needed, bhyve with virtual disks in zvols.

There was just one little problem: Windows guests didn't work with virtio-blk disks, but a small patch fixes that. If someone needs it, I can post it here when I'm at home.
Thank you. bhyve is a nice idea. I read about sysutils/vm-bhyve on FreshPorts and saw the good words "BSD/Linux/Windows guest support".
But is that support for managing guests, or support for running those guest operating systems?
 
This xen-kernel has "INET6" support; how do I disable it?
I see your dilemma now. The first place I looked was the 'configuration options' of the xen-kernel port. None.
Have you dug through the port's source? Compiling a port puts all the source on your computer.
 
I see your dilemma now. The first place I looked was the 'configuration options' of the xen-kernel port. None.
Have you dug through the port's source? Compiling a port puts all the source on your computer.
Thank you. You gave me a very good hint. So, where is the file to add or remove kernel options?
 
Well, you need a ports tree.
Then cd /usr/ports/emulators/xen-kernel.
Then you compile the port (make). It will download files from the internet and build the port.
The work/ directory will contain the raw files after extraction. It could be a 'pre-compiled' kernel, I don't know. You need to look.

A different path to choose might be a sysctl. When I looked around, everything was Linux-based.
It seems xenbr0 has a sysctl to limit IPv6 on Linux.
So maybe dig around and see what sysctls exist for FreeBSD Xen, if any.
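If it helps, the port-digging steps described above might look like this (a sketch; `make extract` is the standard ports target that fetches and unpacks the source without building it):
Code:
cd /usr/ports/emulators/xen-kernel
make extract      # fetch the Xen distfile and unpack it into work/
ls work/          # inspect the unpacked Xen source tree
ls files/         # FreeBSD-specific patches shipped with the port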
 
This blog is where I found my instructions for FreeBSD dom0. As you can see, it uses libvirt.
You can control/disable INET6 with libvirt; I saw instructions for that.
 
You understand me?
Oh, right. You're asking about your own custom kernel to run on Xen (as a Xen guest).

You will need to build a custom kernel: 8.2. Why Build a Custom Kernel?

So, how do I integrate the options from "xen-kernel" into my "Core", or modify "xen-kernel" to add options?
You need these options:
Code:
# Xen HVM Guest Optimizations
# NOTE: XENHVM depends on xenpci.  They must be added or removed together.
options         XENHVM                  # Xen HVM kernel infrastructure
device          xenpci                  # Xen HVM Hypervisor services driver

There are probably some more, but I don't have access to Xen. Just start with a GENERIC kernel and have a look at what's being detected. The GENERIC kernel should have everything included already.
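For the custom "Core" config itself, a minimal sketch might be (the file name and ident are just examples; on amd64, GENERIC already contains the XENHVM/xenpci lines above, so mainly the removals need to be spelled out):
Code:
# /usr/src/sys/amd64/conf/CORE (hypothetical)
include GENERIC
ident   CORE

nooptions INET6          # build without IPv6

It can then be built with the usual make buildkernel KERNCONF=CORE and make installkernel KERNCONF=CORE.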
 
I did notice the difference between xen-kernel 4.12 (built for FreeBSD 12) and xen-kernel 4.11 (built for FreeBSD 11).
 
Thank you. bhyve is a nice idea. I read about sysutils/vm-bhyve on FreshPorts and saw the good words "BSD/Linux/Windows guest support".
But is that support for managing guests, or support for running those guest operating systems?
In case I understood this question correctly, here's the clarification you need:
  • bhyve supports all kinds of guest OSes: definitely BSDs, Linux and Windows. As mentioned before, I had a problem with a Windows guest using virtio-blk, but this can be fixed with a simple patch.
  • sysutils/vm-bhyve provides management tools for bhyve VMs; these are console/command-line tools, so they will run only on the machine actually hosting the bhyve VMs.
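For reference, a first run with sysutils/vm-bhyve looks roughly like this (a sketch; the switch, NIC and guest names are examples, and vm_enable/vm_dir are assumed to be set in /etc/rc.conf):
Code:
vm init                          # one-time setup of the vm-bhyve directories
vm switch create public          # virtual switch for guest networking
vm switch add public em0         # bridge it to your NIC (em0 is an example)
vm create -s 20G myguest         # create a guest with a 20 GB virtual disk
vm install myguest FreeBSD-11.2-RELEASE-amd64-disc1.iso
vm console myguest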
 
Hello everybody again. Thank you all. I really like bhyve, especially that it "supports all kinds of guest OSes"!
I read about it 1-2 years ago, but back then there was only Unix guest support, so I forgot about bhyve.
Very, very good and simple. I have not fully understood it yet, but I am already in love with it.
Xen - bullshit.
Thank you, forums.freebsd.org
 
Indeed, did you consider bhyve instead? It's quite simple now using sysutils/vm-bhyve to manage your VMs. If there isn't a specific requirement that only Xen can fulfill, I'd strongly recommend bhyve.

I had a server running Xen with Linux and LVM-based storage for the VMs and completely replaced it with FreeBSD, jails and, where needed, bhyve with virtual disks in zvols.

There was just one little problem: Windows guests didn't work with virtio-blk disks, but a small patch fixes that. If someone needs it, I can post it here when I'm at home.
I have settled on bhyve (the vmm kernel module) for study so far. But what do I do when the (Windows) virtual machine reboots: bhyve exits with status "0" and the guest does not come back up.
Code:
man bhyve
...
EXIT STATUS
     Exit status indicates how the VM was terminated:

     0       rebooted
     1       powered off
     2       halted
     3       triple fault
...
That is, do I need to write a script to process these exit statuses, or is there another way that I have not read about?
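As far as I know, vm-bhyve's own vm start supervisor already restarts the guest when bhyve exits with 0 (that is what the "run 1" counter in its logs refers to), so with vm-bhyve there should be nothing extra to write. If you run bhyve by hand, the loop the man page implies can be sketched like this (run_vm stands in for your real bhyve command line; it is a placeholder, not vm-bhyve's actual code):

```shell
#!/bin/sh
# Restart wrapper around bhyve's documented exit statuses.

run_vm() {
    # Replace with your real invocation, for example:
    #   bhyve -c 1 -m 2G -A -H -P ... "$1"
    #   bhyvectl --vm="$1" --destroy
    return 1    # placeholder: pretend the guest powered off
}

start_loop() {
    while true; do
        run_vm "$1"
        case $? in
            0) echo "guest rebooted, starting again" ;;  # loop and re-run bhyve
            1) echo "powered off"; break ;;
            2) echo "halted"; break ;;
            *) echo "error or triple fault"; break ;;
        esac
    done
}
```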
 
Regarding Windows guests, here's the patch that enables Windows to use virtio-blk disks:
Code:
Index: usr.sbin/bhyve/block_if.h
===================================================================
--- usr.sbin/bhyve/block_if.h    (revision 340722)
+++ usr.sbin/bhyve/block_if.h    (working copy)
@@ -41,7 +41,7 @@
#include <sys/uio.h>
#include <sys/unistd.h>

-#define BLOCKIF_IOV_MAX        33    /* not practical to be IOV_MAX */
+#define BLOCKIF_IOV_MAX        128    /* not practical to be IOV_MAX */

struct blockif_req {
     struct iovec    br_iov[BLOCKIF_IOV_MAX];

Without this, you must use ahci-hd drive emulation for Windows guests, which is considerably slower. So better to apply this patch (and build and install world from source).

Then it's even possible to install Windows directly on a virtio-blk disk. You have to additionally attach the .iso with the Windows virtio drivers during installation. You can download the drivers here: https://docs.fedoraproject.org/en-U...tual-machines-using-virtio-drivers/index.html
 
Regarding Windows guests, here's the patch that enables Windows to use virtio-blk disks: …
Thank you. My problem with Xen is solved: I will use sysutils/vm-bhyve. Really powerful!
And what about support for migrating from other hypervisors (KVM or Hyper-V) into vm-bhyve?
And what about migration or live migration in vm-bhyve between two or more hosts?
 
And what about support for migrating from other hypervisors (KVM or Hyper-V) into vm-bhyve?
And what about migration or live migration in vm-bhyve between two or more hosts?
These are both features I don't need, so I can't tell you much about them.

The first should be possible manually; it's mostly a matter of being able to convert the virtual disk and create a matching bhyve configuration.
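The disk conversion part can be done with qemu-img from the emulators/qemu port (a sketch; the image file and zvol names are examples):
Code:
# Convert a KVM qcow2 (or Hyper-V vhdx) image to the raw format bhyve uses
qemu-img convert -O raw guest.qcow2 guest.img
# or write it directly into the zvol backing the bhyve disk
qemu-img convert -O raw guest.vhdx /dev/zvol/zroot/vm/guest/disk0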

As for the second: AFAIK, live migration is a feature being worked on, but not yet available. You can always migrate a VM that's powered off; that's pretty comfortable when using a zvol for the virtual disk (zfs send/receive). If you want to minimize downtime, you could zfs send a snapshot of the running VM, then power it off and send only the diff using a second snapshot. But again, if I didn't miss anything, live migration is not there yet, at least officially.
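The minimal-downtime offline migration described above could be sketched as (pool, dataset and host names are examples):
Code:
# 1) with the VM still running, send a full snapshot to the other host
zfs snapshot zroot/vm/guest@move1
zfs send zroot/vm/guest@move1 | ssh otherhost zfs receive -F tank/vm/guest

# 2) power the VM off, then send only the difference
zfs snapshot zroot/vm/guest@move2
zfs send -i @move1 zroot/vm/guest@move2 | ssh otherhost zfs receive tank/vm/guest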
 
Hi again. I need your help. I use vm-bhyve and it works perfectly. There is one option I do not understand. How do I remove or disable it?
My log of a run where the guest machine works fine:
Code:
May 08 14:15:49: initialising
May 08 14:15:49:  [loader: uefi]
May 08 14:15:50:  [cpu: 1,sockets=1,cores=1]
May 08 14:15:50:  [memory: 2G]
May 08 14:15:50:  [hostbridge: amd]
May 08 14:15:50:  [com ports: com1]
May 08 14:15:50:  [uuid: 3f4c7262-7165-11e9-8590-f4f26d026783]
May 08 14:15:50:  [utctime: no]
May 08 14:15:50:  [debug mode: no]
May 08 14:15:50:  [primary disk: win10.img]
May 08 14:15:50:  [primary disk dev: file]
May 08 14:15:50: initialising network device tap0
May 08 14:15:50: adding tap0 -> vm-ForVMS (ForVMS addm)
May 08 14:15:50: bring up tap0 -> vm-ForVMS (ForVMS addm)
May 08 14:15:50: booting
May 08 14:15:50:  [bhyve options: -c 1,sockets=1,cores=1 -m 2G -Hwl bootrom,/usr/local/share/uefi-firmware/BHYVE_UEFI.fd -p 1:1 -U 3f4c7262-7165-11e9-8590-f4f26d026783]
May 08 14:15:50:  [bhyve devices: -s 0,amd_hostbridge -s 31,lpc -s 4:0,ahci,hd:/virtmachins/win10/win10.img,nocache,cd:/virtmachins/win10/Windows10x64ukr-1803.iso -s 5:0,virtio-net,tap0,mac=58:9c:fc:01:bf:03 -s 6:0,fbuf,tcp=10.144.40.6:5999,w=800,h=600,vga=io -s 7:0,xhci,tablet]
May 08 14:15:50:  [bhyve console: -l com1,/dev/nmdm-win10.1A]
May 08 14:15:50:  [bhyve iso device: -s 3:0,ahci-cd,/virtmachins/.config/null.iso]
May 08 14:15:50: starting bhyve (run 1)

I want to remove it from the guest machine. How do I do that?

My config file for this guest (it does not have any line about the "...device -s 3:0..." entry):
Code:
loader="uefi"
cpu="1"
cpu_sockets="1"
cpu_cores="1"
cpu_threads="1"
memory="2G"
wired_memory="no"
hostbridge="amd"
ignore_bad_msr="yes"
bhyve_options="-p 1:1"
utctime="no"
uuid="3f4c7262-7165-11e9-8590-f4f26d026783"
ahci_device_limit="8"
disk0_type="ahci-hd"
disk0_dev="file"
disk0_name="win10.img"
disk0_opts="nocache"
disk1_type="ahci-cd"
disk1_name="Windows10x64ukr-1803.iso"
network0_type="virtio-net"
network0_switch="ForVMS"
network0_device=""
network0_mac="58:9c:fc:01:bf:03"
network0_span="no"
passthru0=""
start_slot="4"
# install_slot
# The slot to use for an installation ISO. By default this is 3,
# which is the first available slot with the original UEFI firmware.
# Using this makes sure the ISO is the first device, and leaves
# 4-6 available for hd devices. Being able to change this may
# be useful for non-UEFI guests, especially if a passthru device
# requires this slot.
#
install_slot="3"
virt_random=""
graphics="yes"
graphics_port="5999"
graphics_listen="10.144.40.6"
graphics_res="800x600"
graphics_wait="auto"
graphics_vga="io"
xhci_mouse="yes"
 
Remove these two:
Code:
disk1_type="ahci-cd"
disk1_name="Windows10x64ukr-1803.iso"

You don't need to add a CD device for the install; it's automatically added when you use vm install ... and then automatically removed after the first boot.

 
How do I remove the ahci-cd drive (May 08 14:15:50: [bhyve iso device: -s 3:0,ahci-cd,/virtmachins/.config/null.iso]) from the system?
It still shows up for me like this.

 