bhyve centos vm installation "not a correct XFS inode"

BonHomme

Member

Reaction score: 5
Messages: 63

For several reasons I must install a CentOS 7 virtual machine on a ZFS filesystem on FreeBSD 11 (it must be CentOS; there is no other option).

Furthermore, I need to be able to send a ZFS snapshot of the virtual machine to a remote backup system on a regular basis, which (as it seems) means I cannot use a ZFS volume as the virtual disk. So, as far as I know, the only remaining option is a truncated (sparse) virtual disk image.

Now the problem is that after the CentOS 7 installation in bhyve (which works fine), the newly installed system will not boot because of the "not a correct XFS inode" error, which seems to be a bug in grub2.

The solution for that should be to create an ext2 boot partition.

Unfortunately, the CentOS 7 installer in the bhyve console does not offer an option to configure an ext2 boot partition; as far as I know, the console installer will only put the boot partition on XFS.

So I wonder if somebody can help me with the following question: is it possible with bhyve/CentOS 7 to configure a truncated virtual disk with an ext2 boot partition and use that in the bhyve installer console? If so, how should I do that?
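For reference, what I mean by a "truncated" virtual disk is just a sparse file created with truncate(1), with grub2-bhyve pointed at it through a device map. A minimal sketch; all paths, sizes, and the ISO filename below are examples, not my actual setup:

```shell
# Create a directory and a sparse 48 GB disk image. In practice this
# would live on the ZFS dataset that is later snapshotted and replicated;
# /tmp is used here purely for illustration.
mkdir -p /tmp/centos7-vm
truncate -s 48G /tmp/centos7-vm/disk.img

# Device map telling grub2-bhyve where the disk and the install ISO live
# (the ISO filename is a placeholder).
cat > /tmp/centos7-vm/device.map <<'EOF'
(hd0) /tmp/centos7-vm/disk.img
(cd0) /tmp/centos7-vm/CentOS-7-x86_64-Minimal.iso
EOF

# The image stays sparse: it occupies almost no space until written to.
du -h /tmp/centos7-vm/disk.img
```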
 

BonHomme

Member


Because I could not solve the problem in my other post, "bhyve centos vm installation "not a correct XFS inode"", I decided to take a look at chyves, as chyves creates a CentOS 7 virtual machine with a non-XFS boot partition. Unfortunately, it does this on a ZFS volume.

By itself the chyves CentOS 7 virtual machine works fine, but it is installed on a ZFS volume by default, so I cannot make backups on a remote system with ZFS send and receive.

Therefore I tried to install the chyves CentOS 7 machine on a truncated virtual disk, but could not get that working, for the same reason as in my other post mentioned above.

As chyves makes it possible to add a truncated virtual disk to the virtual machine, I also tried to move the complete system from the ZFS volume to the truncated virtual disk, but I could not get that working either.

I would appreciate it very much if somebody could help me set up a CentOS 7 virtual machine on a truncated virtual disk, either with chyves or bhyve.
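For completeness, the "move" I attempted was along these lines: with the guest shut down, dump the raw contents of the chyves zvol into a sparse image file with dd(1). The zvol and image paths below are made-up examples:

```shell
# Copy the raw zvol into a sparse image file. conv=sparse skips writing
# blocks of zeros, so the resulting image stays thin on disk.
dd if=/dev/zvol/tank/chyves/guests/centos7/disk0 \
   of=/vm/centos7/disk.img bs=1M conv=sparse
```

The guest then still refuses to boot from the image for the reason described above, but the copy itself works this way.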
 

Oko

Daemon

Reaction score: 770
Messages: 1,620

For several reasons I must install a CentOS 7 virtual machine on a ZFS filesystem on FreeBSD 11 (it must be CentOS; there is no other option).

Furthermore, I need to be able to send a ZFS snapshot of the virtual machine to a remote backup system on a regular basis, which (as it seems) means I cannot use a ZFS volume as the virtual disk. So, as far as I know, the only remaining option is a truncated (sparse) virtual disk image.

Now the problem is that after the CentOS 7 installation in bhyve (which works fine), the newly installed system will not boot because of the "not a correct XFS inode" error, which seems to be a bug in grub2.

The solution for that should be to create an ext2 boot partition.
Thank you so much for this post. We were about to evaluate bhyve for our organization, and your post is very useful to me as we have very similar use cases. We typically use Springdale Linux instead of CentOS, a Princeton University clone of Red Hat. I would guess that Springdale Linux suffers from the same grub2 bug and can't be booted either? That is too bad, and it means we will have to stick with KVM and Linux for now. I don't want to go down the ext2 route at all.
 

BonHomme

Member


This is too bad and means that we will have to stick with KVM and Linux for now. I don't want to go down ext2 route at all.
I would not drop the bhyve option too soon, as it does not seem to be a problem in bhyve but in the combination of XFS, ZFS and grub2, and maybe somebody here has a solution.

The core of the problem is that grub2 (because of a bug?) does not seem to know how to handle an XFS boot partition, something that could easily be solved by creating an ext2, ext3 or ext4 boot partition.
The annoying thing is that the CentOS 7 install menu (in the bhyve console) does not offer the possibility to configure the (virtual) disk the way you want, as for instance Ubuntu does. It only offers two options: Btrfs and XFS.
Btrfs on top of ZFS seems quite counterproductive, and because I cannot get XFS to boot, I am stuck, at least for now and as far as my own knowledge of the subject goes.

But of course that does not mean there is no solution to this problem. Maybe there is, but I don't know it.
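One thing I still want to try is a kickstart file: anaconda is not limited to the choices shown in the text-mode installer, and kickstart can, as far as I understand, force the filesystem of the boot partition. A hypothetical, untested fragment (sizes and the volume-group name are invented):

```
# Kickstart partitioning fragment: ext4 /boot instead of XFS
part /boot --fstype=ext4 --size=1024
part pv.01 --size=1 --grow
volgroup vg_centos pv.01
logvol / --vgname=vg_centos --name=root --fstype=xfs --size=1 --grow
```

The file would then be handed to the installer via the inst.ks= option on the kernel command line.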

The second reason I am more or less stuck with this problem is that I also really want to use ZFS.

By itself, the whole thing works fine when you set the VM up with chyves, a bhyve front-end manager.
The automatic installation by chyves is able to set up an ext3 or ext4 boot partition, which means it should be possible to set up an ext3 or ext4 boot partition with CentOS.
But I could not figure out how chyves does that, because when I try to install CentOS on a truncated virtual disk instead of on a ZFS volume, CentOS, just as with plain bhyve, again installs XFS on the boot partition.

However, please keep in mind that the only reason I need a truncated virtual disk is that I want to make optimal use of ZFS send and receive for backups, because it is faster than, for instance, rsync. And chyves by default installs the VM's virtual disk on a ZFS volume, with which (in my situation) it is impossible to use ZFS send.

Nevertheless, the CentOS virtual machine works fine when installed with chyves on XFS. And, though I have not tested it yet, I believe making backups with rsync within the VM itself should not give any problems. Making ZFS snapshots of the VM on the FreeBSD host itself also works without problems.

The only disadvantage is that you cannot use ZFS send to back up your VM to remote machines and have to use something like rsync instead.
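For the curious, once the disk image sits on an ordinary dataset, the backup loop I am after would look roughly like this (dataset, snapshot, and host names are invented):

```shell
# Snapshot the dataset holding the sparse disk image, then replicate it.
zfs snapshot tank/vm/centos7@daily-1
zfs send tank/vm/centos7@daily-1 | ssh backup-host zfs receive backup/vm/centos7

# Subsequent runs only send the delta between snapshots.
zfs snapshot tank/vm/centos7@daily-2
zfs send -i @daily-1 tank/vm/centos7@daily-2 | ssh backup-host zfs receive backup/vm/centos7
```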
 

BonHomme

Member


Why do you think you can't use send/recv with a ZFS volume?
Sorry, but I forgot to mention that I found that a volume cannot always be sent to an OpenZFS system, which is the case in my situation. See also http://justinholcomb.me/blog/2016/05/23/zfs-volume-manipulations.html

Moving a ZVol using dd

Typically when you want to move a ZVol from one pool to another, the best method is using zfs send | zfs recv. However there are at least two scenarios when this would not be possible: when moving a ZVol from a Solaris pool to an OpenZFS pool, or when taking a snapshot is not possible, such as when there are space constraints. While a zfs send | zfs recv can be done across pools of different zpool versions [1-37;5000], it can not be done across zfs versions [1-6]. This is a problem when sending a dataset from Solaris zfs version 6 and receiving on an OpenZFS-based system, which is on version 5.


In this situation because a snapshot can not be created, it is recommended to turn off all services that can modify the ZVol as having data at different states will surely cause issues.

Maybe it could work from FreeBSD to FreeBSD, but I read in another article (which I unfortunately cannot find anymore) that there also seem to be problems with sending ZFS volumes. I have not tested that yet. Anyway, sending ZFS volumes does not seem to be as easy as it looks.

Besides that, even if the FreeBSD-to-FreeBSD solution works properly, I don't have the right environment for rotating external disks available right now. So I would still like the "not a correct XFS inode" problem to get solved, even if I no longer need it for sending ZFS volumes.
 

Oko

Daemon


I would not drop the bhyve option too soon, as it does not seem to be a problem in bhyve but in the combination of XFS, ZFS and grub2, and maybe somebody here has a solution.

The core of the problem is that grub2 (because of a bug?) does not seem to know how to handle an XFS boot partition, something that could easily be solved by creating an ext2, ext3 or ext4 boot partition.
I might have given you too much credit. What do you mean, grub2 can't see an XFS boot partition? That would have to be bhyve/ZFS specific, as I have over 25 Springdale (Red Hat clone) servers and I can assure you that they boot fine from XFS partitions, including RAID 1, when running on bare metal. ext2 is not a production-quality file system, so I am not interested in any workaround involving ext2.


I was not following the part of your post about "Btrfs on top of ZFS". Btrfs is vaporware, and if you care about your data you will stay as far away from Btrfs as possible. The Red Hat guys are smart, so they offer Btrfs as an option only for your own data partition. One would have to be stupid to put anything on it.


I also carefully read the rest of the post and your second post. You seem quite confused about ZFS snapshots, replication, and the use of ZFS send for backup/failover. Your problems with OpenZFS are self-inflicted. Why would I want to back up my FreeBSD server (virtual machines) on a non-FreeBSD platform, in particular if that platform is dead (illumos is dead)?

The proper backup/failover of a FreeBSD server, or in this case a virtual bhyve host, involves two servers with identical hardware specifications, running the same FreeBSD version and configured identically, with one difference only: they are located at different physical locations (sometimes at least 100 miles apart, due to some regulations I have to follow).

Yes, the only reason one would want to play with an immature technology like bhyve is to be able to create virtual images as separate datasets on top of a ZFS pool, so that you can easily take snapshots (and roll back if needed) and replicate to a remote machine (both daily incremental and weekly full replication), possibly with a script that uses the replication to automatically start an identical virtual instance on the backup server. With a little bit of network trickery you should be able to do not just hot migration but full-blown failover. If anything of what I just said can't be done for whatever reason (for example, Windows might ask for a new license on the backup server instance), there is absolutely zero reason to use FreeBSD, and bhyve in particular (IMHO, of course).

The rsync part of your posts is not very interesting. If I wanted to use rsync, there would be no reason to use FreeBSD, and ZFS in particular. Actually, using rsync --delete with HAMMER on DragonFly is a very interesting way of backing up legacy file systems like my OpenBSD FFS, because fine-grained HAMMER history enables you to slide through earlier versions of your documents and gives you a full-blown journal of a legacy file system.
 

grehan@

Member
Developer

Reaction score: 82
Messages: 84

Are you able to share your config? I've done a large number of CentOS 7 installs with different filesystem configs and have not seen this. I'd like to try and reproduce it.
 

BonHomme

Member


Are you able to share your config ? I've done a large number of Centos 7 installs with different filesystem configs and have not seen this - I'd like to try and reproduce it.
It is, or at least was, a bug in grub2 (https://bugzilla.redhat.com/show_bug.cgi?id=1220844), and when you google "not a correct XFS inode" you will see that other Linux distributions have/had the same problem.

It is not clear to me whether this bug has been fixed in the meantime, or whether there is a workaround, as it has existed since 2013. If it has been fixed, that would mean grub2-bhyve is using an old version of grub2, which is why I posted my question on the FreeBSD forum.
 

girgen@

New Member
Developer

Reaction score: 9
Messages: 13

Also seeing this problem with a fresh RHEL7 install. Any ideas how to work around it?

Code:
# cat device-cd.map
(hd0) /dev/zvol/tank/bhyves/rhel/disk
(cd0) /home/girgen/rhel-server-7.3-x86_64-dvd.iso
# grub-bhyve -m device-cd.map -r hd0 -M 16384 rhel
grub> ls
(hd0) (hd0,msdos2) (hd0,msdos1) (cd0) (cd0,msdos2) (host) (lvm/rhel-root) (lvm/rhel-swap)
grub> ls (hd0,msdos1)/
error: not a correct XFS inode.
grub> ls (lvm/rhel-root)/
error: not a correct XFS inode.
This is with a zvol:
Code:
zfs create -ps -V 48G -o volmode=dev tank/bhyves/rhel/disk
Installation from the ISO image was flawless.
 

girgen@

New Member
Developer


Are you able to share your config ? I've done a large number of Centos 7 installs with different filesystem configs and have not seen this - I'd like to try and reproduce it.
Can the zvol be the culprit?
 

paw

New Member

Reaction score: 1
Messages: 8

I had the same issue last night. I'm not sure it's the zvol, as it happened with an image file too.

I've tried the installation; CentOS 7 installs flawlessly, and when I try to boot I get
error: not a correct XFS inode, like the others.

So I had to install with UEFI and use ext4 -- works for now.
FreeBSD 11 + bhyve (vm-bhyve) / CentOS 7 Core (minimal)
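In case it helps anyone, the vm-bhyve side of that workaround is just a guest configuration selecting the UEFI loader; a sketch with invented guest sizing (ext4 is then chosen for /boot inside the CentOS installer):

```
loader="uefi"
graphics="yes"
cpu=2
memory=2G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
```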
 

zosy

New Member


Messages: 8

Still the same problem here; my environment below:

FreeBSD 12-BETA4 as the host
bhyve (vm-bhyve) installing CentOS (minimal) x86_64 7.1804
default partitioning by CentOS (LVM + XFS)
The install is fine, but I get a lot of "not a correct XFS inode" errors on reboot; the VM can't start.
 

Loyd Craft

New Member


Messages: 1

Running into the same problem.

FreeBSD 12-BETA4 host
chyves bhyve manager
default partitioning and standard partitioning
 

alphachi

Member

Reaction score: 7
Messages: 48

You can use bhyve with UEFI firmware to solve it, but note that the current sysutils/bhyve-firmware is also affected by this bug, so it is not usable.

At present, sysutils/cbsd offers a workaround. In fact, you only need two files from cbsd, rather than running it: one is /usr/local/cbsd/upgrade/patch/efi.fd, and the other is /usr/local/cbsd/upgrade/patch/efirefd.fd.

For example, the following snippet can run a CentOS 7.6 VM:
Code:
/usr/sbin/bhyve -AHP -c 2 -m 4g \
-l bootrom,/usr/local/cbsd/upgrade/patch/efi.fd \
-s 0,hostbridge -s 1,lpc \
-s 2:0,virtio-blk,/dev/zvol/zroot/centos7.hd0 \
-s 2:1,virtio-blk,/dev/zvol/zroot/centos7.hd1 \
-s 2:2,ahci-cd,/usr/local/cbsd/upgrade/patch/efirefd.fd \
-s 3,virtio-net,tap0 \
-s 4,fbuf,tcp=127.0.0.1:8443,w=1024,h=768 centos7
The first boot may crash and reboot, but everything is OK from the second boot onward.
 

Ole

Active Member

Reaction score: 65
Messages: 102

You can use bhyve with UEFI firmware to solve it [...] At present, sysutils/cbsd offers a workaround.
Yes, I ran into this problem, and at the moment I don't know how to fix it with bhyve-firmware. CBSD is not affected because it uses rEFInd to boot the VM (old virtual machines with CentOS/Red Hat deployed by CBSD and updated to the latest CentOS/Red Hat are also not affected, since CBSD has used rEFInd to boot from the hard disk for a long time). But to boot CentOS 7.6 from CD you need CBSD 12.0.3, where the CentOS profile also switched to rEFInd by default. In other words, for clarity: CBSD does not carry any patches, just an alternative boot method as a workaround.
 

Remington

Well-Known Member

Reaction score: 149
Messages: 499

It is, or at least was, a bug in grub2 (https://bugzilla.redhat.com/show_bug.cgi?id=1220844), and when you google "not a correct XFS inode" you will see that other Linux distributions have/had the same problem.

It is not clear to me whether this bug has been fixed in the meantime, or whether there is a workaround, as it has existed since 2013. If it has been fixed, that would mean grub2-bhyve is using an old version of grub2, which is why I posted my question on the FreeBSD forum.
I was able to reproduce the "not a correct XFS inode" problem you're experiencing, using both the manual method and sysutils/vm-bhyve. CBSD seems to solve the problem, but I'm not too keen on having so many CBSD files scattered around. I think there is a problem between sysutils/grub2-bhyve and CentOS.

Ubuntu works very well as long as you select 'Guided - use entire disk', but not 'Guided - use entire disk and set up LVM', as that will crash at boot.
 

Purkuapas

Member

Reaction score: 14
Messages: 54

CBSD seems to solve the problem but I'm not too keen with so many CBSD files scattered around.
Look at the feature request list from the vm-bhyve users. If Matt implements all of it, you will get a cbsd with many files, just named vm-bhyve ;-)

I think there is a problem between sysutils/grub2-bhyve and Centos.
Unfortunately not. If you try to load CentOS 7.6 with

loader="uefi"
graphics="yes"


you will also get a boot problem. It looks like a problem with sysutils/bhyve-firmware (too?) ;-(
 