Solved: Mounting a shared folder in a FreeBSD 13 guest reboots the VM on an Ubuntu 20.04 host

I'm refreshing a post of mine from a few months ago, having noticed that the issue is still there after the latest package upgrade of virtualbox-ose-additions.
The terminal command for mounting the shared folder configured in VirtualBox for this machine works perfectly with a Windows 10 host, while with Ubuntu 20.04 it simply reboots the virtual machine.
The command (tried both with doas and without it) is:
mount -t vboxvfs mauro /home/mauro/shared/

where "mauro" is the shared folder on Ubuntu configured in VirtualBox, and "shared" is the directory created in the FreeBSD VM as the mount point for "mauro".

Has anyone else already experienced this?
Thank you
 
Exactly, I couldn't apply what was suggested because of my poor knowledge in that area.
I would need a simple, basic, guided tutorial on what's needed.
Thank you
 
mauro@freebsd13:~ $ swapinfo -h
Device Size Used Avail Capacity

Sorry, only a blank result (no values).
 
Stop the guest; use VirtualBox to create a second (virtual) hard disk, 32 GB; start the guest; edit /etc/fstab to use the new disk for swap; restart; edit /etc/rc.conf to enable core dumps; restart.

Apologies for brevity, writing from mobile device.

If you want/need a debug-enabled build of the base operating system: considerably more complex
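A sketch of what those two edits might look like, assuming the new disk shows up as ada1 (the device name here is a guess; check what your system actually reports, e.g. with `geom disk list`):

```sh
# /etc/fstab -- swap on the new 32 GB virtual disk
/dev/ada1   none   swap   sw   0   0

# /etc/rc.conf -- write kernel crash dumps to the swap device
dumpdev="AUTO"
```

With dumpdev set, savecore(8) extracts the dump into /var/crash on the next boot after a panic.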
 
OK, thank you. Before asking which lines to enter in fstab and rc.conf, I'd better point out that my FreeBSD guest virtual machine consists of... a physical USB pen drive. That is, a full FreeBSD installation on a USB volume, bootable and fully working from the USB port of the PC when selected for booting. I then created a .vmdk file for this volume so that VirtualBox can handle it as a virtual machine, which lets me also use it while I'm on Windows 10 or Ubuntu: it works well in both environments. The only oddity is this issue when Ubuntu is the host; on Windows 10 there is no problem at all: when I mount the configured shared folder, it works as expected.
 
I have one Ubuntu SSD disk lying around, but it's the 20.10 version with VirtualBox 6.1.16-dfsg-6~ubuntu1.20.10.1. I grabbed the vhd file from the VM-IMAGES, created the shared folder and tested it within the FreeBSD guest (virtualbox-ose-additions-nox11-6.1.26). I got no issues. Could you test the VM image from the current snapshot in your setup?
 
Just to add to the cacophony, I built a new FreeBSD 13.0-RELEASE-p4 system under VirtualBox 6.1.26r145957 on Windows 8.1 today.

I can mount C: drive from /etc/fstab:
Code:
C_DRIVE /sf_C_DRIVE vboxvfs late,rw 0 0
and I can cd /sf_C_DRIVE, but ls then panics the kernel.

Bug 255386 applies.
 
gpw928 can you try the same on the image from the link I pasted above? What version of the additions do you have there?
Code:
[f13.150] $ sysctl -n kern.osrelease kern.ostype
13.0-RELEASE-p4
FreeBSD
[f13.151] $ uname -K
1300139
[f13.152] $ pkg query %n-%v | grep virtualbox
virtualbox-ose-additions-6.1.26
virtualbox-ose-kmod-6.1.26
I'm downloading the vm image and will test it as soon as I can.
 
I downloaded and converted the qcow2 to a vdi and added it into the VirtualBox Manager. It booted fine.

I enabled vboxguest_enable and vboxservice_enable in /etc/rc.conf, and rebooted.
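For reference, the exact lines added (these are the standard rc.conf knobs shipped with the guest additions package):

```sh
# /etc/rc.conf
vboxguest_enable="YES"    # load the vboxguest kernel module at boot
vboxservice_enable="YES"  # start VBoxService (time sync, guest control)
```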

After rebooting it, I then concluded that I needed to install the virtualbox-ose-nox11-6.1.26_2 package, which I did. This ran the root out of space... so I did a "pkg clean". Still at 102%.

growfs is already at its 4 GB limit for the root.

VBoxService is not running, maybe because of the disk space issue...

/usr/local/sbin is pretty much empty (pkg command only), in particular there is no /usr/local/sbin/mount_vboxvfs.

So it's not surprising that I can't mount C_DRIVE.

Not sure where to go from here...
 
You ran into some issues due to the disk conversion, I guess. I used the vhd disk directly with VirtualBox without any need for conversion. These images do have growfs_enable in rc.conf.
You are correct - you do need to install the additions yourself. Please try to use the vhd disk directly.
 
I've one Ubuntu SSD disk laying around ; but it's 20.10 version with 6.1.16-dfsg-6~ubuntu1.20.10.1 version of VirtualBox. I grabbed the vhd file from the VM-IMAGES, created the shared folder and tested within FreeBSD guest (virtualbox-ose-additions-nox11-6.1.26). I got no issues. Could you test the VM image from the current snapshot in your setup?

OK, I did that as well; needless to say (speaking only for myself), it leads nowhere: after completing the initial setup, it refuses to reboot into a graphical environment due to an error in fstab.
I'd better give up if every attempt turns into another annoying issue.
I'm just a user, not an IT research specialist.
Let's keep everything in the current state. The shared folder issue, which again only happens with an Ubuntu host, will be worked around with a USB volume connected and used to exchange files back and forth; that works well in all cases.
Thank you all once again
 
For the sake of the test I'd avoid putting this share into fstab (I'm assuming you did that). For the test I'd only install the additions and mount the share, i.e.
a) download the vhd, put it into the VM
b) boot, install virtualbox additions
c) mount the share as you did before, report back
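Inside the guest, steps b) and c) might look like the following sketch (package and share names are taken from earlier in this thread; adjust them to your setup):

```sh
# b) install the guest additions and enable them at boot
pkg install virtualbox-ose-additions-nox11
sysrc vboxguest_enable=YES vboxservice_enable=YES
shutdown -r now

# c) after reboot, mount the share by hand (no fstab entry)
mkdir -p /home/mauro/shared
mount -t vboxvfs mauro /home/mauro/shared
```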

I'm interested to see where the issue is. As the VM is not crashing under a different version of VirtualBox, it seems there's some issue between the host and guest VirtualBox versions.
I've seen that before.

In the end it's up to you. If you seek help we'll be here.
 
For the sake of the test I'd avoid putting this share into fstab (I'm assuming you did that). For the test I'd only install the additions and mount the share, i.e.
a) download the vhd, put it into the VM
b) boot, install virtualbox additions
c) mount the share as you did before, report back

I'm interested to see where the issue is. As the VM is not crashing under a different version of VirtualBox, it seems there's some issue between the host and guest VirtualBox versions.
I've seen that before.

In the end it's up to you. If you seek help we'll be here.

Hi Martin, no, I did not get to the point of editing fstab; I got stuck even before that. First of all I'm trying to get into a graphical desktop environment, and I failed at that.
In other words, the situation is: I have that vhd machine created in VirtualBox, I get to a text prompt, and then what? How do I handle it? I cannot go ahead.
I was able to set up and install a fully working FreeBSD system on a bootable volume with its XFCE desktop, but I have no idea how to proceed with this virtual machine; it refuses to boot into the desktop which I thought I had correctly installed.
I understand it's a bit complicated leading me to the target.
Thank you



 
I installed the vhd into VirtualBox, and grew it before booting. All went well.
I installed the virtualbox-ose-6.1.26_2 and virtualbox-ose-additions-nox11-6.1.26 packages (the latter was required to pull in /usr/local/sbin/mount_vboxvfs).
When I mounted C_DRIVE, I got this on the console (and in /var/log/messages)
Code:
Sep 10 01:19:02 freebsd su[818]: phil to root on /dev/pts/0
Sep 10 01:19:27 freebsd kernel: VBOXVFS[1]: sfprov_mount: Enter
Sep 10 01:19:27 freebsd kernel: VBOXVFS[1]: sfprov_mount: path: [C_DRIVE]
Sep 10 01:19:27 freebsd kernel: sfprov_mount(C_DRIVE): error=0 rc=0
The mount worked:
Code:
root@freebsd:/home/phil # df
Filesystem      1K-blocks      Used    Avail Capacity  Mounted on
/dev/gpt/rootfs   5028264   3844144   781860    83%    /
devfs                   1         1        0   100%    /dev
/dev/gpt/efiesp     32765       869    31896     3%    /boot/efi
C_DRIVE         224815104 176074084 48741020    78%    /sf_C_DRIVE

root@freebsd:/home/phil # mount
/dev/gpt/rootfs on / (ufs, local, soft-updates)
devfs on /dev (devfs)
/dev/gpt/efiesp on /boot/efi (msdosfs, local)
C_DRIVE on /sf_C_DRIVE (vboxvfs, local)
The system panic'd as soon as I tried ls -la /sf_C_DRIVE.
I have generated a crash dump:
Code:
root@freebsd:/var/crash # ls -al /var/crash
total 87256
drwxr-x---   2 root  wheel        512 Sep 10 01:42 .
drwxr-xr-x  24 root  wheel        512 Sep 10 01:42 ..
-rw-r--r--   1 root  wheel          2 Sep 10 01:42 bounds
-rw-r--r--   1 root  wheel         84 Sep 10 01:42 core.txt.0
-rw-------   1 root  wheel        498 Sep 10 01:42 info.0
lrwxr-xr-x   1 root  wheel          6 Sep 10 01:42 info.last -> info.0
-rw-r--r--   1 root  wheel          5 Sep  2 04:55 minfree
-rw-------   1 root  wheel  112185344 Sep 10 01:42 vmcore.0
lrwxr-xr-x   1 root  wheel          8 Sep 10 01:42 vmcore.last -> vmcore.0
It was caused by a page fault:
Code:
root@freebsd:/var/crash # cat info.0
Dump header from device: /dev/gpt/swapfs
  Architecture: amd64
  Architecture Version: 2
  Dump Length: 112185344
  Blocksize: 512
  Compression: none
  Dumptime: 2021-09-10 01:41:37 +0000
  Hostname: freebsd
  Magic: FreeBSD Kernel Dump
  Version String: FreeBSD 13.0-STABLE #0 stable/13-n247062-c4bd6b589c8: Thu Sep  2 02:39:44 UTC 2021
    root@releng3.nyi.freebsd.org:/usr/obj/usr/src/amd64.amd64/sys/GENERIC
  Panic String: page fault
  Dump Parity: 3219076436
  Bounds: 0
  Dump Status: good
The vmcore file compresses down to about 12 MiB.
If you want it, please let me know where to send it.
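For anyone wanting a first look at such a dump themselves, a sketch (kgdb comes with the gdb package on FreeBSD; the paths are the defaults used above):

```sh
# open the crash dump against the kernel that produced it
kgdb /boot/kernel/kernel /var/crash/vmcore.0

# inside kgdb, 'bt' prints the backtrace of the panicking thread
```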
 
Nice, gpw928. Last night I grabbed another SSD disk and installed Ubuntu 20.04 there. I was able to reproduce the bug on both the release and snapshot versions of the VM.
grahamperrin Oh boy, is that FFS bug we talked about in another thread annoying! I didn't lose this much data even in MS-DOS times copying data to floppies.

Interestingly enough, it seems this bug is in the newer version of VirtualBox (Ubuntu 20.10 with 6.1.16-dfsg-6 is OK); VirtualBox 6.1.26-dfsg-3 in 20.04 has the problem.
I was using the same VMs with the tools 6.1.22 r144080 and 6.1.26 r145957.

Panic is always happening at the same place, same issue (bogus virtual address 0x15). It seems the structures it walks are different. I'm trying to dig around those structures.

edit: here at work I don't have access to my debugging setup; I did try the Windows version 6.1.16 r140961 with the same VM. While it didn't panic, it does behave wrongly - I'm able to create files, but any attempt to delete one results in "text file busy".

When you look at core.txt.0, what is the "fault virtual address"? I'm assuming it is a page fault. Can you also share the backtrace from it?
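A quick way to pull those details out of the generated report (the file name is as shown above; the exact section headings inside core.txt may vary between releases):

```sh
# trap details, including the fault virtual address
grep -i 'fault virtual address' /var/crash/core.txt.0

# surrounding context and the kernel backtrace
grep -B2 -A20 -i 'backtrace' /var/crash/core.txt.0
```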
 
Panic is always happening at the same place, same issue (bogus virtual address 0x15). It seems the structures it walks are different. I'm trying to dig around those structures.

Thank you for your effort in investigating and looking for a solution, even though my contribution is not one that can make the difference...
🙂
 
I'm curious to see if I can find something, but VirtualBox is big and I'm not that familiar with it. I'll keep trying as long as I find it interesting. :) In the end I may find nothing.
Last night I was bummed to find that VirtualBox doesn't provide gdb stubs to VMs the way qemu or VMware do. It has its own debugger, but it's too windbg-like; I don't feel comfortable with it.
 
I've played around on Ubuntu 20.04 / FreeBSD 13 release. There's a lot to understand, but I think I found a problem. I was able to fix it in src, and my tests were successful. I've shared my findings with more skilled people in the PR.

EDIT: @gpw928 also helped me test my module; it seems to be working. During the tests he discovered another bug not related to my fix.
The problem was in the vboxvfs module: one of the variables used in the code was not initialized and was used as a wild pointer. The PR is updated; I'm waiting for a response from the maintainer.
 