Missing kernel?

Good afternoon. I'm trying to install the NVIDIA driver on FreeBSD 11, but when I run "pkg install nvidia-driver" I get an error:

Cannot install package: kernel missing linux support
pkg: PRE-INSTALL script failed


OK, this only means that I need to put this in /etc/rc.conf:

linux_enable="YES"

And install linux_base-c6, then reboot. But even after that I can't load the module with:

kldload linux

I get:

kldload: can't load linux: no such file or directory
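
For reference, on a system where /boot/kernel is actually populated, the Linux-compatibility setup usually amounts to no more than this (a sketch; sysrc just writes the setting into /etc/rc.conf):

Code:
sysrc linux_enable="YES"      # persist the setting in /etc/rc.conf
kldload linux                 # load the Linux ABI kernel module right away
pkg install linux_base-c6     # CentOS 6 userland, as used above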

After trying a lot of things without success, I went to update FreeBSD with "freebsd-update fetch" and I get the error:

Cannot identify running kernel

This is because I can't load the kernel module, right?

Some information that might be useful:

uname -a returns:

FreeBSD pc-diablillo 11.0-RELEASE-p1 FreeBSD 11.0-RELEASE-p1 #0 r306420: Thu Sep 29 01:43:23 UTC 2016 root@releng2.nyi.freebsd.org:/usr/obj/usr/src/sys/GENERIC amd64

ls /boot/kernel returns:

no such file or directory

ls /boot returns:

lrwxr-xr-x 1 root wheel 13 Sep 29 2016 /boot -> bootpool/boot

Thank you.
 
It looks like your bootpool isn't mounted for some reason. Try mounting it with zfs mount -a. Do you have zfs_enable="YES" in /etc/rc.conf?
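
Something along these lines should confirm both points (sysrc -n only prints the value of the variable):

Code:
sysrc -n zfs_enable      # should print YES
zfs mount -a             # mount every dataset that isn't mounted yet
ls /bootpool             # the pool's boot directory should show up here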
 

Yes, I have zfs_enable="YES" in /etc/rc.conf.

I did try "zfs mount -a", but I have the same problem.
My /bootpool/ directory is empty.
 
Please post the output of zpool status bootpool and zfs list bootpool

zpool status bootpool returns:

cannot open 'bootpool': no such pool

And zfs list bootpool returns:

cannot open 'bootpool': dataset does not exist

P.S.: I'm using ZFS with MBR, not GPT.
 
Try zpool import bootpool next, then run the previous commands again.
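
That is, roughly:

Code:
zpool import bootpool     # bring the pool back into the running system
zpool status bootpool     # should now report the pool as ONLINE
zfs list bootpool         # and list its dataset again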

zpool status bootpool:


Code:
  pool: bootpool
 state: ONLINE
  scan: none requested
config:

        NAME       STATE     READ WRITE CKSUM
        bootpool   ONLINE       0     0     0
          ada0s1a  ONLINE       0     0     0

errors: No known data errors

And zfs list bootpool returns:

Code:
NAME       USED  AVAIL  REFER  MOUNTPOINT
bootpool   121M  1.80G   121M  /bootpool

I needed to use the "-f" option for "zpool import bootpool".
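
In other words, something like:

Code:
zpool import -f bootpool   # -f forces the import when the pool looks like it is still in use elsewhere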
 
Great!


I honestly have no clue. Did you import the bootpool on another system?

No, and what's more, it's a freshly installed FreeBSD. It works now, but I have to re-import bootpool and run zfs mount -a to access /boot every time I reboot. Is this normal?

I also have problems with Xorg and with my NVIDIA graphics, but I think it will be more appropriate to open a separate thread for that.
 
I need to fix the /boot problem, because now I have problems with the mouse and Xorg. On every boot I have to run:

zpool import bootpool
zfs mount -a


and then wait for FreeBSD to recognize my mouse before launching startx.
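
As an untested stopgap (the real fix comes later in the thread), those two commands could be dropped into /etc/rc.local, which the rc system runs late in the boot sequence:

Code:
# hypothetical /etc/rc.local stopgap: re-import and mount the boot pool on every boot
/sbin/zpool import -f bootpool
/sbin/zfs mount -a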
 
How did you install ZFS? Using the BSD installer? Because then I have a very good hunch about what could be going on. Try zfs get canmount on the main root file system. My guess is that it has the noauto value, which means that it won't be mounted automatically during boot, and not by # zfs mount -a either.

I'm not quite sure why they set it up like that, but then again, I also fail to understand the need to add several unused filesystems. I'm hinting at zroot/ROOT/default, which is mounted on / by default. All I see there are 2 unused (and therefore wasted) filesystem entries (zroot and zroot/ROOT).

All my servers have zroot mounted on / and that's also the main drive the system boots from.

If this applies to your setup as well then you could consider setting canmount to the default value: # zfs set canmount=on (also add the root ZFS filesystem to that command).

(edit): made a small mistake, property canmount can only be on, off or noauto.
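
In concrete terms, assuming zroot/ROOT/default is the root dataset as in a default install, the check and the suggested fix would look something like:

Code:
zfs get canmount zroot/ROOT/default      # noauto here would explain the behaviour
zfs set canmount=on zroot/ROOT/default   # let it be mounted automatically again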
 

Yes, I did use the BSD installer for this, with the default config; the only thing I changed was "GPT" to "MBR".

I did try "zfs set canmount=on zroot" and "zfs set canmount=on bootpool", but it doesn't work...

My "zfs get canmount" is:

Code:
NAME                PROPERTY  VALUE     SOURCE
bootpool            canmount  on        local
zroot               canmount  on        local
zroot/ROOT          canmount  on        local
zroot/ROOT/default  canmount  on        local
zroot/tmp           canmount  on        default
zroot/usr           canmount  off       local
zroot/usr/home      canmount  on        default
zroot/usr/ports     canmount  on        default
zroot/usr/src       canmount  on        default
zroot/var           canmount  off       local
zroot/var/audit     canmount  on        default
zroot/var/crash     canmount  on        default
zroot/var/log       canmount  on        default
zroot/var/mail      canmount  on        default
zroot/var/tmp       canmount  on        default
 
Any ideas? I can't fix it.

Have you had any luck, snake?

I've just installed FreeBSD 11.0 with MBR as well, to solve a separate issue, and now I've run into this problem.

Exactly the same issue as the one you're having: no kernel, and zpool import shows that bootpool isn't loaded.


Feeling frustrated right now :eek:
 
Any ideas? I can't fix it.
Late reaction, I overlooked the initial alert.

So, two things come to mind: why do you have 2 ZFS pools in there? That seems off to me, especially considering your previous statement that the installer set this up. To my knowledge the installer would not try to cram 2 pools onto one environment like this (edit: I could be wrong, would have to test).

More importantly: # zpool get bootfs bootpool, what does that tell you?

Right now I can only guess what bootpool is supposed to be, but it doesn't look good to me. I'm guessing that this is initially used to boot your system, but because the rest of the OS is missing from there (probably located on zroot) it won't continue.
 
Code:
root@Worm:/ # zpool status
  pool: bootpool
 state: ONLINE
  scan: none requested
config:

        NAME         STATE     READ WRITE CKSUM
        bootpool     ONLINE       0     0     0
          mirror-0   ONLINE       0     0     0
            ada0s1a  ONLINE       0     0     0
            ada1s1a  ONLINE       0     0     0

errors: No known data errors

  pool: zroot
 state: ONLINE
  scan: none requested
config:

        NAME         STATE     READ WRITE CKSUM
        zroot        ONLINE       0     0     0
          mirror-0   ONLINE       0     0     0
            ada0s1d  ONLINE       0     0     0
            ada1s1d  ONLINE       0     0     0

errors: No known data errors

Code:
root@Worm:/ # gpart show -p
=>       63  976770992    ada0  MBR  (466G)
         63          1          - free -  (512B)
         64  976770984  ada0s1  freebsd  [active]  (466G)
  976771048          7          - free -  (3.5K)

=>        0  976770984   ada0s1  BSD  (466G)
          0    4194304  ada0s1a  freebsd-zfs  (2.0G)
    4194304    4194304  ada0s1b  freebsd-swap  (2.0G)
    8388608  968382368  ada0s1d  freebsd-zfs  (462G)
  976770976          8           - free -  (4.0K)

=>       63  976773105    ada1  MBR  (466G)
         63          1          - free -  (512B)
         64  976773096  ada1s1  freebsd  [active]  (466G)
  976773160          8          - free -  (4.0K)

=>        0  976773096   ada1s1  BSD  (466G)
          0    4194304  ada1s1a  freebsd-zfs  (2.0G)
    4194304    4194304  ada1s1b  freebsd-swap  (2.0G)
    8388608  968384480  ada1s1d  freebsd-zfs  (462G)
  976773088          8           - free -  (4.0K)

Code:
root@Worm:/ # df -h
Filesystem            Size    Used   Avail Capacity  Mounted on
zroot/ROOT/default    445G    667M    444G     0%    /
devfs                 1.0K    1.0K      0B   100%    /dev
zroot/tmp             444G    104K    444G     0%    /tmp
zroot/usr/home        444G    136K    444G     0%    /usr/home
zroot/usr/ports       445G    639M    444G     0%    /usr/ports
zroot/usr/src         444G     96K    444G     0%    /usr/src
zroot/var/audit       444G     96K    444G     0%    /var/audit
zroot/var/crash       444G     96K    444G     0%    /var/crash
zroot/var/log         444G    420K    444G     0%    /var/log
zroot/var/mail        444G     96K    444G     0%    /var/mail
zroot/var/tmp         444G     96K    444G     0%    /var/tmp
zroot                 444G     96K    444G     0%    /zroot
bootpool              1.9G    149M    1.8G     8%    /bootpool

Code:
root@Worm:/ # zpool get bootfs
NAME      PROPERTY  VALUE               SOURCE
bootpool  bootfs    -                   default
zroot     bootfs    zroot/ROOT/default  local


Is it odd that the FreeBSD installer creates a 2G partition for the bootpool?
 
First: what FreeBSD version exactly did you install? What branch? Thing is: I can't reproduce these results with FreeBSD 11-RELEASE (in other words: the regular public release). When you set up a guided ZFS installation on a GPT scheme, the system simply creates a regular partition of type freebsd-boot, followed by the swap and finally the ZFS pool itself. Only one pool is created, named zroot.

So I'm definitely right when I say that this isn't exactly common and/or normal behavior.

Anyway, the problem is also identified (I think). Judging solely by the names, my assumption is that the system tries to boot using bootpool, while it most likely only needs to be used to boot the main system. As such my suggestion would be to try # zpool set bootfs=zroot/ROOT/default bootpool.
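
For reference, bootfs is inspected and set like this (a sketch using the pool and dataset names from this thread; note that the dataset named has to live inside the pool the property is set on):

Code:
zpool get bootfs bootpool zroot            # inspect the property on both pools
zpool set bootfs=zroot/ROOT/default zroot  # syntax example on the zroot side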
 
So I'm definitely right when I say that this isn't exactly common and/or normal behavior.
There are two scenarios where the installer will create a bootpool (https://svnweb.freebsd.org/changeset/base/302319):
  1. Booting via UEFI and installing on a GELI encrypted ZFS pool
  2. Selecting the MBR partition scheme during the install, i.e. having a partition layout like Vossy's.
I could reproduce the problem by selecting an MBR scheme instead of GPT with 11.0-RELEASE. In that case the bootpool is not mounted automatically. Turns out that this is a known problem: PR 212258.

Following the breadcrumbs from the PR, I could work around the problem like this on a fresh install (see the consolidated sketch after the list):
  1. zpool import -f bootpool
  2. Add to /boot/loader.conf (see Thread 42980)
    Code:
    zpool_cache_load="YES"
    zpool_cache_type="/boot/zfs/zpool.cache"
    zpool_cache_name="/boot/zfs/zpool.cache"
  3. shutdown -r now
  4. Check that zpool list now shows the bootpool
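
Consolidated as plain commands run as root (a sketch; it assumes /boot is the symlink into the bootpool, as in the original post, so the pool must be imported before loader.conf is reachable):

Code:
zpool import -f bootpool                    # step 1: make /boot reachable again
cat >> /boot/loader.conf <<'EOF'
zpool_cache_load="YES"
zpool_cache_type="/boot/zfs/zpool.cache"
zpool_cache_name="/boot/zfs/zpool.cache"
EOF
shutdown -r now                             # step 3: reboot
# afterwards, zpool list should include bootpool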
 
Code:
root@Bee:/home/bob# uname -a
FreeBSD Bee 11.0-RELEASE-p1 FreeBSD 11.0-RELEASE-p1 #0 r306420: Thu Sep 29 01:43:23 UTC 2016     root@releng2.nyi.freebsd.org:/usr/obj/usr/src/sys/GENERIC  amd64

So I started over again with a clean install using the GPT scheme, and I get:

Code:
root@Bee:/home/bob# gpart show
=>       40  976770976  ada0  GPT  (466G)
         40       1024     1  freebsd-boot  (512K)
       1064        984        - free -  (492K)
       2048    4194304     2  freebsd-swap  (2.0G)
    4196352  972572672     3  freebsd-zfs  (464G)
  976769024       1992        - free -  (996K)

=>       40  976773088  ada1  GPT  (466G)
         40       1024     1  freebsd-boot  (512K)
       1064        984        - free -  (492K)
       2048    4194304     2  freebsd-swap  (2.0G)
    4196352  972576768     3  freebsd-zfs  (464G)
  976773120          8        - free -  (4.0K)

Although it's this again, coming back at the very start of boot-up:

Code:
Verifying DMI Pool Data ........................
gptzfsboot: error 128 lba 976771048
gptzfsboot: error 128 lba 1

BTX loader 1.00 BTX Version is 1.02
Consoles: internal video/keyboard
BIOS drive C: is disk0
BIOS drive D: is disk1

Should the above error be posted somewhere else / reported as a bug against the GPT scheme?
Do I even have to worry about the error, since the system seems to have booted?

PS: Thanks tobik, I might try a clean MBR install with the fix.
 
Sorry for taking so long to respond.

I finally reinstalled with UFS.
To my knowledge the installer would not try to cram 2 pools onto one environment like this (edit: I could be wrong, would have to test).

When I installed ZFS, I let the installer configure everything by default.
 