ZFS: No pools available to import; can't open /dev/ada0p1

Anyway, I understand that fixing such a broken installation goes beyond my abilities and yours. I think the time has come for a fresh reinstallation of FreeBSD 13.2. Still, I don't want to give up on ZFS. I will use it again, and once it's ready, I will copy over the old files that I backed up previously.
 
OK, but this is the only thing I could do. Since I don't understand why they aren't there, I don't know how they can be regenerated automatically.
 
This seems to be the real problem:


[screenshot: Screenshot_2023-12-04_14-13-50.jpg]
 
Is it possible that the loader (BIOS or EFI) is older than the zpools? I think you get this same error if you have an upgraded pool but are trying to use an old loader that doesn't understand the newer pool version.

If you stop in the loader, isn't there a way to list the potential boot pools? Asking because it's been a while since I've had to do this.
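
If memory serves, you can check from the loader's OK prompt; a quick sketch, assuming you escape to the prompt at boot (lsdev is a standard loader command, though the exact output varies):

Code:
OK lsdev
OK lsdev -v

lsdev lists the disks and ZFS pools the loader can see; if the boot pool doesn't show up there, that would support the loader/pool version mismatch theory.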
 
I've found the solution. Never do something like this:

Code:
zfs set canmount=on zroot/usr

or you will get the same errors that I got. I've been able to replicate the previous scenario. If you need to change the contents of the usr directory while you are accessing the zpool from outside, you can do it, but when the modifications are done, canmount should be set back to off. Try it.
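
For example, when you're finished with the changes, assuming the pool name zroot from the command above:

Code:
zfs set canmount=off zroot/usr
zfs get canmount zroot/usr

The second command just confirms the property is back to off.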
 
Not sure what's going on, but I believe that zroot/usr and zroot/var should be canmount=off.
I don't know why it is like that and can't remember how I installed it, but a clean install does indeed have usr and var with canmount=off.
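
A quick way to check what your own system has, assuming the installer's default pool name zroot:

Code:
zfs get canmount zroot/usr zroot/var

On a stock guided-ZFS install, both should report off.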
 
If you need to change the contents of the usr directory while you are accessing the zpool from outside, you can do it, but when the modifications are done, canmount should be set back to off.
The real usr directory is in the root filesystem (something like zroot/ROOT/whatever). That's what you should mount and modify. zroot/usr is a "placeholder" filesystem that should never be mounted; it exists solely so that filesystems like zroot/usr/ports, zroot/usr/src, etc. can be created.
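
As a rough sketch of that approach from a live/rescue environment (the pool name zroot and the boot environment name default are assumptions here):

Code:
zpool import -R /mnt zroot
zfs mount zroot/ROOT/default
# edit files under /mnt/usr here, then clean up:
zfs umount zroot/ROOT/default
zpool export zroot

Because the BE dataset usually has canmount=noauto, the explicit zfs mount is needed; zroot/usr itself stays unmounted the whole time.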
 
I resolved all my ZFS issues by using UFS instead :)
Well, in my case that makes sense because I only have one SSD, no redundancy or any other configuration that would justify using ZFS.
Sure, ZFS also works, but I had some bad experiences with it and lost all my data twice, because of something I did or something that happened, I don't know... and yes, it happened twice, maybe because there was no redundancy... yep, that was it.
Yeah, I was dumb enough to make the same mistake twice. 😂
Someday I will give it another try, but for now UFS seems quite stable to me.
So my point here in this post is: does it really make sense to use ZFS in your configuration too?
Think about that.
 
I run ZFS on systems with a single SSD for one reason: Boot Environments. Upgrades are pretty safe and easy to do anyway, but for me BEs give an extra level of warm and fuzzy. Going across releases, like 13.2 to 14.0, BEs make even more sense: you create a new BE, chroot/mount it, do the upgrade including all the packages, activate the new BE, and reboot once.
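
A hedged sketch of that flow, assuming a new BE named 14.0-up; the bectl, freebsd-update -b, and pkg -c options are standard, but check the handbook before relying on this:

Code:
bectl create 14.0-up
mkdir -p /tmp/be
bectl mount 14.0-up /tmp/be
# fetch and apply the new release into the mounted BE;
# freebsd-update may ask you to run install more than once
freebsd-update -b /tmp/be -r 14.0-RELEASE upgrade
freebsd-update -b /tmp/be install
# upgrade packages inside the BE, then activate and reboot once
pkg -c /tmp/be upgrade
bectl umount 14.0-up
bectl activate 14.0-up
shutdown -r now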

But for some, UFS is a better choice on their systems.
 
I resolved all my ZFS issues by using UFS instead :)
… So my point here in this post is: does it really make sense to use ZFS in your configuration too? Think about that.

I totally understand your point. But I'm here to learn, and ZFS is a great challenge. Giving up would mean no longer training my mind with problem solving, and I need to train my mind every day or it will get sick.
 
I have played with the canmount property and I've set everything to on?

Certainly, that's very wrong, it'll break things. In addition to what you already discovered (above), please see the Boot environment structures section of the manual page for bectl(8).


This seems to be the real problem:

It's normal to see "kernel: no pools available to import" after (though not immediately after) the boot pool is imported.
 
… I believe that zroot/usr and zroot/var should be canmount=off.

Not always, please see the manual page for bectl(8) (NB above, the current breakage of the online view for RELEASE).

Commits of interest include:



vermaden FYI
 
Too technical for me. There's a limit beyond which my selective comprehension skills can't take me.
 
Not always, please see the manual page for bectl(8) (NB above, the current breakage of the online view for RELEASE).
Not always, yes. Especially if you create your pool structure yourself.
But usually it makes sense to have an unmountable pool/usr or pool/ROOT/be/usr, unless you want /usr/bin and /usr/sbin to be filesystems separate from your /.
You can have that when using "deep BEs", but still, most people don't need that.

However, I have a habit of having /usr/local be part of a BE, but also a separate filesystem.
Here is output from one of my systems, where I have been using "deep BEs" for a long time:
Code:
# zfs list -r -o name,canmount,mounted,mountpoint rpool/ROOT
NAME                                    CANMOUNT  MOUNTED  MOUNTPOINT
rpool/ROOT                              noauto    no       /rpool/ROOT
rpool/ROOT/2023-04-13                   noauto    yes      /rpool/ROOT/2023-04-13
rpool/ROOT/2023-04-13/usr               off       no       /rpool/ROOT/2023-04-13/usr
rpool/ROOT/2023-04-13/usr/compat        noauto    yes      /rpool/ROOT/2023-04-13/usr/compat
rpool/ROOT/2023-04-13/usr/compat/linux  noauto    yes      /rpool/ROOT/2023-04-13/usr/compat/linux
rpool/ROOT/2023-04-13/usr/lib           off       no       /rpool/ROOT/2023-04-13/usr/lib
rpool/ROOT/2023-04-13/usr/lib/debug     noauto    yes      /rpool/ROOT/2023-04-13/usr/lib/debug
rpool/ROOT/2023-04-13/usr/local         noauto    yes      /rpool/ROOT/2023-04-13/usr/local
rpool/ROOT/2023-04-13/usr/local/etc     noauto    yes      /rpool/ROOT/2023-04-13/usr/local/etc
 
… I have a habit of having /usr/local be a part of a BE, but also be a separate filesystem. …

+1

(That's within the deep boot environment example in the manual page.)

The mountpoint in your example puzzles me:

/rpool/ROOT/2023-04-13/usr/local

Should it not be /usr/local?
 
The mountpoint in your example puzzles me:

/rpool/ROOT/2023-04-13/usr/local

Should it not be /usr/local?
It could be either. The script that's responsible for mounting subordinate filesystems (rc.d/zfsbe) handles both cases.
For me it was easier to just leave the mountpoint property at its default / inherited value.
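
To see which convention a given dataset uses, zfs get can show whether the mountpoint is set locally or inherited; for example, with a dataset name from the listing above:

Code:
zfs get -o name,property,value,source mountpoint rpool/ROOT/2023-04-13/usr/local

A source of inherited means the path hangs off the BE's own mountpoint, while local would indicate an explicitly set path such as /usr/local; as noted above, rc.d/zfsbe handles either case at boot.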
 