ZFS No pools available to import ; can't open /dev/ada0p1

Hello.

I'm no longer able to boot my primary FreeBSD installation:


[attached screenshot: Senza titolo.jpeg]


It is stored on this disk:

Code:
Disk /dev/sda: 465.76 GiB, 500107862016 bytes, 976773168 sectors
Disk model: CT500MX500SSD4 
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: C4E17451-AE72-11EC-9419-E0D55EE21F22

Device       Start       End   Sectors   Size Type
/dev/sda1       40    532519    532480   260M EFI System
/dev/sda2   532520    533543      1024   512K FreeBSD boot
/dev/sda3   534528   4728831   4194304     2G FreeBSD swap
/dev/sda4  4728832 976773119 972044288 463.5G FreeBSD ZFS

I booted into Linux to check what could have happened to the ZFS / zpool structure:

Code:
# zpool import -f -R /mnt/zroot zroot

# zpool list

NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
zroot   460G   425G  35.1G        -         -    55%    92%  1.00x    ONLINE  /mnt/zroot

# ls

_13.2_CURRENT_  boot     build-xen  dev   kernels  lib-backup  mnt  proc    sbin   tmp  vms
bhyve           boot-bo  compat     etc   lib      libexec     net  rescue  share  usr  _ZFS_
bin             build    data       home  lib64    media       opt  root    sys    var  zroot

# zfs list

NAME                                                                 USED  AVAIL  REFER  MOUNTPOINT
zroot                                                                425G  20.8G    96K  /mnt/zroot/zroot
zroot/ROOT                                                           345G  20.8G    96K  none
zroot/ROOT/13.1-RELEASE-p5_2023-01-12_235731                         268M  20.8G   136G  /mnt/zroot
zroot/ROOT/13.1-RELEASE-p5_2023-01-20_181957                         344G  20.8G   292G  /mnt/zroot
zroot/ROOT/13.1-RELEASE-p5_2023-01-20_181957@2023-01-20-18:19:57-0  52.1G      -   140G  -
zroot/tmp                                                            198M  20.8G   198M  /mnt/zroot/tmp
zroot/usr                                                           77.6G  20.8G   120K  /mnt/zroot/usr
zroot/usr/home                                                      63.4G  20.8G  63.4G  /mnt/zroot/usr/home
zroot/usr/ports                                                     14.2G  20.8G  14.2G  /mnt/zroot/usr/ports
zroot/usr/src-old                                                     96K  20.8G    96K  /mnt/zroot/usr/src-old
zroot/var                                                           2.46G  20.8G   136K  /mnt/zroot/var
zroot/var/audit                                                       96K  20.8G    96K  /mnt/zroot/var/audit
zroot/var/crash                                                     1.11G  20.8G  1.11G  /mnt/zroot/var/crash
zroot/var/log                                                       4.75M  20.8G  4.75M  /mnt/zroot/var/log
zroot/var/mail                                                      1.33G  20.8G  1.33G  /mnt/zroot/var/mail
zroot/var/tmp                                                       18.1M  20.8G  18.1M  /mnt/zroot/var/tmp

My /boot/loader.conf:

Code:
#currdev="zfs:zroot/ROOT/13.1-RELEASE-p5_2023-01-20_181957"
#vfs.root.mountfrom="zroot/ROOT/13.1-RELEASE-p5_2023-01-20_181957"
#currdev="zfs:zroot/ROOT/13.1-RELEASE-p5_2023-01-12_235731"
#opensolaris_load="YES"
#loaddev="disk2p1:"
loader_logo="daemon"
vmm_load="YES"
nmdm_load="YES"
if_tap_load="YES"
if_bridge_load="YES"
bridgestp_load="YES"
fusefs_load="YES"
tmpfs_load="YES"
verbose_loading="YES"
pptdevs="2/0/0 2/0/1 2/0/2 2/0/3"
kern.geom.label.ufsid.enable="1"
cryptodev_load="YES"
zfs_load="YES"
kern.racct.enable="1"
aio_load="YES"
vboxdrv_load="YES"
kern.cam.scsi_delay="10000"
fdescfs_load="YES"
linprocfs_load="YES"
linsysfs_load="YES"

I have also tried to uncomment these lines:

Code:
currdev="zfs:zroot/ROOT/13.1-RELEASE-p5_2023-01-20_181957"
or
currdev="zfs:zroot/ROOT/13.1-RELEASE-p5_2023-01-12_235731"

and / or

loaddev="disk2p1:"

but I got the same problem. I don't understand what the cause could be.
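For reference, the same variables can also be set interactively at the loader's OK prompt to rule out loader.conf syntax issues (a sketch, assuming the loader itself still comes up; the BE name is taken from the zfs list output above):

```
OK set currdev=zfs:zroot/ROOT/13.1-RELEASE-p5_2023-01-20_181957:
OK set vfs.root.mountfrom=zfs:zroot/ROOT/13.1-RELEASE-p5_2023-01-20_181957
OK boot
```

If booting works this way but not from loader.conf, the file itself (or where the loader reads it from) is the suspect.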
 

Attachments

  • Istantanea_2023-12-03_11-09-16.jpg
  • Istantanea_2023-12-03_11-21-54.jpg
Looks like you removed more than the EFI part.
Also check /dev/diskid/ and see whether your device nodes are there.
 
Code:
# zpool import -f -R /mnt/zroot zroot
# cd /mnt/zroot/dev
# ls
null

I certainly didn't remove anything inside /dev; why would I do something like that?
 
What the hell happened?

Code:
# cd /mnt/zroot/usr
# ls
home  ports  src  src-  src-old

A lot of files are missing there too. Maybe the missing files are related to the fact that I played with the canmount property and set everything to on?

Code:
# zfs get -r canmount zroot

NAME                                                                PROPERTY  VALUE     SOURCE
zroot                                                               canmount  on        local
zroot/ROOT                                                          canmount  on        local
zroot/ROOT/13.1-RELEASE-p5_2023-01-12_235731                        canmount  on        local
zroot/ROOT/13.1-RELEASE-p5_2023-01-20_181957                        canmount  on        local
zroot/ROOT/13.1-RELEASE-p5_2023-01-20_181957@2023-01-20-18:19:57-0  canmount  -         -
zroot/tmp                                                           canmount  on        default
zroot/usr                                                           canmount  on        local
zroot/usr/home                                                      canmount  on        default
zroot/usr/ports                                                     canmount  on        default
zroot/usr/src-old                                                   canmount  on        default
zroot/var                                                           canmount  on        local
zroot/var/audit                                                     canmount  on        local
zroot/var/crash                                                     canmount  on        local
zroot/var/log                                                       canmount  on        local
zroot/var/mail                                                      canmount  on        local
zroot/var/tmp                                                       canmount  on        local
 
I was talking about fstab.
Removing only the /boot/efi part from fstab won't cause any boot problems on 99.999% of systems.
 
Please refresh the page. I want to try to restore the correct values for canmount. Can you tell me which values are not supposed to be set to on by default? Thanks.
 
These are the values on one 13.2 ZFS-on-root system:

Code:
zroot            canmount  on  default
zroot/ROOT       canmount  on  default
zroot/backups    canmount  on  default
zroot/compat     canmount  on  default
zroot/tmp        canmount  on  default
zroot/usr        canmount  on  local
zroot/usr/home   canmount  on  default
zroot/usr/ports  canmount  on  default
zroot/usr/src    canmount  on  default
zroot/var        canmount  on  local
zroot/var/audit  canmount  on  default
zroot/var/crash  canmount  on  default
zroot/var/log    canmount  on  default
zroot/var/mail   canmount  on  default
zroot/var/tmp    canmount  on  default
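For what it's worth, a locally-set canmount can be put back to its default with zfs inherit (a sketch, needs root and an imported pool; the dataset name is just an example from the listing above):

```shell
# Drop the local canmount setting so the dataset falls back to the
# inherited/default value; SOURCE in `zfs get` then shows "default" again.
zfs inherit canmount zroot/var/log
zfs get canmount zroot/var/log
```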
 
It can't be caused by that, because you have canmount=on everywhere and you don't have the problems that I have.
 
Could the missing files be stored in one of these boot environments / snapshots?

zroot/ROOT/13.1-RELEASE-p5_2023-01-12_235731
zroot/ROOT/13.1-RELEASE-p5_2023-01-20_181957
zroot/ROOT/13.1-RELEASE-p5_2023-01-20_181957@2023-01-20-18:19:57-0

And which one do I end up in when I import the zpool from Linux with this command?

# zpool import -f -R /mnt/zroot zroot
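As far as I can tell (hedged, not tested against this particular pool): you don't land "inside" a snapshot at all. The import mounts whatever datasets have canmount=on under the altroot, and the dataset the FreeBSD loader would boot from is recorded in the pool's bootfs property. A sketch for checking that, and for peeking into one boot environment without activating it (the mount path is just an example):

```shell
# Show which boot environment the loader boots by default:
zpool get bootfs zroot

# Mount one BE read-only somewhere to look inside it:
mkdir -p /mnt/zroot/be
mount -t zfs -o ro zroot/ROOT/13.1-RELEASE-p5_2023-01-20_181957 /mnt/zroot/be
```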
 
ZFS: the trickiest technology around. It's not for everyone, because it is a source of repeated problems (maybe that's why btrfs was born?). It has to be studied well, because it is not intuitive. Some users want to do that; others can't or won't. So I think it was not designed with usability in mind. I'm always tempted to convert to UFS.
 
I think part of asking for help is "how did you get here". I've read both threads you've posted and I have no idea on exactly what you did.
My opinion:
ZFS is no more tricky than any other filesystem. From the other thread it seems like you mounted an ISO image as a memory device, then tried importing ZFS pools from that, and now your original ZFS install is broken. Is that a reasonable summary?

At this point, if you won't lose data by doing so, reinstalling with UFS for at least the root/boot filesystem would be easiest.
 
I tried to reproduce your problem and I think I hit the same issue: the contents of directories like /dev are not shown. I don't understand what causes it, but I worked around it this way.

Code:
root@xv0:~ # truncate -s4gb disk0
root@xv0:~ # mdconfig -a -t vnode -f disk0 -u md0
root@xv0:~ # zpool create -R /mnt v2zroot md0
root@xv0:~ # zfs snap -r zroot@snap
root@xv0:~ # zfs list -t snap
NAME                      USED  AVAIL  REFER  MOUNTPOINT
zroot@snap                  0B      -    96K  -
zroot/ROOT@snap             0B      -    96K  -
zroot/ROOT/default@snap     0B      -   787M  -
zroot/home@snap             0B      -    96K  -
zroot/tmp@snap              0B      -   104K  -
zroot/usr@snap              0B      -    96K  -
zroot/usr/ports@snap        0B      -    96K  -
zroot/usr/src@snap          0B      -    96K  -
zroot/var@snap              0B      -    96K  -
zroot/var/audit@snap        0B      -    96K  -
zroot/var/crash@snap        0B      -    96K  -
zroot/var/log@snap          0B      -   152K  -
zroot/var/mail@snap         0B      -    96K  -
zroot/var/tmp@snap          0B      -    96K  -
root@xv0:~ # zfs send -Rv zroot@snap | zfs recv -Fu v2zroot
root@xv0:~ # zfs mount -a
root@xv0:~ # zfs list -r v2zroot
NAME                   USED  AVAIL  REFER  MOUNTPOINT
v2zroot                790M  2.85G    96K  /mnt/zroot
v2zroot/ROOT           787M  2.85G    96K  none
v2zroot/ROOT/default   787M  2.85G   787M  /mnt
v2zroot/home            96K  2.85G    96K  /mnt/home
v2zroot/tmp            104K  2.85G   104K  /mnt/tmp
v2zroot/usr            288K  2.85G    96K  /mnt/usr
v2zroot/usr/ports       96K  2.85G    96K  /mnt/usr/ports
v2zroot/usr/src         96K  2.85G    96K  /mnt/usr/src
v2zroot/var            632K  2.85G    96K  /mnt/var
v2zroot/var/audit       96K  2.85G    96K  /mnt/var/audit
v2zroot/var/crash       96K  2.85G    96K  /mnt/var/crash
v2zroot/var/log        152K  2.85G   152K  /mnt/var/log
v2zroot/var/mail        96K  2.85G    96K  /mnt/var/mail
v2zroot/var/tmp         96K  2.85G    96K  /mnt/var/tmp
root@xv0:~ # df -h |grep -e ^v2zroot
v2zroot/tmp           2.9G    104K    2.9G     0%    /mnt/tmp
v2zroot/usr/ports     2.9G     96K    2.9G     0%    /mnt/usr/ports
v2zroot/var/crash     2.9G     96K    2.9G     0%    /mnt/var/crash
v2zroot/var/audit     2.9G     96K    2.9G     0%    /mnt/var/audit
v2zroot/var/mail      2.9G     96K    2.9G     0%    /mnt/var/mail
v2zroot               2.9G     96K    2.9G     0%    /mnt/zroot
v2zroot/var/tmp       2.9G     96K    2.9G     0%    /mnt/var/tmp
v2zroot/var/log       2.9G    152K    2.9G     0%    /mnt/var/log
v2zroot/usr/src       2.9G     96K    2.9G     0%    /mnt/usr/src
v2zroot/home          2.9G     96K    2.9G     0%    /mnt/home
root@xv0:~ # mkdir -p /mnt2
root@xv0:~ # zfs create -o mountpoint=/mnt2 v2zroot/tempmnt
root@xv0:~ # zfs send zroot/ROOT/default@snap | zfs recv -F v2zroot/tempmnt
root@xv0:/mnt/mnt2 # zfs list -r v2zroot
NAME                   USED  AVAIL  REFER  MOUNTPOINT
v2zroot               1.54G  2.08G    96K  /mnt/zroot
v2zroot/ROOT           787M  2.08G    96K  none
v2zroot/ROOT/default   787M  2.08G   787M  /mnt
v2zroot/home            96K  2.08G    96K  /mnt/home
v2zroot/tempmnt        787M  2.08G   787M  /mnt/mnt2
v2zroot/tmp            104K  2.08G   104K  /mnt/tmp
v2zroot/usr            288K  2.08G    96K  /mnt/usr
v2zroot/usr/ports       96K  2.08G    96K  /mnt/usr/ports
v2zroot/usr/src         96K  2.08G    96K  /mnt/usr/src
v2zroot/var            632K  2.08G    96K  /mnt/var
v2zroot/var/audit       96K  2.08G    96K  /mnt/var/audit
v2zroot/var/crash       96K  2.08G    96K  /mnt/var/crash
v2zroot/var/log        152K  2.08G   152K  /mnt/var/log
v2zroot/var/mail        96K  2.08G    96K  /mnt/var/mail
v2zroot/var/tmp         96K  2.08G    96K  /mnt/var/tmp
root@xv0:/mnt/mnt2 # ls /dev/
acpi            audit           ctty            geom.ctl        kmem            music0          pts             sysmouse        ttyv6           ugen1.1
ada0            auditpipe       devctl          gpt             log             netdump         random          tcp_log         ttyv7           uinput
ada0p1          bpf             devctl2         input           md0             netmap          reroot          ttyv0           ttyv8           urandom
ada0p2          bpf0            devstat         io              mdctl           null            sequencer0      ttyv1           ttyv9           usb
ada0p3          bpsm0           dumpdev         kbd0            mem             pass0           sndstat         ttyv2           ttyva           usbctl
apm             console         fd              kbd1            midistat        pci             stderr          ttyv3           ttyvb           xpt0
apmctl          consolectl      fido            kbdmux0         mixer0          pfil            stdin           ttyv4           ufssuspend      zero
atkbd0          crypto          full            klog            mlx5ctl         psm0            stdout          ttyv5           ugen0.1         zfs
root@xv0:/mnt/mnt2 #

In order to see the contents of the system root located in zroot/ROOT/default, I had to send it separately; it didn't work with the recursive send and receive. I don't understand why, but that way I was able to access the data. I guess that's your problem too.

If anyone knows the reason for the error I ran into, please tell me.
 
Did you manage to access your data?

Code:
root@xv0:~ # zpool import
   pool: tszroot
     id: 1547883659506641918
  state: ONLINE
 action: The pool can be imported using its name or numeric identifier.
 config:

        tszroot     ONLINE
          md0       ONLINE
root@xv0:~ # zpool import -R /mnt tszroot seczroot
root@xv0:~ # zfs list -r seczroot
NAME                    USED  AVAIL  REFER  MOUNTPOINT
seczroot                790M  2.85G    96K  /mnt/zroot
seczroot/ROOT           787M  2.85G    96K  none
seczroot/ROOT/default   787M  2.85G   787M  /mnt
seczroot/home            96K  2.85G    96K  /mnt/home
seczroot/tmp            104K  2.85G   104K  /mnt/tmp
seczroot/usr            288K  2.85G    96K  /mnt/usr
seczroot/usr/ports       96K  2.85G    96K  /mnt/usr/ports
seczroot/usr/src         96K  2.85G    96K  /mnt/usr/src
seczroot/var            632K  2.85G    96K  /mnt/var
seczroot/var/audit       96K  2.85G    96K  /mnt/var/audit
seczroot/var/crash       96K  2.85G    96K  /mnt/var/crash
seczroot/var/log        152K  2.85G   152K  /mnt/var/log
seczroot/var/mail        96K  2.85G    96K  /mnt/var/mail
seczroot/var/tmp         96K  2.85G    96K  /mnt/var/tmp
root@xv0:~ # df -h |grep ^seczroot
seczroot/usr/ports    2.9G     96K    2.9G     0%    /mnt/usr/ports
seczroot/var/log      2.9G    152K    2.9G     0%    /mnt/var/log
seczroot              2.9G     96K    2.9G     0%    /mnt/zroot
seczroot/home         2.9G     96K    2.9G     0%    /mnt/home
seczroot/var/audit    2.9G     96K    2.9G     0%    /mnt/var/audit
seczroot/var/tmp      2.9G     96K    2.9G     0%    /mnt/var/tmp
seczroot/usr/src      2.9G     96K    2.9G     0%    /mnt/usr/src
seczroot/var/crash    2.9G     96K    2.9G     0%    /mnt/var/crash
seczroot/var/mail     2.9G     96K    2.9G     0%    /mnt/var/mail
seczroot/tmp          2.9G    104K    2.9G     0%    /mnt/tmp
root@xv0:~ # zfs snap seczroot/ROOT/default@snap
root@xv0:~ # zfs send seczroot/ROOT/default@snap | zfs recv seczroot/temproot
root@xv0:~ # df -h |grep ^seczroot/tem
seczroot/temproot     2.9G    787M    2.1G    27%    /mnt/zroot/temproot
root@xv0:~ # ls -l /mnt/zroot/temproot/
total 124
-rw-r--r--   2 root wheel 1011 Nov 10 09:11 .cshrc
-rw-r--r--   2 root wheel  495 Nov 10 09:11 .profile
-r--r--r--   1 root wheel 6109 Nov 10 09:49 COPYRIGHT
drwxr-xr-x   2 root wheel   49 Nov 10 09:11 bin
drwxr-xr-x  14 root wheel   70 Dec  3 12:58 boot
drwxr-xr-x   2 root wheel    2 Dec  3 01:18 dev
-rw-------   1 root wheel 4096 Dec  3 19:00 entropy
drwxr-xr-x  30 root wheel  110 Dec  3 11:37 etc
drwxr-xr-x   2 root wheel    2 Dec  3 01:18 home
drwxr-xr-x   4 root wheel   78 Nov 10 09:17 lib
drwxr-xr-x   3 root wheel    5 Nov 10 09:11 libexec
drwxr-xr-x   2 root wheel    2 Nov 10 08:48 media
drwxr-xr-x   3 root wheel    3 Dec  3 19:39 mnt
drwxr-xr-x   7 root wheel    7 Dec  3 01:46 mnt2
drwxr-xr-x   2 root wheel    2 Nov 10 08:48 net
dr-xr-xr-x   2 root wheel    2 Nov 10 08:48 proc
drwxr-xr-x   2 root wheel  150 Nov 10 09:15 rescue
drwxr-x---   2 root wheel   10 Dec  3 19:41 root
drwxr-xr-x   2 root wheel  150 Nov 10 09:44 sbin
lrwxr-xr-x   1 root wheel   11 Nov 10 08:48 sys -> usr/src/sys
drwxr-xr-x   2 root wheel    2 Dec  3 19:00 tmp
drwxr-xr-x  15 root wheel   15 Nov 10 10:02 usr
drwxr-xr-x  24 root wheel   24 Dec  3 19:00 var
drwxr-xr-x   2 root wheel    2 Dec  3 01:18 zroot

I think this is your scenario, but as you can see seczroot/ROOT/default is not mounted, which is why you cannot access the data of /. I solved it that way; I suppose there is a better method, or an explanation of why seczroot/ROOT/default is not mounted automatically. Perhaps a more experienced user can tell us. By following those steps I managed to access my data. Did you?
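A plausible explanation, assuming the stock FreeBSD layout: boot environments under ROOT are created with canmount=noauto, so `zfs mount -a` skips them on purpose (the loader, or bectl, mounts only the active one). If that's the case here, the dataset can simply be mounted by hand instead of being copied (mountpoint below is an example):

```shell
# If canmount is "noauto", `zfs mount -a` ignores the dataset by design:
zfs get canmount seczroot/ROOT/default

# Mount it explicitly to inspect its contents:
mkdir -p /mnt/be
mount -t zfs seczroot/ROOT/default /mnt/be
```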
 
To me the (cut-off) error messages in the screenshots point to something more rudimentary being messed up than a ZFS problem:
the partitions are merely recognized, and nothing more.

I don't know what happened, so here are some possible causes as brainstorming; it could of course be something else entirely:

- maybe the partition table (GPT here = GUID Partition Table, not some AI stuff 😝) or the contents of the UEFI partition were messed up by accidentally starting another system's installation tool (Microsoft Linux [nice typo, forgetting the ',' 😂])

- maybe you've tried to clone a former, smaller disk (e.g. 500G) to a larger disk (e.g. 1T) with dd.
With MBR that would work fine, because MBR keeps a single block at the beginning of the disk containing all partition information, and it doesn't matter if unformatted space exists after the last partition.
GPT, however, keeps an additional backup header at the end of the disk, which after a dd copy would end up somewhere in the middle.
Any tool capable of detecting partitions will see that there are partitions on the disk, but because there is no GPT header at the end, the GPT partitions cannot be mounted.
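If the dd-to-a-larger-disk case applies, the backup GPT structures can usually be rebuilt from the primary ones; a hedged sketch (device names are examples, double-check them before running anything that writes to the disk):

```shell
# FreeBSD: recreate the secondary GPT header/table at the true end of the disk
gpart recover ada0

# Linux equivalent (sgdisk, from the gdisk package) would be:
# sgdisk -e /dev/sda
```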

As I said, just brainstorming; to me it seems anything like that could have happened.

However, since I've read a lot about differences in how Linux handles ZFS, my advice would be not to tinker with any Linux live system on that disk (look, but don't touch :cool:),
but to use a FreeBSD live system (e.g. just boot FreeBSD's installation USB stick into live-system mode).
You may not get a fancy desktop environment, only a shell, but it is a sufficiently powerful FreeBSD system with enough tools to clean up the worst.
 
If you can update your fstab, just comment out the lines for EFI and swap.
Also, in single-user mode you should be able to remount your ZFS pool like this (this was before boot environments; I haven't tested it since):
Code:
zpool set readonly=off zroot
zfs mount -a
And then edit your fstab.
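The commented-out fstab entries might look something like this (device names taken from the partition listing earlier in the thread; adjust to the actual system):

```
# Device        Mountpoint  FStype   Options  Dump  Pass#
#/dev/ada0p1    /boot/efi   msdosfs  rw       2     2
#/dev/ada0p3    none        swap     sw       0     0
```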
Also, what is your setup? How many hard drives do you have connected?
 
I've populated the /dev directory with the correct nodes taken from a backup of the same, but slightly older, installation, and I still saw the same errors. So the real damage is not in /dev but somewhere else; the missing nodes in /dev are only a consequence of those errors.
 
The "files" under /dev are not real files; they are device nodes created on the fly by devfs(5), so you cannot copy them from another installation.
 