Solved: Two pools with the same name created?

Greetings all,

as my previous posts show, I have been trying to boot (no pun intended) from a USB drive and run the OS from an NVMe drive. Since I could not discern how to do it from the installer, I created a script (attached). After several iterations, the script finally finished, but when I rebooted, zfs list showed:
Code:
zroot                       2.00G  12.0G    96K  /zroot
zroot/bootenv               2.00G  12.0G    96K  none
zroot/bootenv/default          2G  13.5G   534M  /
That is, the root pool, zroot, was correctly installed onto the USB flash drive, but the OS pool, system, residing on the NVMe drive was not imported. Undaunted, I tried zpool import, which resulted in:
Code:
 pool: system
     id: 17079989330597717060
  state: UNAVAIL
status: One or more devices are missing from the system.
action: The pool cannot be imported. Attach the missing
    devices and try again.
   see: http://illumos.org/msg/ZFS-8000-3C
config:

    system                 UNAVAIL  insufficient replicas
      5375734647068907476  UNAVAIL  cannot open

   pool: system
     id: 13386452794072540132
  state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:

    system         ONLINE
      gpt/zfspool  ONLINE

I was able to force an import of the correct (second) system pool by using its numeric id, zpool import 13386452794072540132, and I now have the intended structure. zfs list:
Code:
NAME                         USED  AVAIL  REFER  MOUNTPOINT
system                      2.99M   221G    96K  none
system/home                   96K   221G    96K  /home
system/usr                   680K   221G    96K  /usr
system/usr/etc                96K   221G    96K  /usr/local/etc
system/usr/local              96K   221G    96K  /ust/local
system/usr/ports             296K   221G   104K  /usr/ports
system/usr/ports/distfiles    96K   221G    96K  /usr/ports/distfiles
system/usr/ports/packages     96K   221G    96K  /usr/ports/packages
system/usr/src                96K   221G    96K  /usr/src
system/var                   976K   221G    96K  /var
system/var/audit              96K   221G    96K  /var/audit
system/var/crash              96K   221G    96K  /var/crash
system/var/db                192K   221G    96K  /var/db
system/var/db/pkg             96K   221G    96K  /var/db/pkg
system/var/log               140K   221G   140K  /var/log
system/var/mail               96K   221G    96K  /var/mail
system/var/run               164K   221G   164K  /var/run
system/var/tmp                96K   221G    96K  /var/tmp
zroot                       2.00G  12.0G    96K  /zroot
zroot/bootenv               2.00G  12.0G    96K  none
zroot/bootenv/default          2G  13.5G   534M  /
df -h:
Code:
Filesystem                    Size    Used   Avail Capacity  Mounted on
zroot/bootenv/default          14G    534M     14G     4%    /
devfs                         1.0K    1.0K      0B   100%    /dev
tmpfs                          13G    4.0K     13G     0%    /tmp
zroot                          12G     96K     12G     0%    /zroot
system/usr/ports              221G    104K    221G     0%    /usr/ports
system/var/crash              221G     96K    221G     0%    /var/crash
system/home                   221G     96K    221G     0%    /home
system/usr/src                221G     96K    221G     0%    /usr/src
system/var/audit              221G     96K    221G     0%    /var/audit
system/var/log                221G    140K    221G     0%    /var/log
system/var/db                 221G     96K    221G     0%    /var/db
system/usr/etc                221G     96K    221G     0%    /usr/local/etc
system/var/tmp                221G     96K    221G     0%    /var/tmp
system/var/run                221G    164K    221G     0%    /var/run
system/var/mail               221G     96K    221G     0%    /var/mail
system/usr/local              221G     96K    221G     0%    /ust/local
system/usr/ports/distfiles    221G     96K    221G     0%    /usr/ports/distfiles
system/var/db/pkg             221G     96K    221G     0%    /var/db/pkg
system/usr/ports/packages     221G     96K    221G     0%    /usr/ports/packages

I am unable to destroy the "bogus" pool, although the message advises me to do so. I suspect that the problem with the two pools being created lies with the script; I may not quite understand the altroot concept. I was wondering whether someone smarter than I could review the script and let me know what needs to be changed.
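
If I understand the man pages correctly, the stale label could presumably be cleared straight off the old partition; something along these lines (nvd0p2 is only my guess at where the old label lives, so I would verify with zdb first):
Code:
# check whether an old ZFS label really sits on the partition
zdb -l /dev/nvd0p2
# if so, wipe it so zpool import stops offering the bogus pool
zpool labelclear -f /dev/nvd0p2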

Kindest regards,

M
 

The primary problem I can see is that system is never actually imported in the environment you eventually boot into; it's only imported in the install environment you install from. I'm not sure there's a solution to this directly from the install process, and I'm not 100% certain how the OS knows which pools should be imported. You don't get this issue with a single pool, because the boot pool is imported automatically as the system uses it to boot.
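
If I remember correctly, the boot scripts import whatever pools are recorded in the zpool.cache file, so something like this from the booted system might make the import stick (untested, and the cache file path may differ between FreeBSD versions):
Code:
# make sure ZFS datasets are mounted at boot
sysrc zfs_enable="YES"
# record the pool in the cache file the boot scripts read
zpool set cachefile=/boot/zfs/zpool.cache system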

The duplicate issue seems most likely due to previous efforts to get the install working. I can't see any obvious problem with the script; it only creates one system pool. Note that when you destroy and create partitions, it doesn't affect the data, so if the offsets are the same, it's entirely possible for the OS to find ZFS labels inside partitions from a previous attempt. I'd be surprised to see two system pools if you had done this with completely blank disks.
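
If you want to rule that out, wipe the old metadata before re-running the script; assuming nvd0 is the NVMe disk (double-check the device name first!), something like:
Code:
zpool destroy system                          # if the pool is imported
gpart destroy -F nvd0                         # drop the partition table
# zero the start of the disk; note that ZFS also keeps
# backup labels at the end of each partition
dd if=/dev/zero of=/dev/nvd0 bs=1m count=4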
 
Hi usdmatt,

thank you for the reply.

I'm not 100% certain how the OS knows which pools should be imported. You don't get this issue with a single pool, because the boot pool is imported automatically as the system uses it to boot.
Maybe due to my ignorance, I do not see this as a problem. Once the system boots and imports zroot, I can then import the system pool; after that, both pools are correctly imported. Am I missing something?
Could I not set the mountpoint for the system pool at the end of the script?
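What I had in mind is something like this as the last step of the script (untested, and I am not sure it would survive the reboot):
Code:
# point the datasets at their real mountpoints once the
# install is done, instead of relying on the altroot
zfs set mountpoint=none system
zfs set mountpoint=/usr system/usr
zfs set mountpoint=/var system/var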

Note that when you destroy and create partitions, it doesn't affect the data, so if the offsets are the same, it's entirely possible for the OS to find ZFS labels inside partitions from a previous attempt.

Regarding the duplicate pools, I am not sure whether your hypothesis is correct. The reason is that when I run the script, it refuses to destroy the GEOM, i.e., the partitions, as you described, until I first destroy the pool. Furthermore, the problem is not with zroot. Nevertheless, I will try to delete the pools and re-run the script.

I think that there must be a solution: in the past, people installed /var and /usr onto different drives, so it stands to reason that this is a conceptually similar issue. Furthermore, people on this forum have a ZFS pool on one drive and either the root /, or just /boot, on a second drive.
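
For what it is worth, my understanding is that such split setups simply point the loader at the root dataset in /boot/loader.conf on the boot drive, e.g.:
Code:
zfs_load="YES"
vfs.root.mountfrom="zfs:zroot/bootenv/default"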

Kindest regards,

M
 
Greetings all,

I do not think that the script does what I want it to do, inter alia, move /usr and /var to a different pool. If I cd /usr, the entire structure described in hier(7) is there.
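
For anyone following along, this is how I checked which dataset actually holds the files under /usr; it turned out to be the root dataset, not system/usr:
Code:
# show the filesystem backing /usr
df -h /usr
# show which of the system datasets are actually mounted
zfs list -r -o name,mounted,mountpoint system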

Back to the drawing board.

Kindest regards,

M
 