ZFS Can't rename a ZFS pool because it was previously in use by another system...

I've used this command:

Code:
zpool import zroot3 zroot1

This command is able to rename the zpool, but something is missing, because afterwards the system crashed and nothing worked anymore.
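(For the record, a rename like this normally needs a clean export afterwards to stick; a minimal sketch, reusing the pool names from this thread and a hypothetical scratch mount point /mnt/tmp:)

Code:
# import the pool currently labelled zroot3 under the new name zroot1,
# rooted at a scratch altroot so its datasets don't mount over the live system
zpool import -f -R /mnt/tmp zroot3 zroot1
# export it again so the new name is written back to the on-disk labels
zpool export zroot1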
 
The job is not done. Please take a look below; this is how it looks after the reguid and the zpool rename:

Code:
root@Z390-AORUS-PRO-DEST:/home/ziomario/Scrivania# zpool import
   pool: zroot2
     id: 17629264177669490151
  state: ONLINE
status: The pool was last accessed by another system.
 action: The pool can be imported using its name or numeric identifier and
        the '-f' flag.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-EY
 config:

        zroot2      ONLINE
          sdh       ONLINE

   pool: zroot1
     id: 15697395870475046810
  state: ONLINE
status: Some supported features are not enabled on the pool.
        (Note that they may be intentionally disabled if the
        'compatibility' property is set.)
 action: The pool can be imported using its name or numeric identifier, though
        some features will not be available without an explicit 'zpool upgrade'.
 config:

        zroot1      ONLINE
          sdi       ONLINE

   pool: zroot3
     id: 15697395870475046810
  state: ONLINE
status: Some supported features are not enabled on the pool.
        (Note that they may be intentionally disabled if the
        'compatibility' property is set.)
 action: The pool can be imported using its name or numeric identifier, though
        some features will not be available without an explicit 'zpool upgrade'.
 config:

        zroot3      ONLINE
          sdi       ONLINE

Now the two conflicting zpools can coexist, but they both point to the same disk, sdi, and that's not good.
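(A complementary check to the zdb output below: newer OpenZFS lets zpool import scan a single device instead of all of /dev, which shows what each partition's label resolves to on its own; a sketch, using the device from the listing above — on older versions -d accepts only a directory:)

Code:
# scan only this partition, so only the label actually stored on it is reported
zpool import -d /dev/sdi4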

Code:
root@Z390-AORUS-PRO-DEST:/home/ziomario/Scrivania# zdb -l /dev/sda4
------------------------------------
LABEL 0 
------------------------------------
    version: 5000
    name: 'zroot3'
    state: 0
    txg: 362903
    pool_guid: 7607196024616605116
    errata: 0
    hostname: 'marietto'
    top_guid: 8357560681389834947
    guid: 8357560681389834947
    vdev_children: 1
    vdev_tree:
        type: 'disk'
        id: 0
        guid: 8357560681389834947
        path: '/dev/gpt/zfs0'
        phys_path: 'id1,enc@n3061686369656d30/type@0/slot@2/elmdesc@Slot_01/p4'
        whole_disk: 1
        metaslab_array: 256
        metaslab_shift: 32
        ashift: 12
        asize: 497681956864
        is_log: 0
        create_txg: 4
    features_for_read:
        com.delphix:hole_birth
        com.delphix:embedded_data
    labels = 0 1 2 3 

root@Z390-AORUS-PRO-DEST:/home/ziomario/Scrivania# zdb -l /dev/sdi4
------------------------------------
LABEL 0 
------------------------------------
    version: 5000
    name: 'zroot1'
    state: 0
    txg: 110489
    pool_guid: 15697395870475046810
    errata: 0
    hostid: 2425734838
    hostname: 'Z390-AORUS-PRO-DEST'
    top_guid: 8357560681389834947
    guid: 8357560681389834947
    vdev_children: 1
    vdev_tree:
        type: 'disk'
        id: 0
        guid: 8357560681389834947
        path: '/dev/sdh4'
        phys_path: 'id1,enc@n3061686369656d30/type@0/slot@2/elmdesc@Slot_01/p4'
        whole_disk: 1
        metaslab_array: 256
        metaslab_shift: 32
        ashift: 12
        asize: 497681956864
        is_log: 0
        create_txg: 4
    features_for_read:
        com.delphix:hole_birth
        com.delphix:embedded_data
    labels = 0 1 2 3
 
The job is not done. [...] Now the two conflicting zpools can coexist, but they both point to the same disk, sdi, and that's not good.
Do not rush with these imports. Show us the
zdb -l information of all the ZFS partitions, and
zpool status

It is hard to follow what exactly you are doing.
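(On the Linux side, a loop like this would collect them; the sdh4/sdi4 names are assumptions based on the devices in the listings above:)

Code:
# dump the ZFS label of each candidate partition
for p in /dev/sdh4 /dev/sdi4; do
    echo "== $p"
    zdb -l "$p"
done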
 
Code:
root@Z390-AORUS-PRO-DEST:/home/ziomario/Scrivania# zpool status
no pools available

and reload the page...
 
Code:
root@Z390-AORUS-PRO-DEST:/home/ziomario/Scrivania# zpool import

   pool: zroot2
     id: 17629264177669490151
  state: ONLINE
status: The pool was last accessed by another system.
 action: The pool can be imported using its name or numeric identifier and
        the '-f' flag.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-EY
 config:

        zroot2      ONLINE
          sdh       ONLINE

   pool: zroot1
     id: 15697395870475046810
  state: ONLINE
status: Some supported features are not enabled on the pool.
        (Note that they may be intentionally disabled if the
        'compatibility' property is set.)
 action: The pool can be imported using its name or numeric identifier, though
        some features will not be available without an explicit 'zpool upgrade'.
 config:

        zroot1      ONLINE
          sdi       ONLINE

   pool: zroot3
     id: 15697395870475046810
  state: ONLINE
status: Some supported features are not enabled on the pool.
        (Note that they may be intentionally disabled if the
        'compatibility' property is set.)
 action: The pool can be imported using its name or numeric identifier, though
        some features will not be available without an explicit 'zpool upgrade'.
 config:

        zroot3      ONLINE
          sdi       ONLINE
 
And why did you do this: root@Z390-AORUS-PRO-DEST:/mnt/zroot3# zpool import -f -R /mnt/zroot3 zroot3 ?

You had two zroot3-s, one mounted and the other not mounted. I advised you to reguid the first and then import the other with a new name.
 
I'm on Linux. The command zpool reguid zroot3 didn't work without importing the pool first:

Code:
root@Z390-AORUS-PRO-DEST:/home/ziomario/Scrivania# zpool reguid zroot3
cannot open 'zroot3': no such pool

Now it didn't give any error:

Code:
root@Z390-AORUS-PRO-DEST:/mnt/zroot3# zpool import -f -R /mnt/zroot3 zroot3
root@Z390-AORUS-PRO-DEST:/mnt/zroot3# zpool reguid zroot3
OK.
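(Presumably the new GUID only reaches the on-disk labels once a transaction syncs; exporting the pool right after the reguid forces that out and releases the pool cleanly. A sketch, with the device name taken from the zdb output above:)

Code:
# export so the updated label (new pool_guid) is flushed to disk
zpool export zroot3
# verify: pool_guid should now differ from the old 7607196024616605116
zdb -l /dev/sda4 | grep pool_guid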
 
And why did you do this: root@Z390-AORUS-PRO-DEST:/mnt/zroot3# zpool import -f -R /mnt/zroot3 zroot3 ?

You had two zroot3-s, one mounted and the other not mounted. I advised you to reguid the first and then import the other with a new name.

Nope. I have renamed one of them. Now they are called zroot1 and zroot3; I have also reguided one of them. But I see that the reguid command didn't work.
 
I'm on Linux. The command zpool reguid zroot3 didn't work without importing the pool first:
I don't understand the Linux comment. Of course it does not reguid a pool that is not imported. The idea was to reguid the pool that was already imported.

For clarity (but I have no more time today):

geom disk list
gpart show
zdb -l <for all the freebsd-zfs partitions>
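(A sketch of how to feed all of them to zdb in one go on FreeBSD — untested, and it assumes gpart's default column layout:)

Code:
# -p prints provider names (e.g. ada0p4); run zdb -l on every freebsd-zfs partition
gpart show -p | awk '/freebsd-zfs/ { print "/dev/" $3 }' | xargs -n1 zdb -l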
 
Nope. I have renamed one of them. Now they are called zroot1 and zroot3; I have also reguided one of them. But I see that the reguid command didn't work.
Reguid always works. It works on imported pools, not on ones that are not imported. I just did it twice to show you:

Code:
root@Testsystem ~# zpool status
  pool: ssd_sys
 state: ONLINE
config:

        NAME        STATE     READ WRITE CKSUM
        ssd_sys     ONLINE       0     0     0
          ada0p3    ONLINE       0     0     0
        cache
          ada1p3    ONLINE       0     0     0

errors: No known data errors
root@Testsystem ~# zpool reguid ssd_sys
root@Testsystem ~# zdb -l /dev/ada0p3
------------------------------------
LABEL 0
------------------------------------
    version: 5000
    name: 'ssd_sys'
    state: 0
    txg: 131698
    pool_guid: 1558359337094743874
    errata: 0
    hostname: 'Testsystem'
    top_guid: 14229285537606935254
    guid: 14229285537606935254
    vdev_children: 1
    vdev_tree:
        type: 'disk'
        id: 0
        guid: 14229285537606935254
        path: '/dev/ada0p3'
        whole_disk: 1
        metaslab_array: 67
        metaslab_shift: 31
        ashift: 12
        asize: 241254793216
        is_log: 0
        create_txg: 4
    features_for_read:
        com.delphix:hole_birth
        com.delphix:embedded_data
        com.klarasystems:vdev_zaps_v2
    labels = 0 1 2 3
root@Testsystem ~# zpool reguid ssd_sys
root@Testsystem ~# zdb -l /dev/ada0p3
------------------------------------
LABEL 0
------------------------------------
    version: 5000
    name: 'ssd_sys'
    state: 0
    txg: 131700
    pool_guid: 16180463519892370449
    errata: 0
    hostname: 'Testsystem'
    top_guid: 14229285537606935254
    guid: 14229285537606935254
    vdev_children: 1
    vdev_tree:
        type: 'disk'
        id: 0
        guid: 14229285537606935254
        path: '/dev/ada0p3'
        whole_disk: 1
        metaslab_array: 67
        metaslab_shift: 31
        ashift: 12
        asize: 241254793216
        is_log: 0
        create_txg: 4
    features_for_read:
        com.delphix:hole_birth
        com.delphix:embedded_data
        com.klarasystems:vdev_zaps_v2
    labels = 0 1 2 3
 
While I'm on Linux (where I've installed ZFS, as in FreeBSD), all my ZFS pools can coexist. Now I've booted FreeBSD, and from this point of view zroot3 and zroot1 can't coexist; to be able to boot FreeBSD I've been forced to detach the USB disk (which I reattached later, once FreeBSD had fully booted from the ada0 disk). So:

Code:
# zpool import

   pool: zroot1
     id: 15697395870475046810
  state: ONLINE
status: The pool was last accessed by another system.
 action: The pool can be imported using its name or numeric identifier and
        the '-f' flag.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-EY
 config:

        zroot1                                        ONLINE
          gptid/ee846d91-92b6-11ee-8772-e0d55ee21f22  ONLINE

   pool: zroot2
     id: 17629264177669490151
  state: ONLINE
status: Some supported features are not enabled on the pool.
        (Note that they may be intentionally disabled if the
        'compatibility' property is set.)
 action: The pool can be imported using its name or numeric identifier, though
        some features will not be available without an explicit 'zpool upgrade'.
 config:

        zroot2                                        ONLINE
          gptid/4f4c8af2-2ec0-11ed-8ff9-e0d55ee21f22  ONLINE

# gpart show

=>       40  976773095  ada0  GPT  (466G)
         40     532480     1  efi  (260M)
     532520       1024     2  freebsd-boot  (512K)
     533544        984        - free -  (492K)
     534528    4194304     3  freebsd-swap  (2.0G)
    4728832  972044288     4  freebsd-zfs  (464G)
  976773120         15        - free -  (7.5K)

=>        40  1953525095  da2  GPT  (932G)
          40      532480    1  efi  (260M)
      532520        1024    2  freebsd-boot  (512K)
      533544         984       - free -  (492K)
      534528     4194304    3  freebsd-swap  (2.0G)
     4728832  1948794880    4  freebsd-zfs  (929G)
  1953523712        1423       - free -  (712K)

=>        40  1953525095  diskid/DISK-20130506005976F  GPT  (932G)
          40      532480                            1  efi  (260M)
      532520        1024                            2  freebsd-boot  (512K)
      533544         984                               - free -  (492K)
      534528     4194304                            3  freebsd-swap  (2.0G)
     4728832  1948794880                            4  freebsd-zfs  (929G)
  1953523712        1423                               - free -  (712K)

=>        40  3907029095  da6  GPT  (1.8T)
          40      532480    1  efi  (260M)
      532520        1024    2  freebsd-boot  (512K)
      533544         984       - free -  (492K)
      534528     4194304    3  freebsd-swap  (2.0G)
     4728832   972044288    4  freebsd-zfs  (464G)
   976773120  2930255872    5  ms-basic-data  (1.4T)
  3907028992         143       - free -  (72K)

=>        63  2930255809  da6p5  MBR  (1.4T)
          63  2930255809         - free -  (1.4T)

=>        40  3907029095  diskid/DISK-2015020204055E  GPT  (1.8T)
          40      532480                           1  efi  (260M)
      532520        1024                           2  freebsd-boot  (512K)
      533544         984                              - free -  (492K)
      534528     4194304                           3  freebsd-swap  (2.0G)
     4728832   972044288                           4  freebsd-zfs  (464G)
   976773120  2930255872                           5  ms-basic-data  (1.4T)
  3907028992         143                              - free -  (72K)

=>        63  2930255809  gpt/exfat-dati  MBR  (1.4T)
          63  2930255809                  - free -  (1.4T)

=>        63  2930255809  gptid/3c7f06db-442b-4e95-9f56-b36c7dfa1281  MBR  (1.4T)
          63  2930255809                                              - free -  (1.4T)

=>        63  2930255809  diskid/DISK-2015020204055Ep5  MBR  (1.4T)
          63  2930255809                                - free -  (1.4T)

#  geom disk list

Geom name: ada0
Providers:
1. Name: ada0
   Mediasize: 500107862016 (466G)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r2w2e4
   descr: CT500MX500SSD4
   lunid: 500a0751e20b2ae5
   ident: 1924E20B2AE5
   rotationrate: 0
   fwsectors: 63
   fwheads: 16

Geom name: da2
Providers:
1. Name: da2
   Mediasize: 1000204886016 (932G)
   Sectorsize: 512
   Mode: r0w0e0
   descr: TOSHIBA External USB 3.0
   lunid: 41736d6564696120
   ident: 20130506005976F
   rotationrate: unknown
   fwsectors: 63
   fwheads: 255

Geom name: da6
Providers:
1. Name: da6
   Mediasize: 2000398934016 (1.8T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r0w0e0
   descr: TOSHIBA External USB 3.0
   lunid: 5000000000000001
   ident: 2015020204055E
   rotationrate: unknown
   fwsectors: 63
   fwheads: 255

# zdb -l /dev/da2p4
------------------------------------
LABEL 0
------------------------------------
    version: 5000
    name: 'zroot2'
    state: 0
    txg: 2277293
    pool_guid: 17629264177669490151
    errata: 0
    hostid: 2866736267
    hostname: 'marietto'
    top_guid: 9893295991557613629
    guid: 9893295991557613629
    vdev_children: 1
    vdev_tree:
        type: 'disk'
        id: 0
        guid: 9893295991557613629
        path: '/dev/gptid/4f4c8af2-2ec0-11ed-8ff9-e0d55ee21f22'
        whole_disk: 1
        metaslab_array: 256
        metaslab_shift: 33
        ashift: 12
        asize: 997778259968
        is_log: 0
        DTL: 216
        create_txg: 4
    features_for_read:
        com.delphix:hole_birth
        com.delphix:embedded_data
    labels = 0 1 2 3

# zdb -l /dev/da6p4
------------------------------------
LABEL 0
------------------------------------
    version: 5000
    name: 'zroot1'
    state: 0
    txg: 110489
    pool_guid: 15697395870475046810
    errata: 0
    hostid: 2425734838
    hostname: 'Z390-AORUS-PRO-DEST'
    top_guid: 8357560681389834947
    guid: 8357560681389834947
    vdev_children: 1
    vdev_tree:
        type: 'disk'
        id: 0
        guid: 8357560681389834947
        path: '/dev/sdh4'
        phys_path: 'id1,enc@n3061686369656d30/type@0/slot@2/elmdesc@Slot_01/p4'
        whole_disk: 1
        metaslab_array: 256
        metaslab_shift: 32
        ashift: 12
        asize: 497681956864
        is_log: 0
        create_txg: 4
    features_for_read:
        com.delphix:hole_birth
        com.delphix:embedded_data
    labels = 0 1 2 3

# zdb -l /dev/ada0p4
------------------------------------
LABEL 0
------------------------------------
    version: 5000
    name: 'zroot3'
    state: 0
    txg: 385602
    pool_guid: 7607196024616605116
    errata: 0
    hostname: 'marietto'
    top_guid: 8357560681389834947
    guid: 8357560681389834947
    vdev_children: 1
    vdev_tree:
        type: 'disk'
        id: 0
        guid: 8357560681389834947
        path: '/dev/gpt/zfs0'
        phys_path: 'id1,enc@n3061686369656d30/type@0/slot@2/elmdesc@Slot_01/p4'
        whole_disk: 1
        metaslab_array: 256
        metaslab_shift: 32
        ashift: 12
        asize: 497681956864
        is_log: 0
        create_txg: 4
    features_for_read:
        com.delphix:hole_birth
        com.delphix:embedded_data
    labels = 0 1 2 3

# zpool status

  pool: zroot3
 state: ONLINE
status: Some supported and requested features are not enabled on the pool.
        The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(7) for details.
config:

        NAME        STATE     READ WRITE CKSUM
        zroot3      ONLINE       0     0     0
          gpt/zfs0  ONLINE       0     0     0

errors: No known data errors
 
Please read carefully what I wrote: "While I'm on Linux..."; but I wrote the latest message while I was on FreeBSD.
 
Please read carefully what I wrote: "While I'm on Linux..."; but I wrote the latest message while I was on FreeBSD.
zdb -l should show the device path under which the pool was last imported, using the info recorded in the pool itself.
So if the pool was last imported (cleanly) on Linux and cannot be imported on FreeBSD, the label should show how Linux recognized it. On the other hand, for pools last cleanly imported (including currently imported) on FreeBSD, the label should show how FreeBSD recognized them.
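(Concretely, the recorded path and the GUIDs can be pulled out of the labels like this; a sketch over the two partitions discussed above:)

Code:
# compare the recorded device path and GUIDs of the two suspect labels
for p in /dev/ada0p4 /dev/da6p4; do
    echo "== $p"
    zdb -l "$p" | grep -E " (name|path|pool_guid|top_guid):"
done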
 
If I have understood correctly, I should boot a third installation of FreeBSD, preferably installed on a UFS disk, and from there import both zpools, zroot3 and zroot1? Was it my mistake to use Linux instead of FreeBSD to import the zpool that I wanted to rename and reguid?
 
What's confusing me is that even though zroot1 and zroot3 now have different pool_guid values (presumably thanks to zpool reguid), the underlying disk guids (and top_guid) are still the same (8357560681389834947).
Not sure, but this could be causing a collision when attempting to import.
Someone with much deeper knowledge of the OpenZFS implementation is needed.
Argentum, any thoughts?
 
If I have understood correctly, I should boot a third installation of FreeBSD, preferably installed on a UFS disk, and from there import both zpools, zroot3 and zroot1? Was it my mistake to use Linux instead of FreeBSD to import the zpool that I wanted to rename and reguid?

We do not know what you did before, or what you want to do. If you made a block copy of an entire disk, you probably also blindly duplicated the partition table. That is asking for trouble. It is never a good idea to block-copy an entire disk (unless you have a special reason), especially if you want to use those disks later in the same machine. GPT stands for GUID (based) Partition Table. GUID means Globally Unique Identifier, and there is a good reason to keep it unique. These GUIDs are used by design for a purpose. Collision or reuse of the same GUID is always problematic.

If you need a copy of your ZFS pool, there are good ways to do it. For example, using zfs send (zfs-send(8)) — see the sketch below — or mirroring the original. Also, always try to avoid duplicating the same GPT partition table; otherwise you will have partitions on different disks with the same GUID. Use gpart create and gpart add to create partitions, and read the manual, gpart(8).
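(A minimal sketch of the zfs send route, using the pool names from this thread; the snapshot name @copy and the unmounted receive target are assumptions, adjust as needed:)

Code:
# snapshot the source pool recursively, then replicate the whole tree
zfs snapshot -r zroot3@copy
# -R sends the tree with its properties; -u on receive avoids mounting root-pool
# datasets (their mountpoints would otherwise collide with the running system)
zfs send -R zroot3@copy | zfs receive -u zroot-132/copy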

https://en.wikipedia.org/wiki/GUID_Partition_Table

Making a block copy and trying to fix it afterwards can be a problem, as we can see here.
 
Ok. I'm following another route. I have installed FreeBSD 13.2 from scratch on the USB disk where I had previously dd'ed the image of an old FreeBSD 13.2 backup. Now I'm copying all the files of this fresh installation into a directory called "132-ZFS-fresh" on the same disk. The next step was to load my old FreeBSD_132-zfs-new.img backup into memory using mdconfig, with this command:

Code:
# mdconfig -a -t vnode -f FreeBSD_132-zfs-new.img -u 0

Here it is:

Code:
=>       40  976773088  md0  GPT  (466G)
         40     532480    1  efi  (260M)
     532520       1024    2  freebsd-boot  (512K)
     533544        984       - free -  (492K)
     534528    4194304    3  freebsd-swap  (2.0G)
    4728832  972044288    4  freebsd-zfs  (464G)
  976773120          8       - free -  (4.0K)

Now I want to import the zpool that's inside the partition /dev/md0p4, which is called zroot. The problem is that it is not seen by the system:

Code:
# zpool import

   pool: zroot2
     id: 17629264177669490151
  state: ONLINE
status: Some supported features are not enabled on the pool.
        (Note that they may be intentionally disabled if the
        'compatibility' property is set.)
 action: The pool can be imported using its name or numeric identifier, though
        some features will not be available without an explicit 'zpool upgrade'.
 config:

        zroot2                                        ONLINE
          gptid/4f4c8af2-2ec0-11ed-8ff9-e0d55ee21f22  ONLINE

but it is there:

Code:
# zdb -l /dev/md0p4

------------------------------------
LABEL 0
------------------------------------
    version: 5000
    name: 'zroot'
    state: 0
    txg: 97705
    pool_guid: 7607196024616605116
    errata: 0
    hostname: ''
    top_guid: 8357560681389834947
    guid: 8357560681389834947
    vdev_children: 1
    vdev_tree:
        type: 'disk'
        id: 0
        guid: 8357560681389834947
        path: '/dev/ada0p4'
        phys_path: 'id1,enc@n3061686369656d30/type@0/slot@2/elmdesc@Slot_01/p4'
        whole_disk: 1
        metaslab_array: 256
        metaslab_shift: 32
        ashift: 12
        asize: 497681956864
        is_log: 0
        create_txg: 4
    features_for_read:
        com.delphix:hole_birth
        com.delphix:embedded_data
    labels = 0 1 2 3

What I want to do is copy every file inside this zpool onto the new zpool that I created using the FreeBSD installer, called zroot-132; this one:

Code:
# zpool status

  pool: zroot-132
 state: ONLINE
status: Some supported and requested features are not enabled on the pool.
        The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(7) for details.
config:

        NAME        STATE     READ WRITE CKSUM
        zroot-132   ONLINE       0     0     0
          da5p4     ONLINE       0     0     0

errors: No known data errors

What's the problem now? Why am I not able to import it?
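(One observation: the label on md0p4 carries the same pool_guid, 7607196024616605116, as the zroot3 label on ada0p4 shown earlier, so the default device scan may be tripping over the duplicate. A sketch of something to try — the mount point /mnt/old and the name zroot-old are hypothetical:)

Code:
# point the scan straight at the memory disk instead of all of /dev
zpool import -d /dev/md0p4
# if the pool shows up, import it read-only under a new name to copy the files off
zpool import -f -d /dev/md0p4 -o readonly=on -R /mnt/old 7607196024616605116 zroot-old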
 