ZFS Can't rename a ZFS pool because it was previously in use from another system...

Hello to everyone.

I would like to rename the zpool called zroot to zroot3, for example. Usually I do this with the command zpool import zroot-old zroot-new, but this time it didn't work:

Code:
# zpool import

  pool: zroot
     id: 7607196024616605116
  state: ONLINE
status: Some supported features are not enabled on the pool.
    (Note that they may be intentionally disabled if the
    'compatibility' property is set.)
 action: The pool can be imported using its name or numeric identifier, though
    some features will not be available without an explicit 'zpool upgrade'.
 config:

    zroot       ONLINE
      gpt/zfs0  ONLINE

# zpool import zroot zroot3

cannot import 'zroot': pool was previously in use from another system.
The pool can be imported, use 'zpool import -f' to import the pool.
 
You might try the zdb command to display the label metadata for each device to see if there's a conflict between multiple pools named zroot.

zdb -l /dev/gpt/zfs0
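For a quick look at just the fields that matter here (pool name, GUID, hostid, hostname), something like this should do; the device path is the one from your zpool import output:

Code:
# Show only the identifying fields from the vdev labels:
zdb -l /dev/gpt/zfs0 | grep -E 'name:|pool_guid:|hostid:|hostname:'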
 
Quoting the error message above: "The pool can be imported, use 'zpool import -f' to import the pool."
 
Code:
# zpool import zroot zroot3

cannot import 'zroot': pool was previously in use from another system.
The pool can be imported, use 'zpool import -f' to import the pool.
I have done that some time ago. I think this is just a precaution and you can use zpool import -f if you know what you are doing.
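As a minimal sketch, assuming the pool really isn't imported anywhere else, the forced rename import would be:

Code:
# Old name first, new name second; -f overrides the "previously in use" check:
zpool import -f zroot zroot3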
 
I think the error is in what I've highlighted:

Code:
root@marietto:/mnt/zroot2 # zpool import -f -R /mnt/zroot2 zroot2

root@marietto:/etc # zfs list

NAME                  USED  AVAIL  REFER  MOUNTPOINT

zroot2                355G   544G   355G  /mnt/zroot2/zroot2
zroot2/ROOT           288K   544G    96K  none
zroot2/ROOT/default   192K   544G   192K  /mnt/zroot2
zroot2/tmp            148K   544G   148K  /mnt/zroot2/tmp
zroot2/usr            400K   544G   112K  /mnt/zroot2/usr
zroot2/usr/home        96K   544G    96K  /mnt/zroot2/usr/home
zroot2/usr/ports       96K   544G    96K  /mnt/zroot2/usr/ports
zroot2/usr/src         96K   544G    96K  /mnt/zroot2/usr/src
zroot2/var           3.75M   544G   136K  /mnt/zroot2/var
zroot2/var/audit       96K   544G    96K  /mnt/zroot2/var/audit
zroot2/var/crash     2.74M   544G  2.74M  /mnt/zroot2/var/crash
zroot2/var/log        588K   544G   588K  /mnt/zroot2/var/log
zroot2/var/mail        96K   544G    96K  /mnt/zroot2/var/mail
zroot2/var/tmp        120K   544G   120K  /mnt/zroot2/var/tmp

root@marietto:/mnt/zroot3 # zpool import -f -R /mnt/zroot3 zroot3

zroot3                330G   116G    96K  /mnt/zroot3/zroot      (shouldn't it be zroot3?)
zroot3/ROOT          71.9G   116G    96K  none
zroot3/ROOT/default  71.9G   116G  71.9G  /mnt/zroot3
zroot3/tmp            236M   116G   236M  /mnt/zroot3/tmp
zroot3/usr            201G   116G   114G  /mnt/zroot3/usr
zroot3/usr/home      65.0G   116G  65.0G  /mnt/zroot3/usr/home
zroot3/usr/ports     18.7G   116G  18.7G  /mnt/zroot3/usr/ports
zroot3/usr/src       3.19G   116G  3.19G  /mnt/zroot3/usr/src
zroot3/var           56.5G   116G  54.1G  /mnt/zroot3/var
zroot3/var/audit       96K   116G    96K  /mnt/zroot3/var/audit
zroot3/var/crash     1.11G   116G  1.11G  /mnt/zroot3/var/crash
zroot3/var/log       4.73M   116G  4.73M  /mnt/zroot3/var/log
zroot3/var/mail      1.33G   116G  1.33G  /mnt/zroot3/var/mail
zroot3/var/tmp       18.1M   116G  18.1M  /mnt/zroot3/var/tmp

Code:
root@marietto:/mnt/zroot3 # zfs list

NAME                  USED  AVAIL  REFER  MOUNTPOINT
zroot2                355G   544G   355G  /mnt/zroot2/zroot2
zroot2/ROOT           288K   544G    96K  none
zroot2/ROOT/default   192K   544G   192K  /mnt/zroot2
zroot2/tmp            148K   544G   148K  /mnt/zroot2/tmp
zroot2/usr            400K   544G   112K  /mnt/zroot2/usr
zroot2/usr/home        96K   544G    96K  /mnt/zroot2/usr/home
zroot2/usr/ports       96K   544G    96K  /mnt/zroot2/usr/ports
zroot2/usr/src         96K   544G    96K  /mnt/zroot2/usr/src
zroot2/var           3.75M   544G   136K  /mnt/zroot2/var
zroot2/var/audit       96K   544G    96K  /mnt/zroot2/var/audit
zroot2/var/crash     2.74M   544G  2.74M  /mnt/zroot2/var/crash
zroot2/var/log        588K   544G   588K  /mnt/zroot2/var/log
zroot2/var/mail        96K   544G    96K  /mnt/zroot2/var/mail
zroot2/var/tmp        120K   544G   120K  /mnt/zroot2/var/tmp
zroot3                330G   116G    96K  /mnt/zroot3/zroot
zroot3/ROOT          71.9G   116G    96K  none
zroot3/ROOT/default  71.9G   116G  71.9G  /mnt/zroot3
zroot3/tmp            236M   116G   236M  /mnt/zroot3/tmp
zroot3/usr            201G   116G   114G  /mnt/zroot3/usr
zroot3/usr/home      65.0G   116G  65.0G  /mnt/zroot3/usr/home
zroot3/usr/ports     18.7G   116G  18.7G  /mnt/zroot3/usr/ports
zroot3/usr/src       3.19G   116G  3.19G  /mnt/zroot3/usr/src
zroot3/var           56.5G   116G  54.1G  /mnt/zroot3/var
zroot3/var/audit       96K   116G    96K  /mnt/zroot3/var/audit
zroot3/var/crash     1.11G   116G  1.11G  /mnt/zroot3/var/crash
zroot3/var/log       4.73M   116G  4.73M  /mnt/zroot3/var/log
zroot3/var/mail      1.33G   116G  1.33G  /mnt/zroot3/var/mail
zroot3/var/tmp       18.1M   116G  18.1M  /mnt/zroot3/var/tmp

I don't see the pool zroot, which should be on this disk:

=>        40  3907029095  da5  GPT  (1.8T)
          40      532480    1  efi  (260M)
      532520        1024    2  freebsd-boot  (512K)
      533544         984       - free -  (492K)
      534528     4194304    3  freebsd-swap  (2.0G)
     4728832   972044288    4  freebsd-zfs  (464G)
   976773120  2930256015       - free -  (1.4T)

What do you think? How can I fix it?
 
Code:
root@marietto:/mnt/zroot3 # zdb -l /dev/gpt/zfs0

------------------------------------
LABEL 0 
------------------------------------
    version: 5000
    name: 'zroot3'
    state: 0
    txg: 350412
    pool_guid: 7607196024616605116
    errata: 0
    hostid: 717846768
    hostname: 'marietto'
    top_guid: 8357560681389834947
    guid: 8357560681389834947
    vdev_children: 1
    vdev_tree:
        type: 'disk'
        id: 0
        guid: 8357560681389834947
        path: '/dev/gpt/zfs0'
        phys_path: 'id1,enc@n3061686369656d30/type@0/slot@2/elmdesc@Slot_01/p4'
        whole_disk: 1
        metaslab_array: 256
        metaslab_shift: 32
        ashift: 12
        asize: 497681956864
        is_log: 0
        create_txg: 4
    features_for_read:
        com.delphix:hole_birth
        com.delphix:embedded_data
    labels = 0 1 2 3
 
root@marietto:/mnt/zroot3 # zpool import -f -R /mnt/zroot3 zroot3

zroot3                330G   116G    96K  /mnt/zroot3/zroot      (shouldn't it be zroot3?)
zroot3/ROOT          71.9G   116G    96K  none
zroot3/ROOT/default  71.9G   116G  71.9G  /mnt/zroot3
zroot3/tmp            236M   116G   236M  /mnt/zroot3/tmp
zroot3/usr            201G   116G   114G  /mnt/zroot3/usr

What do you think? How can I fix it?

I think this is normal. Only the pool name is changed, but not the mount points. Also, you can import without mounting.

zpool-import(8): there is -N -- import the pool without mounting any file systems.
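A rough sketch of how that could look here, assuming the pool is currently exported and the mountpoint below is only an example:

Code:
# Import without mounting any of the pool's file systems:
zpool import -f -N zroot3

# Adjust dataset mountpoints afterwards if needed, then mount:
zfs set mountpoint=/mnt/zroot3 zroot3
zfs mount -a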
 
Something is broken, because I have 3 zpools but I see only two. My ZFS disks are the following:

Code:
=>       40  976773095  ada0  GPT  (466G)
         40     532480     1  efi  (260M)
     532520       1024     2  freebsd-boot  (512K)
     533544        984        - free -  (492K)
     534528    4194304     3  freebsd-swap  (2.0G)
    4728832  972044288     4  freebsd-zfs  (464G)
  976773120         15        - free -  (7.5K)

=>        40  1953525095  da1  GPT  (932G)
          40      532480    1  efi  (260M)
      532520        1024    2  freebsd-boot  (512K)
      533544         984       - free -  (492K)
      534528     4194304    3  freebsd-swap  (2.0G)
     4728832  1948794880    4  freebsd-zfs  (929G)
  1953523712        1423       - free -  (712K)

=>        40  3907029095  da5  GPT  (1.8T)
          40      532480    1  efi  (260M)
      532520        1024    2  freebsd-boot  (512K)
      533544         984       - free -  (492K)
      534528     4194304    3  freebsd-swap  (2.0G)
     4728832   972044288    4  freebsd-zfs  (464G)
   976773120  2930256015       - free -  (1.4T)
 
Code:
root@marietto:/mnt/zroot3 # zpool status
  pool: zroot2
 state: ONLINE
status: One or more devices has experienced an error resulting in data
    corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
    entire pool from backup.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-8A
  scan: scrub repaired 0B in 18:01:37 with 0 errors on Sun Oct  8 18:25:41 2023
config:

    NAME                                          STATE     READ WRITE CKSUM
    zroot2                                        ONLINE       0     0     0
      gptid/4f4c8af2-2ec0-11ed-8ff9-e0d55ee21f22  ONLINE       0     0     0

errors: 4 data errors, use '-v' for a list

  pool: zroot3
 state: ONLINE
status: Some supported and requested features are not enabled on the pool.
    The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
    the pool may no longer be accessible by software that does not support
    the features. See zpool-features(7) for details.
config:

    NAME        STATE     READ WRITE CKSUM
    zroot3      ONLINE       0     0     0
      gpt/zfs0  ONLINE       0     0     0

errors: No known data errors
 
And zpool import ?
 
Code:
root@marietto:/home/marietto/Desktop # zpool import

   pool: zroot2
     id: 17629264177669490151
  state: ONLINE
status: Some supported features are not enabled on the pool.
    (Note that they may be intentionally disabled if the 'compatibility' property is set.)
 action: The pool can be imported using its name or numeric identifier, though
    some features will not be available without an explicit 'zpool upgrade'.
 config:

    zroot2                                        ONLINE
      gptid/4f4c8af2-2ec0-11ed-8ff9-e0d55ee21f22  ONLINE

   pool: zroot3
     id: 7607196024616605116
  state: ONLINE
status: Some supported features are not enabled on the pool.
    (Note that they may be intentionally disabled if the 'compatibility' property is set.)
 action: The pool can be imported using its name or numeric identifier, though
    some features will not be available without an explicit 'zpool upgrade'.
 config:

    zroot3      ONLINE
      gpt/zfs0  ONLINE
 
This is the suggestion I've got:

The fault is that you are using da and ada instead of the unique disk IDs when importing the pools. Consult the zpool import man page and its -d option to see your options.... And please add redundancy to your vdevs. A stripe of 1 is just asking for trouble.

I have no idea what to do. ZFS is my bête noire. Can someone elaborate a bit more to help me?
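For what it's worth, a sketch of what that suggestion means in practice; /dev/gpt holds the GPT label device nodes, so this only works for partitions that actually have labels:

Code:
# Scan only the GPT label device nodes for importable pools:
zpool import -d /dev/gpt

# Then import (and rename) a specific pool found there:
zpool import -d /dev/gpt -f zroot zroot3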
 
It would be too late now, but was the pool (zroot) exported cleanly in the first place?
I myself have never tried to rename a pool, but to do something like that, the pool should be EXPORTED CLEANLY, WITHOUT ERROR.
I suspect it was not, since zpool import (on the first attempt) advised using the -f option.

And creating pools on raw GEOM providers like /dev/ada0 is basically discouraged. In some cases device enumeration is racy, especially when multiple controllers of exactly the same type exist and every port on them has a drive attached. The very same drive can be recognized as ada0 on one boot but, at worst, as ada2 on the next, assuming two drives are attached to each controller in the system. In the early days I struggled with this kind of problem (with ad rather than ada at the time, though). The same happened with other OSes on the same hardware.

So I habitually label partitions and use the labels instead of using a GEOM provider like /dev/nda0p3 directly. Each label is unique across all the partitions I manage.
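A minimal sketch of that labeling approach; the label name is made up, and the partition index (4, the freebsd-zfs partition on da5 in the gpart output above) must match your own layout:

Code:
# Put a unique GPT label on the freebsd-zfs partition (best done while the pool is exported):
gpart modify -i 4 -l zfs-disk1 da5

# From then on refer to it via the label, e.g.:
zdb -l /dev/gpt/zfs-disk1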
 
The pool that was broken (but has since been fixed) is not the pool that I'm having trouble importing. It's the other pools that can't be imported together, because something about them conflicts. So, can you help me troubleshoot the situation to find the problem and fix it? I actually need to import both of the zroot* pools, thanks.
 
If the conflict is about mount points, the -R option on import could help.
For example, if nothing is mounted under /mnt, zpool import -R /mnt -f ztest would forcibly import the pool named ztest and mount its root under /mnt. See zpool-import(8) for details.
 
I can't mount one of the zroot* pools. For example, I've now booted the FreeBSD that's on the ada0 disk (zroot3), and this is what I'm able to see:

Code:
root@marietto:/usr/home/marietto/Desktop # zpool list

NAME     SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
zroot3   460G   330G   130G        -         -    27%    71%  1.00x    ONLINE  -

root@marietto:/usr/home/marietto/Desktop # zpool import

   pool: zroot2
     id: 17629264177669490151
  state: ONLINE
status: The pool was last accessed by another system.
 action: The pool can be imported using its name or numeric identifier and
        the '-f' flag.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-EY
 config:

        zroot2                                        ONLINE
          gptid/4f4c8af2-2ec0-11ed-8ff9-e0d55ee21f22  ONLINE

I would like to import zroot, but I don't see it when I run "zpool import". If I boot the FreeBSD that's on da5 (the zroot pool), I probably won't see the pool zroot3.
 
If you are sure the pool to be imported (assume it's "zroot") is not in use by another system, and your /mnt is currently empty (/mnt can be changed to whatever empty directory you want), can you import zroot with zpool import -R /mnt -f zroot?
Possibly zpool import alone ignores pools whose mountpoints conflict with a currently imported pool.
 
Will that overwrite the old metadata and fix it for future imports?
For that machine yes. But if you want to import it back to the other machine you will need to do a forced import again.

This happens all the time on my sandbox machine. You can avoid this hassle by copying /etc/zfs/zpool.cache from the last machine the pool was imported to the next.
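A sketch of that cache-copy idea, with a hypothetical hostname; the path is the standard cache file location mentioned above:

Code:
# On the machine where the pool was last imported, copy the cache to the next machine:
scp /etc/zfs/zpool.cache otherhost:/etc/zfs/zpool.cache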
 
If you are sure the pool to be imported (assume it's "zroot") is not in use by another system, and your /mnt is currently empty (/mnt can be changed to whatever empty directory you want), can you import zroot with zpool import -R /mnt -f zroot?
Possibly zpool import alone ignores pools whose mountpoints conflict with a currently imported pool.

I can't import zroot; according to zpool import it does not exist (but it does).
 
For that machine yes. But if you want to import it back to the other machine you will need to do a forced import again.

This happens all the time on my sandbox machine. You can avoid this hassle by copying /etc/zfs/zpool.cache from the last machine the pool was imported to the next.

Mmm, ok. But what happens on the next machine if I exchange the two caches? I imagine that I can then import zroot but can't import zroot3 anymore? Is that because the cache file stored on the new machine has been overwritten by the older one?
 
If the other machine has imported other zpools, those will need to be reimported. Basically you're replacing the machine's uuid in the zpool.cache file with the other machine's uuid.

This worked for me because the different OSes were different partitions on the same machine: it had three copies of -CURRENT and FreeBSD 10 through 13 on it, amd64 and i386. It's a sandbox machine. In that case I dare not zpool upgrade, or older versions of FreeBSD could never use the pool. But I digress.

But if you are moving a disk from one machine to another and back again, it's safer to do the forced import every time. It's like a warning asking, "are you sure you want to import this pool?"
 