ZFS Funny diskids after importing ZOL pool

Hi all,

I've been using ZFS on Linux, migrated to FreeBSD 10.1, and re-imported the pool there. For some disks, however, it shows funny disk IDs.

Code:
  pool: tank
state: ONLINE
  scan: none requested
config:

        NAME                                              STATE     READ WRITE CKSUM
        tank                                              ONLINE       0     0     0
          raidz1-0                                        ONLINE       0     0     0
            diskid/DISK-%20%20%20%20%20WD-WMC4E0026439p1  ONLINE       0     0     0
            diskid/DISK-%20%20%20%20%20WD-WMC4E0029581p1  ONLINE       0     0     0
            diskid/DISK-%20%20%20%20%20WD-WMC4E0026515p1  ONLINE       0     0     0
          raidz1-1                                        ONLINE       0     0     0
            diskid/DISK-%20%20%20%20%20WD-WMC4E0026618p1  ONLINE       0     0     0
            diskid/DISK-WD-WMC4E0026632p1                 ONLINE       0     0     0
            diskid/DISK-WD-WCC4E0602902p1                 ONLINE       0     0     0
          raidz1-2                                        ONLINE       0     0     0
            diskid/DISK-WD-WCC4E0606176p1                 ONLINE       0     0     0
            diskid/DISK-WD-WMC4E0078455p1                 ONLINE       0     0     0
            diskid/DISK-WD-WCC4E1419393p1                 ONLINE       0     0     0
        spares
          diskid/DISK-WD-WMC4E0029604p1                   AVAIL

errors: No known data errors

This is purely cosmetic, but is there any way to get rid of the %20%20 stuff?
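(As far as I can tell, the %20 runs are just URL-style escaping of literal spaces: the drive apparently reports its serial number padded with leading spaces, and GEOM escapes those in the diskid label. Decoding one of the names above makes this visible:)

```shell
# %20 is an escaped space: the drive reports its serial number with
# leading spaces, and GEOM escapes them in the diskid/ label name.
echo 'DISK-%20%20%20%20%20WD-WMC4E0026439' | sed 's/%20/ /g'
```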

Thanks!
 
Yeah, it's weird that only certain disks have this. It's not a big problem, though; it's only annoying when you print iostats and the output looks like this:

Code:
                                           capacity     operations    bandwidth
pool                                    alloc   free   read  write   read  write
--------------------------------------  -----  -----  -----  -----  -----  -----
tank                                    14.9T  17.8T     33     21  3.98M  1.58M
  raidz1                                4.96T  5.92T     11      7  1.33M   542K
    diskid/DISK-%20%20%20%20%20WD-WMC4E0026439p1      -      -      5      3   449K   274K
    diskid/DISK-%20%20%20%20%20WD-WMC4E0029581p1      -      -      5      3   450K   274K
    diskid/DISK-%20%20%20%20%20WD-WMC4E0026515p1      -      -      5      3   461K   274K
  raidz1                                4.95T  5.93T     11      7  1.33M   540K
    diskid/DISK-%20%20%20%20%20WD-WMC4E0026618p1      -      -      5      3   453K   273K
    diskid/DISK-WD-WMC4E0026632p1           -      -      5      4   459K   273K
    diskid/DISK-WD-WCC4E0602902p1           -      -      5      4   445K   273K
  raidz1                                4.95T  5.93T     11      7  1.32M   539K
    diskid/DISK-WD-WCC4E0606176p1           -      -      5      4   454K   272K
    diskid/DISK-WD-WMC4E0078455p1           -      -      5      4   451K   272K
    diskid/DISK-WD-WCC4E1419393p1           -      -      5      4   451K   272K
--------------------------------------  -----  -----  -----  -----  -----  -----
zroot                                   4.86G   231G      0     43  5.72K   538K
  mirror                                4.86G   231G      0     43  5.72K   538K
    gpt/zfs0                                -      -      0     15  2.81K   541K
    gpt/zfs1                                -      -      0     15  2.96K   541K
--------------------------------------  -----  -----  -----  -----  -----  -----
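One thing that might be worth trying (a sketch only, assuming this is the GEOM disk_ident label class at work; untested on this pool): disable disk_ident labels and re-import, so ZFS falls back to plain device or gpt names.

```shell
# Disable GEOM's disk_ident (diskid/...) labels; to make this
# permanent, add the line below to /boot/loader.conf instead:
#   kern.geom.label.disk_ident.enable="0"
sysctl kern.geom.label.disk_ident.enable=0

# Re-import so the vdevs are picked up under their plain names.
zpool export tank
zpool import tank
```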
 
I wanted the output from zpool status and zpool iostat to look pretty, with human-intelligible disk names. The way I accomplished this: I used gpart to give a label to each partition (the labels are now visible in gpart show -l), then exported the pools and re-imported them using the nice partition names. My naming convention is that in mirror pairs I identify each disk by its manufacturer (my mirror pair consists of one Seagate and one Hitachi, with the backup on a WD):
Code:
# gpart show -l ada2
=>        34  5860533101  ada2  GPT  (2.7T)
          34           6        - free -  (3.0k)
          40  1953525096     1  data_hds_home_1  (931G)
  1953525136  3907007992     2  data_hds_saveroots  (1.8T)
  5860533128           7        - free -  (3.5k)

# zpool iostat -v
                             capacity     operations    bandwidth
pool                      alloc   free   read  write   read  write
------------------------  -----  -----  -----  -----  -----  -----
backup                     363G  2.37T     38      0  4.62M  6.01K
  gpt/wd_e_backup          363G  2.37T     38      0  4.62M  6.01K
------------------------  -----  -----  -----  -----  -----  -----
home                       513G   359G     61      1  6.40M  4.45K
  mirror                   513G   359G     61      1  6.40M  4.45K
    gpt/data_st_home_2        -      -     53      0  6.51M  4.51K
    gpt/data_hds_home_1       -      -     54      0  6.51M  4.51K
------------------------  -----  -----  -----  -----  -----  -----
saveroots                 1.74T  78.7G    185      3  22.4M  27.2K
  gpt/data_hds_saveroots  1.74T  78.7G    185      3  22.4M  27.2K
------------------------  -----  -----  -----  -----  -----  -----
(There is also an SSD from which I boot, but it doesn't use ZFS yet; its partitions are named boot_ssd_root, boot_ssd_var, boot_ssd_usr and so on.)

This was very easy: for each pool, say zpool export backup, followed by zpool import -d /dev/gpt backup, and so on. Warning: for the home pool, this had to be done in single-user mode, since unmounting /home is rude.
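Put together, the whole procedure sketches out roughly as follows (the partition indices and label names are examples matching the layout shown above):

```shell
# Label the partitions (-i selects the index, -l sets the GPT label);
# the labels then appear under /dev/gpt/ and in `gpart show -l`.
gpart modify -i 1 -l data_hds_home_1 ada2
gpart modify -i 2 -l data_hds_saveroots ada2

# Re-import each pool, telling zpool to search /dev/gpt for vdevs
# so the pool remembers the label names.
zpool export backup
zpool import -d /dev/gpt backup
```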
 