[Solved] Unable to create new ZFS pool with NVMe SSD

Deleted member 65953 (Guest)
I initially had FreeBSD 13 on a 512 GB NVMe SSD (ZFS root and GELI encryption configured via the installer). Later, I bought two 8 TB hard disk drives. I reinstalled FreeBSD 13 but this time on the two 8 TB HDDs instead of the NVMe SSD. In the installer, I chose ZFS root on the two HDDs (2-disk mirror) with encryption. I basically ignored the existence of the NVMe SSD during the installation process. I plan to repurpose the SSD to store the contents of /usr/ports/, /usr/src/, and /usr/obj/. Now, I want to add a new ZFS pool for the NVMe SSD, but I am having difficulties:

Code:
# zpool create ssdpool /dev/nvd0
cannot create 'ssdpool': no such pool or dataset

I also tried:

Code:
# zpool create ssdpool nvd0p3.eli
invalid vdev specification
use '-f' to override the following errors:
/dev/nvd0p3.eli is part of potentially active pool 'zroot'

What am I doing wrong?

Here is some information about my computer:

Code:
# freebsd-version
13.2-RELEASE

Code:
# zpool status
  pool: zroot
 state: ONLINE
config:

    NAME            STATE     READ WRITE CKSUM
    zroot           ONLINE       0     0     0
      mirror-0      ONLINE       0     0     0
        ada0p3.eli  ONLINE       0     0     0
        ada1p3.eli  ONLINE       0     0     0

errors: No known data errors

Code:
# zfs list
NAME                 USED  AVAIL     REFER  MOUNTPOINT
zroot               1.74G  7.14T       96K  /zroot
zroot/ROOT          1003M  7.14T       96K  none
zroot/ROOT/default  1003M  7.14T     1003M  /
zroot/tmp            104K  7.14T      104K  /tmp
zroot/usr            774M  7.14T       96K  /usr
zroot/usr/home       128K  7.14T      128K  /usr/home
zroot/usr/ports       96K  7.14T       96K  /usr/ports
zroot/usr/src        773M  7.14T      773M  /usr/src
zroot/var            672K  7.14T       96K  /var
zroot/var/audit       96K  7.14T       96K  /var/audit
zroot/var/crash       96K  7.14T       96K  /var/crash
zroot/var/log        192K  7.14T      192K  /var/log
zroot/var/mail        96K  7.14T       96K  /var/mail
zroot/var/tmp         96K  7.14T       96K  /var/tmp

Code:
# gpart show
=>        40  1000215136  nvd0  GPT  (477G)
          40      532480     1  efi  (260M)
      532520        1024     2  freebsd-boot  (512K)
      533544         984        - free -  (492K)
      534528   999680000     3  freebsd-zfs  (477G)
  1000214528         648        - free -  (324K)

=>         40  15628053088  ada0  GPT  (7.3T)
           40       532480     1  efi  (260M)
       532520         1024     2  freebsd-boot  (512K)
       533544          984        - free -  (492K)
       534528  15627517952     3  freebsd-zfs  (7.3T)
  15628052480          648        - free -  (324K)

=>         40  15628053088  ada1  GPT  (7.3T)
           40       532480     1  efi  (260M)
       532520         1024     2  freebsd-boot  (512K)
       533544          984        - free -  (492K)
       534528  15627517952     3  freebsd-zfs  (7.3T)
  15628052480          648        - free -  (324K)

I'd be happy to provide additional information if required.
 
There's an existing pool on nvd0p3.eli. You would need to destroy that one first.
Where did that pool come from? Is it a leftover from my previous FreeBSD install?
 
Where did that pool come from? Was it a leftover from my previous FreeBSD install?
Probably, according to the error message it's called 'zroot', which is the default pool name of the installer.
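If you want to confirm that the stale label really lives on the old SSD partition, zdb can read ZFS labels directly off a device. A sketch (assuming the GELI layer is attached, since the old pool was created on nvd0p3.eli):

```shell
# Read the ZFS labels on the attached GELI device; a leftover pool
# will report its name ('zroot') and a pool_guid.
zdb -l /dev/nvd0p3.eli

# Compare against the GUID of the pool currently running on the HDD mirror;
# a different GUID confirms the label on the SSD is from the old install.
zpool get guid zroot
```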
 
What does zpool import tell you?
Code:
# zpool import
   pool: zroot
     id: 7438682952389206354
  state: ONLINE
 action: The pool can be imported using its name or numeric identifier.
 config:

    zroot         ONLINE
      nvd0p3.eli  ONLINE
 
Yeah, that's what I suspected. Don't do zpool destroy zroot though. Your current pool (on ada0 and ada1) is also called 'zroot', so you would end up destroying the active pool instead of the old one. Try using the ID instead; zpool destroy 7438682952389206354

Or just completely wipe the whole drive, so any trace of the old data is removed.
 
Try using the ID instead; zpool destroy 7438682952389206354
That did not work ...

Code:
# zpool destroy 7438682952389206354
cannot open '7438682952389206354': name must begin with a letter
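As the error says, zpool destroy only accepts a pool name, and only for an imported pool. One way around the name clash (a sketch; 'oldzroot' is just a placeholder name, and double-check the ID against zpool import output before running anything destructive):

```shell
# Import the stale pool by its numeric ID under a temporary name.
# -N skips mounting its datasets (avoiding mountpoint clashes with the
# running system); -f overrides the "potentially active pool" check.
zpool import -N -f 7438682952389206354 oldzroot

# Now it has a unique name and can be destroyed safely.
zpool destroy oldzroot
```

Alternatively, zpool labelclear -f /dev/nvd0p3.eli (with the GELI device attached) clears the on-disk label without importing at all, which sidesteps the name clash entirely.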
 
Why not run gpart destroy -F /dev/nvd0 first? You've got multiple partitions on it (see your gpart show output), so the GEOM layer is probably "doing things". Get rid of the partitions before you use the disk as a vdev.
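Putting it together, a sketch of the wipe-and-recreate sequence for the stated plan (dataset names are illustrative; note that zroot/usr/ports and zroot/usr/src already exist on the HDD pool, so their mountpoints would need to be changed or the datasets destroyed first to avoid mounting conflicts):

```shell
# Destroy the GPT scheme and all partitions on the SSD, including the
# freebsd-zfs partition that holds the stale 'zroot' label.
gpart destroy -F nvd0

# Create the new pool on the whole disk and carve out datasets
# for the build trees.
zpool create ssdpool nvd0
zfs create -o mountpoint=/usr/ports ssdpool/ports
zfs create -o mountpoint=/usr/src   ssdpool/src
zfs create -o mountpoint=/usr/obj   ssdpool/obj
```

This leaves the SSD unencrypted; if the old GELI setup should be kept, the pool would instead be created on a new .eli device rather than the raw disk.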
 