[Solved] zfs: cannot create 'storage': no such pool or dataset

Hi all,

I have no experience with zfs and am trying to deploy it for the first time. I have the following Handbook page open in front of me, of course: 19.2. Quick Start Guide

I have a box with four HDDs and one SSD. The system boots FreeBSD 11.2-RELEASE-p4 from the SSD, ada4:
Code:
/dev/ada0
/dev/ada0s1
/dev/ada0s2
/dev/ada0s3
/dev/ada1
/dev/ada1s1
/dev/ada1s1d
/dev/ada2
/dev/ada3
/dev/ada4
/dev/ada4p1
/dev/ada4p2
/dev/ada4p3
I am trying to create a zpool with the four HDDs:
Code:
# zpool create storage raidz /dev/ada0 /dev/ada1 /dev/ada2 /dev/ada3
cannot create 'storage': no such pool or dataset

What would be going wrong here?

Thanks in advance for your patience.
 
I see no direct reason why this wouldn't work, other than the HDDs seemingly being in use (they contain partitions). You might be able to use -f; see also the zpool(8) manual page.

Other than that: did you customize your system in any way? What happens if you run zpool import? Does that show anything?
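
For reference, the checks would look something like this (just a sketch; the device names are taken from your listing, and -f is the override described in zpool(8)):

Code:
# zpool import                                        # scan attached disks for importable pools
# gpart status                                        # see which providers carry partition tables
# zpool create -f storage raidz ada0 ada1 ada2 ada3   # force use of disks that look in-use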
 
# zpool import doesn't return anything.

As far as I know the system is pretty standard. It's an HP ProLiant N40L MicroServer, which I picked up second-hand. It had two 2 TB drives. I added two more that I scavenged from a Linux box I no longer need. I also added an SSD, and installed FreeBSD on that. The only unusual thing I see is a RAID device (hardware?), which I'm not using and shouldn't interfere with zpool, no?

Code:
# gpart show
=>         34  234441581  ada4  GPT  (112G)
           34        128     1  freebsd-boot  (64K)
          162  224395136     2  freebsd-ufs  (107G)
    224395298    8388608     3  freebsd-swap  (4.0G)
    232783906    1657709        - free -  (809M)

=>          63  3906897985  raid/r0  MBR  (1.8T)
            63        1985           - free -  (993K)
          2048      716800        1  ntfs  [active]  (350M)
        718848  3906177024        2  ntfs  (1.8T)
    3906895872        2176           - free -  (1.1M)

=>          63  3907029105  ada0  MBR  (1.8T)
            63        1985        - free -  (993K)
          2048    40960000     1  linux-data  [active]  (20G)
      40962048   102400000     2  linux-data  (49G)
     143362048  3763666944     3  linux-data  (1.8T)
    3907028992         176        - free -  (88K)

=>          63  3907029105  ada1  MBR  (1.8T)
            63  3907029105     1  freebsd  [active]  (1.8T)

=>          63  3907029105  diskid/DISK-Z1E132YK  MBR  (1.8T)
            63        1985        - free -  (993K)
          2048    40960000     1  linux-data  [active]  (20G)
      40962048   102400000     2  linux-data  (49G)
     143362048  3763666944     3  linux-data  (1.8T)
    3907028992         176        - free -  (88K)

=>           0  3907029105  ada1s1  BSD  (1.8T)
             0  3907029105       4  freebsd-ufs  (1.8T)

=>          63  3907029105  diskid/DISK-S2H7JD1Z901480  MBR  (1.8T)
            63  3907029105       1  freebsd  [active]  (1.8T)

=>           0  3907029105  ufsid/4ccac74bc29dce7e  BSD  (1.8T)
             0  3907029105       4  freebsd-ufs  (1.8T)

=>           0  3907029105  diskid/DISK-S2H7JD1Z901480s1  BSD  (1.8T)
             0  3907029105       4  freebsd-ufs  (1.8T)
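
(That raid/r0 appears to come from the GEOM RAID driver; if I'm reading graid(8) right, this should show what it is, though I haven't dug further:)

Code:
# graid status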
 
OK, if I run zpool create storage raidz ada0 ada1, this works.

These are the two old Linux drives.

ada2 and ada3 are the two drives that came with the system, and AFAICT have not been used before. Maybe I need to initialise them somehow?

Code:
# gpart create -s GPT /dev/ada2
gpart: geom 'ada2': Operation not permitted
# gpart create -s GPT /dev/ada3
gpart: geom 'ada3': Operation not permitted


:(
 
As I mentioned before: the command won't work when a drive ("storage space") appears to be in use, but you can override this behavior. When in doubt about a command, always read its manual page (so: man zpool brings up the zpool(8) manual page).

So: try using the -f flag to force use of those storage devices.
 
Thanks for the additional inputs. To the best of my knowledge ada2 and ada3 are not in use. Using -f makes no difference.
 
Found what seems like the solution in a FreeNAS forum thread: "Unable to create volume"

Code:
# sysctl kern.geom.debugflags=0x10
kern.geom.debugflags: 0 -> 16
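
As I understand it, debugflags=0x10 (the "foot-shooting" flag, as it's often called) disables GEOM's protection against writing to providers that are in use, so presumably it should be set back once the partitioning is done:

Code:
# sysctl kern.geom.debugflags=0    # restore GEOM write protection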


This works now:

Code:
# gpart create -s gpt ada2
ada2 created
# gpart create -s gpt ada3
ada3 created


Code:
# zpool create cloud raidz /dev/ada0 /dev/ada1 /dev/ada2 /dev/ada3

Now:
Code:
$ df -h
Filesystem     Size    Used   Avail  Capacity  Mounted on
/dev/ada4p2    104G     16G     80G     17%    /
devfs          1.0K    1.0K      0B    100%    /dev
cloud          5.1T    128K    5.1T      0%    /cloud
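
For anyone following along, these are the standard sanity checks (zpool list reports the raw pool size including parity, while zfs list shows the usable space that df reports):

Code:
# zpool status cloud   # all four disks should show as ONLINE in the raidz1 vdev
# zpool list cloud     # raw size, parity included
# zfs list cloud       # usable space, matching df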


So it seems to work, even after a reboot.

FWIW, I'm still seeing RAID stuff in dmesg:

Code:
GEOM_RAID: Promise: Array Promise created.
GEOM_RAID: Promise: Disk ada2 state changed from NONE to ACTIVE.
GEOM_RAID: Promise: Subdisk RAID1:0-ada2 state changed from NONE to ACTIVE.
GEOM_RAID: Promise: Disk ada3 state changed from NONE to ACTIVE.
GEOM_RAID: Promise: Subdisk RAID1:1-ada3 state changed from NONE to ACTIVE.
GEOM_RAID: Promise: Volume started.
GEOM_RAID: Promise: Volume RAID1 state changed from STARTING to OPTIMAL.
GEOM_RAID: Promise: Provider raid/r0 for volume RAID1 created.
GEOM: raid/r0: corrupt or invalid GPT detected.
GEOM: raid/r0: GPT rejected -- may not be recoverable.


Don't know where this is coming from. I hope it is safe to ignore.
 
That GEOM_RAID error is not safe to ignore: you have leftover metadata from an old Promise SoftRAID array on your disks, and it is being picked up by the graid(8) driver.

Ideally you would zero the disks completely with dd(1) and reinstall from scratch, but you can also get rid of the message by adding this line to /boot/loader.conf and rebooting:

Code:
kern.geom.raid.enable=0

This prevents the drives from being probed for RAID metadata; you won't see the error messages because the driver is never initialized. The Handbook has a chapter on how to properly remove the metadata from the drives:

https://www.freebsd.org/doc/handbook/geom-graid.html
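
Sketching the approach from that chapter (based on graid(8); raid/r0 is the provider name from your dmesg, and deleting the last volume of an array also erases its on-disk metadata, but check the man page for the exact invocation on your system):

Code:
# graid status           # confirm the array is still assembled as raid/r0
# graid delete raid/r0   # remove the volume; the Promise metadata goes with it

If you prefer the scorched-earth route, dd(1) over the whole disk also works, e.g. dd if=/dev/zero of=/dev/ada2 bs=1m, but that of course destroys everything on the disk.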
 
Thanks for this. I hadn't started copying data to the box yet, so I was able to get rid of the old raid stuff properly.
 