Partition and file system creation on an NVMe-based SSD

Hello,
I am trying to create a partition and file system, and then mount the device for read/write, on an NVMe-based SSD. I am using FreeBSD 9.2. I created a loader.conf file in /boot and added
Code:
nvme_load="YES"
and
Code:
nvd_load="YES"
lines to the loader.conf file. After a reboot I see my device listed in /dev as /dev/nvme0 and /dev/nvme0ns1.

After this, basic nvmecontrol commands work fine and provide all the details about the device.
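
To be concrete, these are the kinds of commands I mean (subcommands as documented in nvmecontrol(8); nvme0 is simply the controller name on my system), and all of them return sensible details:
Code:
nvmecontrol devlist
nvmecontrol identify nvme0
nvmecontrol identify nvme0ns1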

Next, I used the commands below to create a partition and file system, but I get the errors shown:
Code:
gpart create -s gpt /dev/nvme0ns1  
Error: Invalid argument

gpart create -s gpt /dev/nvme0
Error: Invalid argument

gpart show /dev/nvme0ns1
Error: No such geom.  

newfs /dev/nvme0ns1
Error: reserved not less than device size 0
I followed the same conventions used for ATA devices (/dev/ada0) in some of the examples on freebsd.org, but it does not work for me. Could you please help me resolve the above errors and let me know what device name I should give these commands for an NVMe-based SSD?

--Thanks
Vijay
 
Hello,

Is ftp://ftp.freebsd.org/pub/FreeBSD/releases/amd64/amd64/ISO-IMAGES/9.0/ the valid link for FreeBSD 9-STABLE? I got it from http://lists.freebsd.org/pipermail/freebsd-questions/2012-March/239742.html. If not, please let me know the valid link for FreeBSD 9-STABLE.

And yes, on FreeBSD 9.2, nvmecontrol devlist shows the devices. I connected two NVMe SSDs and both were listed when I used this command.

One more observation: I also installed FreeBSD 10.0, this time on an i386 platform, and then added the two lines
Code:
nvme_load="YES"
and
Code:
nvd_load="YES"
to the loader.conf file in the /boot directory and rebooted the system. The system keeps rebooting forever. I do not understand what is wrong here; as per the man pages, these are the steps to enable the NVMe driver from loader.conf.

Please let me know if I am doing anything wrong here. I'm not able to test anything on the NVMe SSDs using either FreeBSD 9.2 or 10.0.

--Thanks
Vijay
 
Hello,

Some more updates. I downloaded the ISO from the 9.0 link above and installed it. I created loader.conf in the /boot directory and added the two lines
Code:
nvme_load="YES"
and
Code:
nvd_load="YES"
to the loader.conf file in the /boot directory and rebooted the system. The system reboots successfully, but nvme0 and nvme0ns1 are not listed in the /dev directory at all.
Any clues?

--Thanks
Vijay
 
Yes, I did use the latest 10.0. I installed FreeBSD 10.0 on an i386 platform, then added the two lines
Code:
nvme_load="YES"
nvd_load="YES"
to the loader.conf file in the /boot directory and rebooted the system. The system keeps rebooting forever. I do not understand what is wrong here; as per the man pages, these are the steps to enable the NVMe driver from loader.conf.

Please let me know if I am doing anything wrong here.

--Thanks
Vijay
 
Sorry to revive this long dead thread, but for the next person searching for the answer:

The /dev/nvme0 and /dev/nvme0ns1 devices are only control devices; you do not use them as block devices.

The block device will be /dev/nvd0 and so on, one for each namespace you create, and those are what you should run gpart against.
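
For example, something along these lines should work, assuming the first namespace shows up as nvd0 (the partition type, alignment and newfs flags here are just typical choices, adjust to taste):
Code:
gpart create -s gpt nvd0
gpart add -t freebsd-ufs -a 1m nvd0
newfs -U /dev/nvd0p1
mount /dev/nvd0p1 /mnt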
 
Hi Allan,

nvme(4) says,

Note that in NVM Express terms, a namespace is roughly equivalent to a SCSI LUN.

So, can the nvme namespace device node /dev/nvme0ns1 be used as a block device or as a raw device node (e.g. with the --filename parameter when running fio)?
 
So, can the nvme namespace device node /dev/nvme0ns1 be used as a block device or as a raw device node (e.g. with the --filename parameter when running fio)?

As SirDice mentioned, you're replying to a 5-year-old thread. It might be better to open a new one.

The nvme(4) devices cannot be used as block devices. This is true for both the controller devices (/dev/nvmeX) and the namespace devices (/dev/nvmeXnsY); they are for configuration and management purposes only. In some ways they can be thought of as similar to the /dev/xpt* and /dev/pass* devices for SCSI, which cannot be used as block devices either.

To actually use NVMe storage there are two possibilities: nvd(4) and nda(4). The first (nvd) is the default; it attaches a GEOM disk device to each NVMe namespace, accessible via /dev/nvd*. The second (nda) is rather new; it attaches NVMe devices through the CAM subsystem, accessible via /dev/nda*.

You can switch between the two at boot time with the loader tunable hw.nvme.use_nvd. The default is 1, i.e. use nvd. Set it to 0 to use nda.
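
For example, to switch to nda, a single line in /boot/loader.conf is all that should be needed (a sketch; if you leave it out, nvd stays the default):
Code:
# /boot/loader.conf
hw.nvme.use_nvd="0"    # attach NVMe namespaces via CAM as /dev/nda*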
 
SirDice and olli@, thanks for your input, and sorry about posting in the old thread. The topic seemed closely related, so I posted here.

My basic question was whether I can use the nvme namespace device as a block device, and that's answered: no, we can't. Meanwhile, I ran the same test against both the namespace device and the nvd(4) device, and the performance numbers are a bit better with the namespace device than with nvd(4). I'm wondering how that can be the case; is this expected?

Also, thanks for pointing out the nda(4) device. I tried that experiment as well, but the numbers are very poor compared to the others. I did my tests on 11.2, but it looks like nda(4) has more changes in 12.0, so I will give 12.0 a try and see how it goes.
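
For reference, the tests were fio runs roughly along these lines (an illustrative invocation only, not my exact command line; the device name is whichever node is being tested):
Code:
fio --name=randread --filename=/dev/nvd0 --direct=1 --rw=randread --bs=4k --iodepth=32 --runtime=60 --time_based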
 
The performance numbers are a bit better with the namespace device than with nvd(4). I'm wondering how that can be the case; is this expected?
I see the same and it doesn't surprise me. It is a control device.

devices are only control devices,

What I do is benchmark with diskinfo -t and use the actual partitions. I feel it offers real data.
What is surprising is how much swapping the NVMe paddle card around to find the fastest slot matters; it makes rudimentary benchmarking worthwhile.
This is on SM server boards with plenty of PCIe slots.
I am using the nda driver with a geom mirror of PM953. I might have to try graid3 with the PM983. Those are pretty quick.
I had very good graid3 numbers before with my XG3 drives.

Code:
root@X9SRL:~ # diskinfo -t nvd1p1
nvd1p1
    512             # sectorsize
    960196055040    # mediasize in bytes (894G)
    1875382920      # mediasize in sectors
    0               # stripesize
    1048576         # stripeoffset
    116737          # Cylinders according to firmware.
    255             # Heads according to firmware.
    63              # Sectors according to firmware.
    SAMSUNG MZQLB960HAJR-000AZ    # Disk descr.
    S3VKNE0xxxxxxx    # Disk ident.
    Yes             # TRIM/UNMAP support
    0               # Rotation rate in RPM
Transfer rates:
    outside:       102400 kbytes in   0.052460 sec =  1951963 kbytes/sec
    middle:        102400 kbytes in   0.049521 sec =  2067810 kbytes/sec
    inside:        102400 kbytes in   0.065341 sec =  1567163 kbytes/sec
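
For anyone curious about the geom mirror mentioned above, a rough sketch of that kind of setup (device names are just examples; see gmirror(8) for details):
Code:
# load the mirror class now and at every boot
gmirror load
echo 'geom_mirror_load="YES"' >> /boot/loader.conf

# mirror two NVMe disks attached via nda, then put UFS on the mirror
gmirror label -v gm0 /dev/nda0 /dev/nda1
newfs -U /dev/mirror/gm0
mount /dev/mirror/gm0 /mnt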
 
Sorry to revive this long dead thread, but for the next person searching for the answer:

The /dev/nvme0 and /dev/nvme0ns1 devices are only control devices; you do not use them as block devices.

The block device will be /dev/nvd0 and so on, one for each namespace you create, and those are what you should run gpart against.
Even after 3 years, it was exactly what I was looking for. Thanks for the answer!
 