Other What are the minimum components required to enable NVMe?

I am working on an embedded project based on FreeBSD.
It will include an NVMe device.

I am a newbie to FreeBSD. Based on the Handbook, FreeBSD doesn't have a block layer the way Linux does.
I assumed the file system driver (for example, ext2fs) would be able to mount on the NVMe driver directly.
However, a forum post mentioned that GEOM, nvd, and the nvme driver are all needed.

Since this is an embedded system, I want to build it with the minimum components needed.
The NVMe device is used mainly for its fast random access, so RAID, encryption, compression, etc. are not needed.

I hope someone can advise on this matter.
 
Code:
SYNOPSIS
     To compile this driver into your kernel, place the following line in your
     kernel configuration file:

           device nvme

     Or, to load the driver as a module at boot, place the following line in
     loader.conf(5):

           nvme_load="YES"

     Most users will also want to enable nvd(4) to expose NVM Express
     namespaces as disk devices which can be partitioned.  Note that in NVM
     Express terms, a namespace is roughly equivalent to a SCSI LUN.

DESCRIPTION
     The nvme driver provides support for NVM Express (NVMe) controllers, such
     as:

     o   Hardware initialization

     o   Per-CPU IO queue pairs

     o   API for registering NVMe namespace consumers such as nvd(4)

     o   API for submitting NVM commands to namespaces

     o   Ioctls for controller and namespace configuration and management

     The nvme driver creates controller device nodes in the format /dev/nvmeX
     and namespace device nodes in the format /dev/nvmeXnsY.  Note that the
     NVM Express specification starts numbering namespaces at 1, not 0, and
     this driver follows that convention.

From nvme(4)
 
Nothing special is required for M.2 NVMe usage. They show up as nvd0, nvd1, etc.
I have used a Supermicro card with 2 modules bifurcated on an x8 slot, and bought lots of the eBay slot adapters.
https://www.supermicro.com/products/accessories/addon/AOC-SLG3-2M2.cfm
I found some x4 PCIe adapters that work in a 1U case for a nice array. Most are too tall or need a riser.
https://www.ebay.com/itm/192542865283

I have geom RAID'ed them together in all sorts of ways to experiment, as I have bought some 10G networking gear and want to test it out.

I went with 5 Toshiba XG3's and I also bought a pair of Samsungs. I have mirrored them, striped them and graid3'ed them.
They act no different than any disk drive on FreeBSD. Just way faster.

I do feel as though the M.2 modules are a poor man's NVMe. The enterprise 2.5" drives probably deliver far more IOPS.
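Since the nvd devices behave like any other disk, a minimal sketch of putting a filesystem on one could look like the following. This assumes a blank nvd0 and root privileges; device and partition names depend on your hardware:

```
# Hypothetical example: partition and format an NVMe namespace disk (nvd0).
gpart create -s gpt nvd0          # create a GPT partition table
gpart add -t freebsd-ufs nvd0     # add one UFS partition (becomes nvd0p1)
newfs -U /dev/nvd0p1              # create a UFS filesystem with soft updates
mount /dev/nvd0p1 /mnt            # mount it like any other disk
```

From there, gstripe(8), gmirror(8), or graid3(4) can be layered on top just as with SATA disks.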
 
Hi SirDice,

If I am planning to use a file system (for example, ext2fs), is the configuration below enough?
Does the file system expect to be mounted on an nvd disk, or can it mount directly on the nvme device?

PS: ZFS is too big for my system.

Code:
SYNOPSIS
     To compile this driver into your kernel, place the following line in your
     kernel configuration file:

           device nvme

     Or, to load the driver as a module at boot, place the following line in
     loader.conf(5):

           nvme_load="YES"

     Most users will also want to enable nvd(4) to expose NVM Express
     namespaces as disk devices which can be partitioned.  Note that in NVM
     Express terms, a namespace is roughly equivalent to a SCSI LUN.

DESCRIPTION
     The nvme driver provides support for NVM Express (NVMe) controllers, such
     as:

     o   Hardware initialization

     o   Per-CPU IO queue pairs

     o   API for registering NVMe namespace consumers such as nvd(4)

     o   API for submitting NVM commands to namespaces

     o   Ioctls for controller and namespace configuration and management

     The nvme driver creates controller device nodes in the format /dev/nvmeX
     and namespace device nodes in the format /dev/nvmeXnsY.  Note that the
     NVM Express specification starts numbering namespaces at 1, not 0, and
     this driver follows that convention.

From nvme(4)
 
If I am planning to use a file system (for example, ext2fs), is the configuration below enough?
You need nvme(4) and nvd(4), both are already in the GENERIC kernel. So you don't have to do anything.

Does the file system expect to be mounted on an nvd disk, or can it mount directly on the nvme device?
Code:
Most users will also want to enable nvd(4) to expose NVM Express
     namespaces as disk devices which can be partitioned.  Note that in NVM
     Express terms, a namespace is roughly equivalent to a SCSI LUN.
The nvd(4) devices are used like any other disk, nvme(4) is the controller. Much like ahci(4) is the controller and da(4) the disk for a typical SATA configuration.
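For a stripped-down custom kernel, the storage-related lines would be a sketch like the following. The exact option names should be checked against sys/conf/NOTES for your FreeBSD release; EXT2FS is an assumption here, only needed if you go with ext2fs rather than UFS:

```
# Minimal storage-related lines for a custom kernel config (sketch)
device  pci     # NVMe controllers attach over PCIe
device  nvme    # NVM Express controller driver
device  nvd     # expose NVMe namespaces as disk devices (nvd0, ...)
options EXT2FS  # ext2/3/4 filesystem support (if using ext2fs)
```

Alternatively, ext2fs can be left out of the kernel and loaded as a module with ext2fs_load="YES" in loader.conf(5).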
 
Thanks for being patient :)

How about GEOM, can we leave it out?
I checked the code; it appears to provide a bunch of enterprise features: RAID, encryption, etc.
Can I get confirmation that only nvme, nvd, and the file system are needed to enable NVMe as basic I/O storage?


 
The nvd(4) devices are used like any other disk, nvme(4) is the controller. Much like ahci(4) is the controller and da(4) the disk for a typical SATA configuration.

With a small correction that the "disk" driver would be ada(4) for the SATA.
 
I noticed it because I wanted to mention that I have started using nda(4) instead of nvd(4) during the 12-CURRENT time frame (not sure if it was merged back to any 11.x):
Code:
# camcontrol devlist
<INTEL SSDPEKKW128G8 004C>         at scbus1 target 0 lun 1 (pass0,nda0)

This can be activated at the moment by setting the following tunable: hw.nvme.use_nvd=0
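Assuming a release that ships nda(4), hw.nvme.use_nvd is read at boot, so it would go in /boot/loader.conf rather than being set at runtime:

```
# /boot/loader.conf -- prefer the CAM-based nda(4) frontend over nvd(4)
hw.nvme.use_nvd=0
```

After a reboot, the namespaces appear as nda0, nda1, etc. and show up in camcontrol devlist output as in the example above.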
 
Yeah, I saw that on the lists last month. I don't fully understand the difference between the NVMe namespace device and the nvd device.
What does nda do differently?
 
It's just another NVMe frontend, and it seems more logical to make it part of the CAM subsystem. For me that means it will (and does) receive more attention and updates.
 