Solved: NVMe drive not detected

I just installed a Crucial P3 Plus 500GB NVMe drive in my Lenovo ThinkCentre M710q Tiny server (which Crucial's website says is supported in this machine), alongside my 480GB SATA SSD. I can see both drives in the BIOS, but FreeBSD (14.2-RELEASE) only sees the SSD. Is there some secret sauce/missing package that needs to be installed to enable this drive? I'm using the bog-standard kernel, so the nvd driver is built in, yet FreeBSD sees the SATA drive and not the NVMe.

Code:
# camcontrol devlist
<CT480BX500SSD1 M6CR056>           at scbus0 target 0 lun 0 (pass0,ada0)

dmesg sees /dev/nvme0, which I presume is the controller, but no disk. Is there some package I am missing? Some secret sauce to get the drive detected so I can add it to the pool?
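
For reference, a rough sketch of commands that can confirm whether the controller itself is enumerated (the grep patterns are just illustrative):

Code:
# The nvme0 node suggests the nvme(4) controller driver attached; these show
# the PCIe device itself and any attach messages or errors.
pciconf -lv | grep -B4 -i nvme
dmesg | grep -i nvme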
 
I'm looking at hw-probes of Lenovo ThinkCentre M710q systems and I see a lot of NVMe drives, so that's a good start. Have a look yourself; maybe there is a Crucial on that list (click on the last column called Probes, click on an ID, then click on Diskinfo). Even if the OS is reported as 'OPNsense', it's still built on top of FreeBSD, so it should be useful.

Try these:

nvmecontrol devlist
nvmecontrol identify nvme0

This should pick up the drive; for example:

Code:
# nvmecontrol identify nvme0
Controller Capabilities/Features
================================
Vendor ID: 144d
Subsystem Vendor ID: 144d
Serial Number: S4EVNF0M495440L
Model Number: Samsung SSD 970 EVO Plus 500GB
Firmware Version: 2B2QEXM7
[..]

Did you try to boot a Linux livecd to rule out a hardware problem?

BTW, in the end, consider adding your system to the database (via sysutils/hw-probe) in order to help out others.
 
Doesn't creating a partitioning scheme on it using gpart(8), like
gpart create -s GPT /dev/nda0
make /dev/nda0 appear? (There would be no reason to specify MBR, BSD or the others these days.)
nda(4) is the current default device name for using NVMe devices as disk devices.
Previously the default was nvd(4). If either /dev/nda0 or /dev/nvd0 exists in conjunction with /dev/nvme0, the card is recognized as a disk drive.
nvme(4) is the core (kind of a bus) driver, not a disk (block) device driver.
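
A quick sanity check along those lines (a minimal sketch; hw.nvme.use_nvd is the loader tunable that selects between the two disk drivers, and it only affects naming, not detection):

Code:
# List whatever NVMe-related nodes exist; on 14.x a detected disk shows up
# as nda0 next to the nvme0 controller node.
ls /dev/nvme* /dev/nda* /dev/nvd* 2>/dev/null

# Read back which disk driver (nda vs. the older nvd) is selected.
sysctl hw.nvme.use_nvd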

Honestly, I've never done the above, as I used an NVMe-to-USB adapter for creating partitioning schemes/partitions, so at the time it showed up as /dev/da0 for me. And the NVMe card itself is now running as my Root-on-ZFS drive for stable/14.
 
I'm looking at hw-probes of Lenovo ThinkCentre M710q systems and I see a lot of NVMe drives, so that's a good start. Have a look yourself; maybe there is a Crucial on that list (click on the last column called Probes, click on an ID, then click on Diskinfo). Even if the OS is reported as 'OPNsense', it's still built on top of FreeBSD, so it should be useful.

Yeah, and I went to the Crucial site, and most of the NVMe drives in their current (?) inventory are listed as working with the M710q, including the P3 Plus I am trying and even the 4th-gen T500s...

Try these:
nvmecontrol devlist
nvmecontrol identify nvme0

This should pick up the drive; for example:

No soup...Ran the commands and got:


Code:
[root@erebus ~]# nvmecontrol devlist
[root@erebus ~]# nvmecontrol identify nvme0
nvmecontrol: Identify request failed

[..]

Did you try to boot a Linux livecd to rule out a hardware problem?

Have not, but I will do so. It is detected by BIOS, however...I see both drives. And the fact that I am unable to see these drives specifically in FreeBSD is worrisome, since I just bought a brand new Ryzen chip and board, and a pair of 1TB NVMes for it.

BTW, in the end, consider adding your system to the database (via sysutils/hw-probe) in order to help out others.

How/where do I do this?

Doesn't creating a partitioning scheme on it using gpart(8), like
gpart create -s GPT /dev/nda0
make /dev/nda0 appear? (There would be no reason to specify MBR, BSD or the others these days.)
nda(4) is the current default device name for using NVMe devices as disk devices.
Previously the default was nvd(4). If either /dev/nda0 or /dev/nvd0 exists in conjunction with /dev/nvme0, the card is recognized as a disk drive.
nvme(4) is the core (kind of a bus) driver, not a disk (block) device driver.

Yeah, both give me an invalid argument:


Code:
[root@erebus ~]# gpart create -s GPT /dev/nda0
gpart: arg0 'nda0': Invalid argument
[root@erebus ~]# gpart create -s GPT /dev/nvd0
gpart: arg0 'nvd0': Invalid argument

Honestly, I've never done the above, as I used an NVMe-to-USB adapter for creating partitioning schemes/partitions, so at the time it showed up as /dev/da0 for me. And the NVMe card itself is now running as my Root-on-ZFS drive for stable/14.

Doesn't an NVMe-to-USB adapter eat up a lot of the speed benefits you get from an NVMe drive?
 
Have not, but I will do so. It is detected by BIOS, however...I see both drives. And the fact that I am unable to see these drives specifically in FreeBSD is worrisome, since I just bought a brand new Ryzen chip and board, and a pair of 1TB NVMes for it.

You can search for systems based on the drive as well; see here. I hope your setup will work in the end.

Long shot, but also check that you have the latest BIOS and drive firmware installed. Samsung has a Magician application that takes care of firmware updates; not sure if Crucial has something similar.

How/where do I do this?

Should be as straightforward as:

Code:
pkg install hw-probe && hw-probe -all -upload

This tool also reports known compatibility problems, if any are detected.
 
You can search for systems based on the drive as well; see here. I hope your setup will work in the end.

Long shot, but also check that you have the latest drive firmware installed. Samsung has a Magician application that takes care of firmware updates; not sure if Crucial has something similar.

I hate flashing firmware...Probably because I came up in an era when flashing firmware held a greater-than-zero, if not significant chance of permanently bricking the device. Then again, I was also doing this when you had to calculate horizontal and vertical scan rates for your CRT monitor, and could actually blat that out of existence if you miscalculated. :D

Should be as straightforward as:

Code:
pkg install hw-probe && hw-probe -all -upload

This tool also reports known compatibility problems, if any are detected.

I have done so; the server in question is here. The one thing I noticed that makes me curious is whether the SATA controller being in RAID mode might actually be blocking the NVMe device. Since there is only one SATA slot in the machine, I'm wondering if the RAID mode is automagically grabbing the M.2 drive...
 
I have done so; the server in question is here. The one thing I noticed that makes me curious is whether the SATA controller being in RAID mode might actually be blocking the NVMe device. Since there is only one SATA slot in the machine, I'm wondering if the RAID mode is automagically grabbing the M.2 drive...

RAID mode would not have been my first choice. Booting a Linux livecd can check for such BIOS misconfigurations as well. I like using this one to check and fix things. Just boot it up and run fdisk -l; both drives should be shown.
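
A minimal sketch of what could be run from the live environment to confirm both drives are visible:

Code:
# List all block devices with model and size; the Crucial NVMe and the
# SATA SSD should both appear.
lsblk -d -o NAME,MODEL,SIZE

# Partition-table view of every disk the kernel sees.
fdisk -l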

I'd also update the PC BIOS. Some BIOSes need NVMe to be explicitly enabled in the options in order for it to be visible to the OS.
 
Doesn't an NVMe-to-USB adapter eat up a lot of the speed benefits you get from an NVMe drive?
What I do (unless using Windoze on it is somehow mandated) when I purchase a computer is remove the drive on which Windoze is pre-installed and attach another drive with FreeBSD installed (basically the latest stable at the moment; it was OS/2 until that got EoL'ed at IBM, and then I switched to FreeBSD).

So what I've done via the USB adapter was only installing FreeBSD (base plus the bare minimum for building and configuring the actual environment on it) for the brand-new computer, using the previous daily-driver computer.
And the previous one didn't have an internal NVMe slot.

This means, slowness is not a problem here. ;)
 
And if, once the NVMe card can be seen as a disk, the installation succeeds but you cannot boot from the NVMe card, you'll need to set the lenovofix attribute with something like:
gpart set -a lenovofix nda0

This applies to at least some ThinkPads (I'm not sure whether all Lenovo PCs are affected, and possibly some other manufacturers could have the same problem).

See the gpart(8) manpage for details.
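
A minimal sketch, assuming the disk ends up as nda0 (adjust the device name to whatever your system reports):

Code:
# lenovofix rewrites the protective MBR so affected Lenovo firmware will
# boot from the GPT disk; it can be toggled on an existing scheme.
gpart set -a lenovofix nda0      # enable the workaround
gpart unset -a lenovofix nda0    # undo it if it causes trouble elsewhere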
 
RAID mode would not have been my first choice. Booting a Linux livecd can check for such BIOS misconfigurations as well. I like using this one to check and fix things. Just boot it up and run fdisk -l; both drives should be shown.
Software RAID could show up as RST (Intel Rapid Storage Technology) in the UEFI firmware config menus used to check/switch modes. It is possibly hidden deep in the menu hierarchy, and the RST menu item would have multiple sub-items (such as none, RAID0, RAID1, ...).
 
RAID mode would not have been my first choice. Booting a Linux livecd can check for such BIOS misconfigurations as well. I like using this one to check and fix things. Just boot it up and run fdisk -l; both drives should be shown.

I'd also update the PC BIOS.
Thanks. I am familiar with System Rescue CD, and I will download the latest and greatest image and burn it to a thumb drive after we drop off Christmas presents to our daughter and her family. :)
 
Software RAID could show up as RST (Intel Rapid Storage Technology) in the UEFI firmware config menus used to check/switch modes. It is possibly hidden deep in the menu hierarchy, and the RST menu item would have multiple sub-items (such as none, RAID0, RAID1, ...).
Thanks, T-Aoki. I rebooted, but I get conflicting information from the BIOS. When I go into the RST page, it shows me both the SATA and NVMe drives, under Non-RAID Physical disks.

However, on the System Summary page, SATA Drive 1 shows Hard Disk with the model number, while M.2 drive 1 and 2 show "None". Is there a setting that masks the drive from the summary while making it visible in RST?

I was also going to change the SATA configuration from Intel Optane blah-blah-blah to AHCI, but I got this big red warning saying:

Code:
                Attention!
if you change the SATA mode to AHCI you
may not boot the system due to the failure
if Intel RST with Intel Optane function.

Any ideas on what I am actually seeing? Or is it safe to change the Optane thing to AHCI, and will it show the M.2 drive?
 
Any ideas on what I am actually seeing? Or is it safe to change the Optane thing to AHCI, and will it show the M.2 drive?
If all the drives are blank (not yet in use), it is harmless.
Unless software RAID and/or Optane support via RST is needed, or you have already installed something on them, you don't need to consider RST.
 
If all the drives are blank (not yet in use), it is harmless.
Unless software RAID and/or Optane support via RST is needed, or you have already installed something on them, you don't need to consider RST.
The SSD has the OS and all the data on it. I wanted to add the NVMe as a second drive to the pool. Now, there was an M.2 memory thing in the slot; the label says "Intel Optane memory series / Model: MEMPEK1W016GAL 16GB"... Since it is sitting on my desk, I guess I'll try turning off Optane and see if that gains me any forward progress...
 
The SSD has the OS and all the data on it. I wanted to add the NVMe as a second drive to the pool. Now, there was an M.2 memory thing in the slot; the label says "Intel Optane memory series / Model: MEMPEK1W016GAL 16GB"... Since it is sitting on my desk, I guess I'll try turning off Optane and see if that gains me any forward progress...
So you can't disable RST completely.
The next thing to look into would be whether there's any way to limit which physical drives (which 2.5-inch SATA / M.2 NVMe devices) are included and which are not.
If that's impossible, you'd need some kind of PCIe M.2 NVMe adapter that RST cannot control at all, and attach the M.2 NVMe SSD to it.
 
So you can't disable RST completely.
The next thing to look into would be whether there's any way to limit which physical drives (which 2.5-inch SATA / M.2 NVMe devices) are included and which are not.
If that's impossible, you'd need some kind of PCIe M.2 NVMe adapter that RST cannot control at all, and attach the M.2 NVMe SSD to it.
I have full snapshots available of the server, so I can recover it if it comes to that. I figured on turning Optane off and seeing if the system boots, and if it doesn't boot, turning it back on... Or I can always recover the system from snapshots if the M.2 presents itself.
 
Software RAID could show up as RST (Intel Rapid Storage Technology) in the UEFI firmware config menus used to check/switch modes. It is possibly hidden deep in the menu hierarchy, and the RST menu item would have multiple sub-items (such as none, RAID0, RAID1, ...).
That fixed it. Apparently, leaving the SATA controller in RST "mode" held the M.2 slot as an RST memory slot. As soon as I changed it to AHCI and rebooted, not only did it boot back up faster than it did in RST mode, but sure enough, I now have nda0:

Code:
[root@erebus ~]# camcontrol devlist
<CT480BX500SSD1 M6CR056>           at scbus0 target 0 lun 0 (pass0,ada0)
<AHCI SGPIO Enclosure 2.00 0001>   at scbus6 target 0 lun 0 (ses0,pass1)
<CT500P3PSSD8 P9CR40D>             at scbus7 target 0 lun 1 (pass2,nda0)
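
For anyone following along, a minimal sketch of how the new disk could then be added to a pool; the pool name zroot and the stripe layout are assumptions, not necessarily what was done here:

Code:
# Hypothetical example: create a GPT scheme, add a labeled ZFS partition,
# and add it to an existing pool as a second top-level vdev (this stripes
# data across both devices; use 'zpool attach' instead for a mirror).
gpart create -s GPT nda0
gpart add -t freebsd-zfs -a 1m -l p3plus nda0
zpool add zroot gpt/p3plus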

So thanks, everyone for your help getting this going...Learn something new every day. :)
 