I've been using a Samsung SM951 128GB NVMe in my desktop machine since I bought it last December, and a Samsung 850 Evo 500GB in my laptop (which only has a SATA interface on its M.2 slot) for about 3 months now.
Both systems run PC-BSD/TrueOS; the laptop boots directly off the M.2 drive, while on the desktop I switched (with the migration from PC-BSD to TrueOS) to using the NVMe as L2ARC and ZIL for the ZFS pool. I chose this setup so I could get rid of the noisy Seagate drives and use 2 really quiet WD Green HDDs for the local pool. Stuff that should load fast on the first try (VM images, games etc.) resides on FC targets on my storage system, so using the NVMe as cache was the better choice for me on that machine. As the machine is mostly just suspended, the cache is practically full all the time and hit rates in daily business are pretty good, so most stuff is served from the SM951. As soon as the persistent L2ARC feature hits CURRENT and is merged into TrueOS this should improve even more, as the cache can then optimize over really long timeframes without being wiped on reboots.
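For reference, the L2ARC/ZIL split I describe looks roughly like this. This is just a sketch - the pool name "tank", the device name nvd0 and the partition sizes are all placeholders for whatever your system actually has:

```shell
# Assumption: the SM951 shows up as nvd0 and the local pool is called "tank".
# Carve the NVMe into a small log (ZIL) slice and give the rest to L2ARC:
gpart create -s gpt nvd0
gpart add -t freebsd-zfs -s 8G -l zil nvd0
gpart add -t freebsd-zfs -l l2arc nvd0
# Attach both partitions to the existing pool:
zpool add tank log gpt/zil
zpool add tank cache gpt/l2arc
```

Note that the log device holds data ZFS can't lose, while the cache device is disposable - losing the L2ARC on reboot is exactly the behavior the persistent L2ARC feature is meant to fix.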
I also still have a (mostly unused) FreeBSD 10.3 installation on the desktop, but as it is a Skylake system, I've gone with TrueOS to get "everything" working OOTB. I didn't have any issues installing to or booting from these drives, regardless of whether it was FreeBSD, PC-BSD or TrueOS.
The speed improvement with the SM951 is immense, even compared to a fast SATA SSD. Boot times are drastically shortened, as are program startup times if everything resides on the NVMe. If you have to do heavy file lifting on a daily basis, there's nothing that gives you more 'bang for the buck' than an NVMe drive - regardless of size.
The drives appear just like any other block device during setup - no black magic or waving of dead chickens necessary. The only caveat I encountered, on the laptop: BIOS/legacy mode has to be fully disabled, switching to UEFI-only mode, to get the system to boot from the M.2 drive and actually load the EFI loader. In BIOS mode the system sometimes fails to recognize the drive as a boot device, and even if it does, it won't really boot from it. I suspect that, as the screen switches from native resolution to blurry mode just after POST, the system falls back to BIOS mode but still tries to run the selected EFI loader.
In UEFI mode it takes ~10-15 sec to load and hand over to the bootloader. I already encountered the same thing on my desktop machine, which takes up to 30 sec to show the POST screen... Seems to be the price for stuffing those heavily bloated UEFI/BIOS hybrids onto slow EEPROMs. So the OS boots in way under 10 seconds (~4-5 sec to the login screen on my desktop), but the BIOS/UEFI puts you back at overall boot times like 10 years ago...
So, long story short: FreeBSD shouldn't give you any headaches with NVMe, but be prepared for broken/crappy/weirdly behaving BIOS/EFI implementations that need some special treatment and might be really slow to load in UEFI mode.
IIRC these modules spread the I/O over all flash chips - so bigger NVMe drives with more chips get higher throughput and IOPS.
The M.2 modules with ratings around 500MB/s use a SATA controller, taking away all the benefits of NVMe while keeping all the drawbacks of SATA: slower speeds, a single queue, and pretending to be a dumb spinny disk from the 80s... So you only want these modules if your system doesn't support NVMe on its M.2 slot, like many older or cheaper laptops. The only benefit is the smaller form factor.
The M.2 SATA disk in my laptop shows up as a normal adaX device, the NVMe disk on the desktop appears as /dev/nvdX and /dev/nvmeX.
nvdX is the actual GEOM/block device - this is what you use for partitioning or creating a ZFS vdev. The nvmeX and nvmeXnsY nodes are control devices (controller and namespace, respectively), used by nvmecontrol.
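A quick way to see that split in practice - nvme0 and nvd0 here are just the names from my desktop; yours may differ:

```shell
nvmecontrol devlist            # lists controllers (nvme0) and their namespaces
nvmecontrol identify nvme0     # controller details, queried via the control device
gpart show nvd0                # partitioning happens on the block device
```

If you try to run gpart or zpool against /dev/nvme0 instead of /dev/nvd0, it simply won't work - the control nodes don't speak GEOM.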
Check the manpages for nvd(4), nvme(4) and nvmecontrol(8). They explain the specialties of NVMe and the chosen defaults, and also point to tunables and configuration knobs you might want/need to change for special workloads (e.g. the number of I/O queues).
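As an example, the I/O queue behavior is controlled via loader tunables. This is a config fragment, not a recommendation - check nvme(4) on your FreeBSD version, as the exact tunable names and defaults may differ:

```shell
# /boot/loader.conf -- example nvme(4) tunables (verify against your release!)
hw.nvme.per_cpu_io_queues=0    # use a single shared I/O queue pair instead of one per CPU
hw.nvme.force_intx=1           # force legacy INTx interrupts (mostly a debugging aid)
```

These take effect at boot, since the driver allocates its queues when the controller is attached.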
I bought this drive mainly because last December NVMe drives were _much_ more expensive and very few models were actually available from stock - and because my old desktop had died, I needed the replacement quickly. IIRC the 128GB module was a bit over 150 EUR back then (the 256GB SM961 is at 140 EUR today), I was already way over 2000 EUR for the whole new system, and I had already had to replace my storage server 2 months before. As an upgrade today I'd go for a 512GB module and use it as the primary/only disk in a desktop system.
The Intel 600p has relatively low specs - I reckon they are aimed primarily at the average home user and cheap OEM systems.
As you want to use the drive for performance tests, I'd go for the model that gives the best R/W or IOPS performance (depending on what you will benchmark) for your available budget.
They are saying 512GB for $329. I am going to wait until I get my PCIe adapter from China.
The PM961 module I talked myself into had a list price of $159 - so a $50 premium for the OEM part. Not too wise.
I see the write speed for the PM961 listed as 700MB/s for the 128GB module and 1400MB/s for the 256GB module - a great deal of difference. I hope that does not carry over to the 960 PRO as well, since 512GB seems to be its smallest module size. Many marketing materials seem to quote only the top model's speeds, which is bad for shoppers.