Big SSD advice for FreeBSD in 2024

I bought a handful of Silicon Power 2TB SSDs a while back (got rid of them after a couple of rounds of SP replacing them under warranty) and they were slow as molasses in my MacBook Pro, my Lenovo T430, my Lenovo Y500, externally connected via USB 3, etc. So I kind of gave up on big SSDs and bought a handful of Samsung 870 EVO 1TB drives. They worked swell, but a couple failed - ick!!! Anyway, I'm wanting to reconsider my next purchase and was wondering what y'all's experience with SSDs has been like. Are they all they're cracked up to be, in your experience? Which ones >1TB have you had good success with?
 
FWIW, I only use SSDs for system volumes, not for data.
I put Samsung models in my own and my clients' machines.
I skip the QVO or whatever they call the cheap-seat drives.

An SSD has a finite number of writes before a given cell fails.
This is a very high number, but with a chatty operating system such as Windows, the number of writes quickly mounts up.
For this reason, I move the paging/swap files to a physical disk.
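On FreeBSD that just means pointing the swap line in /etc/fstab at the spinning disk instead of the SSD - roughly something like the line below, assuming the spare HDD shows up as ada1 with a freebsd-swap partition at ada1p1 (the device names are an assumption, adjust to your machine):

# swap on the spinning disk (ada1), system stays on the SSD
/dev/ada1p1   none   swap   sw   0   0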
 
I've used multiple 250GB/512GB/1TB/2TB/4TB/8TB Samsung QVO and EVO drives - both mSATA and 2.5" SATA, 840/850/860/870 models - and they all worked well for me; not a single one has failed.
 
I’m wondering if I’m cursed. I’ll have to think some more. On the sata vs adding an nvme card - whatcha think?
 
I’m wondering if I’m cursed. I’ll have to think some more. On the sata vs adding an nvme card - whatcha think?
NVMe - not because of the SSD itself but because of the bus speed between it and the rest of the system.

I have a really old (2007) desktop and the performance improves dramatically when I boot from a SATA SSD (because the BIOS/EFI doesn't recognise NVMe PCIe cards) and then mount and use the NVMe disk over PCIe. That bypasses all of the SATA 2 crap (or at least that's what I tell myself, and it makes me feel better).
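If anyone wants to replicate that, the FreeBSD side is just an ordinary fstab mount once the loader lives on the SATA SSD - a rough sketch, assuming the NVMe drive appears as nvd0 (nda0 on newer releases) with a UFS partition at nvd0p1, mounted at /data (all of those names are assumptions):

# boot and root stay on the SATA SSD; the NVMe disk is mounted for data over PCIe
/dev/nvd0p1   /data   ufs   rw   2   2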
 
I was actually wondering this myself. I have a few machines, one of them being a MacBook with soldered NAND chips - it was very expensive and I want it to last. Should I be preserving it by using a different machine for the humdrum stuff like internet browsing and YouTube, and saving it for the bigger, faster tasks that I bought it for, or does it ultimately not matter that much? I'm hoping to get at least five years out of it; any less than that and I'll be bitterly disappointed.
 
Should I be preserving it by using a different machine for the humdrum stuff like internet browsing and YouTube, and saving it for the bigger, faster tasks that I bought it for, or does it ultimately not matter that much? I'm hoping to get at least five years out of it; any less than that and I'll be bitterly disappointed.
The answer to this is almost certainly 'no'; these things are made to be used, so why would you avoid using it just to 'save' it?

This kind of thing is usually the product of (understandable, given the cost) FUD... and the best way to address it is to substitute the FUD with real data. Run something like sudo smartctl -a /dev/disk0 in a terminal on your Mac and look for the 'Data Units Written' figure. From this, you can make a real judgement of how 'used' the NVMe is.

As a very rough rule of thumb, these drives are good for a *minimum* of ~600TB written per TB of storage, based merely on the warranties most TLC consumer drives come with. In practice, they'll usually go well beyond this figure.
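If you want to turn the raw figure into terabytes yourself: 'Data Units Written' is counted in units of 512,000 bytes (1000 × 512-byte sectors), and smartctl normally prints the converted value in brackets next to it anyway. With a made-up reading of 12,000,000 units on a 1TB drive, the back-of-the-envelope maths is:

12,000,000 units × 512,000 bytes ≈ 6.1 TB written
6.1 TB / ~600 TBW ≈ 1% of the rated endurance used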

FWIW, my M1 Pro's NVMe drive is made by SanDisk.

hth
 
I’m wondering if I’m cursed. I’ll have to think some more. On the SATA vs adding an NVMe card - whatcha think?

Both work (SATA/NVMe).

NVMe will be somewhat faster - both higher transfer rate and lower latency - but if you attach a Samsung QVO-type drive there, the difference will be small.

IMHO NVMe makes sense if you attach REALLY fast SSDs there.
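For rough context (back-of-the-envelope bus numbers, not measurements of any particular drive):

SATA III: 6 Gb/s with 8b/10b encoding ≈ 600 MB/s ceiling, ~550 MB/s in practice
NVMe over PCIe 3.0 x4: 8 GT/s × 4 lanes with 128b/130b encoding ≈ 3.9 GB/s

That gap only matters if the NAND behind the interface can actually keep up.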
 
The answer to this is almost certainly 'no'; these things are made to be used, so why would you avoid using it just to 'save' it?
I understand your viewpoint, but I bought it for a particular purpose (development...) and I have other machines that I can browse the internet and do the generic brainless stuff on just as well.

A bit like using a Ferrari to go to the shops and back: it puts wear and tear on the Ferrari, when a Ford Focus can do the same job just as effectively and its parts and repairs are substantially cheaper.

At the risk of sounding like a fanboy by comparing Apple and Ferrari, I'll clarify that I can't stand Ferraris. I don't particularly like the MacBook either, but Chromium/VSCode is such a dense tumour that it doesn't run all that well on my fairly ancient but much more likeable (and cost-effective) hardware. I tried switching to emacs but the key bindings made me depressed.
 
MLC technology has a finite number of write cycles.
SLC does not, but is far more pricey.

If I were going to go NVMe, I would make certain there is a stream of cooling air directed at it.
These seem to run hot, and heat is the enemy of all electronics.

The very few that have bricked on me (2, IIRC) died out of the blue - premium Samsung SSDs, sudden death.
In my computer business, a customer call-back blows any income I made from the installation, so I use premium parts.
 
My last dead SSD also was a Samsung.

There's also this issue with 870s:
 
My last dead SSD also was a Samsung.

There's also this issue with 870s:
...and 980Pros: https://www.tomshardware.com/news/samsung-980-pro-ssd-failures-firmware-update

...and 990Pros: https://arstechnica.com/gadgets/202...-for-rapid-failure-issue-in-new-990-pro-ssds/

Samsung's firmware QC processes seem to leave something to be desired of late...:rolleyes:
 
MLC technology has a finite number of write cycles.
SLC does not, but is far more pricey.
SLC also has finite write cycles, typically 10-100 thousand. Really good drives (with overprovisioning by a factor of 3 and carefully tuned internal garbage collection) can go higher. MLC (meaning 2 bits per cell) is typically several thousand (educated guess is 3K to 10K). With TLC (3 bits per cell) and QLC (4 bits per cell) it goes down further, although for enterprise-grade devices that is then ameliorated by smarter FTLs and workload optimization. For consumer-grade TLC/QLC devices, my educated guess would be that if you reach 1000 write/erase cycles, you're lucky.

I've seen workloads that can reliably kill the best available SLC enterprise drives within 3-4 years, and I've seen workloads in which QLC drives are barely exercised after 7 years (when they typically get retired due to economic lifetime). In practice, this is exceedingly workload dependent. And please remember: It's not the individual writes that do you in, it's the erase cycles. If your writes are highly fragmented, you might have "write amplification" that further reduces lifetime.
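To put rough numbers on that - very much a sketch, with assumed cycle counts and write amplification rather than datasheet values - the usual approximation is endurance ≈ capacity × P/E cycles ÷ write amplification:

1TB consumer TLC, ~1000 cycles, write amplification of 2: ≈ 500 TB written
1TB enterprise SLC, ~30,000 cycles, write amplification near 1: ≈ 30 PB written

Which is why the same drive can look either indestructible or doomed, depending entirely on the workload.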
 
I'm not an SSD engineer, so I can neither confirm nor deny the above.
My understanding is that MLC has the finite-write issue described above.

To my understanding, there is no difference between a Write and an Erase.
The cell is cleared of data either way.

In the Microsoft arena, a regular erase is only a bit flip in the file table.
Secure erase does indeed write to each address, and multiple times.

The D.O.D. secure erase is quite vigorous in the number of writes it does.
Before that was approved for mainframes, we had to destroy $30,000 disk assemblies with a sledgehammer when they were leaving a secure military site.
That got to be rather expensive.

The intent of the above is to avoid SSD wear from chatty operating systems by moving the TEMP directories to a physical disk.
 
My last dead SSD also was a Samsung.

There's also this issue with 870s:
I am very glad I am now retired.
 
To my understanding, there is no difference between a Write and an Erase.
To explain the difference, I need to explain a little bit about how SSDs (or flash in general) work. The two relevant sizes here are the interface block size (the smallest unit of data that can be read or written by the user on the interface, usually 512 or 4096 bytes), and the internal erase block size (often dozens or hundreds of kByte, or low MByte). To write one interface block of data, the SSD internally looks for an erase block that has free space in it, and then writes the new data there. If that data is overwritten, it does the same thing, typically writing somewhere else. This means that on an SSD there are typically multiple copies of each interface block stored, but only one is the most recent and actually readable.

Obviously, this is space inefficient, so once in a while the drive's internal software (called the FTL = Flash Translation Layer) performs GC = Garbage Collection: it finds erase blocks that have very little current data, copies the data still needed to other places, and then erases the whole block.

The underlying reason for this complexity is that in flash, individual interface blocks can be written only once after an erase. To rewrite them multiple times, the erase block has to be completely wiped clean first. Typically, the operation that limits endurance is the erase, not so much the write, and even less the read.

Now you can see that the number of erase operations needed per write depend crucially on the workload. If the workload first writes a lot of data to different block addresses (never overwriting anything), afterwards erases everything at once (with a trim operation), and finally starts the cycle again, then the total amount of erase is equal to the total amount of write. On the other extreme, if the workload first completely fills the SSD with data, then overwrites only 10% of it in completely scattered places, then every erase block will contain mostly valid data; to perform GC, on average you need to do 10x as many erase operations.
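To make that 10x concrete with made-up numbers: if an erase block holds 100 interface blocks and 90 of them still contain valid data, garbage-collecting it means copying those 90 blocks elsewhere just to free room for 10 new writes - roughly 10 blocks written internally for every block the host actually wrote, i.e. a write amplification of about 10.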

The D.O.D. secure erase is quite vigorous in the number of writes it does.
Before that was approved for mainframes, we had to destroy $30,000 disk assemblies with a sledgehammer when they were leaving a secure military site.
That got to be rather expensive.
Today, in security- and privacy-conscious places, there are two approaches to securely erasing disk drives (both SSD and spinning rust). The first one is to use encryption; the standard example is called SED = Self-Encrypting Drives. Here each disk drive has an encryption key, which is used whenever data is written. To erase the disk, just change its encryption key, and all the data on it becomes gibberish. If you do that, then the end user (say the NSA or Amazon) has to trust the disk manufacturer (Seagate or WD) to implement encryption correctly.

For that reason, actually secure facilities (for example the big cloud providers) physically destroy disks (and most other computer hardware) by running them through shredders. I know that for SSDs there is a specification of the maximum size of piece that can come out of a shredder; it is several mm. This is the high-tech version of the sledgehammer of the old days.

Anecdote: At Los Alamos in the old days, the marines were trained in how to shoot disk drives in case of an attack; they even had wire holders attached to the (washing-machine-sized) disk drives that one could put a 1911 pistol into, so it aimed at the correct place.

The intent of the above is to avoid SSD wear from chatty operating systems by moving the TEMP directories to a physical disk.
Absolutely. Although for amateurs, this typically matters little: the total amount of data going to /tmp is very small, and doesn't justify adding a spinning disk. On the other extreme, there exists temporary data (like database transaction logs) that need to be stored with very low latency, and therefore go onto SSDs and other flash storage, even if that damages the SSDs; that's part of the cost of doing business. And today, spinning disks also have write endurance limitations. So the whole idea "disks can be overwritten all the time, SSDs should not be overwritten at all" is an over-simplification.
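On FreeBSD, if you do want temporary files off the SSD without adding a spinning disk, a tmpfs mount keeps /tmp in RAM instead - a minimal sketch for /etc/fstab (the 2g size cap is an arbitrary example, pick what fits your RAM):

tmpfs   /tmp   tmpfs   rw,mode=1777,size=2g   0   0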

For most small business and amateur users, none of this makes any difference.
 
ralphbsz interesting, thanks. So typically if I'm repurposing a disk I already have, I might dd /dev/zero onto it before the new installation. Is that substantially damaging? I'm just an amateur user so the biggest risk is probably my can-ful of Coca-Cola going through the keyboard
 
ralphbsz interesting, thanks. So typically if I'm repurposing a disk I already have, I might dd /dev/zero onto it before the new installation. Is that substantially damaging? I'm just an amateur user so the biggest risk is probably my can-ful of Coca-Cola going through the keyboard
You are going to overwrite (=erase) the capacity of the disk once. Given that modern consumer disks have a write endurance of <1000 cycles, you just destroyed 1/1000th of the life expectancy of your device. Is that relevant? If I had told you that you destroyed 90% of it, it would be relevant.

And since you are writing the whole disk sequentially at once, there will be no write amplification; the FTL can clear whole blocks once.

But honestly, why bother? I would just overwrite the first and last few MB (the partition tables and their backups), done.
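On FreeBSD, that partition-table-only wipe is roughly the following (assuming the target disk is ada1 - triple-check the device name before running anything destructive):

gpart destroy -F ada1
dd if=/dev/zero of=/dev/ada1 bs=1m count=4

gpart destroy -F removes both the primary and the backup table; the dd is just belt and braces for the first few MB.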
 
You are going to overwrite (=erase) the capacity of the disk once. Given that modern consumer disks have a write endurance of <1000 cycles, you just destroyed 1/1000th of the life expectancy of your device. Is that relevant? If I had told you that you destroyed 90% of it, it would be relevant.

And since you are writing the whole disk sequentially at once, there will be no write amplification; the FTL can clear whole blocks once.

But honestly, why bother? I would just overwrite the first and last few MB (the partition tables and their backups), done.
Definitely not relevant for individual NVMe sticks, but the relevance increases slightly (although not much) when the SSD is soldered onto the motherboard of a $5000 laptop and the laptop won't allow booting from other devices for security reasons, so if the SSD dies the machine is effectively dead (unless you pay Apple to fix it). Knowing that might change my habits - for example, not using it for things like playing (read: downloading 100GB+) video games, and using a dramatically cheaper (and easier to replace) console instead.

You're right though. My thoughts are skimming the realms of hypothesis and paranoia.
 