The Internet nowadays is full of millions of texts about the life-span of SSDs, yet virtually none of them gives any useful information, and most are outright crap. It is the kind of stuff a priest could use for a sermon: much talk, no answers.
And the manufacturers are no better. I'm not interested in TBW values when nobody says which drive size they apply to. I'm even less interested in throughput speeds, as they don't vary much and don't have much impact.
I'm only interested in one single number: how many writes? Strangely, nobody gives that number. I would like to know exactly why I might pay X times the price for an enterprise-class piece, and exactly what I would get as added value.
I take it for granted that a wear-levelling algorithm ensures that all cells are written about equally often. That is no magic, just a deterministic algorithm. So, given the quality of the chips and the overhead of that algorithm, there must result one single number N: how often can the full capacity be written to the drive? But what I get instead of that number is marketing babble of the worst kind. I don't want to know how many GB an "average user" will write per day, as the statistics from my machine already tell me that.
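To put that into a formula (a minimal sketch; the cycle count and overhead used here are placeholder values, not vendor data):
Code:
# N = how many times the full capacity can be written before wear-out.
# endurance_cycles: erase cycles a cell survives (chip quality)
# write_amplification: flash writes per host write (algorithm overhead)
def n_full_writes(endurance_cycles, write_amplification):
    return endurance_cycles / write_amplification

print(n_full_writes(1000, 1.25))  # placeholder inputs -> 800.0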
Basically, I don't want manufacturers telling me lots of assumptions about my behaviour instead of facts about their product's behaviour!
Some five years ago I bought my first small SSD, just out of curiosity and for testing. A few weeks later, without having done much with it, the piece was dead in the water. Nicely, I got it replaced with double the capacity (and that replacement still works today). I put it in the desktop as a disk cache, and didn't notice any improvement from that - it might be that a desktop does not do much repeated reading: program startup is an initial read, and then most of the important things stay in memory.
So, later I put it into my old server machine - and that made an improvement. It is not that the SSD reads faster (that machine does not even reach the throughput of a spinning drive). It is the absence of seek times that kicks ass: even modern spinning drives have a track-to-track seek time >1 ms, and at a few billion instructions per second that means at least 2 to 20 million CPU instructions are wasted waiting at every track change. (Unless you do bitcoin mining in parallel, but then your tasks won't finish any faster either.)
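Just as a back-of-the-envelope check (the instruction rates are assumed round figures, nothing measured):
Code:
# Instructions forgone during one >1 ms track-to-track seek,
# at assumed rates of 2 and 20 billion instructions per second.
seek_time_s = 1e-3
for rate_ips in (2e9, 20e9):
    print(f"{seek_time_s * rate_ips:,.0f} instructions")
# -> 2,000,000 instructions
# -> 20,000,000 instructions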
So my strategy became: leave the big media files that are read sequentially on the spinning drives, and put the small and mostly fragmented data on SSD: OS installations, mailboxes, databases, web caches, ... Unfortunately, that is also the data that is written most often, so the write counts on my drives get much higher than the read counts, and therefore wear is a concern.
Now let's look at the details: here are two drives, apparently the same brand and series (and in fact cheap consumer pieces):
Drive 1:
Code:
ada3: <KINGSTON SA400S37120G SBFK71E0> ACS-4 ATA SATA 3.x device
Model Family: Phison Driven SSDs
User Capacity: 120,034,123,776 bytes [120 GB]
Drive 2:
Code:
ada0: <KINGSTON SA400S37240G S1Z40102> ACS-3 ATA SATA 3.x device
Model Family: Phison Driven SSDs
User Capacity: 240,057,409,536 bytes [240 GB]
Drive 1 was bought three years ago, drive 2 this year.
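For reference, the identification lines and SMART listings here come from the kernel probe and from smartctl (sysutils/smartmontools); a command along these lines prints everything quoted below:
Code:
smartctl -x /dev/ada3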
Let's do the math:
Drive 1:
Code:
9 Power_On_Hours -O--C- 100 100 000 - 15685
12 Power_Cycle_Count -O--C- 100 100 000 - 175
231 SSD_Life_Left PO--C- 100 100 000 - 76
233 Flash_Writes_GiB PO--C- 100 100 000 - 26217
241 Lifetime_Writes_GiB -O--C- 100 100 000 - 22599
242 Lifetime_Reads_GiB -O--C- 100 100 000 - 5905
244 Average_Erase_Count ------ 100 100 000 - 238
245 Max_Erase_Count ------ 100 100 000 - 267
246 Total_Erase_Count ------ 100 100 000 - 1394196
0x01 0x018 6 47394438211 --- Logical Sectors Written
0x01 0x028 6 12385445892 --- Logical Sectors Read
Attribute 231 (SSD_Life_Left) is just attribute 244 (Average_Erase_Count), transformed per 100 - (X / 10). So the average erase count targets 1000, and then the SSD life left will reach 0.
We can calculate how often the full capacity has been written so far: 22599 GiB (attribute 241) / 120 GB = 188. (Or, counted in flash writes: 26217 GiB (attribute 233) / 120 GB = 218.) I ignore the GiB/GB difference here; it shifts the figures by less than 10%. The other value, Logical Sectors Written, is the same as attribute 241: 22599 * 1024^3 / 47394438211 = 512, exactly the sector size.
The algorithm overhead then figures as (238 / 188) - 1 = 27%. In other words, one can write the full capacity of the drive 791 times.
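For the record, here is the same arithmetic as a small Python sketch. The inputs are just the SMART values quoted above, and the erase-count target of 1000 is the one inferred from attribute 231; the overhead prints as 26% instead of 27% only because the sketch does not round 188.3 down to 188:
Code:
# Rough endurance math from the SMART values above.
# Treats GB and GiB as equal, as the text does.
def endurance(capacity_gb, host_writes_gib, avg_erase_count, erase_target=1000):
    full_writes = host_writes_gib / capacity_gb       # full-capacity writes so far
    overhead = avg_erase_count / full_writes - 1      # wear-levelling overhead
    n = erase_target * full_writes / avg_erase_count  # total full-capacity writes N
    return full_writes, overhead, n

fw, oh, n = endurance(120, 22599, 238)
print(f"{fw:.0f} full writes, {oh:.0%} overhead, N = {n:.0f}")
# -> 188 full writes, 26% overhead, N = 791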
That was fine with me, and so I bought the second piece.
Drive 2:
Code:
9 Power_On_Hours -O--CK 100 100 000 - 2995
12 Power_Cycle_Count -O--CK 100 100 000 - 134
231 SSD_Life_Left ------ 087 087 000 - 87
233 Flash_Writes_GiB -O--CK 100 100 000 - 7883
241 Lifetime_Writes_GiB -O--CK 100 100 000 - 7729
242 Lifetime_Reads_GiB -O--CK 100 100 000 - 3349
244 Average_Erase_Count ------ 100 100 000 - 130
245 Max_Erase_Count ------ 100 100 000 - 174
246 Total_Erase_Count ------ 100 100 000 - 55955
0x01 0x018 6 3324330597 --- Logical Sectors Written
0x01 0x028 6 2730222700 --- Logical Sectors Read
But here, things are very different.
Attribute 231 is still attribute 244 transformed per 100 - (X / 10), so the erase-count target still seems to be 1000.
But the overhead now figures as: 7729 GiB (attribute 241) / 240 GB = 32 full-capacity writes, and (130 / 32) - 1 = 306%!
The Logical Sectors Written do not line up with anything either: 7729 * 1024^3 / 3324330597 = 2496, which is no sensible sector size.
And the finally interesting number N comes out at only 248 full-capacity writes!
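Feeding drive 2's values into the same sketch reproduces these figures (again with small rounding differences):
Code:
# Same arithmetic as above, applied to drive 2's SMART values.
def endurance(capacity_gb, host_writes_gib, avg_erase_count, erase_target=1000):
    full_writes = host_writes_gib / capacity_gb
    overhead = avg_erase_count / full_writes - 1
    n = erase_target * full_writes / avg_erase_count
    return full_writes, overhead, n

fw, oh, n = endurance(240, 7729, 130)
print(f"{fw:.0f} full writes, {oh:.0%} overhead, N = {n:.0f}")
# -> 32 full writes, 304% overhead, N = 248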
This very much looks like somebody at the manufacturer decided that 1000 full writes is far too much for a consumer drive, and changed the algorithm accordingly to use up the drive in a quarter of the time (while the chips might be just the same).
There is a name for this kind of 'improvement': we call it planned obsolescence.
Addendum:
Doing some research and inquiring about the Kingston company makes the phenomenon a bit clearer:
Kingston has a long-standing history of offering unlimited lifetime warranty on their storage products. With flash memory products, which are built to decay by design, such a corporate philosophy obviously cannot be kept up. Current documents from Kingston now give the interesting number N as 333 for the mentioned models, which is similar to other brands' equivalent products.
It is also worth remarking that the SMART data from these Kingston drives is quite intelligible (which is not so true for certain other brands); otherwise one could not even observe such changes.
Overall, I will certainly continue to buy Kingston memory.