In June I replaced my laptop's 1 TB HGST spinning disk with a 1 TB Samsung 870 EVO SSD. The boot partitions (yes, three of them) are UFS, the data lives on a large ZFS pool, and there's a rarely used 100 MB Windows partition.
I noticed the other day that even though the SSD is only about 3-4 months old, smartmontools reports over 2.5 TB already written. This is a laptop I do some git work on, while the majority of my heavy lifting is done by my machines downstairs. 2.5 TB seemed excessive for 3-4 months of relatively light to moderate use.
Comparing two smartctl snapshots of S.M.A.R.T. attribute 241 (Total_LBAs_Written) taken 24 hours apart, I saw close to 30 GB written. That's excessive for a 24-hour period.
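If you want to repeat the measurement, here's a minimal sketch. It assumes a SATA SSD at /dev/ada0 (substitute your own device node) and that the raw value of attribute 241 counts 512-byte LBAs, which it does on the 870 EVO:

```sh
# Snapshot the raw value of attribute 241 now, and again 24 hours later.
# /dev/ada0 is an assumption; use your SSD's device node.
smartctl -A /dev/ada0 | awk '/Total_LBAs_Written/ { print $NF }'

# The difference between the two raw values, times 512 bytes per LBA,
# is the number of bytes written in the interval:
#   bytes_written = (raw_later - raw_earlier) * 512
```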
Today, so far, 1.6 GB.
What did I do?
Two things: I set sync=disabled and atime=off. Granted, sync=standard limits data loss to the last write, whereas sync=disabled might lose the last 100 or so writes (anything since the last transaction group commit). But I'm not sure I'm willing to replace an SSD after only a year of use.
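Concretely, that's two property changes. The dataset name zroot/usr/home below is a placeholder; apply them to whichever datasets take the write traffic:

```sh
# Disable synchronous write semantics and access-time updates.
# zroot/usr/home is a hypothetical dataset; substitute your own.
zfs set sync=disabled zroot/usr/home
zfs set atime=off zroot/usr/home

# Verify the settings took.
zfs get sync,atime zroot/usr/home
```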
My ashift is already 12; zpool create did a good job of automatically selecting a large enough ashift based on the drive's real LBA size rather than the size it advertises, though 13 (8 KB blocks) might have been better. Samsung hasn't published the 870 EVO's internal page size.
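To check your own pool, zdb reports the ashift recorded for each vdev, and on FreeBSD a sysctl sets the floor for vdevs added in the future (ashift is fixed at vdev creation, so this won't change an existing pool). zroot is a placeholder pool name:

```sh
# Show the ashift recorded in the pool config; zroot is a hypothetical pool.
zdb -C zroot | grep ashift

# On FreeBSD, require at least 8 KB sectors (ashift=13) for new vdevs.
sysctl vfs.zfs.min_auto_ashift=13
```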
ZFS compression will also reduce bytes written. I already have LZ4 enabled, but those who don't should turn it on when running on SSD or NVMe.
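For those who haven't, it's a single property, inherited by child datasets, and compressratio shows what it's saving you; again, zroot is a placeholder:

```sh
# Enable LZ4 pool-wide; existing data is not rewritten, only new writes.
zfs set compression=lz4 zroot

# See the achieved compression ratio.
zfs get compressratio zroot
```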
Since then I've re-enabled sync=standard and added a log device on a less expensive SD card, which can be replaced at a fraction of the cost of the SSD.
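That was one property change and one vdev addition. The SD card's device node (/dev/da0 here) and the dataset name are assumptions:

```sh
# Restore synchronous write semantics on the dataset.
zfs set sync=standard zroot/usr/home

# Add the SD card as a separate intent log (SLOG) device, so ZIL
# traffic lands there instead of on the SSD.
zpool add zroot log /dev/da0
```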
(I think UFS would be a better general choice for reducing SSD/NVMe wear. With soft updates, the UFS journal only needs to track deletes: soft updates order the other writes to maintain consistency on their own, but can't cover freeing blocks and inodes after a delete.)
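If you're curious whether your UFS boot partitions use soft updates journaling, tunefs will print and toggle it (change it only on an unmounted filesystem); /dev/ada0p2 is a hypothetical partition:

```sh
# Print current UFS tuning, including soft updates and journaling status.
tunefs -p /dev/ada0p2

# Enable soft updates journaling; run against an unmounted filesystem.
tunefs -j enable /dev/ada0p2
```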
So far so good.
If anyone else is noticing excessive SSD or NVMe wear using ZFS, I'd like to hear your stories too.