Solved: Mismatched drives in pool?

I am wondering about a mirrored pair in a ZFS pool. I have two Lenovo M710q tiny servers that I use for jail servers. The servers have a 2.5" drive slot, but they also have a SATA/NVMe m.2 slot.

Each server currently has a Crucial 480GB SSD running FreeBSD 14.2. I also just got a pair of 500GB NVMes. I know if I pool them, it will run at the speed of the slowest device, the SSD. Is it better to give the pools full redundancy and the like, or is hobbling the m.2 drive just not worth it?
Are there any other gotchas with mixing NVMe with SSD?

Thanks, and Merry Christmas,
--vr
 
You can have redundancy, or you can have speed. Your choice.

[Though the NVMe side of an NVMe+SSD mirror should deliver superior read speeds, which would get you redundancy and fast reads.]
 
I know if I pool them, it will run at the speed of the slowest device, the SSD.
That statement is only mostly true. For a mirror: writes will indeed slow down to the speed of the slowest device. That's because all writes have to go to both devices, and eventually the faster one just twiddles its thumbs while the slower one catches up. But for reads, you may actually get some speedup, perhaps roughly to the average or sum of the speeds. That's because reads go to either one or the other, so while the slower one is busy doing one read, the faster one can do a different one.

If you just put them together into a pool without mirroring (in effect striping them), both read and write throughput will usually improve somewhat (good for overall throughput); but the latency of an individual IO may get longer when you add a slower device (not good for user frustration).
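To make the two layouts concrete, here is a minimal sketch of both pool creations. The device names are hypothetical — on FreeBSD 14 the 2.5" SATA SSD would typically show up as ada0 and the NVMe drive as nda0, but check `geom disk list` on your own machine first.

```shell
# Hypothetical device names (ada0 = SATA SSD, nda0 = NVMe); adjust to
# whatever `geom disk list` reports on your system.

# Mirror: full redundancy; writes run at the speed of the slower device,
# reads are spread across both.
zpool create tank mirror ada0 nda0

# Stripe (no vdev keyword): capacity and throughput of both devices,
# but no redundancy -- losing either drive loses the pool.
# zpool create tank ada0 nda0
```

These commands need root and destroy any existing data on the named devices, so they are shown as a sketch rather than something to paste in blindly.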

Is it better to give the pools full redundancy and the like, ...
I would always take redundancy if I can get it. The minor issues with speed (a few seconds lost here and there) will seem irrelevant if you need to spend days recovering data from backups and reinstalling your server.
 
That statement is only mostly true. For a mirror: writes will indeed slow down to the speed of the slowest device. That's because all writes have to go to both devices, and eventually the faster one just twiddles its thumbs while the slower one catches up. But for reads, you may actually get some speedup, perhaps roughly to the average or sum of the speeds. That's because reads go to either one or the other, so while the slower one is busy doing one read, the faster one can do a different one.

Excellent! I remember, when I first started using FreeBSD and ZFS about 10 or 11 years ago, reading that you never wanted to pair SSDs with spinning rust... But this is better.

If you just put them together into a pool without mirroring (in effect striping them), both read and write throughput will usually improve somewhat (good for overall throughput); but the latency of an individual IO may get longer when you add a slower device (not good for user frustration).


I would always take redundancy if I can get it. The minor issues with speed (a few seconds lost here and there) will seem irrelevant if you need to spend days recovering data from backups and reinstalling your server.
I agree wholeheartedly, which is why this thread came up. This is something, especially on servers, that I absolutely try to avoid.

On the plus side, I do have a very robust system of snapshots. I use ZFStools to do cascading snapshots, 15 minute -> hourly -> daily -> weekly -> monthly. Then I send the snapshots to my NAS boxes. So I could recover fairly quickly, but not having to recover at all is much better than an easy recovery. :)
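The send-to-NAS step above can be sketched with plain `zfs send`/`zfs receive`. The dataset, host, and snapshot names below are hypothetical (zfstools names its snapshots `zfs-auto-snap_<interval>-<timestamp>`); the shape of the commands is the point.

```shell
# Hypothetical names: pool "tank", dataset "jails", NAS target "backup/jails".

# Initial full replication of the dataset (and its children, -R) to the NAS:
zfs send -R tank/jails@zfs-auto-snap_daily-2024-12-25-00h00 | \
    ssh backup@nas zfs receive -u backup/jails

# Later runs send only the blocks changed between two snapshots (-i):
zfs send -R -i @zfs-auto-snap_daily-2024-12-25-00h00 \
    tank/jails@zfs-auto-snap_daily-2024-12-26-00h00 | \
    ssh backup@nas zfs receive -u backup/jails
```

The incremental form is what makes frequent (e.g. 15-minute) cascades cheap to replicate: only deltas cross the wire.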
 
Excellent! I remember when I first started using FreeBSD and ZFS about 10 or 11 years ago reading that you never wanted to pair SSDs with spinning rust...

There should be no problem with that either.

In fact I am in the process of replacing 8 SAS hard drives in a raidz with SSDs. I can only do one or so per week, so the machine runs mixed for months. No problems.
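The one-drive-at-a-time swap described above is just a repeated `zpool replace` cycle. Pool and device names here are hypothetical; the key discipline is waiting for each resilver to finish before pulling the next drive.

```shell
# Hypothetical names: pool "tank", old SAS drive da3, new SSD da8.
zpool replace tank da3 da8   # start resilvering onto the new device

# Watch progress; repeat for the next drive only once this reports
# the resilver as complete and the pool as healthy.
zpool status tank
```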
 
There should be no problem with that either.

In fact I am in the process of replacing 8 SAS hard drives in a raidz with SSDs. I can only do one or so per week, so the machine runs mixed for months. No problems.
Yes, I did that with my NAS last year, going from 4TB to 10TB drives (albeit both rust).

I'm glad to know that this is nothing more than an old wives' tale.
 
This is something, especially on servers, that I absolutely try to avoid.
On an expensive commercial server, with high load, it would be a bad idea to pay the $$$ for the speed of an SSD, and then get the slowness of spinning rust because it is in the same pool as a traditional disk. On a personal workstation or lightly used server, the loss of performance probably doesn't matter, and you get the convenience.
 