ZFS Installation with ZFS RAID and question about SWAP

Hi everyone,

I am upgrading my current homelab server from 4x 240GB SSDs to 4x 1.6TB NVMe drives on my Dell PowerEdge R730xd. I read online that the Dell PERC + expansion controller cannot create a RAID array on NVMe drives, so I am looking for a software RAID solution. Questions for you:
- Is it worth creating a RAID array on NVMe drives? I was using RAID 10 on 4 SSD drives and I was told it's absolutely pointless because SSD drives are more reliable than HDDs, and every time I mirror data I am just shortening SSD life (an SSD can only write a certain amount of data, and after that it goes read-only).
- If RAID 10 is still worth it, what is the correct way to create RAID 10 during installation using ZFS?
- What is the recommended swap space? My setup has 128GB of RAM and I am planning to expand to 384GB. The current setup has 2GB of swap (guided installation) and it turned out I ran out of swap space; I had to create a file and mount it as swap.
- How does it work if, for example, I move to a newer-generation server? Will the system boot automatically if I move the drives to different hardware? When I moved from a Dell R430 to the R730xd, the RAID controller asked me to restore the array and the system booted without any issues. What about ZFS RAID?

If RAID 10 or RAID 5 is not recommended with my setup, maybe striping is an option? I'm not sure how, or whether it's even possible on ZFS, but I believe it would speed things up. I could use my SAS HDDs for backups.

Thanks,
Seb
 
I was using RAID 10 on 4 SSD drives and I was told it's absolutely pointless because SSD drives are more reliable than HDDs and every time I am mirroring data I am just shortening SSD life

Someone told you complete bullshit. SSDs are LESS reliable than HDDs and often die at random, without any warning. Unlike with HDDs, S.M.A.R.T. values on SSDs are largely worthless and exist mainly for backward compatibility with the ATA protocol. Additionally, all members of a mirror receive an equal amount of writes. In a flash array, this makes it very likely that more than one drive will wear out around the same time, causing the entire array to be lost.

it's got certain amount of data writable on device and after that it goes to read only

No, in almost all cases the drive simply disappears. Nothing can be read from it.
 
- Is it worth creating a RAID array on NVMe drives? I was using RAID 10 on 4 SSD drives and I was told it's absolutely pointless because SSD drives are more reliable than HDDs, and every time I mirror data I am just shortening SSD life (an SSD can only write a certain amount of data, and after that it goes read-only).

Would you rather lose the entire array, along with the data and the service?

- If RAID 10 is still worth it, what is the correct way to create RAID 10 during installation using ZFS?

The guided installer lets you choose the pool layout. By default it is a stripe, but you can change it to RAID 10 (two mirrored vdevs, striped together).
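If you ever set it up by hand instead of through the installer, a ZFS "RAID 10" is just a pool of striped mirrors. A minimal sketch, assuming a pool name of `tank` and placeholder device names for your four NVMe drives:

```shell
# Create a RAID 10 equivalent: two 2-way mirrors, striped together.
# nvd0..nvd3 are hypothetical device names; substitute your own.
zpool create tank \
    mirror /dev/nvd0 /dev/nvd1 \
    mirror /dev/nvd2 /dev/nvd3

# Verify the layout: the output should show two mirror vdevs.
zpool status tank
```

ZFS stripes writes across all top-level vdevs automatically, so listing two `mirror` groups on one `zpool create` line is all it takes.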

- What is the recommended swap space? My setup has 128GB of RAM and I am planning to expand to 384GB. The current setup has 2GB of swap (guided installation) and it turned out I ran out of swap space; I had to create a file and mount it as swap.

There is more than one thread on the forum about this topic. The old rule of thumb is "twice your physical memory", but that dates from when machines had far less RAM; with 128GB+ it no longer makes much sense.
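Since you already worked around it with a swap file, for reference this is roughly the sequence on FreeBSD (the path `/usr/swap0`, the 16GB size, and the `md99` unit are placeholders; adjust to taste):

```shell
# Create a 16GB swap file (size and path are examples).
dd if=/dev/zero of=/usr/swap0 bs=1m count=16384
chmod 0600 /usr/swap0

# Register it in /etc/fstab as a memory-disk-backed swap device:
echo 'md99 none swap sw,file=/usr/swap0,late 0 0' >> /etc/fstab

# Activate all late swap entries now, then verify:
swapon -aqL
swapinfo -h
```

On a root-on-ZFS system a dedicated swap partition is generally preferable to a file or zvol, so sizing it properly at install time saves this workaround entirely.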

- How does it work if, for example, I move to a newer-generation server? Will the system boot automatically if I move the drives to different hardware? When I moved from a Dell R430 to the R730xd, the RAID controller asked me to restore the array and the system booted without any issues. What about ZFS RAID?

If RAID 10 or RAID 5 is not recommended with my setup, maybe striping is an option? I'm not sure how, or whether it's even possible on ZFS, but I believe it would speed things up. I could use my SAS HDDs for backups.

You shouldn't have any problems importing the ZFS pool on new hardware. And why don't you use an HBA? ZFS on top of a hardware RAID controller works, but from what I've heard it's not recommended.
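Moving the pool to new hardware is just an export/import cycle; ZFS identifies the drives by their on-disk labels, not by controller or slot. A sketch, assuming the pool is named `tank`:

```shell
# On the old server, before pulling the drives (optional but clean):
zpool export tank

# On the new server, scan attached devices for importable pools:
zpool import

# Import the pool by name:
zpool import tank

# If the pool was not cleanly exported from the old machine,
# force the import:
# zpool import -f tank
```

This is why ZFS survives hardware swaps more gracefully than a PERC array: there is no controller-specific metadata to "restore".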

If I were you, I would look for an HBA compatible with your server, with NVMe devices, and of course with FreeBSD.

I don't know where you heard, or who told you, that it is not recommended to create redundancy with flash devices, but forget that. If you have SAS hard drives, you could build the pool on them and let ZFS use your RAM (the ARC) plus your NVMe devices as cache and log mirrors, to increase speed.
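That hybrid layout can be sketched like this (all device names are placeholders; `da*` stands in for the SAS HDDs and `nvd*` for the NVMe drives). Note that log vdevs can be mirrored, but cache (L2ARC) devices cannot — losing a cache device is harmless, so ZFS doesn't allow mirroring them:

```shell
# Hypothetical pool: SAS HDD mirrors for the data.
zpool create tank \
    mirror /dev/da0 /dev/da1 \
    mirror /dev/da2 /dev/da3

# Add an NVMe read cache (L2ARC); standalone, not mirrorable.
zpool add tank cache /dev/nvd0

# Add a mirrored NVMe log device (SLOG) for synchronous writes.
zpool add tank log mirror /dev/nvd1 /dev/nvd2
```

The SLOG only helps synchronous write workloads (NFS, databases), and the L2ARC only pays off once the RAM-based ARC is saturated, so whether this beats an all-NVMe pool depends on the workload.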
 