ZFS based home storage

I feel like my Gigabyte* 965P-DS3 is about to meet its destiny after years of darkness... Any objections?


*(@xibo: now you understand where my typo is coming from)
 
WiiGame said:
And a curiosity: how low of "yesterday's" hardware do you think ZFS can run well on? (Think LGA775/DDR2 ballpark, not 486s.) Too advanced for a recycled box?

I'm running it on a Pentium 4 box with 2 GB of RAM. It took a lot of tweaking and tuning to make it stable with my workloads. It has two mirror vdevs using four 500 GB drives.
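For reference, a two-mirror-vdev layout like that can be sketched with a single zpool command (the pool name "tank" and the ada* device names are placeholders; substitute your actual disk device names):

```shell
# Create a pool from two mirror vdevs, i.e. four 500 GB drives paired off.
# ZFS stripes data across the two mirrors, so you get the capacity of two
# drives and the redundancy of a mirror within each pair.
zpool create tank \
    mirror ada1 ada2 \
    mirror ada3 ada4

# Verify the layout and health
zpool status tank
```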
 
Thanks, friends. One small addition: does anyone think two network cards instead of one matters enough to be worth it in a ZFS setup?
 
I will list my hardware specs in the hope of helping others here:

Athlon II X3
8 GB (2 x 4 GB) DDR3-1333
400 W DiabloTek PSU
2 x 1 TB Hitachi HDDs in mirrored configuration (7,200 rpm, 64 MB cache)
120 GB SSD (SATA II) as system drive
CD-ROM drive
Cheap DiabloTek case
Cheap DVD-ROM/burner

This was one of the TigerDirect combo specials. The base system was $280 after rebates, which included everything except the two Hitachi HDDs at $99 each. The total was just barely under $500 after taxes. It's used as a home file server that serves files over the network with Samba and provides remote access via ssh/sftp. Most of the time it sits idle, and the CPU spins down to the point that none of the fans need to run. It is more than able to saturate the gigabit connection between my desktop and the file server without breaking a sweat.
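For the Samba side of a setup like this, a minimal read/write share can be just a few lines; this is a sketch only, with a made-up share name, path, and user (the file lives at /usr/local/etc/smb4.conf on FreeBSD, /etc/samba/smb.conf on most Linux distros):

```ini
[global]
    workgroup = WORKGROUP
    server string = home file server
    security = user

; export the pool's mount point -- path and user are examples
[tank]
    path = /tank
    read only = no
    valid users = youruser
```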

I had to buy a new box because I didn't have one lying around to recycle. I went for the cheapest modern hardware possible, and it is still extremely overpowered for my use case. What do I think the minimum system requirements are for my use case? Probably a single-core processor circa 2005 and 2 GB of DDR RAM. I actually had an old desktop with an Athlon 64 3700+ (2.2 GHz San Diego single core) and 4 GB of DDR-400 (4 x 1 GB) that would have been a perfect box to recycle for this purpose. Too bad I got rid of it a year ago (11 months before I felt the need to build a ZFS file server).

Just my thoughts and unscientific opinions.
 
WiiGame said:
Thanks, friends. One small addition: Does anyone think 2 versus 1 network cards matters enough to make it worth it in a ZFS setup?

That will depend on your workload (number of clients/type of access) and on the type and number of vdevs in your pool.

If you have gigabit ethernet and a fairly random workload (i.e., multiple clients), then I doubt spinning disks will keep up unless you have a LOT of them. If you're using SSDs, I suspect you'll saturate gig-e no problem.

If you're doing large sequential reads, and you have enough disks to saturate the network, then of course multiple NICs will help.

Also, if you have 2 NICs in your server, obviously it will only be able to supply data at >1 gigabit if you have enough client machines hitting it simultaneously at 1 gigabit or faster - or a single client with multiple gig-e NICs. In a home situation, I doubt you'll have multiple NICs in your client machines, and probably not enough machines hitting the box continually to matter.
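To put rough numbers on that (the disk figure below is an assumed ballpark, not a measurement):

```shell
# Gigabit ethernet tops out at 1000 Mbit/s, i.e. 125 MB/s of raw payload;
# protocol overhead usually leaves something like 110 MB/s for Samba/NFS.
link_mbps=1000
link_mbytes=$(( link_mbps / 8 ))   # 125 MB/s theoretical ceiling
echo "gig-e ceiling: ${link_mbytes} MB/s"

# A single 7200 rpm drive doing large sequential reads manages very roughly
# 100-150 MB/s (assumed figure), so a spindle or two already saturates the
# link for sequential work -- a second NIC only pays off when clients can
# collectively pull more than one link's worth.
disk_mbytes=150
[ "$disk_mbytes" -ge "$link_mbytes" ] && echo "one disk can saturate gig-e"
```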

Bear in mind that setting up multiple NICs for faster network throughput may require configuration of your switch ports. A dumb switch may not help you do link aggregation...
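As a sketch of what that configuration looks like: on FreeBSD, LACP aggregation of two NICs is a few lines in /etc/rc.conf (the em0/em1 interface names are examples, and the matching switch ports must also be configured for LACP):

```shell
# /etc/rc.conf -- bond em0 and em1 into one lagg interface using LACP
ifconfig_em0="up"
ifconfig_em1="up"
cloned_interfaces="lagg0"
ifconfig_lagg0="laggproto lacp laggport em0 laggport em1 DHCP"
```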
 
I'm running it on a Pentium 4 box with 2 GB of RAM. It took a lot of tweaking and tuning to make it stable with my workloads. It has two mirror vdevs using four 500 GB drives.
Do you have any tips on building a skeleton NAS using ZFS? The current wisdom is that if you're going to use ZFS, the system should have 8 GB of ECC RAM.
 
This post is 8 years old so hardware choices have changed.

I use 64 GB of ECC DDR4 in my rig. I would consider 16 GB to be the minimum RAM for a ZFS box.
My 24-bay chassis has 16 drives of 500 GB each in 2 vdevs. I also have a separate zpool with 8 x 16 GB SSDs for speed.
I recently added 2 NVMe drives for SLOG and L2ARC.
The speed of ZFS isn't so great, and adding more spindles did not help much. I only get around 200 MB/sec with the 16-spindle zpool.
So my speed is approximately that of a single drive. This is without any tuning.
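For anyone following along, adding NVMe devices as log and cache vdevs is a one-liner each (pool and device names here are placeholders):

```shell
# Attach one NVMe as a SLOG (intent log for synchronous writes) and one as
# L2ARC (second-level read cache). Losing an L2ARC device is harmless;
# consider mirroring the SLOG if sync-write safety matters to you.
zpool add tank log nvd0
zpool add tank cache nvd1

# Confirm the new vdevs and watch their utilization
zpool iostat -v tank
```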

To me, a skeleton NAS would entail a ZFS array with NFS set up for network shares.
So you need to figure out how you plan on sharing your files:
iSCSI, Samba, or NFS.
Also decide how you want your OS mounted. You can use 'ZFS on root', or a small 'disk on module' with UFS for the OS.
For me, a SATADOM was a good choice so I could keep the ZFS pool separate from my OS. I do back up the OS to the zpool.
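Backing a UFS OS disk up to the pool can be as simple as a dump piped into a file on a dataset; on a ZFS-on-root box, snapshot-and-send works instead. A sketch with hypothetical pool/dataset names:

```shell
# UFS root -> compressed dump file living on the pool (FreeBSD dump(8);
# -L takes a snapshot of the live filesystem first)
dump -0 -a -L -f - / | gzip > /tank/backups/root.dump.gz

# Or, on a ZFS-on-root system: snapshot the boot environment and send it
# into a backup dataset on the data pool
zfs snapshot zroot/ROOT/default@weekly
zfs send zroot/ROOT/default@weekly | zfs receive tank/backups/os
```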

My first go-around I installed Webmin for a web GUI. This time around I am using only the CLI.
 
While I don't disagree that a 64 GB node with 24 disks and a couple of SSDs is a nice setup ... my home ZFS server has one 64 GiB boot SSD (still booting from UFS, not for any particularly good reason; that's just the way it was installed long ago), two spinning hard drives (mirrored in ZFS, even though one is 3 TB and the other is 4 TB), and 4 GiB of memory, of which only 3 GiB is actually usable since it is still a 32-bit machine (Intel Atom). No ECC. All administration is via the command line. It works fabulously well for my meager requirements.

Would I recommend running ZFS without ECC, or with that little memory, or on 32 bits? No, I would not. But if you are a little careful and have patience, it works well.
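On a machine with that little memory, the usual first tuning step (my suggestion, not something the poster describes) is to cap the ARC so ZFS doesn't starve the rest of the system; on FreeBSD that is a loader tunable, and the 1 GiB value below is just an example for a ~4 GiB box:

```shell
# /boot/loader.conf -- limit the ZFS ARC to 1 GiB (example value)
vfs.zfs.arc_max="1G"
```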
 
This post is 8 years old so hardware choices have changed.
I'm aware of that and have made more relevant hardware choices.

It would be beneficial to know what tuning and tweaking needs to be done for a skeleton NAS using ZFS for disaster-recovery purposes. There are times when you may have to do disaster recovery with stone knives and bearskins, as Mr. Spock did in ST:TOS to construct a mnemonic memory circuit. I have one older system here that doesn't support more than 4 GB of memory and only supports SATA I. I've had to use it for disaster recovery for a desktop.
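For that disaster-recovery case, the core move is importing the pool from a rescue environment; a cautious sketch (pool name and mount point are placeholders):

```shell
# From a live/rescue system: list pools visible on the attached disks
zpool import

# Import read-only under an alternate root -- a safe first step when the
# pool's health is unknown
zpool import -o readonly=on -R /mnt tank

# Copy data off, or re-import read-write once you're satisfied it's healthy
```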
 