ZFS-based home storage

Hi guys,

I'm trying to build a home NAS. I've read various threads around the internet, but I think I'm more confused than I was before. I have a feeling this is kind of off-topic, so I'd rather put it here instead of in the HW section. If you have this kind of storage at home, please share your experience.

My goal: build a ZFS-based storage box with ~9TB free, as quiet as possible (passive cooling + passive PSU?), while keeping power consumption to a minimum. If possible, it should act as a home router too (1Gbit LAN to a 40MB/s uplink).

It seems it's impossible to meet all my demands. If I set the highest priority to silence, what board will suit my needs? Is it possible to run 3 x 3TB disks + auxiliaries on an Atom-based board? If not, and I need to buy something more powerful, what about power consumption?

I want to ask somebody who has built this kind of storage to share his (or her) experience: what not to buy and/or what to avoid in general.

Thanks.
 
My storage requirement isn't as large as yours, but I run a home ZFS storage server. Cool and quiet is easy to achieve. Get a decent quality fan for the CPU (stock fans are usually designed to be cheap, not quiet) and a high-efficiency power supply of at least 350 watts, and you will be good to go.

Note that ZFS wants a lot of RAM to run well because of the ARC (its adaptive replacement cache). The more RAM you have, the better.

If you plan on using deduplication, you should budget at least 2GB of free RAM per terabyte of available storage in order to have an adequate ARC. If you do a mirror, you'll have 3TB of available storage, so your system will in theory need around 6GB of RAM. If you do a raidz, you'll have approximately 6TB of available storage and 3TB of parity, needing 12GB of RAM. A striped ZFS pool (i.e. RAID-0) gives you 9TB of available storage and requires 18GB of RAM. Given how quickly that RAM requirement grows with pool size, it's usually more efficient to use compression instead of deduplication to save space.
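If you want to see what dedup would actually buy you before committing the RAM, you can simulate it on an existing pool; a minimal sketch (the pool/dataset names "tank" and "tank/data" are placeholders):

    # Simulate deduplication: prints a histogram of duplicate blocks and
    # the estimated dedup ratio. Read-only, so it's safe to run anytime.
    zdb -S tank

    # Compression is usually the cheaper win; it's a per-dataset property:
    zfs set compression=lzjb tank/data
    zfs get compressratio tank/data   # check the achieved ratio later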
 
@vermaden Thanks for the tip on PICO PSU - it does sound interesting.

You can go for a dual-core Atom system, which should 'take' 3 x 3TB without a problem.
That's the point - can it? I read different things, and that's why I'm looking for somebody to verify it from his/her experience. I've heard that's just too much for an Atom to handle.

@rajl I will be using a simple raidz - it's basically for my movies, mostly the TV series (sitcoms) I have. Thanks to Apple (r) I don't need to care about music storage (nope, not a commercial break ;) ).
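For the curious, the pool I have in mind would be created along these lines (a sketch; the disk device names are placeholders for whatever the kernel detects):

    # 3 x 3TB disks in one raidz1 vdev: ~6TB usable, survives one disk failure
    zpool create tank raidz ada0 ada1 ada2
    zfs create tank/movies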

I will try to get ECC RAM, but the problem might be with motherboards - the small form factor ones usually don't support it. And I don't know of any fan that is quiet - all of them are kind of annoying. That's why going passive interests me a lot.
 
My home server runs on an Intel i7 with 12GB RAM. It uses raidz2 with a total of 12TB of disk space. The reason I used a strong processor is that the server always runs complex networks in dynamips. I also run AES-256 geli encryption over all disks. There is some ZFS compression as well.
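For reference, the geli layer boils down to something like this (a simplified sketch; device names are placeholders and my actual flags may differ):

    # One-time initialization of each disk with a 256-bit AES-XTS key
    geli init -e AES-XTS -l 256 /dev/ada1

    # Attach (prompts for the passphrase), then build the pool on the .eli devices
    geli attach /dev/ada1
    zpool create tank raidz2 ada1.eli ada2.eli ada3.eli ada4.eli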

Say I want to make a pure media server with all those disks and features on a new processor (and leave dynamips on the i7). Would those Atom processors suffice for ZFS and geli?

P.S. I'm not trying to hijack the thread; maybe some of this is useful to the OP as well.
 
If you are looking for a good balance of power and performance, I'd recommend a Bobcat/Llano solution. Atom is power efficient, but it's still an in-order execution processor (although I think Intel's roadmap has out-of-order execution on the "to do" list).

ZFS really hits the RAM hard, so I would focus on that first. You can probably get away with non-ECC RAM (and save a boatload of money), given ZFS's built-in checksumming. After RAM, you need a decent processor. AMD's Llano-based boards are probably the sweet spot for your uses. Atom will probably be too slow, but you don't need an i7 either (unless you're doing 256-bit disk encryption on top of a compressed file system combined with deduplication that also calculates parity bits in raidz and... you get the idea).

If you're keeping it simple, a lot of RAM and an AMD Llano should work (or maybe an Intel Atom). An Intel Pentium or Core i3 or an AMD Athlon II will definitely be more than enough if you're just running a home file server. The Pentium and Athlon II are exceptionally cheap, so you might want to consider them.
 
rajl said:
given ZFS's built-in checksumming.

@rajl: You've got it the other way around - ZFS checksums verify that what's on disk matches what was in RAM. But if the data is already corrupted in RAM and then written out correctly, you are screwed: the checksum will be OK, the data will not. An AMD solution might be interesting, but from what I know they're more problematic when it comes to heat and high temperatures in general.
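That on-disk verification is exactly what a scrub exercises; a minimal sketch (the pool name is a placeholder):

    # Walk every allocated block in the pool and verify its checksum;
    # with redundancy (mirror/raidz) ZFS repairs what it finds.
    zpool scrub tank

    # Later: progress, plus per-device read/write/checksum error counters
    zpool status -v tank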

I already have a small ZFS server in a datacenter - it's a 4TB raidz + a 160GB ZFS mirror with 8GB RAM. It also hosts two virtual machines; about 2GB of memory is used for them. Works just fine.

@bbzz: Well, hopefully this thread won't get hijacked... I've used dynamips for 8 years or so. Even though you can compute the idlepc value, once you set up your network with, let's say, EIGRP, OSPF and BGP, things get interesting. When I had a weaker computer I increased the hello timer values to avoid flapping routes. I wouldn't bet money on an Atom being able to handle a lot of those Cisco instances.
 
I'm using a Xeon E3-1260L and 16GiB of memory (I chose to go with the more expensive enterprise hardware because I've been disappointed a lot by desktop hardware lately, and I was able to make a deal that rendered it only a little more expensive) for a 4x2TB raidz (3+1). Assuming you use fletcher4 checksums and gzip-9 compression, and that an Atom has 1/10th of the Xeon's performance, you should get somewhat acceptable read speeds, but writes will be horrible. Deduplication should be thought through carefully if you can't spare enough memory, unless you want to feel the floppy-speed nostalgia (it's a lot faster with a sequence of mirrors than with raidz, but it's still bad). I can't comment on GELI because I've never used it.
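To illustrate the "sequence of mirrors" versus raidz point (device names are placeholders): striped mirrors give up capacity for much better random I/O, which is what dedup's table lookups hammer.

    # Four disks as striped mirrors: capacity of 2 disks, IOPS of 2 vdevs
    zpool create tank mirror ada0 ada1 mirror ada2 ada3

    # The same four disks as raidz1: capacity of 3 disks, IOPS of 1 vdev
    zpool create tank raidz ada0 ada1 ada2 ada3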

Some time ago I was running a dual-core Atom (D510) with 4GiB of memory and two Hitachi Deskstars in Linux mdraid-1, and the sequential disk access speeds were 50-90MiB/sec (no compression, ext4, some tuning I don't remember any more). However, when accessing it via NFS, I couldn't get more than 10-15MiB/sec regardless of the HDD specifics (access to tmpfs was no faster), even after a lot of NFS tuning, while I got 25-30MiB/sec via FTP. FreeBSD's NFS and IP implementations are more efficient than Linux's (or at least it feels that way to me), but if you want NFS access I would recommend staying away from Atoms nevertheless.
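For comparison, on the FreeBSD side serving a ZFS dataset over NFS is roughly this (a sketch; the network range and dataset name are placeholders):

    # /etc/rc.conf: enable the NFS server machinery
    rpcbind_enable="YES"
    nfs_server_enable="YES"
    mountd_enable="YES"

    # Export a dataset read-only to the LAN; on FreeBSD the sharenfs value
    # is passed through as exports(5) options.
    zfs set sharenfs="-ro -network 192.168.1.0 -mask 255.255.255.0" tank/media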

In fact, after reading this article, I would recommend staying away from Atom to begin with: a personal NAS will only see burst loads and be idle the remaining 99% of the time, while an enterprise NAS will be under too much load for an Atom to handle. Also, 1155 boards can usually take a lot more memory.
 
I myself have, and would recommend, an AMD Athlon II X2, because it's cheap, quite fast and supports ECC. However, getting ECC working also requires ECC support in the motherboard. When I bought my NAS hardware I chose a motherboard with a 760G chipset; this way I also didn't need to buy a graphics card. The 760G chipset boards usually come with 6 SATA ports, which gives good opportunities for raidz or raidz2.

You can also install the OS on a memory stick if you want to save SATA ports for storage. If you want to avoid some work, or just don't know too much about how to set up a NAS, you can always use ZFSguru or FreeNAS.

Myself, I've got an AMD Athlon II X2, 4GB RAM (probably should get more...) and two raidzs: one raidz with 6x 1TB and one with 6x 2TB. Usually it's the receiving machine's hard disk that is the bottleneck; I do get a nice 80-90MB/s when I use my laptop, which has a good hard disk. And a scrub can take a whole night, even if it's only several TB.
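Since a scrub takes all night anyway, scheduling it helps; a minimal sketch using the system crontab (pool names and the schedule are placeholders):

    # /etc/crontab: scrub both pools early on Sunday mornings
    0  3  *  *  0  root  /sbin/zpool scrub tank1
    0  3  *  *  0  root  /sbin/zpool scrub tank2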

I've got 5 fans, a 650W PSU and 13 hard disks. I can assure you that it is the 13 hard disks that make the most noise. And I sleep 2m away from this thing. =.=
 
@xibo: What mobo are you using? An E3-1260L - that's going to need a bigger fan to keep it cool, isn't it?

@Bobbla: A 650W PSU? Isn't that just too power hungry? Not to mention that a 650W PSU + 5 fans must make a lot of noise :/ I'm OK with the setup itself (still deciding whether I'll use FreeBSD or OpenIndiana); the HW is what I'm seeking help with.

As I said, maybe my demands just aren't feasible. It simply cannot be greedy on power and has to be as quiet as possible. What counts as good performance for home usage is questionable, though.
 
matoatlantis said:
@xibo: What mobo are you using? An E3-1260L - that's going to need a bigger fan to keep it cool, isn't it?

@Bobbla: A 650W PSU? Isn't that just too power hungry? Not to mention that a 650W PSU + 5 fans must make a lot of noise :/ I'm OK with the setup itself (still deciding whether I'll use FreeBSD or OpenIndiana); the HW is what I'm seeking help with.

As I said, maybe my demands just aren't feasible. It simply cannot be greedy on power and has to be as quiet as possible. What counts as good performance for home usage is questionable, though.

What do you mean by power greedy? 650W is the maximum output. Just make sure you get a power-efficient one, i.e. 80 PLUS rated. As for the noise - define noise? I use 5 quality fans. A relatively "loud" low-frequency hum is in fact enjoyable. My laptop, on the other hand, has one clogged fan that makes a high-frequency, low-volume sound that would make you want to beat the crap out of it.
 
I find any noise from a PC/notebook annoying. I plan to put it behind the couch in the living room, but still, that sound bugs me.

I asked whether it's greedy or not because I don't know. 650W means maximum, but as they are not 100% efficient I was wondering.

It's just that you can buy a simple NAS with 2x SATA disks which has an active consumption of 18-20W, and here you present a 650W PSU - it confuses me (I'm not saying it cannot be true, it just confuses me).
 
I'm using a SuperMicro X9SCM-F with a 250W power supply, and I originally ran a 2U stock active heatsink that was good enough but noisy. Without the heatsink's fan plugged in, the CPU could run pretty well under 'normal' conditions but overheated after about 10 minutes of full load (cd /usr/src && make -j12 buildworld), and it kept running at around 50-60 degrees Celsius if I disabled two cores. Since the board is in a tower and not a rack, I installed a larger and more expensive heatsink which is both good enough to keep things cool without a fan and silent even when the fan is running, so I chose to keep the fan running for redundancy along with the tower fan.

If you want to go passive, use an i3-*-T or an E3-1220L, which are 35W and 20W respectively (and that is their maximum power consumption under full load, not the average).

You should keep in mind that there are 5-inch fans running at less than 20dB, while hard disks are usually noisier. And it's not only the CPU/GPU that produces heat - the HDDs do too, and they will probably keep producing more heat than your CPU to begin with: an HDD takes about 5W at idle, while a CPU takes around 3W at idle, and you have 1 CPU but multiple disks...

I agree 650W is a lot. However, the inefficiency should be limited, as the 80 PLUS rating is accounted against the power actually drawn, not the maximum possible.
 
I run a FreeBSD+ZFS NAS at home, based on the HP ProLiant MicroServer.

It's a small form factor server that can take four SATA disk drives. It has an AMD Athlon II Neo N36L dual-core CPU, a low-power CPU comparable to Intel's Atom. It can also take up to 8GB of ECC RAM.

It has one large fan in the back of the chassis and runs almost silently. The PSU is 150W.

I have four 2TB disks in mine, configured as a raidz1 pool, giving about 5.4TB of usable space (three data disks' worth: 3 x 2TB = 6TB, which is roughly 5.4TiB). It's handled everything I've thrown at it just fine, saturating the gigabit network connection when transferring files over SMB/CIFS.
 
I'm just a little bit puzzled by the amount of RAM and processing power suggested all over this thread for a standard NAS.

Beyond subjective impressions, or simply recommending hardware that happens to satisfy its owner, can't we dig deeper and establish the basic equations that help identify the hardware requirements, assuming:

- This framework is independent of the storage required (that would depend on the number of SATA connectors on the motherboard, for instance).
- Focus is on speed only, assuming a throughput within what a standard gigabyte Ethernet card could handle.

More specifically, is there anything specific to ZFS formatting that leads to something more complex than:
Maximum throughput = FSB clock x transfers per cycle x bus width?
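To plug in illustrative numbers (my own arithmetic, not anyone's benchmark): DDR2-800 on a 64-bit bus gives 400MHz x 2 transfers/cycle x 8 bytes = 6.4GB/s of theoretical memory bandwidth, which dwarfs the ~125MB/s a gigabit link can carry - so for a NAS the memory bus is rarely the bottleneck.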
 
@Toto: Well, the problem is: what does one picture under a 'home storage' specification?

I most certainly won't need deduplication - that is something very resource-consuming that provides very little (if any) benefit for me. I'm confident the amount of RAM is not an issue here.

There's no problem in choosing 'good enough' HW to satisfy my performance expectations; it gets interesting when it also has to be green and silent, though. I still haven't decided what to buy; I'm leaning toward waiting for Intel's Ivy Bridge.
 
A desktop, a private NAS, and even a small or medium-sized enterprise backbone doesn't have gigabyte Ethernet bandwidth. They have gigabit, which is 10 times slower.

For a "normal" NAS that you don't have any specific expectations of, a normal 100BaseT Ethernet card will do, as will JBOD and the other default configurations you get in a consumer NAS "box". However, you won't run ZFS on one of those. In fact, they're not even configured with anything other than Windows Home Server 2003 in mind (e.g. they don't ship drivers for Windows 2008).

ZFS is being sold as a file system of the next generation, but first and foremost it's a file system targeting dedicated systems in large-scale enterprises, so minimizing hardware requirements has been optional from the start. You don't need an effectively-current-generation workstation to run ZFS, but you shouldn't expect it to perform well on an Atom either. Also, one of the killer features of ZFS that everyone would like to use (if it weren't so expensive in terms of hardware) is deduplication, which needs more than 1GB of memory for each TB of storage. Some lesser features are transparent compression (again, expensive in terms of hardware usage) and block-level error correction (less expensive, but still a bottleneck if you have a slow CPU).

If you don't care for any of those, you won't need expensive hardware - and you arguably won't need ZFS: FFS (+gvinum) can do the storage job quite well too, and in many cases it can do it faster.

The other thing I was talking about is NFS, which puts considerable load on the server (it can be tuned somewhat) and also runs in kernel mode on top of that, slowing all the userspace processes on the NAS. Again, if you say you don't need NFS and FTP is good enough, your hardware requirements drop once again.

Btw, the FSB and memory timings are mostly irrelevant for a NAS, since the caching is done on the client.
 
Little late.. but meh :)
  1. When a PSU is rated 650W it means that it can deliver 650W; it will draw more from the power outlet depending on its efficiency.
  2. ZFS needs a lot of RAM, but no worries, as RAM is cheap at the moment.
  3. ZFS without any fancy functions does not need a lot of processing power.
  4. Multimedia is almost ALWAYS already compressed, so there's no need for compression.
  5. However, if you are going to resilver or scrub, it might be a good idea to have some CPU capability. But no worries: even the cheapest CPUs today will probably work, as long as it is NOT some cheap-ass Atom or equivalent.
  6. Sure, scrubbing/resilvering will take time depending on how much data you have and how fragmented it is, but fear not: it can run while you sleep at night, or while you're at work.
  7. The difference between bit and byte is 1:8, not 1:10 (see the worked numbers right after this list).
  8. I have come close to saturating a 1Gb Ethernet connection every time, but the HDDs in my other computers are always the bottleneck.
  9. A dedicated graphics card for a server is a waste of money.
  10. Something and stuff..
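To put rough numbers on points 7 and 8 (my own arithmetic, not a benchmark): 1Gbit/s divided by 8 is 125MB/s of raw capacity, and after Ethernet/IP/TCP overhead you can realistically move about 110-118MB/s - exactly the range where a single consumer HDD becomes the bottleneck.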

Eh, my 650W PSU drives 13 HDDs and everything else. I bought it at a local shop because my old PSU died when I really needed access to the server. And my fans are regulated, DOWN from super über-awesome speed to low enough that they are no longer the main noise source.

There was probably more, but I have forgotten.

Also, when will the dynamic block pointer (block pointer rewrite) appear? WHEN? If you wonder about how much wattage you might need, this will give you a hint: http://extreme.outervision.com/psucalculatorlite.jsp
 
Bobbla said:
Little late.. but meh :)

Not late at all - I haven't marked this as Solved; it's far from that. A lot of information has been shared, but I still haven't decided what to buy or what might be the best setup for my needs.

Anyway, thanks for the link, it might come in handy.
 
Hi,

I will share my experience because I've done a project like this.

My configuration is the following:
  • MB: Supermicro X7SPA-HF
  • CPU: Integrated Atom D510
  • RAM: 4GB DDR2
  • HDD: 4 x 2.5" WD 320GB in raidz1 + one 3.5" 80GB for the system
  • Case: Apex MI 100
  • PSU: Be Quiet SFX POWER 300

A quick test gave 90MB/s write speed and 110MB/s read speed. I have to say I haven't done any tuning on this config. It is essentially used as a file server with some jails running.
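By "quick test" I mean something along these lines (a crude sequential test; the file path and size are placeholders, and note that zeros compress to nothing if compression is enabled):

    # Sequential write: 8GiB, larger than RAM so the ARC can't hide the disks
    dd if=/dev/zero of=/tank/testfile bs=1m count=8192

    # Sequential read of the same file
    dd if=/tank/testfile of=/dev/null bs=1m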

I read that someone said Atom-based configs are to be avoided. I think it all depends on the needs you have and the options you enable.

Ah, and I forgot to mention that the box runs 24/7, sits about 2.5m from my bed and is very silent. Sometimes I wonder if it is running at all :D
 
@Toto, I find myself falling in love with this thread as well. There's not enough current talk like this out there (that I could find).

Bobbla said:
I've got an AMD Athlon II X2, 4GB RAM (probably should get more...) and two raidzs: one raidz with 6x 1TB and one with 6x 2TB.

Wow. Cool. What are you hooking all those HDDs to? Obviously more than the mobo, because it only has 6 SATA ports. Are you keeping any of it external? Or, if it's all internal, what case are you using? Your additional hardware setup could be interesting to those considering many drives.

Also, did we come to a verdict on the relative value of ECC RAM? There was some back and forth up there; I was wondering if this jury leans heavily to one side or is split.

And a curiosity: how old can hardware be and still run ZFS well? (Think LGA775/DDR2 ballpark, not 486s.) Too much to ask of a recycled box?
 