Building a small, personal NAS

I'm planning a replacement for my home file server (low-end AMD running FreeBSD with 4x1TB HDDs in a RAIDZ). I'd like this one to be smaller, more power efficient and more robust. And obviously I'd like to have more storage space. I'm looking to build around the end of the year.

So here's my plan so far:

CASE: Lian Li PC-Q25B
PSU: either reuse one I have or buy a SeaSonic SS-300ET
MOBO: Intel DBS1200KP Mini ITX Server Motherboard
CPU: Intel Celeron G540 Sandy Bridge 2.5GHz
RAM: 2x 8GB Kingston ECC Unbuffered DDR3 (not sure which model exactly)
RAID: Supermicro AOC-USAS2-L8i (to be used in IT mode)
HDD: 5x 3TB WD Red (to be used in RAIDZ)

For the memory, does 1066 vs 1333 matter for a NAS? Beyond that, there are still a number of DIMMs that appear very similar but vary in price by up to $30. Should I just get the cheapest ECC memory or is there a real difference?

Right now, I have the OS running off of USB flash drives (just the root partition; the rest is mounted from ZFS), but I might grab a small SSD if I see a decent one on sale. I don't think my needs require anything special for L2ARC or ZIL. Buying enterprise SSDs for those would break the budget anyway.
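
One nice thing about ZFS here is that L2ARC and SLOG devices can be bolted on later without rebuilding the pool, so skipping them now costs nothing. A rough sketch, with placeholder pool and partition names:

zpool add tank cache ada1p1    # add an L2ARC device later if reads ever warrant it
zpool add tank log ada1p2      # add a separate ZIL (SLOG) for sync-write-heavy workloads
zpool remove tank ada1p1       # cache (and, on recent pool versions, log) devices can be removed again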

From what I can gather, all of this should have good compatibility with FreeBSD. The Intel board shouldn't be a problem and the onboard Intel NICs will save any potential Realtek headaches. The Supermicro SAS card is reported here and elsewhere to work well. I can't tell if it will come with IT firmware out of the box, but I don't have a problem with flashing that myself.
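
If it does ship with IR firmware, the usual route is LSI's sas2flash utility from a DOS or EFI shell, noting the SAS address from the listing first so it can be reprogrammed afterwards. Roughly the following; the firmware file names are only illustrative and would come from the SAS2008 IT firmware package:

sas2flash -listall                            # confirm the controller and current firmware type
sas2flash -o -e 6                             # erase the existing flash
sas2flash -o -f 2118it.bin -b mptsas2.rom     # write the IT firmware and (optionally) the boot ROM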

Anywho, I'm curious if this set-up sounds like a good plan. It's probably a bit overkill for my needs, but I'd like to build this properly so it'll last me a long time. Thanks!
 
You asked about memory clock on a NAS. I can tell you that a higher clock like 1600 MHz is always better than 1333 MHz; it makes a difference, cbunn, and you will notice the performance on every machine. Although you have a good solution too, which is changing the memory timings of that particular RAM.
 
ahavatar said:
Does Intel Celeron support ECC? I guess not.

I wondered that as well, but according to Intel's own documentation, the G540 does support ECC on a C206 board.

From http://download.intel.com/embedded/processor/prodbrief/325980.pdf
Based on 32nm process technology and next-generation Intel microarchitecture codename Sandy Bridge, the Intel Celeron processor G540 features dual-core processing with Intel HD Graphics and Error Correcting Code (ECC) capabilities (when paired with Intel C206 chipset).
 
cbunn said:
CPU: Intel Celeron G540 Sandy Bridge 2.5GHz
If you're even remotely considering toying with virtual machines, e.g. headless emulators/virtualbox-ose, I'd definitely choose a CPU with more cores.

I wanted to suggest having a look at the other CPUs that support the new AES instruction set. That would not make sense though, as they cost about four times as much. (It is funny that the Celeron G540 is listed as having no ECC support.)

The setup looks very promising. Please, do share your experience once you have it set up. I'd be glad to see some benchmarks/bonnie++ benchmarks.
 
marwis said:
If you're even remotely considering toying with virtual machines, e.g. headless emulators/virtualbox-ose, I'd definitely choose a CPU with more cores.
Nope. Not remotely interested. I occasionally use VMs on my desktop, but I don't have any desire to do that on this server. It will do a little web serving, but basically file serving is all it needs to do well.

marwis said:
I wanted to suggest having a look at the other CPUs that support the new AES instruction set. That would not make sense though, as they cost about four times as much. (It is funny that the Celeron G540 is listed as having no ECC support.)
AES support would be kinda cool, but as you said, I'd need to spend a lot more to cover what are only possible use cases. Plus, I think many of those CPUs are V2 chips, which are not compatible with this board. As for the G540 being listed without ECC support, it is perplexing. On the one hand, I'd like to trust the PDF from Intel stating it, but one never knows. What other means do I have to determine the truth (apart from trial and error)?

marwis said:
The setup looks very promising. Please, do share your experience once you have it set up. I'd be glad to see some benchmarks/bonnie++ benchmarks.
I'll be sure to do that. I'm clearly not the first guy to see this case as ideal for this purpose, but I suppose my collection of components is unique. As it happens, the Supermicro SAS card is on sale at Newegg now for about $112 (originally $140), so I might have to jump on that.

I'm not terribly familiar with benchmarking, but I'll definitely look into that tool.
 
freesbies said:
You asked about memory clock on a NAS. I can tell you that a higher clock like 1600 MHz is always better than 1333 MHz; it makes a difference, cbunn, and you will notice the performance on every machine.

Uhh...

No you won't.

The NAS is going to be IO bound waiting for disk access. You MAY notice some small performance difference in CPU/memory bandwidth constrained apps, but a NAS?

Nope.


And yes, I'd recommend a CPU with accelerated AES (if you plan on running encryption; if not, don't bother), and if you're planning to run VMs, look for one with VT-d support.
 
throAU said:
The NAS is going to be IO bound waiting for disk access. You MAY notice some small performance difference in CPU/memory bandwidth constrained apps, but a NAS?

Nope.

Also, the Celeron G540 is limited to 1066MHz RAM anyway.

And if anyone is wondering, I posted the question about ECC memory with the Celeron G540 to Intel's support board and was told that ECC support depends on the chipset and not necessarily the CPU:

http://communities.intel.com/message/172902#172902
 
Yeah, it sold out quickly. Even more so on Amazon. But now it's back on Amazon for $99 as a Lightning deal ($109 regular price). I think the WD Red is still the better long-term buy.
 
Several stores (Newegg, Amazon, B&H) are offering the Seagate 3TB for around $90 now. And Newegg has a deal for 20% off all server motherboards, but the one I want is temporarily out of stock. The Universe does not want me to save money on this build, it seems.
 
Think of it as the universe trying to guide you away from particular choices towards others. Maybe not for your benefit, but still...
 
You're putting the setup together for the next couple of years. Saving $100 would be nice but won't really make a difference in the long run.
 
marwis said:
You're putting the setup together for the next couple of years. Saving $100 would be nice but won't really make a difference in the long run.

That's what I keep telling myself. But the savings are more like $350 across five drives.
 
What's your data worth? That's a serious question. The drives used for disposable media storage can be a different grade than for archival data.
 
cbunn said:
RAID: Supermicro AOC-USAS2-L8i (to be used in IT mode)

Are you sure the UIO card works with a non-supermicro board, i.e. can be used with a generic PCI-express interface?
 
xibo said:
Are you sure the UIO card works with a non-supermicro board, i.e. can be used with a generic PCI-express interface?

Yes:)

It's a little tricky though; the whole board is "upside down", with the components facing "up" instead of "down", so you have to reverse the back-plate to make it fit the screw holes on the back of the chassis. Problem solved by using a few motherboard spacers.

/Sebulon
 
wblock@ said:
What's your data worth? That's a serious question. The drives used for disposable media storage can be a different grade than for archival data.

Well, none of the data will be irreplaceable. The bulk of the space will be taken up by music and movies, for which I retain the optical media. The really important stuff (mainly my photos) is also kept on my desktop, laptop and cloud backup (CrashPlan). So it's a mix of media and archival, but it's not mission critical by any means. I just like to over-engineer these kinds of things to remove headaches down the line and to make the system usable for as long as possible.

xibo said:
Are you sure the UIO card works with a non-supermicro board, i.e. can be used with a generic PCI-express interface?

The interface is PCIe, but the bracket is UIO. So the simplest solution is to remove the bracket and install the card without any bracket at all. But there are also aftermarket brackets that will fit onto these Supermicro boards. I don't think they include the LED holes, but I'll rarely be looking at them anyway.
 
@cbunn: Did you buy it yet? I'm /still/ struggling with what HW to buy for home storage, and your setup seems interesting..
 
matoatlantis said:
@cbunn: Did you buy it yet? I'm /still/ struggling with what HW to buy for home storage, and your setup seems interesting..

Nope, not yet. So far, I've bought the Supermicro HBA card and a couple of fans, but that's about it. I've got price alerts on the different pieces and I'm hoping for a sale here and there (which is what happened with the Supermicro card and the fans). The main issue is the cost of the HDDs. If the price of those comes down a bit, I'll bite. Additionally, I didn't realize that 8GB DIMMs of unbuffered ECC RAM are so expensive. The cheapest I can find for a 16GB set is $125 at Amazon.

As I said before, I'm not in a huge rush to build. If no price drops come along, I'll probably bite the bullet in January. When that happens, I'll be sure to post here with the results. :)
 
Just as an update, the last of the necessary parts (SAS breakout cables) came in the mail today, so I'm putting things together today. I hope to have some pics up soon and then start installing FreeBSD and doing some hardware tests.
 
So I completed the build a few days ago and have been running some tests since then (MemTest, hard drive diagnostics, etc.). None of the parts I mentioned in the first post changed. I got Kingston ECC RAM and some SAS cables from Monoprice (~$10 per cable). I bought a Seasonic 300ET so that I can keep the old server running for now. For CPU cooling, I used the stock Intel HSF from the Core i7 in my desktop system (that machine has an aftermarket cooler), since the i7 stock unit has a copper core. I forgot how much I hate those pushpins. It seems quiet enough for my purposes. I replaced the Lian Li fans with Cougar fans. They're pretty quiet and have fully sleeved cables.

For the Supermicro card, my intention was to use an aftermarket PCI bracket since the UIO one doesn't fit and I didn't want the card to come loose. Well, it would seem that the PCI bracket expects smaller screws, because I had a hard time getting them in and one sheared off, making the bracket useless. I might end up getting another and drilling it out a bit, but for now the card is going commando.

A couple of things have been annoying, mainly to do with the BIOS on the Intel S1200KP board. It takes a long time to boot, for one thing. Also, it doesn't seem to honor the setting which would remove the Intel splash screen. And during that splash screen, I've had issues getting the BIOS to acknowledge me hitting F2 or F10 on the keyboard. I have to pretend I'm doing Morse Code for it to work, it seems. It also seems that the BIOS incorrectly reports the CPU temperature. When I check it in the BIOS, it has a high reading (56C+), but when I use coretemp in FreeBSD, it's more on the order of 48-49C. I can't imagine a reason why the BIOS would heat up the chip.
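
For anyone wanting to reproduce the comparison, the FreeBSD side is just the coretemp(4) driver plus a sysctl; for example:

kldload coretemp                                   # on-die digital thermal sensor driver
sysctl dev.cpu.0.temperature                       # per-core reading; repeat for dev.cpu.1
echo 'coretemp_load="YES"' >> /boot/loader.conf    # load it automatically at boot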

Something I'm not sure about is if I should expect to see a BIOS for the Supermicro card. I thought RAID cards usually had some firmware settings between the normal BIOS and the OS loading. But I don't see anything. It would seem that the IT firmware is already loaded, because the disks appear as stand-alone drives (da0-4) with no mention of any kind of RAID setup. So shall I just leave it at that?
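
Two quick sanity checks from FreeBSD, for what they're worth (device names here are just whatever the driver assigned):

dmesg | grep mps      # the mps(4) driver prints the controller model and firmware version at attach
camcontrol devlist    # with IT firmware the disks show up as plain da(4) devices, with no RAID volume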

As for the case, it's very well made, but there's an issue with the I/O shield. Most cases are thin steel there, but this one's thick aluminum makes installing the shield difficult. I felt like I was going to break it.

Having said all that, I'm pretty happy with the results so far. The case, while difficult to work in, is very attractive and small on the outside. The hard drive bays are a very nice touch. Something I didn't see mentioned elsewhere is that it has a set of Molex and SATA power inputs for them, so you don't have to worry about which connectors your PSU has more of. All the drives seem to be working just fine so far (currently running a few passes of badblocks, then checking the SMART info).

Initially, after installation, I was having issues where the prompt would hang for a few seconds after some commands and during login (sometimes long enough for Putty to time out), but that seems to have stopped.

Are there any hard drive benchmarks I should run to see what kind of performance I can expect?

Because these are 4K drives that lie and report 512-byte sectors, do I need to do that gnop(8) trick when creating my ZFS pool? I'm not sure if anything has changed since I last read about it.

What's the best way to transfer the data? I see sysutils/zxfer mentioned here occasionally. Is that the best method for maintaining data integrity?

I'm at work, but next chance I get at home, I'll upload some pics.
 
cbunn said:
Something I'm not sure about is if I should expect to see a BIOS for the Supermicro card. I thought RAID cards usually had some firmware settings between the normal BIOS and the OS loading. But I don't see anything. It would seem that the IT firmware is already loaded, because the disks appear as stand-alone drives (da0-4) with no mention of any kind of RAID setup. So shall I just leave it at that?

Some cards have separate management programs. If you planned to use it this way, good enough.

All the drives seem to be working just fine so far (currently running a few passes of badblocks, then checking the SMART info).

There's not much point in looking for a list of bad blocks with drives made in the last two decades, which handle block remapping in firmware. Filling the drive with zeros with dd(1) has about the same effect. Of course, all of this is just an attempt to catch drives that are going to fail early, and it doesn't always work. I had a WD Red 1TB that showed no bad blocks, yet had a severe error that prevented it from being used in a mirror and failed the short or long SMART tests from sysutils/smartmontools instantly.
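
As a sketch, the equivalent write test plus SMART check looks like this (da0 is only an example device, and the dd run destroys everything on it):

dd if=/dev/zero of=/dev/da0 bs=1m    # write the whole surface once (destructive)
smartctl -t long /dev/da0            # then run a long self-test
smartctl -a /dev/da0                 # once it finishes, check reallocated/pending sector counts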

Initially, after installation, I was having issues where the prompt would hang for a few seconds after some commands and during login (sometimes long enough for Putty to time out), but that seems to have stopped.

That doesn't sound good. Don't know what would cause it, though.

Are there any hard drive benchmarks I should run to see what kind of performance I can expect?

benchmarks/bonnie++ for a hard-to-read but mostly realistic result, diskinfo -tv for a quick, absolute best-case result.
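
Something like this, for example; the bonnie++ target directory is just a placeholder, and by default it writes a file roughly twice the size of RAM to defeat caching:

diskinfo -tv /dev/da0              # raw best-case transfer and seek numbers for a single disk
bonnie++ -d /tank/bench -u root    # filesystem-level run on a directory inside the ZFS pool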

Because these are 4K drives that lie and report 512-byte sectors, do I need to do that gnop(8) trick when creating my ZFS pool?

Yes.
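
For reference, a minimal sketch of the trick; pool and device names are placeholders, and one 4K provider per vdev is enough because ZFS uses the largest sector size among its members:

gnop create -S 4096 da0                          # temporary provider that reports 4K sectors
zpool create tank raidz da0.nop da1 da2 da3 da4  # build the raidz on top of it
zpool export tank
gnop destroy da0.nop                             # the .nop device would vanish on reboot anyway
zpool import tank                                # the pool imports fine on the plain da0
zdb -C tank | grep ashift                        # should report ashift: 12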

What's the best way to transfer the data? I see sysutils/zxfer mentioned here occasionally. Is that the best method for maintaining data integrity?

It looks like that's either for ZFS-to-ZFS (zfs send/receive), or just uses net/rsync. Which can be done manually, but maybe it knows the best combination of options.
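
Since the old server is already running ZFS, a recursive snapshot plus send/receive over SSH is probably the cleanest path; roughly, with placeholder pool, dataset and host names:

zfs snapshot -r oldtank/data@migrate                                 # consistent point-in-time copy of everything
zfs send -R oldtank/data@migrate | ssh newnas zfs receive -d tank    # replicates datasets, properties and snapshots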
 