Other How would you build your own NAS?

I built my NAS with a full size Antec tower with six drive bays.

My SuperMicro board has six SATA ports (counting the one for the optical drive), so I have six WD Red drives in a RAID-Z2 configuration and boot from a thumb drive.

I had the six in stock, but in retrospect I should have used larger drives for this movie server, as I am already out of space.

XigmaNAS V12.x works perfectly so I never upgraded to newer versions.
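For reference, a pool like the one described is a one-liner to create. The device names (ada0..ada5) and the 4 TB drive size below are hypothetical, so adjust them to your system:

```shell
# Pool creation on FreeBSD (run only on a machine with the drives attached):
#   zpool create -o ashift=12 tank raidz2 ada0 ada1 ada2 ada3 ada4 ada5
# RAID-Z2 spends two drives' worth of space on parity, so rough usable
# capacity for N equal-size drives is (N - 2) * drive size:
n=6 size_tb=4 parity=2
echo "usable: $(( (n - parity) * size_tb )) TB (before ZFS overhead)"
```

With six 4 TB drives that is roughly 16 TB usable, which explains how a movie collection fills it quickly.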
 
Commercial NAS systems have compact cases, but their CPUs often don't let you install the OS you want. And they are expensive.
You can buy consumer-grade used NAS boxes like ccammack mentioned for relatively fair prices. A few days ago I bought the predecessor of the box ccammack points to (used), and AFAICT it's semi-professional hardware. No ECC RAM, no battery-backed storage controller, but everything else is of more than reasonable quality; the case is excellent, 2.5 mm alloy, I can stand on it (65 kg).
I would be happy to have a "2-bay docking station" with a power supply and 2 eSATA outputs, no NAS,
just connected through SATA when I want to save something. But does that exist?
These devices are called DAS, Direct Attached Storage. I searched for "2-bay HDD SATA plug" on a well-known internet auction platform and got a plethora of results. Whether you're looking for new or used equipment, there's a huge selection either way.
 
I have such a box for attaching 2 SCSI devices, but haven't seen something analogous for SATA.
The boxes I see are connected to the computer through USB.
Ya, because USB 3.x is nowadays the common interface for transporting virtually any lower-level protocol. I have a dim memory that there exists a physical plug & interface for external SATA connections (miniSATA?), but I guess it's rarely used. USB 3.x should usually be fine, as it's fast enough for SATA III.
Nice to see you back!
Thx to you and all the others for your welcoming greetings! I'm happy to be here again & will bug you with my quirky ideas, silly comments etc. ;)
 
In this thread, I think we're focusing far too much on the enclosure (case, power supply). In reality, a good NAS distinguishes itself by using server-class internals (reliable storage devices, fault-tolerant RAM, and so on), but mostly by running good software. The definition of "good" is a tricky question and depends on the preferences of the user. Do they mean easy to manage? Do they mean performant and efficient? Do they mean powerful?

I have a dim memory that there exists a physical plug & interface for external SATA connections (miniSATA?), but I guess it's rarely used.
It's called eSATA. I have both a PCI card and an external enclosure at home, but no longer use it. The connector looks similar enough to a regular SATA connector that I sometimes confuse them; look for the bigger plastic and metal bits on the outside of the connector. I used to use it for an external (portable) backup disk that was usually connected to my server. It didn't work terribly reliably, and I couldn't easily find a long enough cable, so I ditched it in favor of USB 3, which has been flawless for me.
 
I have such a box for attaching 2 SCSI devices, but haven't seen something analogous for SATA.
The boxes I see are connected to the computer through USB.

SFF-8088: made for SAS, works for SATA (as eSATA) as long as 1) the controller supports SATA and 2) the enclosure is not overly "smart" and doesn't insist on SAS.

Example: Stardom ST8-U5 (I have this device, works great for SATA)
 
Example: Stardom ST8-U5 (I have this device, works great for SATA)
I see room for 8 disks there, but only two outputs.

Is SAS a bus, like SCSI? As far as I know SATA is not; in that case 8 outputs would be needed.


In this thread, I think we're far too much focused on the enclosure (case, power supply).
Yes, and exactly that was the theme of the thread from the beginning.
What you mention is of course very important, but perhaps a theme for another thread.
 
Is SAS a bus, like SCSI? As far as I know SATA is not; in that case 8 outputs would be needed.
Yes and no.

At one level, SAS is a point-to-point link. It goes from one thing to another thing. The first thing in a chain is in practice always an HBA. The second thing can be a disk drive; if you use it this way, SAS is like SATA, as far as topology is concerned. The second thing can also be an "expander", which takes that one SAS link and multiplexes it towards many outputs. Expanders are often found on SAS backplanes: One to four links coming from one or two computers (two for multi-path, two for multi-initiator for failover), and then anywhere between a few and many dozen outputs for disks. Expanders are often combined with enclosure controllers: those are the things that check whether a disk is physically in place, turn power on/off for the disks, and control indicator lights. Putting all this together, expanders/controllers can be highly complex and large chips, with many dozens of ports.

Interestingly, enclosure controllers are themselves SCSI devices. So an expander can have ports that real physical SCSI devices (namely disk drives) connect to, while also being a SCSI device itself. Expanders that don't have controllers are purely SAS things.

Making things more complicated are "octopus" cables. SAS cabling has standards for a single serial link, and then for 4 and 8 links together on one connector. An octopus is for example a cable that has one 8-way connector on the HBA end, and then 8 individual connectors for 8 individual disk drives.

So in practice, if you squint and ignore the details of expanders and octopus cables, SAS is indeed a bus. But it is also a point-to-point link.

Have I confused you sufficiently?
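To put rough numbers on the expander/wide-port idea: a 4-lane SAS-2 wide port carries 6 Gb/s per lane, and the drives behind an expander share that total. The lane and drive counts below are just the example values from this thread:

```shell
# 4 lanes x 6 Gb/s each (SAS-2); 8 drives hanging off the expander
lanes=4 gbps=6 drives=8
total=$(( lanes * gbps ))
echo "wide port total: ${total} Gb/s"
echo "per drive if all ${drives} stream at once: $(( total * 1000 / drives )) Mb/s"
```

Even with all 8 drives streaming simultaneously, each still gets about 3 Gb/s, far more than spinning rust can deliver, which is why this kind of oversubscription is normal.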
 
My home NAS is a tiny box from 2012 that's still chugging along on an Atom CPU. Sure, it's slow as heck, but it's just for backups and other low-usage data, so I don't really care. It has 6x4TB of spinning metal and a USB stick in there that boots the OS. The OS is configured to avoid writes to the USB stick so I won't be destroying the flash chip. I've had to replace the spinning metal over the years, but other than that it's been rock-solid.
 
Easy and cheap:
ZFS with SATA hard disks (2 TB or larger) in mirror mode.
For local-network file sharing, NFS, or Samba if you have "special" Android TV boxes that have problems with NFS.
 
I went the rather cheap route:
  • used Fujitsu TX1310 M3 (Tower Server)
  • 2 x 16 TB drives (ZFS mirrored)
  • 1 TB NVMe SSD
  • old drives for regular backups (ZFS)
The server has four 3.5" slots; when I need more storage, I can probably remove the optical and tape drives and cram in a couple more. Only downsides so far:
  • The server is not completely quiet; there is a bit of fan noise
  • You need a PCIe-to-M.2/NVMe adapter - there is an undocumented M.2 slot on the main board, but I couldn't get it to work
  • The drives are not hot-swappable - but I guess zpool export and then camcontrol standby should be good enough to swap the drives without powering off the server.
Works quite well so far.
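A sketch of that non-hot-swap shuffle, with hypothetical pool and device names (tank, ada2); the actual commands obviously need to run on the server with the real hardware:

```shell
# Swap procedure (commands shown as comments; they need the real hardware):
#   zpool offline tank ada2     # take the disk out of the pool cleanly
#   camcontrol standby ada2     # spin it down before pulling it
#   ...swap the drive physically...
#   zpool replace tank ada2     # resilver onto the new disk
# Rough resilver time for a full 16 TB drive at ~150 MB/s sustained:
size_tb=16 mbps=150
echo "~$(( size_tb * 1000 * 1000 / mbps / 3600 )) hours"
```

With a mirror, offlining one side this way keeps the pool available the whole time; the ~29-hour estimate is a worst case for a completely full disk.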
 
Maybe off topic, but it might belong more in this thread than in mine, where the title suggests it's only about /ROOT/default on a USB thumb drive vs. internal NVMe SSDs.
I'm fighting with the problem that I'd like to set up ZFS cache, ZIL/SLOG and special support vdevs on the faster NVMe devices for 2 storage zpools on the slower HDDs. Since I don't know in advance how big these support vdevs need to be -- I can only guess -- I did not want to put them on partitions, and put them on ZVOLs on another zpool on the NVMe SSDs instead. But the ZFS Best Practices Guide says:
  • Additional Cautions for Storage Pools

    Review the following cautions before building your ZFS storage pool:
    • Do not create a storage pool that contains components from another storage pool. Deadlocks can occur in this unsupported configuration.
On the genuine topic ("How would you build your own NAS?"), I've been perfectly happy for decades now with buying used equipment of reasonable quality. Usually the quality I get is somewhat higher than more up-to-date low-end consumer hardware (e.g. I buy so-called business-line laptops, or consumer-grade NAS boxes that were in the upper 33% of their class 4-5 years ago), and the performance/money is usually much better, too. IMHO in most cases even 10-15 year old hardware can serve a home network, if it's of better than low-end quality.
 
Each plug carries four SATA/SAS links at the same time.
You connect one to the HBA and get 4 devices in /dev?

Does your box treat the SATA disks as if they were just more SAS disks, or with their respective protocol?
Can I always connect a SATA disk in the place of a SAS one?
 
You connect one to the HBA and get 4 devices in /dev?

Does your box treat the SATA disks as if they were just more SAS disks, or with their respective protocol?
Can I always connect a SATA disk in the place of a SAS one?
Yes, if there are four disks plugged in on the other side.
It treats it as a disk attached to the HBA, which will have features appropriate to its type (SAS or SATA).
Typically, unless the HBA says otherwise.
 
You connect one to the HBA and get 4 devices in /dev?
Yep. I've salvaged a bunch of LSI SAS cards (various types: mpt(4), mps(4), etc.). Most of them are 8i cards, in other words 8 internal ports, so they can handle 8 SATA or SAS drives, or even more if you add an expander. Older cards aren't the fastest, obviously, but good enough for storing a bunch of media files.

Code:
# mpsutil show adapter
mps0 Adapter:
       Board Name: SAS9207-8i
   Board Assembly: H3-25412-00K
        Chip Name: LSISAS2308
    Chip Revision: ALL
    BIOS Revision: 7.39.00.00
Firmware Revision: 20.00.02.00
  Integrated RAID: no
         SATA NCQ: ENABLED
 PCIe Width/Speed: x8 (8.0 GB/sec)
        IOC Speed: Full
      Temperature: 94 C

PhyNum  CtlrHandle  DevHandle  Disabled  Speed   Min    Max    Device
0       0001        0009       N         6.0     1.5    6.0    SAS Initiator
1       0002        000a       N         6.0     1.5    6.0    SAS Initiator
2       0003        000b       N         6.0     1.5    6.0    SAS Initiator
3       0004        000c       N         6.0     1.5    6.0    SAS Initiator
4       0005        000d       N         6.0     1.5    6.0    SAS Initiator
5       0006        000e       N         6.0     1.5    6.0    SAS Initiator
6       0007        000f       N         6.0     1.5    6.0    SAS Initiator
7                              N                 1.5    6.0    SAS Initiator
There are 7 disks attached: a 4 x 3TB RAID-Z and a 3 x 1TB RAID-Z. They're all good old-fashioned 5400 RPM spinning-rust SATA disks.
Code:
# mpsutil show devices
B____T    SAS Address      Handle  Parent    Device        Speed Enc  Slot  Wdt
00   16   4433221100000000 0009    0001      SATA Target   6.0   0001 03    1
00   17   4433221101000000 000a    0002      SATA Target   6.0   0001 02    1
00   15   4433221102000000 000b    0003      SATA Target   6.0   0001 01    1
00   19   4433221103000000 000c    0004      SATA Target   6.0   0001 00    1
00   14   4433221104000000 000d    0005      SATA Target   6.0   0001 07    1
00   10   4433221105000000 000e    0006      SATA Target   6.0   0001 06    1
00   11   4433221106000000 000f    0007      SATA Target   6.0   0001 05    1
Those old LSI (now Broadcom) cards are rock solid, and you can probably pick these up second hand for peanuts. Those PCIe x8 cards will work fine in a PCIe x16 slot. So you could stick one on a mini-ITX mainboard for example.
 
I see that most consider the best solution to be what I have at the moment: a tower.
A solid metal tower is the most versatile thing. If you want hotplug, some Icy Dock or such should fit in.
But if you live in a lush apartment and care about design, something else is needed.
Otherwise the critical question is how much compute power might be desired: E-ATX or not.

For my part, I have a standard ATX high tower bought in the 80486 era, and haven't yet seen a reason to replace it. It now accommodates a dual Xeon-EP, 14 spinning 3.5" drives, 8 SSDs and 11 fans.
 
You connect one to the HBA and get 4 devices in /dev?

Does your box treat the SATA disks as if they were just more SAS disks, or with their respective protocol?
No, it should treat SATA as SATA, with the difference that the disks will appear as daX, not adaX. The commands are all SATA, but the controller/HBA may apply its own intelligence, so with cheap consumer devices there might be slight incompatibilities (these are not designed for each other).
 
I'm fighting with the problem that I'd like to set up ZFS cache, ZIL/SLOG and special support vdevs on the faster NVMe devices for 2 storage zpools on the slower HDDs. Since I don't know in advance how big these support vdevs need to be -- I can only guess -- I did not want to put them on partitions, and put them on ZVOLs on another zpool on the NVMe SSDs instead.
I wouldn't do that. This would work if the kernel code were stupid and straightforward. But in fact it is highly optimized to touch (and specifically, copy) a data block as rarely as possible. So what does your data block do: it gets fetched and put into the ARC, and then managed there, and written back to disk as necessary. But then the "disk" isn't a disk, it is a zvol - which again wants to manage that data block within the same ARC! Now is that a separate copy or isn't it? I don't know - and I don't want to know, I just wouldn't do it.

The ZFS cache is no problem; it can be destroyed and recreated.
The ZIL should also be possible to lose and recover, or renew, though there are a few more complaints along that path.
Special is the real issue, because it is part of the pool's base data, and losing it means losing the pool.
But special should be mirrored, so you could move one instance of the mirror to a larger accommodation, resilver, then do the other instance, and then let it expand.

That is the way I would go. (And as usual: test the procedures on a stick before doing them live.)
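That mirror-shuffle can be sketched like this; the pool name and partition labels are made up, and (as said above) the procedure belongs on a scratch pool first:

```shell
# Grow a mirrored special vdev one side at a time (hypothetical names):
#   zpool set autoexpand=on tank
#   zpool replace tank gpt/special0 gpt/special0-big   # resilver side 1
#   zpool replace tank gpt/special1 gpt/special1-big   # resilver side 2
# After both sides sit on the bigger partitions, the vdev expands.
# Sizing rule of thumb (very rough): metadata is on the order of 0.3%
# of pool data, so for a 20 TB pool:
pool_tb=20
echo "~$(( pool_tb * 1000 * 3 / 1000 )) GB special vdev"
```

The 0.3% figure is only a ballpark; small-block data stored on the special vdev (special_small_blocks) can push it far higher.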
 
I used the Fractal Design Node 804 case, which fits 8 3.5" HDDs and 4 SSDs. This allows you to build a 7-HDD vdev with RAIDZ2 layout, which maximises throughput and space yield, with a spare HDD plus a mirrored cache device and mirrored ZIL device using the 4 SSDs. This was a couple of years ago, so it has an ASUS MicroATX motherboard with a Xeon CPU and 64 GB ECC memory. Presently it runs the last version of TrueNAS CORE, based on FreeBSD 13.
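A sketch of creating that layout in one go; all device names (da0..da7 for the HDDs, adaXp1 for SSD partitions) are hypothetical:

```shell
# 7-disk RAIDZ2 + hot spare + mirrored SLOG + cache (needs the real hardware):
#   zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 \
#       spare da7 \
#       log mirror ada0p1 ada1p1 \
#       cache ada2p1 ada3p1
# Usable space: data drives = total drives minus two parity drives
echo "$(( 7 - 2 )) drives' worth of usable space"
```

One caveat: ZFS can't actually mirror cache (L2ARC) devices; listing two cache devices just stripes across them, which is fine because L2ARC contents are expendable.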
 