Your valuable feedback is welcome, as well as links & hints to reasonable hardware (or parts).
When researching (refurbished/used) pre-built systems, or (preferably used) parts to build my own homegrown SOHO microserver, I was shocked that most offerings do not fulfill even my most basic requirements and violate basic principles of engineering. Therefore, I'd like to share some thoughts. They are all guided by an emphasis on reliability and environmental friendliness: if a fan fails, a system in a "naturally cool" case has more time for a controlled shutdown, and re-using an existing case reduces waste and avoids the energy consumed to manufacture a new one. The mini-ITX form factor allows for one extra PCI card, mini-DTX for two, but many manufacturers meet the need for two PCI cards by using a riser card on an mITX board, which is perfectly OK IMHO. Let's begin with some

Basic physics & conclusions (left column: physics, right column: conclusion, and "No, there are no alternative facts to this, because there is no alternative physics")
| Basic physics | Conclusion |
| --- | --- |
| Warm air flows upward (if surrounded by colder air and not artificially blown or sucked elsewhere). | A fan working against this natural airflow needs more energy. Fans should instead amplify the natural airflow. |
| Colder air can dissipate more heat than warmer air. | If the fan fails, the parts likely to overheat quickly are the upper HDDs of a horizontal stack and the upper components of a vertically mounted motherboard. |
| The lifetime of electronic devices relates to their operating temperature: it shrinks in a hot environment. Thermal problems are the main source of failure for HDDs. | When the system is idle but the HDDs keep spinning and the fan is off, the upper HDDs of a horizontal stack run warmer and are thus more likely to fail. The same holds for non-rotating devices (SSDs). |
| Holes in the vertical sides of the case cannot easily be covered by accident (e.g. by a manual placed on top). Holes in the bottom or the lower vertical sides let fresh, cold air flow in. | Holes in the upper (preferably vertical) sides of the case let hot air flow out naturally, even when the fan fails or is off. |
| The PSU (power supply unit) is a source of heat. | The best placement is therefore: horizontal motherboard below vertical HDDs, with the PSU beside them, at the top, or external. Experience shows that a vertical motherboard is fine under most conditions. |
| Heat conductivity: aluminium (alloy) > steel > plastic/GRP. Parts made of plastic are more likely to break. | The preferred case materials are aluminium or steel. |
| A case with a standard bezel (I/O shield) for the motherboard's external connectors can easily be upgraded by swapping in a modern motherboard. | In contrast, most existing NAS boxes cannot be upgraded easily, unless one is able and willing to cut the bezel off the rear side of the case. |
| Parts (motherboard, PSU) of proprietary sizes cannot easily be replaced with commodity parts of standard sizes. | Avoid devices with a motherboard of a proprietary form factor. Watch out for the PSU connectors: the ATX type is the most compatible (caution: ATX 1.x vs. ATX 2.x). |
| For disks >9 TB, the likelihood of an undetected bit failure is astonishingly high. Naturally, this can also affect a stored checksum. | For disks >9 TB, a 2-way mirror, or any standard RAID level with a single parity disk, is no longer safe. Instead, use at least 3-way mirrors or RAID with more than one parity disk. The point here is that a 4-disk NAS box can then only be used for a 3-way mirror with one spare, not as RAIDn, n >= 3. |
| Two (or more) network interfaces can be configured for automatic fail-over, and/or bundled with a standardized protocol (LACP). | Usually, one network interface is sufficient for most use cases, until the day the plastic clip that secures the cable in the plug breaks... |
| Any reasonable means of remote management (e.g. a KVM switch) avoids having to plug in a keyboard and monitor to reach the console before the OS runs. A serial console is only sufficient if the boot loader can use it. | A dedicated network interface for LOM/OOB management allows placing the box in a remote office, cellar, cubbyhole, or wardrobe (mind the ventilation!). IPMI is a standardized protocol, and even some mITX mainboards support it. |
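To make the LACP row concrete, here is a sketch of a bonded pair of NICs using systemd-networkd; the interface names `enp1s0`/`enp2s0` and the file names are assumptions, and other distributions may use ifenslave or NetworkManager instead:

```ini
# /etc/systemd/network/10-bond0.netdev -- define the bond device
[NetDev]
Name=bond0
Kind=bond

[Bond]
# 802.3ad is the LACP mode
Mode=802.3ad
TransmitHashPolicy=layer3+4
MIIMonitorSec=100ms

# /etc/systemd/network/20-bond0-slaves.network -- enslave both NICs
[Match]
Name=enp1s0 enp2s0

[Network]
Bond=bond0

# /etc/systemd/network/30-bond0.network -- address the bond itself
[Match]
Name=bond0

[Network]
DHCP=yes
```

Note that 802.3ad requires LACP support on the switch; with a dumb switch, `Mode=active-backup` still gives plain fail-over when that plastic clip finally breaks.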
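The "more time for a controlled shutdown" point can also be backed up in software. A minimal sketch for a Linux host, assuming temperatures are exposed under /sys/class/thermal (the 85 °C limit and the sysfs path are assumptions to adapt to your hardware):

```python
import glob

def read_temps_millic(pattern="/sys/class/thermal/thermal_zone*/temp"):
    """Read all thermal-zone temperatures (millidegrees Celsius) from sysfs."""
    temps = []
    for path in glob.glob(pattern):
        with open(path) as f:
            temps.append(int(f.read().strip()))
    return temps

def should_shut_down(temps_millic, limit_c=85):
    """Return True if any zone exceeds the (assumed) critical limit."""
    return any(t >= limit_c * 1000 for t in temps_millic)

if __name__ == "__main__":
    # In a real deployment this would run periodically from cron/systemd
    # and invoke e.g. 'shutdown -h now' instead of just printing.
    if should_shut_down(read_temps_millic()):
        print("critical temperature reached, initiating shutdown")
```

A "naturally cool" case widens the margin between fan failure and the moment such a script has to pull the plug.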
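The ">9 TB" claim can be made concrete with a quick back-of-the-envelope calculation, assuming the commonly quoted consumer-drive unrecoverable read error (URE) rate of 1 in 10^14 bits and independent bit errors (real drives and workloads vary):

```python
def p_read_error(capacity_tb, ure_rate=1e-14):
    """Probability of at least one unrecoverable read error when reading
    the whole disk once, assuming independent bit errors."""
    bits = capacity_tb * 1e12 * 8          # decimal TB -> bits
    p_all_ok = (1 - ure_rate) ** bits      # chance every single bit reads fine
    return 1 - p_all_ok

# For a 10 TB disk this comes out to roughly 55%, which is why rebuilding
# a degraded 2-way mirror or single-parity RAID from disks this size is risky.
print(f"{p_read_error(10):.2f}")
```

This is the reasoning behind preferring 3-way mirrors or double parity for large disks: a single surviving copy must be read in full, error-free, to rebuild redundancy.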
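In practice, a BMC on a dedicated LOM interface is driven over the network with the stock `ipmitool` utility. A hedged sketch of a thin wrapper (the host address, credentials, and helper functions are made up for illustration; the ipmitool subcommands shown are standard):

```python
import subprocess

def ipmi_cmd(host, user, password, *args):
    """Build an ipmitool command line for a remote BMC (IPMI over LAN)."""
    return ["ipmitool", "-I", "lanplus",
            "-H", host, "-U", user, "-P", password, *args]

def power_status(host, user, password, run=subprocess.run):
    """Query the chassis power state of the remote box."""
    cmd = ipmi_cmd(host, user, password, "chassis", "power", "status")
    return run(cmd, capture_output=True, text=True).stdout

# Examples, against a hypothetical BMC at 192.168.1.50:
#   ipmi_cmd("192.168.1.50", "admin", "secret", "sensor")                    # temps, fans
#   ipmi_cmd("192.168.1.50", "admin", "secret", "sel", "list")               # event log
#   ipmi_cmd("192.168.1.50", "admin", "secret", "chassis", "power", "cycle") # hard reset
```

This is what makes the cellar/cubbyhole placement workable: power control, sensors, and the event log are reachable even when the OS is down.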