Soliciting input for compute node and storage server hardware

jrm@

I work in a group that does biological modelling. We are looking to get more computing power and more disk space and below is what we're considering. As far as I can tell there will be no issues putting FreeBSD on both the compute node and the storage server, but if anyone has experience with this hardware, your input is welcomed and appreciated.

Compute Node:
  • Supermicro AS-1042G-TF 4 Way Server
  • 4 x AMD Opteron 6238 CPU (Socket G34, 2.6 GHz)
  • 256 GB Memory (64 GB per CPU)
  • DVD-ROM Drive
  • 2 x WD 500 GB RE4 Enterprise SATA Drive

Storage Server:
  • Asus RS300-E7-PS4 1U Server
  • Intel Xeon E3-1230 v2 CPU
  • 32 GB Memory
  • 4 x Intel 60 GB SSD (520 Series)
  • LSI 9205-8e SAS Controller
  • Supermicro SC847E16-RJBOD1 filled with WD30EFRX 3 TB Hard Drives
 
Thanks @break19. We're going with the AMD Opteron 6238 because of its price per core. With four of them (12 cores each), we'll have a total of 48 cores for a decent price.
 
jrm said:
I work in a group that does biological modelling. We are looking to get more computing power and more disk space and below is what we're considering. As far as I can tell there will be no issues putting FreeBSD on both the compute node and the storage server, but if anyone has experience with this hardware, your input is welcomed and appreciated.

Compute Node:
  • Supermicro AS-1042G-TF 4 Way Server
  • 4 x AMD Opteron 6238 CPU (Socket G34, 2.6 GHz)
  • 256 GB Memory (64 GB per CPU)
  • DVD-ROM Drive
  • 2 x WD 500 GB RE4 Enterprise SATA Drive

The board is listed as tested with FreeBSD 8.2, and only with "SATA (without RAID, AHCI mode)":
http://www.supermicro.com/Aplus/support/resources/OS/OS_Comp_SR5690.cfm

But that's no big deal, I guess.
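
If the BIOS is set to AHCI but the disks still attach via the legacy ata(4) driver, loading ahci(4) explicitly should fix that. A minimal sketch for FreeBSD 8.x (ahci(4) is in the base system):

  # /boot/loader.conf -- attach the onboard SATA ports via ahci(4)/ada(4)
  ahci_load="YES"

After a reboot, dmesg | grep -i ahci should show the controller, and the disks appear as ada0, ada1, and so on.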


jrm said:
Storage Server:
  • Asus RS300-E7-PS4 1U Server
  • E3-1230V2 Xeon CPU
  • 32GB Memory
  • 4 x Intel 60GB SSD (520 Series)
  • LSI 9205-8e SAS Controller
  • Supermicro SC847E16-RJB0D1 filled with WD30EFRX 3TB Hard Drives

The Supermicro chassis also needs a "Chassis Power Card" to power up. (EDIT: I don't know if it is already built in.)
Make sure the drives, HBA, and chassis are compatible.
With the SAS2-847EL2 backplane of the SC847E16-RJBOD1 you can configure single host bus adapter failover, if the controller supports that mode and 4 x 6 Gb/s of bandwidth is enough for you.
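
Once it's cabled up, you can verify from FreeBSD that the HBA and all the backplane slots are visible. The 9205-8e is an LSI SAS2 HBA, so it should attach via the mps(4) driver (assuming a FreeBSD version that ships it):

  # the HBA should show up as mps0
  dmesg | grep -i mps
  # list every disk CAM sees through the expander
  camcontrol devlist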

---

If you want to use ZFS, I would switch to a better CPU, more RAM, and bigger SSDs.
What is the workload on the storage server? Billions of small files or a few really big ones? More writing or reading?
 
Thanks, @User23.

There will mostly be many small files, but also some large files. The typical usage scenario is to grab some biological data (genetic sequences stored in large archives) and analyze it with different models, generating many small files.

We will be using ZFS, so I will look into your storage server recommendations regarding CPU, RAM, and SSDs.
 
Some theoretical numbers on how much data needs to be read and written would help in planning that :)

--
You might already know about the following details.

If you have a lot of data to write over NFS, you might need a mirrored pair of SSDs as a ZIL (separate log device); see the sketch after the links below.

http://constantin.glez.de/blog/2011/02/frequently-asked-questions-about-flash-memory-ssds-and-zfs
http://constantin.glez.de/blog/2010/07/solaris-zfs-synchronous-writes-and-zil-explained
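
A minimal sketch of adding such a mirrored log device, assuming a pool named tank and placeholder SSD device names:

  # add two SSDs as a mirrored separate log (ZIL) vdev;
  # ada1 and ada2 stand in for your actual log SSDs
  zpool add tank log mirror ada1 ada2

Only synchronous writes (such as those an NFS server performs) benefit from the separate log; asynchronous writes go straight to the main pool.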

--

If you want to force the L2ARC SSDs to also cache big, sequentially read files, you need to tune it:
http://wiki.freebsd.org/ZFSTuningGuide#L2ARC_discussion
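
For example, via the stock loader tunables (the values are only illustrative; size them for your SSDs):

  # /boot/loader.conf
  vfs.zfs.l2arc_noprefetch="0"          # also cache prefetched (sequential) data, i.e. big files
  vfs.zfs.l2arc_write_max="26214400"    # raise the L2ARC fill rate (bytes per interval)
  vfs.zfs.l2arc_write_boost="52428800"  # fill faster while the ARC is still cold

The cache devices themselves are added with, e.g., zpool add tank cache ada3 ada4 (placeholder names again).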

--

Do you know yet what RAID level you want to use on those 45 drives?

Some old ZFS RAID level benchmarks (without ZFS tuning): http://forums.freebsd.org/showpost.php?p=119932&postcount=13
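
For comparison, a common layout for 45 drives is several equal-width raidz2 vdevs. A sketch with five 9-disk raidz2 vdevs and placeholder da0 .. da44 device names:

  # 5 x 9-disk raidz2 = 45 drives, two-disk redundancy per vdev
  zpool create tank \
      raidz2 da0  da1  da2  da3  da4  da5  da6  da7  da8 \
      raidz2 da9  da10 da11 da12 da13 da14 da15 da16 da17 \
      raidz2 da18 da19 da20 da21 da22 da23 da24 da25 da26 \
      raidz2 da27 da28 da29 da30 da31 da32 da33 da34 da35 \
      raidz2 da36 da37 da38 da39 da40 da41 da42 da43 da44

More, narrower vdevs give more IOPS for the many-small-files workload; fewer, wider vdevs give more usable space.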
 