Your experience with ASRock Avoton (vs Supermicro...)

Dear moderator, if I would be better off posting this to the FreeNAS forum, please let me know right away and I'll post it there instead, though I'll be running FreeBSD, not FreeNAS.

I'm looking at the ASRock C2550D4I, because it's $100 cheaper than the Supermicro MBD-A1SRi-2758F-O. Did they work the bugs out, and are there still any major issues (e.g. with their IPMI implementation, BIOS bugs, errors in the design that took a year to become clear, etc.)? I assume iXsystems has been working with ASRock, since they used their 8-core board in one of their NASes, but I haven't been able to find any 2014 articles. Also, is it likely that the fixes to the ASRock 8-core board were applied to this 4-core one?

Best regards,
Nick
 
I'm looking at the ASRock C2550D4I, because it's $100 cheaper than the Supermicro MBD-A1SRi-2758F-O. Did they work the bugs out, and are there still any major issues (e.g. with their IPMI implementation, BIOS bugs, errors in the design that took a year to become clear, etc.)?

Out of curiosity: Which IPMI issues, BIOS bugs and design errors are you referring to?

I've been running 10.0-RELEASE and 10.1-RELEASE (amd64) on a C2550D4I since January 2014, and haven't run into any major issues. The latest BMC and BIOS versions (00.19.00 and 2.10, respectively) have been rock solid in my experience. I'm using Crucial ECC memory FWIW.

The only issues I remember encountering are the long cold boot-up time (it can take more than 30 seconds for any video to be displayed after powering on) and having to mark the protective MBR partition inactive after sysinstall.
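For reference, that protective-MBR flag can be toggled with gpart(8). A minimal sketch, assuming the boot disk is ada0 (the device name is an assumption; adjust it for your system):

```sh
# Inspect the partition table; a GPT-labeled disk carries a protective MBR.
gpart show ada0

# Clear the active flag on the protective MBR (mark it inactive):
gpart unset -a active ada0

# The flag can be restored later with:
#   gpart set -a active ada0
```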

This being said, the board is installed in an 8-disk ZFS NAS box on my home network, and has thus only seen moderate workloads at best (file sharing for a handful of clients over NFS, SMB and AFP).

Furthermore, I've only used IPMI for monitoring temperatures and fan speeds using sysutils/freeipmi and sysutils/munin-node, as well as for modifying BIOS settings and performing upgrades over Serial-Over-LAN using sysutils/ipmitool. I don't have any experience with the BMC web interface/KVM Java Applet abomination, so I can't vouch for their stability.
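For anyone wanting to do similar monitoring, here's a rough sketch of turning ipmitool's sensor output into munin-style "name.value" lines. The sensor names in the comment are examples only (they differ per board), and the pipeline assumes the stock `ipmitool sdr` column layout:

```sh
#!/bin/sh
# Convert `ipmitool sdr type Temperature` output into munin-style
# "name.value N" lines.  Typical input lines look like:
#   MB Temperature   | 30h | ok  |  7.1 | 41 degrees C
#   CPU Temperature  | 31h | ok  |  3.1 | 38 degrees C
sdr_to_munin() {
    awk -F'|' '/degrees C/ {
        gsub(/^ +| +$/, "", $1)      # trim the sensor name
        gsub(/ /, "_", $1)           # munin field names: no spaces
        split($5, v, " ")            # "41 degrees C" -> v[1] = 41
        printf "%s.value %s\n", $1, v[1]
    }'
}

# On the box itself (requires ipmi(4) loaded and sysutils/ipmitool):
#   ipmitool sdr type Temperature | sdr_to_munin
```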

Sure, I'm not enamoured with the Marvell SATA controllers, and would've liked to see an internal USB 3.0 port on the motherboard (for running the OS off a USB stick), but it's been a great board considering its intended use and what I paid for it (around 295 EUR IIRC).
 
I'm not familiar with, or enthusiastic about, the ASRock Atom boards, but I am still in awe of the SATA port count on the A1SA7-2750F-O. If Nick hasn't responded by now, I'll assume he never will; it's ambiguous to me which bugs on which platform he meant. However, I'd like to share my own pros and cons with these systems, which many online forums are (dangerously) touting as great web-facing machines:

As of August 2015, no microblade with a Rangeley (C2558) CPU is available, despite its identical TDP and BGA package (FCBGA1283). Our ideal target would have had Hyper-Threading, but no server Atom offers it. Hesitant to put enterprise workloads on Atoms, we ran system-level tests for many clustered roles on the A1SAM-2750F vs the A1SRi-2758F/2558F, finding very little difference in CPU or RAM performance between these systems. The clusters didn't break or lag, so we moved on and purchased two quarter-populated MicroBlades with this lineup:

Avotons switched by the MBM-GEM-001, in MBE-628E-820/MBE-628E-420/MBE-628E-816/MBE-628E-416 enclosures; each blade was an MBI-6418A-T7H (C2750), not an MBI-6418A-T5H (C2550). The CMM features were dreamy: I was able to siphon the system and IPMI MACs into my orchestration and network suite sweetly.
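As a sketch of the kind of MAC harvesting involved, this is roughly what pulling a BMC MAC out of ipmitool looks like for a single node (the CMM aggregates this across blades; the hostname and credentials below are placeholders, not real values):

```sh
#!/bin/sh
# Extract the BMC MAC address from `ipmitool lan print` output,
# e.g. for feeding an inventory/orchestration database.
lan_print_mac() {
    awk -F': ' '/^MAC Address/ { print $2 }'
}

# Against a remote BMC (hostname and credentials are placeholders):
#   ipmitool -I lanplus -H blade01-ipmi -U admin -P secret lan print 1 \
#       | lan_print_mac
```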

PROS:
  • cheap price - about 33% cheaper (~$500 per Avoton 8-core/8 GB DDR3 vs ~$750 per E5 VM 8-core/HT + 8 GB DDR3)
  • Supermicro did a fine job on the CMM tools here
  • Very low power footprint
  • 2.5GbE network with EL6.6 support out of the box (finally, some pipe for SSD!)
  • Secret DOM port under the SATA compartment - despite the manual saying no second SATA port exists
CONS:
  • cheap performance - odd intermittent 2-3 second delays, plus:
  • 42% less performance for php-fpm vs a KVM guest on an E5-2620 v2
  • 39% less performance for nginx/httpd vs a KVM guest on an E5-2620 v2
  • 34% less performance for node vs a KVM guest on an E5-2620 v2
  • Secret DOM port under the SATA compartment - DOMs suck; why can't I have a second 2.5" HDD?
  • RAM in the Quanta T3048-LY8/T5032-LY6 seems to be limited to about 2.5G of bandwidth; memtest on the C2750 shows about 5G, and iperf to localhost exhibits only 12.5G. Can't touch the ~29G stated by Intel ARK.
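For anyone wanting to reproduce the loopback number, a rough sketch of a one-shot iperf (v2) run, plus a helper that pulls the bandwidth out of its summary line; absolute figures will vary with kernel version and TCP settings:

```sh
#!/bin/sh
# Extract the bandwidth from an iperf (v2) summary line, e.g.:
#   [  3]  0.0-10.0 sec  14.6 GBytes  12.5 Gbits/sec
iperf_bw() {
    awk '/sec/ { print $(NF-1), $NF }' | tail -n 1
}

# On the box (benchmarks/iperf port): start a server with `iperf -s -D`,
# then run the client against loopback and log the result:
#   iperf -c 127.0.0.1 -t 10 | iperf_bw
```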

Chassis for Intel E3/E5 differ: switched by the MBM-XEM-001, with MBE-628L-816/MBE-628L-416 enclosures, including the MBI-6118D-T4H / MBI-6118D-T2H (E3-1200 v3/v4), MBI-6118D-T4 / MBI-6118D-T2 (E3-1200 v3/v4), and MBI-6128R-T2X / MBI-6128R-T2 (E5-2600 v3). We tested the MBI-6128R-T2X in the wrong chassis and found that both Ethernet NICs terminate at the same switch. The density of the E5 and E3 microblades has some appeal, but the lack of SAS and the non-hot-swappable drives make them a poor choice for HA nodes. Our company will continue to use the ever-expanding range of MicroClouds instead. In my opinion, these microblades/SoCs may be better suited for NAS/SAN duty at best, while spending the extra money on full-fledged single-socket Xeon systems is well worth the cost in the enterprise.
 