Dell C6220

Hello,

Has anyone got any experience with FreeBSD on the Dell C6220 series servers (basically four servers in 2U)? I'm looking for something to use as a virtualisation platform (ZFS + bhyve), and these seem to be fairly cheap second hand at the moment.

I'm always a bit wary, as I've invariably ended up with some sort of problem in the past. The most common one is disk controllers that aren't fully supported, or some low-end RAID controller with terrible pass-through support and no basic HBA mode, where you end up compromising with a bunch of single-disk RAID-0 devices that you have to manage through the RAID BIOS.

The ones I'm looking at seem to come with a standard Intel C600 controller, so I don't think I should have any problems with disks, other than possibly bottlenecking the interface if using SSDs.

The plan would be to run ZFS and bhyve, and I believe processors of this generation, such as the E5-2670, have the virtualisation support bhyve needs (I'm not really interested in pass-through).
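
For what it's worth, this is how I was planning to double-check that on the actual hardware before committing; just a quick sketch, run from a FreeBSD live environment on the node:

# bhyve on Intel wants VT-x with EPT; the CPU identification in the boot
# messages lists what the processor advertises, something like:
#   VT-x: PAT,HLT,MTF,PAUSE,EPT,UG,VPID
grep 'VT-x' /var/run/dmesg.boot

# Then confirm the hypervisor module loads cleanly:
kldload vmm
kldstat | grep vmm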
 
I had a look on eBay at these servers. The ones I found only seem to be 2-node versions, with dual CPUs in each node.
They do feature a 24-bay storage section. I do wonder about the backplane: is it 2x 12 bays or 3x 8 bays?
The controller used in these, the LSI 9265-8i, is well supported by FreeBSD.
$600 USD is a decent deal for a dual server.
The CPUs are the first-generation (v1) LGA2011 Xeons. There seems to be a C6220 II that supports the v2 parts; I would recommend that if it's not much more money.
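
One aside on the driver side: the 9265-8i is a SAS2208-based MegaRAID part, so on FreeBSD it attaches to mfi(4) by default, with mrsas(4) as the newer alternative. If I have the tunable right, you can hand it to mrsas via /boot/loader.conf:

# /boot/loader.conf - let mrsas(4) claim SAS2208-based cards instead of mfi(4)
hw.mfi.mrsas_enable="1"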
 
If you are interested in the dual-server-in-a-single-2U idea, there is also a Supermicro offering called the FatTwin.
These use 3.5" drives versus the 2.5" drives used on the Dells.

Which brings up an interesting topic.
Are 2.5" drives appropriate for your sever needs.
To me the smaller the drive the more heat they displace. On top of that jam 24 drives into a very small space and you end up with alot of heat to evacuate.

In some ways I think 12 bays of 3.5" drives are better than 24 bays of 2.5" drives, especially when using hard drives.
3.5" drives are cheaper even in the enterprise line.

Now if you plan on using SSDs, then 2.5" bays make more sense to me.
This is all subjective and just something to chew on.
 
Thanks for the input.

I'm looking at versions like the one below, which are 4-node systems. These have 6 disks per node. There are also 3.5" versions, but you only get 12 disks total (3 per node), which limits you to either a mirror plus a spare, or a 3-disk RAIDZ1. Six disks per node give a bit more flexibility (a stripe of 3 mirrors, a 6-disk RAIDZ2, etc.; see the sketch after the link below). I would be using SSDs, as capacity isn't a big concern and the performance is obviously vastly improved.

https://www.etb-tech.com/dell-power...4y2_TJol6ybf6GzAzVtDLBoCqNkQAvD_BwE&MSCheck=1
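
For illustration, the layouts I'm weighing up look roughly like this (the pool name and da0-da5 device names are just placeholders):

# 6 disks per node - stripe of three mirrors (better random IO):
zpool create tank mirror da0 da1 mirror da2 da3 mirror da4 da5
# ...or a single 6-disk RAIDZ2 (more usable space, survives any two failures):
zpool create tank raidz2 da0 da1 da2 da3 da4 da5

# 3 disks per node only really leaves:
zpool create tank mirror da0 da1 spare da2
zpool create tank raidz1 da0 da1 da2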

I can get a system complete with 4 nodes (2x 8-core Xeon, 64GB RAM, C600 SATA controller, 2x 1GbE / 2x 10GbE network, no disks) for around £1700-1800. Buying them as standalone systems (i.e. the R620/R720 sort of range), I'd probably be looking at £600-800 per server. Basically I'd be looking at getting 2 standalone servers, or spending a bit more and getting a single 4-node system.

It would be nice to have some sort of controller (such as an LSI) that's either an HBA or can be flashed to one. However, these Dell machines have either the C600 controller or their proprietary PERC stuff. I'm hoping the C600 controller *should* work just like a normal Intel chipset and be well supported, but it would be nice if someone else has had a chance to play with one. Looking around, I can't find anything from other manufacturers that would give me the same sort of capacity or flexibility as the above for a similar price.
 
Just some tangential findings here. I have been buying Supermicro LGA2011 boards myself. They are divided into two categories.
X9xxx boards and X10xxx boards. The X9 boards are Sandy/Ivy Bridge and the X10 boards are Haswell/Broadwell.
The C6xx controller on the X9 boards only has 2 SATA3 ports, with the rest SATA2, whereas the X10 boards have all 10 ports at SATA3.
So if you are going the SSD route you might want to ensure that each node has 6 SATA3 connectors.
The reason I mention this is that your choice seems to be a Sandy Bridge platform, and those were notorious for having only a small number of SATA3 ports.
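
If you do get your hands on one, the negotiated link speed per disk is easy to check from a running FreeBSD box; a sketch, assuming AHCI disks showing up as ada devices:

# SATA2 ports negotiate "300.000MB/s transfers", SATA3 ports "600.000MB/s":
grep -E 'ada[0-9]+:.*transfers' /var/run/dmesg.boot

# camcontrol also reports the drive's SATA revision and current speed:
camcontrol identify ada0 | head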
 
I also think you need to ask yourself: do I need the density of a rig like this?
These are built so dense to save money on data centre costs; nobody in a home setting needs 4 nodes in such a small space.
I was also thinking of redundancy when looking at the FatTwin models.
I have built all my 2U rigs with EMACS dual/redundant power supplies. They can run on one or two power modules (with an annoying alarm when down to one).
With these 4-node units it appears there is only one power supply per node.
 
Looking further at the offering, I notice two things. They do mention SATA RAID, so chances are this uses all SATA3.
I also noticed that they are using 2 power supplies for 4 nodes. I wonder what that means if you lose one power supply.
Do 2 nodes go down, or is one 1200W power supply enough to power 4 nodes and 24 drives?
 
As far as I'm aware they are supposed to continue to run with a single power supply, although 1200W does seem tight for 4 systems.

It would sit in a data centre, so density and noise aren't a problem either way. Pretty much the main reason for looking at these is the price: I can't get 4 systems of a similar spec for anywhere near the same value, even looking at other manufacturers' second-hand 2/4-node kit.

I suspect the C600 controller has 2 SATA3 ports and 4 SATA2, which is probably the biggest issue with them. Realistically, the only other option I can see is to try to find a couple of higher-spec standalone servers, but I'd be hard pushed to find 2 systems with the CPU/memory capacity of these 4 nodes for much less than the price of the whole unit. It's annoying, but other than performing a scrub, there isn't much reason for a bhyve host running various FreeBSD guests to be pushing the disks at 375MB/s+ that often. Some of the guests will be moving off a host that currently uses iSCSI over 1Gb Ethernet.
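
For a rough sense of scale (back-of-the-envelope figures, ignoring everything except line encoding): a SATA2 link runs at 3Gb/s, which is roughly 375MB/s raw or about 300MB/s usable after 8b/10b encoding; SATA3 doubles that to around 600MB/s; and iSCSI over 1Gb Ethernet tops out at roughly 110-115MB/s in practice. So even the SATA2 ports would be a big step up from the current setup.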
 
Here is a good thread with details:
 