Storage server advice

Hi,

I'm in need of a storage server for our bhyve VMs...

I have an offer for a PowerEdge C2100 Rack Server for £500...
It comes with 2x Intel Xeon X5650
and 24x 146GB drives...

Could anyone please advise whether I should get it?
 
What is your use case? Speeds and feeds?

I would propose one of two alternatives. Alternative 1 is an Intel Atom, running at 1GHz, with 2GB of RAM, built-in single gigabit ethernet on the motherboard, and two SATA disks. A very efficient and convenient storage server. Alternative 2 is a pair of servers, each with a dual-socket motherboard and the highest-powered Intel server-grade chips you can get (a pair of IBM Power9 CPUs would be even better, but those are too expensive for normal people); put 3-4 LSI/Broadcom/Avago SAS cards on the PCIe bus, and for networking add 3-4 Mellanox 100gig cards (preferably running as InfiniBand, since it has lower latency than Ethernet). Then connect a dozen external disk enclosures (I'm partial to NetApp or Seagate enclosures) with ~60-100 SAS disks each, fill them with 12 or 14TB drives, and add a handful of SSDs for booting and internal fast caches in each server. This alternative is neither efficient nor convenient, but it can serve 30-40 GByte/second easily, and store about 10PiB. It also happens to cost perhaps 100x or 1000x more than alternative 1.

I have built both alternatives. They both work excellently, and are cost-effective and productive for their respective intended usage. Before I can answer the question of what hardware you should use, you need to tell me what your requirements are: how much capacity and bandwidth? Are you serving block storage or a protocol (like CIFS or NFS)? What is your workload?

Speaking of your intended hardware: Dell rack-mount machines have a good reputation; the C2100 is one of their standard pizza box machines, so it should work fine. How will you connect your external disk enclosure? Where will you put the network connectivity?

But using 24 drives of 146G each seems insane. Your total raw capacity will be only about 3.5TB (24 x 146GB), and noticeably less after taking redundancy into account. You can get that much more easily with 1 or 2 modern drives. Also, 146gig drives have not been manufactured in many years, so you are looking at old or used drives. Those will have significantly higher failure rates than new production drives, and with 24 drives you will have 12x or 24x as many failures as with 1 or 2 drives. With this many drives, you will need a good RAID implementation, and you will be replacing drives regularly. You might think that you need that many actuators for high random IO workloads, but if that's the case, you should just use one or two modern SSDs.
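To put rough numbers on it, here is what two common ZFS layouts would give you from those 24 drives (device names are placeholders):

# two 12-disk raidz2 vdevs: 2 x 10 x 146GB = ~2.9TB usable, survives 2 failures per vdev
zpool create tank \
    raidz2 da0 da1 da2 da3 da4 da5 da6 da7 da8 da9 da10 da11 \
    raidz2 da12 da13 da14 da15 da16 da17 da18 da19 da20 da21 da22 da23

# 12 two-way mirrors would be faster, but only ~12 x 146GB = ~1.75TB usable

A single modern 4TB drive beats either layout on capacity, with 1/24th the number of things that can fail.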
 
ralphbsz
At the moment I run 2 R610s in a CARP environment and I am running out of storage. Both R610s have 6x 146GB.

There are a few reasons to have one storage server:

1. I mainly run WordPress websites, and in order to provide HA I need shared storage for the session files, uploaded files, etc.
2. I run a few Ubuntu servers via bhyve. I want to be able to do bhyve live migration for server maintenance. That, again, requires shared storage to work.
3. In the future I might offer VPSes to a few clients that will run on bhyve, and storage will become an issue.
4. I run everything on ZFS, so it would be cool to have ZFS snapshots sent every couple of minutes for backup (roughly as in the sketch after this list).
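Roughly what I have in mind for point 4 is something like this (pool and host names are just placeholders; in practice a cron job would rotate timestamped snapshot names):

# take a snapshot and send it incrementally to the backup/storage box
zfs snapshot tank/vm@now
zfs send -i tank/vm@prev tank/vm@now | ssh backuphost zfs receive -F backup/vm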

I am aware that 146GB is small and old, but the thought process is that once I have created the zpool, I can replace the disks with bigger, newer ones as and when required.
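The grow-in-place workflow I'm thinking of, if I've understood it correctly (pool and device names made up):

# let the pool grow once a whole vdev has been upgraded
zpool set autoexpand=on tank

# replace one small disk with a bigger one, wait for the resilver, repeat
zpool replace tank da3 da24
zpool status tank

The extra space only appears after every disk in that vdev has been replaced and resilvered.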

As I don't really know how shared storage works in FreeBSD, I was considering using FreeNAS via iSCSI. This decision is still in progress, as I know that GlusterFS is also a possibility on FreeBSD.
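From what I've read, plain FreeBSD can also export a ZFS zvol over iSCSI with ctld(8), so FreeNAS might not be strictly required. Something like this seems to be the shape of it (pool, zvol and IQN names made up):

# a zvol to export as a block device
zfs create -V 500G tank/vmstore

# /etc/ctl.conf
portal-group pg0 {
        discovery-auth-group no-authentication
        listen 0.0.0.0
}

target iqn.2024-01.org.example:vmstore {
        auth-group no-authentication
        portal-group pg0
        lun 0 {
                path /dev/zvol/tank/vmstore
        }
}

# then ctld_enable="YES" in /etc/rc.conf and: service ctld start
# the other boxes would attach with iscsictl(8)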

I have a lot of R610s (all with 6x 146GB) at my disposal at the moment. Am I better off sticking with those and putting in bigger drives?
I welcome any advice.

Thank you
 
I also have 2 Dell PowerVault MD1200s fully loaded with 12x 3TB 7.2k drives... not sure if they are a better fit or how to implement it all.
 
£500
Could anyone please advise whether I should get it?
Two simple answers:
1. NO, not for that price, because the first thing you have to do is replace the drives with higher-capacity drives (additional cost). 24 drives also consume a lot of power relative to the capacity (and you won't get the full 24x 146GB because of RAID or ZFS mirrors). Today, 24x 146GB is totally useless for a storage server.
2. Perhaps YES at a better price, if it comes fully loaded with 192GB RAM, dual 10GbE (or an FC HBA), an IT-mode (real JBOD) HBA for ZFS, and so on... ;-)
--edit: I didn't see your previous 2 posts (we were posting at the same time)--
 
.... fully loaded with 12 x 3TB 7.2k.. .....
sounds better than 24x 146GB :)

I was considering using....... iscsi. ... glusterfs ....

You can consider Fibre Channel for connecting machines (FC supports long cable distances), and the HBA on the storage machine (if not an FC HBA itself) should support JBOD for ZFS.

---edit: ---
iSCSI is for GbE NICs / the FC protocol is for FC HBAs
 
here is what I have in terms of hardware..
10+ R610, 48GB RAM
3x Dell PowerConnect 5548
2x Dell PowerVault MD1200
 
here is what I have in terms of hardware..
10+ R610, 48GB RAM
3x Dell PowerConnect 5548
2x Dell PowerVault MD1200
You've got all that expensive hardware??? WOW, you really don't need that additional machine for £500 :)
One moment please, let's think about how to implement your storage server...
 
I'm still reading the tech specs of your stunning hardware park :)
First thoughts:
Pull RAM out of some of the R610s and load 1 (or 2) machine(s) fully with RAM...
Pull 1 PERC H810 HBA from the MD1200 setup and put it into the fully-RAM-loaded R610.
You will have a 6Gbit/s SAS connection from the R610 to the MD1200, enough for a start...
With 12x 3TB you are free of the running-out-of-storage problem (and you additionally have a backup MD1200 with another 12x 3TB)...
Later you can buy another H810 (if needed) and uplink the next MD1200...
(Little) problem: the H810 does NOT support JBOD for ZFS... I'm still thinking about how to combine its RAID with ZFS...
The PowerConnect 5548 has 2x 10GbE (and a lot of 1GbE) and 2x SFP+ (I'm still thinking about whether that's useful for your setup or not)... see you later
 
... so your uplink from the FreeBSD server to the storage server is SAS at 6Gb/s. Advantage: it is uplinked from one HBA to the other and mounted as local storage... so no need for iSCSI or FC for now...
You could, e.g., additionally uplink that new 3rd R610 to your other 2 R610s via FC or GbE (with FC HBAs) to share storage across all your FreeBSD machines... that's where the PowerConnect 5548 comes in...
OK, we must think about power consumption if using 2 or 3 PowerConnects for a 10GbE uplink... but for now you have all the storage you need available (I don't know whether you have SAS cables; you will need
1 external mini-SAS SFF-8088 cable to connect 1 R610 to 1 MD1200 via the H810). It should be a relatively easy setup... and you're not alone here ;-)...

First summary / estimate for your first setup: 1 H810 PCIe HBA out of the MD1200 setup into 1 R610's PCIe slot,
1 external mini-SAS SFF-8088 cable to connect the 2 machines.
Just access your new external storage-bomber disks (12x 3TB) locally via the H810 HBA under your FreeBSD machine.
I hope I did not mess up anything about your existing hardware
:)
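If it helps, once the H810 and the cable are in place, a quick sanity check that FreeBSD sees the controller and the MD1200 disks (assuming the H810 attaches via the mfi(4) driver) could look like:

mfiutil show adapter     # controller model / firmware
mfiutil show drives      # the 12x 3TB physical drives in the enclosure
mfiutil show volumes     # virtual disks the controller currently exposes
camcontrol devlist       # what the OS actually sees as disks

How the drives show up depends on how the controller presents them (RAID volumes vs. single-drive volumes).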

---- offTopic: ---
what I personally do:
1 storage server with JBOD (an IT-mode-flashed HBA) and ZFS (not HW RAID), connected via Fibre Channel to other Unix servers (also ZFS on JBOD).
FC has its own disk-sharing protocol, similar to iSCSI.
After the FC connection is up, I mount the storage server's disks via ZFS / zpool import on the other servers.
With several FC HBAs I can also work in the opposite direction (the storage server mounts disks from the other servers).
My FC switch recently burned out, so I'm currently working with many FC HBAs via direct connections until I have time to get a new switch.
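A rough illustration of that zpool hand-off (pool name is just an example; the disks must be visible to both hosts over the FC fabric):

# on the machine that currently owns the pool
zpool export tank

# on the other machine, which sees the same LUNs
zpool import            # lists importable pools
zpool import tank

The important rule: never have the pool imported on two hosts at the same time.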
------
It would also be feasible, for relatively little money, to connect your new R610/MD1200 setup via Fibre Channel or iSCSI to your other R610s through an FC switch or GbE, so you can scale as needed. I like Fibre Channel; theoretically, the FC cables can be routed from machine to machine over the distance of 2 houses or even further...
But for now you don't need to set up FC or iSCSI, because you can just uplink via SAS.
--

--
Extremely offTopic: I'm using IBM/Fujitsu/Apple hardware, just to clarify that this is not a Dell advertising thread. But I would consider using Dell if I found a 12x 3TB MD1200 in my basement. Ha ha :)
--
 
To begin with: my two alternatives above were not meant seriously as practical examples, only to show what the extremes are, and that there is an enormous amount of room in the middle.

First: You say you have two business logic servers for redundancy. That's good. But then you are proposing to have only one storage server (for a total of 3 servers). What makes you think that the storage server will be reliable enough? The obvious answer is: Also make the storage servers into a pair (now you have 4 servers). Unfortunately, I don't know of a free software active-active storage server solution that provides instantaneous failover (all the ones I have built or used were proprietary and expensive). Here is what I would propose: Get rid of the storage server completely, and connect both of your WordPress servers directly to the external SAS enclosure. Then in some fashion make sure that only one of the two servers is ever "running", meaning only one has the disks in the enclosure mounted. No, I don't know how to do that, but there must be some way to build an active/passive failover system, using ingredients such as CARP and ZFS.
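Purely as a hedged sketch of the kind of thing I mean (I have not built this; interface names, addresses, the pool name and the helper scripts are all invented, and it completely glosses over fencing, which is the hard part):

# /etc/rc.conf on each head (advskew 0 on the preferred head, higher, e.g. 100, on the other)
ifconfig_em0="inet 192.0.2.11/24"
ifconfig_em0_alias0="inet vhid 1 advskew 0 pass s3cret alias 192.0.2.10/32"

# /etc/devd.conf: react to CARP state changes on vhid 1 of em0
notify 10 {
        match "system"    "CARP";
        match "subsystem" "1@em0";
        match "type"      "MASTER";
        action "/usr/local/sbin/takeover.sh";   # would do: zpool import -f tank; start services
};
notify 10 {
        match "system"    "CARP";
        match "subsystem" "1@em0";
        match "type"      "BACKUP";
        action "/usr/local/sbin/release.sh";    # would do: stop services; zpool export tank
};

The failure mode to worry about is both heads believing they are MASTER and importing the pool at once, which is exactly why I hesitate to call this a solution.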

At that point, you have already greatly improved the reliability of your system, by getting rid of 1 or 2 servers: Everything that is not there will not fail. You have a single point of failure in your storage enclosure, but those tend to be pretty reliable (compared to disks and to servers). And if that bothers you, use both storage enclosures, connect both to both servers (takes a lot of cables), and make sure your RAID pairs are always split over storage enclosures.

Next: You really need to get rid of your 146 gig disks, for reliability and maintenance reasons. Look at it this way: The MTBF specification of disk drives is typically 1.5M hours. But the numbers that one sees in reality are typically only half of that (and yes, I do have too much experience with that, mostly painful). Furthermore, your disks are old, and old disks fail way more often than young ones (the 1.5M or 750K hours is an average between old and new). So for fun, let's say that your disks will fail twice as often as expected. Now multiply the failure rate by 24, and you will get a disk failure on average every 2 years (the math is simply: 1.5M hours /2 for my experience /2 for your disks being old /24 for how many disks you have, then convert from hours to years). This means you need serious RAID for your system. And where are you going to get spare disks from?
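Spelled out, that back-of-the-envelope arithmetic is just:

echo "1500000 / 2 / 2 / 24" | bc                   # 15625 hours between expected failures
echo "scale=2; 1500000 / 2 / 2 / 24 / 8766" | bc   # ~1.78 years (8766 hours per average year)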

What I would do is go buy four relatively modern nearline SAS disks. I think 4TB SAS disks are pretty cheap these days; a while ago Western Digital had a fire sale on "new old stock" of those. You could buy four of them, put two in each SAS enclosure, and use 4-way mirroring in ZFS. That will be VERY reliable. And it is more capacity than your 24 x 146gig drives. If the IO rate of those disks is not good enough, buy a few SSDs instead.
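A minimal sketch of that layout (device names are placeholders; the point is that the first two disks sit in one enclosure and the last two in the other):

# single vdev, four-way mirror: any three of the four disks can fail
zpool create tank mirror da0 da1 da2 da3

Usable capacity is one disk's worth (4TB), still more than the 24x 146GB setup, and a whole enclosure can drop out without taking the pool down.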

Other than the software problem (of how to get ZFS to do standby / failover), does this seem plausible?
 