ZFS DAS vs iSCSI

Hello all,

I have one PowerEdge R610 production server that hosts all my company websites (12 WordPress sites, plus ticketing and CRM), the database, and email.
Until not so long ago, I was quite happy in my little bubble with that setup.
Since registering here in 2011 I have learned a lot through this forum, and I am now at the next phase of my tech life: I realize that I am a sitting duck and my current setup is a disaster waiting to happen.
If I lose the server, I can restore it from backup, but I will suffer a great data loss because the backups are nightly, plus a serious 4-6 hours of downtime while the restore process finishes.

My plan is to create a HA environment where I can cope with hardware failure and minimize single point of failure.

I have at my disposal:
10x Dell PowerEdge R610
2x Dell PowerVault MD1200 with 12x 3 TB disks
2x Dell PowerConnect 5548
1x Dell PowerConnect 5324

I got the whole lot for just under £1000, so I figured I could resell it if it turned out not to be right for me.

I am looking at a long-term solution and trying to understand how to setup the above hardware.

Would I be better off putting six large disks inside 2x R610 to make two SANs in failover and connecting the rest of the servers via iSCSI, or using the 2x MD1200 that I have as DAS? Alternatively, I could attach one R610 to the MD1200s and serve iSCSI via that R610.
From what I could find online, it looks like I can only have a maximum of two hosts attached to the MD1200. Is that correct, or can I have more than two hosts?
Which option will give better ZFS performance?
With the DAS, how many SAS cards do I need to add to my R610s? I have been looking at various schemas; some use one SAS HBA per host and others two HBAs per host.

Thank you
 
Under £1000 for 10 R610 servers and all the rest seems incredibly cheap! We often buy refurbished stock for jobs where we don't need the latest performance or OEM support, and it's still around £400 minimum each for an R610 at the moment.

There are all sorts of configurations you could do, and it's very difficult to say what's best. One thing I would say is: don't set up something complex that you don't fully understand. It will just become a huge problem when something goes wrong or you hit obscure issues. Things are a bit different when you spend 100k and have full vendor involvement for the setup and support.

I'm not really sure on the actual disk connections as it depends on the configuration of the disk shelves and the controllers in the servers.

I would probably do something like the following to keep things simple.

First R610 connected to an MD1200, running as the main web server and using the PowerVault as storage. (To be honest, you could probably just use internal storage and keep the PowerVaults purely for backup/archive; onsite and offsite ideally.)
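If you do put ZFS on a PowerVault, creating the pool is the easy part. A minimal sketch, assuming the MD1200 disks show up as da0 through da5 (device names are hypothetical) and you want double-parity redundancy:

# create a double-parity (raidz2) pool named tank from six MD1200 disks
zpool create tank raidz2 da0 da1 da2 da3 da4 da5
# create a dataset for the web data
zfs create tank/www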

Second R610 and MD1200 configured exactly the same, just as a backup. Use zfs send/recv to push regular copies from live to backup (a sketch follows below). As the vault is only storing file data, it probably won't be changing as much as the database will.
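A minimal sketch of that replication, assuming a pool named tank on both hosts and SSH access from live to backup (hostnames and snapshot names are placeholders):

# first run: full copy of the dataset to the backup host
zfs snapshot tank/www@monday
zfs send tank/www@monday | ssh backup zfs recv -F tank/www
# later runs: send only the changes since the last common snapshot
zfs snapshot tank/www@tuesday
zfs send -i tank/www@monday tank/www@tuesday | ssh backup zfs recv tank/www

Drop something like that in cron and the backup vault stays only a snapshot or two behind the live one.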

Third R610 as a database server with internal storage (SSDs will greatly improve DB performance), and a fourth acting as a replication slave so you have a near real-time copy of the data (a sketch follows below).
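A minimal sketch of classic MySQL master/slave replication (addresses, credentials and log coordinates are placeholders; the master needs server-id and log-bin set in its my.cnf, and the slave a different server-id):

# on the slave: point it at the master and start replicating
mysql -e "CHANGE MASTER TO MASTER_HOST='192.0.2.10', MASTER_USER='repl', MASTER_PASSWORD='secret', MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=4; START SLAVE;"
# check that both replication threads are running
mysql -e "SHOW SLAVE STATUS\G" | grep Running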

This way you can switch to the backup web server/vault or database server if the primary has any problems, with minimal data loss. It also doesn't involve any complex master/master database or file storage solutions, which are great when you have high-end, vendor-supported hardware, but a nightmare when you're trying to manage it by hand.

If you want to make more use of the R610 servers for web/application duty, you could also make the first R610 just an NFS server, then have multiple application servers mount the same storage over NFS (a sketch follows below).
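A minimal sketch of that, assuming FreeBSD on the storage box, a tank/www dataset, and a 10.0.0.0/24 storage network (all names and addresses are hypothetical):

# on the storage server: enable NFS and export the dataset
sysrc rpcbind_enable=YES mountd_enable=YES nfs_server_enable=YES
echo '/tank/www -network=10.0.0.0/24' >> /etc/exports
service rpcbind start && service mountd start && service nfsd start
# on each application server: mount the shared web root
mount -t nfs 10.0.0.5:/tank/www /usr/local/www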

There are of course a dozen other ways you could set this up that all have their own benefits and drawbacks.
 
usdmatt, thank you for the reply.
I understand from your suggestion that I will have failover for the database with the two servers as master/slave.
But how can the web server take over? Will it be a manual process, or do I need to set up CARP?
 
I would use 2 machines for HAProxy, set up to use CARP. Then use 3 or more servers as webservers. HAProxy will take care of load-balancing and fail-over of the websites. CARP will take care of fail-over between the HAProxy machines.
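A minimal sketch of the CARP side on FreeBSD 10 or later, assuming interface em0 and a shared address of 192.0.2.100 (interface, password and addresses are placeholders; give the standby box a higher advskew):

# on each HAProxy box (load the carp(4) module first if needed: kldload carp)
sysrc ifconfig_em0_alias0="inet vhid 1 advskew 0 pass mypass alias 192.0.2.100/32"
service netif restart em0

And a minimal haproxy.cfg fragment that binds the shared address and balances across the webservers (names and addresses again hypothetical):

frontend www
    bind 192.0.2.100:80
    default_backend webservers
backend webservers
    balance roundrobin
    server web1 10.0.0.11:80 check
    server web2 10.0.0.12:80 check
    server web3 10.0.0.13:80 check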
 
Would you put the databases on a separate server or on the webservers?
 
Separate server(s). We typically have one master MySQL server and one read-only slave. The read-only slave can be switched to a master role in case the primary server dies or breaks. We then have one or two additional slaves, typically set to run about an hour behind. Those are the backups.
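If you want a slave deliberately running behind, MySQL 5.6 and newer can do it natively. A minimal sketch (the one-hour delay is just an example):

# make this slave apply changes one hour behind the master
mysql -e "STOP SLAVE; CHANGE MASTER TO MASTER_DELAY = 3600; START SLAVE;"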
 
Yeah, you might want to use smaller servers for that. HAProxy doesn't require a lot of CPU power; you only need good network cards. Our HAProxies have 2 disks, mirrored. The webservers have 2 disks too, also mirrored. We used hardware RAID and UFS for both.

Because we have the webservers behind HAProxy I can easily take one of them offline (for updates and whatnot) without interfering with the service.
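Taking one out of rotation can be done over HAProxy's admin socket. A minimal sketch, assuming a stats socket is enabled in the global section of haproxy.cfg and a backend/server named webservers/web1 (both hypothetical):

# in haproxy.cfg, global section: stats socket /var/run/haproxy.sock level admin
echo "disable server webservers/web1" | socat stdio /var/run/haproxy.sock
# ...do the maintenance, then put it back:
echo "enable server webservers/web1" | socat stdio /var/run/haproxy.sock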
 
Depends on how big the website is and how you deploy it. For us it didn't matter; the websites aren't too big and they already had a script that deployed them to the webservers. So shared storage wasn't needed.
 
We run WordPress, so load-balancing these sites will require a shared pool or a cluster filesystem, I believe. Every website lives inside a separate jail (one jail per domain).
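One hedged way to do that with the NFS approach mentioned earlier: export one directory per domain and mount it into the matching jail's web root (paths, addresses and the jail layout here are hypothetical):

# mount the shared export into one jail's web root before starting the jail
mount -t nfs 10.0.0.5:/tank/www/example.com /usr/jail/example.com/usr/local/www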
 