MySQL Server Facing Internet

I write a lot of VB.NET Windows desktop applications that communicate with a central MySQL server. All of the desktop applications use the official MySQL connector, and every user account in the system is set to REQUIRE SSL. I've got permissions on databases, tables, and columns locked down tightly, so all remote connections are encrypted, passwords are strong, and so on. This has all worked very well; MySQL has given my Windows applications a strong database backend.

After the recent vulnerability found in MySQL, which doesn't appear to affect BSD systems, I've been thinking more about security. Are there approaches for connecting to the central MySQL server other than exposing it directly to the Internet? Does anyone have positive or negative experiences to share? I understand the security implications and the need for encryption, strong passwords, and tight permissions, but I'm open to looking at this requirement in a different light.

It's possible that what I'm doing is fine, so long as I pay close attention to security advisories and maintain strong security practices on both the server side and the client side.

Thanks for any thoughts you can contribute!
 
I had similar concerns. I set up VPN servers (L2TP and PPTP using security/ipsec-tools and net/mpd5) so users can connect to the local network. Then there is no longer any need for the database server to be reachable from the Internet; it listens on the local network only.
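As a rough illustration, an mpd5 L2TP server definition can look something like this (addresses, pool ranges, and names are placeholders, not my actual config; check mpd5(8) for your version's syntax):

```
# /usr/local/etc/mpd5/mpd.conf -- minimal L2TP server sketch
startup:

default:
        load l2tp_server

l2tp_server:
        # hand out addresses from a local pool to VPN clients
        set ippool add vpn_pool 192.168.10.50 192.168.10.99
        create bundle template B
        set ipcp ranges 192.168.10.1/32 ippool vpn_pool
        create link template L l2tp
        set link action bundle B
        # require CHAP; user accounts go in mpd.secret
        set link no pap eap
        set link enable chap
        set l2tp self 0.0.0.0
        set link enable incoming
```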

It may well be that those VPN tools have unknown security issues of their own, and I don't claim they offer an inherently more secure connection than MySQL over SSL. However, putting a VPN in front of MySQL adds another wall that an attacker has to break through before he can even think about attacking MySQL.

The clients can be set up to connect via the VPN automatically, so apart from the initial setup there won't be much inconvenience for your users.

Best regards

Rolf
 
Interesting approach... A solution that is seamless to the client is certainly desired. We deploy into a lot of large companies with strict policies, and I wonder if this approach would be too invasive in the way it interacts with the operating system, especially if the software is installed on servers... But it has me thinking. MySQL has proven to be an excellent way to power these applications, so I want to make it work. I don't see why MySQL can't be a perfectly decent backend for cloud-based solutions if handled correctly.

I also suppose I could have MySQL listen on a non-standard port, which might deter brute-force attacks, but that's still not a perfect solution.
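For example, in my.cnf (the port number here is an arbitrary pick):

```
[mysqld]
# non-standard port; obscurity only, not real protection
port = 33061
# better still: bind to an internal address so the server
# never faces the Internet at all
bind-address = 10.0.0.5
```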
 
mlager said:
I also suppose I could have MySQL listen on a non-standard port, which might deter brute-force attacks, but that's still not a perfect solution.

It can still be scanned.

You can put it into a jail (or a separate server) behind a firewall. Even if you were using PostgreSQL, changing the default postgres username is just security by obscurity. Take a look at security(7) and firewall(7).

I've stopped (slowed down, actually) distributed brute-force attacks with a small custom script using pf(4). There's nothing wrong with putting your own policy in place.
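A pf(4) rate limit along these lines (interface name and limits are just examples) blunts distributed brute forcing by banning sources that connect too fast:

```
# /etc/pf.conf -- example rate limiting for the MySQL port
ext_if = "em0"

table <bruteforce> persist
block in quick from <bruteforce>
pass in on $ext_if proto tcp to ($ext_if) port 3306 \
        flags S/SA keep state \
        (max-src-conn 10, max-src-conn-rate 5/60, \
         overload <bruteforce> flush global)
```

Offenders land in the `<bruteforce>` table and get blocked; you can expire them periodically with `pfctl -t bruteforce -T expire`.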
 
I've got the MySQL server in a jail, so it is isolated from other services, but my main concern is protecting the data within the MySQL server itself.
 
You can always use stunnel with keys to access MySQL or any other service.

Web <--> stunnel <--> MySQL

This way only clients with certificates will be able to connect to stunnel, and you can set it up to listen on just about any port, so scanners won't know what's behind it. I use it for various things and it's been serving me well for years.
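A sketch of what that can look like in stunnel.conf; paths, ports, and the hostname are placeholders:

```
; --- server side: require a client certificate (verify = 2) ---
cert   = /usr/local/etc/stunnel/server.pem
CAfile = /usr/local/etc/stunnel/trusted-clients.pem
verify = 2

[mysql-tls]
accept  = 0.0.0.0:13306
connect = 127.0.0.1:3306

; --- client side: the app talks to localhost:3306 as usual ---
; client = yes
; cert   = client.pem
;
; [mysql-tls]
; accept  = 127.0.0.1:3306
; connect = db.example.com:13306
```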
 
REQUIRE SSL doesn't protect you against brute-force attacks. It is better to use a VPN (IPsec, OpenVPN, etc.) or some tunneling with client-certificate or password authentication (stunnel, ssh...). A VPN can be a problem in some corporate network scenarios, and a VPN on the Windows side requires privileges to manipulate TUN/TAP devices and IP addresses. You can instead use SSH tunneling with a passphrase or client-key authentication (use plink.exe from PuTTY to make the tunnel).

This way you will have another layer of security, and the MySQL server will not be publicly reachable.
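For example, with key-based auth (host, account, and key-file path are placeholders):

```
rem Forward local port 3306 through SSH to MySQL on the server;
rem the app then connects to 127.0.0.1:3306 as if it were local
plink.exe -N -batch -i C:\keys\dbclient.ppk ^
    -L 3306:127.0.0.1:3306 tunneluser@db.example.com
```

`-N` opens no remote shell, and `-batch` disables interactive prompts so the tunnel can be started unattended.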
 
I took the SSH tunnel approach and I am very pleased. Rather than using plink.exe, I used some .NET SSH components by Chilkat, so everything takes place from within the code. The tunnel gets established and then I talk to the MySQL server through it. I still use REQUIRE SSL on the MySQL server as an added barrier. This worked well; I appreciate everyone's ideas and responses. Clearly, having SSH face the Internet is a better approach than having MySQL face the Internet: an attacker would have to break through SSH and then break through MySQL again.
 
You should be more concerned about what happens before the traffic enters the encrypted ssh tunnel and after it arrives at its destination than someone trying to crack the encryption of the tunnel itself.
 
Yes, that's a good point, but if the client application establishes an SSL connection with the MySQL server through the SSH tunnel, then the traffic is encrypted from the client, through the tunnel, to the MySQL server. I did some testing with tcpdump and saw no plaintext leaving the client, and none on the server side either. I did this with the mindset that one of our customers could potentially sniff the traffic to see what was going on, but with the SSL-encrypted connection that doesn't seem possible. I don't even want anyone inferring the data structures by seeing statements in tcpdump, which is my main reason for SSL. Am I off base?
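For reference, the kind of check I did (the interface name is whatever yours is):

```
# dump packet payloads as ASCII on the MySQL and SSH ports;
# with the tunnel + SSL in place, no query text should be readable
tcpdump -i em0 -A -s 0 'tcp port 3306 or tcp port 22'
```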
 
That's pretty much the point of using encryption, but if you are already using an SSH tunnel, which very effectively hides the details of the encapsulated traffic, why do you need SSL on top of that?
 
I suppose for an added layer of security. I distribute a customer-specific SSL cert that each client must load into their application. Without that cert, even knowing the login and password for the MySQL server would do them no good. I guess it's not so much for more encryption as for an added authentication layer.
 
kpa said:
You should be more concerned about what happens before the traffic enters the encrypted ssh tunnel and after it arrives at its destination than someone trying to crack the encryption of the tunnel itself.

This.

Be very careful about what you let be injected into your SQL server.

It doesn't matter if you have an SSL/SSH/IPsec tunnel between the hosts - if someone can perform an SQL injection attack through one of your web pages, they can potentially own the SQL server. Or, at least perform anything the end user's SQL account can do - without any exploit being used to own SQL.

It's not quite as simple as "encrypt the link" - you need to sanitize the traffic that goes across it as well.
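In a fat client that mostly means parameterized queries rather than string concatenation. A minimal VB.NET sketch with the MySQL connector (the table and column names here are made up):

```vbnet
' Untrusted input goes in as a parameter,
' never spliced into the SQL text itself
Using cmd As New MySqlCommand(
        "SELECT name FROM customers WHERE id = @id", conn)
    cmd.Parameters.AddWithValue("@id", customerId)
    Dim name = TryCast(cmd.ExecuteScalar(), String)
End Using
```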

I'd jail each customer's SQL instance so that a zero day in MySQL exploited via SQL injection attack on one customer's website won't affect your other customers.

Yes, it will need more resources and admin overhead, but I don't really see a secure way around it. Each individual customer can still be owned via an SQL injection exploit in their own website code, and there's not much you can do about that (other than perhaps sending SQL queries through some sort of intelligent deep-packet-inspection engine, but I'm not sure there's much out there).
 
Very good ideas as well, throAU. All of my applications that talk to the MySQL server are actually fat desktop applications on Windows, and some on Linux... That said, injection attacks are obviously possible in those too, and I'm very conscious of them during the design and build of the apps. Nonetheless, you bring up very good points. I've actually done "jailing", but not quite as you've described. In my scenario, each customer has their own login and set of customer-specific tables. Their login has very minimal permissions that spell out exactly what they can do to just their tables. This way, injection attacks would only affect that customer's data. A separate jailed MySQL instance for each customer does sound not fun at all from an administrative and resource-consumption perspective.
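In MySQL terms, that per-customer lockdown looks roughly like this (the account, database, and table names are invented):

```sql
-- one account per customer, SSL required,
-- rights only on that customer's own tables
CREATE USER 'acme_app'@'%'
    IDENTIFIED BY 'use-a-strong-password' REQUIRE SSL;
GRANT SELECT, INSERT, UPDATE, DELETE
    ON appdb.acme_orders TO 'acme_app'@'%';
GRANT SELECT ON appdb.acme_reports TO 'acme_app'@'%';
```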

This has been a great thread and has sparked my mind in many ways, so thank you all for your input!
 
mlager said:
I've actually done "jailing", but not quite as you've described. In my scenario, each customer has their own login and set of customer-specific tables. Their login has very minimal permissions that spell out exactly what they can do to just their tables. This way, injection attacks would only affect that customer's data. A separate jailed MySQL instance for each customer does sound not fun at all from an administrative and resource-consumption perspective.

This has been a great thread and has sparked my mind in many ways, so thank you all for your input!

Yup, that's probably a reasonable trade-off, so long as you are sure to keep MySQL up to date.

An exploit that takes down all your users' DBs from a user session on the DB server is still potentially possible, though, if one is found.

Security is always a balance of admin overhead/resource consumption vs. risk vs. consequence. There's generally always more you could do to further secure your app/network, but there comes a point where it just isn't feasible in terms of $$ or man power.
 