HAProxy help

For the last several years I've been using HAProxy to accept connections on 80/443 and pass them back to 2 different internal websites that both listen on 80/443. This has worked well, except it's very rigid since I have HAProxy doing the 301 redirect from 80 to 443. This mainly causes issues when getting Let's Encrypt certificates, since the HTTP challenge requires port 80 for verification. I've been doing the DNS challenge to get around that, but I want to make this setup less rigid since I'm adding a few other servers to the mix.

First off, no, I don't want to do TLS termination at HAProxy. I like passing the whole connection to my backend. Here are my configs right now:

Code:
global
        ulimit-n        65536
        log   127.0.0.1 local1 info notice
        stats socket /tmp/haproxy.stats mode 660 level admin
        stats timeout 30s
        maxconn 4096
        daemon


defaults
        log     global
        mode    tcp
        option  tcplog
        option  dontlognull
        timeout connect 15s
        timeout client  15s
        timeout server  15s


frontend localhost80
        bind *:80
        log global
        mode http

        redirect scheme https code 301 if !{ ssl_fc }


frontend localhost443
        bind *:443
        option tcplog
        mode tcp

        tcp-request inspect-delay 15s
        tcp-request content accept if { req_ssl_hello_type 1 }

        acl is_website11 req_ssl_sni -i website1.example.com
        acl is_website21 req_ssl_sni -i website2.example.com

        use_backend web1cluster if is_website11
        use_backend web2cluster if is_website21


backend web1cluster
        mode tcp

        stick-table type binary len 32 size 30k expire 30m

        acl clienthello req_ssl_hello_type 1
        acl serverhello rep_ssl_hello_type 2

        tcp-request inspect-delay 5s
        tcp-request content accept if clienthello

        tcp-response content accept if serverhello

        stick on payload_lv(43,1) if clienthello
        stick store-response payload_lv(43,1) if serverhello

        server is_website1 192.168.10.42:443 check


backend web2cluster
        mode tcp

        stick-table type binary len 32 size 30k expire 30m

        acl clienthello req_ssl_hello_type 1
        acl serverhello rep_ssl_hello_type 2


        tcp-request inspect-delay 5s
        tcp-request content accept if clienthello
        tcp-response content accept if serverhello

        stick on payload_lv(43,1) if clienthello
        stick store-response payload_lv(43,1) if serverhello

        server is_website2 192.168.10.43:443 check

I think my work really has to be done in the `frontend localhost80` block. I don't know what to change it to, though. I still have to have it read the SNI headers to see what the website is. My thought was that if I changed my port 80 frontend and added proper backends on port 80, I'd solve the issue, but it doesn't appear to work:

Code:
frontend localhost80
        bind *:80
        log global
        mode tcp
        
        acl is_website1 hdr(Host) -i website1.example.com
        acl is_website2 hdr(Host) -i website2.example.com
        
        use_backend httpwebsite1 if is_website1
        use_backend httpwebsite2 if is_website2
        
backend httpwebsite1
        server is_website1 192.168.10.42:80
        
        
backend httpwebsite2
        server is_website2 192.168.10.43:80

So where things stand: with the change above in the second block, I can still make proper connections on port 443 to my hosts, but any attempt on port 80 seems to go nowhere. My end goal is to have my web servers do the 301 redirects, not HAProxy.
 
I want to make this setup less rigid

What do you mean exactly?

I still have to have it read the SNI headers to see what the website is

You're reading the HTTP Host header in the example provided, yet you're using TCP mode, which I'm betting won't work. There's no reason to use TCP mode for port 80 AFAIK. Try using HTTP mode.

If you simply want Let's Encrypt to be able to use port 80 while everything else gets redirected, you could also just match an ACL on the challenge URL it uses and redirect everything else to port 443.
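
For example, something along these lines in the port 80 frontend might do it (untested, reusing the httpwebsite1/httpwebsite2 backend names from your second snippet):

Code:
frontend localhost80
        bind *:80
        log global
        mode http

        # only the ACME HTTP-01 challenge is allowed through over plain HTTP
        acl is_letsencrypt path_beg /.well-known/acme-challenge/
        use_backend httpwebsite1 if is_letsencrypt { hdr(host) -i website1.example.com }
        use_backend httpwebsite2 if is_letsencrypt { hdr(host) -i website2.example.com }

        # everything else gets the 301 to HTTPS
        redirect scheme https code 301 if !is_letsencrypt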
 
Don't use TCP mode, use HTTP instead. Let SSL terminate on HAProxy (much easier to deal with) and connect to your backends with 'normal' HTTP. Use a local nginx (or something else) to capture the /.well-known/acme-challenge requests and direct them to that local instance.

Code:
acl is_letsencrypt path_beg /.well-known/acme-challenge/
{...}
use_backend local if is_letsencrypt
{...}
backend local
        option httpchk GET /up.txt
        server localhost 127.0.0.1:80 check

Then you can do the whole Let's Encrypt renewal and verification locally on your HAProxy machine.
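
For example (hypothetical paths and domains, assuming certbot and a local nginx serving a webroot on 127.0.0.1:80):

Code:
# nginx on 127.0.0.1:80 serves /usr/local/www/acme; certbot drops its
# challenge tokens in there and Let's Encrypt fetches them through HAProxy
certbot certonly --webroot -w /usr/local/www/acme \
    -d website1.example.com -d website2.example.com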
 
How would I handle internal network hosts that I want to connect to via HTTPS, though? I'd still need a certificate for those services. I don't like having unencrypted traffic on the local network; with how easy it is to get a cert, I don't see a reason for cleartext transmissions.

You're reading the HTTP Host header in the example provided, yet you're using TCP mode, which I'm betting won't work. There's no reason to use TCP mode for port 80 AFAIK. Try using HTTP mode.

When I do this I get these errors:

Code:
[ALERT] 321/120855 (75005) : http frontend 'localhost80' (/usr/local/etc/haproxy.conf:41) tries to use incompatible tcp backend 'website1' (/usr/local/etc/haproxy.conf:78) in a 'use_backend' rule (see 'mode').
[ALERT] 321/120855 (75005) : http frontend 'localhost80' (/usr/local/etc/haproxy.conf:41) tries to use incompatible tcp backend 'website1' (/usr/local/etc/haproxy.conf:78) in a 'use_backend' rule (see 'mode').
[ALERT] 321/120855 (75005) : http frontend 'localhost80' (/usr/local/etc/haproxy.conf:41) tries to use incompatible tcp backend 'website2' (/usr/local/etc/haproxy.conf:83) in a 'use_backend' rule (see 'mode').
[ALERT] 321/120855 (75005) : http frontend 'localhost80' (/usr/local/etc/haproxy.conf:41) tries to use incompatible tcp backend 'website2' (/usr/local/etc/haproxy.conf:83) in a 'use_backend' rule (see 'mode').
[ALERT] 321/120855 (75005) : Fatal errors found in configuration.
/usr/local/etc/rc.d/haproxy: WARNING: failed precmd routine for haproxy

So I think it has to stay TCP, unless I can find another way to pass to the backend... I may end up just doing Let's Encrypt locally as you suggested. It may be easier.
 
You have mode tcp in your defaults, so I believe you'd have to set mode http in those backends explicitly; otherwise they default to TCP, as you can see in the logs there.
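
Something like this for the backends from your second snippet should clear those alerts (untested):

Code:
backend httpwebsite1
        # explicit http mode so the http frontend can use this backend
        mode http
        server is_website1 192.168.10.42:80

backend httpwebsite2
        mode http
        server is_website2 192.168.10.43:80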
 
Don't use TCP mode, use HTTP instead. Let SSL terminate on HAProxy (much easier to deal with) and connect to your backends with 'normal' HTTP.

Then, when attackers have taken the perimeter and gained a foothold on the reverse proxy, they don't even need to hack anything; they can just listen to the ongoing traffic.
(I'm unsure what best practice would be here.)
 
Then, when attackers have taken the perimeter and gained a foothold on the reverse proxy, they don't even need to hack anything; they can just listen to the ongoing traffic.
If they've captured the perimeter they can already do that anyway. A big bonus of actually being able to see the traffic to the backends is that it gives you a nice place to put an IDS.

But if you must, there's no reason you can't use self-signed certificates on the HAProxy->backend connections. Nobody else is going to see them; clients terminate their SSL on HAProxy.
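
For example (untested, assuming the 443 frontend terminates with something like bind *:443 ssl crt ..., and the cert path below is made up), HAProxy can re-encrypt towards the backend:

Code:
backend web1cluster
        mode http
        # re-encrypt to the backend, ignoring its self-signed cert
        server is_website1 192.168.10.42:443 ssl verify none check
        # or pin the self-signed cert instead of skipping verification:
        # server is_website1 192.168.10.42:443 ssl verify required ca-file /usr/local/etc/ssl/web1.pem check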
 
I must be misunderstanding you, SirDice. Let me see if I'm getting you right here.

Externally, HAProxy is already handling the connections sufficiently well. But for inside, I set DNS records for website1 and website2 to point to HAProxy and then HAProxy connects to the backends?

EDIT: Additional thought to clarify.

So, if the above is correct, I can have a wildcard cert (or individual certs) created for HAProxy, as you suggest.
 
Externally, HAProxy is already handling the connections sufficiently well. But for inside, I set DNS records for website1 and website2 to point to HAProxy and then HAProxy connects to the backends?
There would be no need for different internal DNS records; you can connect to its 'normal' external address. This works because HAProxy is, as the name already implies, a proxy. If you use PF (or some other firewall) NAT redirections, things would be different; in that case you have to resort to hairpinning or split-horizon DNS.
 
There would be no need for different internal DNS records; you can connect to its 'normal' external address. This works because HAProxy is, as the name already implies, a proxy. If you use PF (or some other firewall) NAT redirections, things would be different; in that case you have to resort to hairpinning or split-horizon DNS.

I do have a pfSense router running PF with some NAT rules from the internet to my local servers, so I've already been doing split-horizon DNS; without it I would be unable to access my servers internally.
 