Solved HTTP --> HTTPS on Apache 2.4

I'm trying to turn off HTTP on a fresh install of www/apache24 and enable HTTPS by default. I'm slowly getting a handle on how to work the solutions into my own httpd.conf. However, when I research this on the Internet, it seems like mod_rewrite is the preferred way to do it, but the exact usage left me a bit lost:

From ibm.com:
Code:
RewriteEngine On
RewriteCond %{REQUEST_METHOD} ^OPTIONS
RewriteRule .* - [F]
This just rejects the requests outright (and only those using the OPTIONS method, as far as I can tell). I'm seeing similar solutions on StackOverflow, too.

From namecheap.com:
Code:
RewriteEngine On
RewriteCond %{HTTPS} !=on
RewriteRule ^/?(.*) https://%{SERVER_NAME}/$1 [R,L]
This looks usable, but it seems like it just tacks the 'S' onto HTTP requests before allowing them to complete. Oh, hold on, shouldn't that RewriteCond be 'off'? If it's not HTTPS, I want to reject the connection, not rewrite it.

Thing is, the namecheap.com article also suggests using the VirtualHost directive:
From namecheap.com:
Code:
<VirtualHost *:80>
ServerName www.yourdomain.com
Redirect permanent / https://www.yourdomain.com/
</VirtualHost>
<VirtualHost _default_:443>
ServerName www.yourdomain.com
DocumentRoot /usr/local/apache2/htdocs
SSLEngine On
...
</VirtualHost>
This looks usable, but I wonder: is there a way to combine it with the mod_rewrite rules?

Basically, the VirtualHost solution looks simple. However, both VirtualHost and mod_rewrite seem to accept the insecure HTTP connections and simply rewrite/redirect them to HTTPS. I'd like to ask for some help in figuring out how to outright reject the insecure HTTP connections with a 403 error code first, and then accept HTTPS requests and respond by serving up a page or a file over HTTPS.
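Just to illustrate what I have in mind (an untested sketch on my part, with a placeholder hostname), a port-80 VirtualHost that refuses everything would look something like this:
Code:
<VirtualHost *:80>
    ServerName www.yourdomain.com
    # answer every plain-HTTP request with 403 Forbidden
    <Location "/">
        Require all denied
    </Location>
</VirtualHost>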

But if there's something I missed, I welcome commentary! :D
 
I use this one for Apache:
Code:
LoadModule rewrite_module libexec/apache24/mod_rewrite.so
 :
RewriteEngine on
RewriteRule ^(/.*)$ https://%{HTTP_HOST}$1 [redirect=301]
 
If you are just trying to turn off HTTP, you don't need mod_rewrite; that module is used to make substitutions to URLs.
The default Apache installation has a directive to listen on port 80; you need to remove (or comment out) that:
Code:
Listen 80

Generate a server certificate if you don't have one yet, or get it from a certificate authority (e.g. Let's Encrypt).
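For private testing you can create a self-signed pair with something along these lines (browsers will warn about it; the subject name is only an example):
Code:
# self-signed certificate and key, valid for one year, key not passphrase-protected
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -keyout server.key -out server.crt \
    -subj "/CN=www.yourdomain.com"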
Then you need to tell the server to listen on port 443 (HTTPS) and create a virtual host (similar to the code you posted) for the SSL-enabled site:
Code:
Listen 443
<VirtualHost *:443>
    ServerAdmin admin@yourdomain.com
    ServerName www.yourdomain.com

    DocumentRoot "/usr/local/www/yourhtmlfiles"
    <Directory "/usr/local/www/yourhtmlfiles">
        Options Indexes FollowSymLinks
        AllowOverride None
        Require all granted
    </Directory>

    SSLEngine on
    SSLCACertificateFile  path-to-your-ca-certificate.crt
    SSLCertificateFile    path-to-your-webserver-certificate.crt
    SSLCertificateKeyFile path-to-your-webserver-key.pem

</VirtualHost>

That's it. Restart the apache24 service and the SSL-enabled site should appear on port 443 (the default for HTTPS), while port 80 should no longer be listening.
Note that the CA certificate, server certificate and server key are three separate files. Also, if your key is password-protected, there is a directive for that - read the documentation.
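On FreeBSD, assuming Apache was installed from ports/packages, the usual commands would be roughly:
Code:
# enable Apache at boot and restart it after editing httpd.conf
sysrc apache24_enable="YES"
service apache24 restart

# verify that only port 443 is listening
sockstat -4l | grep httpd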
 
The mod_rewrite suggestions are probably to do with http->https bounces e.g. if a user goes to http://your.site.com it will bounce to https://your.site.com.

But IIRC there are security issues with that, and other ways to do it (and I think modern browsers will just try https straight off), so as the previous answer says - just go straight to https only (port 443).
 
The mod_rewrite suggestions are probably to do with http->https bounces e.g. if a user goes to http://your.site.com it will bounce to https://your.site.com.

But IIRC there are security issues with that, and other ways to do it (and I think modern browsers will just try https straight off), so as the previous answer says - just go straight to https only (port 443).
It's an interesting point. Maybe Apache should ship with SSL enabled and port 80 disabled out of the box. At this point plain HTTP is a rare exception on the Internet.
They could provide some stock certificates just for the sake of making the installation work. Replace the certificates with your own and you're done.
 
Not trivial. There is probably more than one way to skin this cat, but I've been doing this a few years, and basically just edit /usr/local/etc/apache24/httpd.conf as shown in the following diff output:
Code:
(wasat@mate /usr/local/etc/apache24)$ diff httpd.conf.copy1 httpd.conf
52c52
< Listen 80
---
> #Listen 80
92c92,93
< #LoadModule socache_shmcb_module libexec/apache24/mod_socache_shmcb.so
---
> # LoadModule socache_shmcb_module libexec/apache24/mod_socache_shmcb.so
> LoadModule socache_shmcb_module libexec/apache24/mod_socache_shmcb.so
148c149,150
< #LoadModule ssl_module libexec/apache24/mod_ssl.so
---
> # LoadModule ssl_module libexec/apache24/mod_ssl.so
> LoadModule ssl_module libexec/apache24/mod_ssl.so
527c533,534
< #Include etc/apache24/extra/httpd-ssl.conf
---
> # Include etc/apache24/extra/httpd-ssl.conf
> Include etc/apache24/extra/httpd-ssl.conf

Note: there are some PHP-related changes in my particular version of httpd.conf which I've omitted from the above output, so the line numbers may differ from what you see in your own diff. Note that /usr/local/etc/apache24/extra/httpd-ssl.conf is the file that contains the Listen 443 and VirtualHost directives.
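From memory, the relevant lines in that file look roughly like this (exact paths and defaults may differ per installation):
Code:
Listen 443
<VirtualHost _default_:443>
    DocumentRoot "/usr/local/www/apache24/data"
    ServerName www.example.com:443
    SSLEngine on
    SSLCertificateFile "/usr/local/etc/apache24/server.crt"
    SSLCertificateKeyFile "/usr/local/etc/apache24/server.key"
</VirtualHost>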

Additionally, one must obtain or prepare two files, server.crt and server.key, and place them in the /usr/local/etc/apache24/ directory.

More info here on how to use the openssl genrsa -des3 -out server.key 1024 command to prepare self-signed versions of the above two files for private, internal use or testing purposes:
https://www.akadia.com/services/ssh_test_certificate.html
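If memory serves, the full sequence from key to self-signed certificate goes roughly like this:
Code:
# 1. generate a passphrase-protected private key
openssl genrsa -des3 -out server.key 1024
# 2. create a certificate signing request from that key
openssl req -new -key server.key -out server.csr
# 3. self-sign the request to produce the certificate (valid one year)
openssl x509 -req -days 365 -in server.csr -signkey server.key -out server.crt
# 4. (optional) remove the passphrase so Apache can start unattended
cp server.key server.key.orig
openssl rsa -in server.key.orig -out server.key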

Sorry, but I don't have any documentation on how to obtain properly signed versions of these files for a public IP.
 
Caveat: it's been a while, and I'm not specifically knowledgeable from a current Apache perspective.

[...] I'd like to ask for some help in figuring out how to outright reject the insecure HTTP connections with a 403 error code first, and then accept HTTPS requests and respond by serving up a page or a file over HTTPS.

That suggests, to me at least, that you intend to respond with a 403 and then somehow automagically redirect (but not via a 301 redirect) to the https URL (i.e. also port 443). That is not how it works: a 403 response is a hard, explicit block (with an appropriate response message), and nothing happens automatically after that, one way or the other. It would also be a very unorthodox way of responding and not very friendly to the actual human being at the other side of the internet pipe, anxiously awaiting your web content. I highly doubt you'll find any reputable site that handles an http request like this. See also 403 Forbidden vs 401 Unauthorized HTTP responses, with references to the relevant RFCs:

A 301 redirection is what is called for, unless you, as a content creator & provider, want to "miss" (blackhole?) any http request (i.e. without the "s", arriving on port 80). Normally any request that lands users on your (secure) website is put to use. Because of the (permanent) redirection, search engines will act accordingly and work with the intended (= redirected) URL: they will not serve up the http variant in their search results.
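As a sketch, the mod_rewrite examples from earlier in the thread, adjusted to make the redirect explicitly permanent (placeholder hostname):
Code:
RewriteEngine on
RewriteCond %{HTTPS} !=on
RewriteRule ^/?(.*)$ https://www.your-website.com/$1 [R=301,L]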


You're setting up the server afresh. If that means you're starting your content afresh as well (i.e. you're not in a big migration to move all your content from http to https), then a virtual host setup is probably the most obvious solution. Compared to rewrite rules, some properties:
  • virtual host setup is server-wide; rewriting can be localized via the placement of the .htaccess file(s) containing the rewrite rules, as it acts on a per-directory basis.
  • virtual host setup: the server must be restarted for changes to take effect; rewrite rules are evaluated for every http(s) request (no server restart required).
Both also make it possible to map www.your-website.com & your-website.com to whichever one you prefer. (You should consider registering both when serving your content to the whole wide world.) See also: Redirect HTTP to HTTPS in Apache
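As a sketch of the per-directory (.htaccess) flavour, combining the https upgrade with a canonical www hostname (placeholder names; AllowOverride must permit FileInfo for this to work):
Code:
RewriteEngine on
# redirect permanently if the request is not https or not on the canonical hostname
RewriteCond %{HTTPS} !=on [OR]
RewriteCond %{HTTP_HOST} !^www\.your-website\.com$ [NC]
RewriteRule ^(.*)$ https://www.your-website.com/$1 [R=301,L]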

Using https has consequences for your web content as well. For local links, use relative links (not fully qualified URLs). Use fully qualified https links for (external) internet links when available. If you use http links, or fully qualified URLs that specify http, you may be confronted with mixed content (mixed as in http & https). Content under your control should not be mixed: it will show itself to users as a padlock icon that is not 100% cleanly closed. An exclamation mark is added; the user will read this as a page that is not (100%) secure. Mixed content is difficult to control when users of your website add their own content (e.g. a response/comment/question to your content). See also: Make intrasite URLs relative

___
P.S. You've left out the very core of https, the certificate, as well as the encryption protocol details. I gather you have that sorted; can't help you with that.
 
Regarding redirection: Several years ago I had servers configured to listen on both ports 80 and 443, then I used php scripting to internally redirect http: requests on port 80 into https: requests on port 443. Somewhere along the line, this stopped working, but, on the positive side, browser software had improved, so I now simply disable port 80 as shown above, and I can now rely on the Firefox browser to automatically do the same desired redirection for me. This means users can simply type a hostname or IP address with the http:// prefix, or with no http or https prefix whatsoever, and the browser will automajickally convert the URL to use the https:// prefix, seamlessly.

I suspect this will also work with browsers other than Firefox, but I can't guarantee it since I no longer support or test these other browsers. It's just too much! Hah. And nowadays this is just a hobby for me. 12 years ago I was testing and supporting Firefox, Safari, Internet Explorer, Chrome, and Opera. It was a huge PITA and took up a huge amount of time. I just don't bother with all that anymore, and I can't say I'm sorry because I'm not, rather, I'm entirely grateful that I don't have to do that anymore.
 
Caveat: it's been a while, and I'm not specifically knowledgeable from a current Apache perspective.


That suggests, to me at least, that you intend to respond with a 403 and then somehow automagically redirect (but not via a 301 redirect) to the https URL (i.e. also port 443). That is not how it works: a 403 response is a hard, explicit block (with an appropriate response message), and nothing happens automatically after that, one way or the other. It would also be a very unorthodox way of responding and not very friendly to the actual human being at the other side of the internet pipe, anxiously awaiting your web content. I highly doubt you'll find any reputable site that handles an http request like this. See also 403 Forbidden vs 401 Unauthorized HTTP responses, with references to the relevant RFCs:

A 301 redirection is what is called for, unless you, as a content creator & provider, want to "miss" (blackhole?) any http request (i.e. without the "s", arriving on port 80). Normally any request that lands users on your (secure) website is put to use. Because of the (permanent) redirection, search engines will act accordingly and work with the intended (= redirected) URL: they will not serve up the http variant in their search results.


You're setting up the server afresh. If that means you're starting your content afresh as well (i.e. you're not in a big migration to move all your content from http to https), then a virtual host setup is probably the most obvious solution. Compared to rewrite rules, some properties:

  • virtual host setup is server-wide; rewriting can be localized via the placement of the .htaccess file(s) containing the rewrite rules, as it acts on a per-directory basis.
  • virtual host setup: the server must be restarted for changes to take effect; rewrite rules are evaluated for every http(s) request (no server restart required).
Both also make it possible to map www.your-website.com & your-website.com to whichever one you prefer. (You should consider registering both when serving your content to the whole wide world.) See also: Redirect HTTP to HTTPS in Apache

Using https has consequences for your web content as well. For local links, use relative links (not fully qualified URLs). Use fully qualified https links for (external) internet links when available. If you use http links, or fully qualified URLs that specify http, you may be confronted with mixed content (mixed as in http & https). Content under your control should not be mixed: it will show itself to users as a padlock icon that is not 100% cleanly closed. An exclamation mark is added; the user will read this as a page that is not (100%) secure. Mixed content is difficult to control when users of your website add their own content (e.g. a response/comment/question to your content). See also: Make intrasite URLs relative

___
P.S. You've left out the very core of https, the certificate, as well as the encryption protocol details. I gather you have that sorted; can't help you with that.
I like this info... but I intend to respond with a 403 (or a 401) to http requests, and not bother with redirections. Basically, let the client (i.e., browser) retry with an https request (upon failure with http), rather than me doing something on my end.

I do appreciate the heads up about the SSL/TLS cert generation details. That is actually my next step - studying docs, figuring out how that fits on my system, etc.
 
the browser will automajickally convert the URL to use the https:// prefix
I noticed that behaviour in Firefox too. But using this redirect method I still see many of these redirects in the logs, both from Firefox as well as other browsers (versions???). Most browsers switch to https when you enter a URL manually, but what happens if they click a link starting with http? Closing port 80 means these won't show up in the logs of course. I'd like to see these redirects disappear before I close port 80.
 
I noticed that behaviour in Firefox too. But using this redirect method I still see many of these redirects in the logs, both from Firefox as well as other browsers (versions???). Most browsers switch to https when you enter a URL manually, but what happens if they click a link starting with http? Closing port 80 means these won't show up in the logs of course. I'd like to see these redirects disappear before I close port 80.
Do you mean the server's logs? I don't really look at the logs that much, especially not since I've retired. I don't really know for sure, but it seems to me that there's no way the server can prevent a client browser from making an http request. My assumptions are that:
  1. The client browser sends an http request to the server,
  2. The server receives this request, and sends some sort of a no-can-do response back to the client,
  3. The client then responds to this by resending the same request, AFTER modifying the URL to use the https prefix.
So I can't really see any way to make such redirects disappear. I can't stop the client from typing http:// and sending a request, other than perhaps by breaking his or her fingers. But it's just a request.

Initially, at least where my applications are concerned, the first request is always going to be interpreted as a GET request for a login form. By the time the client receives the login form, the URL will have already been modified to have the https:// prefix. I suppose at that point, it would be possible, however unlikely, for the end-user to fill in the login name and password, and THEN modify the URL again, to have an http:// prefix. Then, the user COULD, hypothetically, wind up sending a POST request containing the password in an unencrypted form. I don't know any way I could prevent that. Breaking that user's fingers, though, at that point, might not be the worst idea in the world. Or at least take the keyboard away from such an end-user, because there's no way to interpret such behavior as accidental.

But maybe I've misread your question. I can only FULLY ensure that the SERVER won't send any unencrypted data. A mischievous client is somewhat beyond my control, other than by retrospectively inspecting the logs, and taking corrective action after the unencrypted data has already been sent. It WOULD be possible to check for this in the server software, and not at all a bad idea.
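For what it's worth, an easy way to see exactly what the server sends back for a plain http:// request is to ask from the command line (placeholder hostname):
Code:
# show response headers only; expect a redirect if a port-80 vhost answers,
# or a failed connection if nothing is listening on port 80
curl -vI http://www.yourdomain.com/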
 
other than perhaps by breaking his or her fingers.
LART, LART, LART!

I typically use the same set up as Geezer posted. There's no "website" on port 80, just a redirect.

Then, the user COULD, hypothetically, wind up sending a POST request containing the password in an unencrypted form. I don't know any way I could prevent that.
That's possible, but there's no actual website on port 80 so that user would just get a 404 in return.
 
LART, LART, LART!

I typically use the same set up as Geezer posted. There's no "website" on port 80, just a redirect.


That's possible, but there's no actual website on port 80 so that user would just get a 404 in return.
Nevertheless, such a user would have sent unencrypted POST data over the internet, even though port 80 is closed (or redirected) at the server. I had never really considered this before, but it does seem like a possible, however unlikely, security hole. I may play around with this when I have time.

It would be difficult to deal with in a PHP script, unless port 80 was opened up again. It would be more practical, and safer, to deal with it in the Apache server software, or maybe even in the browser software. There might already be some such provisions of which I'm unaware. I've never tested this behavior with POST requests, only with GETs. But there is some food for thought here.
 
That's possible, but there's no actual website on port 80 so that user would just get a 404 in return.
Yeah, this seems to tilt my mindset in favor of just closing port 80 altogether. Apache may be total overkill in terms of power and complexity for the first thing I'm trying to set it up for (a Poudriere/pkg repo on my LAN). However, in my experience, going for overkill straight from the start (and doing it right) sets me up for success down the road. [Analogy] Kind of like buying an A380 to fly 500 miles first, and then discovering it's good for an 11,000-mile flight as well. A Cessna can fly 500 miles, and is much cheaper - but taking it on long trips is asking for trouble. [/Analogy]

Security is not always the biggest concern. I still see MSIE9 in the logs. Why block those?
If I saw MSIE9 in my logs, my first thought would be that somebody changed the UserAgent string for some reason. MSIE11 is no longer supported by Microsoft. And if somebody bothered to change the UA string, what's next? Is that something I can deal with safely? Might as well block 'em before I have to bother getting in deep. Don't get me wrong, I'm seeing very good info in this thread, and I'm enjoying this conversation, cuz I'm learning a LOT from it.

My assumptions are that:
  1. The client browser sends an http request to the server,
  2. The server receives this request, and sends some sort of a no-can-do response back to the client,
  3. The client then responds to this by resending the same request, AFTER modifying the URL to use the https prefix.
Those are actually the correct assumptions... let the client figure it out. FWIW, this is the premise on which email servers operate (not that I have any intention to set one up).
 
Yeah, this seems to tilt my mindset in favor of just closing port 80 altogether. Apache may be total overkill in terms of power and complexity for the first thing I'm trying to set it up for (a Poudriere/pkg repo on my LAN).
I was assuming you already had Apache running on port 443 for HTTPS. It really doesn't matter to add one or more ports to the configuration in that case. You can do the same with nginx or any of the other webservers. If you have nothing running on port 80 then users might get frustrated and claim your website doesn't work, because they'll be hitting a timeout (assuming your firewall simply drops the traffic to port 80) and they get a weird error message when trying to open the website (especially IE and Edge show those "helpful" error messages that make no sense at all). So I generally just opt for a simple redirect that at least points them in the right direction.

As a side note, on my home network I have Poudriere running on a small nginx instance. For a client I've used Apache, the only reason I used Apache there was because it was already running a bunch of other websites. So it made sense to simply use what was already there.
 
I was assuming you already had Apache running on port 443 for HTTPS. It really doesn't matter to add one or more ports to the configuration in that case. You can do the same with nginx or any of the other webservers. If you have nothing running on port 80 then users might get frustrated and claim your website doesn't work because they'll be hitting a timeout (assuming your firewall simply drops the traffic to port 80) and you get a weird error message when trying to open the website (especially IE and Edge show those "helpful" error messages that make no sense at all).
I haven't gotten that far yet in my setup. It's a fresh Apache install; I'm just trying to do my homework to make sure I start off on the right foot, so to speak. If anything, I'd want to make the server IPv6-only, but that's for a different thread already. Kind of like the beefy3 conversation from earlier this year.
 
[...] I don't really know for sure, but it seems to me that there's no way the server can prevent a client browser from making an http request.
[...]
I can only FULLY ensure that the SERVER won't send any unencrypted data. A mischievous client is somewhat beyond my control, [...]
Nevertheless, such a user would have sent unencrypted POST data over the internet, even though port 80 is closed (or redirected) at the server.
In what scenario does a (browser) client, aka an actual user, get to initiate an insecure/unencrypted http POST request with private data, other than by engineering such a request?

As mentioned in a previous message, after the client initiates First Contact™, a webserver set up for secure transmission at some moment sends web content to the client, for example a web page containing a form. The client/user may choose to fill that form with private data and send it back to the server, via a POST request, again transmitted securely.

[...] Most browsers switch to https when you enter a URL manually, but what happens if they click a link starting with http?
  1. A link does not contain private data (in any normal situation).
  2. A form may contain private data. A form received securely gets POSTed securely by the client.
Ad 1
A user could wilfully & intentionally put private data in that link (i.e. edit the underlying HTML), for example:
  • Option A: http://www.secure-website.com?mode=login&user=John&password=XYZ* - no encryption, obviously.
  • Option B: https://www.secure-website.com?mode=login&user=John&password=XYZ - because this is part of the URL, the data will not be encrypted.
Ad 2
A user may input private data in the form and could also wilfully & intentionally edit the underlying HTML.
  • Option C: change the form so that it forces HTTP instead of HTTPS (implicitly or explicitly) in the action attribute
  • Option D: change the form method from POST to GET (the data then only appears appended to the URL-string and will not be encrypted)
(The same could be accomplished with an appropriate client side scripting language of course.)

When transmitted in this manner all four options will transmit private data unsecured over the internet to the specified server. Unless, of course, a certain forum user (I won't name names) breaks the user's fingers remotely in a timely fashion.

Ad Option C
Apparently this is seen as a "practical" situation that may arise when the web page contains an explicit http designator instead of an explicit https designator (or, when using an appropriate relative URL, an implied https designator). Using an explicit http designator would probably be considered mixed content when the web content itself is sent securely by the webserver.

From Sending form data:
Note: It's possible to specify a URL that uses the HTTPS (secure HTTP) protocol. When you do this, the data is encrypted along with the rest of the request, even if the form itself is hosted on an insecure page accessed using HTTP. On the other hand, if the form is hosted on a secure page but you specify an insecure HTTP URL with the action attribute, all browsers display a security warning to the user each time they try to send data because the data will not be encrypted.
Screenshot of Firefox 92.0.1; I have replaced the appropriate relative URL with an explicit fully "http" qualified URL:

[Attachment: HTTP-S.png]


Note: this warning does not appear when only changing the attribute method from post to get (Option D).


P.S. Best Practice - Keep Port 80 Open: I haven't seen a plausible, convincing argument as to why port 80 should be closed or not listened on, as opposed to redirecting it to port 443, especially from a security point of view.

___
* That specification mechanism can be used when accessing an ftp server via a web browser (not recommended!). Current web browsers make it almost impossible to open an ftp site by specifying the ftp protocol in the URL, and that is a good thing.
 
In what scenario does a (browser) client, aka an actual user, gets to initiate an insecure/unencrypted http POST request with private data, other than by engineering such a request?
You're right. Brain fart on my part. Good catch.
 
From Sending form data:
On the other hand, if the form is hosted on a secure page but you specify an insecure HTTP URL with the action attribute, all browsers display a security warning to the user each time they try to send data because the data will not be encrypted.
Yeah, if the client specifies the insecure http URL, even if it's to send form data, I'd want to respond with a "no-go" error; I'm not taking unencrypted form data from a client. It's only going to be a problem if I'm setting up Apache for an audience other than myself. But even so, I appreciate the info - yeah, there are all these useful considerations out there, and sometimes I come across a nugget of information that I had no idea I needed to pay attention to.
 
Yeah, this seems to tilt my mindset in favor of just closing port 80 altogether. Apache may be total overkill in terms of power and complexity for the first thing I'm trying to set it up for (a Poudriere/pkg repo on my LAN). However, in my experience, going for overkill straight from the start (and doing it right) sets me up for success down the road. [Analogy] Kind of like buying an A380 to fly 500 miles first, and then discovering it's good for an 11,000-mile flight as well. A Cessna can fly 500 miles, and is much cheaper - but taking it on long trips is asking for trouble. [/Analogy]
Talking 'bout overkills :)
There is a simple way to redirect port 80 without a webserver:
Bash:
yes | xargs -i printf 'HTTP/1.0 301 Moved Permanently\nLocation: https://www.mydomain.com\n\n' | sudo nc -k -l 80
The example comes from: https://askubuntu.com/questions/1054942/how-to-return-status-301-on-port-80-without-webserver
 