Secure Your WordPress With a Free SSL Certificate in Apache on CentOS 7

July 3, 2015

It is simple enough to use a self-signed certificate to encrypt traffic to your site with SSL, but if you have a WordPress blog or any site that might see lots of visitors, a self-signed certificate is not an option: how many average users are going to proceed to your site past a warning from their Web browser about an untrusted connection? This guide will show you, start to finish, how to get a free SSL certificate from StartSSL, install it on your server, configure apache, and set up WordPress to use https.

All of the information I’m using is from these guides:

If you get stuck, it might help to reference one of these guides. My setup is a CentOS 7 Digital Ocean droplet with apache and WordPress, but a lot of these steps should work for other distributions. Also, keep in mind that the free certificate offered by StartSSL is for non-commercial use only.

What you’ll need

StartSSL

Open up Chrome and head to startssl.com. Click on “Express Signup,” fill out the forms and hit continue. Check your email for the verification code. Click the link in the e-mail and you will be asked to generate a private key. Choose “High” for the grade. Once it’s done, click “Install” and Chrome will present you with a pop-up that says it has been successfully installed.

This is not your SSL certificate, it’s just a key that you will use to log in to the StartSSL Web site. Click on “Control Panel” and then “Authenticate.” Chrome will give you a pop-up to authenticate with the site.

Validate your domain

Once you’re in the Control Panel, click on the Validations Wizard tab and select “Domain Name Validation” from the drop-down menu. Choose whichever e-mail you have access to (like postmaster@yourdomain.com).

If you're using Google Apps for your e-mail provider, you can just create a group called webmaster and give it public access permissions to post to the group. Add yourself to the group and you will get any messages sent to webmaster@yourdomain.com. This is an easy way to get extra addresses forwarding to your main Google Apps account without creating another user.

Check the inbox of whichever account you chose for the validation e-mail and paste in the code.

Create the Certificate

In the Control Panel, click on the “Certificates Wizard” tab. Select “Web Server SSL/TLS Certificate” from the drop-down menu. Hit continue and enter a strong password. You’ll get a text box that contains your key. Copy its contents into your text editor of choice and save the file as ssl.key.

Hit continue and select your recently verified domain. Choose a sub-domain on the next screen. You probably want to pick ‘www’, but it’s up to you. Hit continue and you’ll get another text box, this time containing your certificate. Copy it to your text editor and save it as ssl.crt.

Download the CAs

Click on “Toolbox,” and download the StartCom Root CA and the StartSSL’s Class 1 Intermediate Server CA. Just right-click on the links with those names and hit save as.

Now we need to decrypt your private key so that your server can use it. Do so with:

openssl rsa -in ssl.key -out private.key
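Before uploading anything, it's worth sanity-checking the decrypted key. openssl can validate it without modifying it:

```shell
# parse the decrypted key and verify its internal consistency;
# prints "RSA key ok" on success
openssl rsa -in private.key -check -noout
```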

You should now have 5 files:

ca.pem
private.key
sub.class1.server.ca.pem
ssl.crt
ssl.key

Note: the private.key file is the unencrypted version of your private key. Make certain that no one has access to it and that you delete it from your local machine once you upload it to the server. It isn’t necessary to upload the ssl.key file to your server. Let’s upload the ones we do need though, using scp:

scp -P 2222 {ca.pem,private.key,sub.class1.server.ca.pem,ssl.crt} user@yourserver.com:/home/user/

In this example the ssh listening port is 2222; change it to whatever yours is. You can also specify a different destination directory by changing /home/user to whatever you want.
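Once the upload succeeds, don't just rm the local plaintext key. A small sketch of locking it down while it exists and then securely deleting it (shred is GNU coreutils; its overwrite guarantees are weaker on SSDs and journaling filesystems):

```shell
# keep the plaintext key unreadable to other local users while it exists
chmod 600 private.key

# ...after the scp upload above, overwrite and delete the local copy
shred -u private.key
```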

Apache

SSH into your server and let’s get it set up.

$ sudo yum install -y mod_ssl
$ sudo mkdir /etc/httpd/ssl
$ sudo mv {ca.pem,private.key,sub.class1.server.ca.pem,ssl.crt} /etc/httpd/ssl
$ sudo nano /etc/httpd/conf.d/ssl.conf

The first command installs the ssl module for apache, the second creates a directory for your certificate to live in, the third moves all of your certificate files into the newly created ssl directory, and the last opens apache's ssl configuration file. Look for this line:

<VirtualHost _default_:443>

Uncomment (delete the # at the beginning of the line) the DocumentRoot and ServerName lines and change example.com:443 to whatever your domain is. It is important that this match what you entered when you created the certificate.
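For reference, the uncommented lines should end up looking something like this; the DocumentRoot path here is just an assumption for a stock setup, so use whatever path your site actually lives in:

```apache
DocumentRoot "/var/www/html"
ServerName www.yourdomain.com:443
```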

Uncomment these lines as well and change the location of the files to match what’s shown here:

SSLCertificateFile /etc/httpd/ssl/ssl.crt
SSLCertificateKeyFile /etc/httpd/ssl/private.key
SSLCertificateChainFile /etc/httpd/ssl/sub.class1.server.ca.pem

Once you’re done, save and close the ssl.conf file and open up your site’s configuration file:

$ sudo vim /etc/httpd/sites-enabled/yoursitesname.com.conf

And add these lines before the closing </VirtualHost> tag:

RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}

This will force https for the whole site, so that even if users don’t type out https:// before your address, they will still be protected.

Restart the apache server:

$ sudo systemctl restart httpd

Test that it works by going to https://yourdomain.com. You should see a little lock in the address bar. If you get an Untrusted Connection error, then you probably forgot to change the location of the certificate files from the defaults in the ssl.conf file. If you get a lock symbol, but with a triangular alert symbol, then you’ve got yourself a mixed content warning. No big deal, we’ll fix that in the next step.

WordPress

Log in to your WordPress admin portal and click on “Settings,” and change the “WordPress Address (URL)” from http://yourdomain.com to https://yourdomain.com. Make the same change to the “Site Address (URL)” field as well.

If you've got the Mixed Content warning, then you've got some work to do. The warning means that your Web browser has detected some content on the page that is being fetched over plain old http, meaning it's not encrypted. It could be anything, but images you've added to posts are a great place to start. Take a look at one of your posts with images and view it in text mode. Scroll down to where your image is and check the html. If it looks like this: <img src="http://yourdomain.com/cat.jpg" ..., then that's probably the problem.

There are a number of ways to fix this. If you have a new site, then you can just click through your posts and add an ‘s’ after http to all of your image tags. If you have hundreds or more images, this could get tedious. This guide: Moving to HTTPS on WordPress has some SQL kung fu that might be able to automate the process for you. <iframe> or <link> tags could also be causing the problem if they are calling http. This stackoverflow post has some more info as well.
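If you work from a database dump, a plain text substitution can handle URLs in post bodies. This is only a sketch, assuming a mysqldump export named backup.sql; be aware that blind replacement can corrupt PHP-serialized option values (their stored string lengths stop matching), which is why the serialization-aware tools in the guides above exist:

```shell
# keep an untouched copy, then rewrite http URLs for the domain to https
cp backup.sql backup.sql.orig
sed -i 's|http://yourdomain\.com|https://yourdomain.com|g' backup.sql
```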

Keep an eye out for mixed content warnings on other pages, but otherwise you should be done!


Transmission Web Interface Reverse Proxy With SSL Using nginx on Arch Linux

July 1, 2015

Transmission has been my favorite torrent client for years now, and one of my favorite features is its excellent Web interface, which lets you control your torrents over the Web, adding, pausing, and removing them when you're away from whatever rig you have set up for that purpose.

The only problem with its Web interface is that it uses unencrypted http. You can password protect the interface, but your password is still sent in cleartext, meaning anyone listening in on your connection can see your password or any other data exchanged between transmission and wherever you're accessing it from. Let's fix that!

Note: This guide applies to Arch Linux, but should work for most other distributions, especially if they use systemd.

Transmission

Transmission is available in the official Arch repositories, but there are several packages to choose from: transmission-cli, transmission-remote-cli, transmission-gtk, and transmission-qt. If this installation will be for a desktop machine, you may want to install the gtk or qt versions, but for our purposes, we’re going to go with transmission-cli and transmission-remote-cli. The first one, transmission-cli, will give us the transmission daemon and the web interface. transmission-remote-cli will let us access transmission through a curses based interface that you may find useful. Install them with:

$ sudo pacman -S transmission-cli transmission-remote-cli

Now that we’ve got them installed, we need to configure the daemon to set up the Web interface. You’ll need to start the transmission daemon or GUI version at least once to create an initial configuration file. Do so with:

$ sudo systemctl start transmission

Depending on which user you run transmission as, there’s a different location for the config file. If you’re running transmission as the user transmission (which is the default), then your config will be located at /var/lib/transmission/.config/transmission-daemon/settings.json. If you’ve set it to run as your user, then the config folder will be located at ~/.config/transmission-daemon/settings.json. If you’re using the gtk or qt version of transmission, then your config files are located at ~/.config/transmission.

Open it up in your editor of choice and look for these lines:
(Note: they do not appear in this order; I've pasted in only the lines that are relevant. You can read more about what each line does here.)

"download-dir": "/home/user/Torrents", #Set this to wherever you want your torrents to be downloaded to.

"peer-port": 51413, #This is the port that transmission will use to actually send data using the bittorrent protocol.

"rpc-enabled": true, #This enables the Web interface. Set it to true.

"rpc-password": "your_password", #Choose a good password.

"rpc-port": 9091, #Change the port if you want, or just make note of the default 9091.

After editing the config file, restart transmission so the changes will take effect with:

$ sudo systemctl restart transmission

Test that the Web interface is working by going to http://your.ip.address:9091/transmission/web/ … note that the trailing / after web is required; omitting it will prevent the interface from loading.

Now that the transmission daemon is started, you can access it via the command line with transmission-remote-cli. It is a perfectly functional way to control transmission, and assuming you have SSH set up securely, then it’s safe and encrypted. I like to have it installed in case I mess up my nginx set up somehow, but still need to access the transmission daemon remotely.

nginx

nginx is an http server, like apache, that can be used to serve up Web pages or, in this case, act as a reverse proxy.

First, install it with:

$ sudo pacman -S nginx

Now we need to set up an ssl certificate:

$ cd /etc/nginx
$ sudo openssl req -x509 -nodes -days 365 -newkey rsa:4096 -keyout /etc/nginx/cert.key -out /etc/nginx/cert.crt

You will be prompted to enter some info; keep in mind that this will be visible to anyone who inspects the certificate. The -days 365 option sets how long the certificate will be valid; change it if you like. This command creates two files, cert.key and cert.crt, which we will reference later in our nginx configuration.
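You can double-check what you just generated. The first command prints the subject you entered and the validity window; the second pair confirms the key and certificate actually belong together (the two digests must be identical):

```shell
# show the certificate's subject and notBefore/notAfter dates
openssl x509 -in /etc/nginx/cert.crt -noout -subject -dates

# compare moduli: these two digests must match
openssl x509 -noout -modulus -in /etc/nginx/cert.crt | openssl md5
sudo openssl rsa -noout -modulus -in /etc/nginx/cert.key | openssl md5
```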

Let's get nginx set up. Open /etc/nginx/nginx.conf and add the following line inside the http block:

include /etc/nginx/conf.d/*.conf;

In some distributions it might be there by default, but it's not in Arch. Now we need to add a .conf file for our ssl reverse proxy:

$ cd /etc/nginx
$ sudo mkdir conf.d
$ sudo nano conf.d/transmission.conf

Paste in the following:

server {
    listen 80;
    return 301 https://$host$request_uri;
}

server {

    listen 443;
    server_name yourdomain.com;

    ssl_certificate           /etc/nginx/cert.crt;
    ssl_certificate_key       /etc/nginx/cert.key;

    ssl on;
    ssl_session_cache  builtin:1000  shared:SSL:10m;
    ssl_protocols  TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4;
    ssl_prefer_server_ciphers on;


    location / {

      proxy_set_header        Host $host;
      proxy_set_header        X-Real-IP $remote_addr;
      proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header        X-Forwarded-Proto $scheme;

      # Fix the "It appears that your reverse proxy set up is broken" error.
      proxy_pass          http://localhost:9091/;
      proxy_read_timeout  90;

      proxy_redirect      http://localhost:9091/ https://yourdomain.com;
    }
  }

That might seem complicated, but there are actually only a few things you’ll need to modify. Change server_name yourdomain.com to whatever your domain is. You could also use an IP address here if you have a static IP. The ssl_certificate /etc/nginx/cert.crt; points to where your certificate is, if you named it something else in the earlier step, then edit this line and the next one. If you changed the port that transmission listens on for the Web interface, then be sure to change this line: proxy_pass http://localhost:9091/; to reflect it. Finally, put your domain and port on this line: proxy_redirect http://localhost:9091/ https://yourdomain.com;. Save the file and restart the nginx server:

$ sudo systemctl restart nginx

You should now be able to access the transmission Web interface by going to https://yourdomain.com/transmission/web/. If your browser gives you a warning about an untrusted connection, then you know it works. Add an exception and continue. Your browser gives you that warning because the certificate isn't signed by a trusted third party. Don't worry though; the connection is just as encrypted, which is all we're going for here anyway.

That’s it, you’re done! From here you could add reverse proxies to other local services, like kodi‘s web interface.

Also, now that you're accessing transmission through https (port 443), you can close the transmission port (9091) in your firewall to further lock down your system. Be sure to keep ports 80 and 443 open though.


Set Up ufw and fail2ban on Arch Linux

June 25, 2015

I recently set up ufw and fail2ban on my Arch Linux home server. It's fairly simple, but I found that most of the guides I followed left a few things out or didn't quite work with Arch. There are lots of firewall options for Arch, but I went with ufw, as I had recently set it up on my droplet according to this guide: How To Setup a Firewall with UFW on an Ubuntu and Debian Cloud Server, and enjoyed the simple syntax.

ufw: Uncomplicated Firewall

Uncomplicated Firewall serves as a simple front end to iptables and makes it easier to set up a firewall. It’s available in the Arch repos, so let’s start by installing it:

# pacman -S ufw

Now that it’s installed, let’s configure it:

$ sudo ufw enable
$ sudo ufw default deny incoming
$ sudo ufw default allow outgoing

These commands will turn on ufw, deny all incoming traffic, and allow all outgoing traffic. The new rules won't take effect until you restart ufw, but to be safe, the first incoming traffic we're going to allow is our sshd port. That way, we won't accidentally lock ourselves out once we restart ufw.

$ sudo ufw allow 2222/tcp

This assumes that your sshd listening port is 2222. It’s important to change the listening port from the default 22 to help discourage brute force attacks. Since you’re reading a guide on setting up ufw and fail2ban, you probably already have your sshd configured to be safer, but if not then read up on how to on the Arch wiki.

Now open up any other ports your server might need:

$ sudo ufw allow 4040/tcp
$ sudo ufw allow 80/tcp
$ sudo ufw allow 443/tcp

These commands opened up the ports for the Subsonic music streamer and for http/https for Owncloud. If you have other services that need special ports open, allow them in the same fashion as above. Now that we’ve got all of the ports open that you will need (and double checked that you opened your sshd listening port), let’s get ufw going.

$ sudo ufw disable
$ sudo ufw enable
$ sudo systemctl enable ufw.service
$ sudo ufw status

The first two commands restart ufw. Then we enable it to start on boot using systemd. Finally, we check whether all of our efforts have worked. If they did, you should see something like this:

┌─[jay@hal]─(~)
└─[09:11]$ sudo ufw status
Status: active

To                         Action      From
--                         ------      ----
2222/tcp                   ALLOW       Anywhere
4040/tcp                   ALLOW       Anywhere
80/tcp                     ALLOW       Anywhere
443/tcp                    ALLOW       Anywhere

fail2ban

Fail2ban watches incoming ssh requests, takes note of IPs that fail to log in too many times, and automatically bans them. This is a great way to stop trivial attacks, but it doesn't provide full protection. If you haven't already, please set up SSH keys.

First, let’s install fail2ban:

# pacman -S fail2ban

Now we’ll need to create some custom rules and definitions for it, so it can play nice with ufw. Create a jail file for ssh: sudo nano /etc/fail2ban/jail.d/sshd.conf and insert the following:

[ssh]
enabled = true
banaction = ufw-ssh
port = 2222
filter = sshd
logpath = /var/log/auth.log
maxretry = 3
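One Arch-specific caveat: there is usually no /var/log/auth.log on Arch unless a syslog daemon is writing one, so if fail2ban sits idle with the jail above, one option (assuming fail2ban 0.9 or newer, with its python-systemd dependency installed) is to let it read sshd's entries straight from the systemd journal instead of a log file:

```ini
[ssh]
enabled   = true
banaction = ufw-ssh
port      = 2222
filter    = sshd
backend   = systemd
maxretry  = 3
```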

Be sure to change the port number to whatever port sshd listens to on your machine. Save it and let’s move on. Create another file: sudo nano /etc/fail2ban/action.d/ufw-ssh.conf and insert the following:

[Definition]
actionstart =
actionstop =
actioncheck =
actionban = ufw insert 1 deny from <ip> to any app OpenSSH
actionunban = ufw delete deny from <ip> to any app OpenSSH

Now we simply start the service, enable it to load at boot, and check its status to make sure it’s working:

$ sudo systemctl start fail2ban
$ sudo systemctl enable fail2ban
$ sudo systemctl status fail2ban

And that’s it! We now have ufw protecting us with a firewall and fail2ban banning stupid script kiddies and bots from bugging us.
