Using Keybase pgp keys for github gpg verification

September 28, 2017

I recently started using the excellent keybase.io app for encrypted chat and all the crypto goodness motivated me to finally set up gpg verified commits on Github. I started with this helpful article: Github GPG + Keybase PGP which I recommend you take a look at, but I had to do a few more steps that I wanted to document.

Installing Keybase

Keybase is easy to install: keybase.io/download. For Arch, it was a simple install and setup:

packer -S keybase-bin  
run_keybase

Once installed, you’ll need to start up the app and create an account if you haven’t already.

Generate a key

Generate a new key with keybase, and upload it to your profile. Alternatively, use keybase pgp select to use an existing key. To use this key for Github verified commits, it will need to have the same email address as your Github account.

$ keybase pgp gen

Export your keybase secret key to your gpg keyring:

$ keybase pgp export -s -p | gpg --allow-secret-key-import --import --

List the keys in your gpg keyring and locate your keybase key:

$ gpg --list-secret-keys --keyid-format LONG
/home/jay/.gnupg/pubring.kbx
----------------------------
sec   rsa4096/C17228D898051A91 2017-01-30 [SC]
      326DA75610069B5DECA8D2DDC17228D898051A91
uid                 [ultimate] Jay Baker 
ssb   rsa4096/7C87801D5E56F673 2017-01-30 [E]

sec   rsa4096/C24CD98AB0900706 2017-09-28 [SC] [expires: 2033-09-24]
      F21FC721B22B0C176BAFBE35C24CD98AB0900706
uid                 [unknown] Jay Baker 
uid                 [unknown] Jay Baker 
ssb   rsa4096/4599729752E8D5C4 2017-09-28 [E] [expires: 2033-09-24]

I have two keys here; the second one is the one I made with keybase pgp gen. We want to grab the key ID from its sec line, i.e. the string after rsa4096/:

sec   rsa4096/C24CD98AB0900706 2017-09-28 [SC] [expires: 2033-09-24]
C24CD98AB0900706

Let’s set a trust level for our key. Since we just made it ourselves, we can give it ultimate trust.

$ gpg --edit-key C24CD98AB0900706
gpg (GnuPG) 2.2.1; Copyright (C) 2017 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

Secret key is available.

sec  rsa4096/C24CD98AB0900706
     created: 2017-09-28  expires: 2033-09-24  usage: SC  
     trust: unknown      validity: unknown
ssb  rsa4096/4599729752E8D5C4
     created: 2017-09-28  expires: 2033-09-24  usage: E   
[unknown] (1). Jay Baker 
[unknown] (2)  Jay Baker 

gpg> trust

Please decide how far you trust this user to correctly verify other users' keys
(by looking at passports, checking fingerprints from different sources, etc.)

  1 = I don't know or won't say
  2 = I do NOT trust
  3 = I trust marginally
  4 = I trust fully
  5 = I trust ultimately
  m = back to the main menu

Your decision? 5
Do you really want to set this key to ultimate trust? (y/N) y

sec  rsa4096/C24CD98AB0900706
     created: 2017-09-28  expires: 2033-09-24  usage: SC  
     trust: ultimate      validity: ultimate
ssb  rsa4096/4599729752E8D5C4
     created: 2017-09-28  expires: 2033-09-24  usage: E   
[ultimate] (1). Jay Baker 
[ultimate] (2)  Jay Baker 

gpg> quit

Now we can decide to do global or per-repository signing with git. This step is optional; you can always sign individual commits with git commit -S (or git commit --gpg-sign).

$ git config commit.gpgsign true

This is per repository; add a --global flag after config if you want to enable gpg signing globally for git. If we change our minds later and want to disable signing, just run the same command with false. Also, if you want to make a single commit without signing:

$ git commit --no-gpg-sign

Now let’s tell git which gpg key to use:

$ git config user.signingkey C24CD98AB0900706 # per repository

Again, add a --global flag if you want.
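If you want signing everywhere, a minimal global setup looks something like this (using the example key ID from above):

$ git config --global commit.gpgsign true
$ git config --global user.signingkey C24CD98AB0900706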

To verify that our commit worked:

$ git log --show-signature
commit 1f10113fadeae03fd8de870fb18c8563d0b3c602 (HEAD -> master)
gpg: Signature made Thu 28 Sep 2017 17:23:20 EDT
gpg:                using RSA key F21FC721B22B0C176BAFBE35C24CD98AB0900706
gpg: Good signature from "Jay Baker " [ultimate]
gpg:                 aka "Jay Baker " [ultimate]
Author: Jay Baker 
Date:   Thu Sep 28 17:23:20 2017 -0400
	detailed commit message goes here

Add our key to Github

Finally, we need to add our key to Github. Remember, your key will need the same email as your Github user email. You can add more email addresses to your key with gpg --edit-key and then the adduid command.
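For example, a rough sketch of the adduid flow (substitute your own key ID; gpg will prompt for the name and email):

$ gpg --edit-key C24CD98AB0900706
gpg> adduid
(answer the name and email prompts)
gpg> save

After adding a uid, keybase pgp update should push the refreshed public key back up to your keybase profile.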

Let’s get our public key from keybase:

$ keybase pgp export
-----BEGIN PGP PUBLIC KEY BLOCK-----

mQINBFnNUb8BEAC3RGNiW3AYUIxAsrQBRfclM65naI/xGlvRju6b5tuoZ33Qbvnq
...
WB+E6/rYlZG4Vdk2W1bTk0R2iAVHoamZD0PmJAkv46SiuHqeyOdBGAGsgdVo1FGa
Gw==
=sE0m
-----END PGP PUBLIC KEY BLOCK-----

Copy that, and head to https://github.com/settings/keys, click “New GPG key”, paste it in, then click “Add GPG key” to save it.
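If you’d rather skip the manual copy and paste, you can pipe the key block straight to your clipboard (assuming xclip is installed; pbcopy does the same job on a Mac):

$ keybase pgp export | xclip -selection clipboard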

That’s it! You can now have verified commits on Github with your keybase pgp key!


Minimal, Clean Conky

January 26, 2016

Conky is a lightweight system monitor for X. It’s a great way to display all sorts of helpful information right on your desktop. It’s somewhat similar to Rainmeter (for Windows) or GeekTool (for OSX), but it’s far more powerful. Unfortunately, as with many things in Linux, powerful can mean hard to configure. In this guide, I’ll go over my ~/.conkyrc for a minimal, clean conky configuration that provides useful info without polluting your desktop with clutter.

desktop screenshot with conky

My current desktop with conky. I’m on Fedora 23 with Gnome for a desktop environment.

Conky

First, let’s install Conky. It’s available in the official repos of most distributions.

pacman -S conky          #For Arch
apt-get install conky    #For Debian/Ubuntu
dnf install conky        #For Fedora 23, use yum for older versions

Note that configuring Conky with your desktop environment/distribution may require additional packages to be installed. If you’re using Gnome on Fedora 23, then the .conkyrc I provide below will work for sure. If you’re using another desktop environment with Arch, check out the Arch wiki on conky. It provides tons of details on configuring conky, and even lots of little mini guides for how to customize your .conkyrc.

If your distro doesn’t have a conky package, or if you want to compile it yourself, instructions can be found here.

Once installed, you could start using the default configuration right away. Just start conky with conky &. The default configuration is fine, but didn’t suit my tastes. Conky will look in ~/.conkyrc for any user configurations, so that’s where we’ll put ours.

.conkyrc

Our minimal, clean .conkyrc:

# - Conky settings
update_interval 1
total_run_times 0
net_avg_samples 1
cpu_avg_samples 1
imlib_cache_size 0
double_buffer yes
no_buffers yes

# - Text settings
use_xft yes
xftfont Sans:size=12
override_utf8_locale yes
text_buffer_size 2048

# - Window specifications 
own_window_class Conky
own_window yes
own_window_type normal
own_window_argb_visual yes
own_window_argb_value 255
own_window_transparent yes
own_window_hints undecorated,below,sticky,skip_taskbar,skip_pager
alignment top_right
gap_x 40
gap_y 40
minimum_size 300 550
maximum_width 550
default_bar_size 550 8

# - Graphics settings
draw_shades no
default_color cccccc
color0 white
color1 E07A1F
color2 white


TEXT
Kernel: ${alignr} ${execi 5000 uname -r | sed "s@.fc.*.x86_64@@g" }
Uptime: ${alignr}${uptime}

CPU1: ${cpu cpu1}%${alignr}CPU2: ${cpu cpu2}%
CPU3: ${cpu cpu3}%${alignr}CPU4: ${cpu cpu4}%
Temp: ${alignr}${acpitemp}°C

Memory: ${mem} ${alignr}${membar 8,60}
Disk: ${diskio}${alignr}${diskiograph 8,60 F57900 FCAF3E}
Battery: ${battery_percent BAT0}% ${alignr}${battery_bar 8,60 BAT0}

# Processes
Processes: ${alignr}$processes
Highest: ${alignr 40}CPU${alignr}RAM
${voffset -11.5}${hr 1}
${voffset -4}${top name 1} ${goto 124}${top cpu 1}${alignr }${top mem 1}
${voffset -1}${top name 2} ${goto 124}${top cpu 2}${alignr }${top mem 2}
${voffset -1}${top name 3} ${goto 124}${top cpu 3}${alignr }${top mem 3}
${voffset -1}${top name 4} ${goto 124}${top cpu 4}${alignr }${top mem 4}

${voffset -4}SSID: ${alignr}${wireless_essid wlp3s0}
${voffset -4}Local IP: ${alignr}${addr wlp3s0}
${voffset -4}External IP: ${alignr}${execi 600 curl https://icanhazip.com}

${voffset -4}hal:${alignr}${execi 600 /home/user/bin/pingtest server1.com}
${voffset -4}helper:${alignr}${execi 600 /home/user/bin/pingtest server2.com}

Just open up your favorite text editor and paste in the above. Save the file as ~/.conkyrc. To test it, start conky with conky &. You should now see my conky configuration on your desktop!
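While you’re tweaking the config, it’s handy to kill and relaunch conky in one go so your changes show up right away:

killall conky; conky &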

A close up view of my conky configuration.

Conky Config Explained

The first few sections deal with Conky’s appearance and position on the screen. If you want to change colors or positioning, poke around in here. Otherwise, let’s move on to the TEXT section; this is the stuff that actually gets displayed on the screen.

First, let’s take a look at this line, and break everything down:

Kernel: ${alignr} ${execi 5000 uname -r | sed "s@.fc.*.x86_64@@g" }

On Fedora, uname -r returns this: 4.2.7-300.fc23.x86_64, which is more info than I want. I already know I’m on 64 bit Fedora 23; it’s just the kernel version that I want displayed. So we use sed to replace .fc.*.x86_64 with nothing, leaving just the kernel version that precedes it. The .* here is a regular expression meaning any number of any characters. That way, when I upgrade to Fedora 24, I won’t have to change my conkyrc.
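For example, feeding a Fedora kernel string through the same sed expression:

$ echo "4.2.7-300.fc23.x86_64" | sed "s@.fc.*.x86_64@@g"
4.2.7-300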

Of course, conky has a built in way to display the kernel version, with ${kernel}. I could just use this instead:

Kernel: ${alignr} ${kernel}

But just like uname -r, it would display the Fedora version and whether we have a 32 or 64 bit kernel: 4.2.7-300.fc23.x86_64. If you’re happy with what ${kernel} displays for your distribution, then just leave it. If you want to display just the kernel version, then modify the sed command as needed. sed is a very powerful tool, but it can be a little daunting at first. If you’re new to it, then check out the wikipedia entry for more info.

Let’s turn our attention to the networking section, as you may need to make some changes here:

${voffset -4}SSID: ${alignr}${wireless_essid wlp3s0}
${voffset -4}Local IP: ${alignr}${addr wlp3s0}
${voffset -4}External IP: ${alignr}${execi 600 curl https://icanhazip.com}

${wireless_essid wlp3s0} will display the SSID (wireless network name) of whatever network the wlp3s0 interface is connected to. If your wireless interface is named differently, you will need to change this value. To find your wireless interface name, simply issue ip addr:

It will return information on all of your network interfaces, but we’re just looking for your wireless interface:

2: wlp3s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
...

To get the local IP address, we use ${addr wlp3s0}. Again replace wlp3s0 with whatever your interface name is. Note that if you have a wired connection, it will not be displayed here. You’ll have to use ip addr to find the name of your wired interface and put it here.
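If the full ip addr output is too noisy, this one-liner just lists the interface names so you can spot the right one (your names will differ):

$ ip -o link show | awk -F': ' '{print $2}'
lo
wlp3s0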

There are several ways to get your external IP address, but I find that the simplest is to do curl https://icanhazip.com. Thus, our conky code is ${voffset -4}External IP: ${alignr}${execi 600 curl https://icanhazip.com}.

Finally, we come to the handy little lines that show whether my servers are up.

${voffset -4}hal:${alignr}${execi 600 /home/user/bin/pingtest server1.com}
${voffset -4}helper:${alignr}${execi 600 /home/user/bin/pingtest server2.com}

/home/user/bin/pingtest is a super simple bash script that pings whatever url you give it and returns “Up” if it gets a response and “Down” if it doesn’t. Save this script to your /home/user/bin:

#!/bin/bash
# Ping the host given as the first argument once, waiting up to 2 seconds for a reply.
if ping -c 1 -W 2 "$1" > /dev/null; then
    echo "Up"
else
    echo "Down"
fi
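Once the script is saved, make it executable and give it a quick test from the shell (the hostname here is just an example):

$ chmod +x /home/user/bin/pingtest
$ /home/user/bin/pingtest server1.com
Up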

Alternatively, you could put the script anywhere in your $PATH. You can always check your $PATH environment variable with:

$ echo $PATH
/usr/local/bin:/usr/local/sbin:/usr/bin:/usr/sbin:/home/user/bin

If /home/user/bin isn’t part of your $PATH environment variable, you can add it by adding this line to your ~/.profile:

export PATH=$PATH:/home/user/bin

If you don’t have any servers to monitor, just remove those lines or comment them out.

And that’s all there is to my conky set up. Conky is a very powerful tool and can be used to display all sorts of information. I like mine simple and clean, but feel free to go crazy. There are tons of really slick conky configs out there.


Arch Linux + emby + Kodi + nginx: The Ultimate Media Server

November 30, 2015

I’ve been using Arch Linux as my media server/htpc for several years now and it’s been incredibly reliable. Some people prefer a versioned distribution with an LTS release for something like a media server, but I want the freshest packages, and I don’t want to deal with the headache of upgrading/reinstalling when an LTS release outlives its usefulness or won’t allow me to get the new packages I want. Recently, I decided to give emby a try, after hearing about it on one of my favorite podcasts, the Linux Action Show.

This guide assumes you have a working Arch Linux installation. If you’re starting completely from scratch, then consult the excellent Arch Beginner’s Guide. And if you don’t have time to go through a full Arch installation (though I highly recommend it, as you’ll learn a lot about Linux in the process), then you can always just go with Antergos, which is basically Arch with a nice installer and some sane defaults.

emby

emby is a media server that can manage and stream your movies, tv shows, music, and home videos to a plethora of devices. It’s a lot like Plex, but it comes with all of the best features free to use. It works with a server/client set up. You install and configure emby server, in our case on an Arch Linux rig (though they have support for many distributions and operating systems), and then access it from a client like your Web browser, a DLNA client like a PlayStation 4, Kodi media center, the Android or iOS apps, or any number of other options. Check out their download page to see what I mean. Their cross-platform support is some of the best I’ve seen in a while for any app, let alone a media server.


Installing emby

First, let’s create a user account for emby:

sudo useradd -r -s /bin/false emby

emby is available in the community repos. Install it with:

sudo pacman -S emby-server

Now that it’s installed, we’ll need to start the emby service with systemd:

sudo systemctl start emby-server
sudo systemctl enable emby-server

Check to make sure the emby-server service started properly:

sudo systemctl status emby-server
● emby-server.service - Emby brings together your videos, music, photos, and live television.
   Loaded: loaded (/usr/lib/systemd/system/emby-server.service; enabled; vendor preset: disabled)
   Active: active (running) since Sun 2015-11-29 15:42:16 EST; 23h ago
 Main PID: 15379 (mono)
   CGroup: /system.slice/emby-server.service
           └─15379 /usr/bin/mono /usr/lib/emby-server/MediaBrowser.Server.Mono.exe -programdata /var/lib/emby -ffmpeg /usr/bin/ffmpeg -ffprobe /usr/bin/ffprobe

You should see something like that. If you get errors, check the logs with sudo journalctl -xe. When I first tried to start the service, I got an error saying that emby didn’t have permission to write to the /var/lib/emby/logs directory. If you get an error like that, you can fix it with sudo chown -R emby:emby /var/lib/emby

Once we’ve got the emby-server service started, let’s continue with the set up by pointing your web browser to http://localhost:8096. If you did everything correctly, you’ll see the emby welcome screen. The emby set-up wizard is really very easy to use, so I won’t go over every step, but you basically create a user account and tell emby where your media folders are and what kind of media they contain.

NOTE: emby can only scrape and recognize your media if it is named properly. It supports a number of naming conventions, so you’ll just have to pick one and make sure your media conforms to it. My media was already named properly, so I didn’t have to worry about this step, but there are a few tools, like filebot that can help you get your files in order.
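As a rough example, here is one layout that the scrapers generally handle well (adjust to whichever convention you pick):

Movies/
    The Matrix (1999)/The Matrix (1999).mkv
TV Shows/
    Firefly/Season 01/Firefly - S01E01 - Serenity.mkv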

Once you get through the initial set up, take a while to look through the options in the ‘Manage Server’ section:

Emby Manage Server Screen

There are a lot of options you can configure here. Familiarize yourself with all of the options and decide which are best for your set up. You can choose what libraries each user has access to, the max bitrate each user can stream at (useful if you live under the tyrannical rule of Comcast’s data caps), and where to save metadata files (fanart, posters, etc.).

Right now, you have a working emby server. You could just open port 8096 in your firewall and access your server from the web at http://your.external.ip.address:8096; emby even has built in ssl (https) support. But I already have the nginx web server set up with ssl, and I don’t want the additional security risk and hassle of opening another port in my router.

nginx

nginx (pronounced “engine X”) is an awesome web server that’s fast, configurable and reliable. The configuration can be a little tricky to new users and users coming from Apache, but I actually prefer the nginx syntax now that I’m used to it. In any case, you can just cut and paste my configuration below to simplify the process.

Basically, nginx is going to act as a go-between for clients and emby (and any other web services we might be running, like transmission, subsonic, or owncloud). That means we really only need to have 3 ports open in our router/firewall: http (80), https (443), and ssh (22). It also means we can set up custom sub-domains, like emby.your-domain.com.
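If you haven’t created DNS records for the sub-domain yet, you can fake it on a client machine while testing by pointing the name at your server in /etc/hosts (the IP here is just an example):

echo "192.168.1.50 emby.yourdomain.com" | sudo tee -a /etc/hosts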

Installing and Configuring Nginx

First, install nginx with:

sudo pacman -S nginx

Once it’s installed, we’ll need to create a configuration file for our emby reverse proxy. Create /etc/nginx/conf.d/emby.conf in your text editor of choice and paste in my configuration below, making sure to change the server name to your own:

server {
    listen 80;
    return 301 https://$host$request_uri;
}

server {
        server_name emby.yourdomain.com; #change this to your domain!
        listen 443 ssl spdy;

        ssl_certificate           /etc/nginx/certs/emby.crt;
        ssl_certificate_key       /etc/nginx/certs/emby.key;
        ssl_prefer_server_ciphers       On;
        ssl_protocols                   TLSv1 TLSv1.1 TLSv1.2;
        ssl_ciphers                     'AES256+EECDH:AES256+EDH:!aNULL';
        resolver                        8.8.8.8 8.8.4.4 valid=300s;
        resolver_timeout                5s;
        ssl_stapling_verify             on;
        keepalive_timeout               180;
        add_header                    Strict-Transport-Security max-age=31536000;
        client_max_body_size 1024M;

        location / {
                # Send traffic to the backend
                proxy_pass http://127.0.0.1:8096;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-for $proxy_add_x_forwarded_for;
                proxy_set_header Host $host;
                proxy_set_header X-Forwarded-Proto $scheme;
                proxy_set_header X-Forwarded-Protocol $scheme;
                proxy_redirect off;

                # Send websocket data to the backend as well
                proxy_http_version 1.1;
                proxy_set_header Upgrade $http_upgrade;
                proxy_set_header Connection "upgrade";
        }
}

There’s a lot going on there, but this configuration takes any http requests to emby.yourdomain.com and redirects them to use https. It then creates a proxy so that any requests to emby.yourdomain.com/ get redirected to http://127.0.0.1:8096. That’s how we can keep the 8096 port closed. The emby clients will connect to nginx, which will then talk to the local emby server at 127.0.0.1:8096. Any data you send to or from your emby server will be encrypted and secured with ssl.

ssl

Our nginx proxy isn’t ready to go yet; we still need to create some self-signed certificates. Now, you could of course purchase an ssl certificate, but that’s sort of overkill for a home media server if you ask me. If you already have some certs to use, put them in /etc/nginx/certs. Otherwise, let’s make some now.

sudo mkdir /etc/nginx/certs/
cd /etc/nginx/certs/
sudo openssl req -new -x509 -nodes -newkey rsa:4096 -keyout emby.key -out emby.crt -days 1095
sudo chmod 400 emby.*

The -days 1095 flag sets how long the certificate will be valid; adjust it to suit your needs. If you don’t understand the other settings, don’t mess with them. Running the openssl command will prompt you for some info; you can fill the fields in or leave them blank, it doesn’t really matter. chmod 400 emby.* sets restrictive permissions on the certs so that other user accounts can’t mess with them.
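If you want to double-check what you just generated, openssl can print the certificate’s subject and validity dates:

sudo openssl x509 -in /etc/nginx/certs/emby.crt -noout -subject -dates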

Testing our nginx configuration

Once you’ve completed the above steps, let’s test our new set up.

sudo systemctl start nginx
sudo systemctl enable nginx
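It’s also worth letting nginx check the configuration syntax, and if you edit emby.conf later, a reload will pick up the changes without dropping connections:

sudo nginx -t
sudo systemctl reload nginx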

With the nginx service started, point your browser to emby.yourdomain.com. You should be redirected to https://emby.yourdomain.com and be presented with the emby server log in screen. Your browser may give you a warning about an untrusted certificate, but that’s just because we’re using a self-signed certificate. We just created the damn things so we know it’s safe. Add a security exception for the cert in Firefox or select Proceed anyway in Chrome. Either way, you shouldn’t be bothered with this message again (unless the certificate is changed).

emby login screen

Success! You’ve now got your own personal Netflix. You’ve got access to all of your media wherever you have an internet connection in a secure, encrypted way. You can make user accounts for your friends and family to use and give them an easy to remember url.

But what about in your home theater? emby’s web client is pretty slick, but no one wants to open up a web browser on their tv to view their local media files. That’s where kodi comes in.

kodi

Kodi (formerly known as xbmc) is open source home theater software. It provides a stylish, themeable, easy to use UI to display and organize all of your media. It has great remote control apps for Android and iOS and supports IR devices as well. If you’re serious about creating the ultimate home theater/media server set up, then at the very least you should look into it.

Kodi can play our local media just fine without emby, and in fact that was the set up I had for years. It works great in this regard and it can handle just about any file type you can throw at it. Guests were always amazed by my kodi set up and how I controlled it with my smart phone (using the excellent yatse app). Where kodi falls short is that while it can do streaming over DLNA, it doesn’t do transcoding. There’s also not a simple way to keep track of your played position/status if you have multiple kodi instances throughout the house. emby can solve these problems in an elegant way by acting as the backend for Kodi, while letting us keep the beautiful and functional Kodi UI for our big screen.

NOTE: The emby add-on for Kodi will replace your existing Kodi metadata database completely. If you have a working Kodi install, then be sure to make a backup of any user data before proceeding. You have been warned.

That being said, I have found emby to be much better at scraping titles and downloading all of the proper meta images (posters, fan art, banners, logos, etc.) Even though I spent years getting my Kodi database just right, a few minutes of work with emby and I’m happy to throw all that away. I’ve still got my backup of course, but I doubt I’ll ever use it. emby really is that good.

Install Kodi with:

sudo pacman -S kodi

As always, the Arch wiki has a ton of useful info on this package. Be sure to read up on the entry for Kodi if you’re unfamiliar with it.

Installing the emby add-on for Kodi

emby provides their own Kodi repository for their add-on, so installing it is just as simple as installing any other Kodi add-on. Download their repository .zip file.

wget http://www.mb3admin.com/downloads/addons/xbmb3c/kodi-repo/repository.emby.kodi-1.0.2.zip

Then open up Kodi and navigate to System > Settings > Add-ons > Install from zip file.

kodi system settings

kodi install from zip

Once the repo is installed, you’ll need to install the emby add-on itself. In Kodi, navigate to Add-ons > Video add-ons > Emby

install emby add on

Launch the add-on by going to Video add-ons > Emby. If your emby-server service is running, then it should auto detect emby-server for you. Just log in and emby will perform a metadata sync with emby-server. Again, this will completely replace any existing metadata that Kodi has collected. There are a few options you can set in the emby add-on, but if you’re running Kodi on the same machine that’s running the emby server the defaults should work for you.

The emby website has some great screen shots of the emby add-on being used with different Kodi skins to give you a taste of the awesomeness to come:

emby kodi screenshots

The emby add-on for Kodi doesn’t yet support streaming over the web, so if you want to run Kodi with the emby add-on set up on a different machine, you’ll have to share the media over the network using samba or nfs and then set up path substitution in emby, so that kodi can find the files on the network to play. You’ll still be able to track played position/status and see all of your metadata, etc., it just requires some more set up. I haven’t had to set that up yet, so I’ll save that for another guide.

In any case, you should now have a bad ass media center/server set up on Arch Linux!


OpenVPN in a Docker container on CentOS 7 with SystemD support

September 10, 2015

In this guide, we will set up Docker on a Digital Ocean CentOS 7 droplet, then set up OpenVPN. We’ll also discuss how to create a custom SystemD service file so that we can manage our container with systemctl commands.

VPNs (virtual private networks) are a great way to secure your internet traffic over untrusted connections. A VPN can provide easy access to your home network file server or other machines that you don’t want to expose to the internet directly. You can also use it to circumvent blocked sites or services on a company network, or as a proxy to access content that is restricted in your country (like Hulu or Netflix).

Docker allows you to easily deploy and manage Linux containers: isolated, virtualized environments. Their isolation makes them secure and easy to manage, especially for developers who can develop their container, without worrying about which distro or even OS it will be deployed to.

This guide assumes you already have a CentOS 7 server set up. I recommend using Digital Ocean, but any provider which gives you root access will work as well. This guide should also work with any distribution that uses SystemD. I liked the set up so much, that I implemented it on my home Arch server as well.

Docker

First, let’s install docker:

sudo yum -y update
sudo yum -y install docker docker-registry

Once that’s done, we’ll start Docker and then enable it to start at boot:

sudo systemctl start docker.service
sudo systemctl enable docker.service

If you don’t want to have to type sudo every time you use the docker command, then you’ll have to add your user to the group ‘docker’. Do so with:

sudo usermod -aG docker $USER
newgrp docker

The second command will make your current session aware of your new group.

VPN

Now that Docker is up and running, we’ll need to set up busybox and OpenVPN. busybox is a super minimal docker image designed for embedded systems. We just want it for its small footprint. All we’re running is a VPN, so there’s no need for extra fluff.

Get it set up with:

sudo docker run --name dvpn-data -v /etc/openvpn busybox
docker run --volumes-from dvpn-data --rm kylemanna/openvpn ovpn_genconfig -u udp://$DOMAIN:1194
docker run --volumes-from dvpn-data --rm -it kylemanna/openvpn ovpn_initpki

The first command pulls the busybox image and creates a data container called ‘dvpn-data’ that will hold the configuration files and certificates under /etc/openvpn. The second command generates the OpenVPN server configuration inside that volume. Replace $DOMAIN with the IP or domain of your server. Take note that port 1194 will need to be opened in your firewall. The third command initializes the PKI and generates the Diffie-Hellman parameters. It will take a long time so just be patient.

To open the required port in firewalld, issue the following command:

sudo firewall-cmd --permanent --zone=public --add-port=1194/udp
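Note that --permanent only writes the rule to the saved configuration; reload firewalld (or add the rule again without --permanent) so it also takes effect on the running firewall:

sudo firewall-cmd --reload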

Now we need to create the credentials that will allow your client to connect to the VPN.

sudo docker run --volumes-from dvpn-data --rm -it kylemanna/openvpn easyrsa build-client-full $CONNECTION_NAME nopass
sudo docker run --volumes-from dvpn-data --rm kylemanna/openvpn ovpn_getclient $CONNECTION_NAME > $CONNECTION_NAME.ovpn

Replace $CONNECTION_NAME with whatever you want to call your VPN connection. I named mine after my server name. You will be asked to create a password during the process, just pick one. It will take a while to do some crypto stuff, but eventually you’ll get an ovpn file in your current directory. This is what will allow you to add the connection to your client. You will need to securely move this file to the machine that will be connecting to your vpn. rsync or scp are good options. You could even use a usb thumb drive.
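For example, pulling the file down from the server with scp might look like this (myvpn stands in for whatever connection name you chose; adjust the user and host too):

scp user@yourserver.com:~/myvpn.ovpn .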

Since the first machine I used this VPN for was a Mac I use at work, I chose Tunnelblick for my client. After it’s installed, double clicking on the ovpn file is all the set up that was needed to add the connection to Tunnelblick on Mac. Consult your client’s documentation if this doesn’t work for you.
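If your client is a Linux box instead, the stock openvpn client can use the same file directly (run it with sudo so it can create the tun interface):

sudo openvpn --config myvpn.ovpn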

Manage your new container with SystemD

Now that we’ve got all of the docker stuff out of the way, let’s create a custom systemd service file so we can manage our new container with the systemctl command. SystemD service files are like init or Upstart scripts, but can be more robust and even take the place of Cron.

In CentOS 7 and Arch, these files are kept in /etc/systemd/system/ so we’ll put ours there too. Fire up your text editor of choice, for me it’s sudo vim /etc/systemd/system/dvpn.service, and paste in the following:

[Unit]
Description=OpenVPN Docker Container
Requires=docker.service
After=docker.service

[Service]
Restart=always
ExecStart=/usr/bin/docker run --name vpn --volumes-from dvpn-data --rm -p 1194:1194/udp --cap-add=NET_ADMIN kylemanna/openvpn
ExecReload=/usr/bin/docker stop vpn ; /usr/bin/docker run --name vpn --volumes-from dvpn-data --rm -p 1194:1194/udp --cap-add=NET_ADMIN kylemanna/openvpn
ExecStop=/usr/bin/docker stop vpn

[Install]
WantedBy=multi-user.target

There’s a lot going on here, so let’s break it down. The stuff in the [Unit] section is straightforward enough. We give our service file an arbitrary description. The Requires=docker.service and After=docker.service lines mean that this service won’t start until after the docker service has started.

The Restart=always means that our service will restart if it fails. The ExecStart= tells systemd what to run when we start the service. Let’s break this command down further, to help you understand what’s going on here:
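Here is the docker run command from ExecStart= again, with a quick note on each option (all of these come from the unit file above):

/usr/bin/docker run \
    --name vpn \
    --volumes-from dvpn-data \
    --rm \
    -p 1194:1194/udp \
    --cap-add=NET_ADMIN \
    kylemanna/openvpn

# --name vpn            gives the container a fixed name so we can stop it by name later
# --volumes-from        mounts the dvpn-data volume that holds our config and certificates
# --rm                  removes the container automatically when it stops
# -p 1194:1194/udp      publishes the OpenVPN UDP port on the host
# --cap-add=NET_ADMIN   lets the container configure networking (tun device, iptables rules)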

You can find more info about the docker run command in the Docker documentation; it has tons of options. Of course, you could also just check the man page with man docker-run.

Finally, the [Install] section is what allows the service to be enabled so that it starts at boot. You can read more about systemd service files in this excellent tutorial: Understanding Systemd Units and Unit Files.

Now that our service is created we can start it and enable it to load at boot with:

sudo systemctl start dvpn.service
sudo systemctl enable dvpn.service

You can also check its status with sudo systemctl status dvpn.service

And that’s it! You now have a SystemD managed, Docker controlled, OpenVPN set up. Enjoy!

UPDATE: I tried following my own guide to create a vpn on my home Arch Linux rig and ran into some problems. You might get iptables errors when attempting to start the dvpn service created above.

┌─[jay@hal]─(~) 
└─[13:19]$ docker run --name vpn --volumes-from ovpn-data --rm -p 1194:1194/udp --cap-add=NET_ADMIN kylemanna/openvpn
Error response from daemon: Cannot start container d8afcbc7069b0530893779c9abf4d10aa73ab53f820c310a8baf2b956f79877c: failed to create endpoint vpn on network bridge: iptables failed: iptables --wait -t nat -A DOCKER -p udp -d 0/0 --dport 1194 -j DNAT --to-destination 172.17.0.2:1194 ! -i docker0: iptables: No chain/target/match by that name.
 (exit status 1)

This is possibly due to the xt_conntrack kernel module not being loaded, or you may simply need to restart the firewalld and docker daemons to reload the iptables rules. Additional info can be found here. Try restarting the daemons first with:

sudo systemctl restart firewalld
sudo systemctl restart docker

If that doesn’t work, try loading the kernel module:

sudo modprobe xt_conntrack
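If loading the module is what fixes it, you can have it loaded automatically at boot via systemd’s modules-load.d:

echo xt_conntrack | sudo tee /etc/modules-load.d/xt_conntrack.conf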


Secure Your WordPress With a Free SSL Certificate in Apache on CentOS 7

July 3, 2015

It is simple enough to use a self-signed certificate to encrypt traffic to your site with SSL, but if you have a WordPress blog or any site that might see lots of visitors, then a self-signed certificate is not an option: How many average users are going to proceed to your site with a warning from their Web browser about an untrusted connection? This guide will show you start to finish how to get a free SSL certificate from StartSSL, install it on your server, configure apache, and set up WordPress to use https.

All of the information I’m using is from these guides:

If you get stuck, it might help to reference one of these guides. My set up is a CentOS 7 Digital Ocean droplet with apache and WordPress, but a lot of these steps should work for other distributions. Also, keep in mind that the free certificate offered by StartSSL is for non-commercial use only.

What you’ll need

StartSSL

Open up Chrome and head to startssl.com. Click on “Express Signup,” fill out the forms and hit continue. Check your email for the verification code. Click the link in the e-mail and you will be asked to generate a private key. Choose “High” for the grade. Once it’s done, click “Install” and Chrome will present you with a pop-up that says it has been successfully installed.

This is not your SSL certificate, it’s just a key that you will use to log in to the StartSSL Web site. Click on “Control Panel” and then “Authenticate.” Chrome will give you a pop-up to authenticate with the site.

Validate your domain

Once you’re in the Control Panel, click on the Validations Wizard tab and select “Domain Name Validation” from the drop-down menu. Choose whichever e-mail you have access to (like postmaster@yourdomain.com).

If you’re using Google Apps for your e-mail provider, you can just create a group called webmaster and give it public access permissions to post to the group. Add yourself to the group and you will get any messages sent to webmaster@yourdomain.com. This is an easy way to get extra addresses forwarding to your main Google Apps account without creating another user.

Check the inbox of whatever account you’re using for the validation e-mail and paste in the code.

Create the Certificate

In the Control Panel, click on the “Certificates Wizard” tab. Select “Web Server SSL/TLS Certificate” from the drop-down menu. Hit continue and enter a strong password. You’ll get a text box that contains your key. Copy its contents into your text editor of choice and save the file as ssl.key.

Hit continue and select your recently verified domain. Choose a sub-domain on the next screen. You probably want to pick ‘www’, but it’s up to you. Hit continue and you’ll get another text box, this time containing your certificate. Copy it to your text editor and save it as ssl.crt.

Download the CAs

Click on “Toolbox,” and download the StartCom Root CA and the StartSSL’s Class 1 Intermediate Server CA. Just right-click on the links with those names and hit save as.

Now we need to decrypt your private key so that your server can use it. Do so with:

openssl rsa -in ssl.key -out private.key
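If you want to sanity-check that the decrypted key actually matches your certificate, compare their modulus hashes; the two outputs should be identical:

openssl x509 -noout -modulus -in ssl.crt | openssl md5
openssl rsa -noout -modulus -in private.key | openssl md5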

You should now have 5 files:

ca.pem
private.key
sub.class1.server.ca.pem
ssl.crt
ssl.key

Note: the private.key file is the unencrypted version of your private key. Make certain that no one has access to it and that you delete it from your local machine once you upload it to the server. It isn’t necessary to upload the ssl.key file to your server. Let’s upload the ones we do need though, using scp:

scp -P 2222 {ca.pem,private.key,sub.class1.server.ca.pem,ssl.crt} user@yourserver.com:/home/user/

In this example, the ssh listening port is 2222, change it to whatever your port is. You can also specify a different destination directory by changing /home/user to whatever you want.

Apache

SSH into your server and let’s get it set up.

$ sudo yum install -y mod_ssl
$ sudo mkdir /etc/httpd/ssl
$ sudo mv {ca.pem,private.key,sub.class1.server.ca.pem,ssl.crt} /etc/httpd/ssl
$ sudo nano /etc/httpd/conf.d/ssl.conf

The first command will install the ssl module for apache, the second creates a directory for your certificate to live in. The third command will move all of your certificate files to your newly created ssl directory. The last will open up the ssl configuration file for apache. Look for this line:

<VirtualHost _default_:443>

Uncomment (delete the # at the beginning of the line) the DocumentRoot and ServerName lines and change example.com:443 to whatever your domain is. It is important that this match what you entered when you created the certificate.

Uncomment these lines as well and change the location of the files to match what’s shown here:

SSLCertificateFile /etc/httpd/ssl/ssl.crt
SSLCertificateKeyFile /etc/httpd/ssl/private.key
SSLCertificateChainFile /etc/httpd/ssl/sub.class1.server.ca.pem

Once you’re done, save and close the ssl.conf file and open up your site’s configuration file:

$ sudo vim /etc/httpd/sites-enabled/yoursitesname.com.conf

And add these lines before the closing </VirtualHost> tag:

RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}

This will force https for the whole site, so that even if users don’t type out https:// before your address, they will still be protected.

Restart the apache server:

$ sudo systemctl restart httpd
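If apache refuses to restart, a quick syntax check will usually point you at the offending line:

$ sudo apachectl configtest
Syntax OK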

Test that it works by going to https://yourdomain.com. You should see a little lock in the address bar. If you get an Untrusted Connection error, then you probably forgot to change the location of the certificate files from the defaults in the ssl.conf file. If you get a lock symbol, but with a triangular alert symbol, then you’ve got yourself a mixed content warning. No big deal, we’ll fix that in the next step.

WordPress

Log in to your WordPress admin portal and click on “Settings,” and change the “WordPress Address (URL)” from http://yourdomain.com to https://yourdomain.com. Make the same change to the “Site Address (URL)” field as well.

If you’ve got the Mixed Content warning, then you’ve got some work to do. This warning basically means that your Web browser has detected some content on the page that is being fetched with plain old http, meaning it’s not encrypted and secure. This could mean anything, but images you’ve added to posts is a great place to start. Take a look at one of your posts with images and view it in text mode. Scroll down to where your image is and check the html, if it looks like this:<img src="http://yourdomain.com/cat.jpg" ... then that’s probably the problem.

There are a number of ways to fix this. If you have a new site, then you can just click through your posts and add an ‘s’ after http to all of your image tags. If you have hundreds or more images, this could get tedious. This guide: Moving to HTTPS on WordPress has some SQL kung fu that might be able to automate the process for you. <iframe> or <link> tags could also be causing the problem if they are calling http. This stackoverflow post has some more info as well.
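For reference, the usual fix is a search-and-replace on the post content in the database. Back up first with mysqldump, and note that the database name (wordpress), the credentials, and the domain below are just placeholders for your own values:

mysqldump -u root -p wordpress > wordpress-backup.sql
mysql -u root -p wordpress -e "UPDATE wp_posts SET post_content = REPLACE(post_content, 'http://yourdomain.com', 'https://yourdomain.com');"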

Keep an eye out for mixed content warnings on other pages, but otherwise you should be done!
