Using Keybase PGP keys for GitHub GPG verification

September 28, 2017

I recently started using the excellent Keybase app for encrypted chat, and all the crypto goodness motivated me to finally set up GPG-verified commits on GitHub. I started with this helpful article: Github GPG + Keybase PGP, which I recommend you take a look at, but I had to do a few more steps that I wanted to document.

Installing Keybase

Keybase is easy to install. On Arch, it's a simple install from the AUR:

packer -S keybase-bin  

Once installed, you’ll need to start up the app and create an account if you haven’t already.

Generate a key

Generate a new key with keybase, and upload it to your profile. Alternatively, use keybase pgp select to use an existing key. To use this key for GitHub verified commits, it will need to have the same email address as your GitHub account.

$ keybase pgp gen

Export your keybase secret key to your gpg keyring:

$ keybase pgp export -s | gpg --allow-secret-key-import --import

List the keys in your gpg keyring and locate your keybase key:

$ gpg --list-secret-keys --keyid-format LONG
sec   rsa4096/C17228D898051A91 2017-01-30 [SC]
uid                 [ultimate] Jay Baker 
ssb   rsa4096/7C87801D5E56F673 2017-01-30 [E]

sec   rsa4096/C24CD98AB0900706 2017-09-28 [SC] [expires: 2033-09-24]
uid                 [unknown] Jay Baker 
uid                 [unknown] Jay Baker 
ssb   rsa4096/4599729752E8D5C4 2017-09-28 [E] [expires: 2033-09-24]

I have two keys here; the second one is the one I made with keybase pgp gen. We want to grab the key id from its sec line:
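If you'd rather not eyeball the listing, a one-liner (my own convenience, not from the original article) can pull the long key id out of the sec line:

```shell
# Extract the long key id from the "sec" line of the secret key listing.
# awk splits "rsa4096/C24CD98AB0900706" on "/" and prints the second half.
gpg --list-secret-keys --keyid-format LONG | awk '/^sec/{split($2, a, "/"); print a[2]}'
```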

sec   rsa4096/C24CD98AB0900706 2017-09-28 [SC] [expires: 2033-09-24]

Let’s set a trust level for our key. Since we made it ourselves, we can give it ultimate trust.

$ gpg --edit-key C24CD98AB0900706
gpg (GnuPG) 2.2.1; Copyright (C) 2017 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

Secret key is available.

sec  rsa4096/C24CD98AB0900706
     created: 2017-09-28  expires: 2033-09-24  usage: SC  
     trust: unknown     validity: unknown
ssb  rsa4096/4599729752E8D5C4
     created: 2017-09-28  expires: 2033-09-24  usage: E   
[unknown] (1). Jay Baker 
[unknown] (2)  Jay Baker 

gpg> trust

Please decide how far you trust this user to correctly verify other users' keys
(by looking at passports, checking fingerprints from different sources, etc.)

  1 = I don't know or won't say
  2 = I do NOT trust
  3 = I trust marginally
  4 = I trust fully
  5 = I trust ultimately
  m = back to the main menu

Your decision? 5
Do you really want to set this key to ultimate trust? (y/N) y

sec  rsa4096/C24CD98AB0900706
     created: 2017-09-28  expires: 2033-09-24  usage: SC  
     trust: ultimate      validity: ultimate
ssb  rsa4096/4599729752E8D5C4
     created: 2017-09-28  expires: 2033-09-24  usage: E   
[ultimate] (1). Jay Baker 
[ultimate] (2)  Jay Baker 

gpg> quit

Now we can decide to do global or per-repository signing with git. This step is optional; you can always manually sign commits with git commit -S or git commit --gpg-sign on a per-commit basis.

$ git config commit.gpgsign true

This is per repository; add a --global flag after config if you want to enable gpg signing globally for git. If we change our minds later and want to disable signing, just run the same command with false. And if you want to do a single commit without signing:

$ git commit --no-gpg-sign

Now let’s tell git which gpg key to use:

$ git config user.signingkey C24CD98AB0900706 # per repository

Again, add a --global flag if you want.

To verify that our commit was signed:

$ git log --show-signature
commit 1f10113fadeae03fd8de870fb18c8563d0b3c602 (HEAD -> master)
gpg: Signature made Thu 28 Sep 2017 17:23:20 EDT
gpg:                using RSA key F21FC721B22B0C176BAFBE35C24CD98AB0900706
gpg: Good signature from "Jay Baker " [ultimate]
gpg:                 aka "Jay Baker " [ultimate]
Author: Jay Baker 
Date:   Thu Sep 28 17:23:20 2017 -0400
	detailed commit message goes here

Add our key to Github

Finally, we need to add our key to GitHub. Remember, your key will need the same email address as your GitHub user email. You can add more email addresses to your key with gpg --edit-key and then the adduid command.
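On newer GnuPG versions (2.1.13+) this can also be done non-interactively with --quick-add-uid. Here's a sketch in a throwaway keyring; the names and email addresses are examples, and in practice you'd use your real key id:

```shell
# Demonstration in a scratch keyring: generate a key, then add a second
# email address (uid) to it. Substitute your real key id in practice.
export GNUPGHOME="$(mktemp -d)"; chmod 700 "$GNUPGHOME"
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-generate-key "Jay Baker <jay@example.com>" default default never
KEYID="$(gpg --list-secret-keys --with-colons | awk -F: '/^fpr/{print $10; exit}')"
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-add-uid "$KEYID" "Jay Baker <second@example.com>"
gpg --list-keys "$KEYID"   # both uids should now appear
```

Remember to re-export and re-upload the public key afterwards so GitHub sees the new uid.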

Let’s get our public key from keybase:

$ keybase pgp export


Copy that, head to GitHub's Settings page under "SSH and GPG keys", click "New GPG key", paste it in, then click "Add GPG key" to save it.

That’s it! You can now have verified commits on Github with your keybase pgp key!


Installing Antergos Linux on a MacBook 4,1

March 12, 2016

Linux is a great way to breathe life into old hardware, but getting it to play nice with Mac hardware can be difficult. This guide will show you how to replace OSX entirely with Antergos Linux on a MacBook 4,1.

Before you begin

This guide assumes you want to completely remove Mac OSX from your hard drive and completely replace it with Antergos Linux. It is possible to dual boot with OSX, or even triple boot with Windows as well. For me, all I wanted was to put Linux on my aging MacBook. I don’t care about having OSX and I definitely don’t care about Windows.

This method doesn’t require any special bootloaders (like rEFInd), using only systemd-boot to get the job done. The upside to this is that it’s simple and follows The Arch Way. The downside is that you’ll have to hold down the alt/option key each time you start or reboot your MacBook. For me, this is a small price to pay, but if you think it will bother you, you’ll have to follow another guide. Good luck!

Finally, you should back up any important data before proceeding. It may also be useful to ensure you have an OSX installation medium handy, in case you change your mind and wish to reinstall OSX. That being said, let’s get to it!

Prepare the installation medium

You will need a USB drive with at least 8 GB of space. Head to the Antergos website and download the 64-bit ISO. Follow these steps to create a bootable USB in Mac OSX. If you're creating the USB on Linux, then just do:

sudo dd if=/path/to/downloaded.iso of=/dev/sdX

Be patient; it will take a while. Replace /dev/sdX with the name of your inserted USB drive. Be careful! The dd command can destroy your disks, so always double-check that you're specifying the correct drive with of=. A surefire way to make sure you've got the right drive is to remove the USB drive, open up a terminal window, and enter lsblk. This will print a list of all the drives connected to the system. Now insert the USB drive and enter lsblk again; your USB drive's designation will be the one that wasn't there before.
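That before/after comparison can even be scripted. A sketch (the temp file paths are arbitrary):

```shell
# Snapshot device names before plugging in the USB drive...
lsblk -dno NAME | sort > /tmp/drives-before
# ...insert the drive, then snapshot again and print only the new entry:
lsblk -dno NAME | sort > /tmp/drives-after
comm -13 /tmp/drives-before /tmp/drives-after
```

comm -13 suppresses lines unique to the first file and lines common to both, leaving just the device that appeared.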

You could, of course, just burn the Antergos iso to a DVD, but who has time for that?

Install Antergos

Insert your newly created Antergos USB installer and reboot your MacBook. When you hear the Mac startup sound, press and hold the alt/option key. Click the little arrow underneath the orange USB drive symbol that says “EFI boot.”

Once the boot menu loads for the Antergos ISO, I found that you'll have to select the first option in the menu rather than the default. Select it and wait for the Gnome live environment to load. Once it loads, you'll be presented with the beautiful Cnchi installer. Click through the various options, selecting your preferred desktop environment.

Personally, I went with Cinnamon because my girlfriend will be using this MacBook a lot and she's already familiar with Cinnamon. The MacBook 4,1 seems to handle it pretty well (especially if you turn off some of the visual effects), but you'll get even better performance with MATE, xfce, or Openbox. I would recommend against Gnome or KDE on a MacBook 4,1, however. They will run and perform tolerably for the most part, but they can become painfully slow at times. Cinnamon is about as flashy as you can get for a desktop environment on the 4,1.

When you get to the drive set up section, you can leave the default choice to let the installer prepare the drive, but in the Bootloader section, select systemd-boot. It won't work out of the box on the MacBook 4,1, but it will lay the groundwork for our next steps.

Once you’ve selected all of your preferred options in the installer, wait for the packages to be downloaded and installed. Depending on your selections and your internet connection, it could take a while. Once the installation completes, you’ll get a warning about systemd-boot possibly not installing correctly. No big deal, we’re about to fix that. Click the button to restart later.

Repair systemd-boot

Open up a terminal (Press the Command key to open up Gnome’s expose thing, and start typing “terminal”) and verify your partitions with lsblk.

Now, we need to mount the partitions that we’ve just installed Antergos on. If you let the installer partition the drive, then the following commands will work. If you set your own partition scheme, then you’ll need to make sure that your / partition gets mounted to /mnt and your /boot partition gets mounted to /mnt/boot.

sudo mount /dev/sda2 /mnt
sudo mount /dev/sda1 /mnt/boot

Now, we’ll need to chroot to our mounted partitions:

sudo arch-chroot /mnt

Now we'll first verify that dosfstools is installed, then set up systemd-boot and create a boot menu entry.

pacman -S dosfstools

bootctl --path=/boot install
nano /boot/loader/entries/antergos.conf

Paste in the following and save with ctrl+x (confirm with y when prompted):

title Antergos
linux /vmlinuz-linux
initrd /initramfs-linux.img
options root=/dev/sda2 rw elevator=deadline quiet splash resume=/dev/sda3 nmi_watchdog=0

Next, we update /boot/loader/loader.conf to recognize our custom entry:

nano /boot/loader/loader.conf

Paste in the following and save with ctrl+x. The timeout is the number of seconds before the default option is selected; since we only have one option, I've set it to 1. Change it to whatever you like. Leave editor at 0, though; it's a security risk to change it. There will be some options already there, just comment them out with #.

default  antergos
timeout  1
editor   0

Finally, we update systemd-boot to recognize the configuration changes and exit out of our chroot:

bootctl update
exit

Restart your MacBook, and remember to hold down the alt/option key when you hear the Mac startup sound. You’ll see your hard drive display with the text “EFI Boot” underneath. Click the little arrow to boot into your Antergos installation. Since we didn’t set up rEFInd, we’ll have to do this every time we power on or reboot the MacBook. It’s sort of a pain, but it’s the price you have to pay to have a Linux only installation, and come on, it’s not really that bad.


Arch Linux + emby + Kodi + nginx: The Ultimate Media Server

November 30, 2015

I’ve been using Arch Linux as my media server/htpc for several years now and it’s been incredibly reliable. Some people prefer a versioned distribution with an LTS release for something like a media server, but I want the freshest packages, and I don’t want to deal with the headache of upgrading/reinstalling when an LTS release outlives its usefulness or won’t allow me to get the new packages I want. Recently, I decided to give emby a try, after hearing about it on one of my favorite podcasts, the Linux Action Show.

This guide assumes you have a working Arch Linux installation. If you're starting completely from scratch, then consult the excellent Arch Beginner's Guide. And if you don't have time to go through a full Arch installation (though I highly recommend it, as you'll learn a lot about Linux in the process), then you can always just go with Antergos, which is basically Arch with a nice installer and some sane defaults.


emby is a media server that can manage and stream your movies, TV shows, music, and home videos to a plethora of devices. It's a lot like Plex, but it comes with all of the best features free to use. It works on a server/client model: you install and configure emby server, in our case on an Arch Linux rig (though they have support for many distributions and operating systems), and then access it from a client like your web browser, a DLNA client like a PlayStation 4, Kodi media center, the Android or iOS apps, or any number of other options. Check out their download page to see what I mean. Their cross-platform support is some of the best I've seen for any app in a while, let alone a media server.


Installing emby

First, let’s create a user account for emby:

sudo useradd -r -s /bin/false emby

emby is available in the community repos. Install it with:

sudo pacman -S emby-server

Now that it’s installed, we’ll need to start the emby service with systemd:

sudo systemctl start emby-server
sudo systemctl enable emby-server

Check to make sure the emby-server service started properly:

sudo systemctl status emby-server
● emby-server.service - Emby brings together your videos, music, photos, and live television.
   Loaded: loaded (/usr/lib/systemd/system/emby-server.service; enabled; vendor preset: disabled)
   Active: active (running) since Sun 2015-11-29 15:42:16 EST; 23h ago
 Main PID: 15379 (mono)
   CGroup: /system.slice/emby-server.service
           └─15379 /usr/bin/mono /usr/lib/emby-server/MediaBrowser.Server.Mono.exe -programdata /var/lib/emby -ffmpeg /usr/bin/ffmpeg -ffprobe /usr/bin/ffprobe

You should see something like that. If you get errors, check the logs with sudo journalctl -xe. When I first tried to start the service, I got an error saying that emby didn’t have permission to write to the /var/lib/emby/logs directory. If you get an error like that, you can fix it with sudo chown -R emby:emby /var/lib/emby

Once we’ve got the emby-server service started, let’s continue with the set up by pointing your web browser to http://localhost:8096. If you did everything correctly, you’ll see the emby welcome screen. The emby set-up wizard is really very easy to use, so I won’t go over every step, but you basically create a user account and tell emby where your media folders are and what kind of media they contain.

NOTE: emby can only scrape and recognize your media if it is named properly. It supports a number of naming conventions, so you’ll just have to pick one and make sure your media conforms to it. My media was already named properly, so I didn’t have to worry about this step, but there are a few tools, like filebot that can help you get your files in order.
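As a rough illustration (one of several layouts emby understands; the titles and file names here are just examples), a library organized like this scrapes cleanly:

```
Movies/
    The Matrix (1999)/
        The Matrix (1999).mkv
TV Shows/
    Firefly/
        Season 01/
            Firefly - S01E01 - Serenity.mkv
```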

Once you get through the initial set up, take a while to look through the options in the 'Manage Server' section:

Emby Manage Server Screen

There are a lot of options you can configure here. Familiarize yourself with all of the options and decide which are best for your set up. You can choose what libraries each user has access to, the max bitrate each user can stream at (useful if you live under the tyrannical rule of Comcast’s data caps), and where to save metadata files (fanart, posters, etc.).

Right now you have a working emby server. You could just open port 8096 in your firewall and access your server from the web at http://your.external.ip.address:8096; emby even has built-in ssl (https) support. But I already have the nginx web server set up with ssl, and I don't want the additional security risk and hassle of opening another port in my router.


nginx (pronounced "engine X") is an awesome web server that's fast, configurable, and reliable. The configuration can be a little tricky for new users and users coming from Apache, but I actually prefer the nginx syntax now that I'm used to it. In any case, you can just cut and paste my configuration below to simplify the process.

Basically, nginx is going to act as a go-between for clients wanting to access emby (and other web services we might be running, like transmission, subsonic, or owncloud). That means we really only need 3 ports open in our router/firewall: http (80), https (443), and ssh (22). It also means we can set up a custom sub-domain for each service.

Installing and Configuring Nginx

First, install nginx with:

sudo pacman -S nginx

Once it’s installed, we’ll need to create a configuration file for our emby reverse proxy. Create /etc/nginx/conf.d/emby.conf in your text editor of choice and paste in my configuration below, making sure to change the server name to your own:

server {
    listen 80;
    server_name emby.example.com; # change this to your domain!
    return 301 https://$host$request_uri;
}

server {
    server_name emby.example.com; # change this to your domain!
    listen 443 ssl spdy;

    ssl_certificate           /etc/nginx/certs/emby.crt;
    ssl_certificate_key       /etc/nginx/certs/emby.key;
    ssl_prefer_server_ciphers on;
    ssl_protocols             TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers               'AES256+EECDH:AES256+EDH:!aNULL';
    resolver                  valid=300s; # or your preferred DNS resolver
    resolver_timeout          5s;
    ssl_stapling              on;
    ssl_stapling_verify       on;
    keepalive_timeout         180;
    add_header                Strict-Transport-Security max-age=31536000;
    client_max_body_size      1024M;

    location / {
        # Send traffic to the emby backend
        proxy_pass http://localhost:8096;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Protocol $scheme;
        proxy_redirect off;

        # Send websocket data to the backend as well
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}

There's a lot going on there, but this configuration takes any http requests to our domain and redirects them to use https. It then creates a proxy so that requests to the domain get passed through to the emby server listening locally on port 8096. That's how we can keep the 8096 port closed: the emby clients connect to nginx, which then talks to the local emby server. Any data you send to or from your emby server will be encrypted and secured with ssl.


Our nginx proxy isn't ready to go yet; we still need to create some self-signed certificates. Now, you could of course purchase an ssl certificate, but that's sort of overkill for a home media server if you ask me. If you already have some certs to use, put them in /etc/nginx/certs. Otherwise, let's make some now.

sudo mkdir /etc/nginx/certs/
cd /etc/nginx/certs/
sudo openssl req -new -x509 -nodes -newkey rsa:4096 -keyout emby.key -out emby.crt -days 1095
sudo chmod 400 emby.*

The -days 1095 flag sets how long the certificate will be valid; adjust it to suit your needs. If you don't understand the other settings, don't mess with them. Running the openssl command will prompt you for some info; you can leave these blank or fill them in, it doesn't really matter. chmod 400 emby.* sets restrictive permissions on the certs so that other user accounts can't mess with them.
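If you want to double-check what you just generated (a quick sanity check, not part of the original steps), openssl can print the certificate's subject and validity window:

```shell
# Print the subject and the notBefore/notAfter dates of the new certificate.
openssl x509 -in /etc/nginx/certs/emby.crt -noout -subject -dates
```

The notAfter date should land roughly 1095 days out, matching the -days flag above.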

Testing our nginx configuration

Once you’ve completed the above steps, let’s test our new set up.

sudo systemctl start nginx
sudo systemctl enable nginx
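Before (or after) starting the service, it's worth letting nginx validate the new configuration; this standard check catches typos like a missing brace:

```shell
# Test the configuration files for syntax errors without touching the running server.
sudo nginx -t
```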

With the nginx service started, point your browser to your domain. You should be redirected to the https version and be presented with the emby server log in screen. Your browser may give you a warning about an untrusted certificate, but that's just because we're using a self-signed certificate. We just created the damn things, so we know they're safe. Add a security exception for the cert in Firefox or select Proceed anyway in Chrome. Either way, you shouldn't be bothered with this message again (unless the certificate changes).

emby login screen

Success! You’ve now got your own personal Netflix. You’ve got access to all of your media wherever you have an internet connection in a secure, encrypted way. You can make user accounts for your friends and family to use and give them an easy to remember url.

But what about in your home theater? emby’s web client is pretty slick, but no one wants to open up a web browser on their tv to view their local media files. That’s where kodi comes in.


Kodi (formerly known as xbmc) is open source home theater software. It provides a stylish, themeable, easy to use UI to display and organize all of your media. It has great remote control apps for Android and iOS and supports IR devices as well. If you’re serious about creating the ultimate home theater/media server set up, then at the very least you should look into it.

Kodi can play our local media just fine without emby, and in fact that was the set up I had for years. It works great in this regard and it can handle just about any file type you can throw at it. Guests were always amazed by my kodi set up and how I controlled it with my smart phone (using the excellent yatse app). Where Kodi falls short is that while it can do streaming over DLNA, it doesn't do transcoding. There's also no simple way to keep track of your played position/status if you have multiple kodi instances throughout the house. emby solves these problems in an elegant way by acting as the backend for Kodi, while letting us keep the beautiful and functional Kodi UI for our big screen.

NOTE: The emby add-on for Kodi will replace your existing Kodi metadata database completely. If you have a working Kodi install, then be sure to make a back up of any user data before proceeding. You have been warned.

That being said, I have found emby to be much better at scraping titles and downloading all of the proper meta images (posters, fan art, banners, logos, etc.) Even though I spent years getting my Kodi database just right, a few minutes of work with emby and I’m happy to throw all that away. I’ve still got my backup of course, but I doubt I’ll ever use it. emby really is that good.

Install Kodi with:

sudo pacman -S kodi

As always, the Arch wiki has a ton of useful info on this package. Be sure to read up on the entry for Kodi if you’re unfamiliar with it.

Installing the emby add-on for Kodi

emby provides their own Kodi repository for their add-on, so installing it is just as simple as installing any other Kodi add-on. Download their repository .zip file.


Then open up Kodi and navigate to System > Settings > Add-ons > Install from zip file.

kodi system settings

kodi install from zip

Once the repo is installed, you’ll need to install the emby add-on itself. In Kodi, navigate to Add-ons > Video add-ons > Emby

install emby add on

Launch the add-on by going to Video add-ons > Emby. If your emby-server service is running, then it should auto detect emby-server for you. Just log in and emby will perform a metadata sync with emby-server. Again, this will completely replace any existing metadata that Kodi has collected. There are a few options you can set in the emby add-on, but if you’re running Kodi on the same machine that’s running the emby server the defaults should work for you.

The emby website has some great screen shots of the emby add-on being used with different Kodi skins to give you a taste of the awesomeness to come:

emby kodi screenshots

The emby add-on for Kodi doesn’t yet support streaming over the web, so if you want to run Kodi with the emby add-on set up on a different machine, you’ll have to share the media over the network using samba or nfs and then set up path substitution in emby, so that kodi can find the files on the network to play. You’ll still be able to track played position/status and see all of your metadata, etc., it just requires some more set up. I haven’t had to set that up yet, so I’ll save that for another guide.

In any case, you should now have a bad ass media center/server set up on Arch Linux!


OpenVPN in a Docker container on CentOS 7 with SystemD support

September 10, 2015

In this guide, we will set up Docker on a Digital Ocean CentOS 7 droplet, then set up OpenVPN. We’ll also discuss how to create a custom SystemD service file so that we can manage our container with systemctl commands.

VPNs (virtual private networks) are a great way to secure your internet traffic over untrusted connections. A VPN can provide easy access to your home network's file server or other machines that you don't want to expose to the internet directly. You can also use it to circumvent blocked sites or services on a company network, or as a proxy to access restricted content in your country (like Hulu or Netflix).

Docker allows you to easily deploy and manage Linux containers: isolated, virtualized environments. Their isolation makes them secure and portable; developers can build a container without worrying about which distro or even OS it will be deployed to.

This guide assumes you already have a CentOS 7 server set up. I recommend using Digital Ocean, but any provider which gives you root access will work as well. This guide should also work with any distribution that uses SystemD. I liked the set up so much, that I implemented it on my home Arch server as well.


First, let’s install docker:

sudo yum -y update
sudo yum -y install docker docker-registry

Once that’s done, we’ll start Docker and then enable it to start at boot:

sudo systemctl start docker.service
sudo systemctl enable docker.service

If you don’t want to have to type sudo every time you use the docker command, then you’ll have to add your user to the group ‘docker’. Do so with:

sudo usermod -aG docker $USER
newgrp docker

The second command will make your current session aware of your new group.


Now that Docker is up and running, we'll need to set up busybox and OpenVPN. busybox is a super minimal Docker image designed for embedded systems; we just want it for its small footprint. All we're running is a VPN, so there's no need for extra fluff.

Get it set up with:

sudo docker run --name dvpn-data -v /etc/openvpn busybox
docker run --volumes-from dvpn-data --rm kylemanna/openvpn ovpn_genconfig -u udp://$DOMAIN:1194
docker run --volumes-from dvpn-data --rm -it kylemanna/openvpn ovpn_initpki

The first command pulls the busybox image and creates a data container called 'dvpn-data' that will hold the configuration files and certificates. The second command generates the OpenVPN configuration; replace $DOMAIN with the IP or domain name of your server, and take note that port 1194 will need to be opened in your firewall. The third command initializes the PKI and generates Diffie-Hellman parameters; it will take a long time, so just be patient.

To open the required port in firewalld, issue the following command:

sudo firewall-cmd --permanent --zone=public --add-port=1194/udp

Now we need to create the credentials that will allow your client to connect to the VPN.

sudo docker run --volumes-from dvpn-data --rm -it kylemanna/openvpn easyrsa build-client-full $CONNECTION_NAME nopass
sudo docker run --volumes-from dvpn-data --rm kylemanna/openvpn ovpn_getclient $CONNECTION_NAME > $CONNECTION_NAME.ovpn

Replace $CONNECTION_NAME with whatever you want to call your VPN connection; I named mine after my server name. You will be asked for a passphrase during the process, so just pick one. It will take a while to do some crypto stuff, but eventually you'll get an .ovpn file in your current directory. This file is what allows you to add the connection to your client, so you'll need to move it securely to the machine that will be connecting to your VPN; rsync or scp are good options, or you could even use a USB thumb drive.

Since the first machine I used this VPN for was a Mac I use at work, I chose Tunnelblick for my client. After it’s installed, double clicking on the ovpn file is all the set up that was needed to add the connection to Tunnelblick on Mac. Consult your client’s documentation if this doesn’t work for you.

Manage your new container with SystemD

Now that we’ve got all of the docker stuff out of the way, let’s create a custom systemd service file so we can manage our new container with the systemctl command. SystemD service files are like init or Upstart scripts, but can be more robust and even take the place of Cron.

In CentOS 7 and Arch, these files are kept in /etc/systemd/system/, so we'll put ours there too. Fire up your text editor of choice (for me it's sudo vim /etc/systemd/system/dvpn.service) and paste in the following:

[Unit]
Description=OpenVPN Docker Container
Requires=docker.service
After=docker.service

[Service]
Restart=always
ExecStart=/usr/bin/docker run --name vpn --volumes-from dvpn-data --rm -p 1194:1194/udp --cap-add=NET_ADMIN kylemanna/openvpn
ExecReload=/usr/bin/docker stop vpn && /usr/bin/docker run --name vpn --volumes-from dvpn-data --rm -p 1194:1194/udp --cap-add=NET_ADMIN kylemanna/openvpn
ExecStop=/usr/bin/docker stop vpn

[Install]
WantedBy=multi-user.target

There's a lot going on here, so let's break it down. The stuff in the [Unit] section is straightforward enough: we give our service file an arbitrary description, and Requires=docker.service and After=docker.service mean that this service won't start until after the Docker service has started.

Restart=always means that our service will restart if it fails. ExecStart= tells systemd what to run when we start the service.
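The docker run command in ExecStart packs in several flags, so here is the same command annotated (these are standard docker run options):

```shell
# --name vpn                name the container so ExecStop/ExecReload can reference it
# --volumes-from dvpn-data  reuse the config and certs stored in the data container
# --rm                      delete the container when it exits (state lives in dvpn-data)
# -p 1194:1194/udp          publish OpenVPN's UDP port on the host
# --cap-add=NET_ADMIN       grant the capability needed to create tun devices and manage routes
docker run --name vpn --volumes-from dvpn-data --rm -p 1194:1194/udp --cap-add=NET_ADMIN kylemanna/openvpn
```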

You can find more info about the docker run command in the Docker documentation; it has tons of options. Of course, you could also just check the man page with man docker-run.

Finally, the [Install] section is what allows the service to be enabled to start at boot. You can read more about systemd service files in this excellent tutorial: Understanding Systemd Units and Unit Files.

Now that our service is created we can start it and enable it to load at boot with:

sudo systemctl start dvpn.service
sudo systemctl enable dvpn.service

You can also check its status with sudo systemctl status dvpn.service

And that’s it! You now have a SystemD managed, Docker controlled, OpenVPN set up. Enjoy!

UPDATE: I tried following my own guide to create a VPN on my home Arch Linux rig and ran into some problems. You might get iptables errors when attempting to start the dvpn service created above.

└─[13:19]$ docker run --name vpn --volumes-from ovpn-data --rm -p 1194:1194/udp --cap-add=NET_ADMIN kylemanna/openvpn
Error response from daemon: Cannot start container d8afcbc7069b0530893779c9abf4d10aa73ab53f820c310a8baf2b956f79877c: failed to create endpoint vpn on network bridge: iptables failed: iptables --wait -t nat -A DOCKER -p udp -d 0/0 --dport 1194 -j DNAT --to-destination ! -i docker0: iptables: No chain/target/match by that name.
 (exit status 1)

It is possibly due to either the xt_conntrack kernel module not loading or because you simply need to restart the firewalld and docker daemons to reload the iptables rules. Additional info can be found here. Try restarting the daemons first with:

sudo systemctl restart firewalld
sudo systemctl restart docker

If that doesn’t work, try loading the kernel module:

sudo modprobe xt_conntrack


Transmission Web Interface Reverse Proxy With SSL Using nginx on Arch Linux

July 1, 2015

Transmission has been my favorite torrent client for years now, and one of my favorite features is its excellent Web interface, which lets you control your torrenting over the web, allowing you to add, pause, and remove torrents when you're away from whatever rig you have set up for that purpose.

The only problem with the Web interface is that it just uses unencrypted http. You can password protect the interface, but your password is still sent in cleartext, meaning anyone listening in on your connection can see your password or any other data exchanged between transmission and wherever you're accessing it from. Let's fix that!

Note: This guide applies to Arch Linux, but should work for most other distributions, especially if they use systemd.


Transmission is available in the official Arch repositories, but there are several packages to choose from: transmission-cli, transmission-remote-cli, transmission-gtk, and transmission-qt. If this installation will be for a desktop machine, you may want to install the gtk or qt versions, but for our purposes, we’re going to go with transmission-cli and transmission-remote-cli. The first one, transmission-cli, will give us the transmission daemon and the web interface. transmission-remote-cli will let us access transmission through a curses based interface that you may find useful. Install them with:

$ sudo pacman -S transmission-cli transmission-remote-cli

Now that we’ve got them installed, we need to configure the daemon to set up the Web interface. You’ll need to start the transmission daemon or GUI version at least once to create an initial configuration file. Do so with:

$ sudo systemctl start transmission

Depending on which user you run transmission as, there’s a different location for the config file. If you’re running transmission as the user transmission (which is the default), then your config will be located at /var/lib/transmission/.config/transmission-daemon/settings.json. If you’ve set it to run as your user, then the config folder will be located at ~/.config/transmission-daemon/settings.json. If you’re using the gtk or qt version of transmission, then your config files are located at ~/.config/transmission. Note that the daemon rewrites settings.json when it shuts down, so stop it with sudo systemctl stop transmission before editing, or your changes will be overwritten.

Open it up in your editor of choice and look for these lines:
(Note: they do not appear in this order; I’ve pasted in only the relevant lines. You can read more about what each setting does here.)

"download-dir": "/home/user/Torrents", #Set this to wherever you want your torrents to be downloaded to.

"peer-port": 51413, #This is the port that transmission will use to actually send data using the bittorrent protocol.

"rpc-enabled": true, #This enables the Web interface. Set it to true.

"rpc-password": "your_password", #Choose a good password.

"rpc-port": 9091, #Change the port if you want, or just make note of the default 9091.

After editing the config file, restart transmission so the changes will take effect with:

$ sudo systemctl restart transmission

Test that the Web interface is working by going to http://your.ip.address:9091/transmission/web/ … note that the trailing / after web is required; omitting it will prevent the interface from loading.

Now that the transmission daemon is started, you can access it via the command line with transmission-remote-cli. It is a perfectly functional way to control transmission, and assuming you have SSH set up securely, it’s safe and encrypted. I like to have it installed in case I mess up my nginx setup somehow but still need to access the transmission daemon remotely.


nginx is an http server, like apache, that can be used to serve up Web pages or, in this case, act as a reverse proxy.

First, install it with:

$ sudo pacman -S nginx

Now we need to set up an ssl certificate:

$ cd /etc/nginx
$ sudo openssl req -x509 -nodes -days 365 -newkey rsa:4096 -keyout /etc/nginx/cert.key -out /etc/nginx/cert.crt

You will be prompted to enter some info. Keep in mind that this will be visible to anyone who inspects the certificate. The -days 365 sets how long the certificate will be valid; change this if you like. This command will create two files, cert.key and cert.crt, which we will later reference in our nginx.conf.
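If you’d rather skip the interactive prompts, the same self-signed certificate can be generated non-interactively with -subj (a sketch: /CN=your.domain is a placeholder, and I’m writing to /tmp purely for illustration; in practice use the /etc/nginx/cert.key and cert.crt paths above):

```shell
# Generate a self-signed cert with the subject supplied inline, no prompts.
openssl req -x509 -nodes -days 365 -newkey rsa:4096 \
  -subj "/CN=your.domain" \
  -keyout /tmp/cert.key -out /tmp/cert.crt 2>/dev/null

# Inspect the subject and expiry date of what was generated.
openssl x509 -in /tmp/cert.crt -noout -subject -enddate
```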

Let’s get nginx set up. Open /etc/nginx/nginx.conf and add the following line inside the http block:

include /etc/nginx/conf.d/*.conf;

In some distributions it might be there by default, but it’s not in Arch. Now we need to add a .conf file for our ssl reverse proxy:

$ cd /etc/nginx
$ sudo mkdir conf.d
$ sudo nano conf.d/transmission.conf

Paste in the following:

server {
    listen 80;
    server_name your.domain;
    return 301 https://$host$request_uri;
}

server {

    listen 443 ssl;
    server_name your.domain;

    ssl_certificate           /etc/nginx/cert.crt;
    ssl_certificate_key       /etc/nginx/cert.key;

    ssl_session_cache  builtin:1000  shared:SSL:10m;
    ssl_protocols  TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4;
    ssl_prefer_server_ciphers on;

    location / {

      proxy_set_header        Host $host;
      proxy_set_header        X-Real-IP $remote_addr;
      proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header        X-Forwarded-Proto $scheme;

      # Fix the "It appears that your reverse proxy set up is broken" error.
      proxy_pass          http://localhost:9091/;
      proxy_read_timeout  90;

      proxy_redirect      http://localhost:9091/ https://your.domain/;
    }
}

That might seem complicated, but there are actually only a few things you’ll need to modify. Change server_name to whatever your domain is (you could also use an IP address here if you have a static IP). The ssl_certificate /etc/nginx/cert.crt; line points to where your certificate is; if you named it something else in the earlier step, edit this line and the next one. If you changed the port that transmission listens on for the Web interface, be sure to update proxy_pass http://localhost:9091/; to reflect it. Finally, put your domain in the second argument of the proxy_redirect http://localhost:9091/ line. Save the file, check the syntax with sudo nginx -t, and restart the nginx server:

$ sudo systemctl restart nginx

You should now be able to access the transmission Web interface by pointing your browser at your server over https. If your browser gives you a warning about an untrusted connection, then you know it works; add an exception and continue. Your browser gives you that warning because the certificate isn’t signed by a trusted third party. Don’t worry though: the connection is just as encrypted, which is all we’re going for here anyway.

That’s it, you’re done! From here you could add reverse proxies to other local services, like kodi’s web interface.

Also, now that you’re accessing transmission through https (port 443), you can close the transmission port (9091) in your firewall to further lock down your system. Be sure to keep ports 80 and 443 open though.
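With firewalld, that last change might look something like this (a sketch; the port is the default rpc-port assumed throughout this post, and your zone and service names may differ):

```shell
# Close direct access to transmission's RPC port...
sudo firewall-cmd --permanent --remove-port=9091/tcp
# ...and keep http/https open for nginx.
sudo firewall-cmd --permanent --add-service=http
sudo firewall-cmd --permanent --add-service=https
sudo firewall-cmd --reload
```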