Transmission Web Interface Reverse Proxy With SSL Using nginx on Arch Linux

July 1, 2015

Transmission has been my favorite torrent client for years, and one of my favorite features is its excellent Web interface, which lets you control your torrenting over the web: adding, pausing, and removing torrents when you’re away from whatever rig you have set up for that purpose.

The only problem with its Web interface is that it uses unencrypted HTTP. You can password-protect the interface, but your password is still sent in cleartext … meaning anyone listening in on your connection can see your password or any other data exchanged between Transmission and wherever you’re accessing it from. Let’s fix that!

Note: This guide applies to Arch Linux, but should work for most other distributions, especially if they use systemd.


Transmission is available in the official Arch repositories, but there are several packages to choose from: transmission-cli, transmission-remote-cli, transmission-gtk, and transmission-qt. If this installation is for a desktop machine, you may want the gtk or qt version, but for our purposes we’re going with transmission-cli and transmission-remote-cli. The first, transmission-cli, gives us the transmission daemon and the Web interface; transmission-remote-cli lets us access transmission through a curses-based interface that you may find useful. Install them with:

$ sudo pacman -S transmission-cli transmission-remote-cli

Now that we’ve got them installed, we need to configure the daemon to set up the Web interface. You’ll need to start the transmission daemon or GUI version at least once to create an initial configuration file. Do so with:

$ sudo systemctl start transmission

Depending on which user you run transmission as, there’s a different location for the config file. If you’re running transmission as the user transmission (which is the default), then your config will be located at /var/lib/transmission/.config/transmission-daemon/settings.json. If you’ve set it to run as your user, then the config folder will be located at ~/.config/transmission-daemon/settings.json. If you’re using the gtk or qt version of transmission, then your config files are located at ~/.config/transmission.

Open it up in your editor of choice and look for these lines:
(Note: they do not appear in this order; I’ve pasted in only the lines that are relevant. The Transmission wiki documents what each setting does.)

"download-dir": "/home/user/Torrents", #Set this to wherever you want your torrents to be downloaded to.

"peer-port": 51413, #This is the port that transmission will use to actually send data using the bittorrent protocol.

"rpc-enabled": true, #This enables the Web interface. Set it to true.

"rpc-password": "your_password", #Choose a good password.

"rpc-port": 9091, #Change the port if you want, or just make note of the default 9091.

After editing the config file, restart transmission so the changes will take effect with:

$ sudo systemctl restart transmission

Test that the Web interface is working by going to http://your.ip.address:9091/transmission/web/. Note that the trailing slash after web is required; omitting it will prevent the interface from loading.

Now that the transmission daemon is started, you can access it via the command line with transmission-remote-cli. It is a perfectly functional way to control transmission, and assuming you have SSH set up securely, then it’s safe and encrypted. I like to have it installed in case I mess up my nginx set up somehow, but still need to access the transmission daemon remotely.


nginx is an HTTP server, like Apache, that can be used to serve up Web pages or, in this case, act as a reverse proxy.

First, install it with:

$ sudo pacman -S nginx

Now we need to set up an ssl certificate:

$ cd /etc/nginx
$ sudo openssl req -x509 -nodes -days 365 -newkey rsa:4096 -keyout /etc/nginx/cert.key -out /etc/nginx/cert.crt

You will be prompted to enter some info; keep in mind that this will be visible to anyone. The -days 365 flag sets how long the certificate will be valid. Change this if you like. This command will create two files, cert.key and cert.crt, which we will reference later in our nginx config.
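If you’d rather skip the interactive prompts, -subj lets you supply the certificate subject on the command line. Here’s a sketch you can try harmlessly: it writes to /tmp, uses a placeholder CN, and uses a 2048-bit key just to keep the demo fast.

```shell
# Generate a self-signed cert/key pair non-interactively, then print
# the subject to confirm it worked. /tmp paths are for demonstration only.
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
    -subj "/CN=your.domain" \
    -keyout /tmp/demo-cert.key -out /tmp/demo-cert.crt 2>/dev/null
openssl x509 -in /tmp/demo-cert.crt -noout -subject
```

For the real certificate, keep the 4096-bit key and the /etc/nginx paths from the command above.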

Let’s get nginx set up. Open /etc/nginx/nginx.conf and add the following line inside the http block:

include /etc/nginx/conf.d/*.conf;

In some distributions it might be there by default, but it’s not in Arch. Now we need to add a .conf file for our SSL reverse proxy:

$ cd /etc/nginx
$ sudo mkdir conf.d
$ sudo nano conf.d/transmission.conf

Paste in the following:

server {
    listen 80;
    return 301 https://$host$request_uri;
}

server {

    listen 443;

    server_name your.domain;

    ssl_certificate           /etc/nginx/cert.crt;
    ssl_certificate_key       /etc/nginx/cert.key;

    ssl on;
    ssl_session_cache  builtin:1000  shared:SSL:10m;
    ssl_protocols  TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4;
    ssl_prefer_server_ciphers on;

    location / {

      proxy_set_header        Host $host;
      proxy_set_header        X-Real-IP $remote_addr;
      proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header        X-Forwarded-Proto $scheme;

      # Fix the "It appears that your reverse proxy set up is broken" error.
      proxy_pass          http://localhost:9091/;
      proxy_read_timeout  90;

      proxy_redirect      http://localhost:9091/ https://your.domain/;
    }
}
That might seem complicated, but there are actually only a few things you’ll need to modify. Change server_name to whatever your domain is; you could also use an IP address here if you have a static IP. The ssl_certificate /etc/nginx/cert.crt; line points to your certificate; if you named it something else in the earlier step, edit this line and the next one. If you changed the port that transmission listens on for the Web interface, update proxy_pass http://localhost:9091/; to reflect it. Finally, put your own domain on the proxy_redirect line. Save the file and restart the nginx server:

$ sudo systemctl restart nginx

You should now be able to access the transmission Web interface by going to https://your.domain/. If your browser gives you a warning about an untrusted connection, then you know it works: add an exception and continue. Your browser gives you that warning because the certificate isn’t signed by a trusted third party. Don’t worry though; the connection is just as encrypted, which is all we’re going for here anyway.

That’s it, you’re done! From here you could add reverse proxies to other local services, like kodi‘s web interface.

Also, now that you’re accessing transmission through https (port 443), you can close the transmission port (9091) in your firewall to further lock down your system. Be sure to keep ports 80 and 443 open, though.


Set Up ufw and fail2ban on Arch Linux

June 25, 2015

I recently set up ufw and fail2ban on my Arch Linux home server. It’s fairly simple, but I found that most of the guides I followed left a few things out or didn’t quite work with Arch. There are lots of firewall options for Arch, but I went with ufw, as I had recently set it up on my droplet following this guide: How To Setup a Firewall with UFW on an Ubuntu and Debian Cloud Server, and enjoyed the simple syntax.

ufw: Uncomplicated Firewall

Uncomplicated Firewall serves as a simple front end to iptables and makes it easier to set up a firewall. It’s available in the Arch repos, so let’s start by installing it:

# pacman -S ufw

Now that it’s installed, let’s configure it:

$ sudo ufw enable
$ sudo ufw default deny incoming
$ sudo ufw default allow outgoing

These commands will turn on ufw, deny all incoming traffic, and allow all outgoing traffic. These rules won’t take effect until you restart ufw, but to be safe, the first incoming traffic we’re going to allow is our sshd port. That way, we won’t accidentally lock ourselves out once we restart ufw.

$ sudo ufw allow 2222/tcp

This assumes that your sshd listening port is 2222. It’s important to change the listening port from the default 22 to help discourage brute force attacks. Since you’re reading a guide on setting up ufw and fail2ban, you probably already have your sshd configured to be safer, but if not then read up on how to on the Arch wiki.

Now open up any other ports your server might need:

$ sudo ufw allow 4040/tcp
$ sudo ufw allow 80/tcp
$ sudo ufw allow 443/tcp

These commands opened up the ports for the Subsonic music streamer and for http/https for Owncloud. If you have other services that need special ports open, allow them in the same fashion as above. Now that we’ve got all of the ports open that you will need (and double checked that you opened your sshd listening port), let’s get ufw going.

$ sudo ufw disable
$ sudo ufw enable
$ sudo systemctl enable ufw.service
$ sudo ufw status

The first two commands restart ufw. Then we enable it to start on boot using systemd. Finally, we check that all of our efforts have worked. If they did, you should see something like this:

$ sudo ufw status
Status: active

To                         Action      From
--                         ------      ----
80/tcp                     ALLOW       Anywhere
443/tcp                    ALLOW       Anywhere
4040/tcp                   ALLOW       Anywhere


Fail2ban watches for incoming ssh requests and takes note of IPs that fail too many times to log in, and automatically bans them. This is a great way to stop trivial attacks, but doesn’t provide full protection. If you haven’t already, please set up SSH keys.

First, let’s install fail2ban:

# pacman -S fail2ban

Now we’ll need to create some custom rules and definitions for it, so it can play nice with ufw. Create a jail file for ssh: sudo nano /etc/fail2ban/jail.d/sshd.conf and insert the following:

[sshd]
enabled = true
banaction = ufw-ssh
port = 2222
filter = sshd
logpath = /var/log/auth.log
maxretry = 3

Be sure to change the port number to whatever port sshd listens to on your machine. Save it and let’s move on. Create another file: sudo nano /etc/fail2ban/action.d/ufw-ssh.conf and insert the following:

[Definition]
actionstart =
actionstop =
actioncheck =
actionban = ufw insert 1 deny from <ip> to any app OpenSSH
actionunban = ufw delete deny from <ip> to any app OpenSSH

Now we simply start the service, enable it to load at boot, and check its status to make sure it’s working:

$ sudo systemctl start fail2ban
$ sudo systemctl enable fail2ban
$ sudo systemctl status fail2ban

And that’s it! We now have ufw protecting us with a firewall and fail2ban banning stupid script kiddies and bots from bugging us.


Convert a Debian 7 Digital Ocean Droplet into Arch Linux

June 10, 2015

I love Digital Ocean. It’s the cheapest, fastest, easiest way to get a Linux virtual server up and running. They’ve got a great interface too:

Hmm, something seems to be missing here ...


There are some great choices here, no doubt, but where’s Arch Linux? Well, DO dropped their support for Arch Linux, as it was apparently too difficult for them to support a rolling release. Fair enough, I guess, but what about those of us who want Arch anyway? Experienced Arch users aren’t exactly the type to balk at a lack of official support though. Besides, who needs support when you’ve got the Arch Wiki?

I was content to stick with my Ubuntu and CentOS droplets, until I came across this github project: digitalocean-debian-to-arch. Basically, it’s a script that will turn a Debian 7 digital ocean droplet into a super lightweight Arch droplet.


Just spin up a new Debian 7 droplet (32 or 64 bit) and once you get it up and running, ssh in (or use Digital Ocean’s console access from their Web UI). As root, wget the install script from the project’s GitHub page and run it with bash.

Answer yes when prompted and then just wait! In a few minutes you’ll have a fully up to date Arch Linux droplet.

Warning: Always be wary of running random commands you find on the internet. You can read the script on GitHub and see that it checks out. It worked great for me, but it’s best practice to be cautious about this type of thing in general. There’s not much at stake here though, since you’re running it on a virtual machine you just created and can easily delete.

Set up your new Arch Linux Droplet

Once the script finishes and the droplet reboots, log back in and let’s get Arch set up:

Look at the RAM usage, a measly 24MB! Obviously that will change as we set up services, but for now this droplet is blazing fast.


A great place to start is the General Recommendations Arch wiki page. It is a must read for new users. For now though, let’s just do a few basics.

User Accounts

It is considered best practice not to log in as root, but to use su or sudo to perform necessary system tasks. Let’s create a new user, jay, and set a password for it:

# useradd -m -G wheel -s /bin/bash jay
# passwd jay

Now we need to grant our user sudo access:

# pacman -S sudo
# EDITOR=nano visudo

This will open the sudoers file, which lists which users and groups have sudo access. This command will open it in nano, but you could substitute your editor of choice or use the default vi. Scroll down to this line and uncomment it by removing the # at the beginning of the line:

%wheel      ALL=(ALL) ALL

This will let any members of the group wheel have sudo access. Hit ctrl-x and then y to save.

Now that we’ve got our user created, let’s create an authorized_keys file for our user. If you don’t know about ssh keys, then check the Arch wiki before proceeding: SSH keys. Once you have an ssh public key ready to go, just add it to your ~/.ssh/authorized_keys file.

$ su - jay
$ mkdir .ssh
$ nano .ssh/authorized_keys

Paste in your public key here, and save it with ctrl+x, then y. Setting the correct file permissions is our next step:

$ chmod g-w /home/jay
$ chmod 700 /home/jay/.ssh
$ chmod 600 /home/jay/.ssh/authorized_keys

Now it’s time to lock down ssh and make it more secure.

$ sudo nano /etc/ssh/sshd_config

Open up the sshd config file and change Port 22 to some other number. This is security through obscurity: the default port is 22, and just by changing it to another number you can prevent a lot of automated attempts to gain access to your server. You should also set PermitRootLogin to no, PubkeyAuthentication to yes, and PasswordAuthentication to no. Ctrl+x, then y to save.
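After those edits, the relevant lines of sshd_config look something like this (2222 is just an example port; pick your own):

```
Port 2222
PermitRootLogin no
PubkeyAuthentication yes
PasswordAuthentication no
```

Restart sshd with sudo systemctl restart sshd afterwards, and keep your current session open until you’ve confirmed you can log in on the new port.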

Where to go from here?

I’ve only just set this up today, so I haven’t really done anything other than what’s shown here. I’m very excited to start playing with it now. I might set up Owncloud and ditch my Ubuntu droplet. Or I could set up a VPN server that I could turn off and on whenever I’m stuck with an unsafe wifi connection. With Arch, there’s not much you can’t do. Whenever I find a use for this thing, I’ll do a post on it.

Arch Linux: The Ultimate Home Server Distro

June 9, 2015
Archey output on my home server, affectionately known as hal.


Arch Linux may not be most people’s first choice for a home server, or even a workstation, but if you can get past the learning curve it’s one of the best options. After trying Ubuntu, Debian, Linux Mint, and Elementary on my HTPC/home server I finally settled on Arch Linux. Why? That’s what I’ll try to convince you of in this post.

The distros I listed are great and certainly have their uses, but for my needs they just couldn’t cut it. Yes, I realize that they are all basically Debian/Ubuntu derivatives and that I never tried OpenELEC or, say, PCLinuxOS, but after trying Ubuntu and friends I had learned enough about what I needed in an OS to know that I needed Arch.


Arch Linux doesn’t come with much: just a small base system from which you can add whatever you want to create the ultimate, customized OS of your dreams. For beginners, this can seem daunting; the installer doesn’t even come with a GUI. If you have experience with Linux, then it’s not that bad. If you’re brand new to Linux, then Arch might not be for you. Then again, Arch was my first Linux experience and I survived it somehow (after 5-6 attempts to install it ;). It can seem a bit tedious to install things which are there by default on other OSes, but you’ll learn a lot about Linux and your setup specifically. That way, if something breaks later you’ll have a better understanding of where to look to fix it.

One does not simply install arch linux correctly the first try ...

If you follow the Beginner’s Guide closely, your install should work just fine. Despite the above meme, it is definitely possible to install Arch Linux correctly the first time. If you simply don’t have the time for an Arch install, then check out Antergos. It’s basically Arch, but with some defaults already configured for you and a nice installer. You’ll end up with a full Arch system, but without all of the hassle.

The small base system may make set up a bit tedious, but it also means that when you’re done you’ll have an OS that is perfectly tailored to your needs. It will have only the packages you need and nothing you don’t.

Keep rollin’

With a rolling release, all of your software (including the core system components, like the Linux kernel) is updated shortly after upstream. This means that you will always have the latest and greatest improvements, security updates, and bug fixes. There is no need to upgrade to a newer version of the OS, because you’re always at the newest version of everything.

When your favorite software announces a new feature or security fix, you’ll get it right away


A rolling release? Stable?! I know, how can your system be stable when it’s always changing? It’s true, riding the bleeding edge can occasionally result in things breaking. But after using Ubuntu for a year and a half and Arch for one, I’ve spent way more time fixing Ubuntu than I ever have Arch.

 .--.                  Pacman v4.2.1 - libalpm v9.0.1
/ _.-' .-.  .-.  .-.   Copyright (C) 2006-2014 Pacman Development Team
\  '-. '-'  '-'  '-'   Copyright (C) 2002-2006 Judd Vinet

If you stay on top of your updates (at least weekly or every other week) and pay attention to what pacman says during those updates then you should be fine. It may just be anecdotal evidence on my part, but I have had so few problems using Arch compared to other distros. Pacman, Arch’s package manager, is amazing. It is easily my favorite out of all the distros I’ve tried (yum comes in second for me).

AUR you kidding me?

The Arch User Repository is, in a lot of ways, what makes Arch shine. The AUR contains packages not officially supported by Arch, submitted by users. It can seem a little shady running code from some random person on the internet, but each package has a comments section and if you’re unsure about something, the comments should clear it up. One should always be careful when installing unsupported software, but the same is true of PPAs or other similar systems.

The AUR has everything. Just about any software you might want to install is already there. With an AUR helper like packer or yaourt, it becomes extremely easy to install almost any Linux software you can think of.

Arch Wiki to the rescue

Back when I used Ubuntu and her derivatives, when something broke or I couldn’t quite understand the man pages, I turned to Google. Most issues I had have solutions on the Arch Wiki, which tops the Google results for searches about specific Linux packages. Hands down, the Arch wiki represents some of the best Linux documentation on the interwebs. And if you can’t find the solution to your problem there, the forums are active and filled with people who know their stuff. Ubuntu forum members may be friendlier, but Arch forum members are definitely more knowledgeable. Sure, they can seem a bit ornery to newbies, but that’s only because you probably didn’t check the wiki or search the forums first …

My setup

So here’s what I have my home server set up to do:

Packages fail to build with packer

May 4, 2015

I was doing a system upgrade today for my Arch home server using packer over an ssh connection. My connection dropped out and the upgrade failed. No big deal, I figured I’ll just reconnect and try again.

Well, wine-silverlight was in the middle of upgrading when the connection dropped out and when I retried the upgrade, I got this error:

error: dlls/comctl32/icon.c: already exists in working directory
ERROR: Failed to apply patch, aborting!
==> ERROR: A failure occurred in prepare().
The build failed.

There are a number of things which might cause packer to throw this error, as I learned from Google. But it turns out there’s a really simple solution in my case. Packer was downloading the package files to the /tmp/packerbuild-1000/wine-silverlight directory. Something must have gotten mucked up there when the connection dropped out earlier.

A simple fix is to just remove this directory with:

$ sudo rm -R /tmp/packerbuild-1000/wine-silverlight

Then just retry your upgrade with:

$ packer -Syu

Everything should work now!

WordPress Automated Backup Script with rsync and systemd

April 16, 2015

Need to back up your WordPress site automatically? Of course you do! There are loads of ways to do this, but I settled on a solution that is simple, reliable and automated.

I modified a simple script I found online and used systemd to create a daily timer and service unit to execute it, then used systemd on my local machine to mirror the backups with an rsync one-liner script. Let’s get started! Note: This guide assumes you have ssh access to the server where your WordPress is hosted. If you don’t, then this guide is not for you.

The Backup Script

First, let’s take a look at the script:


#!/bin/bash
# a simple script to back up your wordpress installation and mysql database
# it then merges the two into a tar and compresses it
# (adapted from a script found online; the paths, database name, and
# credentials below are examples: edit them for your site)

TIME=$(date +"%Y-%m-%d-%H%M")
FILE=wordpress-backup-$TIME.tar
BACKUP_DIR=/home/user/Backups
WWW_DIR=/var/www/html
WWW_TRANSFORM='s,^var/www/html,www,'
DB_TRANSFORM='s,^home/user/Backups,database,'
DB_FILE=db-backup-$TIME.sql

# archive the wordpress files, storing them under www/ inside the tar
tar -cvf $BACKUP_DIR/$FILE --transform $WWW_TRANSFORM $WWW_DIR

# dump the mysql database to a file
mysqldump --user=dbuser --password=dbpassword database_name > $BACKUP_DIR/$DB_FILE

# append the dump to the tar, storing it under database/
tar --append --file=$BACKUP_DIR/$FILE --transform $DB_TRANSFORM $BACKUP_DIR/$DB_FILE

# remove the loose dump and compress the combined archive
rm $BACKUP_DIR/$DB_FILE
gzip -9 $BACKUP_DIR/$FILE

exit 0

There’s a lot going on here and I won’t take the time to explain every bit of it, but basically, this script makes a tar of your /var/www/html directory (where WordPress usually lives), dumps your mysql database (where your posts are stored), appends that to the tar of your /var/www/html directory so it’s one file, adds today’s date to it, and then compresses it. If you want a more detailed explanation, read the original guide to this script here.
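If the --transform expressions look cryptic, they’re sed-style substitutions applied to each path as it’s stored in the archive. Here’s a toy run you can try anywhere (all paths hypothetical):

```shell
# Build a fake web root, archive it, and rename the leading path to "www"
# inside the archive, just like the backup script does for /var/www/html.
mkdir -p /tmp/demo/var/www/html
echo "hello" > /tmp/demo/var/www/html/index.php
tar -cf /tmp/demo/site.tar --transform 's,^tmp/demo/var/www/html,www,' \
    -C / tmp/demo/var/www/html
tar -tf /tmp/demo/site.tar
```

The listing shows the members stored as www/ and www/index.php, even though the files live elsewhere on disk.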

SSH to the server where your WordPress lives and open up your text editor of choice, I’ll use nano for this guide. First though, let’s make a place to keep your scripts:

$ mkdir ~/Scripts
$ mkdir ~/Backups
$ cd Scripts
$ nano

Paste in the script and edit it to suit your site (I’m calling the file; name yours whatever you like). Once you’re done, hit ctrl+x to save. Now we need to make the script executable, then test it with the second command below. The ./ means the current directory: unless an executable file is in /usr/bin/ or a similar directory for your distro, you need to specify its full path to execute it. You could type out the full path to the script, but since we’re already in the directory, we’ll use the shortcut.

$ chmod +x
$ ./

If it worked, your terminal will be flooded with all of the folder names that tar just archived. To verify, let’s move to the Backups directory and see what our script made for us:

$ cd ~/Backups
$ ls

To open up your archive, just use tar -xvf. The database directory will contain your mysql dump and the www directory contains the contents of your /var/www/html directory.

$ tar -xvf your-backup-file.tar.gz
$ ls

To Cron or Not to Cron

Cron represents the easiest way to automate this script for daily backups. For example, you could just do:

$ crontab -e
@daily /home/user/Scripts/

And cron would run your script every day at midnight.
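@daily is shorthand for the standard five-field cron expression 0 0 * * * (minute, hour, day of month, month, day of week), so the entry above is equivalent to (same placeholder script name):

```
# m  h  dom mon dow  command
  0  0  *   *   *    /home/user/Scripts/
```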

Cron is great, but since my WordPress lives on a CentOS 7 server and my local machine runs Arch (which doesn’t even come with cron by default), I wanted to use systemd to automate my script. Not only is it an opportunity to learn more about systemd, it will make things easier in the future if I want to expand on this simple script and backup solution.

First, let’s make a .timer file. .timer files work sort of like crontab entries. You can set them up for the familiar daily, hourly, or weekly timers or you can get super specific with it. Read the wiki for more info.

$ sudo nano /etc/systemd/system/daily.timer

[Unit]
Description=Daily Timer

[Timer]
OnCalendar=daily
Unit=wp.backup.service

[Install]
WantedBy=timers.target
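For reference, OnCalendar accepts more specific schedules than the daily/weekly/monthly shorthands. A few illustrative values (see systemd.time(7) for the full grammar):

```
OnCalendar=weekly
OnCalendar=Mon,Tue,Wed,Thu,Fri 03:00
OnCalendar=*-*-01 04:30:00
```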
The key part here is the OnCalendar=daily. Timers can be set to all sorts of values here, but we’ll be using daily. You could set yours to weekly or even monthly if you like. The other thing to take note of is Unit=wp.backup.service; that’s the service file that will call our backup script. When you’re done editing, hit ctrl+x to save. Let’s make that service file now:

$ sudo nano /etc/systemd/system/wp.backup.service

[Unit]
Description=WordPress Backup Script

[Service]
Type=oneshot
ExecStart=/home/user/Scripts/

[Install]
WantedBy=multi-user.target
Change the ExecStart=/home/user/Scripts/ to wherever you saved your script and hit ctrl+x to save. To test that it works, type:

$ sudo systemctl start daily.timer
$ sudo systemctl start wp.backup.service

Now let’s check the status of our new timer and service:

$ sudo systemctl status daily.timer && sudo systemctl status wp.backup.service

Should return something like this:

daily.timer - Daily Timer
   Loaded: loaded (/etc/systemd/system/daily.timer; enabled)
   Active: active (waiting) since Thu 2015-04-16 15:06:44 EDT; 33min ago

Apr 16 15:06:44 example systemd[1]: Starting Daily Timer
Apr 16 15:06:44 example systemd[1]: Started Daily Timer.
wp.backup.service - WordPress Backup Script
   Loaded: loaded (/etc/systemd/system/wp.backup.service; enabled)
   Active: inactive (dead) since Thu 2015-04-16 15:06:54 EDT; 33min ago
  Process: 332 ExecStart=/home/user/Scripts/ (code=exited, status=0/SUCCESS)
 Main PID: 332 (code=exited, status=0/SUCCESS)
   CGroup: /system.slice/wp.backup.service

Apr 16 15:06:46 example[332]: /var/www/html/wp-content/plugins/jetpack/_inc/lib/
Apr 16 15:06:46 example[332]: /var/www/html/wp-content/plugins/jetpack/_inc/lib/markdown/
Hint: Some lines were ellipsized, use -l to show in full.

So why have two separate systemd files for what seems like one job? Well, with this setup it’s easier to add more complexity later if we need to. You could add other options for when backups should occur, or add other scripts to be executed daily. Plus, you can run your backup script with sudo systemctl start wp.backup.service and check its status with sudo systemctl status wp.backup.service.

So now that we have our server making automatic daily backups, let’s backup the backups to an offsite location. If the server goes down, or your hosting company goes out of business, then you’ll be out of luck if your only backups live on the server that you can no longer access.


Rsync is one of my favorite Linux tools. It can sync files and directories over ssh, transferring only the differences between them, making it perfect for our needs. I have an Arch Linux home server that will function as the offsite backup location, but you could use another distribution. You can even use rsync on OS X, but that’s another guide.

If you haven’t done so already, you’ll need to set up ssh keys for your server and local machine. The Arch wiki page on ssh keys should have all of the info you need. There are also a plethora of guides on the interwebs that can help.

Once your ssh keys are set up, let’s set up an rsync command to pull the backup archives from the server and mirror them to our local machine:

rsync -avzP -e "ssh -p8888" user@example.com:/home/user/Backups/* /home/user/Backups/

Let’s break this down. -avzP tells rsync to archive, use verbose mode, compress, and print the progress to the terminal, respectively. The -e "ssh -p8888" tells it that the remote server uses port 8888 for ssh. The source argument, user@example.com:/home/user/Backups/*, uses the wildcard “*” to grab any files in the Backups directory on the remote server. And the last bit tells rsync where to put the files.

Since we’re going to be using this command in a script that gets executed by systemd, we need to take into account that it will be executed as root. This can cause problems, as ssh will look in the /root/.ssh directory to find the private key file and won’t find it, causing the command to fail. To fix this, let’s add an entry to our /etc/ssh/ssh_config file to define our server and tell ssh where to find the private key for it.

$ sudo nano /etc/ssh/ssh_config
Host example
    Port 8888
    User user
    IdentityFile /home/user/.ssh/id_rsa

Add the above lines to your ssh_config and substitute your information. Ctrl+x to save. Now you can ssh to your server with just ssh example. You could also do rsync -avzP example:/source/files /destination/directory.
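You can confirm what ssh will resolve for a Host alias without actually connecting: ssh -G prints the effective configuration. Here’s a self-contained check against a temporary config file (alias and port from the entry above):

```shell
# Write a throwaway config and ask ssh which settings the "example"
# alias resolves to; -F points ssh at our temp file instead of the real config.
cat > /tmp/demo_ssh_config <<'EOF'
Host example
    Port 8888
    User user
EOF
ssh -G -F /tmp/demo_ssh_config example | grep -E '^(user|port) '
```

You should see user user and port 8888 in the output.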

Now that we’ve got that sorted, let’s make it a script:

$ nano ~/Scripts/
#!/bin/bash
# one-liner to pull the wordpress backups down from the server

rsync -avzP example:/home/user/Backups/* /home/user/Backups/

Ctrl+x to save. Don’t forget to make it executable with chmod +x, and then test it with ./ Now that we’ve got the script working, let’s set up our systemd timer and service files. These will be the same as the ones on the server, so just copy those over, being careful to change the script path in ExecStart. You could also change your timer to weekly if you like.

Once you’ve got them set up the way you want, then test and enable them with:

$ sudo systemctl start daily.timer
$ sudo systemctl start wp.offsite.backup.service
$ sudo systemctl enable daily.timer
$ sudo systemctl enable wp.offsite.backup.service

And that’s it! We now have automatic, daily backups of our WordPress installation and mysql database which are then mirrored to another machine with rsync: automatically and daily. From here, we could write further scripts to make monthly archives and move them to external storage or just remove any backups older than a month.


Digital Ocean Owncloud with an sshfs Tunnel from Local Machine

April 10, 2015

Accessing or syncing your files between any device is quite popular these days, but there are a plethora of options to choose from and it’s hard to pick a definitive winner. Since btsync recently came out with a new version, I’ve been rethinking my options.

Luckily, I happened to catch Tzvi Spits on LINUX Unplugged talking about his set up: autossh tunnel from his home Arch machine to his droplet, which uses sshfs to mount his media and Seafile to serve it up with a nice Web interface.

Seafile sounds cool, but I’m already invested in OwnCloud, as I’ve got it running on my own Digital Ocean Ubuntu 14.04 droplet. With only 20 GB of storage on the droplet, though, I need a way to access all of my media in OwnCloud that doesn’t involve syncing.

Plan of Attack

Basically, we’re going to use autossh to create a tunnel to our remote server from our local machine. On the remote server, we’ll use sshfs to mount a few directories from our local machine on the remote server, then we point OwnCloud to the directories mounted with sshfs. Then we’ll set up a systemd unit file so we can manage our tunnel with systemctl and enable it to start at boot (I’ll also show you how to do this with cron, if your distro doesn’t use systemd). Finally, we’ll add the sshfs mounts to the server’s /etc/fstab so they are loaded at boot. This will let us use OwnCloud on our remote server as a secure, easy to use, Web interface to access all of our media and files on the local machine.


This guide assumes you already have OwnCloud installed. If you don’t have it installed yet, then I recommend you use Digital Ocean’s one-click install for OwnCloud and not have to bother with setting up a LAMP stack and installing OwnCloud yourself. If you’d rather set things up yourself, there’s a tutorial for that too: How to Install Linux, Apache, MySQL, and PHP on Ubuntu 14.04. You’ll then need to follow this guide to set up OwnCloud: How to Setup OwnCloud 5 on Ubuntu 12.10. I know the versions are different, but it will still work.

If you like the idea, but don’t want to use OwnCloud, then check out Tzvi’s guide for how to use sshfs and Seafile to access your files. He also does some of these steps differently than this guide, so seeing how he accomplishes all of this might help you if this guide isn’t working for you.


First, as you might have guessed, we’ll need to set up ssh. If you haven’t done this already, it’s fairly straightforward; if you’ve already done this for your server, then skip ahead. We’ll first need to install openssh on the local machine. On Arch, it’s just sudo pacman -S openssh.

Now we need to generate a key pair on the local machine using ssh-keygen. I like to use it with these options:

$ ssh-keygen -t rsa -b 4096 -C "$(whoami)@$(hostname)-$(date -I)"

Generating public/private rsa key pair.
Enter file in which to save the key (/home/username/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/username/.ssh/id_rsa.
Your public key has been saved in /home/username/.ssh/
The key fingerprint is:
dd:15:ee:24:20:14:11:01:b8:72:a2:0f:99:4c:79:7f username@localhost-2014-11-22
The key's randomart image is:
+---[RSA 4096]----+
|     ..oB=.   .  |
|    .    . . . . |
|  .  .      . +  |
| oo.o    . . =   |
|o+.+.   S . . .  |
|=.   . E         |
| o    .          |
|  .              |
|                 |
+-----------------+

You’ll be prompted where to save the keys and to enter a passphrase. For our purposes, just hit enter and use the defaults. You can read up on the different options on the Arch wiki for ssh keys, or just check the man pages.

Now that we’ve got our key pair generated, we’ll need to copy the public key ( to the server’s ~/.ssh/authorized_keys file.

$ cat ~/.ssh/
ssh-rsa AAAA ...

Select everything that cat displays for us and copy it to your clipboard (ctrl+shift+c works with most terminal emulators). Let’s ssh to the remote server now:

$ ssh -p <port> user@remoteserver.domain
$ mkdir ~/.ssh
$ chmod 700 ~/.ssh
$ nano ~/.ssh/authorized_keys
$ chmod 600 ~/.ssh/authorized_keys

Once you’re logged in, we’ll need to create the .ssh folder if it doesn’t already exist. Next we’ll set the permissions on that folder so that only the user account has read/write/execute privileges on it. Then we create the authorized_keys file using the text editor nano. Now we paste in our public key with ctrl+shift+v and save the file with ctrl+x. Finally, we lock down the permissions on the authorized_keys file itself, meaning only the owner can read/write the file. While you’re logged in, you may want to change some of the options in /etc/ssh/sshd_config on the remote server to make it more secure (like changing the default port, allowing only certain users, etc.). Check the Configuring SSHD section in the Arch wiki for more info.

Once you’re done with that, close the ssh connection with exit and try to ssh to the remote server again. This time, it shouldn’t ask you for a password. If it does, check that your permissions are in order. If you still have trouble, the Arch wiki has a great troubleshooting section on the ssh page. If that doesn’t solve it, turn to Google, because we will need the keys to work for the rest of this guide.
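If you want to rule permissions out quickly, here’s a minimal sketch (POSIX shell plus GNU coreutils’ stat, assumed) of the check sshd effectively performs on your .ssh directory. The demo runs it against a scratch directory so the snippet works anywhere; run it on "$HOME/.ssh" for real:

```shell
# Flag an .ssh-style directory whose mode isn't the 700 sshd expects.
check_ssh_dir() {
    perms=$(stat -c %a "$1")    # numeric mode, e.g. 700
    if [ "$perms" = "700" ]; then
        echo "ok: $1 is $perms"
    else
        echo "warn: $1 is $perms, expected 700"
    fi
}

# Demo against a scratch directory; substitute "$HOME/.ssh" on your machine.
demo=$(mktemp -d)
chmod 700 "$demo"
check_ssh_dir "$demo"
```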

Everyday I’m tunnelin’

SSH tunnels let you bind a port on a remote server back to a local port, so that any traffic going to that port on the remote machine gets sent back to the local machine.
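The -R spec reads remote_port:local_host:local_port, from the remote machine’s point of view. As a quick sketch (plain POSIX parameter expansion; the spec string matches the example command below), you can split one apart to see the mapping:

```shell
# Split an ssh -R forwarding spec into its three fields.
spec="6666:localhost:2222"
remote_port=${spec%%:*}      # first field: port opened on the remote server
rest=${spec#*:}
local_host=${rest%%:*}       # second field: host the traffic is sent to
local_port=${rest#*:}        # third field: port on that host
echo "remote port $remote_port -> $local_host:$local_port"
# -> remote port 6666 -> localhost:2222
```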

$ ssh -p222 -nNTR 6666:localhost:2222 user@remoteserver.domain

In this example, -p222 specifies the ssh listening port of the remote server. 6666 is the port on the server that will be tunneled back to port 2222, the ssh listening port on our local machine. user is the username on the remote server. Substitute the values in the example with your own and test it. Once you’ve established the tunnel from the local machine to the remote server, let’s ssh in to the remote server and verify that we can reverse tunnel back to the local machine.

$ ssh -p222 user@remoteserver.domain
[user@remoteserver ~]$ ssh -p6666 user@localhost
[user@localmachine ~]$ 

It works! Log out of the remote server and close the ssh tunnel. Now that we know how to set up a tunnel, let’s do it with autossh. autossh is a great tool for establishing and maintaining an ssh connection between two machines. It checks to make sure the connection is open and re-establishes it if it drops out. Let’s try to do the same thing, but this time with autossh:

$ autossh -M 0 -nNTR 6666:localhost:2222 -p222 -i /home/user/.ssh/id_rsa user@remoteserver.domain

As you can see, the command for autossh looks a little different, but it’s basically doing the same thing. Substitute your own values for the ones in the example. The -p222 is still the sshd listening port on the remote server. Also, don’t forget to change user in the -i part to your username; that will be important for the next step. Once you can establish a tunnel with autossh, double check that it works by ssh’ing into the remote server and entering ssh -p6666 user@localhost. Once that works, we’ll need to run the autossh command one more time as root.

$ sudo autossh -M 0 -nNTR 6666:localhost:2222 -p222 -i /home/user/.ssh/id_rsa user@remoteserver.domain

That’s why we specify the location of the identity file: so that autossh doesn’t try to look in /root/.ssh/ for the keys. It will also ask you to verify that you want to add your remote server to the list of known hosts. Say yes.

Starting autossh at boot

We need a way to start autossh at boot. There are lots of ways to do this, but since Arch is drinking the systemd kool-aid, we probably should too. If you’re on a distribution that also uses systemd, these instructions should work for you too, but I’ve only tried them on Arch.

Systemd uses .service units to manage system processes. You can read more about it on the Arch wiki if you want: systemd. Let’s make a service unit for our autossh command to start at boot. Systemd keeps some unit files at /etc/systemd/system/ and that’s where we will put our autossh.service file.

$ sudo nano /etc/systemd/system/autossh.service

[Unit]
Description=AutoSSH service
After=network.target

[Service]
Environment="AUTOSSH_GATETIME=0"
ExecStart=/usr/bin/autossh -M 0 -nNTR 6666:localhost:2222 -p222 -i /home/user/.ssh/id_rsa user@remoteserver.domain

[Install]
WantedBy=multi-user.target

Hit ctrl+x to save. A couple of things are worth pointing out here. First, systemd will run this as root. That’s why we had to run our autossh command as root earlier: to add our remote server to root’s list of known hosts. Second, lots of reverse-tunneling guides out there include the -f option, which sends the command to the background and gives you control of your terminal again. That option will not work under systemd, so be sure not to include it. The same effect is achieved by the Environment="AUTOSSH_GATETIME=0" line.

Now let’s test our new service file:

$ sudo systemctl daemon-reload
$ sudo systemctl start autossh

SSH into your remote server and check that the reverse tunnel still works with ssh -p6666 user@localhost. If it does, head back to the local machine and enable the service at boot with:

$ sudo systemctl enable autossh

If your distro doesn’t use systemd, then you can just use a crontab entry. Cron is a system daemon that runs processes at scheduled times or events. All we need to do is add an @reboot entry with:

$ crontab -e
@reboot autossh -M 0 -f -nNTR 6666:localhost:2222 -p222 -i /home/user/.ssh/id_rsa user@remoteserver.domain

Save the entry with whatever the method is for your system editor, ctrl+x if it’s nano. If your system editor is vim, then before you can input the text, activate insert mode by pressing “i”. Once your command is entered, hit escape to exit insert mode, then save and quit with “:wq” and “enter”. Notice that this time we included the -f flag for autossh, which sends the process to the background. Do not put the -f flag in with the -nNTR options: those are ssh options, and -f means something different to ssh than it does to autossh.


Now that we’ve got the reverse tunnel set up, let’s put it to work with sshfs, an awesome utility for mounting remote file systems over ssh. Let’s install it on our remote server. Since mine runs Ubuntu 14.04, here are the commands I used:

$ sudo apt-get update
$ sudo apt-get install sshfs

Once installed, we can mount folders on our local machine to our remote server. SSH into your remote server and give it a try:

$ sshfs -p6666 user@localhost:/home/user/Photos /home/user/Photos -C

This will mount the /home/user/Photos directory on the local machine to the /home/user/Photos directory on the remote server. Don’t forget to specify the port we are using for the tunnel (6666 in this example), NOT the ssh listening port of your local machine. The -C option enables compression. cd in to your /home/user/Photos directory on the remote server and make sure that the files are there and correspond to what’s on the local machine. If you have different usernames on the local machine and the server, then you might have to specify some UID options.

Since we’re going to be using OwnCloud to serve up these files later, let’s go ahead and make sure that the www-data user can access them. Otherwise OwnCloud won’t be able to see the folders.

$ sudo nano /etc/fuse.conf

Uncomment this line, or add it if it’s not there:

user_allow_other
Save and quit with ctrl+x. Now, let’s add our sshfs mount to the remote server’s /etc/fstab so that each time the server restarts it will remount our directory.

$ sudo nano /etc/fstab
# /etc/fstab: static file system information.
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
# / was on /dev/vda1 during installation
UUID=050e1e34-39e6-4072-a03e-ae0bf90ba13a /               ext4    errors=remount-ro 0       1

user@remoteserver.domain:/home/user/Photos /home/user/Photos fuse.sshfs delay_connect,reconnect,IdentityFile=/home/user/.ssh/id_rsa,defaults,allow_other,_netdev 0 0

You can add as many other mounts as you need in this file; just be sure to use the same options. If you don’t have delay_connect, it may fail to mount at boot: if you can mount the sshfs directory with sudo mount -a (the command to mount everything specified in /etc/fstab) but it doesn’t work at boot, then you need delay_connect. The allow_other option lets other users on the system use the mounted directories, which will be useful when we get OwnCloud set up.
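If a line like that ever refuses to mount, a quick sanity check is to split it into fstab’s whitespace-separated fields (a small sketch using awk; the line is the exact entry from above):

```shell
# Print the first four fstab fields of the sshfs entry used above.
line='user@remoteserver.domain:/home/user/Photos /home/user/Photos fuse.sshfs delay_connect,reconnect,IdentityFile=/home/user/.ssh/id_rsa,defaults,allow_other,_netdev 0 0'
echo "$line" | awk '{ print "source:  " $1
                      print "target:  " $2
                      print "fstype:  " $3
                      print "options: " $4 }'
```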

Another thing to take note of here is that you can not have spaces in a directory name in the /etc/fstab. For example:

user@remoteserver.domain:/home/user/My Photos /home/user/My Photos fuse.sshfs delay_connect,reconnect,IdentityFile=/home/user/.ssh/id_rsa,defaults,allow_other,_netdev 0 0

Will not work and will give errors when using sudo mount -a. You might think to try /home/user/My\ Photos as you would in a Bash shell, but that will not work in /etc/fstab either. Spaces must be handled with "\040". For example:

user@remoteserver.domain:/home/user/My\040Photos /home/user/My\040Photos fuse.sshfs delay_connect,reconnect,IdentityFile=/home/user/.ssh/id_rsa,defaults,allow_other,_netdev 0 0
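If you have several paths with spaces, you can generate the escaped form instead of typing \040 by hand. A small sketch (POSIX printf and sed; the path is just an example):

```shell
# Replace every space in a path with the \040 escape fstab expects.
path="/home/user/My Photos"
escaped=$(printf '%s' "$path" | sed 's/ /\\040/g')
echo "$escaped"
# -> /home/user/My\040Photos
```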

To test it, reboot your server and see if your sshfs directories are there.

OwnCloud External Storage

OwnCloud has an awesome feature that lets you add directories that aren’t in your /var/www folder. To enable it, just log in to OwnCloud, and click the ‘Files’ drop-down menu at the top left. Then click ‘Apps’, and then the ‘Not enabled’ section. Scroll down to ‘External Storage Support’ and click the enable button.

Now click the user drop-down menu at the top right and click ‘Admin’. Scroll down to ‘External Storage’, click the ‘Add Storage’ menu and then click ‘Local’. Give your folder a name (this is what will be displayed in OwnCloud) and point to the right directory. Note that OwnCloud can handle spaces in your directory path just fine. Next make the folder available to at least your user. If you did everything right then there will be a little green circle to the left of the folder.

Head back to your files view and you should be able to browse your sshfs-mounted directories. For me, it’s like having a 4TB OwnCloud Droplet! Well, sort of. Access speeds aren’t that great and OwnCloud can get bogged down when searching through really huge directories (especially on the smallest droplet, like I have), but for casual Web access to your files it works great.



April 8, 2015

My good friend Eric gave me some Soylent to try today.


Soylent Meal Replacement

The taste is fairly bland, but not unpleasant. I had it instead of lunch today and it genuinely filled me up. I would definitely try it again.

Simple rsync Helper Script

If you regularly send files to multiple servers with rsync, then this simple little rsync script might be helpful to you.

I love using rsync, but mostly I use the same options and only a few destinations. To send a file to my homelab server, I would have to type:

rsync -avzP -e "ssh -p 1234" /location/of/source/file.txt user@homelab:/destination/file.txt

True, it’s not that much to remember, but I wanted a faster way to do it. I also wanted to specify a default “Uploads” folder so that I could just send the file and figure out where to put it later. With my script aliased to ‘rs’ in my .bashrc, all I have to type now is:

rs /location/of/source/file.txt servernickname /destination/file.txt

And if I don’t feel like specifying a location, I can just do:

rs file.txt servernickname

This will send the file to the ~/Uploads folder on the specified server.
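Under the hood, that default is just shell parameter expansion: strip everything up to the last slash to get the bare filename, then prepend ~/Uploads. A minimal sketch (POSIX shell; the paths are examples):

```shell
# Derive the default upload destination from a source path.
sourcefile="/location/of/source/file.txt"
filename=${sourcefile##*/}            # strip leading directories -> file.txt
destination="~/Uploads/$filename"     # tilde kept literal here just for the printout
echo "$filename"
echo "$destination"
# -> file.txt
# -> ~/Uploads/file.txt
```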

The script is pasted below. Just save it somewhere convenient for you. I keep mine in ~/Scripts.


#!/bin/bash
# A simple rsync helper script to send files to one of several predefined servers.
# rsync is set to use -avzP; if you want different options, enter them below.
# Check man rsync for a full list of options.
# use: sourcefile servername destination
# If no destination is specified, the file will be placed in the ~/Uploads folder on the server.
# Credit:

# Defines the sourcefile variable as the first term entered after the script is
# called, the servername variable as the second, and the destination as the third.
sourcefile=$1
servername=$2
destination=$3

# This converts the source file to just its file name, stripping away the
# directories but leaving the extension. This is useful if we don't feel like
# specifying a destination on the server.
filename=${sourcefile##*/}

# This basically says that if we don't specify a destination on the server to
# use, then make the default destination be the ~/Uploads folder, preserving
# the original filename. If you want a different default folder, change the
# "destination=~/Uploads/$filename" line below.
if [ "$#" = "2" ]; then
	destination=~/Uploads/$filename
fi

# If you have multiple servers to upload to, this is where you would enter them.
# Just enter your own server information below. You can give each of your
# servers a simple nickname. Be sure to specify which port your server listens
# on for ssh. You didn't leave it set to port 22, did you? ;) Note that you can
# also change which options rsync uses here. You could set different options
# for each server. The user@serverN.example.com addresses below are
# placeholders -- replace them with your own.

#Server 1
if [ "$servername" = "server1" ]; then
	echo -e "Sending $filename to $servername ... PewPew!"
	rsync -avzP -e "ssh -p 2222" "$sourcefile" user@server1.example.com:"$destination"
fi

#Server 2
if [ "$servername" = "server2" ]; then
	echo -e "Sending $filename to $servername ... PewPew!"
	rsync -avzP -e "ssh -p 2222" "$sourcefile" user@server2.example.com:"$destination"
fi

#Server 3
if [ "$servername" = "server3" ]; then
	echo -e "Sending $filename to $servername ... PewPew!"
	rsync -avzP -e "ssh -p 3333" "$sourcefile" user@server3.example.com:"$destination"
fi

exit 0

Don’t forget to make it executable with:

sudo chmod +x /path/to/your/script

And if you want to add an alias for it, just open up your ~/.bashrc in your text editor of choice and add the following line:

alias rs='/home/user/Scripts/'

Replace user with your username and the rest with where you saved your script to.

This was a fun little project I did last night. I think I’ll add a little progress bar to it next.

I Passed!

April 7, 2015

I just passed my Linux Essentials Certification exam! The Linux Academy course certainly helped, but a lot of the questions are things you’ll be forced to learn just trying to install Arch.

Hopefully my Linux hobby will be turning into my Linux career soon!

LPI Linux Essentials
