OpenVPN in a Docker container on CentOS 7 with SystemD support

September 10, 2015

In this guide, we will set up Docker on a Digital Ocean CentOS 7 droplet, then set up OpenVPN. We’ll also discuss how to create a custom SystemD service file so that we can manage our container with systemctl commands.

VPNs (virtual private networks) are a great way to secure your internet traffic over untrusted connections. A VPN can provide easy access to your home network's file server or other machines that you don't want to expose to the internet directly. You can also use it to circumvent blocked sites or services on a company network, or as a proxy to access content that is restricted in your country (like Hulu or Netflix).

Docker allows you to easily deploy and manage Linux containers: isolated, virtualized environments. Their isolation makes them secure and easy to manage, especially for developers, who can build their container without worrying about which distro or even OS it will be deployed to.

This guide assumes you already have a CentOS 7 server set up. I recommend using Digital Ocean, but any provider that gives you root access will work as well. This guide should also work with any distribution that uses SystemD. I liked the setup so much that I implemented it on my home Arch server as well.

Docker

First, let’s install docker:

sudo yum -y update
sudo yum -y install docker docker-registry

Once that’s done, we’ll start Docker and then enable it to start at boot:

sudo systemctl start docker.service
sudo systemctl enable docker.service

If you don’t want to have to type sudo every time you use the docker command, then you’ll have to add your user to the group ‘docker’. Do so with:

sudo usermod -aG docker $USER
newgrp docker

The second command will make your current session aware of your new group.
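To double check that the group change took effect, try a Docker command without sudo (this assumes the Docker daemon is already running from the previous step):

docker ps

If it prints an (empty) table of containers instead of a permission error, you're good to go.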

VPN

Now that Docker is up and running, we'll need to set up busybox and OpenVPN. busybox is a super minimal Docker image designed for embedded systems. We just want it for its small footprint. All we're running is a VPN, so there's no need for extra fluff.

Get it set up with:

sudo docker run --name dvpn-data -v /etc/openvpn busybox
docker run --volumes-from dvpn-data --rm kylemanna/openvpn ovpn_genconfig -u udp://$DOMAIN:1194
docker run --volumes-from dvpn-data --rm -it kylemanna/openvpn ovpn_initpki

The first command pulls the busybox image and creates a new container called 'dvpn-data' that will hold the configuration files and certificates. The second command generates the OpenVPN configuration inside that volume; replace $DOMAIN with the IP or domain of your server, and take note that port 1194 will need to be opened in your firewall. The third command initializes the PKI, including the Diffie-Hellman parameters. It will take a long time, so just be patient.

To open the required port in firewalld, issue the following command:

sudo firewall-cmd --permanent --zone=public --add-port=1194/udp
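Note that --permanent only writes the rule to firewalld's saved configuration; it isn't applied to the running firewall until you reload. A quick way to apply and verify it:

sudo firewall-cmd --reload
sudo firewall-cmd --zone=public --list-ports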

Now we need to create the credentials that will allow your client to connect to the VPN.

sudo docker run --volumes-from dvpn-data --rm -it kylemanna/openvpn easyrsa build-client-full $CONNECTION_NAME nopass
sudo docker run --volumes-from dvpn-data --rm kylemanna/openvpn ovpn_getclient $CONNECTION_NAME > $CONNECTION_NAME.ovpn

Replace $CONNECTION_NAME with whatever you want to call your VPN connection. I named mine after my server name. You will be asked for a password during the process; just pick one. It will take a while to do some crypto stuff, but eventually you'll get an ovpn file in your current directory. This is what will allow you to add the connection to your client. You will need to securely move it to the machine that will be connecting to your VPN. rsync or scp are good options; you could even use a USB thumb drive.
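For example, from the client machine you could pull the file down with scp (substituting the same $DOMAIN and $CONNECTION_NAME placeholders used above, and assuming the ovpn file sits in your home directory on the server):

scp user@$DOMAIN:~/$CONNECTION_NAME.ovpn .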

Since the first machine I used this VPN with was a Mac I use at work, I chose Tunnelblick for my client. After it's installed, double-clicking the ovpn file is all the setup needed to add the connection to Tunnelblick on the Mac. Consult your client's documentation if this doesn't work for you.

Manage your new container with SystemD

Now that we’ve got all of the docker stuff out of the way, let’s create a custom systemd service file so we can manage our new container with the systemctl command. SystemD service files are like init or Upstart scripts, but can be more robust and even take the place of Cron.

In CentOS 7 and Arch, these files are kept in /etc/systemd/system/, so we'll put ours there too. Fire up your text editor of choice (for me it's sudo vim /etc/systemd/system/dvpn.service) and paste in the following:

[Unit]
Description=OpenVPN Docker Container
Requires=docker.service
After=docker.service

[Service]
Restart=always
ExecStart=/usr/bin/docker run --name vpn --volumes-from dvpn-data --rm -p 1194:1194/udp --cap-add=NET_ADMIN kylemanna/openvpn
ExecReload=/usr/bin/docker stop vpn ; /usr/bin/docker run --name vpn --volumes-from dvpn-data --rm -p 1194:1194/udp --cap-add=NET_ADMIN kylemanna/openvpn
ExecStop=/usr/bin/docker stop vpn

[Install]
WantedBy=multi-user.target

There’s a lot going on here, so let’s break it down. The stuff in the [Unit] section is straightforward enough. We give our service an arbitrary description, and Requires=docker.service and After=docker.service mean that this service won’t start until after the docker service has started.

The Restart=always means that our service will restart if it fails. The ExecStart= tells systemd what to run when we start the service. Breaking that docker run command down a bit further: --name vpn names the container 'vpn' so we can refer to it later, --volumes-from dvpn-data gives it access to the configuration and certificates stored in the dvpn-data container, --rm removes the container when it stops, -p 1194:1194/udp publishes the OpenVPN port on the host, and --cap-add=NET_ADMIN grants the network privileges OpenVPN needs to set up its tunnel.

You can find more info about the docker run command in the Docker documentation; it has tons of options. Of course, you could also check its man page with man docker-run.

Finally, the [Install] section is what allows the service to be enabled to start at boot. You can read more about systemd service files in this excellent tutorial: Understanding Systemd Units and Unit Files.

Now that our service is created we can start it and enable it to load at boot with:

sudo systemctl start dvpn.service
sudo systemctl enable dvpn.service

You can also check its status with sudo systemctl status dvpn.service.

And that’s it! You now have a SystemD-managed, Docker-controlled OpenVPN setup. Enjoy!

UPDATE: I tried following my own guide to create a VPN on my home Arch Linux rig and ran into some problems. You might get iptables errors when attempting to start the dvpn service created above.

┌─[jay@hal]─(~) 
└─[13:19]$ docker run --name vpn --volumes-from ovpn-data --rm -p 1194:1194/udp --cap-add=NET_ADMIN kylemanna/openvpn
Error response from daemon: Cannot start container d8afcbc7069b0530893779c9abf4d10aa73ab53f820c310a8baf2b956f79877c: failed to create endpoint vpn on network bridge: iptables failed: iptables --wait -t nat -A DOCKER -p udp -d 0/0 --dport 1194 -j DNAT --to-destination 172.17.0.2:1194 ! -i docker0: iptables: No chain/target/match by that name.
 (exit status 1)

This is possibly due to the xt_conntrack kernel module not being loaded, or it may be that you simply need to restart the firewalld and docker daemons to reload the iptables rules. Additional info can be found here. Try restarting the daemons first with:

sudo systemctl restart firewalld
sudo systemctl restart docker

If that doesn’t work, try loading the kernel module:

sudo modprobe xt_conntrack
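You can verify that the module loaded with lsmod, and, if it turns out to be the culprit, tell systemd to load it at every boot via modules-load.d (the file name here is just a suggestion):

lsmod | grep xt_conntrack
echo xt_conntrack | sudo tee /etc/modules-load.d/xt_conntrack.conf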


Set Up ufw and fail2ban on Arch Linux

June 25, 2015

I recently set up ufw and fail2ban on my Arch Linux home server. It’s fairly simple, but I found that most of the guides I followed left a few things out or didn’t quite work with Arch. There are lots of firewall options for Arch, but I went with ufw, as I had recently set it up on my droplet according to this guide: How To Setup a Firewall with UFW on an Ubuntu and Debian Cloud Server, and I enjoyed the simple syntax.

ufw: Uncomplicated Firewall

Uncomplicated Firewall serves as a simple front end to iptables and makes it easier to set up a firewall. It’s available in the Arch repos, so let’s start by installing it:

# pacman -S ufw

Now that it’s installed, let’s configure it:

$ sudo ufw enable
$ sudo ufw default deny incoming
$ sudo ufw default allow outgoing

These commands will turn on ufw, deny all incoming traffic, and allow all outgoing traffic. These rules won’t take effect until you restart ufw, but to be safe, the first incoming traffic we’re going to allow is our sshd port. That way, we won’t accidentally lock ourselves out once we restart ufw.

$ sudo ufw allow 2222/tcp

This assumes that your sshd listening port is 2222. It’s important to change the listening port from the default 22 to help discourage brute force attacks. Since you’re reading a guide on setting up ufw and fail2ban, you probably already have your sshd configured to be safer, but if not, read up on how to do so on the Arch wiki.
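For reference, the relevant sshd settings look something like this (a minimal sketch of an /etc/ssh/sshd_config excerpt; adjust it to your own setup and restart sshd after editing):

# /etc/ssh/sshd_config (excerpt)
Port 2222
PermitRootLogin no
PasswordAuthentication no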

Now open up any other ports your server might need:

$ sudo ufw allow 4040/tcp
$ sudo ufw allow 80/tcp
$ sudo ufw allow 443/tcp

These commands opened up the ports for the Subsonic music streamer and for http/https for Owncloud. If you have other services that need special ports open, allow them in the same fashion as above. Now that we’ve got all of the ports open that you will need (and double checked that you opened your sshd listening port), let’s get ufw going.

$ sudo ufw disable
$ sudo ufw enable
$ sudo systemctl enable ufw.service
$ sudo ufw status

The first two commands will restart ufw. Then we enable it to start on boot using systemd. Finally, we check to see if all of our efforts have worked. If they did, you should see something like this:

┌─[jay@hal]─(~)
└─[09:11]$ sudo ufw status
Status: active

To                         Action      From
--                         ------      ----
80/tcp                     ALLOW       Anywhere
443/tcp                    ALLOW       Anywhere
4040/udp                   ALLOW       Anywhere

fail2ban

fail2ban watches incoming ssh login attempts, takes note of IPs that fail too many times, and automatically bans them. This is a great way to stop trivial attacks, but it doesn’t provide full protection. If you haven’t already, please set up SSH keys.

First, let’s install fail2ban:

# pacman -S fail2ban

Now we’ll need to create some custom rules and definitions for it, so it can play nice with ufw. Create a jail file for ssh: sudo nano /etc/fail2ban/jail.d/sshd.conf and insert the following:

[ssh]
enabled = true
banaction = ufw-ssh
port = 2222
filter = sshd
logpath = /var/log/auth.log
maxretry = 3

Be sure to change the port number to whatever port sshd listens on for your machine. Save it and let’s move on. Create another file: sudo nano /etc/fail2ban/action.d/ufw-ssh.conf and insert the following:

[Definition]
actionstart =
actionstop =
actioncheck =
actionban = ufw insert 1 deny from <ip> to any app OpenSSH
actionunban = ufw delete deny from <ip> to any app OpenSSH

Now we simply start the service, enable it to load at boot, and check its status to make sure it’s working:

$ sudo systemctl start fail2ban
$ sudo systemctl enable fail2ban
$ sudo systemctl status fail2ban
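You can also ask fail2ban itself whether the jail is up (the jail name here matches the [ssh] section we created above):

$ sudo fail2ban-client status ssh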

And that’s it! We now have ufw protecting us with a firewall and fail2ban banning stupid script kiddies and bots from bugging us.


WordPress Automated Backup Script with rsync and systemd

April 16, 2015

Need to back up your WordPress site automatically? Of course you do! There are loads of ways to do this, but I settled on a solution that is simple, reliable and automated.

I modified a simple script I found here, used systemd to create a timer and service unit that execute the script daily, and then used systemd on my local machine to mirror the backups with an rsync one-liner script. Let’s get started! Note: this guide assumes you have ssh access to the server where your WordPress is hosted. If you don’t, then this guide is not for you.

The Backup Script

First, let’s take a look at the script:

#!/bin/bash

# taken from http://theme.fm/2011/06/a-shell-script-for-a-complete-wordpress-backup-4/
# a simple script to back up your wordpress installation and mysql database
# it then merges the two into a tar and compresses it

TIME=$(date +"%Y-%m-%d-%H%M")
FILE="example.com$TIME.tar"
BACKUP_DIR="/home/user/Backups"
WWW_DIR="/var/www/html"

DB_USER="user"
DB_PASS="password"
DB_NAME="wpress"
DB_FILE="example.com.$TIME.sql"

WWW_TRANSFORM='s,^var/www/html,www,'
DB_TRANSFORM='s,^home/user/Backups,database,'

tar -cvf $BACKUP_DIR/$FILE --transform $WWW_TRANSFORM $WWW_DIR

mysqldump -u$DB_USER -p$DB_PASS $DB_NAME > $BACKUP_DIR/$DB_FILE

tar --append --file=$BACKUP_DIR/$FILE --transform $DB_TRANSFORM $BACKUP_DIR/$DB_FILE

rm $BACKUP_DIR/$DB_FILE

gzip -9 $BACKUP_DIR/$FILE

exit 0

There’s a lot going on here and I won’t take the time to explain every bit of it, but basically, this script makes a tar of your /var/www/html directory (where WordPress usually lives), dumps your MySQL database (where your posts are stored), appends the dump to that tar so everything is in one file, stamps the file name with today’s date, and then compresses it. If you want a more detailed explanation, read the original guide to this script here.

SSH to the server where your WordPress lives and open up your text editor of choice, I’ll use nano for this guide. First though, let’s make a place to keep your scripts:

$ mkdir ~/Scripts
$ mkdir ~/Backups
$ cd Scripts
$ nano wp.backup.sh

Paste in the script and edit it to suit your site. Once you’re done, hit ctrl+x to save. Now we need to make the script executable with the first command below, then test it with the second. The ./ means the current directory: unless an executable file is in /usr/bin/ or a similar directory on your distro’s PATH, you need to specify the full path of the file to be executed. You could also type out the full path to the script, but since we’re already in the directory, we’ll use the shortcut.

$ chmod +x wp.backup.sh
$ ./wp.backup.sh

If it worked, your terminal will be flooded with all of the folder names that tar just archived. To verify, let’s move to the Backups directory and see what our script made for us:

$ cd ~/Backups
$ ls
example.com.2015-04-16-1112.tar.gz

To open up your archive, just use tar -xvf. The database directory will contain your MySQL dump, and the www directory will contain the contents of your /var/www/html directory.

$ tar -xvf example.com.2015-04-16-1112.tar.gz
$ ls
example.com.2015-04-16-1112.tar.gz
database
www
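If you ever need to restore from one of these archives, the www directory can be copied back into /var/www/html and the SQL dump loaded back into MySQL. Roughly, using the database name and user from the script above (the file name will match your backup’s date):

$ mysql -u user -p wpress < database/example.com.2015-04-16-1112.sql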

To Cron or Not to Cron

Cron represents the easiest way to automate this script for daily backups. For example, you could just do:

$ crontab -e
@daily /home/user/Scripts/wp.backup.sh

And cron would run your script every day at midnight.

Cron is great, but since my WordPress lives on a CentOS 7 server and my local machine runs Arch (which doesn’t even come with cron by default), I wanted to use systemd to automate my script. Not only is it an opportunity to learn more about systemd, it will make things easier in the future if I want to expand on this simple script and backup solution.

First, let’s make a .timer file. .timer files work sort of like crontab entries. You can set them up for the familiar daily, hourly, or weekly timers, or you can get super specific with them. Read the wiki for more info.

$ sudo nano /etc/systemd/system/daily.timer
[Unit]
Description=Daily Timer

[Timer]
OnCalendar=daily
Unit=wp.backup.service

[Install]
WantedBy=multi-user.target

The key part here is the OnCalendar=daily. Timers can be set to all sorts of values here, but we’ll be using daily. You could set yours to weekly or even monthly if you like. The other thing to take note of here is Unit=wp.backup.service. That’s the service file that will call our backup script. When you’re done editing, hit ctrl+x to save. Let’s make that service file now:

$ sudo nano /etc/systemd/system/wp.backup.service
[Unit]
Description=WordPress Backup Script

[Service]
Type=simple
ExecStart=/home/user/Scripts/wp.backup.sh

[Install]
WantedBy=daily.timer

Change the ExecStart=/home/user/Scripts/wp.backup.sh to wherever you saved your script and hit ctrl+x to save. To test that it works, type:

$ sudo systemctl start daily.timer
$ sudo systemctl start wp.backup.service

Now let’s check the status of our new timer and service:

$ sudo systemctl status daily.timer && sudo systemctl status wp.backup.service

Should return something like this:

daily.timer - Daily Timer
   Loaded: loaded (/etc/systemd/system/daily.timer; enabled)
   Active: active (waiting) since Thu 2015-04-16 15:06:44 EDT; 33min ago

Apr 16 15:06:44 example systemd[1]: Starting Daily Timer
Apr 16 15:06:44 example systemd[1]: Started Daily Timer.
wp.backup.service - WordPress Backup Script
   Loaded: loaded (/etc/systemd/system/wp.backup.service; enabled)
   Active: inactive (dead) since Thu 2015-04-16 15:06:54 EDT; 33min ago
  Process: 332 ExecStart=/home/user/Scripts/wp.backup.sh (code=exited, status=0/SUCCESS)
 Main PID: 332 (code=exited, status=0/SUCCESS)
   CGroup: /system.slice/wp.backup.service

Apr 16 15:06:46 example wp.backup.sh[332]: /var/www/html/wp-content/plugins/jetpack/_inc/lib/
Apr 16 15:06:46 example wp.backup.sh[332]: /var/www/html/wp-content/plugins/jetpack/_inc/lib/markdown/
Hint: Some lines were ellipsized, use -l to show in full.

So why have two separate systemd files for what seems like one job? Well, with this setup, it’s easier to add more complexity later if we need to. You could add other options for when backups should occur. You could add other scripts to be executed daily. Plus, you can call your backup script with sudo systemctl start wp.backup.service and check its status with sudo systemctl status wp.backup.service.

So now that we have our server making automatic daily backups, let’s back up the backups to an offsite location. If the server goes down, or your hosting company goes out of business, then you’ll be out of luck if your only backups live on the server that you can no longer access.

rsync

Rsync is one of my favorite Linux tools. It can sync files and directories over ssh and only transfer the differences in the files, making it perfect for our needs. I have an Arch Linux home server that will function as the offsite backup location, but you could use another distribution. You can even use rsync on OS X, but that’s another guide.

If you haven’t done so already, you’ll need to set up ssh keys for your server and local machine. The Arch wiki page on ssh keys should have all of the info you need. There are also a plethora of guides on the interwebs that can help.

Once your ssh keys are set up, let’s set up an rsync command to pull the backup archives from the server and mirror them to our local machine:

rsync -avzP -e "ssh -p8888" user@example.com:/home/user/Backups/* /home/user/Backups/

Let’s break this down. -avzP tells rsync to archive, use verbose mode, compress, and print the progress to the terminal, respectively. The -e "ssh -p8888" tells it that the remote server uses port 8888 for ssh. The user@example.com:/home/user/Backups/* uses the wildcard “*” to grab any files in the Backups directory on the remote server. And the last bit tells rsync where to put the files.

Since we’re going to be using this command in a script that gets executed by systemd, we need to take into account that it will be executed as root. This can cause problems, as ssh will look in the /root/.ssh directory for the private key file and won’t find it, causing the command to fail. To fix this, let’s add an entry to our /etc/ssh/ssh_config file to define our server and tell ssh where to find the private key for it.

$ sudo nano /etc/ssh/ssh_config
Host example
    HostName example.com
    Port 8888
    User user
    IdentityFile /home/user/.ssh/id_rsa

Add the above lines to your ssh_config and substitute your information. Ctrl+x to save. Now you can ssh to your server with just ssh example. You could also do rsync -avzP example:/source/files /destination/directory.

Now that we’ve got that sorted, let’s make it a script:

$ nano ~/Scripts/wp.offsite.backup.sh
#!/bin/bash
# one-liner to backup wordpress files from jay-baker.com

rsync -avzP example:/home/user/Backups/* /home/user/Backups/

Ctrl+x to save. Don’t forget to make it executable with chmod +x wp.offsite.backup.sh, and then test it with ./wp.offsite.backup.sh. Now that we’ve got the script working, let’s set up our systemd timer and service files on the local machine. These will be nearly the same as the ones on the server, so just cut and paste those, being careful to change the name of the script (and the Unit= line in the timer to match the new service name). You could also change your timer to weekly if you like.
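For reference, here’s roughly what the service file on the local machine might look like (a sketch assuming you kept the script at /home/user/Scripts/wp.offsite.backup.sh; the daily.timer file is the same as before, with its Unit= line pointed at this service):

$ sudo nano /etc/systemd/system/wp.offsite.backup.service
[Unit]
Description=WordPress Offsite Backup Script

[Service]
Type=simple
ExecStart=/home/user/Scripts/wp.offsite.backup.sh

[Install]
WantedBy=daily.timer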

Once you’ve got them set up the way you want, then test and enable them with:

$ sudo systemctl start daily.timer
$ sudo systemctl start wp.offsite.backup.service
$ sudo systemctl enable daily.timer
$ sudo systemctl enable wp.offsite.backup.service

And that’s it! We now have automatic, daily backups of our WordPress installation and mysql database which are then mirrored to another machine with rsync: automatically and daily. From here, we could write further scripts to make monthly archives and move them to external storage or just remove any backups older than a month.


Digital Ocean Owncloud with an sshfs Tunnel from Local Machine

April 10, 2015

Accessing or syncing your files across all of your devices is quite popular these days, but there is a plethora of options to choose from and it’s hard to pick a definitive winner. Since btsync recently came out with version 2.sucks, I’ve been rethinking my options.

Luckily, I happened to catch Tzvi Spits on LINUX Unplugged talking about his setup: an autossh tunnel from his home Arch machine to his droplet, which uses sshfs to mount his media and Seafile to serve it up with a nice Web interface.

Seafile sounds cool, but I’m already invested in OwnCloud as I’ve got it running on my own Digital Ocean Ubuntu 14.04 droplet. With only 20 GB of storage on the droplet though, I need a way to access all of my media in OwnCloud that doesn’t involve syncing.

Plan of Attack

Basically, we’re going to use autossh to create a tunnel to our remote server from our local machine. On the remote server, we’ll use sshfs to mount a few directories from our local machine on the remote server, then we point OwnCloud to the directories mounted with sshfs. Then we’ll set up a systemd unit file so we can manage our tunnel with systemctl and enable it to start at boot (I’ll also show you how to do this with cron, if your distro doesn’t use systemd). Finally, we’ll add the sshfs mounts to the server’s /etc/fstab so they are loaded at boot. This will let us use OwnCloud on our remote server as a secure, easy to use, Web interface to access all of our media and files on the local machine.

OwnCloud

This guide assumes you already have OwnCloud installed. If you don’t have it installed yet, then I recommend you use Digital Ocean’s one-click install for OwnCloud so you don’t have to bother with setting up a LAMP stack and installing OwnCloud yourself. If you’d rather set things up yourself though, there’s a tutorial for that too: How to Install Linux, Apache, MySQL, and PHP on Ubuntu 14.04. You’ll then need to follow this guide to set up OwnCloud: How to Setup OwnCloud 5 on Ubuntu 12.10. I know the versions are different, but it will still work.

If you like the idea, but don’t want to use OwnCloud, then check out Tzvi‘s guide for how to use sshfs and Seafile to access your files. He also does some of these steps differently than this guide so seeing how he accomplishes all of this might help you if this guide isn’t working for you.

ssh

First, as you might have guessed, we’ll need to set up ssh. If you haven’t done this already, it’s fairly straightforward; if you have already done this for your server, then skip ahead. We’ll first need to install openssh on the local machine. On Arch, it’s just sudo pacman -S openssh.

Now we need to generate a key pair on the local machine using ssh-keygen. I like to use it with these options:

$ ssh-keygen -t rsa -b 4096 -C "$(whoami)@$(hostname)-$(date -I)"

Generating public/private rsa key pair.
Enter file in which to save the key (/home/username/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/username/.ssh/id_rsa.
Your public key has been saved in /home/username/.ssh/id_rsa.pub.
The key fingerprint is:
dd:15:ee:24:20:14:11:01:b8:72:a2:0f:99:4c:79:7f username@localhost-2014-11-22
The key's randomart image is:
+--[RSA  4096]---+
|     ..oB=.   .  |
|    .    . . . . |
|  .  .      . +  |
| oo.o    . . =   |
|o+.+.   S . . .  |
|=.   . E         |
| o    .          |
|  .              |
|                 |
+-----------------+

You’ll be prompted where to save the keys and to enter a passphrase. For our purposes, just hit enter and use the defaults. You can read up on the different options on the Arch wiki for ssh keys, or just check the man pages.

Now that we’ve got our key pair generated, we’ll need to copy the public key (id_rsa.pub) to the server’s .ssh/authorized_keys file.

cat ~/.ssh/id_rsa.pub
ssh-rsa AAAA ...

Select everything that cat displays for us and copy it to your clipboard (ctrl+shift+c works with most terminal emulators). Let’s ssh to the remote server now:

$ ssh -p <port> user@remoteserver.domain
$ mkdir ~/.ssh
$ chmod 700 ~/.ssh
$ nano ~/.ssh/authorized_keys
$ chmod 600 ~/.ssh/authorized_keys

Once you’re logged in, we’ll need to create the .ssh folder, if it doesn’t already exist. Next we set the permissions on that folder so that only our user account has read/write/execute privileges on it. Then we create the authorized_keys file using the text editor nano and paste in our public key with ctrl+shift+v. Save the file with ctrl+x. Finally, we lock down the permissions on the authorized_keys file itself, so that only the owner can read and write it. While you’re logged in, you may also want to change some of the options in /etc/ssh/sshd_config on the remote server to make it more secure (like changing the default port, allowing only certain users, etc.). Check the Configuring SSHD section of the Arch wiki for more info.

Once you’re done with that, close the ssh connection with exit and try to ssh to the remote server again. This time, it shouldn’t ask you for a password. If it does, check that your permissions are in order. If you still have trouble, then the Arch wiki has a great troubleshooting section on the ssh page. If that doesn’t solve it, turn to Google, because we will need the keys to work for the rest of this guide.
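A quick way to check those permissions on the server (the directory should come back as drwx------ and the file as -rw-------, matching the 700 and 600 we set above):

$ ls -ld ~/.ssh ~/.ssh/authorized_keys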

Everyday I’m tunnelin’

SSH tunnels let you bind a port on a remote server back to a local port, so that any traffic going through the port on the remote machine gets sent back to the local machine.

$ ssh -p222 -nNTR 6666:localhost:2222 user@104.92.86.3

In this example, -p222 specifies the ssh listening port of the remote server (104.92.86.3). 6666 is the port on the server that will be tunneled back to port 2222 on our local machine, and user is the username on the remote server. Substitute your own values and test it. Once you’ve established the tunnel from the local machine to the remote server, let’s ssh in to the remote server and verify that we can reverse tunnel back to the local machine.

$ ssh -p <port> user@remoteserver.domain
[user@remoteserver ~]$ ssh -p6666 user@localhost
[user@localmachine ~]$ 

It works! Log out of the remote server and close the ssh tunnel. Now that we know how to set up a tunnel, let’s do it with autossh, a great tool for establishing and maintaining an ssh connection between two machines. It checks to make sure the connection is open and re-establishes it if it drops. Let’s do the same thing as before, but this time with autossh:

$ autossh -M 0 -nNTR 6666:localhost:2222 user@remoteserver.com -p222 -i /home/user/.ssh/id_rsa

As you can see, the command for autossh looks a little different, but it’s basically doing the same thing. Substitute your own values for the ones in the example. The -p222 is still the sshd listening port on the remote server. Also, don’t forget to change user in the -i part to your username; that will be important for the next step. Once you can establish a tunnel with autossh, double check that it works by ssh’ing into the remote server and entering ssh -p6666 user@localhost. Once that works, we’ll need to run the autossh command one more time as root.

$ sudo autossh -M 0 -nNTR 6666:localhost:2222 user@remoteserver.com -p222 -i /home/user/.ssh/id_rsa

That’s why we specify the location of the identity file, so that autossh doesn’t try to look for /root/.ssh/id_rsa. It will also ask you to verify that you want to add your remote server to root’s list of known hosts. Say yes.

Starting autossh at boot

We need a way to start autossh at boot. There are lots of ways to do this, but since Arch is drinking the systemd kool-aid, we probably should too. If you’re on a distribution that also uses systemd, these instructions should work for you too, but I’ve only tried them on Arch.

Systemd uses .service units to manage system processes. You can read more about it on the Arch wiki if you want: systemd. Let’s make a service unit for our autossh command to start at boot. Systemd keeps some unit files at /etc/systemd/system/ and that’s where we will put our autossh.service file.

$ sudo nano /etc/systemd/system/autossh.service

[Unit]
Description=AutoSSH service
After=network.target

[Service]
Environment="AUTOSSH_GATETIME=0"
ExecStart=/usr/bin/autossh -M 0 -nNTR 4321:localhost:1234 user@remoteserver.com -i /home/user/.ssh/id_rsa

[Install]
WantedBy=multi-user.target

Hit ctrl+x to save. A couple of things are worth pointing out here. First, systemd will run this as root. That’s why we had to run our autossh command as root earlier to add our remote server to the list of known hosts. Second, lots of guides for reverse tunneling out there include the -f option, which sends the command to the background and gives you control of your terminal again. That option will not work under systemd, as explained here, so be sure not to include it. The Environment="AUTOSSH_GATETIME=0" line gives us the part of -f’s behavior we still want (not giving up if the first connection attempt fails), while systemd itself takes care of running the process in the background.

Now let’s test our new service file:

$ sudo systemctl daemon-reload
$ sudo systemctl start autossh

SSH into your remote server and check that the reverse tunnel still works (e.g. ssh -p4321 user@localhost if you used the ports from the service file above). If it does, then back on the local machine we can enable it with:

$ sudo systemctl enable autossh

If your distro doesn’t use systemd, then you can just do a crontab entry. Cron is a system daemon that runs processes at scheduled times or at certain events. All we need to do is add an @reboot entry with:

$ crontab -e
@reboot autossh -M 0 -f -nNTR 4321:localhost:1234 user@remoteserver.com -i /home/user/.ssh/id_rsa

Save the entry with whatever the method is for your system editor (ctrl+x if it’s nano). If your system editor is vim, then before you can input the text, activate insert mode by pressing “i”. Once your command is entered, hit escape to exit insert mode and then save and quit with “:wq” then “enter”. Notice that this time we included the -f flag for autossh, which sends the process to the background. Do not put the -f flag with the -nNTR options: those are the ssh options, and -f means something different to ssh than it does to autossh.

sshfs

Now that we’ve got the reverse tunnel set up, let’s put it to work with sshfs, an awesome utility for mounting remote file systems over ssh. Let’s install it on our remote server. Since mine runs Ubuntu 14.04, here are the commands I used:

$ sudo apt-get update
$ sudo apt-get install sshfs

Once installed, we can mount folders on our local machine to our remote server. SSH into your remote server and give it a try:

$ sshfs -p5555 user@localhost:/home/user/Photos /home/user/Photos -C

This will mount the /home/user/Photos directory on the local machine at the /home/user/Photos directory on the remote server. Don’t forget to specify the port we are using for the tunnel, NOT the ssh listening port of your local machine; in this example it is 5555. The -C means to use compression. cd into your /home/user/Photos directory on the remote server and make sure that the files are there and correspond to what’s on the local machine. If you have different usernames on the local machine and server, then you might have to specify some UID options.
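If you do run into that, sshfs (via FUSE) accepts uid= and gid= mount options so the mounted files appear owned by a particular user on the server. A sketch, assuming www-data’s uid and gid are 33 (check with id www-data first); you may need to run it as root depending on your FUSE configuration:

$ sshfs -p5555 -o uid=33,gid=33 -C user@localhost:/home/user/Photos /home/user/Photos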

Since we’re going to be using OwnCloud to serve up these files later, let’s go ahead and make sure the www-data user can access them. Otherwise, OwnCloud won’t be able to see the folders.

$ sudo nano /etc/fuse.conf

Uncomment this line, or add it if it’s not there:

user_allow_other

Save and quit with ctrl+x. Now, let’s add our sshfs mount to the remote server’s /etc/fstab so that each time the server restarts it will remount our directory.

$ sudo nano /etc/fstab
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
# / was on /dev/vda1 during installation
UUID=050e1e34-39e6-4072-a03e-ae0bf90ba13a /               ext4    errors=remount-ro 0       1

user@remoteserver.domain:/home/user/Photos /home/user/Photos fuse.sshfs delay_connect,reconnect,IdentityFile=/home/user/.ssh/id_rsa,defaults,allow_other,_netdev 0 0

You can add as many other mounts as you need to this file; just be sure to use the same options. If you don’t have delay_connect, it may fail to mount at boot: if you can mount the sshfs directory with sudo mount -a (the command to mount everything specified in /etc/fstab) but it doesn’t work at boot, then you need delay_connect. The allow_other option will let other users on the system use the mounted directories, which will be useful when we get OwnCloud set up.

Another thing to take note of here is that you cannot have spaces in a directory name in /etc/fstab. For example:

user@remoteserver.domain:/home/user/My Photos /home/user/My Photos fuse.sshfs delay_connect,reconnect,IdentityFile=/home/user/.ssh/id_rsa,defaults,allow_other,_netdev 0 0

Will not work and will give errors when using sudo mount -a. You might think to try /home/user/My\ Photos as you would in a Bash shell, but that will not work in /etc/fstab either. Spaces must be escaped with “\040”. For example:

user@remoteserver.domain:/home/user/My\040Photos /home/user/My\040Photos fuse.sshfs delay_connect,reconnect,IdentityFile=/home/user/.ssh/id_rsa,defaults,allow_other,_netdev 0 0

To test it, reboot your server and see if your sshfs directories are there.

Owncloud External Storage

Owncloud has an awesome feature that lets you add directories that aren’t in your /var/www folder. To enable it, just log in to OwnCloud, and click the ‘Files’ drop-down menu at the top left. Then click ‘Apps’, and then the ‘Not enabled’ section. Scroll down to ‘External Storage Support’ and click the enable button.

Now click the user drop-down menu at the top right and click ‘Admin’. Scroll down to ‘External Storage’, click the ‘Add Storage’ menu and then click ‘Local’. Give your folder a name (this is what will be displayed in OwnCloud) and point to the right directory. Note that OwnCloud can handle spaces in your directory path just fine. Next make the folder available to at least your user. If you did everything right then there will be a little green circle to the left of the folder.

Head back to your files view and you should be able to browse your sshfs-mounted directories. For me, it’s like having a 4 TB OwnCloud droplet! Well, sort of. Access speeds aren’t that great and OwnCloud can get bogged down when searching through really huge directories (especially on the smallest droplet, like mine), but for casual Web access to your files it works great.
