Digital Ocean OwnCloud with an sshfs Tunnel from the Local Machine

April 10, 2015

Accessing and syncing your files across all of your devices is quite popular these days, but there is a plethora of options to choose from and it’s hard to pick a definitive winner. Since btsync recently came out with version 2.sucks, I’ve been rethinking my options.

Luckily, I happened to catch Tzvi Spits on LINUX Unplugged talking about his setup: an autossh tunnel from his home Arch machine to his droplet, which uses sshfs to mount his media and Seafile to serve it up with a nice Web interface.

Seafile sounds cool, but I’m already invested in OwnCloud as I’ve got it running on my own Digital Ocean Ubuntu 14.04 droplet. With only 20 GB of storage on the droplet, though, I need a way to access all of my media in OwnCloud that doesn’t involve syncing.

Plan of Attack

Basically, we’re going to use autossh to create a tunnel from our local machine to our remote server. On the remote server, we’ll use sshfs to mount a few directories from our local machine, then we’ll point OwnCloud to the directories mounted with sshfs. Then we’ll set up a systemd unit file so we can manage our tunnel with systemctl and enable it to start at boot (I’ll also show you how to do this with cron, if your distro doesn’t use systemd). Finally, we’ll add the sshfs mounts to the server’s /etc/fstab so they are loaded at boot. This will let us use OwnCloud on our remote server as a secure, easy-to-use Web interface to access all of our media and files on the local machine.
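Here’s a rough sketch of how the pieces fit together (the ports are just the example values used throughout this guide):

local machine  --autossh tunnel-->  droplet port 6666 (leads back to the local sshd on port 2222)
droplet  --sshfs to localhost:6666-->  the local machine's directories, mounted on the droplet
OwnCloud  --External Storage-->  the sshfs-mounted directories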

OwnCloud

This guide assumes you already have OwnCloud installed. If you don’t have it installed yet, then I recommend you use Digital Ocean’s one-click install for OwnCloud and not bother with setting up a LAMP stack and installing OwnCloud by hand. If you’d rather set things up yourself though, there’s a tutorial for that too: How to Install Linux, Apache, MySQL, and PHP on Ubuntu 14.04. You’ll then need to follow this guide to set up OwnCloud: How to Setup OwnCloud 5 on Ubuntu 12.10. I know the versions are different, but it will still work.

If you like the idea but don’t want to use OwnCloud, then check out Tzvi‘s guide for how to use sshfs and Seafile to access your files. He also does some of these steps differently than this guide, so seeing how he accomplishes them might help you if this guide isn’t working for you.

ssh

First, as you might have guessed, we’ll need to set up ssh. If you haven’t done this already, it’s fairly straightforward; if you’ve already done this for your server, then skip ahead. We’ll first need to install openssh on the local machine. On Arch, it’s just sudo pacman -S openssh.
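If your local machine runs a Debian-based distro instead, the equivalent would be:

$ sudo apt-get update
$ sudo apt-get install openssh-client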

Now we need to generate a key pair on the local machine using ssh-keygen. I like to use it with these options:

$ ssh-keygen -t rsa -b 4096 -C "$(whoami)@$(hostname)-$(date -I)"

Generating public/private rsa key pair.
Enter file in which to save the key (/home/username/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/username/.ssh/id_rsa.
Your public key has been saved in /home/username/.ssh/id_rsa.pub.
The key fingerprint is:
dd:15:ee:24:20:14:11:01:b8:72:a2:0f:99:4c:79:7f username@localhost-2014-11-22
The key's randomart image is:
+--[ RSA 4096]----+
|     ..oB=.   .  |
|    .    . . . . |
|  .  .      . +  |
| oo.o    . . =   |
|o+.+.   S . . .  |
|=.   . E         |
| o    .          |
|  .              |
|                 |
+-----------------+

You’ll be prompted for where to save the keys and to enter a passphrase. For our purposes, just hit enter and use the defaults; the passphrase has to stay empty so the tunnel can come up at boot without anyone around to type it. You can read up on the different options on the Arch wiki page for ssh keys, or just check the man pages.

Now that we’ve got our key pair generated, we’ll need to copy the public key (id_rsa.pub) to the server’s .ssh/authorized_keys file.
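As an aside, most distros ship an ssh-copy-id helper that does the whole copy-and-set-permissions dance below in one command, assuming the server still accepts password logins:

$ ssh-copy-id -i ~/.ssh/id_rsa.pub -p 222 user@remoteserver.domain

If you’d rather do it by hand (or ssh-copy-id isn’t available), first print the public key: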

$ cat ~/.ssh/id_rsa.pub
ssh-rsa AAAA ...

Select everything that cat displays for us and copy it to your clipboard (ctrl+shift+c works with most terminal emulators). Let’s ssh to the remote server now:

$ ssh -p222 user@remoteserver.domain
$ mkdir ~/.ssh
$ chmod 700 ~/.ssh
$ nano ~/.ssh/authorized_keys
$ chmod 600 ~/.ssh/authorized_keys

Once you’re logged in, we create the .ssh folder, if it doesn’t already exist. Next we set the permissions on that folder so that only our user account has read/write/execute privileges on it. Then we create the authorized_keys file using the text editor nano and paste in our public key with ctrl+shift+v. Save and exit with ctrl+x. Finally, we lock down the permissions on the authorized_keys file itself, so only the owner can read/write the file. While you’re logged in, you may want to change some of the options in /etc/ssh/sshd_config on the remote server to make it more secure (like changing the default port, allowing only certain users, etc.). Check the Configuring SSHD section in the Arch wiki for more info.
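For example, a lightly hardened sshd_config might include lines like these (the values here are illustrative; restart sshd after editing, and make sure key logins work before disabling password authentication so you don’t lock yourself out):

Port 222
PermitRootLogin no
PasswordAuthentication no
AllowUsers user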

Once you’re done with that, close the ssh connection with exit and try to ssh to the remote server again. This time, it shouldn’t ask you for a password. If it does, check that your permissions are in order. If you still have trouble, then the Arch wiki has a great troubleshooting section on the ssh page. If that doesn’t solve it, turn to Google, because we will need the keys to work for the rest of this guide.
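If you get stuck, running ssh with -v prints a verbose log of the authentication attempt, which usually shows exactly which keys were offered and why the server refused them:

$ ssh -v -p222 user@remoteserver.domain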

Everyday I’m tunnelin’

SSH tunnels let you bind a port on a remote server back to a local port, so that any traffic going to the port on the remote machine gets sent back to the local machine.

$ ssh -p222 -nNTR 6666:localhost:2222 user@104.92.86.3

In this example, -p222 specifies the ssh listening port of the remote server (104.92.86.3). 6666 is the port on the server that will be tunneled back to port 2222 (where sshd listens) on our local machine. user is the username on the remote server. Substitute the values in the example with your own and test it. Once you’ve established the tunnel from the local machine to the remote server, let’s ssh in to the remote server and verify that we can reverse tunnel back to the local machine.

$ ssh -p222 user@remoteserver.domain
[user@remoteserver ~]$ ssh -p6666 user@localhost
[user@localmachine ~]$ 

It works! Log out of the remote server and close the ssh tunnel. Now that we know how to set up a tunnel, let’s do it with autossh. autossh is a great tool for establishing and maintaining an ssh connection between two machines. It checks to make sure the connection is open and re-establishes it if it drops out. Let’s try to do the same thing, but this time with autossh:

$ autossh -M 0 -nNTR 6666:localhost:2222 user@remoteserver.domain -p222 -i /home/user/.ssh/id_rsa

As you can see, the command for autossh looks a little different, but it’s basically doing the same thing. The -M 0 tells autossh not to set up a separate monitoring port; instead it restarts the connection whenever ssh itself exits. Substitute the values in the example with your own. The -p222 is still the sshd listening port on the remote server. Also, don’t forget to change user in the -i part to your username; that will be important for the next step. Once you can establish a tunnel with autossh, double-check that it works by ssh’ing into the remote server and entering ssh -p6666 user@localhost. Once that works, we’ll need to run the autossh command one more time as root:

$ sudo autossh -M 0 -nNTR 6666:localhost:2222 user@remoteserver.domain -p222 -i /home/user/.ssh/id_rsa

We run it as root because systemd will be running the tunnel as root later. That’s also why we specify the location of the identity file with -i: so autossh doesn’t go looking for a key in /root/.ssh/. It will also ask you to verify that you want to add your remote server to the list of known hosts. Say yes; that puts the host key in root’s known_hosts file, which the systemd service will need.
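Before moving on, one more sanity check you can run on the remote server: confirm that sshd is actually listening on the tunnel port (netstat ships with Ubuntu 14.04; ss does the same job on newer systems):

$ sudo netstat -tlnp | grep 6666
tcp    0    0 127.0.0.1:6666    0.0.0.0:*    LISTEN    1234/sshd

The PID will differ. The 127.0.0.1 means only the server itself can reach the tunneled port, which is exactly what we want.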

Starting autossh at boot

We need a way to start autossh at boot. There are lots of ways to do this, but since Arch is drinking the systemd kool-aid, we probably should too. If you’re on a distribution that also uses systemd, then these instructions should work for you too, but I’ve only tried them on Arch.

Systemd uses .service units to manage system processes. You can read more about it on the Arch wiki if you want: systemd. Let’s make a service unit for our autossh command to start at boot. Systemd keeps some unit files at /etc/systemd/system/ and that’s where we will put our autossh.service file.

$ sudo nano /etc/systemd/system/autossh.service

[Unit]
Description=AutoSSH service
After=network.target

[Service]
Environment="AUTOSSH_GATETIME=0"
ExecStart=/usr/bin/autossh -M 0 -nNTR 6666:localhost:2222 user@remoteserver.domain -p222 -i /home/user/.ssh/id_rsa

[Install]
WantedBy=multi-user.target

Hit ctrl+x to save. A couple of things are worth pointing out here. First, systemd will run this as root. That’s why we had to run our autossh command as root earlier: to add our remote server to root’s list of known hosts. Second, lots of guides for reverse tunneling out there include the -f option, which sends the command to the background and gives you control of your terminal again. That option will not work under systemd, as explained here, so be sure not to include it. Systemd does the backgrounding itself, and the Environment="AUTOSSH_GATETIME=0" line covers the other thing -f would have done: it tells autossh not to give up if the very first connection attempt fails.
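One optional addition: if you want systemd to also resurrect autossh itself (say, if the process crashes rather than the connection dropping), you can add a couple of lines to the [Service] section:

Restart=always
RestartSec=10

This isn’t strictly required, since autossh already handles re-establishing a dropped ssh session on its own.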

Now let’s test our new service file:

$ sudo systemctl daemon-reload
$ sudo systemctl start autossh
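You can confirm the service came up cleanly with:

$ systemctl status autossh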

SSH into your remote server and check that the reverse tunnel still works with ssh -p6666 user@localhost. If it does, then head back to the local machine and enable the service to start at boot:

$ sudo systemctl enable autossh

If your distro doesn’t use systemd, then you can just do a crontab entry. Cron is a system daemon that runs processes at scheduled times or at certain events. All we need to do is add an @reboot entry with:

$ crontab -e
@reboot autossh -M 0 -f -nNTR 6666:localhost:2222 user@remoteserver.domain -p222 -i /home/user/.ssh/id_rsa

Save the entry with whatever the method is for your system editor, ctrl+x if it’s nano. If your system editor is vim, then before you can input the text, activate insert mode by pressing “i”. Once your command is entered, hit escape to exit insert mode and then save and quit with “:wq” and “enter”. Notice that this time we included the -f flag for autossh, which sends the process to the background. Do not put the -f flag in with the -nNTR options: those are the ssh options, and -f means something different to ssh than it does to autossh.
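You can double-check that the entry was saved with:

$ crontab -l
@reboot autossh -M 0 -f -nNTR 6666:localhost:2222 user@remoteserver.domain -p222 -i /home/user/.ssh/id_rsa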

sshfs

Now that we’ve got the reverse tunnel set up, let’s put it to work with sshfs, an awesome utility for mounting remote file systems over ssh. Let’s install it on our remote server. Since mine runs Ubuntu 14.04, here are the commands I used:

$ sudo apt-get update
$ sudo apt-get install sshfs

Once installed, we can mount folders on our local machine to our remote server. SSH into your remote server and give it a try:

$ sshfs -p6666 user@localhost:/home/user/Photos /home/user/Photos -C

This will mount the /home/user/Photos directory on the local machine to the /home/user/Photos directory on the remote server. Don’t forget to specify the port we are using for the tunnel, NOT the ssh listening port of your local machine; in this example it is 6666. The -C means to use compression. cd in to your /home/user/Photos directory on the remote server and make sure that the files are there and correspond to what’s on the local machine. If you have different usernames on the local machine and the server, then you might have to specify some UID options, as in the sketch below.
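For example, sshfs’s idmap option will translate the remote user’s IDs to your own (a sketch; see the sshfs man page for the uid and gid options if you need finer control):

$ sshfs -p6666 user@localhost:/home/user/Photos /home/user/Photos -C -o idmap=user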

Since we’re going to be using OwnCloud to serve up these files later, let’s go ahead and make sure that the www-data user can access them. Otherwise OwnCloud won’t be able to see the folders.

$ sudo nano /etc/fuse.conf

Uncomment this line, or add it if it’s not there:

user_allow_other
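Save and quit with ctrl+x. With user_allow_other enabled, you can test the effect right away by unmounting and remounting by hand with the allow_other option (fusermount -u is how a regular user unmounts a fuse filesystem):

$ fusermount -u /home/user/Photos
$ sshfs -p6666 user@localhost:/home/user/Photos /home/user/Photos -C -o allow_other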

Now, let’s add our sshfs mount to the remote server’s /etc/fstab so that each time the server restarts it will remount our directory.

$ sudo nano /etc/fstab
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
# / was on /dev/vda1 during installation
UUID=050e1e34-39e6-4072-a03e-ae0bf90ba13a /               ext4    errors=remount-ro 0       1

user@localhost:/home/user/Photos /home/user/Photos fuse.sshfs port=6666,delay_connect,reconnect,IdentityFile=/home/user/.ssh/id_rsa,defaults,allow_other,_netdev 0 0

You can add as many other mounts as you need in this file, just be sure to use the same options. Note that the host is localhost and the port= option is the tunnel port, exactly like the sshfs command we ran by hand. If you don’t have delay_connect, it may fail to mount at boot: if you can mount the sshfs directory with sudo mount -a (the command to mount everything specified in the /etc/fstab) but it doesn’t work at boot, then you need delay_connect. The allow_other option will let other users on the system use the mounted directories, which will be useful when we get OwnCloud set up.
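A quick way to test the entry without rebooting:

$ sudo mount -a
$ mount | grep sshfs

If the mount shows up in the list (type fuse.sshfs) and the files are there, the fstab entry is good.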

Another thing to take note of here is that you cannot have spaces in a directory name in the /etc/fstab. For example:

user@localhost:/home/user/My Photos /home/user/My Photos fuse.sshfs port=6666,delay_connect,reconnect,IdentityFile=/home/user/.ssh/id_rsa,defaults,allow_other,_netdev 0 0

This will not work and will give errors when using sudo mount -a. You might think to try /home/user/My\ Photos as you would in a Bash shell, but that will not work in the /etc/fstab either. Spaces must be escaped with “\040”. For example:

user@localhost:/home/user/My\040Photos /home/user/My\040Photos fuse.sshfs port=6666,delay_connect,reconnect,IdentityFile=/home/user/.ssh/id_rsa,defaults,allow_other,_netdev 0 0

To test it, reboot your server and see if your sshfs directories are there.

Owncloud External Storage

OwnCloud has an awesome feature that lets you add directories that aren’t in your /var/www folder. To enable it, just log in to OwnCloud and click the ‘Files’ drop-down menu at the top left. Then click ‘Apps’, and then the ‘Not enabled’ section. Scroll down to ‘External Storage Support’ and click the enable button.

Now click the user drop-down menu at the top right and click ‘Admin’. Scroll down to ‘External Storage’, click the ‘Add Storage’ menu, and then click ‘Local’. Give your folder a name (this is what will be displayed in OwnCloud) and point it to the right directory. Note that OwnCloud can handle spaces in your directory path just fine. Next, make the folder available to at least your user. If you did everything right, then there will be a little green circle to the left of the folder.

Head back to your files view and you should be able to browse your sshfs-mounted directories. For me, it’s like having a 4 TB OwnCloud droplet! Well, sort of. Access speeds aren’t that great and OwnCloud can get bogged down when searching through really huge directories (especially on the smallest droplet, like I have), but for just casual Web access to your files it works great.
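One last tip: if OwnCloud’s view of the mounted directories ever gets out of sync with what’s actually on disk, recent OwnCloud versions ship a command-line tool called occ that can rescan everything. The path below assumes a standard /var/www/owncloud install, so adjust to taste:

$ cd /var/www/owncloud
$ sudo -u www-data php occ files:scan --all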