Packages fail to build with packer

May 4, 2015

I was doing a system upgrade today for my Arch home server using packer over an ssh connection. My connection dropped out and the upgrade failed. No big deal, I figured I’ll just reconnect and try again.

Well, wine-silverlight was in the middle of upgrading when the connection dropped out and when I retried the upgrade, I got this error:

error: dlls/comctl32/icon.c: already exists in working directory
ERROR: Failed to apply patch, aborting!
==> ERROR: A failure occurred in prepare().
    Aborting...
The build failed.

There are a number of things that might cause packer to throw this error, as I learned from Google, but it turns out there is a really simple solution in my case. Packer downloads package files to the /tmp/packerbuild-1000/wine-silverlight directory, and something must have gotten mucked up there when the connection dropped out earlier.
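
The 1000 in that path is just my numeric user ID, so if you’re not sure which package’s build directory is the culprit, you can list everything packer has left behind first:

$ ls /tmp/packerbuild-$(id -u)/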

A simple fix is to just remove this directory with:

$ sudo rm -R /tmp/packerbuild-1000/wine-silverlight

Then just retry your upgrade with:

$ packer -Syu

Everything should work now!

WordPress Automated Backup Script with rsync and systemd

April 16, 2015

Need to back up your WordPress site automatically? Of course you do! There are loads of ways to do this, but I settled on a solution that is simple, reliable and automated.

I modified a simple script I found here and used systemd to create a timer and service unit to execute the script daily, and then used systemd on my local machine to mirror the backups with an rsync one-liner script. Let’s get started! Note: this guide assumes you have ssh access to the server where your WordPress is hosted. If you don’t, then this guide is not for you.

The Backup Script

First, let’s take a look at the script:

#!/bin/bash

# taken from http://theme.fm/2011/06/a-shell-script-for-a-complete-wordpress-backup-4/
# a simple script to back up your wordpress installation and mysql database
# it then merges the two into a tar and compresses it

TIME=$(date +"%Y-%m-%d-%H%M")
FILE="example.com.$TIME.tar"
BACKUP_DIR="/home/user/Backups"
WWW_DIR="/var/www/html"

DB_USER="user"
DB_PASS="password"
DB_NAME="wpress"
DB_FILE="example.com.$TIME.sql"

WWW_TRANSFORM='s,^var/www/html,www,'
DB_TRANSFORM='s,^home/user/Backups,database,'

# archive the WordPress files, renaming the top-level path to "www" inside the tar
tar -cvf "$BACKUP_DIR/$FILE" --transform "$WWW_TRANSFORM" "$WWW_DIR"

# dump the database to a .sql file
mysqldump -u"$DB_USER" -p"$DB_PASS" "$DB_NAME" > "$BACKUP_DIR/$DB_FILE"

# append the database dump to the tar, renaming its path to "database"
tar --append --file="$BACKUP_DIR/$FILE" --transform "$DB_TRANSFORM" "$BACKUP_DIR/$DB_FILE"

# remove the loose .sql file now that it's in the tar
rm "$BACKUP_DIR/$DB_FILE"

# compress the archive
gzip -9 "$BACKUP_DIR/$FILE"

exit 0

There’s a lot going on here and I won’t take the time to explain every bit of it, but basically, this script makes a tar of your /var/www/html directory (where WordPress usually lives), dumps your MySQL database (where your posts are stored), appends the dump to the tar so it’s all one file, stamps it with today’s date, and then compresses it. If you want a more detailed explanation, read the original guide to this script here.

SSH to the server where your WordPress lives and open up your text editor of choice; I’ll use nano for this guide. First though, let’s make a place to keep your scripts and backups:

$ mkdir ~/Scripts
$ mkdir ~/Backups
$ cd Scripts
$ nano wp.backup.sh

Paste in the script and edit it to suit your site. Once you’re done, hit ctrl+x to save. Now we need to make the script executable with the first command below, and then test it with the second. The ./ means the current directory. Unless an executable file is in /usr/bin/ or a similar directory for your distro, you need to specify the full path of the file to be executed. You could also type out the full path to the script, but since we’re already in the directory, we’ll use the shortcut.

$ chmod +x wp.backup.sh
$ ./wp.backup.sh

If it worked, your terminal will be flooded with all of the folder names that tar just archived. To verify, let’s move to the Backups directory and see what our script made for us:

$ cd ~/Backups
$ ls
example.com.2015-04-16-1112.tar.gz

To open up your archive, just use tar -xvf. The database directory will contain your MySQL dump and the www directory contains the contents of your /var/www/html directory.

$ tar -xvf example.com.2015-04-16-1112.tar.gz
$ ls
example.com.2015-04-16-1112.tar.gz
database
www

To Cron or Not to Cron

Cron represents the easiest way to automate this script for daily backups. For example, you could just do:

$ crontab -e
@daily /home/user/Scripts/wp.backup.sh

And cron would run your script every day at midnight.

Cron is great, but since my WordPress lives on a CentOS 7 server and my local machine runs Arch (which doesn’t even come with cron by default), I wanted to use systemd to automate my script. Not only is it an opportunity to learn more about systemd, it will make things easier in the future if I want to expand on this simple script and backup solution.

First, let’s make a .timer file. .timer files work sort of like crontab entries. You can set them up for the familiar daily, hourly, or weekly timers or you can get super specific with it. Read the wiki for more info.

$ sudo nano /etc/systemd/system/daily.timer
[Unit]
Description=Daily Timer

[Timer]
OnCalendar=daily
Unit=wp.backup.service

[Install]
WantedBy=multi-user.target

The key part here is the OnCalendar=daily. Timers can be set to all sorts of values here, but we’ll be using daily. You could set yours to weekly or even monthly if you like. The other thing to take note of here is Unit=wp.backup.service. That’s the service file that will call our backup script. When you’re done editing, hit ctrl+x to save. Let’s make that service file now:

$ sudo nano /etc/systemd/system/wp.backup.service
[Unit]
Description=WordPress Backup Script

[Service]
Type=simple
ExecStart=/home/user/Scripts/wp.backup.sh

[Install]
WantedBy=daily.timer

Change the ExecStart=/home/user/Scripts/wp.backup.sh line to point to wherever you saved your script and hit ctrl+x to save. To test that it works, type:

$ sudo systemctl start daily.timer
$ sudo systemctl start wp.backup.service

Now let’s check the status of our new timer and service:

$ sudo systemctl status daily.timer && sudo systemctl status wp.backup.service

Should return something like this:

daily.timer - Daily Timer
   Loaded: loaded (/etc/systemd/system/daily.timer; enabled)
   Active: active (waiting) since Thu 2015-04-16 15:06:44 EDT; 33min ago

Apr 16 15:06:44 example systemd[1]: Starting Daily Timer
Apr 16 15:06:44 example systemd[1]: Started Daily Timer.
wp.backup.service - WordPress Backup Script
   Loaded: loaded (/etc/systemd/system/wp.backup.service; enabled)
   Active: inactive (dead) since Thu 2015-04-16 15:06:54 EDT; 33min ago
  Process: 332 ExecStart=/home/user/Scripts/wp.backup.sh (code=exited, status=0/SUCCESS)
 Main PID: 332 (code=exited, status=0/SUCCESS)
   CGroup: /system.slice/wp.backup.service

Apr 16 15:06:46 example wp.backup.sh[332]: /var/www/html/wp-content/plugins/jetpack/_inc/lib/
Apr 16 15:06:46 example wp.backup.sh[332]: /var/www/html/wp-content/plugins/jetpack/_inc/lib/markdown/
Hint: Some lines were ellipsized, use -l to show in full.
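
You can also confirm when the timer will next fire with:

$ systemctl list-timers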

So why have two separate systemd files for what seems like one job? Well, with this setup it’s easier to add more complexity later if we need to. You could add other options for when backups should occur, or add other scripts to be executed daily. Plus, you can call your backup script with sudo systemctl start wp.backup.service and check its status with sudo systemctl status wp.backup.service.

So now that we have our server making automatic daily backups, let’s back up the backups to an offsite location. If the server goes down, or your hosting company goes out of business, you’ll be out of luck if your only backups live on a server you can no longer access.

rsync

Rsync is one of my favorite Linux tools. It can sync files and directories over ssh, transferring only the differences between files, which makes it perfect for our needs. I have an Arch Linux home server that will function as the offsite backup location, but you could use another distribution. You can even use rsync on OS X, but that’s another guide.

If you haven’t done so already, you’ll need to set up ssh keys for your server and local machine. The Arch wiki page on SSH keys should have all of the info you need. There are also a plethora of guides on the interwebs that can help.

Once your ssh keys are set up, let’s set up an rsync command to pull the backup archives from the server and mirror them to our local machine:

rsync -avzP -e "ssh -p8888" user@example.com:/home/user/Backups/* /home/user/Backups/

Let’s break this down. -avzP tells rsync to archive, use verbose mode, compress, and print the progress to the terminal, respectively. The -e "ssh -p8888" tells it that the remote server uses port 8888 for ssh. The user@example.com:/home/user/Backups/* uses the wildcard “*” to grab any files in the Backups directory on the remote server. And the last bit tells rsync where to put the files.
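
If you’d like to preview what rsync will transfer before committing to it, add the -n (--dry-run) flag, which prints the file list without copying anything:

$ rsync -avzPn -e "ssh -p8888" user@example.com:/home/user/Backups/* /home/user/Backups/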

Since we’re going to be using this command in a script that gets executed by systemd, we need to take into account that it will be executed as root. This can cause problems, as ssh will look in the /root/.ssh directory to find the private key file, won’t find it, and the command will fail. To fix this, let’s add an entry to our /etc/ssh/ssh_config file to define our server and tell ssh where to find the private key for it.

$ sudo nano /etc/ssh/ssh_config
Host example
    HostName example.com
    Port 8888
    User user
    IdentityFile /home/user/.ssh/id_rsa

Add the above lines to your ssh_config and substitute your information. Ctrl+x to save. Now you can ssh to your server with just ssh example. You could also do rsync -avzP example:/source/files /destination/directory.

Now that we’ve got that sorted, let’s make it a script:

$ nano ~/Scripts/wp.offsite.backup.sh
#!/bin/bash
# one-liner to backup wordpress files from jay-baker.com

rsync -avzP example:/home/user/Backups/* /home/user/Backups/

Ctrl+x to save. Don’t forget to make it executable with chmod +x wp.offsite.backup.sh, and then test it with ./wp.offsite.backup.sh. Now that we’ve got the script working, let’s set up our systemd timer and service files. These will be the same as the ones on the server (see the sketch below), so just cut and paste those, being careful to change the name of the script. You could also change your timer to weekly if you like.
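
For reference, here’s a minimal sketch of what the offsite service file might look like, assuming the script path from above:

$ sudo nano /etc/systemd/system/wp.offsite.backup.service
[Unit]
Description=WordPress Offsite Backup Script

[Service]
Type=simple
ExecStart=/home/user/Scripts/wp.offsite.backup.sh

[Install]
WantedBy=daily.timer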

Once you’ve got them set up the way you want, then test and enable them with:

$ sudo systemctl start daily.timer
$ sudo systemctl start wp.offsite.backup.service
$ sudo systemctl enable daily.timer
$ sudo systemctl enable wp.offsite.backup.service

And that’s it! We now have automatic, daily backups of our WordPress installation and MySQL database, which are then mirrored to another machine with rsync: automatically and daily. From here, we could write further scripts to make monthly archives and move them to external storage, or just remove any backups older than a month.
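
As a rough sketch of that last idea (assuming the Backups directory and naming scheme from above), a script like this could be hooked up to its own timer to prune archives older than 30 days:

#!/bin/bash
# prune WordPress backup archives older than 30 days
find /home/user/Backups -name "example.com.*.tar.gz" -mtime +30 -delete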

Digital Ocean Owncloud with an sshfs Tunnel from Local Machine

April 10, 2015

Accessing or syncing your files across all of your devices is quite popular these days, but there are a plethora of options to choose from and it’s hard to pick a definitive winner. Since btsync recently came out with version 2.sucks, I’ve been rethinking my options.

Luckily, I happened to catch Tzvi Spits on LINUX Unplugged talking about his setup: an autossh tunnel from his home Arch machine to his droplet, which uses sshfs to mount his media and Seafile to serve it up with a nice Web interface.

Seafile sounds cool, but I’m already invested in OwnCloud, as I’ve got it running on my own Digital Ocean Ubuntu 14.04 droplet. With only 20 GB of storage on the droplet though, I need a way to access all of my media in OwnCloud that doesn’t involve syncing.

Plan of Attack

Basically, we’re going to use autossh to create a tunnel from our local machine to our remote server. On the remote server, we’ll use sshfs to mount a few directories from our local machine, and then we’ll point OwnCloud to the directories mounted with sshfs. Then we’ll set up a systemd unit file so we can manage our tunnel with systemctl and enable it to start at boot (I’ll also show you how to do this with cron, if your distro doesn’t use systemd). Finally, we’ll add the sshfs mounts to the server’s /etc/fstab so they are mounted at boot. This will let us use OwnCloud on our remote server as a secure, easy-to-use Web interface to access all of our media and files on the local machine.

OwnCloud

This guide assumes you already have OwnCloud installed. If you don’t have it installed yet, then I recommend you use Digital Ocean’s one-click install for OwnCloud so you don’t have to bother with setting up a LAMP stack and installing OwnCloud by hand. If you’d rather set things up yourself though, there’s a tutorial for that too: How to Install Linux, Apache, MySQL, and PHP on Ubuntu 14.04. You’ll then need to follow this guide to set up OwnCloud: How to Setup OwnCloud 5 on Ubuntu 12.10. I know the versions are different, but it will still work.

If you like the idea, but don’t want to use OwnCloud, then check out Tzvi‘s guide for how to use sshfs and Seafile to access your files. He also does some of these steps differently than this guide so seeing how he accomplishes all of this might help you if this guide isn’t working for you.

ssh

First, as you might have guessed, we’ll need to set up ssh. If you haven’t done this already, it’s fairly straightforward; if you’ve already done this for your server, then skip ahead. We’ll first need to install openssh on the local machine. On Arch, it’s just sudo pacman -S openssh.

Now we need to generate a key pair on the local machine using ssh-keygen. I like to use it with these options:

$ ssh-keygen -t rsa -b 4096 -C "$(whoami)@$(hostname)-$(date -I)"

Generating public/private rsa key pair.
Enter file in which to save the key (/home/username/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/username/.ssh/id_rsa.
Your public key has been saved in /home/username/.ssh/id_rsa.pub.
The key fingerprint is:
dd:15:ee:24:20:14:11:01:b8:72:a2:0f:99:4c:79:7f username@localhost-2014-11-22
The key's randomart image is:
+--[RSA  4096]---+
|     ..oB=.   .  |
|    .    . . . . |
|  .  .      . +  |
| oo.o    . . =   |
|o+.+.   S . . .  |
|=.   . E         |
| o    .          |
|  .              |
|                 |
+-----------------+

You’ll be prompted where to save the keys and to enter a passphrase. For our purposes, just hit enter and use the defaults. You can read up on the different options on the Arch wiki for ssh keys, or just check the man pages.

Now that we’ve got our key pair generated, we’ll need to copy the public key (id_rsa.pub) to the server’s .ssh/authorized_keys file.

$ cat ~/.ssh/id_rsa.pub
ssh-rsa AAAA ...

Select everything that cat displays for us and copy it to your clipboard (ctrl+shift+c works with most terminal emulators). Let’s ssh to the remote server now:

$ ssh -p <port> user@remoteserver.domain
$ mkdir ~/.ssh
$ chmod 700 ~/.ssh
$ nano ~/.ssh/authorized_keys
$ chmod 600 ~/.ssh/authorized_keys

Once you’re logged in, we’ll need to create the .ssh folder, if it doesn’t already exist. Next, we set the permissions on that folder so that only your user account has read/write/execute privileges on it. Then we create the authorized_keys file using the text editor nano and paste in our public key with ctrl+shift+v. Save the file with ctrl+x. Finally, we lock down the permissions on the authorized_keys file itself, meaning only the owner can read and write it. While you’re logged in, you may want to change some of the options in /etc/ssh/sshd_config on the remote server to make it more secure (like changing the default port, allowing only certain users, etc.). Check the Configuring SSHD section in the Arch wiki for more info.
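
As an aside, openssh ships with the ssh-copy-id utility, which can handle the copy, file creation, and permissions in one step from the local machine. Assuming your server listens for ssh on port 222 like the examples below, that would look like:

$ ssh-copy-id -i ~/.ssh/id_rsa.pub -p 222 user@remoteserver.domain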

Once you’re done with that, close the ssh connection with exit and try to ssh to the remote server again. This time, it shouldn’t ask you for a password. If it does, check that your permissions are in order. If you still have trouble, then the Arch wiki has a great troubleshooting section on the ssh page. If that doesn’t solve it, turn to Google, because we will need the keys to work for the rest of this guide.

Everyday I’m tunnelin’

SSH tunnels let you bind a port on a remote server back to a local port, so that any traffic going through the port on the remote machine gets sent back to the local machine.

$ ssh -p222 -nNTR 6666:localhost:2222 user@104.92.86.3

In this example, -p222 specifies the ssh listening port for the remote server (104.92.86.3). 6666 is the port on the server that will be tunneled back to port 2222 on our local machine. user is the username on the remote server 104.92.86.3. Substitute the values in the example with your own and test it. Once you’ve established the tunnel from the local machine to the remote server, let’s ssh in to the remote server and verify that we can reverse tunnel back to the local machine.

$ ssh -p <port> user@remoteserver.domain
[user@remoteserver ~]$ ssh -p6666 user@localhost
[user@localmachine ~]$ 

It works! Log out of the remote server and close the ssh tunnel. Now that we know how to set up a tunnel, let’s do it with autossh. autossh is a great tool for establishing and maintaining an ssh connection between two machines. It checks to make sure the connection is open and re-establishes it if it drops out. Let’s try to do the same thing, but this time with autossh:

$ autossh -M 0 -nNTR 6666:localhost:2222 user@remoteserver.com -p222 -i /home/user/.ssh/id_rsa

As you can see, the command for autossh looks a little different, but it’s basically doing the same thing. Substitute your values for the ones in the example. The -p222 is still the sshd listening port on the remote server. Also, don’t forget to change user in the -i part to your username; that will be important for the next step. Once you can establish a tunnel with autossh, double check that it works on the remote server by ssh’ing into it and entering ssh -p6666 user@localhost. Once that works, we’ll need to run the autossh command one more time as root.

$ sudo autossh -M 0 -nNTR 6666:localhost:2222 user@remoteserver.com -p222 -i /home/user/.ssh/id_rsa

That’s why we specify the location of the identity file: so that ssh doesn’t try to look for the key in /root/.ssh/. It will also ask you to verify that you want to add your remote server to the list of known hosts. Say yes.

Starting autossh at boot

We need a way to start autossh at boot. There are lots of ways to do this, but since Arch is drinking the systemd kool-aid, we probably should too. If you’re on a distribution that also uses systemd, then these instructions should work for you too, but I’ve only tried them on Arch.

Systemd uses .service units to manage system processes. You can read more about it on the Arch wiki if you want: systemd. Let’s make a service unit for our autossh command so it starts at boot. Systemd keeps some unit files at /etc/systemd/system/, and that’s where we will put our autossh.service file.

$ sudo nano /etc/systemd/system/autossh.service

[Unit]
Description=AutoSSH service
After=network.target

[Service]
Environment="AUTOSSH_GATETIME=0"
ExecStart=/usr/bin/autossh -M 0 -nNTR 6666:localhost:2222 user@remoteserver.com -p222 -i /home/user/.ssh/id_rsa

[Install]
WantedBy=multi-user.target

Hit ctrl+x to save. A couple of things are worth pointing out here. First, systemd will run this as root. That’s why we had to run our autossh command as root earlier: to add our remote server to the list of known hosts. Second, lots of guides for reverse tunneling out there include the -f option, which sends the command to the background and gives you control of your terminal again. That option will not work under systemd, as explained here, so be sure not to include it. The same effect is achieved by the Environment="AUTOSSH_GATETIME=0" line.

Now let’s test our new service file:

$ sudo systemctl daemon-reload
$ sudo systemctl start autossh

SSH into your remote server and check that the reverse tunnel still works with ssh -p6666 user@localhost. If it does then we can enable it back on the local machine with:

$ sudo systemctl enable autossh

If your distro doesn’t use systemd, then you can just use a crontab entry. Cron is a system daemon that runs processes at scheduled times or on certain events. All we need to do is add an @reboot entry with:

$ crontab -e
@reboot autossh -M 0 -f -nNTR 6666:localhost:2222 user@remoteserver.com -p222 -i /home/user/.ssh/id_rsa

Save the entry with whatever the method is for your system editor, ctrl+x if it’s nano. If your system editor is vim, then before you can input the text, activate insert mode by pressing “i”. Once your command is entered, hit escape to exit insert mode and then save and quit with “:wq” then “enter”. Notice that this time we included the -f flag for autossh. This will send the process to the background. Do not put the -f flag with the -nNTR options; those are the ssh options, and -f is a different option for ssh than it is for autossh.

sshfs

Now that we’ve got the reverse tunnel set up, let’s put it to work with sshfs, an awesome utility for mounting remote file systems over ssh. Let’s install it on our remote server. Since mine runs Ubuntu 14.04, here are the commands I used:

$ sudo apt-get update
$ sudo apt-get install sshfs

Once installed, we can mount folders on our local machine to our remote server. SSH into your remote server and give it a try:

$ sshfs -p6666 user@localhost:/home/user/Photos /home/user/Photos -C

This will mount the /home/user/Photos directory on the local machine to the /home/user/Photos directory on the remote server. Don’t forget to specify the port we are using for the tunnel, NOT the ssh listening port of your local machine; in this example it is 6666. The -C means to use compression. cd in to your /home/user/Photos directory on the remote server and make sure that the files are there and correspond to what’s on the local machine. If you have different usernames on the local machine and server, then you might have to specify some uid options, as sketched below.
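
For example, here’s a rough sketch of that uid mapping, assuming the files should show up as owned by the server-side user with uid/gid 1000 (check yours with id -u). Note that overriding ownership like this may require root or the user_allow_other setting we enable next:

$ sshfs -p6666 -o uid=1000,gid=1000 user@localhost:/home/user/Photos /home/user/Photos -C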

Since we’re going to be using OwnCloud to serve up these files later, let’s go ahead and make sure that the www-data user can access them. Otherwise, OwnCloud won’t be able to see the folders.

$ sudo nano /etc/fuse.conf

Uncomment this line, or add it if it’s not there:

user_allow_other

Save and quit with ctrl+x. Now, let’s add our sshfs mount to the remote server’s /etc/fstab so that each time the server restarts it will remount our directory.

$ sudo nano /etc/fstab
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
# / was on /dev/vda1 during installation
UUID=050e1e34-39e6-4072-a03e-ae0bf90ba13a /               ext4    errors=remount-ro 0       1

user@localhost:/home/user/Photos /home/user/Photos fuse.sshfs port=6666,delay_connect,reconnect,IdentityFile=/home/user/.ssh/id_rsa,defaults,allow_other,_netdev 0 0

You can add as many other mounts as you need in this file, just be sure to use the same options. If you don’t have delay_connect, it may fail to mount at boot: if you can mount the sshfs directory with sudo mount -a (the command to mount everything specified in /etc/fstab) but it doesn’t work at boot, then you need delay_connect. The allow_other option will let other users on the system use the mounted directories, which will be useful when we get OwnCloud set up.

Another thing to take note of here is that you cannot have spaces in a directory name in /etc/fstab. For example:

user@localhost:/home/user/My Photos /home/user/My Photos fuse.sshfs port=6666,delay_connect,reconnect,IdentityFile=/home/user/.ssh/id_rsa,defaults,allow_other,_netdev 0 0

This will not work and will give errors when using sudo mount -a. You might think to try /home/user/My\ Photos as you would in a Bash shell, but that will not work in /etc/fstab either. Spaces must be written as “\040”. For example:

user@localhost:/home/user/My\040Photos /home/user/My\040Photos fuse.sshfs port=6666,delay_connect,reconnect,IdentityFile=/home/user/.ssh/id_rsa,defaults,allow_other,_netdev 0 0

To test it, reboot your server and see if your sshfs directories are there.
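
After the reboot, a quick way to verify from the server is to list any fuse.sshfs mounts:

$ findmnt -t fuse.sshfs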

OwnCloud External Storage

Owncloud has an awesome feature that lets you add directories that aren’t in your /var/www folder. To enable it, just log in to OwnCloud, and click the ‘Files’ drop-down menu at the top left. Then click ‘Apps’, and then the ‘Not enabled’ section. Scroll down to ‘External Storage Support’ and click the enable button.

Now click the user drop-down menu at the top right and click ‘Admin’. Scroll down to ‘External Storage’, click the ‘Add Storage’ menu and then click ‘Local’. Give your folder a name (this is what will be displayed in OwnCloud) and point to the right directory. Note that OwnCloud can handle spaces in your directory path just fine. Next make the folder available to at least your user. If you did everything right then there will be a little green circle to the left of the folder.

Head back to your files view and you should be able to browse your sshfs-mounted directories. For me, it’s like having a 4 TB OwnCloud droplet! Well, sort of. Access speeds aren’t that great and OwnCloud can get bogged down when searching through really huge directories (especially on the smallest droplet like I have), but for just casual Web access to your files it works great.

Soylent!

April 8, 2015

My good friend Eric gave me some Soylent to try today.

Soylent Meal Replacement

The taste is fairly bland, but not unpleasant. I had it instead of lunch today and it genuinely filled me up. I would definitely try it again.

Simple rsync Helper Script

If you have multiple servers that you regularly send files to over rsync, then this simple little rsync script might be helpful to you.

I love using rsync, but mostly I use the same options and only a few destinations. To send a file to my homelab server, I would have to type:

rsync -avzP -e "ssh -p 1234" /location/of/source/file.txt user@notmyrealdomain.com:/location/of/destination/file.txt

True, it’s not that much to remember, but I wanted a faster way to do it. I also wanted to specify a default “Uploads” folder so that I could just send the file and figure out where to put it later. With my script aliased to ‘rs’ in my .bashrc, all I have to type now is:

rs /location/of/source/file.txt servernickname /destination/file.txt

And if I don’t feel like specifying a location, I can just do:

rs file.txt servernickname

This will send the file to the ~/Uploads folder on the specified server.

The script is pasted below. Just save it somewhere convenient for you. I keep mine in ~/Scripts.

#!/bin/bash

#######################################
#
# A simple rsync helper script to send files to one of several predefined servers.
# rsync is set to use -avzP, if you want different options enter them below
# Check man rsync for a full list of options
# use: rs.sh sourcefile servername destination
# if no destination is specified, the file will be placed in the ~/Uploads folder on the server
#
# Credit: jay-baker.com
#######################################

#########################################
#
# Defines the sourcefile variable as the first term entered after the script is called,
# and the servername variable as the second term entered after the script is called.
#
#########################################

sourcefile="$1"
servername="$2"

##########################################
#
# This will convert the source file to just its file name, stripping away the directories, but leaving the extension.
# This is useful if we don't feel like specifying a destination on the server. 
#
##########################################

filename="${sourcefile##*/}"

##########################################
#
# This basically says that if we don't specify a destination on the server, then make the
# default destination the ~/Uploads folder, preserving the original filename.
# If you want a different default folder, change the "destination=~/Uploads/$filename"
# line to point at the directory you prefer, e.g. "destination=/your/folder/$filename".
#
##########################################

if [ "$#" = "2" ]; then
	destination=~/Uploads/$filename
 else
	destination="$3"
fi

##########################################
#
# If you have multiple servers to upload to, then this is where you would enter them.
# Just enter your own server information below. Check out
# the example below for guidance. You can give each of your servers a simple nickname.
# Be sure to specify which port your server listens on for ssh. You didn't leave it set to
# port 22, did you? ;) Note that you can also change which options rsync uses here.
# You could set different options for each server.
#
# Example Server
# if [ "$servername" = "<nickname>" ]; then
# 	  echo -e "Sending $filename to $servername ... PewPew!"
#	  rsync -avzP -e "ssh -p <port>" "$sourcefile" <user>@<server>:"$destination"
# fi
#
##########################################

#Server 1
if [ "$servername" = "server1" ]; then
	 echo -e "Sending $filename to $servername ... PewPew!"
	 rsync -avzP -e "ssh -p 2222" "$sourcefile" user@server1.com:"$destination"
fi

#Server 2
if [ "$servername" = "server2" ]; then
	 echo -e "Sending $filename to $servername ... PewPew!"
	 rsync -avzP -e "ssh -p 2222" "$sourcefile" user@server2.org:"$destination"
fi

#Server 3
if [ "$servername" = "server3" ]; then
	 echo -e "Sending $filename to $servername ... PewPew!"
	 rsync -avzP -e "ssh -p 3333" "$sourcefile" user@server3.server.com:"$destination"
fi

exit 0

Don’t forget to make it executable with:

chmod +x rs.sh

And if you want to add an alias for it, just open up your ~/.bashrc in your text editor of choice and add the following line:

alias rs='/home/user/Scripts/rs.sh'

Replace user with your username and the rest with wherever you saved your script.
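
After adding the alias, reload your shell configuration so it takes effect in your current terminal, then give it a try:

source ~/.bashrc
rs file.txt server1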

This was a fun little project I did last night. I think I’ll add a little progress bar to it next.

I Passed!

April 7, 2015

I just passed my Linux Essentials Certification exam! The Linux Academy course certainly helped, but a lot of the questions are things you’ll be forced to learn just trying to install Arch.

Hopefully my Linux hobby will be turning into my Linux career soon!

LPI Linux Essentials

Linux Essentials Certification

April 5, 2015

I just finished up the Linux Essentials Certification training course over at linuxacademy.com. I definitely learned some new things, but it was mostly review for me at this point ;) I’m really enjoying Linux Academy so far. They even give you a little certificate PDF to print out.

I take the actual certification test this Tuesday. Now I’m on to the Linux+ LPIC Level 1 cert.

SSH Permission Problems

April 3, 2015

If you’re having trouble getting SSH keys to work, then permissions may be to blame.

SSH will check permissions on your .ssh/authorized_keys file, your .ssh folder, and your /home/user folder before allowing authentication with keys. This makes sense: if other users could modify your .ssh folder and authorized_keys file, then they could insert their own public key and gain access to your account.

If you’ve set everything else up properly to enable SSH to authenticate via keys and it still won’t work, then check the permissions on your home and .ssh folders. If you’ve mucked them up somehow, then you can appease SSH by fixing them with these commands:

On the server:

chmod g-w /home/your_user
chmod 700 /home/your_user/.ssh
chmod 600 /home/your_user/.ssh/authorized_keys
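
To see at a glance whether all three are in order, list them together:

ls -ld /home/your_user /home/your_user/.ssh /home/your_user/.ssh/authorized_keys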