Automate your database backups to Amazon S3 cloud storage

The golden rule: always back up your data! Nowhere is this more important than for the driving force behind most websites - the database. In this article we'll look at how you can save yourself some time by automating your database backups directly to the Amazon cloud.

Before we get started...

Make sure you've set yourself up with an Amazon S3 account. If you're unsure about the costs of the service, check out an earlier article - how much does Amazon S3 cost.

You'll also need a way of browsing your Amazon buckets to view the files pushed up. You can do this directly in the Amazon console, or download a file browser like CloudBerry.

Create a backup script to push to S3

First we're going to create a backup script that can push files to the Amazon S3 cloud. Writing the backup script in PHP gives us the freedom to run other tasks too - later in this tutorial, for example, we'll send out a confirmation email once the backup is complete.

If you don't need to do anything extra in your backup script, it's worth mentioning that you can upload to S3 directly from bash, but we won't be covering that here.

Create the backups directory and a dummy backup

To start, create a folder called backups in your application - ideally in a non-browseable location. So if your site files were located here:

/var/www/vhosts/yoursite/public/

...then this would be a good location:

/var/www/vhosts/yoursite/backups/

Keep in mind that if you save your backups within your public directory, other people may be able to access them, so you would need to set the right permissions or run a clean-up script.
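For example, a clean-up along these lines could be run periodically to prune old backups. The path and the seven-day retention here are just assumptions - adjust them to your own setup:

```shell
# Sketch: delete backup files older than 7 days.
# BACKUPS_DIR and the retention period are example values.
BACKUPS_DIR="./backups"
mkdir -p "$BACKUPS_DIR"
find "$BACKUPS_DIR" -name 'database-*.sql' -mtime +7 -delete
```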

For the sake of this article we'll assume the backups directory is located one directory above the public folder.

Inside the new directory, create a dummy backup that we can use for testing. Create a new file named with the following format for today's date:

database-dd-mm-yyyy.sql

Inside the file just type a hello world message so it isn't empty.
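From the command line, the dummy file can be created in one go (assuming you're in the directory above backups):

```shell
# Create a dummy backup named with today's date in dd-mm-yyyy format
mkdir -p backups
echo "hello world" > "backups/database-$(date +%d-%m-%Y).sql"
```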

Create the backup script

Now on to the actual script. First you'll need to download the Amazon S3 PHP class - you can get this from GitHub. The only file we actually need is S3.php.

Create a new folder in your public directory called backup. Copy the S3.php file into it, then create an index.php file with the following code:

<?php

// Show any errors while we're testing
ini_set('display_errors', 1);
error_reporting(E_ALL);

// Include the Amazon S3 library
require_once('./S3.php');

$access_key  = '';                      // your AWS access key
$secret_key  = '';                      // your AWS secret key
$bucket      = 'bucket/sub-directory';  // bucket (and optional prefix) to upload to
$backups_dir = '../../backups/';        // relative path to the backups directory

// Connect to S3
$s3 = new S3($access_key, $secret_key);

// Today's backup file, named in dd-mm-yyyy format
$todays_file = 'database-' . date('d-m-Y') . '.sql';

if( file_exists( $backups_dir . $todays_file ) )
{
    // Upload the file to the bucket
    $s3->putObject(S3::inputFile($backups_dir . $todays_file, false), $bucket, $todays_file);
}
else
{
    echo "File doesn't exist: " . $backups_dir . $todays_file;
}

We'll tidy this up shortly - for now we've turned on PHP error output and shown a message if the file doesn't exist, to help with any debugging. Note that you'll need cURL for the upload to work; if you need to set this up you can find the commands here.
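On a Debian/Ubuntu server, installing the PHP cURL extension typically looks like the following. Treat this as a sketch - the package name depends on your PHP version:

```shell
# Install the PHP cURL extension (package name is an assumption -
# on other systems it may be php-curl or a versioned variant)
sudo apt-get install php5-curl

# Restart the web server so PHP picks up the new extension
sudo service apache2 restart
```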

To test that this works, simply browse to yoursite.com/backup and your script should run. Check that the file appears in your Amazon S3 bucket.

Send email confirmation of success or failure

As we'll be relying on this script to back up our files, next we're going to send a short email confirming success or failure.

<?php

// Include the Amazon S3 library
require_once('./S3.php');

$access_key  = '';
$secret_key  = '';
$bucket      = 'bucket/sub-directory';
$backups_dir = '../../backups/';

// Connect to S3
$s3 = new S3($access_key, $secret_key);

$todays_file = 'database-' . date('d-m-Y') . '.sql';

if( file_exists( $backups_dir . $todays_file ) )
{
    // putObject() returns true on success
    $success = $s3->putObject(S3::inputFile($backups_dir . $todays_file, false), $bucket, $todays_file);
    if( $success )
    {
        $success_subject = 'successful';
        // filesize() returns bytes, so convert to KB for the message
        $message = 'File successfully uploaded to the cloud: ' . $todays_file . ' (' . round( filesize( $backups_dir . $todays_file ) / 1024 ) . ' KB)';
    }
    else
    {
        $success_subject = 'failed';
        $message = 'The Amazon S3 class failed to upload the file.';
    }
}
else
{
    $success_subject = 'failed';
    $message = "The file could not be found: " . $backups_dir . $todays_file;
}

// Email the result
$to      = 'you@yoursite.co.uk';
$subject = 'Daily database backup ' . $success_subject;
$headers = 'From: alerts@yoursite.co.uk' . "\r\n" .
    'X-Mailer: PHP/' . phpversion();
mail($to, $subject, $message, $headers);

Notice that any messages are now sent in the email, and we've taken out the lines that would output errors to the browser. Browse to the URL again to test the script. (Note: with this simple email sender, don't be surprised if your first notification lands in your junk folder - just unmark it as spam and add the sender to your contacts to stop this happening every time.)

Finally, all we need to do is back up the real database and take out the manual step of running the script...

Set up a cron job

Now that we have a script to run, we're going to set up a cron job. To do this, log on to your server via SSH.

Within the cron job we're going to request our URL daily. To load the URL from the server we'll need lynx installed:

apt-get install lynx-cur

Next let's set up the cron job itself - type the following into the command line:

crontab -e

This opens the list of cron jobs for the current user in edit ('-e') mode.

To customise when your cron job runs you can use a crontab generator. Otherwise, use the following to run at 5 and 10 minutes past midnight every day (append to the end of the file):

5 0 * * * /usr/bin/mysqldump -u DBUSERNAME -pDBPASSWORD DBNAME > /var/www/vhosts/yoursite/backups/database-$(date +%d-%m-%Y).sql
10 0 * * * lynx -dump http://yoursite.co.uk/backup

The first line backs up your database (you'll need to modify the credentials and paths), and the second line calls the script that uploads the generated file.
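For reference, the five columns at the start of each line are cron's schedule fields, read left to right:

```shell
# minute  hour  day-of-month  month  day-of-week   command
#   5      0         *          *         *        runs at 00:05 every day
#  10      0         *          *         *        runs at 00:10 every day
```

A `*` means "every value", so changing the second field from `0` to `*` would run the job every hour instead of once a day.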

If you find your cron job isn't running, check out the setting up cron jobs post for more detailed steps, or the ubuntu troubleshooting forum.

And you're done! You now have daily database backups to the Amazon S3 cloud, with an email notification for peace of mind.

