After running websites for over a decade now, I’ve learned it’s always nice to have a backup (or three). Sure, you can download your files every now and then, but chances are they won’t be 100% current. A few months back I decided to look into backing up WordPress sites to Amazon’s S3 service. It took a bit of research, some SSH, and some PHP coding, but I’ve had a nice workable solution up and running for a while now.
While planning to move a site to a new server, I thought I’d try to restore from one of these backups. Surely downloading and uploading 4 TAR files would be easier than downloading/uploading thousands of files and images. It was, but there was a bit of trial and error.
To speed things up should I ever need to restore a crashed site, I’m documenting the process here.
1. First of all, I had to download all the necessary files from my Amazon S3 account. This was basically the httpdocs folder, plus the MySQL database dump.
2. Next, upload the files to the root web directory.
3. SSH into the server and navigate to the site’s root directory.
4. Combine the files, which were split into 300 MB chunks, using the ‘cat’ command. The filenames should resemble httpdocs_date.tar.gz.p00, httpdocs_date.tar.gz.p01, etc. The syntax looks like this:
cat httpdocs* > filename.tar.gz
That will combine all parts into one file named filename.tar.gz
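If you want to convince yourself the chunks reassemble cleanly before extracting, the split-and-cat round trip is lossless. Here’s a runnable sketch; the file names (backup.bin, the .p prefix) are made up for the demo:

```shell
# Demo: splitting a file into chunks and reassembling with cat is lossless.
head -c 1000000 /dev/urandom > backup.bin      # stand-in for a real archive
split -b 300000 -d backup.bin backup.bin.p     # chunks: backup.bin.p00, .p01, ...
cat backup.bin.p* > rejoined.bin               # the shell expands p* in sorted order
cmp backup.bin rejoined.bin && echo "files match"
```

Note that `cat httpdocs*` relies on the shell expanding the glob in sorted order, which is why the numeric .p00, .p01 suffixes matter.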
5. Decompress the GZipped file using:
gzip -d filename.tar.gz
and then decompress the TAR file using:
tar -xvf filename.tar
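For what it’s worth, most tar builds can do both steps at once with the -z flag, which also leaves the original .tar.gz in place rather than consuming it the way gzip -d does. A small self-contained sketch (the file and directory names here are made up for the demo):

```shell
# Build a tiny sample archive, then extract it in one step with -z.
mkdir -p demo/httpdocs
echo "<?php // placeholder ?>" > demo/httpdocs/index.php
tar -czf demo.tar.gz -C demo httpdocs    # stand-in for a real filename.tar.gz
mkdir -p restore
tar -xzf demo.tar.gz -C restore          # gunzip + untar in one command
```

On a real restore that would just be `tar -xzf filename.tar.gz`.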
6. That should restore all the files to the httpdocs directory. One last thing: you may need to change the owner of the httpdocs directory using the chown command, like this:
chown -R USERNAME httpdocs
That ensures the web server can actually serve the files once the database is restored (that’s a topic for another post).
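Here’s a minimal, runnable sketch of that ownership fix. On a real server you’d run it as root with your hosting account’s username; the demo below targets the current user instead so it works without privileges, and the directory name is made up:

```shell
# Sketch: recursively reset ownership on a restored directory tree.
# On a real server: chown -R USERNAME httpdocs (run as root).
mkdir -p httpdocs_demo/wp-content/uploads
touch httpdocs_demo/index.php
chown -R "$(id -un)" httpdocs_demo      # demo uses the current user
stat -c '%U' httpdocs_demo/index.php    # prints the owning user
```

Depending on the host, you may also need to set the group at the same time, e.g. `chown -R USERNAME:GROUP httpdocs`.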