AWS provides an “officially unsupported” set of scripts for Windows, OS X, and Linux that help with managing and deploying your AWS Elastic Beanstalk applications. This is useful because I could not find a simple way to SSH into my ELB-based EC2 instance using standard methods. I’m sure I missed something, but deploying and updating via git commands is going to be easier and is my preferred production method; might as well go there now.
You will now have a directory that contains three sets of commands. The appropriately-named eb subdirectory holds the OS-specific “eb” command-line scripts. The api directory holds a full-fledged Ruby-based implementation with very long command names; it requires ruby, ruby-devel, and the JSON gem to function. The AWSDevTools directory is an extension that adds new AWS-specific commands to git.
Activating “eb” Command Line
Edit your OS PATH variable to point to your unzipped download directory. I changed my unzipped directory to be something shorter and put it in my Linux root directory. To activate the eb command:
Add the path to the proper Linux Python directory (I am running 2.7.X). My CentOS .bash_profile:
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi

# Add the eb command-line tools to the path
# (adjust to wherever you unzipped the toolkit)
PATH=$PATH:/AWS-ElasticBeanstalk-CLI/eb/linux/python2.7/
export PATH
OK, now add this to your current running Linux environment:
# . .bash_profile
It will likely come back with “no applications found”.
Setup git Tools For AWS
Yup, same idea as above: edit your PATH to include the git tool kit. There is a slight twist here, though. Once you do that you will need to run the setup command noted below in each repository where you want the AWS tools.
Edit your PATH and reload it with the double-dot bash trick noted above (. .bash_profile).
New tricks… go set this up in your project directory.
Your project directory is where your WordPress PHP application resides and where you’ve created a git repository to manage it. You’ve already done your git init and committed stuff to the repository. Dig around this site or the Internet to find out how to do that if you’re not sure. Again, I recommend the Deploying WordPress 4.2.2 On Elastic Beanstalk, Part 1 article as it has some special Elastic Beanstalk config files in it that will be used by ELB to connect RDS dynamically and set your WP Salt values.
For this to work you are going to need Python (same as with “eb” above) and the Python Boto library installed.
If you don’t have boto yet, you install it on CentOS with:
# sudo yum install python-boto
Assuming you already have your WordPress stuff in a git repo, go to that directory.
In my case /var/www/wpslp/ holds my WordPress install that has been put into a git repo.
# cd /var/www/wpslp/
Now set up the git extensions by running the repository setup script that ships with the dev tools. In my copy of the toolkit it was named AWSDevTools-RepositorySetup.sh (check your unzipped AWSDevTools directory for the exact name); run it from inside the repository:
# AWSDevTools-RepositorySetup.sh
If everything is set up correctly you can check the git commands with something like:
# git aws.push
It will likely come back with an “Updating the AWS Elastic Beanstalk environment None…” message.
Either that or it will update the entire Internet, or at least the Amazon store, with your WordPress code.
Combined with the ELB environment you set up in the previous article on the subject, you are ready to go conquer the world with your new git-deployed WordPress installation on ELB.
I spent a good part of the past 24 hours trying to get a basic WordPress 4.2.2 deployment up-and-running on Elastic Beanstalk. It is part of the “homework” in preparing for the next generation of store location and directory technology I am working on. I must say that even for a tech geek that loves this sort of thing, it was a chore. This article is my “crib sheet” for the next time around. Hopefully I don’t miss anything important, as I wasted hours chasing my own rear-end trying to get some things to work.
I used the Deploying WordPress with AWS Elastic Beanstalk guide fairly extensively for this process. It is easy to miss steps, and the guide is not completely up-to-date with its screen shots and information, which makes some of it hard to follow the first time through. I will try to highlight the differences here when I catch them.
The steps here will get a BASIC non-scalable WordPress installation onto AWS. Part 2 will make this a scalable instance. If my assumptions are correct, which happens from time-to-time, I can later use command-line tools with git on my local dev box to push updated applications out to the server stack. If that works it will be Part 3 of the series on WP ELB Deployment.
The “shopping list” for getting started using my methodology. Some of these you can change to suit your needs, especially the “local dev” parts. Don’t go setting all of this up yet; some things need to be set up a specific way. This is just the general list of what you will be getting into. In addition to this list you will need lots and lots of patience. It may help to be bald; if not you will lose some hair during the process.
Part 1 : Installation
A local virtual machine. I use VirtualBox.
A clean install of the latest WordPress code on that box, no need to run the setup, just the software install.
An AWS account.
A “WP Deployment” specific AWS user that has IAM rules to secure your deployment.
AWS Elastic Beanstalk to manage the AWS Elastic Load Balancer and EC2 instances.
AWS ElastiCache for setting up Memcached for improved database performance.
AWS CloudFront to improve the delivery of content across your front-end WordPress nodes.
AWS RDS to share the main WordPress data between your Elastic Beanstalk nodes.
Creating The “Application”
The first step is to create the web application. In this case, WordPress.
I recommend creating a self-contained environment versus installing locally on your machine, but use whatever you’re comfortable with. I like to use VirtualBox, sometimes paired with Vagrant if I want to distribute the box to others, with a CentOS GUI development environment. Any flavor of OS will work, as the application building is really just hacking some of the WordPress config files and creating an “environment variables” directory for AWS inside a standard WP install.
// An AWS ELB friendly wp-config.php file.

/** Detect if SSL is used. This is required since we are
    terminating SSL either on CloudFront or on ELB */
if (($_SERVER['HTTP_CLOUDFRONT_FORWARDED_PROTO'] == 'https') OR ($_SERVER['HTTP_X_FORWARDED_PROTO'] == 'https')) {
    $_SERVER['HTTPS'] = '1';
}

/** The name of the database for WordPress */
define('DB_NAME', $_SERVER["RDS_DB_NAME"]);
/** MySQL database username */
define('DB_USER', $_SERVER["RDS_USERNAME"]);
/** MySQL database password */
define('DB_PASSWORD', $_SERVER["RDS_PASSWORD"]);
/** MySQL hostname */
define('DB_HOST', $_SERVER["RDS_HOSTNAME"] . ':' . $_SERVER["RDS_PORT"]);
/** Database Charset to use in creating database tables. */
define('DB_CHARSET', 'utf8');
/** The Database Collate type. Don't change this if in doubt. */
define('DB_COLLATE', '');

/**
 * Authentication Unique Keys and Salts.
 * Change these to different unique phrases!
 * Here they come from environment variables set by the
 * .ebextensions config file you will add next, e.g.:
 */
define('AUTH_KEY', $_SERVER["AUTH_KEY"]);
/* ...and likewise for the other seven key and salt values. */

/**
 * WordPress Database Table prefix.
 * You can have multiple installations in one database if you give each a unique
 * prefix. Only numbers, letters, and underscores please!
 */
$table_prefix = 'wp_';

/**
 * For developers: WordPress debugging mode.
 * Change this to true to enable the display of notices during development.
 * It is strongly recommended that plugin and theme developers use WP_DEBUG
 * in their development environments.
 */
define('WP_DEBUG', false);

/* Multisite */
//define( 'WP_ALLOW_MULTISITE', true );

/* That's all, stop editing! Happy blogging. */

/** Absolute path to the WordPress directory. */
if ( !defined('ABSPATH') )
    define('ABSPATH', dirname(__FILE__) . '/');

/** Sets up WordPress vars and included files. */
require_once(ABSPATH . 'wp-settings.php');
Do not move this wp-config.php file out of the root directory. Moving it is a common security practice, but the file will be missing from your AWS deployment if you do. There are probably ways to secure this by changing your target destination when setting up AWS CloudFront, but that is beyond the scope of this article.
Settings like $_SERVER['RDS_USERNAME'] come from the AWS Elastic Beanstalk environment you will create later. They are set dynamically by AWS when you attach the RDS instance to the application environment. This ensures the persistent WordPress data, things like your dynamic site content including pages, posts, users, and order information, is stored on a single highly-reliable database server, and each new node in your scalable app pulls from the same data set.
Settings for the “Salt” come from a YAML-style config file you will add next. This is bundled with the WordPress “source” for the application to ensure the salts are the same across each node of your WordPress deployment. This ensures consistency when your web app scales, firing up server number 3, 4, and 5 while under load.
Create a directory in the root WordPress folder named .ebextensions.
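For illustration, the directory and a minimal salt config might be created like this. The file name (keys.config), option names, and values below are placeholders, not the exact file from this deployment; generate your own unique phrases for each key and salt.

```shell
# Run from the WordPress root directory. Creates .ebextensions
# and a config file whose option_settings entries Elastic Beanstalk
# exposes to PHP as environment variables.
mkdir -p .ebextensions

cat > .ebextensions/keys.config <<'EOF'
option_settings:
  - option_name: AUTH_KEY
    value: 'put your unique phrase here'
  - option_name: SECURE_AUTH_KEY
    value: 'put your other unique phrase here'
EOF
```

Because this file travels with the application source, every node that Elastic Beanstalk boots gets identical salt values.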
Zip up your application to make it ready for deployment.
Do NOT start from the parent directory. The zip should start from the WordPress root directory. On Linux I used this command from the main WordPress directory where wp-config.php lives:
zip -r ../wordpress-site-for-elb.zip .
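A quick way to sanity-check the archive (a throwaway sketch; the /tmp paths here are arbitrary) is to list it and confirm wp-config.php sits at the top level of the zip rather than under a parent folder:

```shell
# Build a dummy WordPress-style root, zip it from inside that
# directory, then list the archive; entries should have no
# leading parent folder in their paths.
mkdir -p /tmp/wp-demo/.ebextensions
cd /tmp/wp-demo
touch wp-config.php
zip -qr ../wordpress-site-for-elb.zip .
unzip -l ../wordpress-site-for-elb.zip
```

If the listing shows paths like wp-demo/wp-config.php instead of wp-config.php, you zipped from the parent directory and ELB will not find your application files.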
Select the default permissions (I didn’t have a choice here).
Set the Environment to PHP and Load Balancing, auto scaling.
Upload your .zip file you created above as the source for the application.
Leave Deployment Limits at their defaults. As a side note, this will create an application that you can later use for other environments, making it easy to launch new sites with their own RDS and CloudFront settings but using the same WordPress setup.
Set your new Environment Name.
If your application name was unique you can use the default.
If your application name is “WordPress” it is likely in use on ELB, try something more unique.
Tell ELB to create an RDS instance for you.
I chose not to put this in a VPC, which is the default.
The guide I linked to above shows a non-VPC setup but then gives instructions for a VPC deployment. This caused issues.
Some instance sizes for both RDS and the EC2 instance ELB creates will ONLY run in a VPC (anything with a “t” level).
You will need to choose the larger “m-size” instances for RDS and EC2; otherwise the ELB setup will fail after 15-20 minutes of “spinning its wheels”.
Set your configuration details.
Choose an instance type of m*. I chose m3.medium the first time around, but m1.small should suffice for a small WP site.
Select an EC2 key pair to be able to connect with SSH. If you did not create one on your MAIN AWS login, go to the EC2 console and do that now. Save the private key on your local box and make a backup of it.
The email address is not required; I like to know if the environment changed, especially if I did not change it.
Set the application health check URL to
Uncheck rolling updates.
Defaults for the rest will work.
You can set some tags for the environment, but it is not necessary. Supposedly they help in reporting on account usage, but I’m not that far along yet.
Setup your RDS instance.
Again, choose an m* instance as the t* instances will not boot unless you are in a VPC.
If you choose the wrong instance, ELB will “sit and spin” for what seems like a decade before booting to “gray state”, which is AWS terminology for half-ass and useless.
If you cannot tell, this was the most frustrating part of the setup as I tried SEVERAL different instance classes. Each time the ELB would hang and then take forever to delete.
Enter your DB username and password.
They will be auto-configured by the wp-config.php hack you made earlier. I do recommend, however, saving these somewhere in case you need to connect to MySQL remotely. I hosed my host and siteurl values and needed to go to my local dev box, fire up the MySQL command line, and update the wp_options table after I booted my application in ELB. Having the username/password for the DB is helpful for that type of thing.
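If you ever need to repair those values by hand, the SQL is short. A hypothetical sketch (the wp_ table prefix and the URL below are assumptions; substitute your own environment's address):

```shell
# Write the repair SQL to a file. You would then run it against the
# RDS endpoint with the DB credentials you saved, e.g.:
#   mysql -h <rds-endpoint> -u <db-user> -p <db-name> < fix-siteurl.sql
cat > fix-siteurl.sql <<'EOF'
UPDATE wp_options
   SET option_value = 'http://your-env.elasticbeanstalk.com'
 WHERE option_name IN ('siteurl', 'home');
EOF
```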
Review your settings, launch and wait.
Reviewing ELB Settings
When you are done your Elastic Beanstalk should look something like this:
We use Amazon S3 to backup a myriad of directories and data dumps from our local development and public live servers. The storage is cheap, easily accessible, and is in a remote third party location with decent resilience. The storage is secure unless you share your bucket information and key files with a third party.
In this article we explore the task of backing up a Linux directory via the command line to an S3 bucket. This article assumes you’ve signed up for Amazon Web Services (AWS) and have S3 capabilities enabled on your account. That can all be done via the simple web interface at Amazon.
Step 1 : Get s3tools Installed
The easiest way to interface with Amazon from the command line is to install the open source s3tools application toolkit from the web. You can get the toolkit from http://www.s3tools.org/. If you are on a Redhat-based distribution you can create the yum repo file and simply do a yum install. For all other distributions you’ll need to fetch and build from source (actually running python setup.py install) after you download.
Once you have s3cmd installed you will need to configure it. Run the following command (note: you will need your access key and secret key from your Amazon AWS account): s3cmd --configure
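The configure step writes your answers to a ~/.s3cfg file. A minimal version of that file looks roughly like this (the key values are placeholders; s3cmd supports many more options):

```ini
[default]
access_key = YOUR_ACCESS_KEY
secret_key = YOUR_SECRET_KEY
use_https = True
```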
Step 2 : Create A Simple Backup Script
Go to the directory you wish to back up and create the following script named backthisup.sh:
#!/bin/bash
# Set this to a name that is unique to your site; it becomes
# part of the S3 bucket name.
SITENAME=yoursitename.com

# Create a tarzip of the directory
echo 'Making tarzip of this directory...'
tar cvz --exclude backup.tgz -f backup.tgz ./*
# Make the s3 bucket (ignored if already there)
echo 'Create bucket if it is not there...'
s3cmd mb s3://backup.$SITENAME
# Put that tarzip we just made on s3
echo 'Storing files on s3...'
s3cmd put backup.tgz s3://backup.$SITENAME
Note that this is a simple backup script. It tarzips the current directory and then pushes it to the s3 bucket. This is good for a quick backup but not the best solution for ongoing repeated backups. The reason is that most of the time you will want to perform a differential backup, only putting the stuff that is changed or newly created into the s3 bucket. AWS charges you for every put and get operation and for bandwidth. Granted the fees are low, but every penny counts.
Next Steps : Differential Backups
If you don’t want to push all your files to the server every time you run the script, you can do a differential backup. This is easily accomplished with S3Tools by using the sync command instead of the put command; sync only transfers files that are new or have changed. We leave that to a future article.
The Energy Detective (TED) is a consumer product that helps home users track their energy usage on a per-device or whole-household level. When Energy Inc., the makers of TED, needed to upgrade their site with an easy-to-update content management system (CMS) and a custom storefront, they came to Cyber Sprocket Labs.
Within months we had ported their old static-page driven site to our new custom site builder. They could now easily update their own content without getting developers involved, and better yet – the system protected them from inadvertently breaking their site design. The staff at Energy Inc. soon became experts at the system and added new content as well as new product models to the site.
The site also started with a simple storefront module. It allowed Energy Inc. to upload new products and track inventory levels to ensure customers knew when an item was put on backorder. The new storefront module allowed Energy Inc. to easily show and sell their wares while automating part of the order process on the back end.
Soon the orders started to roll in and Energy Inc. needed more sophisticated order tracking and management. Updates were made to add automated interfaces with FedEx for real-time shipping quotes anywhere in the US and its territories. New order search and tracking features were added so that Energy Inc. knew what shipped, what was backordered, and what was being returned under their return merchandise authorization policy.
Energy Inc’s TED product was doing well, and the media started to notice. So did Google. As one of the first partners in Google’s new energy management program, Energy Inc. realized that their shared Linux server was not going to be able to handle the new influx of traffic. Luckily, Cyber Sprocket Labs had already been working on the Amazon Web Services cloud for more than 18 months. We knew our way around the system and helped Energy Inc. navigate the maze of cloud computing and served as a guide to the new platform. Energy Inc. decided to make the move to the nearly infinite scalability and on-demand compute environment of cloud computing.
Cyber Sprocket Labs helped migrate Energy Inc. over to the Amazon Cloud in less than a week, with no downtime, while at the same time providing a significant boost in processing power… just in time for Google’s big announcement.
Congratulations on your success, Dolph! Glad we could be there to help get your web services off the ground!