Ran into a unique situation while updating my VVV box after a weekend of WordPress Core and plugin development at WordCamp US. Today, after the formal release of WordPress 4.4, I needed to update the code in my WordPress trunk directory on the VVV box. Since I have other things in progress I didn’t want to take the time to reprovision the entire box. Though, as it turned out, that would have been faster.
The issue surfaced when I tried to run the svn up command to update the /srv/www/wordpress-trunk directory and make sure I was on the latest code. The command failed, insisting that a previous operation was incomplete. Not surprising, since the connectivity at the conference was less than consistent. svn kindly suggested I run svn cleanup. Which I did. And was promptly met with an “Invalid cross-device link” error when it tried to restore hello.php to the plugin directory.
The problem is that I develop plugins for a living. As such I have followed the typical VVV setup and linked my local plugin source code directory to the VVV plugin directory for each of the different source directories on that box. I created the suggested Customfile on my host system and mapped the different directory paths. On the guest box, however, the system sees this mapping as a separate drive. Which it is. And, quite honestly, I’m glad they have some security in place to protect this. Otherwise a rogue app brought in via the Vagrant guest could start writing stuff to your host drive. I can think of more than one way to do really bad things if that was left wide open as a two-way read-write channel.
Comment out the mapping in the Customfile on the host machine. Go to your vvv directory and find that Customfile. Throw a hashtag (or pound sign for us old guys) in front of the directory mappings you are trying to update with svn; in my case, wordpress-trunk.
Run the vagrant reload command so you don’t pull down and provision a whole new box, but DO break the link between the host directory and the guest directory.
Then, on the guest box, run your svn cleanup and svn up to fetch the latest WP code.
Go back to the host, kill the hashtag, and reload.
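The steps above can be sketched as a quick shell sequence. The Customfile path and the synced-folder mapping below are made-up examples for illustration; adjust them to your own VVV setup, and note the vagrant and svn commands are shown as comments since they only make sense on the real host and guest:

```shell
#!/bin/sh
# Sketch of the fix: comment out a Customfile mapping, then restore it.
# The path and mapping line here are examples only.
CUSTOMFILE=/tmp/Customfile

# A sample Customfile with a hypothetical synced-folder mapping.
cat > "$CUSTOMFILE" <<'EOF'
config.vm.synced_folder "~/plugins", "/srv/www/wordpress-trunk/wp-content/plugins"
EOF

# Step 1: comment out the mapping (prepend a '#').
sed -i 's|^config.vm.synced_folder|# config.vm.synced_folder|' "$CUSTOMFILE"
grep '^#' "$CUSTOMFILE"

# Steps 2 and 3 would then be, on the host and guest respectively:
#   vagrant reload
#   (inside the guest) cd /srv/www/wordpress-trunk && svn cleanup && svn up

# Step 4: kill the hashtag to restore the mapping, then reload again.
sed -i 's|^# config.vm.synced_folder|config.vm.synced_folder|' "$CUSTOMFILE"
grep '^config' "$CUSTOMFILE"
```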
Hope that saves you an extra 20 minutes surfing Google, or your favorite search service, for the answer.
The Charleston Software Associates web server came crashing down again this afternoon. About once-per-month the server has been going into an unresponsive state. Today I finally had enough logging turned on to track down the issue. The problem? The Apache web server was running out of memory.
The server was not under heavy load, but just the right combination of visitors and background processes triggered critical mass. The wait times for a process to finish were long enough to start putting more things in the queue than could be emptied. Memory soon ran out and the server stopped responding.
In researching the problem I came across two things that have made a substantial impact on the performance of my WordPress site. If you are running a WordPress site, even with a limited number of visitors, you may want to employ these techniques.
W3 Total Cache
W3 Total Cache is one of the most popular and most frequently recommended WordPress plugins. It is noted in several areas of the WordPress Codex as well as the WordPress forums and on core contributor blogs. It is a powerful site caching plugin that can yield significant improvements in page loading time.
In the simplest configuration, you can turn on page caching which will run the PHP script that builds your WordPress page and create a static HTML file. All future requests for your static pages will serve the HTML file versus loading the entire PHP and WordPress codebase. This is a significant performance boost for many sites. If your content changes or you change the style of your site, the plugin will re-generate the pages automatically.
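The serve-the-static-file-if-it-is-fresh idea can be sketched in a few lines of shell. This is a toy model only; the file names are made up, and W3 Total Cache’s real implementation is PHP and far more involved:

```shell
#!/bin/sh
# Toy model of page caching: if a cached copy of the page exists and is
# newer than the content it was built from, serve it; otherwise rebuild.
CONTENT=/tmp/post.txt      # stands in for your WordPress content
CACHE=/tmp/post.html       # stands in for the static cached page

echo "hello world" > "$CONTENT"

build_page() {
    # Stand-in for the expensive PHP/WordPress page build.
    echo "<html><body>$(cat "$CONTENT")</body></html>" > "$CACHE"
}

if [ "$CACHE" -nt "$CONTENT" ]; then
    echo "cache hit: serving static file"
else
    echo "cache miss: rebuilding page"
    build_page
fi
cat "$CACHE"
```

Every request after the first skips the expensive rebuild, which is exactly the savings the plugin delivers for your unchanged pages.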
This is just one simple way that W3 Total Cache can improve your site performance. Spend some time reviewing the technology, the configuration parameters, and the various options available; W3 Total Cache can be a great way to improve the visitor experience on your site.
PHP APC
PHP APC is a PHP caching system that can be implemented by any PHP application. Enabling this feature is typically done at the system admin level and is a system-wide setting. Thus, this is more appropriate for people running a dedicated server. If you are on a shared server you will likely be limited to disk storage caching in plugins like W3 Total Cache or Super Cache.
After installing W3 Total Cache, I noticed settings for Opcode-style caching. After some research I found the simplest way to implement the more advanced Opcode cache was to install PHP APC. PHP APC, or the Alternative PHP Cache, is a simple install on most Linux systems running PHP. On my CentOS boxes I can just run the yum install php-pecl-apc command; most other Linux distributions have a similar package. The APC module needs no special compilation. Simply install it and restart Apache.
Once you have PHP APC installed the easiest way to take advantage of it is to go into W3 Total Cache and enable Object Cache and set the cache type to Opcode : APC. This is the recommended option and should be used, when possible, over the database cache.
One side note here: APC caching can be memory intensive, so it is best used for memory-centric applications, such as the storage of PHP code modules. Enabling it for the object cache is a great use of APC. However, using it to store cached pages is not an optimal use of the memory stack. Your WordPress site probably has more pages that are accessed on a regular basis than will fit in the memory cache, so use the disk storage setting for the page cache and reserve the APC cache for objects.
When you configure W3 Total Cache to use APC to store objects, the most often used sections of the WordPress core and the more popular plugins will load into memory. Now, whenever someone visits your site, much of the calculation and algorithmic “gyrations” that happen to build a page or to load configuration settings are already pre-calculated and stored in RAM. Through W3 Total Cache, WordPress can simply fetch the “ready to go” information directly from RAM, saving on disk I/O and dramatically increasing performance.
It should be noted that, out of the box, APC is set up for a small-to-moderate environment. WordPress with W3 Total Cache is a bit heavier than a simple web app, so you will likely want to change the default APC parameters. You can find the settings in the php.ini file or, on newer configurations, in the apc.ini file in the php.d directory. The first thing you should consider changing is the base memory size reserved for APC. The default of 64M is not really enough for WordPress to make good use of it; on my site I find that 128M seems adequate.
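The change itself is one line of configuration. The snippet below writes to a temp file purely for illustration; on a real CentOS box the file would be something like /etc/php.d/apc.ini, the path varies by distribution, and the 128M value is simply what worked for my site:

```shell
#!/bin/sh
# Append an apc.shm_size override. On a real server this goes in your
# apc.ini (or php.ini), followed by an Apache restart.
APC_INI=/tmp/apc.ini   # illustration only -- use your real apc.ini path

cat >> "$APC_INI" <<'EOF'
; Raise the APC shared memory segment from the 64M default
apc.shm_size=128M
EOF

grep shm_size "$APC_INI"
# then: service httpd restart (or apachectl graceful)
```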
If you are not sure about how your server is performing with the setting defaults, hunt down the apc.php script on your server (it was installed when you installed php-apc) and put it somewhere in your web server directory path. I do NOT recommend putting it in the document root, as noted in the APC hints. Instead put it in a protected sub-directory. Access it by surfing directly to the URL path where you installed apc.php.
The first thing you should look at is the “Detailed Memory Usage and Fragmentation” graph. If your fragmentation is over 50% then you probably need to adjust your working memory space or adjust which apps are using APC (in W3 Total Cache, unset Opcode : APC and use Disk Store for everything, then turn on Opcode : APC one-at-a-time).
The second thing to look at, once you’ve adjusted the default memory share, is the free versus used memory for APC. You want to have a small amount of free memory available. Too much and your server has less memory to work with for doing all the other work that is required to serve your pages, the stuff that is never cached. Too little (0% free) and your fragmentation rises.
Here is what my server looks like with the 128M setting. I have a little too much allocated to APC, but changing my setting from 128M to something like 112M isn’t going to gain me much. The 16M of extra working memory pales in comparison to the average 2.7GB I have available on the server.
On my server I noticed a few things immediately after spending 30 minutes installing W3 Total Cache and turning on and tuning APC with 128M of APC memory. This is on a dedicated server running CentOS 6.4 with 3.8GB of RAM and two 1.12GHz cores.
Server Load Average went from 1.8 to 0.09.
Load average is an indicator of how badly “traffic is backed up” for the cores on your server. On a 2-core server, like mine, you can think of the load average as the “number of cars waiting to cross the 2-lane bridge”. If the number is less than 2, there is no bottleneck and the CPUs can operate at peak efficiency. On a 2-core system the goal is to keep the number below 2, and preferably below 80% of that. At 100% utilization you consume more power and generate more heat, which decreases the life span of the CPU.
Memory Consumption went from 1.8GB under light load to 0.6GB under the same load.
Page load time for the home page went from 750ms to 120ms on average.
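A quick way to watch the “cars waiting at the bridge” number against your core count is a snippet like this. It assumes a Linux box with /proc/loadavg and nproc available, and the 80% threshold is my rule of thumb from above, not a hard limit:

```shell
#!/bin/sh
# Compare the 1-minute load average to the number of cores.
CORES=$(nproc)
LOAD=$(cut -d' ' -f1 /proc/loadavg)
echo "1-min load: $LOAD on $CORES core(s)"

# Flag anything over ~80% of the core count. awk handles the
# floating-point comparison that plain 'test' cannot.
if awk -v l="$LOAD" -v c="$CORES" 'BEGIN { exit !(l > 0.8 * c) }'; then
    echo "load is high: requests may be queuing"
else
    echo "load is comfortable"
fi
```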
By the way, that 18.31s spike in response time? That is when the server started tripping over itself, with memory and page requests unable to keep up. The server load crossed the 2.0 line around 2PM, and because it was a light-traffic day (Saturday afternoon) it took nearly 2 hours for the traffic jam to get so bad that the server just gave up trying.
The bottom line: even if you are not running a site that gets a lot of visitors, you can still improve the user experience by installing a simple plugin. If you want to take it a step further, look into the PHP APC module. Just be sure to have a backup and, if possible, test first on a staging environment. After all, every server configuration and WordPress installation is different, and not all plugins and themes will play well with a caching system.
We use Amazon S3 to backup a myriad of directories and data dumps from our local development and public live servers. The storage is cheap, easily accessible, and is in a remote third party location with decent resilience. The storage is secure unless you share your bucket information and key files with a third party.
In this article we explore the task of backing up a Linux directory via the command line to an S3 bucket. This article assumes you’ve signed up for Amazon Web Services (AWS) and have S3 capabilities enabled on your account. That can all be done via the simple web interface at Amazon.
Step 1 : Get s3tools Installed
The easiest way to interface with Amazon from the command line is to install the open source s3tools application toolkit from the web. You can get the toolkit from http://www.s3tools.org/. If you are on a Redhat based distribution you can create the yum repo file and simply do a yum install. For all other distributions you’ll need to fetch and build from source (actually running python setup.py install) after you download.
Once you have s3cmd installed you will need to configure it. Run the following command (note: you will need your access key and secret key from your Amazon AWS account): s3cmd --configure
Step 2 : Create A Simple Backup Script
Go to the directory you wish to backup and create the following script named backthisup.sh:
#!/bin/bash
# Set this to identify your backup bucket; replace with your own site name.
SITENAME=example.com

# Create a tarzip of the directory, excluding the backup file itself
echo 'Making tarzip of this directory...'
tar cvzf backup.tgz --exclude=backup.tgz ./*

# Make the s3 bucket (ignored if already there)
echo 'Create bucket if it is not there...'
s3cmd mb s3://backup.$SITENAME

# Put that tarzip we just made on s3
echo 'Storing files on s3...'
s3cmd put backup.tgz s3://backup.$SITENAME
Note that this is a simple backup script. It tarzips the current directory and then pushes it to the s3 bucket. This is good for a quick backup but not the best solution for ongoing repeated backups. The reason is that most of the time you will want to perform a differential backup, only putting the stuff that is changed or newly created into the s3 bucket. AWS charges you for every put and get operation and for bandwidth. Granted the fees are low, but every penny counts.
Next Steps : Differential Backups
If you don’t want to push all your files to the server every time you run the script, you can do a differential backup. This is easily accomplished with s3tools by using the sync command instead of put. We leave that to a future article.
Follow the installation steps until you get to the Partition disks step, then:
Select Manual as the partition method.
Select the first hard drive, and agree to “Create a new empty partition table on this device?”.
Repeat this step for the second drive.
Select the “FREE SPACE” on the first drive then select “Create a new partition”.
Next, select the Size of the partition. This partition will be the swap partition, and a general rule for swap size is twice that of RAM. Enter the partition size, then choose Primary, then Beginning.
Select the “Use as:” line at the top. By default this is “Ext4 journaling file system”, change that to “physical volume for RAID” then “Done setting up partition”.
For the / partition once again select “Free Space” on the first drive then “Create a new partition”.
Use the rest of the free space on the drive and choose Continue, then Primary.
As with the swap partition, select the “Use as:” line at the top, changing it to “physical volume for RAID”. Also select the “Bootable flag:” line to change the value to “on”. Then choose “Done setting up partition”.
With the partitions setup the arrays are ready to be configured:
Back in the main “Partition Disks” page, select “Configure Software RAID” at the top.
Select “yes” to write the changes to disk.
Choose “Create MD device”.
Enter the number of active devices, “2” (or the number of hard drives you have), for the array. Then select “Continue”.
Next, enter the number of spare devices “0” by default, then choose “Continue”.
Choose which partitions to use. Generally they will be sda1 and sdb1.
For the swap partition choose sda1 and sdb1. Select “Continue” to go to the next step.
Repeat steps three through seven for the / partition choosing sda2 and sdb2.
Once done select “Finish”.
There should now be a list of hard drives and RAID devices. The next step is to format and set the mount point for the RAID devices. Treat the RAID device as a local hard drive, format and mount accordingly.
Select “#1” under the “RAID1 device #0” partition.
Choose “Use as:”. Then select “swap area”, then “Done setting up partition”.
Next, select “#1” under the “RAID1 device #1” partition.
Choose “Use as:”. Then select “Ext4 journaling file system”.
Then select the “Mount point” and choose “/ – the root file system”. Change any of the other options as appropriate, then select “Done setting up partition”.
Finally, select “Finish partitioning and write changes to disk”.
The installer will then ask if you would like to boot in a degraded state, select Yes.
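Once the install finishes and the system boots, it is worth confirming the arrays came up healthy. A minimal sketch, assuming a Linux kernel that exposes md status through /proc/mdstat (the check degrades gracefully on machines without software RAID):

```shell
#!/bin/sh
# Show software RAID status if this kernel exposes it.
if [ -r /proc/mdstat ]; then
    cat /proc/mdstat
    # "[UU]" next to an array means both mirrors are up; "[U_]" means
    # one member is missing and the array is running degraded.
else
    echo "no /proc/mdstat here: software RAID is not in use"
fi
```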