Configuring Apache 2.4 Connections For WordPress Sites

Recently I upgraded my web server to PHP 5.6.14. Along the way the process managed to obliterate my Apache web server configuration files. Luckily it saves them during the upgrade process, but one thing I forgot to restore was the settings that help Apache manage memory. Friday night around midnight, because this stuff ALWAYS happens when you’re asleep… the server crashed. And since it was out of memory with a bazillion people trying to surf the site, every time I restarted the server I could not log in fast enough to get a connection and fix the problem.

Eventually I had to disconnect my AWS public IP address, connect to a private address with SSH, and build the proper Apache configuration file to ensure Apache didn’t go rogue and try to take over the Internet from my little AWS web server.

Here are my cheat-sheet notes about configuring Apache 2.4 so that it starts asking site visitors to “hold on a second” when memory starts getting low. That is much nicer than grabbing more memory than it should and just crashing EVERYTHING.

My Configuration File

I put this new configuration file in the /etc/httpd/conf.d directory and named it mpm_prefork.conf. That should help prevent it from going away on a future Apache upgrade. This configuration is for an m3.large server running with 7.4GB of RAM with a typical WordPress 4.4 install with WooCommerce and other plugins installed.

# prefork MPM for Apache 2.4
# use httpd -V to determine which MPM module is in use.
# StartServers: number of server processes to start
# MinSpareServers: minimum number of server processes which are kept spare
# MaxSpareServers: maximum number of server processes which are kept spare
# ServerLimit: sets the maximum configured value for MaxRequestWorkers
#   for the lifetime of the Apache httpd process
# MaxRequestWorkers: maximum number of simultaneous child processes serving
#   requests; to raise it you must also raise ServerLimit
# If both ServerLimit and MaxRequestWorkers are set to values higher than the system can handle,
# Apache httpd may not start or the system may become unstable.
# MaxConnectionsPerChild: how many requests are served before the child process dies and is restarted
#   find your average requests served per day and divide by average servers run per day
#   a good starting default for most servers is 1000 requests
#
# TOTAL SYSTEM RAM: free -m (first column) = 7400 MB
# USED SYSTEM RAM: free -m (second column) = 2300 MB
# TOTAL APACHE RAM LOAD: (htop sum RES column) 1900 MB
# AVG APACHE RAM LOAD: htop (filter httpd, average RES column = loaded in physical RAM) = 87 MB

ServerLimit 64
MaxRequestWorkers 64
MaxConnectionsPerChild 2400

The Directives

With Apache 2.4 you only need to adjust three directives: ServerLimit, MaxRequestWorkers (renamed from MaxClients in earlier versions), and MaxConnectionsPerChild (renamed from MaxRequestsPerChild).

ServerLimit / MaxRequestWorkers

ServerLimit sets the maximum configured value for MaxRequestWorkers for the lifetime of the Apache httpd process. MaxRequestWorkers is the number of simultaneous child processes serving requests. This seems a bit redundant, but it is an effect of using the prefork MPM module, which is a threadless design. That means it runs a bit faster but eats up a bit more memory. It is the default mode for Apache running on Amazon Linux. I prefer it, as I like stability over performance and some older web technologies don’t play well with multi-threaded designs. If I wanted a higher-performance multi-threaded environment I’d switch to nginx. For this setup, setting ServerLimit and MaxRequestWorkers to the same value is fine. This says “don’t ever run more than this many web server processes at one time”.

In essence this is the total number of simultaneous web connections you can serve at one time. What does that mean? With the older HTTP/1.x protocol (http or https), every element of your page that comes from your server is a connection. The page text, any images, scripts, and CSS files are each a separate request. Luckily most of this comes out of the server quickly, so a page with 20 web objects on it will use up 20 of your 64 connections but will spit them out in less than 2 seconds, leaving those connections ready for the next site visitor while the first guy (or gal) reads your content. With newer HTTP/2 (and SPDY) connections a single process (worker) may handle multiple content requests from the same user, so you may well end up using only 1 or 2 connections even with a page with multiple objects loading. While that is an over-simplification, the general premise shows why you should update your site to https and get on services that support HTTP/2.

Calculating A Value

# TOTAL SYSTEM RAM: free -m (first column) = 7400 MB
# USED SYSTEM RAM: free -m (second column) = 2300 MB
# TOTAL APACHE RAM LOAD: (htop sum RES column) 1900 MB
# AVG APACHE RAM LOAD: htop (filter httpd, average RES column = loaded in physical RAM) = 87MB

There you go, easy, right? Figuring out RAM resources can be complicated, but to simplify the process start with the built-in Linux free command. I also suggest installing htop, which provides a simpler interface to see what is running on your server. You will want to do this on your live server under normal load if possible.

Using free -m from the Linux command line will tell you the general high-level overview of your server’s memory status. You want to know how much is installed and how much is in use. In my case I have 7400MB of RAM and 2300MB was in use.

Next you want to figure out how much is in use by Apache and how much an average web connection is using per request. Use htop, filter to show only the httpd processes, and do math. My server was using 1900MB for the httpd processes. The average RAM per process was 87MB.
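If you would rather not eyeball htop, a rough way to get the same numbers is to sum the resident set size (RSS) that ps reports for the httpd processes. This is a sketch shown against sample values so the math is visible; on a live box you would pipe ps into it as noted in the comment.

```shell
# Sum and average the RSS column (in KB) that "ps -C httpd -o rss=" prints.
measure_httpd_ram () {
    awk '{ total += $1 }
         END { if (NR) printf "%d MB total, %d MB avg across %d processes\n",
                       total / 1024, total / 1024 / NR, NR }'
}
# On the live server:
#   ps -C httpd -o rss= | measure_httpd_ram
# Against sample values (three workers at roughly 87MB each):
printf '89088\n89088\n89088\n' | measure_httpd_ram
```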

You can now figure out how much RAM is used by “non-web stuff” on your server. Of the 2300MB of used RAM, Apache was using up 1900MB. That means my server uses about 400MB for general system overhead and various background processes like my system-level backup service. That means on a “clean start” my server should show about 7000MB available for web work. I can verify that by stopping Apache and running free -m after the system “rests” for a few minutes to clear caches and other stuff.

Since I will have 7000MB available for web stuff, and my current WordPress configuration, PHP setup, and other variables come out to about 87MB for each web session, I can fit about 80 web processes into memory at one time before all hell breaks loose.

Since I don’t like to exhaust memory and I’m a big fan of the 80/20 rule, I set my maximum web processes to 64: 7000MB / 87MB ≈ 80, and 80 × 0.8 = 64.

That is where you want to set your ServerLimit and MaxRequestWorkers.
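Scripted, the arithmetic above looks like this (the numbers are the measurements from this post; substitute your own free and htop readings):

```shell
AVAILABLE_MB=7000   # RAM left for Apache after ~400MB system overhead
AVG_PROC_MB=87      # average RES per httpd process (from htop)
HEADROOM_PCT=80     # the 80/20 rule: only plan on using 80%

MAX_FIT=$(( AVAILABLE_MB / AVG_PROC_MB ))        # 80 processes fit in RAM
MAX_WORKERS=$(( MAX_FIT * HEADROOM_PCT / 100 ))  # back off to 64
echo "ServerLimit $MAX_WORKERS"
echo "MaxRequestWorkers $MAX_WORKERS"
```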


MaxConnectionsPerChild

This determines how long those workers will “live” before they die off. Any worker will accept a request to send something out to your site visitor. When it is done it doesn’t go away. Instead it tells Apache “hey, I’m ready for more work”. However, every so often one of the things that is requested breaks. A bad PHP script may be leaking memory, for example. As a safety valve Apache provides the MaxConnectionsPerChild directive. This tells Apache that after the child has served this many requests it should die. Apache will start a new process to replace it. This ensures any memory “cruft” that has built up is cleared out should something go wrong.

Set this number too low and your server spends valuable time killing and creating Apache processes. You don’t want that. Set it too high and you run the risk of “memory cruft” building up and eventually having Apache kill your server with out-of-memory issues. Most system admins try to set this to a value that has each child reset about once every 24 hours. This is hard to calculate unless you know your average objects requested every day, how many processes served those objects, and other factors; HTTP/1.1 versus HTTP/2 can come into play, not to mention fluctuations like weekend versus weekday load. Most system admins target 1000 requests. For my server load I am guessing 2400 requests is a good value, especially since I’ve left some extra room for memory “cruft”.
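For illustration, with a made-up 150,000 requests per day (pull your real number from the access log) spread across the 64 workers configured above, the same back-of-the-envelope math lands close to the 2400 I chose:

```shell
# Hypothetical traffic numbers -- get yours with something like:
#   wc -l /var/log/httpd/access_log
REQUESTS_PER_DAY=150000
AVG_PROCS_PER_DAY=64
echo "MaxConnectionsPerChild $(( REQUESTS_PER_DAY / AVG_PROCS_PER_DAY ))"
```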


HTTPS On Amazon Linux With LetsEncrypt

In order to provide faster and more secure connections to the Store Locator Web service we have added https support through Sucuri.   Adding https will allow us to take advantage of SPDY and HTTP2 which are the latest improvements to web connection technology.   There are many reasons to get your servers onto full https support.   As we learned it isn’t a one-click operation, but without too much additional effort you can get your servers running on Amazon Linux with a secured connection.   Here are the cheat sheet notes based on our experience.

EC2 Server Rules

With EC2 you will want to make sure you set your security group rules to allow incoming connections on port 443.  By default no ports are open; you will have already added port 80 for web support.   Make sure you go back and add port 443 as an open inbound rule.

Apache SSL Support

Next you need to configure the Apache web server to handle SSL connections.   The easiest way to get started is to install the mod_ssl library which will create the necessary ssl.conf file in /etc/httpd/conf.d/ssl.conf and turn on the port 443 listener.

# sudo service httpd stop
# sudo yum update -y
# sudo yum install -y mod24_ssl

Get Your Let’s Encrypt Certificate

This is more of a challenge if you don’t know where to start. Part of the issue is Amazon Linux runs Python 2.6 and Let’s Encrypt likes Python 2.7. Luckily there has been progress on getting this working so you can cheat a bit.

# git clone https://github.com/letsencrypt/letsencrypt
# cd letsencrypt
# git checkout amazonlinux
# sudo ./letsencrypt-auto --agree-dev-preview --server https://acme-v01.api.letsencrypt.org/directory certonly -d yourdomain.com -d www.yourdomain.com -v --debug

You may get some warnings and other messages but eventually you will get an ANSI-mode dialogue screen (welcome to 1985) that walks you through accepting terms and the certification. Answer the questions and accept your way to a new cert.

Your certs will be placed in /etc/letsencrypt/live/<your domain>/; remember this path as you will need it later.

Update SSL.conf

Go to the /etc/httpd/conf.d directory and edit the ssl.conf file.

Look for these 3 directives (SSLCertificateFile, SSLCertificateKeyFile, and SSLCertificateChainFile) and change them to point to the cert.pem, privkey.pem, and chain.pem files.
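With a stock mod_ssl layout the edited lines end up looking something like the following; the yourdomain.com directory is a placeholder for whatever Let’s Encrypt created under /etc/letsencrypt/live/:

```apache
SSLCertificateFile /etc/letsencrypt/live/yourdomain.com/cert.pem
SSLCertificateKeyFile /etc/letsencrypt/live/yourdomain.com/privkey.pem
SSLCertificateChainFile /etc/letsencrypt/live/yourdomain.com/chain.pem
```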


Restart Apache & Get Secure

Now restart Apache and check by surfing to your site with https://

# service httpd start

You may need to update various settings on your web apps, especially if you use .htaccess to rewrite URLs with http or https.
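As an example, a common .htaccess pattern (not necessarily your existing rules) for forcing plain http traffic over to https looks like this:

```apache
RewriteEngine On
RewriteCond %{HTTPS} !=on
RewriteRule ^(.*)$ https://%{HTTP_HOST}/$1 [R=301,L]
```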


Fixing VVV svn cleanup Invalid cross-device link messages

Ran into a unique situation while updating my VVV box after a weekend of WordPress Core and plugin development at WordCamp US this past weekend.   Today, after the formal release of WordPress 4.4, I needed to update the code in my WordPress trunk directory on the VVV box.  Since I have other things in progress I didn’t want to take the time to reprovision the entire box.  Though, as it turns out, that would have been faster.

The issue arose when I tried to do the svn up command to update the /srv/www/wordpress-trunk directory and make sure I was on the latest code.   The command failed, insisting that a previous operation was incomplete.  Not surprising, since the connectivity at the conference was less-than-consistent.    svn kindly suggested I run svn cleanup.  Which I did.  And was promptly met with an “Invalid cross-device link” error when it tried to restore hello.php to the plugin directory.

The problem is that I develop plugins for a living.   As such I have followed the typical VVV setup and have linked my local plugin source code directory to the vvv plugin directory for each of the different source directories on that box.    I created the suggested Customfile on my host system and mapped the different directory paths.     On the guest box, however, the system sees this mapping as a separate drive.  Which it is.  And, quite honestly I’m glad they have some security in place to protect this.  Otherwise a rogue app brought in via the Vagrant guest could start writing stuff to your host drive.   I can think of more than one way to do really bad things if that was left wide-open as a two-way read-write channel.

VVV Customfile Cross Device Maker

The solution?

Comment out the mapping in Customfile on the host server.  Go to your vvv directory and find that Customfile.  Throw a hashtag (or pound sign for us old guys) in front of the directory paths you are trying to update with svn.  In my case wordpress-trunk.

Run the vagrant reload command so you don’t pull down and provision a whole new box, but DO break the linkage to the host directory and guest directory.

Go run your svn cleanup and svn up on the guest to fetch the latest WP code.

Go back to the host, kill the hashtag, and reload.
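Scripted, the whole dance looks roughly like this. The synced_folder line is illustrative (match whatever mapping your real Customfile contains), and the vagrant and svn steps are shown as comments since they only make sense against a real box:

```shell
# Stand-in Customfile with an illustrative wordpress-trunk mapping.
cat > Customfile.demo <<'EOF'
config.vm.synced_folder "~/code/my-plugin", "/srv/www/wordpress-trunk/wp-content/plugins/my-plugin"
EOF

# 1. Throw the hashtag in front of the wordpress-trunk mapping (keeps a backup):
sed -i.bak '/wordpress-trunk/ s/^/# /' Customfile.demo

# 2. vagrant reload      (breaks the host<->guest link without re-provisioning)
# 3. On the guest:       cd /srv/www/wordpress-trunk && svn cleanup && svn up
# 4. Kill the hashtag by restoring the file, then reload once more:
mv Customfile.demo.bak Customfile.demo
```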


Hope that saves you an extra 20 minutes surfing Google, or your favorite search service, for the answer.



Boosting WordPress Site Performance : Upgrade PHP

As with every single WordCamp I’ve attended there is something new to be learned no matter how much of a veteran you are.   My 5th WordCamp at WordCamp US 2015 was no different.    There are a lot of things I will be adding to my system admin and my development tool belt after the past 48 hours in Philadelphia.

Today’s update that was just employed on the Store Locator Plus website:   Upgrading PHP.

Turns out that many web hosting packages and server images, including the Amazon Linux Image, run VERY OLD versions of PHP.    I knew that.   What I didn’t know was the PERFORMANCE GAINS of upgrading even a minor version of PHP.    PHP 5.6 is about 25% faster than PHP 5.3.    PHP 5.3 was the version I was running on this site until midnight.

WP Performance on PHP

The upgrade process?  A few dozen command-line commands, testing the site, and restoring the server name configurations from the Apache config file which the upgrade process auto-saved for me.  EASY.

What about PHP 7?   That is 2-3x faster.  Not 2-3%.  100-200%.   WOW!    As soon as Amazon releases the install packages for their RHEL derivative OS it will be time to upgrade.


If you are not sure which version your web server is running (it can be different than the command-line version on your server) you can find that info in the Store Locator Plus info tab.


The take-away?   If you are not running PHP 5.6, the latest release of PHP prior to PHP 7, get on it.  One of the main components of your WordPress stack will run a lot faster and have more bug fixes, security patches, and more.


AWS gMail Relay Setup

After moving to a new AWS server I discovered that my mail configuration files were not configured as part of my backup service on my old server. In addition my new server is using sendmail instead of postfix for mail services. That meant re-learning and re-discovering how to setup mail relay through gmail.

Why Relay?

Cloud servers tend to be blacklisted. Sure enough, my IP address on the new server is on the Spamhaus PBL list. While Amazon allows for elastic IP addresses (a quasi-permanent IP address that acts like a static IP) which can be added to the whitelist on the Spamhaus PBL, it is not the best option. Servers change, especially in the cloud. I find the best option is to route email through a trusted email service. I use Google Business Apps email accounts and have one setup just for this purpose. Now to configure sendmail to re-route all outbound mail from my server to my gmail account.

Configuring Amazon Linux

Here are my cheat-sheet notes about getting an Amazon Linux (RHEL flavor of Linux) box to use the default sendmail to push content through gmail.

Install packages needed.

# sudo su -
# yum install cyrus-sasl ca-certificates sendmail make

Create your certificates

This is needed for the TLS authentication.

# cd /etc/pki/tls/certs
# make sendmail.pem
# cd /etc/mail
# mkdir certs
# chmod 700 certs
# cd certs
# cp /etc/pki/tls/certs/ca-bundle.crt /etc/mail/certs/ca-bundle.crt
# cp /etc/pki/tls/certs/sendmail.pem /etc/mail/certs/

Setup your authinfo file

The AuthInfo entries start with the relay server host name and port.

U = the AWS server user that will be the source of the email.

I = your gmail user name; if using business apps it is likely not an @gmail.com address

P = your gmail email password

M = the method of authentication, PLAIN will suffice

# cd /etc/mail
# vim gmail-auth

AuthInfo:smtp-relay.gmail.com "U:ec2-user" "I:you@yourdomain.com" "P:yourpassword" "M:PLAIN"
AuthInfo:smtp-relay.gmail.com "U:apache" "I:you@yourdomain.com" "P:yourpassword" "M:PLAIN"
AuthInfo:smtp-relay.gmail.com:587 "U:ec2-user" "I:you@yourdomain.com" "P:yourpassword" "M:PLAIN"
AuthInfo:smtp-relay.gmail.com:587 "U:apache" "I:you@yourdomain.com" "P:yourpassword" "M:PLAIN"

# chmod 600 gmail-auth
# makemap -r hash gmail-auth < gmail-auth

Configure Sendmail

Edit the sendmail.mc file and run make to turn it into a configuration file.  Look for each of the entries noted in the comments.  Uncomment the entries and/or change them as noted.    A couple of new lines will need to be added to the file.   I add the new lines just before the MAILER(smtp)dnl line at the end of the file.

Most of these exist throughout the file and are commented out.   I uncommented the lines and modified them as needed so they appear near the comment blocks that explain what is going on:

# vim /etc/mail/sendmail.mc
define(`SMART_HOST', `smtp-relay.gmail.com')dnl
define(`confAUTH_OPTIONS', `A p')dnl
define(`confCACERT_PATH', `/etc/mail/certs')dnl
define(`confCACERT', `/etc/mail/certs/ca-bundle.crt')dnl
define(`confSERVER_CERT', `/etc/mail/certs/sendmail.pem')dnl
define(`confSERVER_KEY', `/etc/mail/certs/sendmail.pem')dnl

Add these lines to the end of sendmail.mc, just above the first MAILER()dnl entry:

define(`RELAY_MAILER_ARGS', `TCP $h 587')dnl
define(`ESMTP_MAILER_ARGS', `TCP $h 587')dnl
FEATURE(`authinfo',`hash -o /etc/mail/gmail-auth.db')dnl

If you are using business apps you may need a few more settings in sendmail.mc to make the email come from your domain and to pass authentication based on your Gmail relay settings.
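The specific lines were lost from the original post. As an assumption, the standard sendmail masquerade directives for this look like the following, with yourdomain.com as a placeholder:

```m4
FEATURE(`masquerade_envelope')dnl
MASQUERADE_AS(`yourdomain.com')dnl
```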


Build the sendmail.cf configuration file from the helper and restart sendmail:

# make
# service sendmail restart

Configure Gmail Services

This is for business apps users, you need to turn on relay.

Go to “manage this domain” for your business apps account.

Go to “Google Apps”.

Click on “Gmail”.

Click “advanced settings”.

Find the “SMTP relay service” entry.    Add a  new entry.

“Only addresses in my domain”, require SMTP authentication, and require TLS encryption all need to be selected.

Give it a name and save the entry.

Save again at the bottom of the settings page.


Windows Azure Virtual Machines, Not Ready For Prime Time

Just last month, Microsoft announced that their Windows Azure Virtual Machines were no longer considered a pre-release service.  In other words, that was the official notification from Microsoft that they feel their Virtual Machines offering is ready for enterprise class deployments.   In fact they even offer uptime guarantees if you employ certain round-robin and/or load balancing deployments that help mitigate the downtime in your cloud environment.

Essentially the Virtual Machines offering on Windows Azure equates to a virtual dedicated server that you would employ from most hosting companies.  The only difference with the Windows Azure platform, like most cloud-based offerings, is that you need to serve as your own system admin.   This is not web hosting for business owners but for tech geeks.    In other words, it works perfectly for guys like me.

Or so I thought.

Different Shades of White

As I learned tonight, there are differences between the various cloud offerings that are not easy to tease out of the hundreds of pages of online documentation touting how awesome a service provider’s cloud services are.   Sure, there are the metrics.  You can compare instance sizes in terms of disk space, CPU, and bandwidth.   You can compare pricing and the relative costs of operating your server on each of the cloud platforms.    You can even get the background information on the company providing the virtualized environment, getting some clue (though never a clear picture) of where the servers are physically located, how many servers they have, how secure the environment is, and more.

At the end of the day they all look very similar.  Sure there are discrete elements you can point to on each comparison spreadsheet you throw together, but in the end the differences are relatively minor.   Their pricing is similar.   The network and server room build-outs are similar.   The support offerings look similar.     When all is said-and-done you end up making a choice based on price, the reputation of the company, the quality of the online documentation, and the overall user experience (UX) that is presented during your research.

After a lot of research, and with quite a bit of experience with Amazon Web Services, all the cloud based offerings were very similar.   Different shades of white.     In the end I decided to try the Microsoft Windows Azure offering.    Microsoft has a good reputation in the tech world, they are not going anywhere, and as a Microsoft Bizspark member I also have preview access and discount services.

My decision to go against the recommendations I’ve been making to my clients for years, “Amazon was one of the first, constantly innovates, and is the leader in the space”, was flawed.    Yes, I tested and evaluated the options for months before making the move.   But it takes an unusual event to truly test the mettle of any service provider.

Breaking A Server

After following the advice of a Microsoft employee that was presented in a Windows Azure forum about Linux servers, I managed to reset the Windows Azure Linux Agent (or WALinuxAgent) application.    No, I did not do this on a whim.   I needed to install a GUI application on the server and followed the instructions presented.  It turns out that Microsoft has deployed a custom application that allows their Azure management interface to “talk” to the Linux server.  That same application DISABLES the basic NetworkManager package on CentOS.  To install any kind of GUI application or interface you must disable WALinuxAgent, enable NetworkManager, install, disable NetworkManager, then re-enable WALinuxAgent.  The only problem with the instructions that are published in several places is they omit a very important step.  While connected with elevated privileges (sudo or su) you must DISABLE the WALinuxAgent (waagent) provisioning so that it does not employ the Windows Azure proprietary security model on top of your installation.  If you do not do this and you log out of that elevated privs session, you will NEVER have access to an elevated privs account again.

Needless to say, you cannot keep an enterprise level server running in this state.  Eventually you need to install updates and patches for security or other reasons.

As I would learn, there is ZERO support on recovering from this situation.

Support versus support

In the years of working with Amazon Web Services and hosting a number of cloud deployments on their platform, I had come accustomed to being able to gain access to support personnel that actually TRY to help you out.   They often go above-and-beyond what is required by contract and try to either get you back on track through their own efforts of at least provide you with enough research and information that you can recover from any issues you have with limited effort.    Amazon support services can be pricey, but having access to not just the level one but also higher level techs is an invaluable resource.

The bottom line is that Microsoft offers NO support services for their Linux images, even those they provide as “sanctioned images”, beyond making sure the ORIGINAL image is stable and that the virtual machine did not crash.    Not only do they not have any apparent means to elevate support tickets, as it turns out there is NO SUPPORT if you are running a Linux image.

Clearly Microsoft does not put this “front and center” on ANY of their Windows Azure literature.  In fact, just the opposite.  Microsoft has made an extended effort in all their “before the purchase” propaganda to try and make it sound like they EMBRACE Linux.   They go out of their way to make you feel like Linux is a welcome member of their family and that they work closely with multiple vendors to ensure a top-quality experience.

Until you have a problem.   At which point they wash their hands, as is evident in this support response along with a link to the Knowledgebase article saying “Linux.  Not our problem.”:

Hello Lance, I understand your concerns and frustration, but Microsoft does not offer technical support for CentOS or any other Linux OS at this time.

 Please, review guidelines for the Linux support on Windows Azure Virtual Machines:

No Azure Support

Other Issues

While the lack of support and the inability to regain privileged user access to my server is the primary concern that has me on the path of choosing a new hosting provider, there have been other issues as well.

A few times in the past several months the WordPress application has put Apache in a tailspin.  This consumes the memory on the server.   While that is not necessarily an issue with Windows Azure, the fact that the “restart virtual image” process DOES NOT WORK at least 50% of the time IS a big issue.   Windows Azure is apparently overly-reliant on that dreaded WALinuxAgent on the server.   If it does not respond, because memory is over-allocated for example, the server will not reboot.   The only thing you can do is press the restart button, wait 15 minutes to see if it happened to get enough memory to catch the restart command, and try again.  Ouch.

The Azure interface is also not as nice as I first thought.   While better than the original UX at Amazon Web Services, it is overly simplistic in some places and downright confusing in others.  Try looking at your bill.  Or your subscription status.   You end up jumping between seemingly disjointed sites.    Forget about online support forums.  Somehow you end up in the MSDN network, far removed from your cloud portal.    I often find myself with a dozen windows open so I can keep track of where I was or what I need to reference, lest I lose my original navigation path and have to start over.   Not to mention the number of times that this site-to-site hand-off fails and your login is suddenly deemed “invalid” mid-session.

Azure Session Amnesia

Moving Servers

So once again, I find myself looking for a new hosting provider. Luckily, from my recent move to Windows Azure, I not only have VaultPress available to make it easy to relocate the WordPress site but also Crash Plan Pro to move all the “auxiliary” installation “cruft” along with it.

Where will I go?

In my mind there are only two choices for an expandable cloud deployment running Linux boxes. Amazon Web Services or Rackspace. I’ll likely end up with Amazon again, but who knows… maybe it is time to try the legendary support at Rackspace once again. We’ll see. Stay tuned.


Hosting WordPress

I get a lot of questions about where to host a WordPress site.   While I’ve not found the “perfect host for all people”, I have learned a few things about who NOT to use, who I use, and who I *think* will be good to use based on your needs.

Let’s start with who to stay away from:


DO NOT host with GoDaddy.

Besides my personal issues with their support of national policies that hamper an open Internet, they also have notable technical issues.    Just last fall they mis-configured a router and took tens-of-thousands of businesses offline for several days.  No, it was not Anonymous as first reported.  It was incompetence.  Even if you were not hosting at GoDaddy but had names served by the GoDaddy DNS service your site could have been impacted. My site was offline for several days.

The bad part was not that the sites went offline.  That happens.  It shouldn’t, but it does.   The thing that made GoDaddy suck beyond normal suck-itude, was the fact that after several attempts to contact them they ignored ALL communication.  No offer of a credit for the down time.  Nothing other than a blanket generic email saying “our stuff broke, we fixed it”.   Thanks GoDaddy.  My site, as well as thousands of others lost hundreds, if not thousands, of dollars in revenue and your only response was a generic bulk email saying “my bad”.

Even more troublesome is the fact that I’ve been doing business with GoDaddy for over a decade, was a reseller for years, and brought them hundreds of name service and hosting clients over the years.  They can’t even take 2 seconds to respond with a personal email.  Sad.

Enough about the stories of how bad their service is.  The big issue, and the main reason I do not recommend them for hosting, is the fact that in 8-of-8 paid support requests where the client was having issues and was hosted at GoDaddy, we traced the problem to the GoDaddy hosting in EVERY CASE.   Permissions are configured differently on different servers.  IP addresses are shared en-masse, which makes geocoding lookups essentially useless.  Servers time out when overloaded, breaking the AJAX listener.

In short, if you want your WordPress stuff to work, do not host on GoDaddy.


Do not host with Liquidweb.

I used them for years.  I rented, and still do, a dedicated server there.   I have used their virtual private server and have brought many clients to Liquidweb.  For years their service and prices were above par.    In the past 4 years it has been getting worse every year.

3 years ago, they crashed my dedicated server with a hard fault.  It took them 5 days to get it back online, for a multi-million-dollar software consulting firm.  They had a team working on it, which was good, but it was obvious their claims of “warm server” and “4 hour maximum down time” were false.   They had to order new hardware, wait for it to arrive, configure it, then move our stuff.   After all that the new server was NOT configured the same way which incurred weeks of “oh, that’s broke too”.

This past fall they crashed a new VPS server that was hosting my account.   It also crashed several client accounts.    All the sites on that server were offline for days.   They eventually got it fixed and I was given access to a top-level support rep, but they never did offer any form of compensation for the down time.    Again, the newly configured server was not configured the same way as the old server and stuff never worked right after that.    When I finally showed them that their server was not limiting or allocating resources properly they told me “your site is too big for the server”.  Really?   I moved it and the new server, which is smaller, runs at less than 10% maximum CPU usage, 25% peak memory usage, and 1% disk I/O usage.

They also made access to any real support basically impossible.   They put tickets in a generic pool and let any tech resolve them. Sometimes you get a guy with a clue, most times not.    I should not be educating my server admin on how to admin a server.

Microsoft Azure

This is who I use today.    I have several virtual machines running there.    I like the simple interface much more than the Amazon Web Services interface.  It is also slightly less expensive than Amazon services.   However you must be a tech geek (or know one) to use these services.  It is much like running your own server.   If you are not a server admin this is not for you.

If you ARE a server admin, or have one on staff, then you may qualify for Microsoft Bizspark.  This will give you free (or near-free) Azure services for several years.   You can also scale up or down the server as needed with relative ease.    If you are comfortable configuring your base operating system (I use CentOS), installing PHP, MySQL, WordPress and the other components, and managing security then Azure is a fully flexible and expandable platform for a WordPress site.

This type of setup is only for uber geeks or companies that employ them.


ClickHost

I have not used ClickHost myself, however I spoke to many people at WordCamp Atlanta and the general word about ClickHost was that they get WordPress hosting.   They seem like nice people and do seem to go the extra mile to make sure you will be taken care of.    They give you a pre-configured hosting account with the WordPress goodies installed.  Even better, they are very affordable.  A basic setup can cost you as little as $50/year.

For my clients that are cost-aware I will be recommending ClickHost.


RackSpace

If you want a site that never crashes, use RackSpace.  You will pay top-dollar but they have very responsive support and know how to manage servers.  I’ve not used them personally but I know several clients that have used them in the past.  Their support is top-notch and they know their stuff (or have access to someone that does).   They are not cheap, but if you want high performance and high reliability this is a good option.    I’m not familiar with their newer virtualized offerings, which cost less, but I have to imagine they are good enough to carry the RackSpace name and reliability image.

Posted on

Building A Site for Digital Content Sales

Today I received an email from a friend asking if I could help someone he knows build a website.   The request is simple: help build a website that connects to social media and allows registered users to download a paper he has written, keeping track of those registrations as leads.

The immediate answer is easy.

Use WordPress.

Ok, so maybe too simple an answer.   WordPress is way beyond a simple blogging platform.  It is a complete platform for building websites and even web applications.   Take the website, glue on the right theme, add a few plugins, configure.  Done.

Far easier than 20 years ago when I built my first web engine for an ecommerce site, writing thousands of lines of Perl code. It just about took a PhD in computer science to build a site like that back then.  Today, WordPress… click, click, type some settings, click… write some content… done.   But how do you get there?

Step 1: Pick A Host

This has come up TWICE today, so I’ll tell you who I use then tell you who I would and would not go with for most sites.

First, what I use.   Microsoft.  Yup, them.   Running a Linux server.    CentOS 6-something.  In a virtual dedicated server setup.  I know, I know… Microsoft and Linux?   Yeah.  And it didn’t even burst into flames within moments of doing the install.    So how does that work?     Microsoft has a service they call Windows Azure.   Don’t let the name confuse you.    “Azure”, as I like to call it, is basically the Microsoft equivalent of the Amazon Web Services environment.   In other words “cloud computing”.   It is NOT just Windows.

A Slight Diversion : Cloud Servers

What is the cloud?  A fancy name for remote computers and web services.  Really no different than rented servers from any other ISP, but today the term “cloud” tends to refer to any online service that gives you a simple web interface and programming APIs to control the resource.  This includes web hosting and web servers.   Just like the web servers you’d rent from an “Internet Presence Provider” (IPP) 5 years ago.   The only real difference here is they tend to put an emphasis on using virtual machines, just like those you run on a desktop with VMware or VirtualBox.

That said, there are basically the same options with “cloud computing”, like “the cloud” provided by Amazon and Microsoft, as there are with renting a server.  You can get a website-only plan, a shared hosting plan, and a dedicated hosting plan.   This is sometimes called something different like “virtual private server” and “virtual dedicated server”.

In my opinion, if you are doing cloud computing then you really should be only looking at Virtual Dedicated Servers.  Otherwise just eliminate the confusion of “cloud computing” and go with a standard host.

If your website is going to be HUGE and you are going to get tens-of-thousands of unique visitors (uniques) every day or will have highly variable traffic with peaks of tens-of-thousands of uniques/day, then investigate and learn cloud hosting and dedicated cloud servers.

For the rest of you…

Back To Hosting

Ok, so I use a  Windows Azure virtual dedicated server running Linux.  But I’m a tech geek.  I know system security, system administration, and coding.  I can manage my server without any issues.

However, for a typical hosting company, where you may need some assistance and do NOT need your site to carry a super-heavy load, there are other options.    Before I make a recommendation, here are some companies I would stay away from for various reasons.

Do NOT use:

  • GoDaddy.   Way too many people have problems with GoDaddy hosted sites.   I cannot tell you how many broken client and customer sites were fixed when they left GoDaddy.    I also cannot tell you how incompetent it was for GoDaddy to take down MILLIONS of sites for several DAYS because they could not configure a network router.   Then they refused any form of compensation to anyone.  I don’t even host with GoDaddy, but my domain name is registered there and they took me offline for days.   This is NOT the first time this has happened in the past 12 months.   Not to mention most of their support staff is clueless.

  • LiquidWeb.   They  used to be one of my favorites.  As they have grown in size they too have grown in incompetence.  They cannot run a shared server properly to save their life.   I often found myself training their support staff.   They too have crashed my dedicated hardware, my shared server, and those of several customers for days-on-end.  No compensation and no apologies in most of those cases.
  • 1-And-1.   I’ve had no personal experience other than through my clients.  Mis-configured network routing.  Inability to fix blatant DNS issues.  Crashed servers.  Less performance than advertised.  Difficult to get in touch with competent support.  I’ve been paid good money to PROVE that 1-and-1 was the source of several major problems for clients, before 1-and-1 would finally admit the issue was theirs and then take weeks to address it.

Ok… so you know who to stay away from.   Who to use?

Well there are 2 companies I don’t have personal experience with but have heard good things about.  The first I know only through casual conversation and what other people said about them.   The other is one that many clients, with deep pockets, have used and swear by.   In either case I think you are in good hands.

  • ClickHost.  They sponsored WordCamp Atlanta.  Already bonus points there.  They KNOW WordPress and love it.   If you are doing a WordPress site they seem like a perfect fit.  Reasonably priced and WordPress knowledgeable.  Plus they just seem like cool people.

  • RackSpace.  They are the “100% guaranteed up time” people.   And from what I hear they NEVER go offline.   They also have top-notch support.  And you pay for it.   Probably the most costly of the hosts out there, but if your site can NEVER go down, they have a reputation for pulling that off.   Unless you screw it up yourself.  Then they try to help you fix it.

Step 2: Install WordPress

If you use someone like ClickHost, this is a few clicks and a couple of web-form questions away from being online.   Easy.

If you “go on your own” then you download WordPress, set up the MySQL database, and install via web forms.  Once you get MySQL set up, which is the time-consuming part of the “famous 15 minute install”, the WordPress install itself really is just 15 minutes.  Very cool.
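If you are doing the MySQL part by hand, it boils down to a few statements.  This is a minimal sketch; the database name, user, and password below are placeholders you should change:

```sql
-- Create the WordPress database and a dedicated user (placeholder names).
CREATE DATABASE wordpress DEFAULT CHARACTER SET utf8;
CREATE USER 'wpuser'@'localhost' IDENTIFIED BY 'change-me';
GRANT ALL PRIVILEGES ON wordpress.* TO 'wpuser'@'localhost';
FLUSH PRIVILEGES;
```

Feed those same names into the WordPress install form and the rest really is just clicking through.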

Step 3: Themes

The harder part now is selecting a theme.    Themes are the skin of the site.  How it looks. There are tens-of-thousands of them online.  There are dozens within the free themes directory on WordPress.  There are a lot more out there in various online stores.  Some are free, some are paid.

But one thing most people overlook?   Themes are not just a pretty face.   MOST come with built-in functionality and features.  Think of it as a skin plus some cool functional elements added in.  While not all themes add functions or features to the site, many do.  Especially premium ones.

It is often easier to find a theme that does 90% of what you want and then add a few plugins.    Finding a theme that LOOKS cool but does JUST that, then adding 20 plugins, is often a more difficult route.   If you follow my other threads you’ll know why.  Many plugins in the free directory at WordPress are abandoned.  Some don’t work well.  Others just don’t work.   Don’t let me scare you, plenty are GREAT and work perfectly.  You just need to “separate the wheat from the chaff” and that can take some time.

My recommendation?  Start with WooThemes.  I’ve found they have the best quality themes out there and, more importantly, they actually ANSWER SUPPORT QUESTIONS.   Many themes, including premium ones, skip that latter point, which can be critical in getting a site online.      Who to avoid at all costs?  Envato’s Theme Forest.  I’m sure they have a few good themes in the hundreds they promote, but the chances of finding those few are just too low.   Of the 10 “your plugin is broken” messages I get every month, 9 of them (or 10) are from someone using a Theme Forest theme that is horribly written and just plain breaks everything in its way.  Including plugins.   DO NOT use Theme Forest stuff.

Ok.  So you’ve got a theme, it does what you want and/or looks cool.       Now what?

Step 4: Plugins

Go find a few plugins that do what you want.  Start in the free WordPress plugins directory but widen your search to the premium plugins.  Unfortunately there are not a lot of good premium plugin sites out there.  However many of the better free plugins on the WordPress directory have premium upgrades.

Again, in the  3rd party market stay away from Envato’s Code Canyon.   While they offer a few good plugins there are far too many bad ones in the mix.    Not to hammer Envato too hard, they have a good idea but they SUCK at quality control.  They are obviously just playing a numbers game and going for volume over quality.

Got It, But For My Site?

Now that you know the components, here is where I would start to build a site like the one described initially.

1) Host with ClickHost.  Small host package is probably fine.

2) Install WordPress 3.5.1 (or whatever the latest version is today).

3) Install WooCommerce as a plugin.  It is in the free directory and you can find it right from the WordPress admin panel by searching “woocommerce” under plugins.

4) Go to WooThemes and find a WooCommerce compatible theme that you like.

5) Go to WooThemes and look at the WooCommerce extensions.  There are several for doing subscriptions and digital content delivery.  They are premium add-ons but relatively inexpensive.

6) Add JetPack to your site.  It is a WordPress plugin from the guys that build WordPress.   It adds a bunch of cool features that you can turn on/off without much effort.  Mostly the social sharing and publishing tools are what we are looking for here.

7) Add VaultPress.  Also from “the WordPress people”. This is your site backup.  You want this.  Trust me, the $15/month is worth it the first time you break your site or it gets hacked.

I also strongly recommend adding Google Authenticator so you have 2-step authentication for your site.  It reduces the chances of someone hacking your password from the web interface.   It is not critical to functionality, but I do recommend it for the added security.

So that is how I would get started.  I’ve not recommended specific themes or WooCommerce extensions because they change frequently and there may be something that better suits your particular needs.

Good luck and happy blogging!

Posted on

Diagnosing “savemail: cannot save rejected email anywhere”

We recently ran into this message on one of our development servers.   There are a number of reasons this may happen and finding the right solution means finding the cause of the error.  These steps will help you isolate the cause of the error so you can start tracking down the proper solution.  In our case an errant application was not sending the from: field in the mail header thus causing the message to fail the basic mail format checks.

Checking Aliases

First make sure you have the following entries in /etc/aliases:

# Basic system aliases -- these MUST be present
MAILER-DAEMON:    postmaster
postmaster:    root

If these entries are present, try running these commands:

# sendmail -bv MAILER-DAEMON
# sendmail -bv postmaster

It should come back immediately with a message like the one below:

deliverable: mailer relay, host [], user

If it does not, rebuild the aliases database by running the newaliases command:

# newaliases

Forcing A Resend With Logging

Failed messages remain in the mail queue directory for examination by the system administrator. Sendmail renames the header of the queued message from qf* to Qf*, making it easy to identify these messages in your mail queue.  You can easily list the failed messages with the following mailq command:

# mailq -qL
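If you prefer to script it, the qf-to-Qf renaming convention makes failed messages easy to find programmatically.  A minimal sketch, assuming the typical sendmail queue path (adjust for your system):

```python
import os

# Typical sendmail queue directory; adjust for your system.
QUEUE_DIR = "/var/spool/mqueue"

def failed_message_ids(queue_dir=QUEUE_DIR):
    """Return message IDs of failed messages in the queue.

    Sendmail renames a failed message's control (header) file from
    qf<message_id> to Qf<message_id>, so we look for the Qf prefix.
    """
    ids = []
    for name in os.listdir(queue_dir):
        if name.startswith("Qf"):
            ids.append(name[2:])  # strip the "Qf" prefix to get the message ID
    return sorted(ids)
```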

To diagnose, locate the offending message ID in the log (/var/log/maillog) or by using the mailq -qL command.

Rename the matching Qf<message_id> file to qf<message_id>, and execute the following command:

# sendmail -v -qI<message_id> -d11

The Problem Revealed

You should now have a detailed log file indicating what the source of the problem was.  In our case we see the From: line in the mail header is blank:

>>> MAIL From:<>
501 Syntax error in arguments
Data format error

Hope that helps. Good luck!

Posted on

web.config Inheritance in IIS


A couple of notes on IIS and how it handles virtual directories/applications, web.config inheritance, and ASP.Net.


There is a configuration file that is automatically inherited by every application’s web.config. This configuration file is machine.config and it defines the server’s schema for all of its web applications.

The root web.config file is also a server configuration file. It resides in the same directory as machine.config and is used to define system-wide configuration.

Then you have the website-specific configuration, also named web.config. In the website’s root directory, web.config works much like an .htaccess file.

Each directory in an application may have its very own web.config file. Each virtual directory may also have its own web.config, and each virtual application has its own web.config file. Each of these files inherits its parent’s web.config. This is done so you can define roles in the parent web.config file and have them enforced throughout the website.

Okay, a virtual directory is the Windows way of performing a soft link. It is not reflected in the file system; it is only reflected in IIS. An example:

Website = c:/inetpub/wwwroot/mysite/

Files = c:/users/public/documents/

In IIS you can create a virtual directory c:/inetpub/wwwroot/mysite/sharefiles/ that points to c:/users/public/documents/.

You can actually add a virtual folder from another server on your network.

This is not reflected in the file system. If a c:/inetpub/wwwroot/mysite/sharefiles/ directory were actually added, IIS would ignore it and point to the virtual directory. I discovered this when installing reporting for MS SQL, which by default adds a ~/report virtual application. One of my applications already had a ~/report directory, and the virtual application took precedence. Applications work essentially the same as folders, except that a virtual application operates in its own application pool.

If you want to stop inheritance you can add the following to the site’s web.config:
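One common approach, sketched here with placeholder content, is to wrap the settings in a location element with inheritInChildApplications set to false:

```xml
<?xml version="1.0"?>
<configuration>
  <!-- Settings inside this block are not inherited by child
       virtual applications under this site. -->
  <location path="." inheritInChildApplications="false">
    <system.web>
      <!-- site-specific settings go here -->
    </system.web>
  </location>
</configuration>
```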


If you want to not inherit certain sections of the configuration, then you add a tag to the child section that clears the inherited values.
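For collection-style sections, such as connection strings, the child can reset what it inherited with a clear entry before adding its own values. A sketch with placeholder names:

```xml
<configuration>
  <connectionStrings>
    <!-- Drop the entries inherited from the parent web.config,
         then define this application's own connection string. -->
    <clear />
    <add name="MyAppDb" connectionString="..." />
  </connectionStrings>
</configuration>
```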

Posted on

Restaurant Apps with The Table Tap

Cyber Sprocket Labs started a fun and exciting relationship that is the perfect cross section of two of our favorite things, programming and beer.    While we’ve worked on many exciting projects over the years, this is definitely in the top two.  None of our esteemed programmers ever thought they’d be working on an application that facilitates the delivery of beer directly to the consumer.  Yet, that is exactly what they are doing.

The Table Tap provides a unique and innovative service to bars and restaurants anywhere in the world.   This service puts full-service beer taps within the consumer’s reach, literally.    A computer-controlled interface with up to 4 beer taps is installed directly in the table.    A quick swipe of an RFID card activates the taps and allows the customer to pour their own beer, as much or as little as they’d like.

Cyber Sprocket has been instrumental in helping Jeff Libby bring his concept to the next level.  By providing technical support both during and after the installation, he has been able to speed up his deployment cycle, increasing revenue.   We have also provided extensive programming services to update the business manager, hostess, and system administrator interfaces.    During our first few months working on the project we’ve also been tasked with several new additions to the software, the newest of which enables direct table-to-table chat using the system’s built-in color LCD displays.

Like we said, a very fun and exciting project that has taken our technology services to places we never expected.   Thanks Jeff, we look forward to a long and prosperous relationship as your one-of-a-kind solution takes off!

Technical Overview

Services Provided

  • Web Application Programming
  • Database Design
  • Database Maintenance
  • Network Support
  • Device Interface Programming
  • System Configuration and Installation
  • Technical Support
  • Customer Support

Platform Details

Posted on

cPanel Brute Force Protection – regaining access

cPanel comes with a great feature called brute force protection.  The problem is, if you mis-type your password 5x in a row, or if you have multiple people in the office, like we do, who combine to miss 5 passwords in a row across various services (ssh, mail, and WHM logins all qualify), then you will lock yourself out of your system.   Here are some tips & tricks that will help you regain access.

Gaining Initial Access

The easiest, and quite possibly ONLY, way to get back into your system is by logging in from a different IP address.  Sometimes you can do this by re-initializing your modem/router if you are on a DHCP-assigned address from your ISP.  This is usually the case for residential service from DSL companies like AT&T (no other choices, huh?  we feel sorry for you), Comcast, or Roadrunner.   If you are on a business-class line and have static IP addresses assigned, then your public-facing IP won’t change.   You can try a One-To-One NAT to give yourself a different static IP, but that assumes you have more than one and that one of them is not being used.   You can try tethering your phone, assuming you have a smart phone.  You can also try to hop on a neighbor’s open WiFi network if you have wireless, or drag your laptop to the local Starbucks and try from there.  If you are wired you have far fewer choices: either call your IPP (hosting company) and ask them to reset brute force protection, OR call your ISP and have them assign you a new static IP (if you have only one, chances are you don’t have servers mapped).

To summarize : get on a different network!

  • Try resetting your modem/router if you are on a DHCP address from your ISP.
  • Assign your PC a different static IP if you have a static IP group.
  • Tether your phone.
  • Jump on the neighbor’s network. (ask first)
  • Bring your laptop to a public WiFi hot spot.
  • Call your ISP for a new IP.
  • Call your IPP and ask them to reset cPanel brute force.

Cleaning Out Specific Blocked Entries

If you can gain SSH access you can clean out the errant entries in the cphulkd database that drives brute force protection, entering your IP address in the where clause to find and remove your blocked IP:

# mysql
mysql> use cphulkd;
mysql> select * from brutes where ip LIKE '%<your-ip-or-start-of-ip-address>%';

Check the returned list to make sure the IP you think is blocked is actually there, then remove it:

mysql> delete from brutes where ip LIKE '%<your-ip-or-start-of-ip-address>%';
mysql> quit

Restarting cpHulkd

After making any changes make sure you restart cpHulkd:

# /usr/local/cpanel/etc/init/stopcphulkd
# /usr/local/cpanel/etc/init/startcphulkd

Cleaning Out ALL Blocked Entries

This will reset the “good guys” and the “bad guys”, but if you need a quick fix, don’t want to disable brute force protection, and aren’t comfortable with MySQL command line then go to the  brute force protection interface in cPanel and click the “flush db” button.

Make Sure You Don’t Get Blocked Again

Login to cPanel and go to the brute force protection interface.  Look for the trusted IP list link.  Add your IPs to that list.

Also Running APF?

You will need to stop the apf process from running:

# service apf stop

Then add your good IPs to the whitelist:

# vi /etc/apf/allow_hosts.rules
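For reference, allow_hosts.rules is just a plain list, one host or CIDR block per line (the addresses below are example values from the documentation ranges):

```
# /etc/apf/allow_hosts.rules -- one address or CIDR block per line
203.0.113.7
198.51.100.0/24
```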
Posted on

Setting Up RAID 1 On Ubuntu 10.04

The following has been distilled from and revised to match our operating process.


Follow the installation steps until you get to the Partition disks step, then:

  1. Select Manual as the partition method.
  2. Select the first hard drive, and agree to “Create a new empty partition table on this device?”.
    • Repeat this step for the second drive.
  3. Select the “FREE SPACE” on the first drive then select “Create a new partition”.
  4. Next, select the Size of the partition. This partition will be the swap partition, and a general rule for swap size is twice that of RAM. Enter the partition size, then choose Primary, then Beginning.
  5. Select the “Use as:” line at the top. By default this is “Ext4 journaling file system”, change that to “physical volume for RAID” then “Done setting up partition”.
  6. For the / partition once again select “Free Space” on the first drive then “Create a new partition”.
  7. Use the rest of the free space on the drive and choose Continue, then Primary.
  8. As with the swap partition, select the “Use as:” line at the top, changing it to “physical volume for RAID”. Also select the “Bootable flag:” line to change the value to “on”. Then choose “Done setting up partition”.
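The “swap is twice that of RAM” rule of thumb from step 4 is easy to sanity-check with a couple of lines (the RAM sizes here are example values):

```python
# Rule of thumb from the partitioning steps: swap partition = 2x RAM.
def swap_size_gb(ram_gb):
    """Suggested swap partition size for a given amount of RAM."""
    return ram_gb * 2

print(swap_size_gb(4))  # a 4 GB machine gets an 8 GB swap partition
```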

RAID Configuration

With the partitions setup the arrays are ready to be configured:

  1. Back in the main “Partition Disks” page, select “Configure Software RAID” at the top.
  2. Select “yes” to write the changes to disk.
  3. Choose “Create MD device”.
  4. Select “RAID1”
  5. Enter the number of active devices, “2”, or the number of hard drives you have, for the array. Then select “Continue”.
  6. Next, enter the number of spare devices “0” by default, then choose “Continue”.
    • Choose which partitions to use. Generally they will be sda1, sdb1
    • For the swap partition choose sda1 and sdb1. Select “Continue” to go to the next step.
  7. Repeat steps three through seven for the / partition choosing sda2 and sdb2.
  8. Once done select “Finish”.


There should now be a list of hard drives and RAID devices. The next step is to format and set the mount point for the RAID devices. Treat the RAID device as a local hard drive, format and mount accordingly.

  1. Select “#1” under the “RAID1 device #0” partition.
  2. Choose “Use as:”. Then select “swap area”, then “Done setting up partition”.
  3. Next, select “#1” under the “RAID1 device #1” partition.
  4. Choose “Use as:”. Then select “Ext4 journaling file system”.
  5. Then select the “Mount point” and choose “/ – the root file system”. Change any of the other options as appropriate, then select “Done setting up partition”.
  6. Finally, select “Finish partitioning and write changes to disk”.
  7. The installer will then ask if you would like to boot in a degraded state, select Yes.