In order to provide faster and more secure connections to the Store Locator Web service, we have added https support through Sucuri. Https lets us take advantage of SPDY and HTTP/2, the latest improvements in web connection technology. There are many reasons to get your servers onto full https support. As we learned, it isn't a one-click operation, but without too much additional effort you can get your servers running on Amazon Linux with a secured connection. Here are the cheat-sheet notes based on our experience.
EC2 Server Rules
With EC2 you will want to make sure you set your security group rules to allow incoming connections on port 443. By default no ports are open; you will have already added port 80 for web support. Make sure you go back and add port 443 as an open inbound rule.
Apache SSL Support
Next you need to configure the Apache web server to handle SSL connections. The easiest way to get started is to install the mod_ssl library, which will create the necessary /etc/httpd/conf.d/ssl.conf file and turn on the port 443 listener.
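For reference, the relevant pieces of the generated file look like this (excerpted from a stock mod_ssl install):

```apache
# /etc/httpd/conf.d/ssl.conf (excerpt)
LoadModule ssl_module modules/mod_ssl.so
Listen 443

<VirtualHost _default_:443>
    SSLEngine on
</VirtualHost>
```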
This is more of a challenge if you don't know where to start. Part of the issue is that Amazon Linux runs Python 2.6 while Let's Encrypt wants Python 2.7. Luckily there has been progress on getting this working, so you can cheat a bit.
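At the time this was written, the cheat was to run the Let's Encrypt client in debug mode, which lets it proceed on Amazon Linux's older Python. A sketch of the commands as they existed then (repository layout and flags may have changed since):

```shell
# fetch the Let's Encrypt client and run it in debug mode, which allows
# it to proceed on Amazon Linux despite the Python 2.6 warning
git clone https://github.com/letsencrypt/letsencrypt
cd letsencrypt
./letsencrypt-auto --debug
```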
You may get some warnings and other messages, but eventually you will get an ANSI-mode dialog screen (welcome to 1985) that walks you through accepting the terms and requesting the certificate. Answer the questions and accept your way to a new cert.
Your certs will be placed in /etc/letsencrypt/live/; remember this path, as you will need it later.
Go to the /etc/httpd/conf.d directory and edit the ssl.conf file.
Look for these three directives and change them to point to the cert.pem, privkey.pem, and chain.pem files.
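With the Let's Encrypt paths from above, the three directives end up looking like this (substitute your own domain for example.com):

```apache
SSLCertificateFile /etc/letsencrypt/live/example.com/cert.pem
SSLCertificateKeyFile /etc/letsencrypt/live/example.com/privkey.pem
SSLCertificateChainFile /etc/letsencrypt/live/example.com/chain.pem
```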
Here is a quick shortcut I used to combine a series of comma-separated values into a single list of unique entries. In my case I was trying to get a unique list of tags that came from several different lists of tags. If list A had "apples, oranges, bananas" and list B had "apples,grapes,watermelons", I wanted to get "apples,bananas,grapes,oranges,watermelons" back.
Here is the shortcut I used:
Paste each comma-separated list into a file named "x"; separate lines are OK.
Run this Linux command on the file to create a file named "y" that has the sorted unique list of tags:
tr ',' '\n' < x | sort -u | tr '\n' ',' > y
This is a quick and efficient way to sort comma-separated lists on Linux, and it likely works on OS X as well.
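One wrinkle with the one-liner above: entries like " oranges" keep their leading space and can show up as duplicates of "oranges". A variant that trims whitespace first, shown as a self-contained sketch:

```shell
# work in a scratch directory and recreate the example input
cd "$(mktemp -d)"
printf 'apples, oranges, bananas\napples,grapes,watermelons\n' > x
# split on commas, trim surrounding spaces, de-duplicate, re-join with commas
tr ',' '\n' < x | sed 's/^ *//; s/ *$//' | sort -u | paste -sd, - > y
cat y   # apples,bananas,grapes,oranges,watermelons
```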
Essentially the Virtual Machines offering on Windows Azure equates to a virtual dedicated server that you would employ from most hosting companies. The only difference with the Windows Azure platform, like most cloud-based offerings, is that you need to serve as your own system admin. This is not web hosting for business owners but for tech geeks. In other words, it works perfectly for guys like me.
Or so I thought.
Different Shades of White
As I learned tonight, there are differences between the various cloud offerings that are not easy to tease out of the hundreds of pages of online documentation touting how awesome a service provider's cloud services are. Sure, there are the metrics. You can compare instance sizes in terms of disk space, CPU, and bandwidth. You can compare pricing and the relative costs of operating your server on each of the cloud platforms. You can even get background information on the company providing the virtualized environment, getting some clue (though never a clear picture) of where the servers are physically located, how many servers they have, how secure the environment is, and more.
At the end of the day they all look very similar. Sure, there are discrete elements you can point to on each comparison spreadsheet you throw together, but in the end the differences are relatively minor. The pricing is similar. The network and server room build-outs are similar. The support offerings look similar. When all is said and done you end up making a choice based on price, the reputation of the company, the quality of the online documentation, and the overall user experience (UX) presented during your research.
After a lot of research, and with quite a bit of experience with Amazon Web Services, all the cloud-based offerings looked very similar. Different shades of white. In the end I decided to try the Microsoft Windows Azure offering. Microsoft has a good reputation in the tech world, they are not going anywhere, and as a Microsoft BizSpark member I also have preview access and discounted services.
My decision to go against the recommendations I’ve been making to my clients for years, “Amazon was one of the first, constantly innovates, and is the leader in the space”, was flawed. Yes, I tested and evaluated the options for months before making the move. But it takes an unusual event to truly test the mettle of any service provider.
Breaking A Server
After following the advice of a Microsoft employee presented in a Windows Azure forum about Linux servers, I managed to reset the Windows Azure Linux Agent (WALinuxAgent) application. No, I did not do this on a whim. I needed to install a GUI application on the server and followed the instructions presented. It turns out that Microsoft has deployed a custom application that allows their Azure management interface to "talk" to the Linux server. That same application DISABLES the basic NetworkManager package on CentOS. To install any kind of GUI application or interface you must disable WALinuxAgent, enable NetworkManager, install, disable NetworkManager, then re-enable WALinuxAgent. The only problem with the instructions published in several places is that they omit a very important step. While connected with elevated privileges (sudo or su) you must DISABLE the WALinuxAgent (waagent) provisioning so that it does not employ the Windows Azure proprietary security model on top of your installation. If you do not do this and you log out of that elevated-privs session, you will NEVER have access to an elevated-privs account again.
Needless to say, you cannot keep an enterprise level server running in this state. Eventually you need to install updates and patches for security or other reasons.
As I would learn, there is ZERO support on recovering from this situation.
Support versus support
In the years of working with Amazon Web Services and hosting a number of cloud deployments on their platform, I had become accustomed to being able to reach support personnel who actually TRY to help you out. They often go above and beyond what is required by contract and either get you back on track through their own efforts or at least provide you with enough research and information that you can recover from any issue with limited effort. Amazon support services can be pricey, but having access to not just the level-one but also higher-level techs is an invaluable resource.
The bottom line is that Microsoft offers NO support services for their Linux images, even those they provide as "sanctioned images", beyond making sure the ORIGINAL image is stable and that the virtual machine did not crash. Not only is there no apparent means to escalate support tickets; as it turns out there is NO SUPPORT at all if you are running a Linux image.
Clearly Microsoft does not put this “front and center” on ANY of their Windows Azure literature. In fact, just the opposite. Microsoft has made an extended effort in all their “before the purchase” propaganda to try and make it sound like they EMBRACE Linux. They go out of their way to make you feel like Linux is a welcome member of their family and that they work closely with multiple vendors to ensure a top-quality experience.
Until you have a problem. At which point they wash their hands, as is evident in this support response along with a link to the Knowledgebase article saying “Linux. Not our problem.”:
Hello Lance, I understand your concerns and frustration, but Microsoft does not offer technical support for CentOS or any other Linux OS at this time.
Please, review guidelines for the Linux support on Windows Azure Virtual Machines: http://support.microsoft.com/kb/2805216
While the lack of support and the inability to regain privileged user access to my server is the primary concern that has me on the path of choosing a new hosting provider, there have been other issues as well.
A few times in the past several months the WordPress application has put Apache in a tailspin, consuming all the memory on the server. While that is not necessarily an issue with Windows Azure, the fact that the "restart virtual image" process DOES NOT WORK at least 50% of the time IS a big issue. Windows Azure is apparently overly reliant on that dreaded WALinuxAgent on the server. If the agent does not respond, because memory is over-allocated for example, the server will not reboot. The only thing you can do is press the restart button, wait 15 minutes to see if it happened to get enough memory to catch the restart command, and try again. Ouch.
The Azure interface is also not as nice as I first thought. While better than the original UX at Amazon Web Services, it is overly simplistic in some places and downright confusing in others. Try looking at your bill. Or your subscription status. You end up jumping between seemingly disjointed sites. Forget about online support forums. Somehow you end up in the MSDN network, far removed from your cloud portal. I often find myself with a dozen windows open so I can keep track of where I was or what I need to reference, lest I lose my original navigation path and have to start over. Not to mention the number of times that this site-to-site hand-off fails and your login is suddenly deemed "invalid" mid-session.
So once again, I find myself looking for a new hosting provider. Luckily I recently made the move to Windows Azure and not only have VaultPress available to make it easy to relocate the WordPress site but also Crash Plan Pro to get all the “auxiliary” installation “cruft” moved along with it.
Where will I go?
In my mind there are only two choices for an expandable cloud deployment running Linux boxes. Amazon Web Services or Rackspace. I’ll likely end up with Amazon again, but who knows… maybe it is time to try the legendary support at Rackspace once again. We’ll see. Stay tuned.
I recently needed to clean up a directory on my Linux box that included hundreds of files. I wanted to get rid of all the files that hadn’t been updated in over a year. At first I decided just to list the files by date:
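The listing command itself did not survive in the original post; presumably it was a long listing sorted by modification time:

```shell
# long format, sorted by time: newest files first, oldest scroll to the bottom
ls -lt
```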
This lists the files in long format by time (newest files first). It shows all the details, with the oldest files scrolling to the bottom of the window, so the last few files above my command prompt are the oldest.
There are hundreds of files more than a year old.
Find is one of the tools I keep in my Linux tool belt. I don't need it often, but when I do it saves me quite a bit of time. Find is the Swiss Army Knife of Linux search tools. It is complete, thorough, and comes with just about every "doo-dad" (a technical term) for finding files. It does real-time system searches, so unlike locate it does not rely on a secondary database that may be outdated and give incomplete results.
The downside of find is that there are so many options. It is easy to choose the wrong option or, more likely, to string together the options in a manner that the search takes forever and you get no results.
The upside, thanks to how command shells work, is that you can use the output of find to drive other applications, like ls or rm. The latter two are how we'll employ find.
Find Files Not Touched In A Year
First we can find all the files in our current directory that are ‘stale’ like this:
find ./ -ctime +365
In English: "find stuff in this directory (./) where the change time (ctime) is more than 365 days ago". A note on names: despite how it reads, ctime is the inode change time (updated when contents, permissions, or ownership change), not the creation time.
The sister option is mtime, the modification time, which tracks only changes to the file's contents and is usually what you want when looking for files that haven't been touched in a while.
Now we can combine this with ls to list the results. It may seem redundant, but I like to test the parameter passing of find to another shell command using something innocuous such as ls. So we test like this:
ls -l `find ./ -ctime +365`
The back-ticks take the output of find, which is a simple relative-path based list of the files it located, and uses that as the second parameter to ls.
If all looks good we can now force a remove of those files. Be careful with rm -f; you can do irreparable harm with it. There are other options, and if you are not comfortable with power tools that can take a limb off with one keystroke, drop the -f or use one of the myriad Linux admin tools to help you out. I'll roll the dice and hope all my limbs remain intact:
rm -f `find ./ -ctime +365`
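One caveat: the backtick approach breaks on file names that contain spaces. A safer pattern is to let find hand its matches straight to rm. Here is a self-contained sketch in a scratch directory (it uses -mtime rather than -ctime, since touch can backdate a file's modification time for the demo but not its ctime):

```shell
# work in a scratch directory so nothing real is harmed
cd "$(mktemp -d)"
touch -t 202001010000 "old file.txt"   # backdated years into the past
touch fresh.txt
# delete files not modified in over a year; -exec ... {} + hands the
# matched names directly to rm, so spaces in names survive intact
find . -mtime +365 -exec rm -f {} +
ls   # only fresh.txt remains
```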
Other Find Options
There are a lot of ways to find files by other attributes, such as "delete all files larger than a given size" or "delete all files older than <this file>". This is a good resource that explains some of the options and how to perform different types of find operations:
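A couple of sketches of those variants; the thresholds and file names here are made up purely for the demo:

```shell
cd "$(mktemp -d)"
dd if=/dev/zero of=big.bin bs=1M count=2 2>/dev/null    # a 2 MB file
dd if=/dev/zero of=tiny.bin bs=1K count=1 2>/dev/null   # a 1 KB file
# files strictly larger than 1 MB under the current directory
find . -type f -size +1M   # ./big.bin
# files not newer than a reference file (here, a backdated marker)
touch -t 202001010000 marker.txt
find . -type f ! -newer marker.txt
```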
I've recently found something relatively interesting that you can do in a bash terminal. I recently sent out an email about how to get git completion's wonderful self working on Macs.
Part of that endeavor meant diving into the way the terminal displays information in your prompt. Some of the things I found were escape codes like \h to stand for host, \W for the working directory without the path, etc.
So I set out to find out what some more of those escape characters were, and I found: \!
I've learned from Paul that !! will repeat the last command you put in. This \! escape displays the sequential history number of the current command in your prompt. So now that I've added it to my PS1, as before from the git completion tutorial, my prompt displays:
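For reference, a PS1 along these lines; the exact string from the git completion tutorial may differ, this is a minimal sketch of the escapes mentioned:

```shell
# \! = history number, \h = short hostname, \W = working dir basename
export PS1='(\!)\h:\W \u\$ '
```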
(527)iMac:~ chase$ _
And when I put in a command, let's say I emptily type grep<enter>
Let's pretend that was some crucially complex command (you know the kind... the kind that escapes you later when you really need it) instead of an empty grep, and let's say that through the course of working I've since entered dozens or hundreds of other commands at the prompt. I have a few options available:
hit the up arrow repeatedly until I find the command (which is not shown with its history number next to it)
use the <ctrl>+R command and type in parts of the command I remember
grep the history
lots of things
or, if I’ve remembered that 527 was the line for that crucial command, I can simply type:
(8901)iMac:~ chase$ !527<enter>
And it will repeat the command from that line. The only downside is that if you come to rely on it for remembering several different complex commands, you'll end up having to remember the several different numbers that correspond to those lines. Also, this function doesn't give you any kind of "Are you sure?" moment to let you know what you're about to do, so one transposed number or dropped digit could potentially mean catastrophe if you've ever run some iffy commands (rm -Rf).
About This Article…
I pilfered this from “The List”, thanks Chase…
RAID arrays are an important part of any mission-critical enterprise architecture. When we talk RAID here we are talking mirrored RAID, or mirrored and striped RAID, not simply striping, which gives you a larger drive from several smaller drives. While that may be great for some home or desktop applications, for an enterprise application striping alone simply doubles your chances of a failed system.
We often spec out RAID 1 or higher mirrored systems with RAID 1+0 being the most common (mirrored and striped) so that you increase access performance AND keep the system up if a single drive fails (on a 3 drive RAID 1+0 configuration). Along the way we’ve learned some tips & tricks that may help you out. To start with we’ll post some info on Linux RAID and eventually expand this article to include Windows information.
Fake v. Real Raid
One thing we've learned recently is that the flood of new low-cost servers has brought a flood of servers with on-board RAID controllers. Unfortunately these new RAID controllers use a low-cost solution that basically pretends to be a RAID controller by modifying the BIOS software. In essence they are software RAID controllers posing as hardware RAID controllers. This means you get all of the BAD features of both systems.
One easy way to tell if you have a server with “fake raid” is to configure the drives in RAID mode from the BIOS. Then boot and install Linux. If the Linux install sees both drives versus a single drive then the “on board RAID” is a poser. Skip it. Configure the BIOS in standard drive mode & use the software RAID.
Most current Linux distros have RAID setup and configuration built into the setup and installation process. We’ll leave the details to other web articles.
MDADM – Linux RAID Utility
mdadm is the Linux utility used to manage and monitor RAID arrays. After configuration, a pair of drives, typically denoted sda, sdb, etc., shows up in your standard Linux commands as md0. They are "paired up" to make up the single RAID drive that most of your applications care about.
mdadm is how you look “inside” the single RAID array and see what is going on. Here is an example of a simple “show me the status” command on the RAID array. In this case we have a failed secondary drive in a 2-disk RAID1 array:
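The status output itself was lost from the original post; for a two-disk RAID1 with a failed second member, `mdadm --detail /dev/md0` produces something along these lines (device names and details illustrative):

```
[root@dev:~]# mdadm --detail /dev/md0
/dev/md0:
     Raid Level : raid1
   Raid Devices : 2
  Total Devices : 1
          State : clean, degraded

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       0        0        1      removed
```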
We had to do this on our drive because we forgot to partition it into boot and data partitions (/, /boot, and /dev/shm). Thus /dev/sdb instead of /dev/sdb1, etc., as is the norm for a partitioned drive.
To properly re-add a drive to an array you will need to set the partitions correctly. You do this with fdisk. First, look at the partitions on the valid drive then copy that to the new drive that is to replace the failed drive.
[root@dev:~]# fdisk /dev/sda
The number of cylinders for this disk is set to 60801.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)
Command (m for help): p
Disk /dev/sda: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          13      104391   fd  Linux raid autodetect
/dev/sda2              14         140     1020127+  fd  Linux raid autodetect
/dev/sda3             141         741     4827532+  8e  Linux LVM
/dev/sda4             742       60801   482431950    5  Extended
/dev/sda5             742       60801   482431918+  fd  Linux raid autodetect
[root@dev:~]# fdisk /dev/sdb
Use "n" to create the new partitions, and "t" to set the type to match above.
That should get you started. Google & Linux man commands are your friend. As we have time we’ll publish more Linux RAID tricks here.
We found a system administration problem on a server today that was being caused by incorrect directory permissions. Any email that passes through the server-wide spam filter was not going through because of permissions on the /home/<domaindir-here>/etc directory. That directory needs to be owned by mail.
Here is a quick way to update those directories:
[root@host:home]# cd /home
The find command lists only directories (much, much faster if you know you only need a certain file type, like 'd' for directories), descends at most 2 levels deep (/home itself being level 1), and matches the name "etc".
[root@host:home]# chgrp mail `find /home -maxdepth 2 -type d -name etc`
Now we pass find's output as an argument list to the ls command to see what we touched. The 'd' on ls restricts output to the directory entries themselves, so we don't descend into those directories and list their contents.
[root@host:home]# ls -ld `find /home -maxdepth 2 -type d -name etc`
drwxr-x--- 3 aaron mail 4096 Feb 10 2008 /home/aaron/etc
drwxr-x--- 2 abundatr mail 4096 Oct 20 2009 /home/abundatr/etc
drwxr-x--- 3 alutask mail 4096 Feb 10 2008 /home/alutask/etc
drwxr-x--- 3 banks mail 4096 Feb 21 2008 /home/banks/etc
drwxr-x--- 4 chasvol mail 4096 Feb 10 2008 /home/chasvol/etc
drwxr-xr-x 3 cyberspr mail 4096 May 7 11:24 /home/cyberspr/etc
drwxr-x--- 2 daedalus mail 4096 Mar 27 2008 /home/daedalus/etc
drwxr-x--- 7 dolphin mail 4096 Jul 30 2008 /home/dolphin/etc
drwxr-x--- 3 dutchbul mail 4096 Feb 10 2008 /home/dutchbul/etc
drwxr-xr-x 2 eatchas mail 4096 May 10 21:59 /home/eatchas/etc
drwxr-xr-x 2 fireant mail 4096 May 25 21:16 /home/fireant/etc
drwxr-xr-x 4 jrsint mail 4096 Jan 11 2008 /home/jrsint/etc
drwxr-x--- 3 lance mail 4096 Jul 9 2007 /home/lance/etc
drwxr-xr-x 2 memoryve mail 4096 Feb 16 10:29 /home/memoryve/etc
drwxr-x--- 2 michaelc mail 4096 May 13 2008 /home/michaelc/etc
drwxr-x--- 3 modelloc mail 4096 Dec 18 19:22 /home/modelloc/etc
drwxr-x--- 3 monstrss mail 4096 Feb 10 2008 /home/monstrss/etc
drwxr-x--- 3 nicolas mail 4096 Feb 10 2008 /home/nicolas/etc
drwxr-x--- 3 outdoor mail 4096 Aug 26 2008 /home/outdoor/etc
drwxr-xr-x 2 perks mail 4096 Jun 6 15:17 /home/perks/etc
drwxr-x--- 2 pout mail 4096 Jun 15 12:08 /home/pout/etc
drwxr-x--- 3 ravenel mail 4096 Aug 12 2007 /home/ravenel/etc
drwxr-x--- 4 remodel mail 4096 Feb 10 2008 /home/remodel/etc
drwxr-x--- 2 saveag mail 4096 Oct 9 2008 /home/saveag/etc
drwxr-xr-x 2 shoppout mail 4096 Jun 15 16:46 /home/shoppout/etc
drwxr-x--- 3 southern mail 4096 Feb 10 2008 /home/southern/etc
drwxr-x--- 2 tbcustom mail 4096 Jun 20 2008 /home/tbcustom/etc
drwxr-x--- 3 thebicyc mail 4096 Jun 16 2008 /home/thebicyc/etc
drwxr-xr-x 3 theenerg mail 4096 Feb 9 2008 /home/theenerg/etc
drwxr-x--- 2 unclelue mail 4096 Dec 14 2009 /home/unclelue/etc
drwxr-x--- 2 vanjean mail 4096 Feb 16 2009 /home/vanjean/etc
drwxr-x--- 3 wwwbrea mail 4096 Dec 18 01:22 /home/wwwbrea/etc
This same technique can be used for any number of commands when you need to work on directories. Just be careful with it, this can wreak as much havoc as it can repair damage done by other command line tools that have been wielded without care.
This Red Ryder BB gun is loaded. Be careful out there! "You'll shoot your eye out, kid"…
I finally got tired of looking at the thousand-plus-line daily reports coming to my inbox from Logwatch every evening. Don't get me wrong, I love logwatch. It helps me keep an eye on my servers without having to scrutinize every log file. If you aren't using logwatch on your Linux boxes I strongly suggest you look into it and turn on this very valuable service. Most Linux distros come with it pre-installed.
The problem is that on CentOS the version of logwatch that comes with the system was last updated in 2006. The logwatch project itself, however, was updated just a few months ago. As of this writing the version running on CentOS 5 is 7.3 (released 03/24/06) and the version on the logwatch SourceForge site is 7.3.6 (updated March 2010). In this latest version there are a lot of nice updates to the scripts that monitor your log files for you.
The one I'm after, consolidated brute-force hacking attempt reports, is a BIG thing. We see thousands of entries in our daily log files from Chinese hackers trying to get into our servers. This is typical of most servers these days; however, in many cases ignorance is bliss. Many site owners and IPPs don't have logging turned on because they get sick of all the reports of hacking attempts. Luckily we block these attempts on our server, but our Fireant labs project is configured to have iptables tell us whenever an attempt is blocked at the kernel level (we like to monitor what our labs scripts are doing while they are still in alpha testing). This creates THOUSANDS of lines of output in our daily email. Logwatch 7.3.6 helps mitigate this.
Logwatch 7.3.6 has a lot of new reports that default to "summary mode". You see a single line entry for each notable event, versus a line for each time the event occurred. For instance, we now see a report more like this for IMAPD:
So as you can imagine, with 10 sections to our logwatch report, the new summary reports make our email a LOT easier to scan for potential problems in our log files.
In order to get these cool new features you need to spend 10 minutes, 5 if you're good with command-line Linux, and install the latest version of logwatch. In essence you are downloading a tarball full of new shell and Perl script files. The install does not compile anything; it simply copies script files to the proper directories on your server.
Our examples here are all based on the default CentOS 5 paths.
Go to a temp install or source directory on your server.
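The download and install commands were lost from the original post; the gist, assuming the 7.3.6 tarball from the SourceForge project page (verify the exact URL and installer script name against the download itself):

```shell
# fetch and unpack the logwatch 7.3.6 tarball, then run its installer,
# which copies the updated scripts into place (no compiling involved)
wget http://downloads.sourceforge.net/logwatch/logwatch-7.3.6.tar.gz
tar xzf logwatch-7.3.6.tar.gz
cd logwatch-7.3.6
sudo sh install_logwatch.sh
```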
That’s it. You should now be on the latest version of logwatch.
You can tweak a lot of the settings by editing the files in /etc/log.d/default.conf/services/<service-name>. For example, we ask logwatch to only tell us when someone's attempt to connect to our server has been dropped more than 10 times by our Fireant scripts (we do this via the iptables service setting).
Hope you find this latest update useful. We certainly did!
There have been multiple situations where I find out that I need a particular file to continue with something I am doing. Most of the time this happens when I am compiling a program. I will be missing a library, or a header file, or something. So I end up on search engines looking for whatever package I need to 'apt-get install'. Well, it turns out there is a command-line tool that will tell you this information, on systems that use Apt, that is.
I use Ubuntu, and apt-file doesn't come installed by default, at least not on the 10.04 release I'm using. But it is easy to get: a simple 'apt-get install apt-file'.
Once you have it installed, you will have to update the cache it uses for searching. I was prompted to do this automatically, but if you are not then you can run ‘apt-file update’ to do so.
With that done, the command ‘apt-file find’ will let you list packages that include the given file. For example, I was looking for the program ‘xpidl’, which I didn’t have. Easy to find:
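The search output was lost from the original post; a run on Ubuntu of that era looked along these lines (package name and path illustrative):

```
$ apt-file find xpidl
xulrunner-dev: /usr/lib/xulrunner-devel-1.9.2/bin/xpidl
```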
You can provide the argument ‘-x’ to use a Perl regular expression as your search query.
You can also see what files are in a package by using the command ‘list’ instead of ‘find’. Unlike the ‘dpkg -L’ command, ‘apt-file list’ will work even if you don’t have the package installed or cached on your system.
I’ve been recently working with AWS EC2 instances and have found that the SSH keys that they require for secure login practices actually have some nice benefits. For one thing, once I’ve generated a keyfile that uniquely identifies me on my local PC, I can use that keyfile to quickly and easily login to any server without having to remember passwords and login credentials. Having to get in and out of over a dozen different servers every week, and nearly 100 different servers over the course of a year, the use of key sharing certainly has the potential to save a lot of keystrokes.
In a nutshell, here are the pieces that make it work:
Create a unique fingerprint on your local machine
Initialize the SSH environment for your user login on the remote system
Store that fingerprint in the SSH environment on the remote system
Once you have completed these steps, you will be able to login by simply typing in your username on the remote system. Your SSH compatible terminal program (I use PuTTY these days) will swap credentials with the server using your digital fingerprint in place of typing in a password.
The more detailed steps listed here assume a Redhat distribution of Linux and use of PuTTY and Puttygen on a Windows box (I’m using Vista at the moment).
Get Puttygen and generate a new key:
Run the Puttygen program and generate a new key (click Generate and move the mouse around the blank area until the progress bar completes).
When finished, click Save private key. I like to save the file with a -priv.ppk ending on the file name (Puttygen will not create an extension by default).
In case you need one later, it is a good idea to save a public key as well.
Highlight the key text in the key box and copy it (Ctrl-C or Right-Click Copy).
Login to the remote system with your login credentials.
Check for a .ssh directory with an id_rsa.pub and id_rsa file within, if they are missing you’ll need to create an RSA fingerprint on the server* for handshaking as noted here:
Enter the following command at the command prompt: ssh-keygen -t rsa
Accept the defaults for the prompts. *note: while you don’t necessarily need to generate the key, this will normally create the .ssh directory where you’ll need to put your authorized_keys file later.
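If you prefer to skip the prompts, ssh-keygen can also run non-interactively. A sketch generating into a scratch directory; on the real server you would target ~/.ssh/id_rsa and ideally use a passphrase:

```shell
dir=$(mktemp -d)
# -t rsa = key type, -N '' = empty passphrase, -f = output file
ssh-keygen -t rsa -N '' -f "$dir/id_rsa" >/dev/null
ls "$dir"   # id_rsa (private key) and id_rsa.pub (public key)
```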
Enter the key in the authorized_key file on the remote server:
Open the authorized_keys file within the .ssh directory with your favorite editor (I prefer vi; some more skilled professionals will lean toward emacs or even vim). If this is the first key you are putting online you may need to create this file.
Paste in the key you copied from Puttygen as a single line within the file and save the file.
Make sure the file has limited access, such as rw------- (chmod 600 authorized_keys).
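Those steps as shell commands; the key string here is a placeholder, so paste the actual public key you copied out of Puttygen:

```shell
# create the .ssh directory with the permissions sshd expects
mkdir -p ~/.ssh && chmod 700 ~/.ssh
# append the public key (placeholder shown) as a single line
echo 'ssh-rsa AAAAB3...placeholder... you@your-pc' >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
```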
Now you can start Putty and configure your session.
Enter the host name.
Under the Category:Connection:SSH:Auth setting you should browse to the Private Key file you saved in step 1.3 above.
Go back to the main Category:Session window and Save the session so you don’t have to do this every time.
Now you can connect to the remote server by loading the session, clicking open, and simply typing your username.
Logging Into AWS EC2 Instances With Putty
The process is basically the same as above, however the PEM key that Amazon provides is not PuTTY compatible. Luckily the PuTTY Key Generator, PuTTYGen, will solve the problem for you.
Change the file type filter to All Files (*.*)
Select the PEM key you downloaded from Amazon
Save private key (ignore the no passphrase warning)
You can use the newly converted & saved .ppk key file with PuTTY to access your AWS EC2 Instance.
If you are using the default security group, make sure you open up port 22, preferably for your client IP address with a bitmask of /32. Otherwise every hacker in the world will be trying to brute force your system.