Posts Tagged ‘security’

A quick overview of PuTTY and SSH for AWS Newbies

May 17th, 2009

Linux Access with SSH & PuTTY

This post will attempt to explain what SSH and PuTTY are so that, as a user, you understand the terminology of AWS and can be productive in the environment. It will not attempt to make you an expert in SSH. For best practices in implementing SSH, I strongly recommend a book dedicated to hardening *nix (Linux, Unix, Solaris, etc.).


In the early days of networking (not that long ago, really), very simple tools were used to work with remote computers: telnet as a console, ftp for file copying, rsh for remote command execution, and others. These tools were easy to configure and use. They were client/server in that a software component needed to run on both the local machine (the client) and the remote machine (the server).

While easy to use, they were very insecure. They made no pretense of verifying that the calling host really was the calling host. Everything was username/password based, and both the username and the password were passed around the network in cleartext. If you intercepted the packets being routed around the network (with a sniffer, for example), you could extract the login credentials. Even if you encrypted all of your data, your credentials were still in the clear.

SSH is an attempt (quite successful) to fix those insecurities without making things any more complex than they need to be. SSH stands for Secure SHell. However, SSH is not really a command shell; rather, it is a protocol that encrypts communications. That means programs that use SSH can work like telnet or ftp but are more secure.

Note: Technically, SSH is also a tool. There is a client terminal program called ssh. It's a non-graphical, command-line tool that provides a window which executes a command shell on the remote system.

SSH offers multiple modes of connecting, but for the purposes of AWS we will talk about key-based access. To make things more secure, EC2 uses key-based authentication. Before starting an instance, you need to create a key pair.

Note: The below explanation of SSH is a gross oversimplification. I am just trying to give you a feel for what is going on. If you really want to understand the technical details, I really do recommend that you purchase a book. My personal recommendation is SSH, The Secure Shell: The Definitive Guide from O’Reilly.

When an instance starts up for the first time, EC2 copies the SSH key that you created to the proper directory on the remote server. The remote server will be running the SSH server software.

You will then use an SSH client to connect to the server. The client will ask for some information proving that the server really is who it says it is. The first time you connect to a server, the client won’t have that information available, so it will prompt you to verify that the server is legitimate.

You verify that information by comparing a thumbprint. Verifying a host is a bit beyond this post, but do an internet search for “ssh host thumbprint” and you’ll find a variety of articles explaining it in detail.
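To get a feel for what a thumbprint actually looks like, here is a minimal sketch using OpenSSH’s ssh-keygen. The key generated below is a throwaway created purely for demonstration; the file name is arbitrary:

```shell
# Generate a throwaway 2048-bit RSA key pair (no passphrase) and print
# its fingerprint -- the same kind of "thumbprint" you would compare
# against when a server's host key is presented for the first time.
ssh-keygen -t rsa -b 2048 -N "" -f ./demo_key -q
ssh-keygen -l -f ./demo_key.pub
```

The second command prints the key length, the fingerprint, and the key type; when a real server presents its host key, you compare the fingerprint your client shows you against one obtained through a trusted channel.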

Once the client accepts the host, the client will send secret information to the host. This is your key data. If the host is able to make a match, it will authenticate you and log you in. If the host then asks for a password, your key did not work and something is not configured properly. In my experience, it will probably be that your client key file is not in the place your client is expecting it to be.

What happens next depends on the tool you are using. If you are using a terminal program, ssh for example, you will now have a command prompt. If you are using sftp or scp, you will be able to copy files.
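As a sketch of what those invocations look like with key-based access: the key path and hostname below are made-up placeholders, so the command is assembled as a string rather than actually executed.

```shell
# Build the ssh command line for a key-based login. The host and key
# path are hypothetical placeholders -- substitute your own values.
KEY="$HOME/.ssh/my-ec2-key.pem"
HOST="root@ec2-67-202-55-12.compute-1.amazonaws.com"
CMD="ssh -i $KEY $HOST"
echo "$CMD"

# scp and sftp accept the same -i flag to point at your key:
#   scp  -i "$KEY" backup.tar.gz "$HOST":/tmp/
#   sftp -i "$KEY" "$HOST"
```

The -i flag tells the client which private key file to offer; without it, the client falls back to its default key locations.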

In addition to command line tools, there are GUI tools that use the SSH protocol. WinSCP is an excellent SCP client for Windows.

Regardless of the tools you use, SSH is busy encrypting everything you send over the wire. The SSH protocol has evolved over the years, and will probably evolve more in the future, but it currently uses a very secure form of encryption.

If you are running Linux, you are pretty much finished at this point. SSH ships with every Linux distribution that I am aware of. If you are using Windows, however, you either need to install Cygwin (a Unix environment that runs in Windows) or you’ll want to get PuTTY.


You can download all of the programs discussed in this section at:

I honestly have no idea why PuTTY is spelled PuTTY. I figure the TTY part comes from the Unix abbreviation for a terminal (teletype). I’m not sure about the Pu, though.

I do know what PuTTY is, though. PuTTY is a very simple implementation of an MS-Windows SSH terminal client. When I say it is simple, I mean that as a compliment. This is a tool that does not get in the way.

You tell PuTTY to connect to a remote server and, as long as your keys are configured, it will connect you. If you are not using keys, you can connect with passwords (if the host allows that). As a best practice, keys are recommended over passwords.

PuTTY is the terminal client, but you can get a couple of other tools from the same author. PSFTP and PSCP offer secure file transfers. These tools are as easy to use as PuTTY and work in much the same way.

For command line syntax and configuration, take a look at the documentation at the link above.

A note about SSH keys and PuTTY: they are not compatible. The same web site offers a utility called PuTTYgen. When you create a key pair for EC2, you download that file (a .pem file) to your local machine. PuTTYgen converts the .pem file to a PuTTY private key file (a .ppk file).

PuTTY Key Generator

The tool is named puttygen.exe. Run the executable and the above window pops up. To convert an Amazon key to a PuTTY key, use the menu option Conversions → Import key. Load the .pem file that you downloaded and press the Save private key button.

It will warn you about leaving the passphrase blank. That’s ok.

Save the file to the location that PuTTY has been configured to look in for its keys.

Using and Managing AWS – Part 3: AWS Security

May 17th, 2009

AWS Security

Data Center Security

Amazon is a well-known entity and works to provide an extremely secure environment for your applications and your data. Amazon is pursuing Sarbanes-Oxley certification (by an external auditing agency) and SAS 70 Type II certification.

Amazon does not broadcast the locations of its data centers, and physical security is a top concern. The data centers have military-grade external protections. Physical access is controlled by two-factor authentication, and only those Amazon employees with an actual need are ever given access.

Hardware access is provided only to those administrators who directly require it, and they must use their own SSH keys to access bastion hosts (kind of like cloud overseers). From there they can escalate access to individual client hosts. All administrator access is logged and audited.

The network is monitored by Amazon security services. Thanks to Amazon’s IP security measures, an EC2 instance is not allowed to send traffic with a spoofed source address. Amazon also monitors for port scanning and blocks the offending source address when scanning is detected.

Because all clients are running in virtual servers with virtual storage, there is no way for one client to gain access to another client’s data or traffic. For all intents and purposes, each client is running in its own data center.

Data Security

Your data is secured when traveling over the wire by SSL. You can choose less secure methods once you have an image up and running, but by default an AMI will be very secure. If you choose to open your firewall (security group) to any and all traffic, you will be open to hacking. If you choose to use password security instead of SSH keys, you take your own risks.

There are several additional steps you can take to protect your data.

  • Only present web servers to the internet. You have the option of not having a public IP address on every instance. If you have a multi-tier application, you can choose to have a public IP address on your web server and just an internal IP address on your database server. To access the database server, you would log into the web server and then ssh from there to the database server.
  • Another option is to encrypt all of your stored data (or at least the sensitive portions of it). Amazon offers Linux, Windows and Sun virtual machines, and all of these operating systems offer very robust encryption (at least via third-party tools). A very good, free option on Windows servers is TrueCrypt.
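The two-hop login from the first bullet can be made transparent with a client-side SSH configuration. This is a sketch with made-up host names, addresses and key paths; the ProxyJump directive requires OpenSSH 7.3 or later (older clients can achieve the same with ProxyCommand):

```shell
# Write a demo SSH client config: with this in place, "ssh -F
# ssh_config.demo db" hops through the public web server to reach the
# database box on its internal-only address. All values are made up.
cat > ./ssh_config.demo <<'EOF'
Host web
    HostName ec2-67-202-55-12.compute-1.amazonaws.com
    User root
    IdentityFile ~/.ssh/my-ec2-key.pem

Host db
    HostName 10.0.0.12
    User root
    ProxyJump web
EOF
grep -c '^Host ' ./ssh_config.demo    # two Host stanzas
```

The database server never needs a public address; only the web server is exposed, and it acts as the single audited entry point.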

Data stored in AWS services (S3, SimpleDB and EBS) is automatically and redundantly stored in multiple physical locations. You do not pay for this additional storage. Amazon does this to ensure the integrity of your data (and that it meets its SLAs).

Yet another option is to use the encryption capabilities offered by the various databases that you might be using. Oracle provides Transparent Data Encryption for data at rest and offers Oracle Secure Backup via RMAN. Using Oracle Secure Backup with the Cloud Module extension allows you to encrypt your backups and store them on S3.


AWS allows two different methods of authentication. When you submit a request, be it to create a new instance in EC2 or to upload a file to S3, AWS needs to know that you are allowed to submit the request that you are submitting.

AWS recognizes two different types of request identifiers: a secret key or an X.509 certificate. The X.509 certificate can only be used with SOAP transactions, and only with certain EC2 and SQS requests. The secret key method can be used with all of the services and for all of the request types. For that reason, I will assume you have chosen to use the secret key method, and that is the method I will be using here on the blog.
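To give a feel for how a secret key authenticates a request without itself traveling over the wire: the REST APIs build a canonical “string to sign” from the request, compute an HMAC of it using your secret key, and send only the resulting signature. A simplified sketch follows; the secret key is the example key from Amazon’s own documentation (not a real credential), and real requests canonicalize more fields than shown here:

```shell
# HMAC-SHA1 request-signing sketch. The secret key is Amazon's
# published documentation example, and the string-to-sign is a
# simplified GET request (method, two empty headers, date, resource).
SECRET='wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY'
STS='GET\n\n\nTue, 27 Mar 2007 19:36:42 +0000\n/johnsmith/photos/puppy.jpg'
SIG=$(printf '%b' "$STS" \
    | openssl dgst -sha1 -hmac "$SECRET" -binary \
    | openssl base64)
echo "$SIG"
```

The server holds a copy of your secret key, recomputes the same HMAC over the request it received, and accepts the request only if the signatures match; an eavesdropper sees the signature but never the key.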

Amazon allows you to regenerate your key any time you decide you need to. Remember that your access keys are what you will use to access AWS via any third-party software or external vendors (such as RightScale).

To get to the access identifiers, choose Your Account → Access Identifiers from the menu shown in Image 2 above. This screen will allow you to generate a new secret access key and a new X.509 security certificate.

Your access key does not change and is included on all requests to AWS; think of it as your username. Think of your secret key as your password: you can change it at any time, but your access key stays the same.

Amazon notes on the page that you must protect your secret key and never email it to anyone. You will need to give it out under certain conditions, though. When you use a third-party tool like ElasticFox or Cloud Studio, you need to enter your credentials. You will also need to give your secret key to a third-party vendor like RightScale, which will issue requests on your behalf.

AWS Access identifiers

You can see your secret key by clicking the “+ Show” link. You will come to this screen to get your access keys. When entering the values into other tools, use cut and paste, but paste first into Notepad or a vi session. For some reason, the copied data usually has several extra spaces at the end that will prevent it from working when you paste it directly.
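The stray-whitespace problem is easy to fix on the command line. A small sketch, using Amazon’s published example access key (not a real credential) standing in for whatever you copied out of the browser:

```shell
# Trim the trailing whitespace that often rides along when a key is
# copied out of the browser. The key below is Amazon's documented
# example access key, not a real credential.
PASTED='AKIAIOSFODNN7EXAMPLE   '
CLEAN=$(printf '%s' "$PASTED" | sed 's/[[:space:]]*$//')
echo "[$CLEAN]"    # prints [AKIAIOSFODNN7EXAMPLE]
```

The brackets in the output make any leftover whitespace visible at a glance.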

To generate a new key, just press the Generate button. Because this key is like a password, you may want to regenerate it on a regular basis. If you are sharing this key at all, you will need to make sure you update anyone who has it.

Amazon Web Services EC2 – Part 6: Elastic Block Storage

April 8th, 2009

Elastic Compute Cloud (EC2)

Elastic Block Storage (EBS)

For most of its life in beta, EC2 offered only two kinds of storage: AMI-based transient storage and S3. The transient storage was mounted as a filesystem, and S3 was used for backup. To preserve data across instance downtime, data had to first be saved off to S3 and the instance brought down; when the instance was brought back up, data was restored from S3. It was a painful process.

Enter EBS, the Elastic Block Store. EBS is a persistent storage mechanism, like a hard drive, that can be mounted by an instance and will retain its data even when the instance is brought down.

Amazon estimates that EBS storage is more reliable than commodity hard drives, with an annual failure rate of 0.1 – 0.5%. EBS is replicated (mirrored) within an availability zone for redundancy; you would need to lose the entire availability zone to lose your data.

An EBS volume can only be attached to a single instance at a time, but like a USB drive, you can attach it to one instance, copy data to it, and then attach it to another instance, making it an easy way to move large volumes of data.

An EC2 instance can attach many EBS volumes. An EBS volume can be allocated from 1GB to 1TB. If you need 10TB, mount ten 1TB volumes or twenty 500GB volumes. You are limited to a maximum number of volumes (20), but you can always request that the limit be increased should you have a business reason to do so.

Performance of an EBS volume is engineered to be better than the internal AMI volumes; it’s sort of like attaching to a very fast, very expensive SAN. Because they are raw devices, you can attach multiple volumes and stripe across all of them to improve I/O.

An added durability feature is volume snapshots. You can take point-in-time snapshots of your entire EBS configuration, and the data will be backed up to S3. Snapshots are incremental, so only data that has changed is backed up. This saves time and money (less in S3 charges). Snapshot data in S3 is stored compressed to take even less space.

You can also create new volumes from a snapshot. If we refer back to that catalog application I mentioned earlier, you could add new catalog instances by creating a new instance and attaching it to a copy of your master volume. S3 supports lazy loading, so you can start the instance before all of the data is copied. If any data is requested before it is restored, S3 will immediately serve that data up so that, to the file system, it looks as if it were already available.


EBS storage costs $0.10 per GB per month of allocated disk space: 10GB for a month costs $1.00, and 100GB would be $10.00 per month. Very, very cheap for such a high-performing and reliable storage system.

I/O is billed at $0.10 per million I/Os per month. Amazon gives the example of a medium-sized web site that does 100 transactions per second; over a month, that works out to about $26.00. Not bad.
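The arithmetic behind those figures can be sketched in a few lines (a 30-day month is assumed; the 100GB allocation is an illustrative choice, not part of Amazon’s example):

```shell
# Back-of-the-envelope EBS bill: 100GB allocated plus 100 I/Os per
# second, priced at the rates quoted above ($0.10 per GB-month and
# $0.10 per million I/Os).
awk 'BEGIN {
    gb          = 100
    ios_per_sec = 100
    secs        = 30 * 24 * 3600                  # seconds in a 30-day month
    storage     = gb * 0.10
    io          = ios_per_sec * secs / 1e6 * 0.10
    printf "storage $%.2f, io $%.2f, total $%.2f\n", storage, io, storage + io
}'
# prints: storage $10.00, io $25.92, total $35.92
```

The $25.92 I/O charge is the “about $26.00 per month” quoted above.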


Amazon Web Services S3 – Part 3: Costs and SLA

April 6th, 2009

Simple Storage Service (S3)


Storage is cheaper in the US than in Europe. If you are based in Europe, you may want to decide which is more important when storing or retrieving data: price or latency.


                          US per GB     Europe per GB
First 50TB/Month
Next 50TB/Month
Next 400TB/Month
Over 500TB/Month

Table 3: S3 Storage Costs

Data Transfer             US per GB     Europe per GB
Transfer Into S3
First 10TB Out of S3
Next 40TB Out of S3
Next 100TB Out of S3
Out over 150TB

Table 4: S3 Data Transfer Costs


US per 10000 Requests

Europe per 10000 Request

Put, Copy, List, Post



Delete (always free)



Get and all other requests



Table 5: S3 Request Costs

These prices are accurate as of the time of writing. As always, verify before making a decision.


Amazon warrants 99.9% uptime on a monthly basis, which is a significant uptime percentage. If S3 does not meet the uptime guarantee, Amazon will credit your account for the month of the service interruption. If the uptime percentage is between 99% and 99.9%, you get a 10% credit; if uptime is less than 99%, the credit is 25%.
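That credit schedule can be captured as a small shell function, a sketch of the tiering described above:

```shell
# The S3 SLA credit schedule as a function: pass a monthly uptime
# percentage, get back the service-credit percentage.
credit() {
    awk -v up="$1" 'BEGIN {
        if      (up < 99.0) print 25    # below 99%: 25% credit
        else if (up < 99.9) print 10    # between 99% and 99.9%: 10% credit
        else                print 0     # SLA met: no credit
    }'
}
credit 99.95    # prints 0
credit 99.5     # prints 10
credit 98.0     # prints 25
```

Note how small the window is: a month is roughly 43,200 minutes, so 99.9% uptime allows only about 43 minutes of downtime before credits kick in.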

You can read the SLA in detail at


Amazon Web Services S3 – Part 2: Security

March 15th, 2009

Simple Storage Service (S3)


Write and delete access to buckets and objects is controlled via Access Control Lists (ACLs). You can grant read permissions on any object to specific users, or make an object public to grant access to anyone.

Transfer into and out of S3 can utilize SSL, which will encrypt data in transit. This prevents any “over the wire” interception of your data. Data at rest is not encrypted, and Amazon recommends that users encrypt any sensitive data with their encryption tool of choice before uploading it to S3.
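A sketch of that encrypt-before-upload step, using openssl as one widely available option (any encryption tool works; the file names and passphrase are placeholders, and passing the passphrase with -k is simplified for the demo; avoid it in real use, since it is visible in the process list):

```shell
# Encrypt a file locally, as you would before pushing sensitive data
# to S3; only the .enc file would be uploaded.
echo 'sensitive payload' > secrets.txt
openssl enc -aes-256-cbc -pbkdf2 -salt \
    -in secrets.txt -out secrets.txt.enc -k 'demo-passphrase'

# Round-trip to show the data survives decryption:
openssl enc -aes-256-cbc -pbkdf2 -d \
    -in secrets.txt.enc -k 'demo-passphrase'
```

The -pbkdf2 option (OpenSSL 1.1.1 and later) derives the encryption key from the passphrase with a proper key-derivation function rather than the older, weaker default.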

When you remove an object or bucket, public access (i.e. from the internet) is removed immediately, and the space is then made available for reuse.
