Working with Amazon Route53 and DNScurl

As of this writing, Amazon AWS does not provide console access to Route53 administration. DNS management is done through REST API calls and a Perl helper script, dnscurl.pl.

What you need:
  • DNScurl
  • AWS Access Key ID and Secret Access Key
1. Download and extract DNScurl.

2. Set up the credentials file. Create a file named .aws-secrets and set its permissions to 600 (chmod 600 .aws-secrets).

%awsSecretAccessKeys = (
    "Account1" => {
        id  => "xxxxxxxxxxxxxxxxxxxx",
        key => "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
    },
);

I should mention that every request to the AWS Route53 API must be submitted as an XML document. That is, to add DNS records, you create an XML file describing the new records. Now for the actual DNS work.

Creating a hostedzone

Creating a hostedzone is roughly the equivalent of creating an SOA entry in traditional DNS. We first create init.xml:
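Here is a sketch of what init.xml might contain, using the 2010-10-01 API namespace; the domain name and CallerReference values are placeholders:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<CreateHostedZoneRequest xmlns="https://route53.amazonaws.com/doc/2010-10-01/">
    <!-- Name must be fully qualified (note the trailing dot) -->
    <Name>example.com.</Name>
    <!-- CallerReference is any string that is unique per request;
         it lets Route53 detect accidental resubmissions -->
    <CallerReference>init-example-com-2011-01-01</CallerReference>
    <HostedZoneConfig>
        <Comment>Zone for example.com</Comment>
    </HostedZoneConfig>
</CreateHostedZoneRequest>
```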

Then we submit the request like so:

$ ./dnscurl.pl --keyname Account1 -- -H "Content-Type: text/xml; charset=UTF-8" -X POST --upload-file init.xml https://route53.amazonaws.com/2010-10-01/hostedzone

Output should be something like:
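The exact values vary per account; the response below is a representative sketch with made-up IDs and name servers:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<CreateHostedZoneResponse xmlns="https://route53.amazonaws.com/doc/2010-10-01/">
    <HostedZone>
        <Id>/hostedzone/Z1PA6795UKMFR9</Id>
        <Name>example.com.</Name>
        <CallerReference>init-example-com-2011-01-01</CallerReference>
        <Config>
            <Comment>Zone for example.com</Comment>
        </Config>
    </HostedZone>
    <ChangeInfo>
        <Id>/change/C2682N5HXP0BZ4</Id>
        <Status>PENDING</Status>
        <SubmittedAt>2011-01-01T01:00:00.000Z</SubmittedAt>
    </ChangeInfo>
    <DelegationSet>
        <NameServers>
            <NameServer>ns-100.awsdns-01.com</NameServer>
            <NameServer>ns-200.awsdns-02.net</NameServer>
            <NameServer>ns-300.awsdns-03.org</NameServer>
            <NameServer>ns-400.awsdns-04.co.uk</NameServer>
        </NameServers>
    </DelegationSet>
</CreateHostedZoneResponse>
```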
This may seem overwhelming but it's actually simple. Each hostedzone gets a unique ID, shown in the Id tag. The ID inside the ChangeInfo tag can be used to check whether the requested transaction has completed; we will discuss checking the status of requests a bit later. The name servers inside the NameServers tag are what you will use to transfer control from your current DNS hosting provider to AWS Route53.

Checking the status of changes

To check the status of a change, submit a GET request using the change ID from the ChangeInfo tag. The same procedure applies to changes made when adding DNS records.
$ ./dnscurl.pl --keyname Account1 -- -H "Content-Type: text/xml; charset=UTF-8" -X GET https://route53.amazonaws.com/2010-10-01/change/<change id>

Notice the value in the Status tag. INSYNC means the changes have been made and the records are synchronized among Route53's DNS servers.
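For reference, a response for a completed change might look like this (the ID and timestamp below are placeholders):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<GetChangeResponse xmlns="https://route53.amazonaws.com/doc/2010-10-01/">
    <ChangeInfo>
        <Id>/change/C2682N5HXP0BZ4</Id>
        <Status>INSYNC</Status>
        <SubmittedAt>2011-01-01T01:00:00.000Z</SubmittedAt>
    </ChangeInfo>
</GetChangeResponse>
```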

Adding DNS entries

Now that the hostedzone/SOA is set up, it is time to create the DNS entries. Here is a sample XML that covers most types of DNS records for a regular domain:
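The listing below is a representative sketch using the 2010-10-01 ChangeResourceRecordSetsRequest format; the domain, IP address, and mail host values are placeholders:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<ChangeResourceRecordSetsRequest xmlns="https://route53.amazonaws.com/doc/2010-10-01/">
    <ChangeBatch>
        <Comment>Initial records for example.com</Comment>
        <Changes>
            <!-- A record for the bare domain -->
            <Change>
                <Action>CREATE</Action>
                <ResourceRecordSet>
                    <Name>example.com.</Name>
                    <Type>A</Type>
                    <TTL>300</TTL>
                    <ResourceRecords>
                        <ResourceRecord>
                            <Value></Value>
                        </ResourceRecord>
                    </ResourceRecords>
                </ResourceRecordSet>
            </Change>
            <!-- CNAME for www -->
            <Change>
                <Action>CREATE</Action>
                <ResourceRecordSet>
                    <Name>www.example.com.</Name>
                    <Type>CNAME</Type>
                    <TTL>300</TTL>
                    <ResourceRecords>
                        <ResourceRecord>
                            <Value>example.com.</Value>
                        </ResourceRecord>
                    </ResourceRecords>
                </ResourceRecordSet>
            </Change>
            <!-- MX record; the value is "priority host" -->
            <Change>
                <Action>CREATE</Action>
                <ResourceRecordSet>
                    <Name>example.com.</Name>
                    <Type>MX</Type>
                    <TTL>300</TTL>
                    <ResourceRecords>
                        <ResourceRecord>
                            <Value>10 mail.example.com.</Value>
                        </ResourceRecord>
                    </ResourceRecords>
                </ResourceRecordSet>
            </Change>
        </Changes>
    </ChangeBatch>
</ChangeResourceRecordSetsRequest>
```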

Save the file as records.xml

Send the request:
$ ./dnscurl.pl --keyname Account1 -- -H "Content-Type: text/xml; charset=UTF-8" -X POST --upload-file records.xml https://route53.amazonaws.com/2010-10-01/hostedzone/<zone id>/rrset

Quick GlusterFS Server Setup

# Desired behavior:
# Two servers act as GlusterFS servers and replicate to each
# other (strictly speaking, it's the client that writes to both
# servers). One client (e.g. a web server) reads from and
# writes to the replicated volume.

# Server Config for server1 and server2

volume posix
type storage/posix
option directory /home/export
end-volume

volume locks
type features/locks
subvolumes posix
end-volume

volume brick
type performance/io-threads
option thread-count 8
subvolumes locks
end-volume

volume server
type protocol/server
option transport-type tcp
option transport.socket.nodelay on
option auth.addr.brick.allow *
subvolumes brick
end-volume

# Client Config
# cat /etc/glusterfs/glusterfs.vol

volume remote1
type protocol/client
option transport-type tcp
option transport.socket.nodelay on
option remote-host <server1 IP or hostname>
option remote-subvolume brick
end-volume

volume remote2
type protocol/client
option transport-type tcp
option transport.socket.nodelay on
option remote-host <server2 IP or hostname>
option remote-subvolume brick
end-volume

volume replicate
type cluster/replicate
subvolumes remote1 remote2
end-volume

volume writebehind
type performance/write-behind
option window-size 1MB
subvolumes replicate
end-volume

volume cache
type performance/io-cache
option cache-size 512MB
subvolumes writebehind
end-volume

# cat /etc/fstab
/etc/glusterfs/glusterfs.vol  /media/gluster  glusterfs  defaults  0  0
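To tie the pieces together, the rough sequence of commands would be something like the following. This assumes GlusterFS is already installed and uses the paths from the configs above; the server volfile path /etc/glusterfs/glusterfsd.vol is an assumption:

```shell
# On server1 and server2: create the export directory and start
# the GlusterFS server daemon with the server volfile.
mkdir -p /home/export
glusterfsd -f /etc/glusterfs/glusterfsd.vol

# On the client: create the mount point and mount using the
# client volfile (or simply `mount /media/gluster` via fstab).
mkdir -p /media/gluster
mount -t glusterfs /etc/glusterfs/glusterfs.vol /media/gluster
```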

Working with Amazon EC2 API-tools

It's actually quite simple. You just need to know some basic concepts and the rest is common sense.

The concepts:

Regions and Availability Zones (from the Amazon EC2 documentation):

Amazon EC2 provides the ability to place instances in multiple locations. Amazon EC2 locations are composed of Regions and Availability Zones. Availability Zones are distinct locations that are engineered to be insulated from failures in other Availability Zones and provide inexpensive, low latency network connectivity to other Availability Zones in the same Region. By launching instances in separate Availability Zones, you can protect your applications from failure of a single location. Regions consist of one or more Availability Zones, are geographically dispersed, and will be in separate geographic areas or countries. The Amazon EC2 Service Level Agreement commitment is 99.95% availability for each Amazon EC2 Region. Amazon EC2 is currently available in five regions: US East (Northern Virginia), US West (Northern California), EU (Ireland), Asia Pacific (Singapore), and Asia Pacific (Tokyo).


In simple terms, an Instance is a virtual server running on top of a cloud provider, in our case Amazon AWS. Amazon offers several instance types, differing in memory, CPU, and storage.

Depending on application requirements, you may choose to run the smallest instance or a big one.

What you need:
  • Linux Workstation (mine is Ubuntu 10.10)
  • Amazon EC2 API Tools (download from the AWS developer tools site)

So why work from command-line?
+ It's faster to work from CLI
+ Some of the AWS/EC2 features are only available from the API tools

Part 1: API Tools Installation

1. Download API Tools from the link above to your home directory (eg. /home/juan)
2. Extract the downloaded file
unzip ec2-api-tools.zip -d ec2

This will create an ec2 folder containing the api files.

3. Make sure you have a Java JRE installed:
apt-get install sun-java6-jre

4. Set up environment variables. Add the following to your ~/.bashrc -- at the bottom of the file is fine.
export EC2_HOME=/home/juan/ec2
export PATH=$PATH:$EC2_HOME/bin
export JAVA_HOME=/usr
export EC2_PRIVATE_KEY=/home/juan/pk-xxxxxx.pem
export EC2_CERT=/home/juan/cert-yyyyyy.pem
export EC2_URL=https://ec2.us-east-1.amazonaws.com
EC2_HOME is where you extracted the API files
EC2_PRIVATE_KEY is the private key file from AWS Console -> Account -> Security Credentials -> Access Credentials
EC2_CERT is from the same location. You need the Private Key and Certificate for the API to communicate with AWS
EC2_URL depends on where you will be deploying your instances

Here is a list of possible EC2_URL values:

https://ec2.us-east-1.amazonaws.com (US East, Northern Virginia)
https://ec2.us-west-1.amazonaws.com (US West, Northern California)
https://ec2.eu-west-1.amazonaws.com (EU, Ireland)
https://ec2.ap-southeast-1.amazonaws.com (Asia Pacific, Singapore)
https://ec2.ap-northeast-1.amazonaws.com (Asia Pacific, Tokyo)

5. Test.
juan@the1:~/ec2$ ec2-describe-regions
REGION eu-west-1
REGION us-east-1
REGION ap-northeast-1
REGION us-west-1
REGION ap-southeast-1
If you see output similar to the above, you are now in business.

Part 2: Working with CLI

# List Regions and Availability Zones
ec2-describe-regions
ec2-describe-availability-zones

# Create Security Group / Add Rules to Security Group
ec2-create-group <GroupName> -d "Web Servers"
ec2-authorize <GroupName> -P tcp -p 80 -s
ec2-authorize <GroupName> -P tcp -p 3306 -o <GroupName>

# List Groups
ec2-describe-group <GroupName>

# Remove Rule / Delete Group
ec2-revoke <GroupName> -P tcp -p 80 -s
ec2-revoke <GroupName> -P tcp -p 3306 -o <GroupName>
ec2-delete-group <GroupName>

# Key-Pairs
ec2-create-keypair <key-pair name>
ec2-delete-keypair <key-pair name>

# Create a keypair locally on Linux
ssh-keygen -b 2048 -t rsa -f <key-pair name>

# Import keys (if you want to use your own keys to log in to your instances)
ec2-import-keypair <key-pair name> --public-key-file .ssh/<key-pair name>.pub

# Run instance
ec2-run-instances <ami-id> -n <count> -g <security group> -k <key-pair name> -t <instance type> --availability-zone <av-zone> --instance-initiated-shutdown-behavior stop
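For example, launching a single small instance might look like this; the AMI ID, group, and key name below are placeholder values:

```shell
ec2-run-instances ami-xxxxxxxx -n 1 -g web -k mykey -t m1.small \
    --availability-zone us-east-1a \
    --instance-initiated-shutdown-behavior stop
```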

Other switches:

-f user data
-b block device mapping

# Console
ec2-get-console-output  <instance id>

# List instances, see above list for EC2_URL
ec2-describe-instances

# Elastic IP
ec2-allocate-address
ec2-associate-address <ip address> -i <instance id>
ec2-disassociate-address <ip address>
ec2-release-address <ip address>

# Terminate instance
ec2-terminate-instances <instance id>

# Start / Stop instance (for EBS-backed instances)
ec2-start-instances <instance id>
ec2-stop-instances <instance id>

# Reboot instance
ec2-reboot-instances <instance id> 

# EBS Volumes
ec2-create-volume  --size <size-GB> --availability-zone <av-zone>
ec2-attach-volume <vol-id> -i <instance id> -d /dev/xvdf
ec2-detach-volume <vol-id>
ec2-delete-volume <vol-id>
ec2-create-snapshot <vol-id> -d "Description"

Installing Perl Modules, the easy way

Here's a quick way to install Perl modules. The module I'm trying to install is Cache::Cache.

# yum -y install perl-CPAN
# perl -MCPAN -e 'shell'

Then you will be prompted:

Would you like me to configure as much as possible automatically? [yes] yes

cpan> o conf urllist push ""
cpan> reload index
cpan> o conf commit
cpan> look Cache::Cache

Then you will be dropped to a shell that looks something like this:

[root@ip-10-128-93-159 Cache-Cache-1.06-GkW_ap]#

Then do:

# perl Makefile.PL
# make
# make install

Type exit to go back to the cpan prompt, and exit again to return to the command prompt.

My test environment was an Amazon EC2 AMI, but the procedure should work on any Red Hat or CentOS distro. I only recommend this procedure when the Perl module involved is not available in the default repo, or when installing from the repo would cause package conflicts. Otherwise, use yum to install the module.
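As an aside, when you don't need to poke around inside the build directory with look, cpan can download, build, test, and install a module in a single step:

```shell
# Non-interactive install of Cache::Cache via CPAN
perl -MCPAN -e 'install Cache::Cache'
```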