Amazon VPC IPSec + BGP with ipsec-tools

This is my first time using GitHub's gists to embed code here. This little weekend project will connect your Ubuntu 12.04 server to Amazon VPC through IPSec.

Usage:

./vpcstart.sh [amazon-generic-config-file.txt]

Where the parameter is the "Generic" and "Vendor Agnostic" config file downloaded from the Amazon Console. You also need to change REMOTE_NET and WAN_INT variables to suit your needs.
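
For example (hypothetical values, not taken from the Amazon config file): if the VPC-side network is 10.0.0.0/16 and your public IP sits on eth0, you would edit the variables at the top of vpcstart.sh like so:

# Hypothetical values -- adjust to your own setup
REMOTE_NET="10.0.0.0/16"   # network on the VPC side of the tunnel
WAN_INT="eth0"             # interface holding your public, static IP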

You do need the following for IPSec to work:

  • Public and static IP Address
  • Open ports for UDP 500, protocol AH, protocol ESP, and TCP 179 for BGP

This script has been tested with Ubuntu 12.04.

EC2 Autoscaling Dynamic De/Registration for HAProxy

This is a work-in-progress script I whipped up over the weekend. This assumes the following architecture:

  • For a 3-tier architecture
  • Autoscaling Group may or may not span multiple AZs
  • Each AZ will have a software loadbalancer
  • SNS and SQS are utilized for notification of scaling activities (launching and terminating of instances)
  • The script runs through cron and must be installed on the loadbalancer sitting between the Web and App tiers (see the setup sketch after this list)
  • Uses the string "# Begin" to mark where new HAProxy configs will go, so do an "echo '# Begin' >> /etc/haproxy/haproxy.cfg" before running this script
  • A step-by-step procedure on how to set up autoscaling is documented here.
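
A minimal one-time setup on each loadbalancer might look like this (the script path and cron schedule are assumptions, not part of the script itself):

# Mark where dynamically registered servers will be appended
echo '# Begin' >> /etc/haproxy/haproxy.cfg
# Run the registration script periodically (hypothetical path and schedule)
echo '* * * * * root /usr/local/bin/haproxy-autoscale.pl' >> /etc/crontab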

#!/usr/bin/perl
# EC2 Autoscaling dynamic registration script for HAProxy
# Requirements: SNS topic, SQS subscription
# Notes: Supposed to be run as a cronjob
# John Homer H Alvero
# April 29, 2013
#
# Install pre-reqs by
# yum install perl-Amazon-SQS-Simple perl-Net-Amazon-EC2 --enablerepo=epel
use Amazon::SQS::Simple;
use Net::Amazon::EC2;
 
my $access_key   = '';
my $secret_key   = '';
my $queue_endpoint  = 'https://sqs.us-east-1.amazonaws.com/123456789012/yourqueue';
my $haproxy_file  = '/etc/haproxy/haproxy.cfg';
my $my_az  = `wget -qO- http://169.254.169.254/latest/meta-data/placement/availability-zone`;
 
# Create an SQS object
my $sqs = new Amazon::SQS::Simple($access_key, $secret_key);
 
# Connect to an existing queue
my $q = $sqs->GetQueue($queue_endpoint);
 
my $ec2 = Net::Amazon::EC2->new(AWSAccessKeyId => $access_key, SecretAccessKey => $secret_key);
 
# Retrieve a message
while (my $msg = $q->ReceiveMessage()) {
 $sqs_msg = $msg->MessageBody();
 
 # parse message, get instance id
 ($action, $instance_id) = ($1, $2) if $sqs_msg =~ /(Terminating|Launching).+EC2InstanceId\\\"\:\\\"(i-.{8})/;
 
 # do action
 my $running_instances = $ec2->describe_instances(InstanceId => $instance_id);
 
 foreach my $reservation (@$running_instances) {
  foreach my $instance ($reservation->instances_set) {
   $pdns_name = $instance->private_dns_name;
   $instance_az = $reservation->instances_set->[0]->placement->availability_zone;
  }
 }
 
 if ($my_az eq $instance_az) { 
         if ($action eq "Launching") {
                 print "adding instance id $instance_id $pdns_name\n";
 
   # Get last app number
   $lastapp = `grep '\# Begin' $haproxy_file -A1000 | grep server | sort -k1 | cut -f6 -d' ' | tail -1`;
   chomp($lastapp);
   $lastapp = "app000" if $lastapp eq "";
   $lastapp++;
 
   # Update haproxy config file
   system("/bin/echo \"    server $lastapp $pdns_name:80 check # $instance_id\" >> $haproxy_file | service haproxy reload");
         } elsif ($action eq "Terminating") {
                 print "removing instance id $instance_id\n";
   system("sed -i \"/$instance_id/d\" $haproxy_file" | service haproxy reload);
         } else {
                 die("unhandled exception. exiting.\n");
         }
 
  # delete from queue
         $q->DeleteMessage($msg->ReceiptHandle());
 
 } else {
  print "$instance_id $instance_az does not belong to this AZ.\n";
 }
 
 # unset variables
 $instance_id = "";
 $launch_id = "";
 $action = "";
 $pdns_name = "";
 $instance_az = "";
}

nginx + python + uwsgi + django + virtualenv + virtualenvwrapper

For Systems Engineers coming from the PHP world, installing and configuring the software stack needed to run Python + Django applications can be a daunting task, especially when dealing with multiple Python versions, or when the operating system's Python version is not compatible with what the web application requires. Here's how I did it with Ubuntu:

  1. Set locale and timezone
    locale-gen en_US.UTF-8
    echo "Asia/Singapore" >  /etc/timezone
    dpkg-reconfigure --frontend noninteractive tzdata
  2. Update packages
    apt-get update && apt-get upgrade -y
  3. Limits
    ulimit -n 20000
    echo 'fs.file-max = 200000' >> /etc/sysctl.d/20_nginx.conf
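
The ulimit call above only raises the limit for the current shell. A sketch for making the open-file limit persistent (assuming pam_limits is enabled; values mirror step 3) and loading the sysctl setting without a reboot:

cat <<'EOF' >> /etc/security/limits.conf
*    soft    nofile    20000
*    hard    nofile    20000
EOF
sysctl -p /etc/sysctl.d/20_nginx.conf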

Migrate a Xen Instance to VMware

This procedure describes the process of migrating an instance/physical server to another instance/physical server. There are a number of ways to do the same thing (e.g., vCenter Converter), but contrary to the post title, if you want a generic and hypervisor-agnostic way, this is it!

The whole idea is to copy (through SSH) from the source instance to the destination instance (the one you are currently logged on to in rescue mode). Before doing the actual copy through SSH, the disk you are copying into should have been partitioned and formatted as described in the process.

My test environment is the following. (Note: the source DomU boots through PyGrub, so if you're not using PyGrub, you may have to install a kernel before you reboot.)

  • Source Hypervisor: Xen 4.x
  • Destination Hypervisor: VMware ESXI 5.1
  • Source OS: CentOS 6.3 with LVM root partition, ext4
  • Destination OS: Same, no LVM, ext4
  1. On the destination hypervisor, create a VM with the desired CPU/memory setup. Make sure to add a CDROM device with an ISO file source (I used the CentOS netinstall ISO). Also, make sure that the VM boots from this CDROM
  2. Boot to rescue mode. Enable networking. Don't mount the root partition (we'll do that in the next step)
  3. Partition, format and mount the target disk to /mnt/sysimage. During partitioning, don't forget to tag the partition with a bootable flag
    fdisk /dev/sda
    mkfs.ext4 /dev/sda1
    mkdir /mnt/sysimage && mount /dev/sda1 /mnt/sysimage
  4. Copy the files (OS) from the source instance to the target disk. This is where all the magic happens
    cd /mnt/sysimage
     
    ssh root@source_instance 'cd /; tar -zcvf - --exclude=dev --exclude=proc --exclude=media --exclude=mnt --exclude=sys *' | tar -zxvf -
  5. Chroot to the target root folder and mount dev, proc and sys folders
    mkdir dev proc sys
    mount -o bind /dev dev
    chroot .
    mount -t proc none /proc
    mount -t sysfs none /sys
  6. In order for the new system to run, modify the following files according to the new hardware configuration (see the sketch at the end of this section)
    /etc/fstab
    /boot/grub/grub.conf
    /boot/grub/device.map
  7. Populate /etc/mtab
    grep -v rootfs /proc/mounts > /etc/mtab
  8. Install grub
    grub-install /dev/sda
  9. Reboot!
After the reboot, you might want to do the following:
  1. Make sure you set a different IP in /etc/sysconfig/network-scripts/ifcfg-eth0
  2. Delete /etc/udev/rules.d/70-persistent-net.rules. This will be recreated at boot time
  3. Update the system to receive the latest kernel
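
As a rough illustration of step 6 (device names and kernel version are hypothetical; the source used a Xen /dev/xvda device with an LVM root, the destination a plain /dev/sda1):

# /etc/fstab -- replace the old LVM/xvda root entry with the new partition:
#   /dev/sda1   /   ext4   defaults   1 1
# /boot/grub/grub.conf -- point the kernel line's root= at the new device:
#   kernel /boot/vmlinuz-2.6.32-279.el6.x86_64 ro root=/dev/sda1
# /boot/grub/device.map -- map the first BIOS disk to the new disk:
#   (hd0)   /dev/sda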

CLI Cloudfront Invalidation


Multi-object CloudFront invalidation can already be done inside the AWS Console. But if you are like me and wish to use the CLI, here's how it's done.

  1. Download cfcurl.pl
    mkdir -p ~/aws/invalidation && cd !$
    wget http://d1nqj4pxyrfw2.cloudfront.net/cfcurl.pl
    chmod +x cfcurl.pl
    
  2. Set up the credential file. cfcurl by default will look up the .aws-secrets file in cfcurl.pl's current directory, but you can also put the same file in your home directory
    cat << 'EOF' > .aws-secrets
    %awsSecretAccessKeys = (
        # primary account
        'primary' => {
            id => 'Your Access Key ID', 
            key => 'Your Secret Access Key',
        },
    );
    EOF
    chmod 600 .aws-secrets
    
  3. Make the invalidation request file. This should be valid XML
    cat << 'EOF' > files.xml
    <?xml version="1.0" encoding="UTF-8"?>
    <InvalidationBatch>
    <CallerReference>201210162</CallerReference>
    <Path>/path/to/file1.jpg</Path>
    <Path>/path/to/file2.jpg</Path>
    <Path>/path/to/file3.jpg</Path>
    <Path>/path/to/file4.jpg</Path>
    </InvalidationBatch>
    EOF
    
    The example above assumes that your CloudFront URL looks similar to http://d46bt2172abc1.cloudfront.net/path/to/file1.jpg

    Also note that the CallerReference should be a unique value per request. The current Unix timestamp should do the trick (see the one-liner after these steps).

  4. Submit the request. Make sure that you substitute [distribution ID] with your own. You can get this information from the AWS Console
    ./cfcurl.pl --keyname primary -- -X POST -H "Content-Type: text/xml; charset=UTF-8" --upload-file files.xml https://cloudfront.amazonaws.com/2012-07-01/distribution/[distribution ID]/invalidation
    
    The XML output should indicate an "InProgress" status. Otherwise, check the credential file and the invalidation request file. Upon successful submission, the actual invalidation may take 5 to 15 minutes. You may check the status of each invalidation request by following the next step.

  5. Check invalidation status. Substitute [distribution ID] and [request ID]. You can get the [request ID] from the invalidation request output in the previous step (the Id element in the XML response)
    ./cfcurl.pl --keyname primary -- -X GET https://cloudfront.amazonaws.com/2012-07-01/distribution/[distribution ID]/invalidation/[request ID]
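
Since the CallerReference has to change for every submission, a quick one-liner (hypothetical, using sed and the current Unix timestamp) to refresh it in files.xml before step 4:

sed -i "s|<CallerReference>.*</CallerReference>|<CallerReference>$(date +%s)</CallerReference>|" files.xml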
    


Invalidation Request Generator


If you have a local copy of the static files directory tree and if you want to invalidate a lot of files, you can use the following script to generate files.xml. Note that the current limit for invalidation is 3 concurrent requests and 1000 files per request.
#!/usr/bin/perl

use File::Basename;
use File::Find;
use POSIX qw(strftime);

$usage = <<"";
Usage:
$0 srcdir
srcdir is the local static files directory on this machine

$SRCDIR = shift or die $usage;
$CALLER_REF = strftime("%Y%m%d%H%M%S", localtime);

($fname, $abs_srcdir) = fileparse("$SRCDIR/");
chop $abs_srcdir;

sub handler {
my $path = shift;
$path =~ s/\ /%20/g;
print <<TOUT;
<Path>/${path}</Path>
TOUT
}

# Generate output
print <<TOUT;
<?xml version="1.0" encoding="UTF-8"?>
<InvalidationBatch>
<CallerReference>${CALLER_REF}</CallerReference>
TOUT

find(\&wanted, $abs_srcdir);

print <<TOUT;
</InvalidationBatch>
TOUT

sub wanted { # Reject non-files, and anything in .git or fonts dirs
 $_ = $File::Find::name;
 -f or return;
 s/^${abs_srcdir}\///;
 /^\.git\// and return;
 /^fonts\// and return;
 /\.svn\// and return;
 /\.DS_Store$/ and return;
 /^rename\.py$/ and return;
 handler($_);
}
Usage:

./purge_request.pl /folder/to/static/files > files.xml

Redmine 2.0 + Apache

Redmine is a flexible project management web application. Written using the Ruby on Rails framework, it is cross-platform and cross-database.
OS: CentOS 6.2

  • Install packages
    yum install ruby-devel gcc-c++ openssl-devel httpd httpd-devel mysql-server  mysql-devel make ruby-rdoc libcurl-devel rubygem-rake ImageMagick ImageMagick-devel wget
  • Rubygems
    cd /tmp
    wget http://production.cf.rubygems.org/rubygems/rubygems-1.8.24.tgz
    tar xvfz rubygems-1.8.24.tgz
    cd rubygems-1.8.24
    ruby setup.rb
  • Passenger
    gem install passenger
    passenger-install-apache2-module
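
The passenger-install-apache2-module command prints the exact LoadModule/PassengerRoot lines for your gem version; as a sketch (paths, versions and hostnames below are hypothetical), the resulting Apache config would look something like this:

cat <<'EOF' > /etc/httpd/conf.d/passenger.conf
LoadModule passenger_module /usr/lib/ruby/gems/1.8/gems/passenger-3.0.19/ext/apache2/mod_passenger.so
PassengerRoot /usr/lib/ruby/gems/1.8/gems/passenger-3.0.19
PassengerRuby /usr/bin/ruby

<VirtualHost *:80>
    ServerName redmine.example.com
    # Point DocumentRoot at Redmine's public/ directory
    DocumentRoot /var/www/redmine/public
    <Directory /var/www/redmine/public>
        AllowOverride all
        Options -MultiViews
    </Directory>
</VirtualHost>
EOF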

HPCloud CDN traceroutes

HPCloud CDN trace:

# From hpcloud instance

 1:  10.4.66.156       0.194ms pmtu 1500
 1:  10.4.0.1          0.272ms 
 1:  10.4.0.1          0.200ms 
 2:  no reply
 3:  no reply
 4:  67.134.135.133    1.569ms asymm  5 
 5:  4.69.133.110      6.068ms 
 6:  67.131.38.50     19.235ms reached
     Resume: pmtu 1500 hops 6 back 59 
     
# From Singtel (Singapore)

 1:  enigma.local                                          0.138ms pmtu 1500
 1:  10.10.33.1                                            1.064ms asymm  2 
 1:  10.10.33.1                                            0.943ms asymm  2 
 2:  [fake-ip]                                            3.412ms asymm  3 
 3:  [fake-ip]                                             4.352ms asymm  4 
 4:  [fake-ip]                                             3.848ms asymm  5 
 5:  165.21.12.4                                           4.470ms asymm  6 
 6:  203.208.190.21                                        4.459ms asymm  7 
 7:  so-6-0-0-0.sngtp-ar6.ix.singtel.com                   4.409ms asymm  8 
 8:  ge-7-0-0-0.sngtp-dr1.ix.singtel.com                   5.117ms asymm  9 
 9:  203.208.171.194                                      54.528ms asymm 10 
10:  no reply
11:  no reply
12:  58.26.1.18                                           54.925ms reached
     Resume: pmtu 1500 hops 12 back 54 
     
# From AWS US-West

 1:  ip-10-168-107-78.us-west-1.compute.internal (10.168.107.78)   0.145ms pmtu 1500
 1:  ip-10-168-104-2.us-west-1.compute.internal (10.168.104.2)   0.481ms asymm  2 
 1:  ip-10-168-104-2.us-west-1.compute.internal (10.168.104.2)   0.782ms asymm  2 
 2:  ip-10-1-4-9.us-west-1.compute.internal (10.1.4.9)      0.522ms 
 3:  [fake-ip]                                              0.626ms 
 4:  [fake-ip]                                              1.522ms 
 5:  205.251.229.9 (205.251.229.9)                          1.407ms 
 6:  205.251.229.9 (205.251.229.9)                          1.416ms asymm  5 
 7:  no reply
 8:  a173-223-232-139.deploy.akamaitechnologies.com (173.223.232.139)   2.316ms reached
     Resume: pmtu 1500 hops 8 back 57 
     
# From Destiny Internet (Philippines)
 
 1:  192.168.10.30                                         0.140ms pmtu 1400
 1:  [fake-ip]                                            64.397ms 
 1:  [fake-ip]                                            64.493ms 
 2:  no reply
 3:  202.8.255.98                                         72.669ms 
 4:  sun2.mydestiny.net                                   90.557ms 
 5:  202.8.224.193                                        73.428ms 
 6:  202.8.224.201                                       278.546ms 
 7:  202.69.176.89                                        72.339ms 
 8:  202.69.190.86                                        85.670ms 
 9:  ge-4-0-1.GW2.LAX1.ALTER.NET                         250.366ms asymm 15 
10:  0.xe-2-0-0.XL3.LAX1.ALTER.NET                       248.123ms asymm 15 
11:  0.xe-3-1-0.XL3.LAX15.ALTER.NET                      250.239ms asymm 15 
12:  TenGigE0-6-2-0.GW4.LAX15.ALTER.NET                  242.218ms asymm 15 
13:  akamai.customer.alter.net                           243.776ms asymm 15 
14:  a65.197.244.50.deploy.akamaitechnologies.com        256.854ms reached
     Resume: pmtu 1400 hops 14 back 51 
     
# From Sakura Internet (Japan)

 1:  [fake-ip]                                               0.087ms pmtu 1500
 1:  [fake-ip]                                              2.953ms 
 2:  osnrt201b-nrt205e-1.bb.sakura.ad.jp (59.106.254.45)    1.974ms 
 3:  osdrt1-nrt201b.bb.sakura.ad.jp (59.106.255.121)        1.945ms 
 4:  oskrt1-drt1.bb.sakura.ad.jp (59.106.255.82)            1.955ms 
 5:  124.211.15.21 (124.211.15.21)                          4.993ms 
 6:  obpjbb203.kddnet.ad.jp (118.155.199.29)              asymm  7   1.990ms 
 7:  otejbb204.kddnet.ad.jp (203.181.99.65)               asymm  8  16.006ms 
 8:  kotjbb202.kddnet.ad.jp (118.155.198.78)              asymm  9  77.956ms 
 9:  cm-kot202.kddnet.ad.jp (125.29.22.26)                 10.974ms 
10:  125.29.31.238 (125.29.31.238)                         13.948ms 
11:  118.155.230.33 (118.155.230.33)                       14.944ms reached
     Resume: pmtu 1500 hops 11 back 11 
     
# From Starhub (Singapore)

 1:  enigma.local                                          0.136ms pmtu 1500
 1:  192.168.0.1                                           0.559ms 
 1:  192.168.0.1                                           0.664ms 
 2:  [fake-ip]                                            12.575ms 
 3:  172.20.23.1                                          10.572ms 
 4:  172.26.23.1                                          13.969ms 
 5:  172.20.7.14                                          18.053ms 
 6:  203.117.35.25                                        19.471ms 
 7:  sjo-bb1-link.telia.net                              216.206ms asymm  8 
 8:  GigabitEthernet2-0-0.GW4.SJC7.ALTER.NET             210.400ms asymm 15 
 9:  0.so-3-2-0.XL4.SJC7.ALTER.NET                       211.969ms asymm 15 
10:  0.ge-2-0-0.XL4.LAX15.ALTER.NET                      203.205ms asymm 14 
11:  TenGigE0-7-1-0.GW4.LAX15.ALTER.NET                  206.149ms asymm 12 
12:  akamai.customer.alter.net                           203.584ms asymm 13 
13:  a65.197.244.50.deploy.akamaitechnologies.com        202.144ms reached
     Resume: pmtu 1500 hops 13 back 51 

Bootstrap Ubuntu 11.10 Instances with Cloud-config User-Data and Puppet

Assuming you or HPCloud have already worked out the DNS PTR issue and a Puppet master host is set up, you can use the following user-data to bootstrap an Ubuntu server instance:

#cloud-config
 
puppet:
 conf:
   agent:
     server: "PUPPET-MASTER-FQDN"
 
   puppetd:
     listen: true
 
   # /var/lib/puppet/ssl/ca/ca_crt.pem on the puppetmaster host.
   ca_cert: |
     -----BEGIN CERTIFICATE-----
     PUT-CERT-HERE
     -----END CERTIFICATE-----
 
runcmd:
  - |-
    echo "path /run
    method save
    allow PUPPET-MASTER-FQDN" >> /etc/puppet/auth.conf
  - |-
    echo "[puppetrunner]
    allow PUPPET-MASTER-FQDN" >> /etc/puppet/namespaceauth.conf

Note: Make sure that the cloud-config user-data is a valid YAML file; run it through a YAML validator to check.

To run a server instance with user-data, set up the euca2ools CLI, then run:
euca-run-instances ami-000015cb -t standard.xsmall -g Web -k hpcloud -f user-data.txt 

All you need to do afterwards is sign the certificate request at the puppet master host and do a puppetrun:
puppetca --list
puppetca --sign PUPPET-CLIENT-FQDN
puppetrun PUPPET-CLIENT-FQDN

If all goes well, you should be able to administer every aspect of an Ubuntu server instance from the puppet master host with manifest files.
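
For example, a trivial resource on the master (hypothetical manifest path and package) that gets applied on the next puppetrun:

cat <<'EOF' >> /etc/puppet/manifests/site.pp
# Hypothetical example: ensure ntp is installed on the new instance
node 'PUPPET-CLIENT-FQDN' {
  package { 'ntp': ensure => installed }
}
EOF
puppetrun PUPPET-CLIENT-FQDN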

HPCloud and euca2ools

Using euca2ools with HPCloud is simple. You just have to define environment variables containing your credentials and EC2_URL. Here's how:

export EC2_ACCESS_KEY=TENANT-ID:ACCESS-KEY-ID
export EC2_SECRET_KEY=SECRET-KEY
export EC2_URL=https://az-1.region-a.geo-1.ec2-compute.hpcloudsvc.com/services/Cloud
 
# Or, if you are in AZ-2
export EC2_URL=https://az-2.region-a.geo-1.ec2-compute.hpcloudsvc.com/services/Cloud
 
# Then test
euca-describe-regions

It is important to note that EC2_ACCESS_KEY is a combination of your Tenant ID and Access Key ID (separated by a colon), which can be found in your Account tab.

Working with GlusterFS Console

OS: CentOS 6.2 (simulation done in HPCloud instances)
Firewall: off
Gluster Setup: 2 replicas across 4 hosts (similar to RAID 10)

  • Install Gluster. To get the latest version, I used the packages from upstream
    yum install compat-readline5-devel -y
     
    rpm -Uvh http://download.gluster.com/pub/gluster/glusterfs/LATEST/RHEL/glusterfs-core-3.2.6-1.x86_64.rpm
    rpm -Uvh http://download.gluster.com/pub/gluster/glusterfs/LATEST/RHEL/glusterfs-fuse-3.2.6-1.x86_64.rpm
  • For configuration simplicity, let's add resolvable names for each server. Do this on all servers
    cat <<'EOF'>> /etc/hosts
    10.4.63.229 site1
    10.4.63.222 site2
    10.4.63.242 site3
    10.4.63.243 site4
    EOF
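  • From here, the usual next steps would be to probe the peers and create the replicated volume. A sketch under the setup above (volume name and brick paths are hypothetical)
    # Run on site1: add the other servers to the trusted pool
    gluster peer probe site2
    gluster peer probe site3
    gluster peer probe site4
     
    # Create and start a distributed-replicated volume (2 replicas across 4 bricks)
    gluster volume create gv0 replica 2 site1:/export/brick1 site2:/export/brick1 site3:/export/brick1 site4:/export/brick1
    gluster volume start gv0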