Migrate a Xen Instance to VMware

This procedure describes how to migrate an instance or physical server to another instance or physical server. There are other ways of doing this, e.g. vCenter Converter, but contrary to the post title, if you want a generic, hypervisor-agnostic method, this is it!

The whole idea is to copy the OS (through SSH) from the source instance to the destination instance (the one you are logged on to in rescue mode). Before doing the actual copy over SSH, the disk you are copying onto should already be partitioned and formatted, as described in the steps below.

My test environment is the following:

Note: the source DomU boots through PyGrub. If yours does not, you may have to install a kernel into the copied system before you reboot.

  • Source Hypervisor: Xen 4.x
  • Destination Hypervisor: VMware ESXI 5.1
  • Source OS: CentOS 6.3 with LVM root partition, ext4
  • Destination OS: Same, no LVM, ext4
  1. On the destination hypervisor, create a VM with the desired CPU/memory setup. Make sure to add a CDROM device with an ISO file source (I used the CentOS netinstall ISO). Also, make sure that the VM boots from this CDROM
  2. Boot to rescue mode. Enable networking. Don't mount the root partition (we'll do that in the next step)
  3. Partition, format and mount the target disk to /mnt/sysimage. During partitioning, don't forget to tag the partition with a bootable flag
    fdisk /dev/sda
    mkfs.ext4 /dev/sda1
    mkdir /mnt/sysimage && mount /dev/sda1 /mnt/sysimage
  4. Copy the files (OS) from the source instance to the target disk. This is where all the magic happens; an rsync alternative is sketched after the tar command below
    cd /mnt/sysimage
     
    ssh root@source_instance 'cd /; tar -zcvf - --exclude=dev --exclude=proc --exclude=media --exclude=mnt --exclude=sys *' | tar -zxvf -
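
    If you prefer rsync over tar, here is a sketch (assuming rsync is installed on both ends; root@source_instance is the same placeholder as above):

    rsync -aH --numeric-ids -e ssh --exclude={/dev,/proc,/sys,/mnt,/media} root@source_instance:/ .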
  5. Chroot to the target root folder and mount dev, proc and sys folders
    mkdir dev proc sys
    mount -o bind /dev dev
    chroot .
    mount -t proc none /proc
    mount -t sysfs none /sys
  6. In order for the new system to run, modify the following files according to the new hardware configuration (typical changes are sketched after the list)
    /etc/fstab
    /boot/grub/grub.conf
    /boot/grub/device.map
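    For example, a hypothetical before/after (assuming the source root was on LVM and the new disk is /dev/sda as partitioned above):

    # /etc/fstab: point the root filesystem at the new partition
    #   /dev/mapper/VolGroup-lv_root  /  ext4  defaults  1 1   ->   /dev/sda1  /  ext4  defaults  1 1

    # /boot/grub/grub.conf: fix root= on the kernel line, e.g.
    #   kernel /boot/vmlinuz-2.6.32-279.el6.x86_64 ro root=/dev/sda1

    # /boot/grub/device.map: map hd0 to the new disk
    #   (hd0) /dev/sda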
  7. Populate /etc/mtab
    grep -v rootfs /proc/mounts > /etc/mtab
  8. Install grub
    grub-install /dev/sda
  9. Reboot!
After the reboot, you might want to do the following (a combined sketch follows the list):
  1. Make sure you set a different IP in /etc/sysconfig/network-scripts/ifcfg-eth0
  2. Delete /etc/udev/rules.d/70-persistent-net.rules. This will be recreated at boot time
  3. Update the system to receive the latest kernel
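
A combined sketch of the three post-reboot tasks (assuming a CentOS 6 layout; the IP address is a placeholder, and the HWADDR line is dropped because the VMware NIC has a new MAC):

    sed -i -e 's/^IPADDR=.*/IPADDR=192.0.2.10/' -e '/^HWADDR=/d' /etc/sysconfig/network-scripts/ifcfg-eth0
    rm -f /etc/udev/rules.d/70-persistent-net.rules
    yum -y update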

CLI CloudFront Invalidation


Multi-object CloudFront invalidation can already be done from inside the AWS Console. But if you are like me and prefer to use the CLI, here's how it's done.

  1. Download cfcurl.pl
    mkdir -p ~/aws/invalidation && cd !$
    wget http://d1nqj4pxyrfw2.cloudfront.net/cfcurl.pl
    chmod +x cfcurl.pl
    
  2. Set up the credential file. By default, cfcurl will look for the .aws-secrets file in cfcurl.pl's current directory, but you can also put the same file in your home directory
    cat << 'EOF' > .aws-secrets
    %awsSecretAccessKeys = (
        # primary account
        'primary' => {
            id => 'Your Access Key ID', 
            key => 'Your Secret Access Key',
        },
    );
    EOF
    chmod 600 .aws-secrets
    
  3. Make the invalidation request file. This should be valid XML
    cat << 'EOF' > files.xml
    <?xml version="1.0" encoding="UTF-8"?>
    <InvalidationBatch>
    <CallerReference>201210162</CallerReference>
    <Path>/path/to/file1.jpg</Path>
    <Path>/path/to/file2.jpg</Path>
    <Path>/path/to/file3.jpg</Path>
    <Path>/path/to/file4.jpg</Path>
    </InvalidationBatch>
    EOF
    
    The example above assumes that your CloudFront URL looks similar to http://d46bt2172abc1.cloudfront.net/path/to/file1.jpg

    Also note that the CallerReference should be a unique value per request. The current Unix timestamp should do the trick; see the one-liner below.
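
    For example (hypothetical, rewriting the files.xml created above in place):

    sed -i "s|<CallerReference>.*</CallerReference>|<CallerReference>$(date +%s)</CallerReference>|" files.xml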

  4. Submit the request. Make sure that you substitute [distribution ID] with your own. You can get this information from the AWS Console
    ./cfcurl.pl --keyname primary -- -X POST -H "Content-Type: text/xml; charset=UTF-8" --upload-file files.xml https://cloudfront.amazonaws.com/2012-07-01/distribution/[distribution ID]/invalidation
    
    The XML output should indicate an "InProgress" status. Otherwise, check the credential file and the invalidation request file. Upon successful submission, the actual invalidation may take 5 to 15 minutes. You may check the status of each invalidation request by following the next step.

  5. Check the invalidation status. Substitute [distribution ID] and [request ID]. You can get the [request ID] from the invalidation request output in the previous step (the <Id> tag)
    ./cfcurl.pl --keyname primary -- -X GET https://cloudfront.amazonaws.com/2012-07-01/distribution/[distribution ID]/invalidation/[request ID]
    


Invalidation Request Generator


If you have a local copy of the static files directory tree and you want to invalidate a lot of files, you can use the following script to generate files.xml. Note that the current limit for invalidation is 3 concurrent requests and 1,000 files per request.
#!/usr/bin/perl

use File::Basename;
use File::Find;
use POSIX qw(strftime);

$usage = <<"";
Usage:
$0 srcdir
srcdir is the static files source directory on this machine

$SRCDIR = shift or die $usage;
$CALLER_REF = strftime("%Y%m%d%H%M%S", localtime);

($fname, $abs_srcdir) = fileparse("$SRCDIR/");
chop $abs_srcdir;

# Print one <Path> element, URL-encoding spaces in the path
sub handler {
my $path = shift;
$path =~ s/ /%20/g;
print "<Path>/${path}</Path>\n";
}

# Generate output
print <<TOUT;
<?xml version="1.0" encoding="UTF-8"?>
<InvalidationBatch>
<CallerReference>${CALLER_REF}</CallerReference>
TOUT

find(\&wanted, $abs_srcdir);

print <<TOUT;
</InvalidationBatch>
TOUT

sub wanted { # Reject non-files, and anything in .git, fonts or .svn dirs
 $_ = $File::Find::name;
 -f or return;            # skip anything that is not a plain file
 s/^${abs_srcdir}\///;    # make the path relative to srcdir
 /^\.git\// and return;
 /^fonts\// and return;
 /\.svn\// and return;
 /(^|\/)\.DS_Store$/ and return;
 /(^|\/)rename\.py$/ and return;
 handler($_);
}
Usage:

./purge_request.pl /folder/to/static/files > files.xml

Redmine 2.0 + Apache

Redmine is a flexible project management web application. Written using the Ruby on Rails framework, it is cross-platform and cross-database.
OS: CentOS 6.2

  • Install packages
    yum install ruby-devel gcc-c++ openssl-devel httpd httpd-devel mysql-server  mysql-devel make ruby-rdoc libcurl-devel rubygem-rake ImageMagick ImageMagick-devel wget
  • Rubygems
    cd /tmp
    wget http://production.cf.rubygems.org/rubygems/rubygems-1.8.24.tgz
    tar xvfz rubygems-1.8.24.tgz
    cd rubygems-1.8.24
    ruby setup.rb
  • Passenger
    gem install passenger
    passenger-install-apache2-module
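
passenger-install-apache2-module prints the exact Apache directives to load the module; a sketch of what ends up in /etc/httpd/conf/httpd.conf (the gem version in the paths will differ, so copy the installer's own output rather than these lines):

    LoadModule passenger_module /usr/lib/ruby/gems/1.8/gems/passenger-3.0.11/ext/apache2/mod_passenger.so
    PassengerRoot /usr/lib/ruby/gems/1.8/gems/passenger-3.0.11
    PassengerRuby /usr/bin/ruby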

HPCloud CDN traceroutes

HPCloud CDN trace:

# From hpcloud instance

 1:  10.4.66.156       0.194ms pmtu 1500
 1:  10.4.0.1          0.272ms 
 1:  10.4.0.1          0.200ms 
 2:  no reply
 3:  no reply
 4:  67.134.135.133    1.569ms asymm  5 
 5:  4.69.133.110      6.068ms 
 6:  67.131.38.50     19.235ms reached
     Resume: pmtu 1500 hops 6 back 59 
     
# From Singtel (Singapore)

 1:  enigma.local                                          0.138ms pmtu 1500
 1:  10.10.33.1                                            1.064ms asymm  2 
 1:  10.10.33.1                                            0.943ms asymm  2 
 2:  [fake-ip]                                            3.412ms asymm  3 
 3:  [fake-ip]                                             4.352ms asymm  4 
 4:  [fake-ip]                                             3.848ms asymm  5 
 5:  165.21.12.4                                           4.470ms asymm  6 
 6:  203.208.190.21                                        4.459ms asymm  7 
 7:  so-6-0-0-0.sngtp-ar6.ix.singtel.com                   4.409ms asymm  8 
 8:  ge-7-0-0-0.sngtp-dr1.ix.singtel.com                   5.117ms asymm  9 
 9:  203.208.171.194                                      54.528ms asymm 10 
10:  no reply
11:  no reply
12:  58.26.1.18                                           54.925ms reached
     Resume: pmtu 1500 hops 12 back 54 
     
# From AWS US-West

 1:  ip-10-168-107-78.us-west-1.compute.internal (10.168.107.78)   0.145ms pmtu 1500
 1:  ip-10-168-104-2.us-west-1.compute.internal (10.168.104.2)   0.481ms asymm  2 
 1:  ip-10-168-104-2.us-west-1.compute.internal (10.168.104.2)   0.782ms asymm  2 
 2:  ip-10-1-4-9.us-west-1.compute.internal (10.1.4.9)      0.522ms 
 3:  [fake-ip]                                              0.626ms 
 4:  [fake-ip]                                              1.522ms 
 5:  205.251.229.9 (205.251.229.9)                          1.407ms 
 6:  205.251.229.9 (205.251.229.9)                          1.416ms asymm  5 
 7:  no reply
 8:  a173-223-232-139.deploy.akamaitechnologies.com (173.223.232.139)   2.316ms reached
     Resume: pmtu 1500 hops 8 back 57 
     
# From Destiny Internet (Philippines)
 
 1:  192.168.10.30                                         0.140ms pmtu 1400
 1:  [fake-ip]                                            64.397ms 
 1:  [fake-ip]                                            64.493ms 
 2:  no reply
 3:  202.8.255.98                                         72.669ms 
 4:  sun2.mydestiny.net                                   90.557ms 
 5:  202.8.224.193                                        73.428ms 
 6:  202.8.224.201                                       278.546ms 
 7:  202.69.176.89                                        72.339ms 
 8:  202.69.190.86                                        85.670ms 
 9:  ge-4-0-1.GW2.LAX1.ALTER.NET                         250.366ms asymm 15 
10:  0.xe-2-0-0.XL3.LAX1.ALTER.NET                       248.123ms asymm 15 
11:  0.xe-3-1-0.XL3.LAX15.ALTER.NET                      250.239ms asymm 15 
12:  TenGigE0-6-2-0.GW4.LAX15.ALTER.NET                  242.218ms asymm 15 
13:  akamai.customer.alter.net                           243.776ms asymm 15 
14:  a65.197.244.50.deploy.akamaitechnologies.com        256.854ms reached
     Resume: pmtu 1400 hops 14 back 51 
     
# From Sakura Internet (Japan)

 1:  [fake-ip]                                               0.087ms pmtu 1500
 1:  [fake-ip]                                              2.953ms 
 2:  osnrt201b-nrt205e-1.bb.sakura.ad.jp (59.106.254.45)    1.974ms 
 3:  osdrt1-nrt201b.bb.sakura.ad.jp (59.106.255.121)        1.945ms 
 4:  oskrt1-drt1.bb.sakura.ad.jp (59.106.255.82)            1.955ms 
 5:  124.211.15.21 (124.211.15.21)                          4.993ms 
 6:  obpjbb203.kddnet.ad.jp (118.155.199.29)              asymm  7   1.990ms 
 7:  otejbb204.kddnet.ad.jp (203.181.99.65)               asymm  8  16.006ms 
 8:  kotjbb202.kddnet.ad.jp (118.155.198.78)              asymm  9  77.956ms 
 9:  cm-kot202.kddnet.ad.jp (125.29.22.26)                 10.974ms 
10:  125.29.31.238 (125.29.31.238)                         13.948ms 
11:  118.155.230.33 (118.155.230.33)                       14.944ms reached
     Resume: pmtu 1500 hops 11 back 11 
     
# From Starhub (Singapore)

 1:  enigma.local                                          0.136ms pmtu 1500
 1:  192.168.0.1                                           0.559ms 
 1:  192.168.0.1                                           0.664ms 
 2:  [fake-ip]                                            12.575ms 
 3:  172.20.23.1                                          10.572ms 
 4:  172.26.23.1                                          13.969ms 
 5:  172.20.7.14                                          18.053ms 
 6:  203.117.35.25                                        19.471ms 
 7:  sjo-bb1-link.telia.net                              216.206ms asymm  8 
 8:  GigabitEthernet2-0-0.GW4.SJC7.ALTER.NET             210.400ms asymm 15 
 9:  0.so-3-2-0.XL4.SJC7.ALTER.NET                       211.969ms asymm 15 
10:  0.ge-2-0-0.XL4.LAX15.ALTER.NET                      203.205ms asymm 14 
11:  TenGigE0-7-1-0.GW4.LAX15.ALTER.NET                  206.149ms asymm 12 
12:  akamai.customer.alter.net                           203.584ms asymm 13 
13:  a65.197.244.50.deploy.akamaitechnologies.com        202.144ms reached
     Resume: pmtu 1500 hops 13 back 51 

Bootstrap Ubuntu 11.10 Instances with Cloud-config User-Data and Puppet

Assuming you or HPCloud have already worked out the DNS PTR issue and a puppet master host is set up, you can use the following user-data to bootstrap an Ubuntu server instance:

#cloud-config
 
puppet:
 conf:
   agent:
     server: "PUPPET-MASTER-FQDN"
 
   puppetd:
     listen: true
 
   # /var/lib/puppet/ssl/ca/ca_crt.pem on the puppetmaster host.
   ca_cert: |
     -----BEGIN CERTIFICATE-----
     PUT-CERT-HERE
     -----END CERTIFICATE-----
 
runcmd:
  - |-
    echo "path /run
    method save
    allow PUPPET-MASTER-FQDN" >> /etc/puppet/auth.conf
  - |-
    echo "[puppetrunner]
    allow PUPPET-MASTER-FQDN" >> /etc/puppet/namespaceauth.conf

Note: Make sure that the cloud-config user-data is a valid YAML file; run it through a YAML validator to check (a local one-liner is sketched below).
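
A quick local check (a sketch, assuming Ruby is available and the user-data is saved as user-data.txt, the same file passed to euca-run-instances below):

ruby -ryaml -e 'YAML.load_file("user-data.txt") and puts "OK"'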

To run a server instance with user-data, set up the euca2ools CLI, then run:
euca-run-instances ami-000015cb -t standard.xsmall -g Web -k hpcloud -f user-data.txt 

All you need to do afterwards is sign the certificate request on the puppet master host and do a puppetrun:
puppetca --list
puppetca --sign PUPPET-CLIENT-FQDN
puppetrun PUPPET-CLIENT-FQDN

If all goes well, you should be able to administer every aspect of the Ubuntu server instance from the puppet master host with manifest files.

HPCloud and euca2ools

Using euca2ools with HPCloud is simple. You just have to define environment variables containing your credentials and EC2_URL. Here's how:

export EC2_ACCESS_KEY=TENANT-ID:ACCESS-KEY-ID
export EC2_SECRET_KEY=SECRET-KEY
export EC2_URL=https://az-1.region-a.geo-1.ec2-compute.hpcloudsvc.com/services/Cloud
 
# Or, if you are in AZ-2
export EC2_URL=https://az-2.region-a.geo-1.ec2-compute.hpcloudsvc.com/services/Cloud
 
# Then test
euca-describe-regions
It is important to note that EC2_ACCESS_KEY is a combination of your Tenant ID and Access Key ID (separated by a colon), both of which can be found in your Account tab.

Working with GlusterFS Console

OS: CentOS 6.2 (simulation done in HPCloud instances)
Firewall: off
Gluster Setup: 2 replicas across 4 hosts (similar to RAID 10)

  • Install Gluster. To get the latest version, I used the upstream packages
    yum install compat-readline5-devel -y
     
    rpm -Uvh http://download.gluster.com/pub/gluster/glusterfs/LATEST/RHEL/glusterfs-core-3.2.6-1.x86_64.rpm
    rpm -Uvh http://download.gluster.com/pub/gluster/glusterfs/LATEST/RHEL/glusterfs-fuse-3.2.6-1.x86_64.rpm
  • For configuration simplicity, let's add resolvable names for each server. Do this on all servers (a sketch of the remaining volume setup follows the list)
    cat <<'EOF'>> /etc/hosts
    10.4.63.229 site1
    10.4.63.222 site2
    10.4.63.242 site3
    10.4.63.243 site4
    EOF
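  • From here, a minimal sketch of creating and mounting the volume (assumptions not in the original post: bricks live in /data on each host, the volume is named myvol, and the commands run on site1)
    gluster peer probe site2
    gluster peer probe site3
    gluster peer probe site4

    # Bricks pair up in order, so site1/site2 and site3/site4 form the replica pairs (the RAID 10-like layout)
    gluster volume create myvol replica 2 site1:/data site2:/data site3:/data site4:/data
    gluster volume start myvol

    # Mount from any client that has glusterfs-fuse installed
    mount -t glusterfs site1:/myvol /mnt/gluster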

HPCloud Command Line Interface (CLI) Installation

This details the step-by-step procedure for installing the HPCloud CLI on Ubuntu 11.10 or CentOS 6.2.
  • Pre-req
    # For Ubuntu 11.10
    apt-get install -y git gcc make zlib1g-dev libssl-dev libreadline-gplv2-dev libxml2-dev libsqlite3-dev libxslt1-dev
    
    # For CentOS 6.2
    yum install -y gcc-c++ patch readline readline-devel zlib zlib-devel libyaml-devel libffi-devel openssl-devel make bzip2 autoconf automake libtool bison iconv-devel libxslt-devel sqlite-devel libxml2-devel
    
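  • From here, a sketch of the likely next steps (assumption: the CLI ships as the hpcloud Ruby gem, which this post does not state)
    # Install Ruby (e.g. via RVM), then the CLI gem
    curl -L https://get.rvm.io | bash -s stable --ruby
    gem install hpcloud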

Redmine on EC2 Cloud using Alami 2012.03

Redmine is a flexible project management web application. Written using the Ruby on Rails framework, it is cross-platform and cross-database.


OS: Alami 2012.03

Install Procedure

  • Install packages
    yum install ruby-devel gcc-c++ openssl-devel httpd httpd-devel mysql-server  mysql-devel make ruby-rdoc libcurl-devel rubygem-rake
  • Rubygems. Version 1.6.2 is the current sweet spot: using the latest version will result in “deprecated” errors in the Apache error logs, while using an older version will prevent you from installing bundler
    cd /tmp/
    wget http://production.cf.rubygems.org/rubygems/rubygems-1.6.2.tgz
    tar xvfz rubygems-1.6.2.tgz
    cd rubygems-1.6.2
    ruby setup.rb
  • Passenger
    gem install passenger
    passenger-install-apache2-module
  • Load the passenger apache module. Add the following config in /etc/httpd/conf/httpd.conf
    LoadModule passenger_module /usr/lib/ruby/gems/1.8/gems/passenger-3.0.11/ext/apache2/mod_passenger.so
    PassengerRoot /usr/lib/ruby/gems/1.8/gems/passenger-3.0.11
    PassengerRuby /usr/bin/ruby
  • Setup apache vhost
    cat <<'EOF' > /etc/httpd/conf.d/redmine.conf
    <VirtualHost *:80>
     ServerName redmine.local
     DocumentRoot /var/www/redmine/public/
     
     <Directory "/var/www/redmine/public/">
      Options Indexes ExecCGI FollowSymLinks
      AllowOverride all
      Order allow,deny
      Allow from all
     </Directory>
    </VirtualHost>
    EOF
    
    Note that redmine.local is a local domain and is a manual entry in my workstation's /etc/hosts file. This is done for testing purposes only. For production systems, this may very well be a subdomain under your company's domain name, e.g. redmine.acme.com
  • Download and extract redmine
    cd /tmp
    wget http://rubyforge.org/frs/download.php/75910/redmine-1.3.2.tar.gz
    tar xvfz redmine-1.3.2.tar.gz
    mkdir /var/www/redmine
    cp -a redmine-1.3.2/* /var/www/redmine
     
    chown -R apache.apache /var/www/redmine && chmod -R 755 /var/www/redmine
      
    touch /var/www/redmine/log/production.log
    chown root.apache /var/www/redmine/log/production.log
    chmod 664 /var/www/redmine/log/production.log
  • Prep Gemfile dependency
    cat <<EOF> /var/www/redmine/Gemfile
    source "http://rubygems.org" 
    gem "rake", "0.8.3" 
    gem "rack", "1.1.0" 
    gem "i18n", "0.4.2" 
    gem "rubytree", "0.5.2", :require => "tree" 
    gem "RedCloth", "~>4.2.3", :require => "redcloth" # for CodeRay
    gem "mysql" 
    gem "coderay", "~>0.9.7" 
    EOF
    
  • Bundle
    gem install bundler
    cd /var/www/redmine/
    bundle install
  • Move CGI files
    cd /var/www/redmine/public/
    mv dispatch.cgi.example dispatch.cgi
    mv dispatch.fcgi.example dispatch.fcgi
    mv dispatch.rb.example dispatch.rb
    mv htaccess.fcgi.example .htaccess
  • Set rails to production environment in /var/www/redmine/config/environment.rb
    ENV['RAILS_ENV'] ||= 'production'
  • Setup MySQL DB
    service mysqld start
    chkconfig mysqld on
    /usr/bin/mysql_secure_installation
    mysql -uroot -p -e 'create database redmine character set utf8; grant all on redmine.* to redmine@localhost identified by "my_passwd";flush privileges';
  • Setup redmine database connection
    mv /var/www/redmine/config/database.yml.example /var/www/redmine/config/database.yml
    vi /var/www/redmine/config/database.yml
     
    # In the production section, update username, password and other parameters accordingly like so:
     
    production:
      adapter: mysql
      database: redmine
      host: localhost
      username: redmine
      password: my_passwd
      encoding: utf8
  • Create session store
    cd /var/www/redmine
    RAILS_ENV=production bundle exec rake generate_session_store
  • Migrate database models
    RAILS_ENV=production bundle exec rake db:migrate
  • Load MySQL database schema and default data
    RAILS_ENV=production bundle exec rake redmine:load_default_data
  • and finally, start Apache
    service httpd start
    chkconfig httpd on 
  • You may now point your browser to http://redmine.local and log in as admin/admin

LDAP Server Installation for openssh-lpk clients

Since OpenLDAP version 2.3, configuration through cn=config is supported. It is also known as run-time configuration (RTC) or zero downtime configuration.

In accomplishing this task, we will use the cn=config type of configuration since, by default, Amazon's official Linux AMI (Alami 2012.03) uses this type.


OS: Alami 2012.03 / CentOS 6.2

Objectives

  • Centralize the administration of linux accounts
  • Centralize the administration of sudo access
  • Use public keys

OpenLDAP Config

  1. Update the system. Fix timezone
    yum -y update
    echo -e "ZONE=Asia/Singapore\nUTC=false" > /etc/sysconfig/clock
    ln -sf /usr/share/zoneinfo/Asia/Singapore /etc/localtime
  2. Install LDAP packages
    yum install openldap-servers openldap-clients -y
  3. Generate the admin password
    $ slappasswd -s mysecret
    {SSHA}IwmKUosglAO6RpcjGDYm04HUu0VgWP0Y
    Note: mysecret will now be your Manager password. You will use this password to execute administrative commands. The corresponding hash is displayed after the command; use that hash in the succeeding steps.
  4. Suffix, root password and TLS settings
    sed -i 's/dc=my-domain,dc=com/dc=johnalvero,dc=com/g' /etc/openldap/slapd.d/cn\=config/olcDatabase\=\{2\}bdb.ldif
     
    # Also, add the password and TLS settings in the file
    cat <<'EOF'>> /etc/openldap/slapd.d/cn\=config/olcDatabase\=\{2\}bdb.ldif
    olcRootPW: {SSHA}IwmKUosglAO6RpcjGDYm04HUu0VgWP0Y
    olcTLSCertificateFile: /etc/pki/tls/certs/slapdcert.pem
    olcTLSCertificateKeyFile: /etc/pki/tls/certs/slapdkey.pem
    EOF
  5. Also add a password for “cn=admin,cn=config” user
    cat <<'EOF'>> /etc/openldap/slapd.d/cn\=config/olcDatabase\=\{0\}config.ldif
    olcRootPW: {SSHA}IwmKUosglAO6RpcjGDYm04HUu0VgWP0Y
    EOF
  6. Monitor configuration
    sed -i 's/cn=manager,dc=my-domain,dc=com/cn=Manager,dc=johnalvero,dc=com/g' /etc/openldap/slapd.d/cn\=config/olcDatabase\=\{1\}monitor.ldif
  7. DB config
    cp /usr/share/openldap-servers/DB_CONFIG.example /var/lib/ldap/DB_CONFIG
    chown -R ldap:ldap /var/lib/ldap/
  8. Generate SSL keys
    openssl req -new -x509 -nodes -out /etc/pki/tls/certs/slapdcert.pem -keyout /etc/pki/tls/certs/slapdkey.pem -days 365
    chown -Rf root.ldap /etc/pki/tls/certs/slapdcert.pem 
    chown -Rf root.ldap /etc/pki/tls/certs/slapdkey.pem

Schemas

  1. Add the openssh-lpk schema
    cat <<'EOF'> /etc/openldap/slapd.d/cn=config/cn=schema/cn={21}openssh-lpk.ldif
    dn: cn={21}openssh-lpk
    objectClass: olcSchemaConfig
    cn: {21}openssh-lpk
    olcAttributeTypes: {0}( 1.3.6.1.4.1.24552.500.1.1.1.13 NAME 'sshPublicKey' DES
     C 'MANDATORY: OpenSSH Public key' EQUALITY octetStringMatch SYNTAX 1.3.6.1.4.
     1.1466.115.121.1.40 )
    olcObjectClasses: {0}( 1.3.6.1.4.1.24552.500.1.1.2.0 NAME 'ldapPublicKey' DESC
      'MANDATORY: OpenSSH LPK objectclass' SUP top AUXILIARY MAY ( sshPublicKey $ 
     uid ) )
    structuralObjectClass: olcSchemaConfig
    entryUUID: 135574f4-bda0-102f-9362-0b01757f31d8
    creatorsName: cn=config
    createTimestamp: 20110126135819Z
    entryCSN: 20110126135819.712350Z#000000#000#000000
    modifiersName: cn=config
    modifyTimestamp: 20110126135819Z
    EOF
  2. Add the sudoers schema
    cat<<'EOF'> /etc/openldap/slapd.d/cn=config/cn=schema/cn={23}sudo.ldif
    dn: cn={23}sudo
    objectClass: olcSchemaConfig
    cn: {23}sudo
    olcAttributeTypes: {0}( 1.3.6.1.4.1.15953.9.1.1 NAME 'sudoUser' DESC 'User(s) 
     who may  run sudo' EQUALITY caseExactIA5Match SUBSTR caseExactIA5SubstringsMa
     tch SYNTAX 1.3.6.1.4.1.1466.115.121.1.26 )
    olcAttributeTypes: {1}( 1.3.6.1.4.1.15953.9.1.2 NAME 'sudoHost' DESC 'Host(s) 
     who may run sudo' EQUALITY caseExactIA5Match SUBSTR caseExactIA5SubstringsMat
     ch SYNTAX 1.3.6.1.4.1.1466.115.121.1.26 )
    olcAttributeTypes: {2}( 1.3.6.1.4.1.15953.9.1.3 NAME 'sudoCommand' DESC 'Comma
     nd(s) to be executed by sudo' EQUALITY caseExactIA5Match SYNTAX 1.3.6.1.4.1.1
     466.115.121.1.26 )
    olcAttributeTypes: {3}( 1.3.6.1.4.1.15953.9.1.4 NAME 'sudoRunAs' DESC 'User(s)
      impersonated by sudo' EQUALITY caseExactIA5Match SYNTAX 1.3.6.1.4.1.1466.115
     .121.1.26 )
    olcAttributeTypes: {4}( 1.3.6.1.4.1.15953.9.1.5 NAME 'sudoOption' DESC 'Option
     s(s) followed by sudo' EQUALITY caseExactIA5Match SYNTAX 1.3.6.1.4.1.1466.115
     .121.1.26 )
    olcObjectClasses: {0}( 1.3.6.1.4.1.15953.9.2.1 NAME 'sudoRole' DESC 'Sudoer En
     tries' SUP top STRUCTURAL MUST cn MAY ( sudoUser $ sudoHost $ sudoCommand $ s
     udoRunAs $ sudoOption $ description ) )
    structuralObjectClass: olcSchemaConfig
    entryUUID: 13557a62-bda0-102f-9364-0b01757f31d8
    creatorsName: cn=config
    createTimestamp: 20110126135819Z
    entryCSN: 20110126135819.712350Z#000000#000#000000
    modifiersName: cn=config
    modifyTimestamp: 20110126135819Z
    EOF
  3. Make initial files for base, group, people and sudoers

    base.ldif
    dn: dc=johnalvero,dc=com
    dc: johnalvero
    objectClass: top
    objectClass: domain
    
    dn: ou=People,dc=johnalvero,dc=com
    ou: People
    objectClass: top
    objectClass: organizationalUnit
    
    dn: ou=Group,dc=johnalvero,dc=com
    ou: Group
    objectClass: top
    objectClass: organizationalUnit

    newgroup.ldif
    dn: cn=phstaff,ou=Group,dc=johnalvero,dc=com
    objectClass: posixGroup
    objectClass: top
    cn: phstaff
    userPassword: {crypt}x
    gidNumber: 1000

    newpeople.ldif
    dn: uid=john,ou=People,dc=johnalvero,dc=com
    uid: john
    cn: John Alvero
    objectClass: account
    objectClass: posixAccount
    objectClass: top
    objectClass: shadowAccount
    objectClass: ldapPublicKey
    userPassword: {CRYPT}cr5y5J6F67Ci2
    shadowLastChange: 15140
    shadowMin: 0
    shadowMax: 99999
    shadowWarning: 7
    loginShell: /bin/bash
    uidNumber: 1000
    gidNumber: 1000
    homeDirectory: /home/john
    sshPublicKey: myrsakeyhere_changeme

    newsudoers.ldif
    dn: ou=sudoers,dc=johnalvero,dc=com
    objectclass: organizationalUnit
    ou: sudoers
    
    dn: cn=defaults,ou=sudoers,dc=johnalvero,dc=com
    objectClass: top
    objectClass: sudoRole
    cn: defaults
    description: Default sudoOption's go here
    sudoOption: logfile=/var/log/sudolog
    
    dn: cn=root,ou=sudoers,dc=johnalvero,dc=com
    objectClass: top
    objectClass: sudoRole
    cn: root
    sudoUser: root
    sudoHost: ALL
    sudoCommand: ALL
    
    # Sample sudo user
    dn: cn=john,ou=sudoers,dc=johnalvero,dc=com
    objectClass: top
    objectClass: sudoRole
    cn: john
    sudoUser: john
    sudoHost: ALL
    sudoCommand: ALL
    sudoOption: !authenticate
  4. We can now start the services and add the entries:
    chkconfig slapd on
    service slapd start
    ldapadd -x -W -D "cn=Manager,dc=johnalvero,dc=com" -f base.ldif
    ldapadd -x -W -D "cn=Manager,dc=johnalvero,dc=com" -f newgroup.ldif
    ldapadd -x -W -D "cn=Manager,dc=johnalvero,dc=com" -f newpeople.ldif
    ldapadd -x -W -D "cn=Manager,dc=johnalvero,dc=com" -f newsudoers.ldif
  5. And try searching
    ldapsearch -x -b "dc=johnalvero,dc=com"
    ldapsearch -H "ldap://johnalvero.com:389" -x -b "dc=johnalvero,dc=com"
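  6. Optionally, verify that StartTLS works with the self-signed certificate from step 8 (a sketch; LDAPTLS_REQCERT=never skips certificate verification and is acceptable for testing only)
    LDAPTLS_REQCERT=never ldapsearch -ZZ -H "ldap://johnalvero.com:389" -x -b "dc=johnalvero,dc=com"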

Configuring ssh-lpk Clients

  • Install the packages
    yum install openssh-ldap nss-pam-ldapd
  • Set up the LDAP config. This will modify various LDAP-related files, including PAM's
    authconfig --disablenis --enablemkhomedir --enableshadow --enablelocauthorize --enableldap --ldapserver=johnalvero.com --enablemd5 --ldapbasedn=dc=johnalvero,dc=com --updateall
     
    # Or, you can use a curses-based application. Enable necessary options based on the above command but --enablemkhomedir is not available in authconfig-tui 
     
    authconfig-tui
  • Allow SSH public-key login
    cat <<'EOF'> /etc/ssh/ldap.conf
    uri ldap://johnalvero.com/
    base dc=johnalvero,dc=com
    ssl no
    EOF
     
    cat <<'EOF'>> /etc/ssh/sshd_config
    AuthorizedKeysCommand /usr/libexec/openssh/ssh-ldap-wrapper
    AuthorizedKeysCommandRunAs nobody
    EOF
  • Tell the system to look up sudoers info from LDAP first, then local files
    echo 'sudoers: ldap files' >> /etc/nsswitch.conf
     
    cat <<'EOF'>> /etc/nslcd.conf
    ou=sudoers,dc=johnalvero,dc=com
    sudoers_base ou=sudoers,dc=johnalvero,dc=com
    EOF
  • Restart sshd
    service sshd restart

nslcd start/restart hack

Alami's nss-pam-ldapd suffers from the same bug described in https://bugzilla.redhat.com/show_bug.cgi?id=760843, so I have made a patch for /etc/init.d/nslcd that makes nss-pam-ldapd play nicely with sudo. Essentially, what it does is comment out the “sudo-ldap”-related config in /etc/nslcd.conf just before starting the daemon and uncomment those lines right after.

If you don't apply this patch, you will get errors when starting or restarting nslcd.

There's another option, though: instead of installing nss-pam-ldapd from the default amzn-main repo, you can install the one from http://danielhall.me/shared/rpms/nss-pam-ldapd/ and forget about this patch.
*** /etc/init.d/nslcd 2012-03-30 13:42:53.859493505 +0800
--- /root/nslcd 2012-03-30 13:28:08.120237533 +0800
***************
*** 29,35 ****
--- 29,39 ----
 
  start() {
      echo -n $"Starting $prog: "
+     sed -i 's/^ou/#ou/' /etc/nslcd.conf
+     sed -i 's/^sudoers_base/#sudoers_base/' /etc/nslcd.conf
      daemon $program
+     sed -i 's/^#ou/ou/' /etc/nslcd.conf
+     sed -i 's/#sudoers_base/sudoers_base/' /etc/nslcd.conf
      RETVAL=$?
      echo
      [ $RETVAL -eq 0 ] && touch /var/lock/subsys/$prog
Then apply the patch:
cd /etc/init.d/
patch -i /path/to/patch/nslcd.patch

MySQL Cluster 7.2 with MySQL Cluster Management (MCM)

This guide describes the step-by-step procedure on setting up a test MySQL Cluster using the MySQL Cluster Management Console (MCM).

Architecture

  • A total of four physical or virtual servers (known as Cluster Nodes in MySQL Cluster terminology)
    • Two cluster nodes will serve as Data Nodes (ndb; this is where our data resides)
    • Two other servers will serve as both SQL Nodes (mysqld) and Management Nodes (ndb_mgmd)
  • All four servers will only need to have the MySQL Cluster Management Agent installed
  • Each Data Node will have a pair of ndbd processes to maintain the replicas assigned to it
  • Clients (PHP web application) will connect to SQL Nodes (mysqld)
  • This setup is supposed to survive a single Cluster Node failure
     

Requirements

  • VMware workstation, Hyper-V, Xen or even Amazon AWS
  • Four server instances
  • Each instance should have at least 1GB RAM (although Management Nodes/API Nodes can get by with less RAM; more about this during the steps)
  • CentOS 6.2
  • SELinux disabled
  • iptables disabled
  • MySQL Cluster Manager 1.1.4+Cluster for Red Hat and Oracle Linux 5 x86 (64-bit) - from oracle edelivery site. This package includes the MCM Agent and MySQL Cluster software

Installing the MCM Agent with MySQL Cluster

All four server instances need this management agent. This is the only manual process that has to be done on individual nodes; all other activities can be done through the MCM command-line console
  • for configuration simplicity, register all nodes in the hosts file
    cat <<EOF>> /etc/hosts
    192.168.0.10 site1
    192.168.0.11 site2
    192.168.0.12 site3
    192.168.0.13 site4
    EOF
  • copy the MCM Agent to /tmp
  • prepare the MCM agent files
    cd /tmp
    unzip V31807-01-MCM-Cluster.zip
    mkdir /opt/mcm
    tar xvz --directory=/opt/mcm/ --strip-components=1 -f mcm-1.1.5_64-cluster-7.2.5_64-linux-rhel5-x86.tar.gz
  • add users and fix directory permissions
    groupadd clustermanager && useradd -M -d /opt/mcm/ -g clustermanager clustermanager
    chown -R clustermanager.clustermanager /opt/mcm/
  • start the MCM daemon
    sudo -u clustermanager /opt/mcm/bin/mcmd &
At this point, we are done with the rest of the Cluster Node instances; all of the steps from this point forward can be done on the first or second server instance (the management servers).

Firing the First Cluster

We are now ready to create our first cluster. The main steps are: create a site, add a package, create a cluster and finally start the cluster.
  • connect to MCM command-line console. The default password is super
    /opt/mcm/cluster/bin/mysql -h127.0.0.1 -P1862 -uadmin -psuper --prompt='mcm> '
  • create a site
    mcm> create site --hosts=site1,site2,site3,site4 mysite;
  • create a package. A package is a MySQL Cluster installation composed of MySQL binaries, libraries and configuration files. The name of the package we are going to create is 7.2
    mcm> add package --basedir=/opt/mcm/cluster 7.2;
  • create a cluster
    ndb_mgmd - Cluster management node on site1 & site2
    ndbd - single-threaded Data node on site3 & site4 (twice; each machine holds a couple of data nodes for our demo)
    mysqld - MySQL interface node on site1 & site2
    ndbapi - for API interface
    ndbmtd - for the multi-threaded NDB engine
    mcm> create cluster --package=7.2 --processhosts=ndb_mgmd@site1,ndb_mgmd@site2,ndbd@site3,ndbd@site3,ndbd@site4,ndbd@site4,mysqld@site1,mysqld@site2 mycluster;
  • if you do not have 1GB RAM for the Data Node instances, you may need to modify innodb_buffer_pool_size here so that MySQL will start. This is also the right time to do other MySQL tuning
    get -d innodb_buffer_pool_size:mysqld mycluster;
    
    # This enables me to run cluster nodes with only 2GB RAM for testing purposes
    set innodb_buffer_pool_size:mysqld:51=16777216 mycluster;
    set innodb_buffer_pool_size:mysqld:52=16777216 mycluster;
    
    # Do this if you plan on storing large datasets
    set DataMemory:ndbd=3145728000 mycluster;
    set IndexMemory:ndbd=536870912 mycluster;
    
    
  • Start the cluster
    mcm> start cluster -B mycluster;
  • See the status of the cluster
    mcm> show status -r mycluster;
  • Connecting through MySQL Client
    mkdir  /var/lib/mysql/
    ln -s /tmp/mysql.mycluster.51.sock /var/lib/mysql/mysql.sock
    mysql -uroot
    
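  • A quick smoke test that data really lands in the cluster (hypothetical database and table names; NDBCLUSTER is the clustered storage engine)
    mysql -uroot -e 'CREATE DATABASE IF NOT EXISTS clusterdb;
    CREATE TABLE clusterdb.t1 (id INT PRIMARY KEY) ENGINE=NDBCLUSTER;
    INSERT INTO clusterdb.t1 VALUES (1);'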

Other tasks

  • Changing from a single-threaded cluster node to multi-threaded
    mcm> change process ndbd:3=ndbmtd mycluster;
  • You don't normally need to do a rolling restart manually, since MySQL Cluster will take care of it if you make changes that require a restart. But if you need it, here's how it's done
    mcm> restart cluster -B mycluster;
  • Here's how to do an online upgrade of the cluster software. We'll call the new package 7.3
    mcm> add package --basedir=/usr/local/mysql_7_3 7.3;
    mcm> upgrade cluster --package=7.3 mycluster;
  • Adding new hosts



    # Initialize the new hosts. Also take note that you need to add necessary entries in /etc/hosts for the new hosts
     
    mcm> add hosts --hosts=site5,site6 mysite;
    mcm> add package --basedir=/opt/mcm/cluster --hosts=site5,site6 7.2;
     
    # Finally, add it to the cluster.
    # Note that we are also adding mysqld (API) instances on site1 and site2. Also, as pointed out by Andrew Morgan, we have to guess the node-ids of the new mysqld's; in our case they will be 53 and 54, following the output of show status -r mycluster
    mcm> add process --processhosts=mysqld@site1,mysqld@site2,ndbd@site5,ndbd@site5,ndbd@site6,ndbd@site6 -s port:mysqld:53=3307,port:mysqld:54=3307 mycluster;
    mcm> start process --added mycluster; 
     
    # On any of the API servers, do the following commands to repartition the 
    # existing cluster and use the new data nodes
     
    mysql> ALTER ONLINE TABLE [table-name] REORGANIZE PARTITION;
    mysql> OPTIMIZE TABLE [table-name];

Deleting the cluster

stop cluster -B mycluster;
 
delete cluster mycluster;
delete package 7.2;
delete site mysite;

Other useful commands

list clusters mysite;
list packages mysite;
list sites;


Credits to Andrew Morgan for the write-up and images.

AWS Autoscaling How To

  • Setup autoscaling and cloudwatch CLI
    cd /home/john && mkdir ec2 && cd ec2
     
    wget http://ec2-downloads.s3.amazonaws.com/AutoScaling-2011-01-01.zip
    unzip AutoScaling-2011-01-01.zip
    wget http://ec2-downloads.s3.amazonaws.com/CloudWatch-2010-08-01.zip
    unzip CloudWatch-2010-08-01.zip
     
    export EC2_HOME=/home/john/ec2
    export PATH=$PATH:$EC2_HOME/bin  
    export JAVA_HOME=/usr
    export EC2_PRIVATE_KEY=/home/john/pk.pem # You need to get this file from your AWS Credentials
    export EC2_CERT=/home/john/cert.pem      # You need to get this file from your AWS Credentials
     
    export AWS_AUTO_SCALING_HOME=$EC2_HOME/AutoScaling-1.0.49.1
    export AWS_AUTO_SCALING_URL=https://autoscaling.us-east-1.amazonaws.com
    export PATH=$PATH:$AWS_AUTO_SCALING_HOME/bin
     
    export AWS_CLOUDWATCH_HOME=$EC2_HOME/CloudWatch-1.0.12.1
    export PATH=$PATH:$AWS_CLOUDWATCH_HOME/bin
  • Setup variables
    EC2_REGION="us-east-1"
    ZONE="us-east-1d"
    SECURITY_GROUP="default"
    INSTANCE_SIZE="t1.micro"
    LB_NAME="autoscalelb"
    LC_NAME="autoscalelc"
    LC_IMAGE_ID="ami-31814f58" # Could be any AMI of choice
    LC_KEY="john-east"  # You need to create this key in the AWS console
    SG_NAME="autoscalesg"
     
    UP_POLICY_NAME="MyScaleUpPolicy"
    DOWN_POLICY_NAME="MyScaleDownPolicy"
    HIGH_CPU_ALRM_NAME="MyHighCPUAlarm"
    LOW_CPU_ALRM_NAME="MyLowCPUAlarm"
    MIN_SIZE=1
    MAX_SIZE=4  # For testing purposes, set to 1
    DOWN_THRESHOLD=40  # scale down when average CPU load is 40% or below 
    UP_THRESHOLD=80  # scale up when average CPU load reaches 80%
  • Create Launch Config
    as-create-launch-config $LC_NAME --image-id $LC_IMAGE_ID --instance-type $INSTANCE_SIZE --group $SECURITY_GROUP --key $LC_KEY --block-device-mapping '/dev/sda2=ephemeral0' --user-data-file ud.txt
  • Create Autoscaling Group
    as-create-auto-scaling-group $SG_NAME --availability-zones $ZONE --launch-configuration $LC_NAME --min-size $MIN_SIZE --max-size $MAX_SIZE --load-balancers $LB_NAME
  • Trigger scaling up
    ARN_HIGH=`as-put-scaling-policy $UP_POLICY_NAME --auto-scaling-group $SG_NAME --adjustment=1 --type ChangeInCapacity --cooldown 300`
    mon-put-metric-alarm $HIGH_CPU_ALRM_NAME --comparison-operator GreaterThanThreshold --evaluation-periods 1 --metric-name CPUUtilization --namespace "AWS/EC2" --period 600 --statistic Average --threshold $UP_THRESHOLD --alarm-actions $ARN_HIGH --dimensions "AutoScalingGroupName=$SG_NAME"
  • Trigger scaling down
    ARN_LOW=`as-put-scaling-policy $DOWN_POLICY_NAME --auto-scaling-group $SG_NAME --adjustment=-1 --type ChangeInCapacity --cooldown 300`
    mon-put-metric-alarm $LOW_CPU_ALRM_NAME --comparison-operator LessThanThreshold --evaluation-periods 1 --metric-name CPUUtilization --namespace "AWS/EC2" --period 600 --statistic Average --threshold $DOWN_THRESHOLD --alarm-actions $ARN_LOW --dimensions "AutoScalingGroupName=$SG_NAME"
    
    # Post notifications to SNS (needed for dynamic registration)
    as-put-notification-configuration $SG_NAME --topic-arn arn:aws:sns:us-east-1:123456789012:topic01 --notification-types autoscaling:EC2_INSTANCE_LAUNCH,autoscaling:EC2_INSTANCE_TERMINATE
    
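  • Verify that the policies and alarms registered (a quick check using the same CLIs downloaded above)
    as-describe-policies --auto-scaling-group $SG_NAME
    mon-describe-alarms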
  • Pausing and Restarting autoscaling activities
    as-suspend-processes $SG_NAME
    as-resume-processes $SG_NAME
  • Expand to other Availability Zones
    as-update-auto-scaling-group $SG_NAME --availability-zones us-east-1a,us-east-1b,us-east-1c --min-size 3
    elb-describe-instance-health  $LB_NAME
    elb-enable-zones-for-lb  $LB_NAME  --headers --availability-zones us-east-1c 
  • Clean up
    as-update-auto-scaling-group $SG_NAME --min-size 0 --max-size 0
    as-delete-auto-scaling-group $SG_NAME
    as-delete-launch-config $LC_NAME
     
    mon-delete-alarms $HIGH_CPU_ALRM_NAME $LOW_CPU_ALRM_NAME

References

http://docs.amazonwebservices.com/AutoScaling/latest/DeveloperGuide/US_SetUpASLBApp.html

Apache + PHP-FPM + mod_fastcgi

OS: ALAMI 2011.09

  1. Install pre-req
    yum -y install make libtool httpd-devel apr-devel apr
  2. Install Apache and PHP-FPM
    yum -y install httpd php-fpm php-cli
  3. Install mod_fastcgi
    mkdir /root/files ; cd /root/files
    wget http://www.fastcgi.com/dist/mod_fastcgi-current.tar.gz
    tar -zxvf mod_fastcgi-current.tar.gz
    cd mod_fastcgi-2.4.6/
    cp Makefile.AP2 Makefile
    make top_dir=/usr/lib/httpd
    make install top_dir=/usr/lib/httpd
  4. Setup fastcgi folder
    mkdir /var/www/fcgi-bin
    cp $(which php-cgi) /var/www/fcgi-bin/
    chown -R apache: /var/www/fcgi-bin
    chmod -R 755 /var/www/fcgi-bin
  5. Load the module and setup php handler in /etc/httpd/conf.d/php-fpm.conf
    LoadModule fastcgi_module modules/mod_fastcgi.so
    LoadModule actions_module modules/mod_actions.so
     
    <IfModule mod_fastcgi.c>
            ScriptAlias /fcgi-bin/ "/var/www/fcgi-bin/"
            FastCGIExternalServer /var/www/fcgi-bin/php-cgi -host 127.0.0.1:9000 -pass-header Authorization
            AddHandler php-fastcgi .php
            Action php-fastcgi /fcgi-bin/php-cgi
    </IfModule>
  6. Start the servers
    chkconfig php-fpm on
    chkconfig httpd on
    service php-fpm start
    service httpd start
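  7. Optionally, verify that PHP requests are served through PHP-FPM (a sketch; phpinfo() reports the Server API as FPM/FastCGI, and /var/www/html is the default docroot)
    echo '<?php phpinfo();' > /var/www/html/info.php
    curl -s http://localhost/info.php | grep -o 'FPM/FastCGI'
    rm -f /var/www/html/info.php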

Scalr + Ubuntu 11.10 Installation

Environment

OS: Ubuntu Server 11.10 Oneiric Ocelot
Scalr Version: scalr-2.5.r6086
Application Folder: /var/www/app
Application VHost: scalr.local

Installation

  1. Install required packages
    apt-get install apache2-mpm-prefork php5 php5-mysql php5-curl php5-mcrypt php5-snmp php-pear rrdtool librrd-dev libcurl4-openssl-dev mysql-server snmp libssh2-php apparmor-utils
  2. Unpack the scalr application files. This assumes that the scalr package is in the /tmp folder
    cd /tmp
    tar xvfz scalr-2.5.r6086.tar.gz
    mv scalr-2.5.r6086/app /var/www
    chown root.www-data /var/www/app -R
    chmod g+w /var/www/app/etc/.cryptokey
    chmod g+w /var/www/app/cache -R
  3. Before we proceed, let's fix some code. This will resolve Bind DNS issues. Comment out the following code in /var/www/app/src/Scalr/Net/Dns/Bind/RemoteBind.php (lines 36-37). The commented code should look like:
    //                        if (count($this->zonesConfig) == 0)
    //                                throw new Exception("Zones config is empty");
  4. Setup MySQL
    mysql -uroot -p -e 'create database scalr; grant all on scalr.* to scalr@localhost identified by "<scalrpassword>";flush privileges;'
     
    cat /tmp/scalr-2.5.r6086/sql/structure.sql | mysql -uscalr -p scalr
    cat /tmp/scalr-2.5.r6086/sql/data.sql | mysql -uscalr -p scalr
  5. Tell scalr how to connect to MySQL by modifying /var/www/app/etc/config.ini. The [db] part of that file should look similar to:
    driver=mysqli
    host = "localhost"
    name = "scalr"
    user = "scalr"
    pass = "<scalrpassword>"

    Note: The pass parameter should reflect the same password stated in the previous step (step 4)
  6. Setup and enable the Apache VHost
    cat <<EOF> /etc/apache2/sites-available/scalr 
    <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerName scalr.local
            DocumentRoot /var/www/app/www
            <Directory /var/www/app/www>
                    Options Indexes FollowSymLinks MultiViews
                    AllowOverride All
            </Directory>
    </VirtualHost>
    EOF
     
    a2ensite scalr
  7. Install additional PHP modules
    pecl install rrd
    echo 'extension=rrd.so' >  /etc/php5/apache2/conf.d/rrd.ini
    pecl install pecl_http
    echo 'extension=http.so' >  /etc/php5/apache2/conf.d/http.ini
    a2enmod rewrite
    service apache2 restart
  8. At this point, we can check whether our environment has all the Apache and PHP modules required to run scalr. Point your browser to http://scalr.local/testenvironment.php. Note that scalr.local is a local domain, so make the necessary changes in your own DNS resolvers or your workstation's /etc/hosts.
  9. Cron jobs
    cat <<EOF> /etc/cron.d/scalr 
    */2 * * * *  root /usr/bin/php -q  /var/www/app/cron-ng/cron.php --Poller
    * * * * *  root /usr/bin/php -q /var/www/app/cron/cron.php --Scheduler
    */10 * * * *  root /usr/bin/php -q  /var/www/app/cron/cron.php --MySQLMaintenance
    * * * * *  root /usr/bin/php -q  /var/www/app/cron/cron.php --DNSManagerPoll
    17 5 * * * root /usr/bin/php -q  /var/www/app/cron/cron.php --RotateLogs
    */2 * * * *  root /usr/bin/php -q  /var/www/app/cron/cron.php --EBSManager
    */20 * * * *  root /usr/bin/php -q  /var/www/app/cron/cron.php --RolesQueue
    */5 * * * *  root /usr/bin/php -q  /var/www/app/cron-ng/cron.php --DbMsrMaintenance
    */2 * * * *  root /usr/bin/php -q  /var/www/app/cron-ng/cron.php --Scaling
    */5 * * * *  root /usr/bin/php -q  /var/www/app/cron/cron.php --DBQueueEvent
    */2 * * * *  root /usr/bin/php -q  /var/www/app/cron/cron.php --SzrMessaging
    */4 * * * *  root /usr/bin/php -q  /var/www/app/cron/cron.php --RDSMaintenance
    */2 * * * *  root /usr/bin/php -q  /var/www/app/cron/cron.php --BundleTasksManager
    * * * * *  root /usr/bin/php -q  /var/www/app/cron-ng/cron.php --ScalarizrMessaging
    * * * * *  root /usr/bin/php -q  /var/www/app/cron-ng/cron.php --MessagingQueue
    */2 * * * *  root /usr/bin/php -q  /var/www/app/cron-ng/cron.php --DeployManager
    EOF
  10. Bind
    apt-get install bind9
    chmod g+w /etc/bind/named.conf
    echo 'include "/var/named/etc/namedb/client_zones/zones.include";' >> /etc/bind/named.conf
    mkdir -p /var/named/etc/namedb/client_zones
    chown root.bind /var/named/etc/namedb/client_zones
    chmod 2775 /var/named/etc/namedb/client_zones
     
    # New domains will go to this file
    echo ' ' > /var/named/etc/namedb/client_zones/zones.include
    chown root.bind /var/named/etc/namedb/client_zones/zones.include
    chmod g+w /var/named/etc/namedb/client_zones/zones.include
     
    # Put Bind in apparmor complain mode. This will allow Bind to include zones.include as mentioned above. You may need to set up a more secure configuration
    aa-complain /usr/sbin/named
     
    # Restart
    service bind9 restart

Next Steps

  1. Login as Scalr Admin
    http://scalr.local
     
    Email: admin
    Password: admin
    Note: When logging in as admin, you may see an “Insufficient permissions” error message. I have no idea how to fix that, but you may ignore that error message.
  2. Change Admin password (upper right corner of the screen)
    admin->Profile
  3. Change Core settings
    Settings->Core settings
  4. Create a scalr user. Then login as that user to create your first server farm
    Accounts->Manage
  5. Create your first server farm as described in the Getting Started Guide