Redmine on EC2 Cloud using Alami 2012.03

Redmine is a flexible project management web application. Written using the Ruby on Rails framework, it is cross-platform and cross-database.

OS: Alami 2012.03

Install Procedure

  • Install packages
    yum install ruby-devel gcc-c++ openssl-devel httpd httpd-devel mysql-server  mysql-devel make ruby-rdoc libcurl-devel rubygem-rake
  • RubyGems. Version 1.6.2 is the current sweet spot: the latest version will result in “deprecated” errors in the Apache error logs, while an older version will prevent you from installing Bundler
    cd /tmp/
    # assumes rubygems-1.6.2.tgz has already been downloaded to /tmp
    tar xvfz rubygems-1.6.2.tgz
    cd rubygems-1.6.2
    ruby setup.rb
  • Passenger
    gem install passenger
  • Load the passenger apache module. Add the following config in /etc/httpd/conf/httpd.conf
    LoadModule passenger_module /usr/lib/ruby/gems/1.8/gems/passenger-3.0.11/ext/apache2/mod_passenger.so
    PassengerRoot /usr/lib/ruby/gems/1.8/gems/passenger-3.0.11
    PassengerRuby /usr/bin/ruby
  • Setup apache vhost
    cat <<'EOF' > /etc/httpd/conf.d/redmine.conf
    <VirtualHost *:80>
     ServerName redmine.local
     DocumentRoot /var/www/redmine/public/
     <Directory "/var/www/redmine/public/">
      Options Indexes ExecCGI FollowSymLinks
      AllowOverride all
      Order allow,deny
      Allow from all
     </Directory>
    </VirtualHost>
    EOF
    Note that redmine.local is a local domain, added manually to my workstation's /etc/hosts file. This is done for testing purposes only. On a production system, this may very well be a subdomain under your company's domain name.
  • Download and extract redmine
    cd /tmp
    # assumes redmine-1.3.2.tar.gz has already been downloaded to /tmp
    tar xvfz redmine-1.3.2.tar.gz
    mkdir /var/www/redmine
    cp -a redmine-1.3.2/* /var/www/redmine
    chown -R apache.apache /var/www/redmine && chmod -R 755 /var/www/redmine
    touch /var/www/redmine/log/production.log
    chown root.apache /var/www/redmine/log/production.log
    chmod 664 /var/www/redmine/log/production.log
  • Prep Gemfile dependency
    cat <<EOF> /var/www/redmine/Gemfile
    source ""
    gem "rake", "0.8.3"
    gem "rack", "1.1.0"
    gem "i18n", "0.4.2"
    gem "rubytree", "0.5.2", :require => "tree"
    gem "RedCloth", "~>4.2.3", :require => "redcloth" # for CodeRay
    gem "mysql"
    gem "coderay", "~>0.9.7"
    EOF
  • Bundle
    gem install bundler
    cd /var/www/redmine/
    bundle install
  • Move CGI files
    cd /var/www/redmine/public/
    mv dispatch.cgi.example dispatch.cgi
    mv dispatch.fcgi.example dispatch.fcgi
    mv dispatch.rb.example dispatch.rb
    mv htaccess.fcgi.example .htaccess
  • Set rails to production environment in /var/www/redmine/config/environment.rb
    ENV['RAILS_ENV'] ||= 'production'
  • Setup MySQL DB
    service mysqld start
    chkconfig mysqld on
    mysql -uroot -p -e 'create database redmine character set utf8; grant all on redmine.* to redmine@localhost identified by "my_passwd"; flush privileges;'
  • Setup redmine database connection
    mv /var/www/redmine/config/database.yml.example /var/www/redmine/config/database.yml
    vi /var/www/redmine/config/database.yml
    # In the production section, update username, password and other parameters accordingly like so:
      adapter: mysql
      database: redmine
      host: localhost
      username: redmine
      password: my_passwd
      encoding: utf8
  • Create session store
    cd /var/www/redmine
    RAILS_ENV=production bundle exec rake generate_session_store
  • Migrate database models
    RAILS_ENV=production bundle exec rake db:migrate
  • Load MySQL database schema and default data
    RAILS_ENV=production bundle exec rake redmine:load_default_data
  • and finally, start Apache
    service httpd start
    chkconfig httpd on 
  • you may now point your browser to http://redmine.local and log in as admin/admin

LDAP Server Installation for openssh-lpk clients

Since OpenLDAP version 2.3, configuration through cn=config is supported. It is also known as run-time configuration (RTC) or zero downtime configuration.

In accomplishing this task, we will use the cn=config style of configuration, since Amazon's official Linux AMI (ALAMI 2012.03) uses it by default.
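To make the idea concrete, here is a small sketch of what run-time configuration looks like in practice: instead of editing slapd.conf and restarting, you describe the change in LDIF and apply it with ldapmodify while slapd keeps running. olcLogLevel is a standard cn=config attribute; the cn=admin,cn=config credentials are the ones set up in this guide.

```shell
# Write an LDIF that raises slapd's log level at run time (no restart needed)
cat <<'EOF' > /tmp/loglevel.ldif
dn: cn=config
changetype: modify
replace: olcLogLevel
olcLogLevel: stats
EOF
# Apply it against the live server as the cn=config admin:
# ldapmodify -x -D "cn=admin,cn=config" -W -f /tmp/loglevel.ldif
```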

OS: Alami 2012.03 / CentOS 6.2


  • Centralize the administration of linux accounts
  • Centralize the administration of sudo access
  • Use public keys

OpenLDAP Config

  1. Update the system. Fix timezone
    yum -y update
    echo -e "ZONE=Asia/Singapore\nUTC=false" > /etc/sysconfig/clock
    ln -sf /usr/share/zoneinfo/Asia/Singapore /etc/localtime
  2. Install LDAP packages
    yum install openldap-servers openldap-clients -y
  3. Generate the admin password
    $ slappasswd -s mysecret
    Note: mysecret will now be your Manager password; you will use it to execute administrative commands. slappasswd prints the corresponding hash. Use that hash in the succeeding steps.
  4. TLS settings
    sed -i 's/dc=my-domain,dc=com/dc=johnalvero,dc=com/g' /etc/openldap/slapd.d/cn\=config/olcDatabase\=\{2\}bdb.ldif
    # Also, add the password and TLS settings to the file
    cat <<'EOF'>> /etc/openldap/slapd.d/cn\=config/olcDatabase\=\{2\}bdb.ldif
    olcRootPW: {SSHA}IwmKUosglAO6RpcjGDYm04HUu0VgWP0Y
    olcTLSCertificateFile: /etc/pki/tls/certs/slapdcert.pem
    olcTLSCertificateKeyFile: /etc/pki/tls/certs/slapdkey.pem
    EOF
  5. Also add a password for “cn=admin,cn=config” user
    cat <<'EOF'>> /etc/openldap/slapd.d/cn\=config/olcDatabase\=\{0\}config.ldif
    olcRootPW: {SSHA}IwmKUosglAO6RpcjGDYm04HUu0VgWP0Y
    EOF
  6. Monitor configuration
    sed -i 's/cn=manager,dc=my-domain,dc=com/cn=Manager,dc=johnalvero,dc=com/g' /etc/openldap/slapd.d/cn\=config/olcDatabase\=\{1\}monitor.ldif
  7. DB config
    cp /usr/share/openldap-servers/DB_CONFIG.example /var/lib/ldap/DB_CONFIG
    chown -R ldap:ldap /var/lib/ldap/
  8. Generate SSL keys
    openssl req -new -x509 -nodes -out /etc/pki/tls/certs/slapdcert.pem -keyout /etc/pki/tls/certs/slapdkey.pem -days 365
    chown -Rf root.ldap /etc/pki/tls/certs/slapdcert.pem 
    chown -Rf root.ldap /etc/pki/tls/certs/slapdkey.pem


  1. Add the openssh-lpk schema
    cat <<'EOF'> /etc/openldap/slapd.d/cn=config/cn=schema/cn={21}openssh-lpk.ldif
    dn: cn={21}openssh-lpk
    objectClass: olcSchemaConfig
    cn: {21}openssh-lpk
    olcAttributeTypes: {0}( NAME 'sshPublicKey' DES
     C 'MANDATORY: OpenSSH Public key' EQUALITY octetStringMatch SYNTAX
     1.1466. )
    olcObjectClasses: {0}( NAME 'ldapPublicKey' DESC
      'MANDATORY: OpenSSH LPK objectclass' SUP top AUXILIARY MAY ( sshPublicKey $ 
     uid ) )
    structuralObjectClass: olcSchemaConfig
    entryUUID: 135574f4-bda0-102f-9362-0b01757f31d8
    creatorsName: cn=config
    createTimestamp: 20110126135819Z
    entryCSN: 20110126135819.712350Z#000000#000#000000
    modifiersName: cn=config
    modifyTimestamp: 20110126135819Z
    EOF
  2. Add the sudoers schema
    cat <<'EOF'> /etc/openldap/slapd.d/cn=config/cn=schema/cn={23}sudo.ldif
    dn: cn={23}sudo
    objectClass: olcSchemaConfig
    cn: {23}sudo
    olcAttributeTypes: {0}( NAME 'sudoUser' DESC 'User(s) 
     who may  run sudo' EQUALITY caseExactIA5Match SUBSTR caseExactIA5SubstringsMa
     tch SYNTAX )
    olcAttributeTypes: {1}( NAME 'sudoHost' DESC 'Host(s) 
     who may run sudo' EQUALITY caseExactIA5Match SUBSTR caseExactIA5SubstringsMat
     ch SYNTAX )
    olcAttributeTypes: {2}( NAME 'sudoCommand' DESC 'Comma
     nd(s) to be executed by sudo' EQUALITY caseExactIA5Match SYNTAX
     466. )
    olcAttributeTypes: {3}( NAME 'sudoRunAs' DESC 'User(s)
      impersonated by sudo' EQUALITY caseExactIA5Match SYNTAX
     .121.1.26 )
    olcAttributeTypes: {4}( NAME 'sudoOption' DESC 'Option
     s(s) followed by sudo' EQUALITY caseExactIA5Match SYNTAX
     .121.1.26 )
    olcObjectClasses: {0}( NAME 'sudoRole' DESC 'Sudoer En
     tries' SUP top STRUCTURAL MUST cn MAY ( sudoUser $ sudoHost $ sudoCommand $ s
     udoRunAs $ sudoOption $ description ) )
    structuralObjectClass: olcSchemaConfig
    entryUUID: 13557a62-bda0-102f-9364-0b01757f31d8
    creatorsName: cn=config
    createTimestamp: 20110126135819Z
    entryCSN: 20110126135819.712350Z#000000#000#000000
    modifiersName: cn=config
    modifyTimestamp: 20110126135819Z
    EOF
  3. Make initial files for base, group, people and sudoers

    cat <<'EOF'> base.ldif
    dn: dc=johnalvero,dc=com
    dc: johnalvero
    objectClass: top
    objectClass: domain

    dn: ou=People,dc=johnalvero,dc=com
    ou: People
    objectClass: top
    objectClass: organizationalUnit

    dn: ou=Group,dc=johnalvero,dc=com
    ou: Group
    objectClass: top
    objectClass: organizationalUnit
    EOF

    cat <<'EOF'> newgroup.ldif
    dn: cn=phstaff,ou=Group,dc=johnalvero,dc=com
    objectClass: posixGroup
    objectClass: top
    cn: phstaff
    userPassword: {crypt}x
    gidNumber: 1000
    EOF

    cat <<'EOF'> newpeople.ldif
    dn: uid=john,ou=People,dc=johnalvero,dc=com
    uid: john
    cn: John Alvero
    objectClass: account
    objectClass: posixAccount
    objectClass: top
    objectClass: shadowAccount
    objectClass: ldapPublicKey
    userPassword: {CRYPT}cr5y5J6F67Ci2
    shadowLastChange: 15140
    shadowMin: 0
    shadowMax: 99999
    shadowWarning: 7
    loginShell: /bin/bash
    uidNumber: 1000
    gidNumber: 1000
    homeDirectory: /home/john
    sshPublicKey: myrsakeyhere_changeme
    EOF

    cat <<'EOF'> newsudoers.ldif
    dn: ou=sudoers,dc=johnalvero,dc=com
    objectclass: organizationalUnit
    ou: sudoers

    dn: cn=defaults,ou=sudoers,dc=johnalvero,dc=com
    objectClass: top
    objectClass: sudoRole
    cn: defaults
    description: Default sudoOptions go here
    sudoOption: logfile=/var/log/sudolog

    dn: cn=root,ou=sudoers,dc=johnalvero,dc=com
    objectClass: top
    objectClass: sudoRole
    cn: root
    sudoUser: root
    sudoHost: ALL
    sudoCommand: ALL

    # Sample sudo user
    dn: cn=john,ou=sudoers,dc=johnalvero,dc=com
    objectClass: top
    objectClass: sudoRole
    cn: john
    sudoUser: john
    sudoHost: ALL
    sudoCommand: ALL
    sudoOption: !authenticate
    EOF
  4. We can now start the services and add the entries:
    chkconfig slapd on
    service slapd start
    ldapadd -x -W -D "cn=Manager,dc=johnalvero,dc=com" -f base.ldif
    ldapadd -x -W -D "cn=Manager,dc=johnalvero,dc=com" -f newgroup.ldif
    ldapadd -x -W -D "cn=Manager,dc=johnalvero,dc=com" -f newpeople.ldif
    ldapadd -x -W -D "cn=Manager,dc=johnalvero,dc=com" -f newsudoers.ldif
  5. And try searching
    ldapsearch -x -b "dc=johnalvero,dc=com"
    ldapsearch -H "ldap://" -x -b "dc=johnalvero,dc=com"
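For reference, this is roughly the lookup an openssh-lpk client performs at login time: search for an ldapPublicKey entry matching the user's uid and read its sshPublicKey attribute. A sketch that just builds and prints the query (uid john matches the sample entry above; run the printed command against your own server to test):

```shell
uid=john
base="ou=People,dc=johnalvero,dc=com"
# The LDAP filter the SSH key lookup boils down to:
filter="(&(objectClass=ldapPublicKey)(uid=$uid))"
echo "ldapsearch -x -b '$base' '$filter' sshPublicKey"
```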

Configuring ssh-lpk Clients

  • Install the packages
    yum install openssh-ldap nss-pam-ldapd
  • Setup LDAP config. This will modify various configuration files, including PAM's
    authconfig --disablenis --enablemkhomedir --enableshadow --enablelocauthorize --enableldap --enablemd5 --ldapbasedn=dc=johnalvero,dc=com --updateall
    # Alternatively, use the curses-based authconfig-tui and enable the options matching the command above; note that --enablemkhomedir is not available in authconfig-tui
  • Allow SSH public-key login
    cat <<'EOF'> /etc/ssh/ldap.conf
    uri ldap://
    base dc=johnalvero,dc=com
    ssl no
    EOF
    cat <<'EOF'>> /etc/ssh/sshd_config
    AuthorizedKeysCommand /usr/libexec/openssh/ssh-ldap-wrapper
    AuthorizedKeysCommandRunAs nobody
    EOF
  • Tell the system to look up sudoers info from LDAP first, then local files
    echo 'sudoers: ldap files' >> /etc/nsswitch.conf
    cat <<'EOF'>> /etc/nslcd.conf
    sudoers_base ou=sudoers,dc=johnalvero,dc=com
    EOF
  • Restart sshd
    service sshd restart

nslcd start/restart hack

Since ALAMI's nss-pam-ldapd suffers from a known bug, I have made a patch for /etc/init.d/nslcd. This will make nss-pam-ldapd play nicely with sudo. Essentially, what it does is comment out the “sudo-ldap”-related config in /etc/nslcd.conf just before starting the daemon and uncomment it right after.

If you don't apply this patch, you will get errors when starting or restarting nslcd.

There's another option though: instead of installing nss-pam-ldapd from the default amzn-main repo, you can install a build from another repo and forget about this patch.
*** /etc/init.d/nslcd 2012-03-30 13:42:53.859493505 +0800
--- /root/nslcd 2012-03-30 13:28:08.120237533 +0800
*** 29,35 ****
--- 29,39 ----
  start() {
      echo -n $"Starting $prog: "
+     sed -i 's/^ou/#ou/' /etc/nslcd.conf
+     sed -i 's/^sudoers_base/#sudoers_base/' /etc/nslcd.conf
      daemon $program
+     sed -i 's/^#ou/ou/' /etc/nslcd.conf
+     sed -i 's/#sudoers_base/sudoers_base/' /etc/nslcd.conf
      [ $RETVAL -eq 0 ] && touch /var/lock/subsys/$prog
and then, patch by:
cd /etc/init.d/
patch -i /path/to/patch/nslcd.patch
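The comment/uncomment dance the patched init script performs can be seen in isolation on a scratch copy of the config (shown here for the sudoers_base line; the ou line is handled the same way, and the file path is only a stand-in):

```shell
# Scratch copy standing in for /etc/nslcd.conf
cat <<'EOF' > /tmp/nslcd.conf.demo
uri ldap://
base dc=johnalvero,dc=com
sudoers_base ou=sudoers,dc=johnalvero,dc=com
EOF
# Before starting the daemon: hide the sudo-ldap config
sed -i 's/^sudoers_base/#sudoers_base/' /tmp/nslcd.conf.demo
grep '^#sudoers_base' /tmp/nslcd.conf.demo   # line is now commented out
# After the daemon is up: put it back
sed -i 's/#sudoers_base/sudoers_base/' /tmp/nslcd.conf.demo
grep '^sudoers_base' /tmp/nslcd.conf.demo    # line is restored
```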

MySQL Cluster 7.2 with MySQL Cluster Management (MCM)

This guide describes the step-by-step procedure on setting up a test MySQL Cluster using the MySQL Cluster Management Console (MCM).


  • A total of four physical or virtual servers (known as Cluster Nodes in MySQL Cluster terms)
    • Two cluster nodes will serve as Data Node (ndb, this is where our data reside)
    • Two other servers will serve as both SQL Nodes (mysqld) and Management Nodes (ndb_mgmd)
  • All four servers will only need to have the MySQL Cluster Management Agent installed
  • Each Data Node will have a pair of ndbd processes to maintain the replica assigned to it
  • Clients (PHP web application) will connect to SQL Nodes (mysqld)
  • This setup is supposed to survive a single Cluster Node failure


  • VMware workstation, Hyper-V, Xen or even Amazon AWS
  • Four server instances
  • Each instance should have at least 1GB RAM (although Management Nodes/API Nodes can get by with less; more on this in the steps below)
  • CentOS 6.2
  • SELinux disabled
  • iptables disabled
  • MySQL Cluster Manager 1.1.4 + Cluster for Red Hat and Oracle Linux 5 x86 (64-bit), from the Oracle eDelivery site. This package includes the MCM Agent and the MySQL Cluster software

Installing the MCM Agent with MySQL Cluster

All four server instances should have this management agent. This is the only manual step that needs to be done on individual nodes; all other activities can be done through the MCM command-line console.
  • for configuration simplicity, register all nodes in the hosts file (the IP addresses below are placeholders; use your own)
    cat <<EOF>> /etc/hosts
    <ip-of-site1> site1
    <ip-of-site2> site2
    <ip-of-site3> site3
    <ip-of-site4> site4
    EOF
  • copy the MCM Agent to /tmp
  • prepare the MCM agent files
    cd /tmp
    mkdir /opt/mcm
    tar xvz --directory=/opt/mcm/ --strip-components=1 -f mcm-1.1.5_64-cluster-7.2.5_64-linux-rhel5-x86.tar.gz
  • add users and fix directory permissions
    groupadd clustermanager && useradd -M -d /opt/mcm/ -g clustermanager clustermanager
    chown -R clustermanager.clustermanager /opt/mcm/
  • start the MCM daemon
    sudo -u clustermanager /opt/mcm/bin/mcmd &
At this point, we are done with the other Cluster Node instances; all of the steps from here on can be done on the first or second server instance (the management servers).

Firing the First Cluster

We are now ready to create our first cluster. The main steps are: create a site, add a package, create a cluster and finally start the cluster.
  • connect to MCM command-line console. The default password is super
    /opt/mcm/cluster/bin/mysql -h127.0.0.1 -P1862 -uadmin -psuper --prompt='mcm> '
  • create a site
    mcm> create site --hosts=site1,site2,site3,site4 mysite;
  • create a package. A package is a named set of MySQL Cluster binaries, libraries and configuration files that clusters can run from. The name of the package we are going to create is 7.2
    mcm> add package --basedir=/opt/mcm/cluster 7.2;
  • create a cluster
    ndb_mgmd - Cluster management node on site1 & site2
    ndbd - Single-threaded data node on site3 & site4 (listed twice; each machine holds a pair of data nodes for our demo)
    mysqld - MySQL interface node on site1 & site2
    ndbapi - for API interface
    ndbmtd - for the multi-threaded NDB engine
    mcm> create cluster --package=7.2 --processhosts=ndb_mgmd@site1,ndb_mgmd@site2,ndbd@site3,ndbd@site4,ndbd@site3,ndbd@site4,mysqld@site1,mysqld@site2 mycluster;
  • if you do not have 1GB RAM for the Data Node instances, you may need to modify innodb_buffer_pool_size here so that MySQL will start. This is also the right time to do other MySQL tuning
    get -d innodb_buffer_pool_size:mysqld mycluster;
    # This enables me to run cluster nodes with only 2GB RAM for testing purposes
    set innodb_buffer_pool_size:mysqld:51=16777216 mycluster;
    set innodb_buffer_pool_size:mysqld:52=16777216 mycluster;
    # Do this if you plan on storing large datasets
    set DataMemory:ndbd=3145728000 mycluster;
    set IndexMemory:ndbd=536870912 mycluster;
  • Start the cluster
    mcm> start cluster -B mycluster;
  • See the status of the cluster
    mcm> show status -r mycluster;
  • Connecting through MySQL Client
    mkdir  /var/lib/mysql/
    ln -s /tmp/mysql.mycluster.51.sock /var/lib/mysql/mysql.sock
    mysql -uroot

Other tasks

  • Changing from a single-threaded cluster node to multi-threaded
    mcm> change process ndbd:3=ndbmtd mycluster;
  • You don't normally need to do a manual rolling restart, since MySQL Cluster will take care of it if you make changes that require a restart. But if you need one, here's how it's done
    mcm> restart cluster -B mycluster;
  • Here's how to do an online upgrade of the cluster software. We'll call the new package 7.3
    mcm> add package --basedir=/usr/local/mysql_7_3 7.3;
    mcm> upgrade cluster --package=7.3 mycluster;
  • Adding new hosts

    # Initialize the new hosts. Also take note that you need to add necessary entries in /etc/hosts for the new hosts
    mcm> add hosts --hosts=site5,site6 mysite;
    mcm> add package --basedir=/opt/mcm/cluster --hosts=site5,site6 7.2;
    # Finally, add them to the cluster.
    # Note that we are also adding mysqld instances on site1 and site2. Also, as pointed out by Andrew Morgan, we have to guess the node IDs of the new mysqld processes. In our case, they will be 53 and 54, following the output of show status -r mycluster
    mcm> add process --processhosts=mysqld@site1,mysqld@site2,ndbd@site5,ndbd@site6,ndbd@site5,ndbd@site6 -s port:mysqld:53=3307,port:mysqld:54=3307 mycluster;
    mcm> start process --added mycluster; 
    # On any of the API servers, do the following commands to repartition the 
    # existing cluster and use the new data nodes
    mysql> OPTIMIZE TABLE [table-name];
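The repartitioning step deserves a note: newly added data nodes hold no data until each existing NDB table is reorganized onto the new node group. A hedged sketch of the statements involved (ALTER ONLINE TABLE ... REORGANIZE PARTITION is the Cluster mechanism for this; mydb.mytable is a placeholder for your own tables):

```shell
# Generate the repartitioning statements for a list of tables; when ready,
# pipe the output into mysql on one of the SQL nodes.
stmts=$(
  for tbl in mydb.mytable; do
    echo "ALTER ONLINE TABLE $tbl REORGANIZE PARTITION;"
    echo "OPTIMIZE TABLE $tbl;"
  done
)
printf '%s\n' "$stmts"     # review, then: printf '%s\n' "$stmts" | mysql -uroot
```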

Deleting the cluster

stop cluster -B mycluster;
delete cluster mycluster;
delete package 7.2;
delete site mysite;

other useful commands

list clusters mysite;
list packages mysite;
list sites;

Credits to Andrew Morgan for the write-up and images.

AWS Autoscaling How To

  • Setup autoscaling and cloudwatch CLI
    cd /home/john && mkdir ec2 && cd ec2
    export EC2_HOME=/home/john/ec2
    export PATH=$PATH:$EC2_HOME/bin  
    export JAVA_HOME=/usr
    export EC2_PRIVATE_KEY=/home/john/pk.pem # You need to get this file from your AWS Credentials
    export EC2_CERT=/home/john/cert.pem      # You need to get this file from your AWS Credentials
    export AWS_AUTO_SCALING_HOME=$EC2_HOME/AutoScaling-
    export AWS_CLOUDWATCH_HOME=$EC2_HOME/CloudWatch-
  • Setup variables
    LC_IMAGE_ID="ami-31814f58" # Could be any AMI of choice
    LC_KEY="john-east"  # You need to create this key in the AWS console
    MAX_SIZE=4  # For testing purposes, set to 1
    DOWN_THRESHOLD=40  # scale down when average CPU load is 40% or below
    UP_THRESHOLD=80  # scale up when average CPU load reaches 80%
    # The remaining names used below; example values only, adjust to your setup
    LC_NAME="lc01"; INSTANCE_SIZE="m1.small"; SECURITY_GROUP="default"
    SG_NAME="asg01"; ZONE="us-east-1a"; MIN_SIZE=1; LB_NAME="lb01"
    UP_POLICY_NAME="scale-up"; DOWN_POLICY_NAME="scale-down"
    HIGH_CPU_ALRM_NAME="high-cpu"; LOW_CPU_ALRM_NAME="low-cpu"
  • Create Launch Config
    as-create-launch-config $LC_NAME --image-id $LC_IMAGE_ID --instance-type $INSTANCE_SIZE --group $SECURITY_GROUP --key $LC_KEY --block-device-mapping '/dev/sda2=ephemeral0' --user-data-file ud.txt
  • Create Autoscaling Group
    as-create-auto-scaling-group $SG_NAME --availability-zones $ZONE --launch-configuration $LC_NAME --min-size $MIN_SIZE --max-size $MAX_SIZE --load-balancers $LB_NAME
  • Trigger scaling up
    ARN_HIGH=`as-put-scaling-policy $UP_POLICY_NAME --auto-scaling-group $SG_NAME --adjustment=1 --type ChangeInCapacity --cooldown 300`
    mon-put-metric-alarm $HIGH_CPU_ALRM_NAME --comparison-operator GreaterThanThreshold --evaluation-periods 1 --metric-name CPUUtilization --namespace "AWS/EC2" --period 600 --statistic Average --threshold $UP_THRESHOLD --alarm-actions $ARN_HIGH --dimensions "AutoScalingGroupName=$SG_NAME"
  • Trigger scaling down
    ARN_LOW=`as-put-scaling-policy $DOWN_POLICY_NAME --auto-scaling-group $SG_NAME --adjustment=-1 --type ChangeInCapacity --cooldown 300`
    mon-put-metric-alarm $LOW_CPU_ALRM_NAME --comparison-operator LessThanThreshold --evaluation-periods 1 --metric-name CPUUtilization --namespace "AWS/EC2" --period 600 --statistic Average --threshold $DOWN_THRESHOLD --alarm-actions $ARN_LOW --dimensions "AutoScalingGroupName=$SG_NAME"
    # Post notifications to SNS (needed for dynamic registration)
    as-put-notification-configuration $SG_NAME --topic-arn arn:aws:sns:us-east-1:123456789012:topic01 --notification-types autoscaling:EC2_INSTANCE_LAUNCH,autoscaling:EC2_INSTANCE_TERMINATE
  • Pausing and Restarting autoscaling activities
    as-suspend-processes $SG_NAME
    as-resume-processes $SG_NAME
  • Expand to other Availability Zones
    as-update-auto-scaling-group $SG_NAME --availability-zones us-east-1a,us-east-1b,us-east-1c --min-size 3
    elb-describe-instance-health  $LB_NAME
    elb-enable-zones-for-lb  $LB_NAME  --headers --availability-zones us-east-1c 
  • Clean up
    as-update-auto-scaling-group $SG_NAME --min-size 0 --max-size 0
    as-delete-auto-scaling-group $SG_NAME
    as-delete-launch-config $LC_NAME
    mon-delete-alarms $HIGH_CPU_ALRM_NAME $LOW_CPU_ALRM_NAME


Apache + PHP-FPM + mod_fastcgi

OS: ALAMI 2011.09

  1. Install pre-req
    yum -y install make libtool httpd-devel apr-devel apr
  2. Install Apache and PHP-FPM
    yum -y install httpd php-fpm php-cli
  3. Install mod_fastcgi
    mkdir /root/files ; cd /root/files
    # assumes mod_fastcgi-current.tar.gz has already been downloaded here
    tar -zxvf mod_fastcgi-current.tar.gz
    cd mod_fastcgi-2.4.6/
    cp Makefile.AP2 Makefile
    make top_dir=/usr/lib/httpd
    make install top_dir=/usr/lib/httpd
  4. Setup fastcgi folder
    mkdir /var/www/fcgi-bin
    cp $(which php-cgi) /var/www/fcgi-bin/
    chown -R apache: /var/www/fcgi-bin
    chmod -R 755 /var/www/fcgi-bin
  5. Load the module and setup php handler in /etc/httpd/conf.d/php-fpm.conf
    LoadModule fastcgi_module modules/mod_fastcgi.so
    LoadModule actions_module modules/mod_actions.so
    <IfModule mod_fastcgi.c>
            ScriptAlias /fcgi-bin/ "/var/www/fcgi-bin/"
            FastCGIExternalServer /var/www/fcgi-bin/php-cgi -host -pass-header Authorization
            AddHandler php-fastcgi .php
            Action php-fastcgi /fcgi-bin/php-cgi
    </IfModule>
  6. Start the servers
    chkconfig php-fpm on
    chkconfig httpd on
    service php-fpm start
    service httpd start
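Once both services are up, a quick way to confirm that .php requests are actually reaching PHP-FPM is to drop a tiny script into the DocumentRoot and fetch it. A sketch (the DocumentRoot is parameterized here and uses a scratch directory, so point it at your real one):

```shell
docroot=$(mktemp -d)          # stand-in for your real DocumentRoot in this sketch
echo '<?php echo "fpm-ok"; ?>' > "$docroot/probe.php"
# With the vhost serving $docroot, this should print fpm-ok, not the raw source:
# curl -s http://localhost/probe.php
cat "$docroot/probe.php"
```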

Scalr + Ubuntu 11.10 Installation


OS: Ubuntu Server 11.10 Oneiric Ocelot
Scalr Version: scalr-2.5.r6086
Application Folder: /var/www/app
Application VHost: scalr.local


  1. Install required packages
    apt-get install apache2-mpm-prefork php5 php5-mysql php5-curl php5-mcrypt php5-snmp php-pear rrdtool librrd-dev libcurl4-openssl-dev mysql-server snmp libssh2-php apparmor-utils
  2. Unpack the scalr application files. This assumes that the scalr package is in the /tmp folder
    cd /tmp
    tar xvfz scalr-2.5.r6086.tar.gz
    mv scalr-2.5.r6086/app /var/www
    chown root.www-data /var/www/app -R
    chmod g+w /var/www/app/etc/.cryptokey
    chmod g+w /var/www/app/cache -R
  3. Before we proceed, let's fix some code. This will resolve Bind DNS issues. Comment out the following code in /var/www/app/src/Scalr/Net/Dns/Bind/RemoteBind.php (lines 36-37). The commented code should look like:
    //                        if (count($this->zonesConfig) == 0)
    //                                throw new Exception("Zones config is empty");
  4. Setup MySQL
    mysql -uroot -p -e 'create database scalr; grant all on scalr.* to scalr@localhost identified by "<scalrpassword>";flush privileges;'
    cat /tmp/scalr-2.5.r6086/sql/structure.sql | mysql -uscalr -p scalr
    cat /tmp/scalr-2.5.r6086/sql/data.sql | mysql -uscalr -p scalr
  5. Tell scalr how to connect to MySQL by modifying /var/www/app/etc/config.ini. The [db] part of that file should look similar to:
    host = "localhost"
    name = "scalr"
    user = "scalr"
    pass = "<scalrpassword>"

    Note: The pass parameter should reflect the same password stated in the previous step (step 4)
  6. Setup and enable the Apache VHost
    cat <<EOF> /etc/apache2/sites-available/scalr
    <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            ServerName scalr.local
            DocumentRoot /var/www/app/www
            <Directory /var/www/app/www>
                    Options Indexes FollowSymLinks MultiViews
                    AllowOverride All
            </Directory>
    </VirtualHost>
    EOF
    a2ensite scalr
  7. Install additional PHP modules
    pecl install rrd
    echo '' >  /etc/php5/apache2/conf.d/rrd.ini
    pecl install pecl_http
    echo '' >  /etc/php5/apache2/conf.d/http.ini
    a2enmod rewrite
    service apache2 restart
  8. At this point, we can check whether our environment has all the Apache and PHP modules required to run scalr. Point your browser to http://scalr.local/testenvironment.php. Note that scalr.local is a local domain, so make the necessary changes in your own DNS resolvers or your workstation's /etc/hosts.
  9. Cron jobs
    cat <<EOF> /etc/cron.d/scalr 
    */2 * * * *  root /usr/bin/php -q  /var/www/app/cron-ng/cron.php --Poller
    * * * * *  root /usr/bin/php -q /var/www/app/cron/cron.php --Scheduler
    */10 * * * *  root /usr/bin/php -q  /var/www/app/cron/cron.php --MySQLMaintenance
    * * * * *  root /usr/bin/php -q  /var/www/app/cron/cron.php --DNSManagerPoll
    17 5 * * * root /usr/bin/php -q  /var/www/app/cron/cron.php --RotateLogs
    */2 * * * *  root /usr/bin/php -q  /var/www/app/cron/cron.php --EBSManager
    */20 * * * *  root /usr/bin/php -q  /var/www/app/cron/cron.php --RolesQueue
    */5 * * * *  root /usr/bin/php -q  /var/www/app/cron-ng/cron.php --DbMsrMaintenance
    */2 * * * *  root /usr/bin/php -q  /var/www/app/cron-ng/cron.php --Scaling
    */5 * * * *  root /usr/bin/php -q  /var/www/app/cron/cron.php --DBQueueEvent
    */2 * * * *  root /usr/bin/php -q  /var/www/app/cron/cron.php --SzrMessaging
    */4 * * * *  root /usr/bin/php -q  /var/www/app/cron/cron.php --RDSMaintenance
    */2 * * * *  root /usr/bin/php -q  /var/www/app/cron/cron.php --BundleTasksManager
    * * * * *  root /usr/bin/php -q  /var/www/app/cron-ng/cron.php --ScalarizrMessaging
    * * * * *  root /usr/bin/php -q  /var/www/app/cron-ng/cron.php --MessagingQueue
    */2 * * * *  root /usr/bin/php -q  /var/www/app/cron-ng/cron.php --DeployManager
  10. Bind
    apt-get install bind9
    chmod g+w /etc/bind/named.conf
    echo 'include "/var/named/etc/namedb/client_zones/zones.include";' >> /etc/bind/named.conf
    mkdir -p /var/named/etc/namedb/client_zones
    chown root.bind /var/named/etc/namedb/client_zones
    chmod 2775 /var/named/etc/namedb/client_zones
    # New domains will go to this file
    echo ' ' > /var/named/etc/namedb/client_zones/zones.include
    chown root.bind /var/named/etc/namedb/client_zones/zones.include
    chmod g+w /var/named/etc/namedb/client_zones/zones.include
    # Put Bind in apparmor complain mode. This will allow Bind to include zones.include as mentioned above. You may want to set up a more secure configuration later
    aa-complain /usr/sbin/named
    # Restart
    service bind9 restart
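For reference, the entries that end up in zones.include are ordinary BIND zone stanzas, one per hosted domain. A hypothetical example of what one looks like (the domain and zone-file names are placeholders, and it is written to a scratch file here rather than the live include):

```shell
# Append a sample zone stanza of the kind that accumulates in zones.include
cat <<'EOF' >> /tmp/zones.include.demo
zone "example-farm.net" {
    type master;
    file "/var/named/etc/namedb/client_zones/example-farm.net.db";
};
EOF
grep 'type master' /tmp/zones.include.demo
```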

Next Steps

  1. Login as Scalr Admin
    Email: admin
    Password: admin
    Note: When logging in as admin, you may see an “Insufficient permissions” error message. I have no idea how to fix that, but you may ignore that error message.
  2. Change Admin password (upper right corner of the screen)
  3. Change Core settings
    Settings->Core settings
  4. Create a scalr user. Then login as that user to create your first server farm
  5. Create your first server farm as described in the Getting Started Guide