OS: CentOS 6.2 (simulation done in HPCloud instances)
Firewall: off
Gluster Setup: replica 2 across 4 hosts (similar to RAID 10)

  • Install Gluster. To get the latest version, I used the packages from upstream
    yum install compat-readline5-devel -y
     
    rpm -Uvh http://download.gluster.com/pub/gluster/glusterfs/LATEST/RHEL/glusterfs-core-3.2.6-1.x86_64.rpm
    rpm -Uvh http://download.gluster.com/pub/gluster/glusterfs/LATEST/RHEL/glusterfs-fuse-3.2.6-1.x86_64.rpm
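     
    # (Optional) sanity-check the install; exact version strings may differ on your setup
    rpm -qa | grep glusterfs
    glusterfs --version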
  • For configuration simplicity, let's add resolvable names for each server. Do this on all servers
    cat <<'EOF'>> /etc/hosts
    10.4.63.229 site1
    10.4.63.222 site2
    10.4.63.242 site3
    10.4.63.243 site4
    EOF
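     
    # Quick check that the names resolve from this host
    ping -c 1 site2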
  • Start glusterd on all hosts
    service glusterd start
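     
    # (Optional) have glusterd start on boot -- CentOS 6 uses SysV init scripts
    chkconfig glusterd on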
  • Enter the gluster console. You can do this on any of the hosts, but for now let's do it on the first host
    gluster
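     
    # Note: the commands below can also be run non-interactively from the shell
    # by prefixing them with 'gluster', e.g.:
    gluster peer status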
  • Let's check for other peers/hosts. It should display no connected peers since we are doing this for the first time
    peer status
  • Now, let's add our hosts to the trusted pool. We don't need to add the first host since we are already on it
    peer probe site2
    peer probe site3
    peer probe site4
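     
    # Verify that all peers joined the trusted pool
    peer status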
     
    # To reverse the effect of peer probe
    peer detach site2
    Each peer probe should reply with “Probe successful”; otherwise, check your logs for clues.
  • This is the part where the magic happens: it's time to create our volume. This volume setup is similar to RAID 10
    volume create testvolrep replica 2 transport tcp site1:/exp1 site2:/exp2 site3:/exp3 site4:/exp4
    It's important to note that the parameters after the hostnames (site1, site2, ...) are actual directory names that will be auto-created by Gluster on the respective hosts. To check for correct replication later, you can 'ls' the contents of these directories.
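    # With 'replica 2', bricks pair up in the order listed: (site1:/exp1, site2:/exp2)
    # and (site3:/exp3, site4:/exp4). Once the volume is mounted and in use, a file
    # written through the mount should appear on both bricks of the pair it hashes to, e.g.:
    ls /exp1   # run on site1; compare with 'ls /exp2' on site2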
  • Apply basic security
    volume set testvolrep auth.allow 10.4.63.*
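     
    # auth.allow also accepts a comma-separated list of addresses/wildcards,
    # e.g. (hypothetical second network):
    volume set testvolrep auth.allow 10.4.63.*,10.4.64.*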
  • Start the volume
    volume start testvolrep
  • Show volume information
    gluster> volume info
     
    Volume Name: testvolrep
    Type: Distributed-Replicate
    Status: Started
    Number of Bricks: 2 x 2 = 4
    Transport-type: tcp
    Bricks:
    Brick1: site1:/exp1
    Brick2: site2:/exp2
    Brick3: site3:/exp3
    Brick4: site4:/exp4
    Options Reconfigured:
    auth.allow: 10.4.63.*
  • Mounting the volume. On any of the hosts, you can mount the created volume (make sure the mount point directory exists first)
    mkdir -p /mnt/x
    mount -t glusterfs site1:/testvolrep /mnt/x
     
    # Or put it in /etc/fstab
    site1:/testvolrep /mnt/x glusterfs defaults,_netdev 0 0
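     
    # Quick smoke test (hypothetical file name): write through the mount, then check the bricks.
    # On a distributed-replicated volume the file lands on one replica pair,
    # so it will show up in either site1:/exp1+site2:/exp2 or site3:/exp3+site4:/exp4
    touch /mnt/x/hello.txt
    ls /exp1   # on site1 (and 'ls /exp3' on site3); the file appears in one pair and its mirror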
  • Adding more hosts. To add more hosts, you add bricks in multiples of the replica count you set for that volume. In our case it's 2
    volume add-brick testvolrep site5:/exp5 site6:/exp6
     
    # It's a good idea to re-layout after adding hosts
    volume rebalance testvolrep fix-layout start
     
    # Or, if you want to rebalance data distribution
    volume rebalance testvolrep migrate-data start
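     
    # Note: site5 and site6 must already be in the trusted pool (peer probe) before add-brick.
    # Progress of an ongoing rebalance can be checked with:
    volume rebalance testvolrep status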
  • Replacing a failed host. Say, for example, site6 has failed; we can replace it with another host (site7)
    volume replace-brick testvolrep site6:/exp6 site7:/exp7 start
    volume replace-brick testvolrep site6:/exp6 site7:/exp7 status
    volume replace-brick testvolrep site6:/exp6 site7:/exp7 commit
     
    # Then, migrate your data
    volume rebalance testvolrep migrate-data start
     
    # You can also trigger a self-heal from FUSE mount point (on your clients)
    find /mnt/x -noleaf -print0 | xargs --null stat >/dev/null 
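     
    # Note: site7 must already be peer-probed into the pool. If the migration needs
    # to be cancelled mid-way, replace-brick can be aborted:
    volume replace-brick testvolrep site6:/exp6 site7:/exp7 abort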
  • Creating other types of volumes
    # stripe
    volume create mystripevol1 stripe 2 transport tcp site1:/stripe1 site2:/stripe1
     
    # mirror
    volume create mymirrorvol1 replica 2 transport tcp site1:/rep1 site2:/rep2
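     
    # plain distributed (no replication or striping) -- hypothetical volume/brick names;
    # omit the 'replica'/'stripe' keyword and Gluster simply distributes files across the bricks
    volume create mydistvol1 transport tcp site1:/dist1 site2:/dist1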


Reference:
http://docs.redhat.com/docs/en-US/Red_Hat_Storage_Software_Appliance/3.2/html/User_Guide/index.html