Advanced tasks on Eucalyptus


List of EC2 commands


Virtual cluster scripts; documentation is at $VIRTUALCLUSTER_HOME/docs/README
      (skip the sections *On other systems* and *Setting credentials*), where:
      echo $VIRTUALCLUSTER_HOME
      /global/common/carver/tig/virtualcluster/0.2.2

  1. Log into carver.nersc.gov
    change your shell to bash, or python will crash
  2. Load the Eucalyptus tools + virtual cluster tools, order of loading matters
    module load tig virtualcluster euca2ools  python/2.7.1  screen
  3.  set up system variables & your credentials stored in your 'eucarc' script:
    source ~/key-euca2-balewski-x509/eucarc     (for bash)
    make sure EUCA_KEY_DIR is set properly
  4.  create and format EBS volume:
    ec2-create-volume --size 5 --availability-zone euca
    VOLUME vol-82DB0796 5 euca creating 2010-12-17T20:24:45+0000
    1. check the volume is created
      euca-describe-volumes
      VOLUME vol-82DB0796 5 euca available 2010-12-17T20:24:45+0000
    2. Create an instance:  euca-run-instances -k balewski-euca emi-39FA160F  (Ubuntu)
      and check it runs:  euca-describe-instances | sort -k 4 | grep run
    3. STAR VM w/ SL10k: euca-run-instances -k balewski-euca -t c1.xlarge emi-48080D8D
    4. STAR VM w/ SL11a: euca-run-instances -k balewski-euca -t c1.xlarge emi-6F2A0E46
    5. STAR VM w/ SL11b: euca-run-instances -k balewski-euca -t c1.xlarge emi-FA4D10D5
    6. STAR VM w/ SL11c: euca-run-instances -k balewski-euca -t c1.xlarge emi-6E5B0E5C --addressing private
    7. small instance   euca-run-instances -k balewski-euca  emi-1CF115B4    content: 1 core, 360 MB of disk space
    8. Attach EBS volume to this instance : euca-attach-volume -i  i-508B097C -d /dev/vdb vol-82DB0796
      euca-attach-volume -i <instance-id> -d /dev/vdb <volumeid>
    9. check attachment worked out: euca-describe-volumes  vol-830F07A0
      VOLUME vol-830F07A0 145 euca in-use 2011-03-16T19:09:49.738Z
      ATTACHMENT vol-830F07A0 i-46740817 /dev/vdb 2011-03-16T19:21:21.379Z
    10. ssh to this instance and format the EBS volume: ssh -i ~/key-euca2-balewski-x509/balewski-euca.private  root@128.55.70.203
    11. yes | mkfs -t ext3 /dev/vdb
      mkdir /apps
      mount /dev/vdb /apps
    12. terminate this instance : euca-terminate-instances    i-508B097C  
    13. to terminate all instances: euca-terminate-instances $(euca-describe-instances |grep i-|cut -f 2)
  5. re-mount an already formatted EBS disk on a single node:
    1. start the VM, attach the volume
    2. ssh to the VM, then: mkdir /apps; mount /dev/vdb /apps
  6. to mount a 2nd EBS volume on the same machine, first format it as above, then attach it with a different device (/dev/vdc) and mount it at a different mount point (e.g. /someName)
  7. setup & deploy VM cluster using a common EBS volume
    1. Create a user.conf on your local machine and edit it appropriately, following $VIRTUALCLUSTER_HOME/docs/sample-user.conf
      cp  ...sample-user.conf  /global/u2/b/balewski/.cloud/nersc/user.conf , then set EBS_VOLUME_ID=vol-82DB0796 properly
    2. export CLUSTER_CONF=/global/u2/b/balewski/.cloud/nersc/user.conf
    3.  launch your 3-node cluster; it will be named 'balewski-cent', do: vc-launcher newCluster 3
    4. after many minutes, check that the number of launched instances matches; only the head node will have a routable IP, do: euca-describe-instances
    5. ssh to the head node, do  ssh -i ~/key-euca2-balewski-x509/balewski-euca.private root@128.55.56.49
    6. ssh to a worker node from the head node, do : ssh root@192.168.2.132
    7. verify the EBS disk is visible, do : cd /apps/; ls -l
  8. add nodes to existing cluster : vc-launcher addNodes 4
  9. terminate cluster, do : vc-launcher  terminateCluster balewski-cent
  10. List of local IPs:    cat ~/.cloud/nersc/machineFile | sort -u ;    global head IP: cat ~/.cloud/nersc/.vc-private-balewski-cent
  11. Change the type of VMs added to the cluster:
    1. copy the full config: cp /global/common/carver/tig/virtualcluster/0.2.2/conf/cluster/cluster.centos.conf /global/homes/b/balewski/.cloud/nersc
    2. redefine INSTANCE_TYPE=c1.xlarge, IMAGE_ID=emi-5B7B12EE
    3. in your user.conf:
      1. remove the CLUSTER_TYPE line
      2. add CLUSTER_CONF=/global/homes/b/balewski/.cloud/nersc/cluster.centos.conf
    4. vc-launcher addNodes 4
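
The terminate-all one-liner in step 13 above works by scraping instance IDs out of euca-describe-instances output. A minimal sketch of that extraction, run against a made-up sample of the tab-separated output (the instance IDs and columns here are hypothetical):

```shell
# Hypothetical sample of tab-separated euca-describe-instances output;
# the real text comes from the Eucalyptus tools.
sample="$(printf 'RESERVATION\tr-1234\tbalewski\tdefault\nINSTANCE\ti-508B097C\temi-39FA160F\trunning\nINSTANCE\ti-46740817\temi-39FA160F\trunning')"

# Same pipeline as the terminate-all one-liner: keep lines that mention an
# instance id, then take the 2nd tab-separated field.
ids=$(printf '%s\n' "$sample" | grep i- | cut -f 2)
echo "$ids"
```

With the real tools you would then pass the collected IDs to euca-terminate-instances, as in step 13.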

Troubleshooting:

  1. the version of python should be at least 2.5; to check which python is used, type:
    which python
    /usr/bin/env python

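The python-version requirement above can be checked mechanically. A sketch, assuming GNU sort with the -V option is available (nothing here comes from the virtual cluster tools themselves):

```shell
# Find the python that is first on PATH and print its major.minor version.
pybin=$(command -v python || command -v python3)
pyver=$("$pybin" -c 'import sys; print("%d.%d" % sys.version_info[:2])')

# sort -V orders version strings numerically; if 2.5 sorts first (or ties),
# then pyver >= 2.5 and the tools should work.
lowest=$(printf '%s\n2.5\n' "$pyver" | sort -V | head -n 1)
if [ "$lowest" = "2.5" ]; then
    echo "python $pyver at $pybin is new enough"
else
    echo "python $pyver is too old; try: module load python/2.7.1"
fi
```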
Untested instructions for setting up NFS on a worker node

To do this manually, you can follow the standard Linux distribution
instructions. Here are some high-level steps based on how the
virtual cluster scripts do it for CentOS.
On the master

a) You will need an entry in /etc/exports for each partition you expect to serve to
 the worker nodes (you should already see an entry for /apps if you followed the steps above)
b) Then restart nfs

$  service nfs restart

On each worker
a) mkdir /data
b) mount <master IP>:/data /data
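
For step (a) on the master, the entries live in /etc/exports, one line per served partition. A sketch, assuming /apps and /data are the partitions and 192.168.2.0/24 is the private cluster network (the network range and export options are assumptions, not taken from the virtual cluster scripts):

```
# /etc/exports on the master -- hypothetical network range and options
/apps  192.168.2.0/24(rw,no_root_squash,sync)
/data  192.168.2.0/24(rw,no_root_squash,sync)
```

After editing the file, restart NFS as shown above (or re-export with exportfs -ra).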

Web page monitoring deployed VMs, updated every 15 minutes:
http://portal.nersc.gov/project/magellan/eucalyptus/instances.txt

Can you explain the meaning of every line?
 node=c0501
node name, eucalyptus runs on nodes c0501 through c0540

mem=24404/24148
memory total/free (in MB)

disk=182198/181170
space total/free (units still to be checked, as is which disk space is actually listed here)

cores=8/6
cores total/free
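
The fields above can be pulled apart with standard text tools. A sketch that parses one hypothetical instances.txt record assembled from the example fields (real records may be laid out differently):

```shell
# One made-up record using the fields explained above:
# node name, mem total/free (MB), disk total/free, cores total/free.
line='node=c0501 mem=24404/24148 disk=182198/181170 cores=8/6'

# Split the key=value pairs onto separate lines, then split total/free on '/'.
free_mem=$(printf '%s\n' "$line" | tr ' ' '\n' | grep '^mem=' | cut -d= -f2 | cut -d/ -f2)
free_cores=$(printf '%s\n' "$line" | tr ' ' '\n' | grep '^cores=' | cut -d= -f2 | cut -d/ -f2)
echo "node c0501: $free_cores free cores, $free_mem MB free"
```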
