
Advanced tasks at Eucalyptus


List of EC2 commands


Virtual cluster scripts: the documentation is in $VIRTUALCLUSTER_HOME/docs/README
      (skip the sections "On other systems" and "Setting credentials"). On carver, echo $VIRTUALCLUSTER_HOME gives
      /global/common/carver/tig/virtualcluster/0.2.2

  1. Log into carver.nersc.gov
    change your shell to bash, otherwise python 2.4 is picked up and the tools will crash
  2. Load the Eucalyptus tools + virtual cluster tools (the order of loading matters):
    module load tig virtualcluster euca2ools
  3. Set up the system variables & your credentials stored in your 'eucarc' script:
    source key-euca2-balewski-x509/eucarc  (for bash)
    make sure EUCA_KEY_DIR is set properly (a combined example session for steps 1-3 is sketched after this list)
  4. Create and format an EBS volume:
    ec2-create-volume --size 5 --availability-zone euca
    VOLUME vol-82DB0796 5 euca creating 2010-12-17T20:24:45+0000
    1. Check that the volume has been created:
      euca-describe-volumes
      VOLUME vol-82DB0796 5 euca available 2010-12-17T20:24:45+0000
    2. Create an instance: euca-run-instances -k balewski-euca emi-39FA160F , then check that it is running: euca-describe-instances
    3. Attach the EBS volume to this instance: euca-attach-volume -i i-508B097C -d /dev/vdb vol-82DB0796
      (general form: euca-attach-volume -i <instance-id> -d /dev/vdb <volume-id>)
    4. ssh to this instance and format the EBS volume: ssh -i key-euca2-balewski-x509/balewski-euca.private  root@128.55.56.58
      yes | mkfs -t ext3 /dev/vdb
      mkdir /apps
      mount /dev/vdb /apps
    5. Terminate this instance: euca-terminate-instances i-508B097C
  5. Re-mount an already formatted EBS disk to a single node:
    1. start a VM and attach the volume
    2. ssh to the VM, then: mkdir /apps; mount /dev/vdb /apps
  6. To mount a 2nd EBS volume on the same machine, first format it as above, then attach it under a different device (/dev/vdc) and mount it at its own mount point (e.g. /someName); see the command sketch after this list
  7. setup & deploy VM cluster using a common EBS volume
    1. Create a .conf on local machine and edit appropriately, following $VIRTUALCLUSTER_HOME/docs/sample-user.conf 
      cp  ...sample-user.conf  key-euca2-balewski-x509/virtualClusterA.conf , set properly : EBS_VOLUME_ID=vol-82DB0796
    2. export CLUSTER_CONF=/global/u2/b/balewski/.cloud/nersc/user.conf
    3. Launch your 3-node cluster (it will be named 'balewski-cent'): vc-launcher newCluster 3
    4. After many minutes, check that the number of launched instances matches what you requested; only the head node will have a public (routable) IP: euca-describe-instances
    5. ssh to the head node: ssh -i key-euca2-balewski-x509/balewski-euca.private root@128.55.56.49
    6. ssh to a worker node from the head node: ssh root@192.168.2.132
    7. Verify the EBS disk is visible: cd /apps/; ls -l
  8. Terminate the cluster: vc-launcher terminateCluster balewski-cent
  9. List of local IPs: cat ~/.cloud/nersc/machineFile ; global head-node IP: cat ~/.cloud/nersc/.vc-private-balewski-cent
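
For reference, a compact example session covering steps 1-3 above (the credentials directory key-euca2-balewski-x509 is the one used in these notes, substitute your own; the last command is just one way to confirm the credentials are accepted and is not part of the original recipe):

  module load tig virtualcluster euca2ools      # order of loading matters
  source key-euca2-balewski-x509/eucarc         # sets the EC2/Eucalyptus variables (bash)
  echo $EUCA_KEY_DIR                            # should point at your credentials directory
  euca-describe-availability-zones              # optional sanity check of the credentials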
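
The command sketch referenced in step 6, for formatting and mounting a second EBS volume on the same instance; the instance and volume ids are placeholders, and /someName stands for whatever mount point you choose:

  ec2-create-volume --size 5 --availability-zone euca            # make the 2nd volume
  euca-attach-volume -i <instance-id> -d /dev/vdc <volume-id>    # attach it under a different device
  # then, on the instance itself:
  yes | mkfs -t ext3 /dev/vdc
  mkdir /someName
  mount /dev/vdc /someName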

Troubleshooting:

  1. The version of python should be at least 2.5; to test which interpreter is used, type:
    which python
    /usr/bin/env python
    (a quick version check is also sketched below)
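
To check the version number directly (python -V is the standard interpreter flag; this check is not part of the original note):

    python -V      # should report 2.5 or newer; python 2.4 will crash the tools (see step 1)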

Untested instructions for setting up NFS on a worker node

To do this manually, you can follow the standard Linux distribution instructions. Here are some high-level steps based on how the virtual cluster scripts do it for CentOS.

On the master

a) You will need an entry in /etc/exports for each partition you expect to serve to the worker nodes (you should already see an entry for /apps if the cluster scripts set it up); an example entry is shown below
b) Then restart nfs

$  service nfs restart
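
As an illustration, here is a minimal /etc/exports entry of the kind the virtual cluster scripts generate on the master; the 192.168.2.0/24 subnet and the export options are assumptions, so adapt them to your cluster's private network:

  # /etc/exports on the master
  /apps  192.168.2.0/24(rw,sync,no_root_squash)
  /data  192.168.2.0/24(rw,sync,no_root_squash)

After editing /etc/exports you can also re-export without a full restart:

$ exportfs -ra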

On each worker
a) mkdir /data
b) mount <master IP>:/data /data
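
A quick sanity check on a worker once the mount is in place (not part of the original recipe):

$ df -h /data     # the master's export should show up here
$ ls -l /data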