Advanced tasks with Eucalyptus
Virtual cluster scripts: the documentation is in $VIRTUALCLUSTER_HOME/docs/README
(skip the sections 'On other systems' and 'Setting credentials'), where echo $VIRTUALCLUSTER_HOME gives:
/global/common/carver/tig/virtualcluster/0.2.2
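(Optional sanity check, not in the original notes: once the modules listed below are loaded, the README should be visible there:)
echo $VIRTUALCLUSTER_HOME
ls $VIRTUALCLUSTER_HOME/docs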
- Log into carver.nersc.gov
- Change your shell to bash, or python 2.4 will crash
- Load the Eucalyptus tools + virtual cluster tools; the order of loading matters:
module load tig virtualcluster euca2ools
- Set up system variables & your credentials stored in your 'eucarc' script:
source key-euca2-balewski-x509/eucarc (for bash)
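(Optional, not in the original notes: a quick way to see which Eucalyptus variables the eucarc script exported:)
env | grep EUCA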
Make sure EUCA_KEY_DIR is set properly.
- Create and format an EBS volume:
ec2-create-volume --size 5 --availability-zone euca
VOLUME vol-82DB0796 5 euca creating 2010-12-17T20:24:45+0000
- Check the volume is created:
euca-describe-volumes
VOLUME vol-82DB0796 5 euca available 2010-12-17T20:24:45+0000
- Create an instance: euca-run-instances -k balewski-euca emi-39FA160F
  and check it runs: euca-describe-instances
- Attach EBS volume to this instance : euca-attach-volume -i i-508B097C -d /dev/vdb vol-82DB0796
  (general form: euca-attach-volume -i <instance-id> -d /dev/vdb <volume-id>)
- ssh to this instance and format the EBS volume:
ssh -i key-euca2-balewski-x509/balewski-euca.private root@128.55.56.58
yes | mkfs -t ext3 /dev/vdb
mkdir /apps
mount /dev/vdb /apps
- Terminate this instance: euca-terminate-instances i-508B097C
- Check the volume still exists after the instance is terminated: euca-describe-volumes
- To re-mount an already formatted EBS disk on a single node: start a VM, attach the volume, ssh to the VM, then do: mkdir /apps; mount /dev/vdb /apps (see the sketch below)
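Putting that recipe together with the same commands as above (the key name, image id and volume id are the examples used on this page; use the instance id and IP that euca-describe-instances reports for your new VM):
euca-run-instances -k balewski-euca emi-39FA160F
euca-describe-instances
euca-attach-volume -i <instance-id> -d /dev/vdb vol-82DB0796
ssh -i key-euca2-balewski-x509/balewski-euca.private root@<instance IP>
mkdir /apps
mount /dev/vdb /apps     (no mkfs this time; the disk is already formatted)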
- Set up & deploy a VM cluster using a common EBS volume:
- Create a .conf file on the local machine and edit it appropriately, following $VIRTUALCLUSTER_HOME/docs/sample-user.conf:
cp ...sample-user.conf key-euca2-balewski-x509/virtualClusterA.conf
  and set properly: EBS_VOLUME_ID=vol-82DB0796
- export CLUSTER_CONF=/global/homes/b/balewski/key-euca2-balewski-x509/virtualClusterA.conf
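(Optional sanity check, not in the original notes: confirm the launcher will see the configuration you just edited:)
echo $CLUSTER_CONF
grep EBS_VOLUME_ID $CLUSTER_CONF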
- Launch your 3-node cluster; it will be named 'balewski-cent'. Do: vc-launcher newCluster 3
- After many minutes, check that the number of launched instances matches; only the head node will have a good (routable) IP. Do: euca-describe-instances
- ssh to the head node, do ssh -i key-euca2-balewski-x509/balewski-euca.private root@128.55.56.49
- ssh to a worker node from the head node, do : ssh root@192.168.2.132
- verify the EBS disk is visible, do : cd /apps/; ls -l
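To see how /apps is provided on a given node (standard Linux commands, not part of the original recipe), run on the head or a worker node:
df -h /apps
mount | grep /apps
On the head node /apps should be the EBS device (e.g. /dev/vdb as above); on worker nodes it is typically an NFS mount from the head node, if the cluster scripts set that up (see the NFS notes at the bottom of this page).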
- terminate cluster, do : vc-launcher terminateCluster balewski-cent
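To confirm the cluster's instances are really gone (same check as above, not an extra step in the original notes):
euca-describe-instances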
Troubleshooting:
- The version of python should be at least 2.5; to test it, type:
which python
/usr/bin/env python
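(Not in the original notes: the version can also be printed directly, without starting an interactive interpreter:)
python -V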
Untested instructions for how to set up NFS on a worker node:
For doing this manually, you can use the standard Linux distribution instructions on how to do that. Here are some high-level instructions based on how the virtual cluster scripts do it for CentOS.
On the master:
a) You will need an entry (in /etc/exports) for each slave partition you expect to serve to the worker nodes (you should see entries for /apps if the virtual cluster scripts set it up)
b) Then restart nfs: service nfs restart
On each worker:
a) mkdir /data
b) mount <master IP>:/data /data
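A minimal sketch of these steps, assuming the head node exports /apps and /data to the 192.168.2.x private network used in the examples above; the export options and network range are illustrative, not taken from the virtual cluster scripts:
On the master:
echo '/apps   192.168.2.0/24(rw,sync,no_root_squash)' >> /etc/exports
echo '/data   192.168.2.0/24(rw,sync,no_root_squash)' >> /etc/exports
exportfs -a             (re-read /etc/exports)
service nfs restart
On each worker:
mkdir /data
mount <master IP>:/data /data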