...

  • Problem with large (~0.5+ GB) file transfers: there are 2 types of disks:
    • local volatile /mnt of size ~140 GB
    • permanent EBS storage (size ~$$$)
      scp of a binary (xxx.gz) to the EBS disk resulted in corruption (gunzip would complain). Once the file size was off by 1 bit (out of 0.4 GB). It was random; multiple transfers would succeed after several trials. If multiple scp transfers ran simultaneously it got worse.
      Once I changed the destination to the /mnt disk and did one transfer at a time, all problems were gone - I scp'd 3 files of 1 GB without a glitch. Later I copied the files from /mnt to the EBS disk (took ~5 minutes per GB). A checksum-based check is sketched below.
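One way to catch this kind of silent corruption is to compare checksums on both ends of the transfer. The helper below is only an illustration (not part of the original workflow); the file path in the usage example is hypothetical.

    #!/usr/bin/env python
    # Illustrative helper: compute the MD5 checksum of a large file in chunks,
    # so it can be compared against `md5sum` output on the remote host.
    import hashlib
    import sys

    def md5_of_file(path, chunk_size=1 << 20):
        """Return the hex MD5 digest of `path`, reading ~1 MB at a time."""
        h = hashlib.md5()
        with open(path, 'rb') as f:
            while True:
                chunk = f.read(chunk_size)
                if not chunk:
                    break
                h.update(chunk)
        return h.hexdigest()

    if __name__ == '__main__':
        # Example (hypothetical path): python md5check.py /mnt/part3.gz
        print(md5_of_file(sys.argv[1]))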

Nov 14: transfer of 1GB from rcf <--> Amazon takes ~5 minutes.

Launching nodes

Nov 13:
  • Matt's customized Ubuntu w/o STAR software: 4-6 minutes, the smallest machine, $0.10
  • default public Fedora from EC2: ~2 minutes
  • launching a Cloudera cluster of 1+4 or 1+10 nodes seems to take a similar time of ~5 minutes

...

Make sure to assign the proper availability zone if you use an EBS disk.
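As an illustration of pinning a new instance to the zone of an existing EBS volume, here is a minimal sketch using the boto EC2 API; the notes do not say which tool was actually used, and the region and zone names below are assumptions.

    #!/usr/bin/env python
    # Hedged sketch: launch one m1.small instance in a specific availability
    # zone so it can attach an EBS volume living in that zone.
    import boto.ec2

    # Credentials are taken from the environment / boto config (assumption).
    conn = boto.ec2.connect_to_region('us-east-1')
    reservation = conn.run_instances(
        'ami-6159bf08',             # AMI mentioned later in these notes
        instance_type='m1.small',
        placement='us-east-1a',     # must match the zone of the EBS volume
    )
    print(reservation.instances[0].id)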

Computing speed

Task description

I exercised the Cloudera AMI package, requesting 1 master + 10 nodes. The task was to compute PageRank for a large set of interlinked pages.
I was given a dump of all Wikipedia pages HM5,6 in the format:
<page><title>The Title</title><text>The page body</text></page>, one line of text per page; the (human-typed) content was extremely non-homogeneous, multilingual, with many random characters and typos.
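For illustration only (this is not the author's init code), one line of the dump could be parsed roughly like this, assuming well-formed <title>/<text> tags and MediaWiki-style [[...]] links:

    # Hedged sketch of parsing one one-page-per-line record into a title
    # plus its outgoing link targets. Real pages are messy, so this is
    # illustrative rather than robust.
    import re

    PAGE_RE = re.compile(r'<title>(.*?)</title><text>(.*?)</text>', re.DOTALL)
    LINK_RE = re.compile(r'\[\[([^\]|#]+)')   # target part of [[target|label]]

    def parse_line(line):
        """Return (title, list of outgoing link titles) or None if malformed."""
        m = PAGE_RE.search(line)
        if not m:
            return None
        title, body = m.group(1), m.group(2)
        links = [t.strip() for t in LINK_RE.findall(body)]
        return title, links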
I wrote 4 Python string-processing functions:

  1. init, converting the input text to <key,value> format (my particular choice of meaning)
  2. mapp and reduce, run as a pair over multiple iterations
  3. finish, exporting the final list of pages ordered by page rank

I allocated the smallest (least expensive) CPUs at EC2: ami=ami-6159bf08, instance_type=m1.small. The goal was to perform all init + N_iter + finish steps using 10 nodes & the Hadoop framework; a sketch of the mapp/reduce pair appears below.
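The key/value layout below is an assumed format (title, rank, and a comma-separated outlink list, tab-separated), not the one actually used; it is only a sketch of what the mapp/reduce pair could look like as Hadoop Streaming scripts.

    #!/usr/bin/env python
    # mapper.py - hedged sketch of one PageRank iteration (map side).
    # Assumed input record:  title<TAB>rank<TAB>out1,out2,...
    import sys

    for line in sys.stdin:
        parts = line.rstrip('\n').split('\t', 2)
        if len(parts) < 3:
            continue
        title, rank, links = parts[0], float(parts[1]), parts[2].split(',')
        # Re-emit the link structure so the reducer can rebuild the record.
        print('%s\tLINKS\t%s' % (title, ','.join(links)))
        # Distribute this page's rank evenly over its outgoing links.
        if links and links[0]:
            share = rank / len(links)
            for target in links:
                print('%s\tRANK\t%.10f' % (target, share))

    #!/usr/bin/env python
    # reducer.py - hedged sketch of one PageRank iteration (reduce side).
    # Streaming delivers lines grouped by key; damping factor 0.85 is assumed.
    import sys

    D = 0.85
    current, links, rank_sum = None, '', 0.0

    def emit(title, links, rank_sum):
        print('%s\t%.10f\t%s' % (title, (1 - D) + D * rank_sum, links))

    for line in sys.stdin:
        title, tag, value = line.rstrip('\n').split('\t', 2)
        if title != current:
            if current is not None:
                emit(current, links, rank_sum)
            current, links, rank_sum = title, '', 0.0
        if tag == 'LINKS':
            links = value
        else:
            rank_sum += float(value)
    if current is not None:
        emit(current, links, rank_sum)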
Test 1: execution of the full chain (init + 2 iterations + finish), using a ~10% subset of Wikipedia pages (enwiki-20090929-one-page-per-line-part3)
  • the unzipped file had a size of 2.2 GB of ASCII and contained 1.1M lines (original pages), which pointed to 14M pages (outgoing links, including self references, non-unique). After the 1st iteration the number of lines (pages which are pointed to by any of the originals) grew to 5M pages and stabilized.
  • I brought the part3.gz file to the master node & unzipped it on the /mnt disk, which has enough space (took a few minutes)
  • I stuck to the default choice of 20 mappers and 10 reducers for every step (for the 10-node cluster); a sketch of the corresponding commands follows
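For completeness, here is a hedged sketch of how the HDFS copy and one streaming iteration could be driven from the master node; the streaming-jar location and HDFS paths are assumptions, not taken from these notes.

    #!/usr/bin/env python
    # Hedged driver sketch: copy the input into HDFS and run one Hadoop
    # Streaming iteration with 20 mappers / 10 reducers (the defaults above).
    import subprocess

    STREAMING_JAR = '/usr/lib/hadoop/contrib/streaming/hadoop-streaming.jar'  # assumed path

    def put_to_hdfs(local_path, hdfs_path):
        subprocess.check_call(['hadoop', 'fs', '-put', local_path, hdfs_path])

    def run_iteration(hdfs_in, hdfs_out):
        subprocess.check_call([
            'hadoop', 'jar', STREAMING_JAR,
            '-D', 'mapred.map.tasks=20',
            '-D', 'mapred.reduce.tasks=10',
            '-input', hdfs_in,
            '-output', hdfs_out,
            '-mapper', 'mapper.py',
            '-reducer', 'reducer.py',
            '-file', 'mapper.py',
            '-file', 'reducer.py',
        ])

    if __name__ == '__main__':
        put_to_hdfs('/mnt/part3.txt', 'wiki/iter0')   # hypothetical paths
        run_iteration('wiki/iter0', 'wiki/iter1')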
Timing results
  1. copy local file to HDFS: ~2 minutes
  2. init: 410 sec
  3. mapp/reduce iter 0: 300 sec
  4. mapp/reduce iter 1: 180 sec
  5. finish: 190 sec

Total time was 20 minutes; 11 CPUs were involved.