ssh pdsf.nersc.gov

STAR disks:

0.5 TB pdsf.nersc.gov:/eliza17/star/pwg/starspin/balewski/

1.5 TB /eliza14/star/pwgscr

STAR software-specific instructions for Eucalyptus

Matt started a problem-tracking page:

https://docs.google.com/document/d/180dwYTO3iBB42DbmD7zcTKUfU_CZpY903TfJM7udTIc/edit?hl=en&authkey=CMvVt8AK&pli=1#

e-mail to NERSC: Consult <consult@nersc.gov>

Quota on /project disk, from Eric

You can use prjquota to check the quotas on NGF (/project) like this:
pdsf4 88% prjquota star
           ------ Space (GB) -------     ----------- Inode -----------
 Project    Usage    Quota   InDoubt      Usage      Quota     InDoubt
--------   -------  -------  -------     -------    -------    -------
    star      1011     1536        1      842365    1000000       1075
pdsf4 89%

So STAR has a quota of 1.5TB now.

Alternatives to scp

STAR users have access to /project/projectdirs/star, and this area
is visible from all NERSC systems (Carver, PDSF, and the data
transfer nodes). The best way to transfer data from BNL to the project area
is through the data transfer nodes:
http://www.nersc.gov/nusers/systems/datatran/
but there are many other options.
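
For example, a copy from RCF at BNL into the project area through a data transfer node could look like the sketch below; the hostname dtn01.nersc.gov, the user name, and the source path are placeholders, not values taken from this page:

# run from an RCF node at BNL; destination host/user/path are examples only
scp /path/to/somefile.root \
    myuser@dtn01.nersc.gov:/project/projectdirs/star/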

From Matt - how to add a crontab job

Here are some commands for inserting things into cron.
There is a directory called /etc/cron.daily where you can put scripts that will run once per day.
For instance, you can put a controller script doResetDB.sh in that directory that runs resetDB.sh
with the right arguments. Make sure to make it executable.
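
A minimal sketch of such a controller script, assuming resetDB.sh lives in /root/scripts (the path, arguments, and log file below are examples, adjust them to the real ones):

#!/bin/sh
# /etc/cron.daily/doResetDB.sh
# thin wrapper so run-parts can call the real reset script once a day
/root/scripts/resetDB.sh >> /var/log/resetDB.log 2>&1    # add the right arguments here

and make it executable with: chmod +x /etc/cron.daily/doResetDB.sh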

You can control when the script will run by editing the line in /etc/crontab that
says "run-parts /etc/cron.daily". I didn't look at any of your machines, but the ones I have are
set up to run at 4:02 every morning.
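
On a typical Red Hat/Scientific Linux box the relevant /etc/crontab entry looks roughly like this (the 02 4 fields match the 4:02 a.m. time mentioned above; check your own /etc/crontab for the exact line):

02 4 * * * root run-parts /etc/cron.daily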

Controlling this behavior is as easy as moving the script in and out of the /etc/cron.daily directory.

Transport of DB-snapshot file from RCF to carver

Hi Jan,
In my home directory on carver, there is a script called doDownloads.sh. This script is what needs to 
be run. Right now it runs on cvrsvc03. It sleeps for 24 hours, then runs a script called 
downloadSnapshot.sh in the same location.

Best,
Matt
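
Based on that description, doDownloads.sh is essentially a wrapper that sleeps for a day and then calls the download script, presumably in a loop since it keeps running on cvrsvc03; a minimal sketch (the exact script contents were not shown) could be:

#!/bin/sh
# sketch of what doDownloads.sh is described as doing:
# wait 24 hours, then refresh the DB snapshot
cd "$HOME"
while true; do
    sleep 86400               # 24 hours
    ./downloadSnapshot.sh
done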


Direct download of DB snapshot from RCF

Assuming you have curl installed, here is a safe one-liner to
download snapshots:

curl -s --retry 60 --retry-delay 60 http://www.star.bnl.gov/dsfactory/ \
     --output snapshot.tgz

It will automatically retry 503 errors 60 times, waiting 60 seconds
between attempts, so it should be safe enough to add to VMs directly.

"-s" means silent => no output at all. You may want to remove this
switch while you test.
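
Putting the pieces together, a small download script for a VM could look like this sketch; the target directory and the untar step are assumptions, not part of the recipe above:

#!/bin/sh
# fetch the latest DB snapshot from RCF and unpack it
cd /var/lib/star-db-snapshot || exit 1          # target directory is an example
curl --retry 60 --retry-delay 60 \
     http://www.star.bnl.gov/dsfactory/ --output snapshot.tgz \
  && tar xzf snapshot.tgz                       # add -s to curl once it works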

Share data with STAR members

setfacl -m g:rhstar:x  /global/scratch/sd/balewski - this is dangerous
setfacl -R -m g:rhstar:rX  /global/scratch/sd/balewski/2011w
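
The first command grants the rhstar group only execute (directory traversal) permission on the top-level directory; the second recursively grants read plus X, where the capital X applies execute only to directories (and files already executable). You can verify the result with getfacl:

getfacl /global/scratch/sd/balewski/2011w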

How to start VM with STAR environment

1) copy my setup code from carver.

  • SVN @ NERSC

http://www.nersc.gov/nusers/systems/servers/cvs.php


  • Running Interactive Jobs on Carver
    You can run an xterm in batch by doing a "qsub -I -q regular ...."

http://www.nersc.gov/nusers/systems/carver/running_jobs/interactive.php
Note that interactive jobs do not have to go to the interactive queue.

E.g., inside screen, run this command (1 node, 1 core):

qsub -I -V -q interactive -l nodes=1:ppn=1 

This brings you into another shell which is treated as batch but is interactive.
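
A complete interactive session might then look like this sketch (the walltime value is an example; Carver's defaults may differ):

screen -S star                        # keep the session alive if ssh drops
qsub -I -V -q interactive -l nodes=1:ppn=1,walltime=00:30:00
# ... work in the batch-allocated shell ...
exit                                  # release the node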

Misc info about resources at NERSC/PDSF
https://newweb.nersc.gov/users/computational-systems/pdsf/using-the-sge-batch-system/i-o-resources/


  • Reboot VM
    From inside: shutdown -r now
    From outside: euca-reboot-instances i-446F07EE