
I actually already have a NERSC cert; it's automatic with your account.


Test a gridftp transfer from the RCF site

  1. ssh rssh.rhic.bnl.gov
  2. ssh to a stargrid node (01, 02, 03, 04), e.g.  ssh stargrid01
  3. create a proxy on the stargrid node: myproxy-logon -s nerscca.nersc.gov -t 240
    myproxy-logon -s nerscca.nersc.gov
    Enter MyProxy pass phrase: [enter PDSF NIM password]
    A credential has been received for user balewski in /tmp/x509up_u3329
    
    You specify how long you want your cert to last with the '-t hours' option.  The limit is ~11 days (264 hours).
  4. verify proxy expiration time : grid-proxy-info
    Here's an example of a proxy on our ALICE installation with a CERN cert:
    subject  : /DC=ch/DC=cern/OU=Organic Units/OU=Users/CN=jporter/CN=482817/CN=Jeff Porter/CN=1269780641
    issuer   : /DC=ch/DC=cern/OU=Organic Units/OU=Users/CN=jporter/CN=482817/CN=Jeff Porter
    identity : /DC=ch/DC=cern/OU=Organic Units/OU=Users/CN=jporter/CN=482817/CN=Jeff Porter
    type     : Proxy draft (pre-RFC) compliant impersonation proxy
    strength : 512 bits
    path     : /tmp/x509up_u49514
    timeleft : 39:58:38  (1.6 days)
    
  5. transfer of a single file from RCF to PDSF:
stargrid01:~$ globus-url-copy file:///star/data13/Magellan/source gsiftp://pdsfdtn1.nersc.gov/project/projectdirs/star/target

globus-url-copy -r -p 2 file:///star/data13/Magellan/daq_manual/034/ gsiftp://dtn01.nersc.gov//global/scratch/sd/balewski/w2011/test34/

and in the reverse direction, from NERSC back to RCF:

[cvrsvc06 ~]$ globus-url-copy file:///global/scratch/sd/balewski/w2011/test2/big2GBfile.daq gsiftp://stargrid02.rcf.bnl.gov/star/data13/Magellan/new2GB.daq
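
A minimal wrapper sketch tying steps 3-5 together (bash; the 264-hour lifetime, the 12-hour threshold and the source/target paths are just illustrative - run it on a stargrid node):

#!/bin/bash
# renew the MyProxy credential only if the current proxy has less than 12 h left
if ! grid-proxy-info -exists -valid 12:00 2>/dev/null ; then
    myproxy-logon -s nerscca.nersc.gov -t 264     # prompts for the PDSF NIM password
fi

# single-file transfer from RCF to PDSF, as in step 5 (paths are placeholders)
globus-url-copy file:///star/data13/Magellan/source \
    gsiftp://pdsfdtn1.nersc.gov/project/projectdirs/star/target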

Remote execution of a command:

stargrid01:~/0x$ globus-job-run carvergrid.nersc.gov /bin/csh -c "ls /global/scratch/sd/balewski/w2011/"

globus-job-run pdsfgrid.nersc.gov /bin/csh -c "ls /home/ibhadju/temp/simPreStage/"

Synchronize a disk with HPSS

Upload a dedicated ssh key to http://www.globusonline.org/
Then from my laptop I did:

ssh -i .ssh/id_dsa-globus-balewski -t cli.globusonline.org endpoint-list

should show nothing.  But add the -p:

ssh -i .ssh/id_dsa-globus-balewski -t cli.globusonline.org endpoint-list -p

This should produce a public list that includes nersc#dtn & nersc#hpss.  (Always use -t to hide any passwords you might need.)
...
nersc#dtn
nersc#dtn_new
nersc#hpss
...

Activate both of those endpoints:

ssh -i .ssh/id_dsa-globus-balewski -t cli.globusonline.org endpoint-activate nersc#*

This will prompt for a pass phrase - use your NIM password.

Now list again; this should show the endpoints with a time associated with each (we can increase the time to 11 days as needed).
ssh -i .ssh/id_dsa-globus-balewski -t cli.globusonline.org endpoint-list
Warning: No xauth data; using fake authentication data for X11 forwarding.
nersc#dtn     11:59:28
nersc#dtn_new 11:59:28
nersc#hpss    11:59:28
Connection to cli.globusonline.org closed.


Now copy one file from disk to HPSS.  Here's my example.

balewski-mac:~ balewski$
echo "nersc#dtn/global/scratch/sd/balewski/w2011/starFebB/036/st_physics_12036062_raw_1010001.daq nersc#hpss/home/b/balewski/test1/aa.28MBb " | ssh -i .ssh/id_dsa-globus-balewski -t cli.globusonline.org transfer


balewski-mac:~ balewski$ ssh -i .ssh/id_dsa-globus-balewski -t cli.globusonline.org  details  72940a7a-49fd-11e0-8d56-123139054450
Warning: No xauth data; using fake authentication data for X11 forwarding.
Task ID          : 72940a7a-49fd-11e0-8d56-123139054450
Task Type        : TRANSFER
Parent Task ID   : n/a
Status           : SUCCEEDED
Request Time     : 2011-03-09 03:29:31Z
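
If you want to watch the transfer until it finishes, a rough polling sketch (same ssh key and Task ID as above; the 60-second interval is arbitrary):

while true; do
  ssh -i .ssh/id_dsa-globus-balewski cli.globusonline.org details 72940a7a-49fd-11e0-8d56-123139054450 | grep Status
  sleep 60
done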


If this all works, then try to sync a directory tree from a source to a destination
- here is my example from disk to HPSS:

echo "nersc#dtn/global/scratch/sd/balewski/w2011/starFebB/ nersc#hpss/home/b/balewski/w2011VM/  -r -s 1 " |  ssh -i .ssh/id_dsa-globus-balewski -t cli.globusonline.org transfer

You need the slashes at the end of the directory names.  The '-r' is recursive, and '-s' selects the sync level: '-s 1' uses file size to decide whether a file still needs to be transferred, while '-s 3' uses md5sums.  The details are:

http://www.globusonline.org/beyondbasics#chap-xfer
Let me know how that goes.  The same syntax could be used to copy files/directories between
RCF and NERSC once the RCF endpoint is configured.
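
For example, once the RCF endpoint exists, the RCF-to-NERSC version should look something like this (the porter#star4 endpoint is described below; the directory names are placeholders):

echo "porter#star4/star/data13/Magellan/some_dir/ nersc#dtn/global/scratch/sd/balewski/some_dir/ -r -s 1" | ssh -i .ssh/id_dsa-globus-balewski -t cli.globusonline.org transfer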

Q: what is the path to /star/data13/ visible to globus?

The path is via any of 3 machines: stargrid02, stargrid03, stargrid04.
I think stargrid04 is the one to use.  Therefore you will need to define
that endpoint.  I just made my definition public, so you should be able to
see porter#star4 as an endpoint.  You will need to activate it
- BUT here is the next step.  NERSC recognizes your cert, stargrid04 doesn't.
For STAR to recognize it, you need to make a
*** web request with that CERT ***.
To make a web request you need to load the cert into your browser.

OK, it's a pain, but at least it's a one-time operation.  I'll show you the steps.

1) on a stargrid node,  do the 'myproxy-logon -s nerscca.nersc.gov'
2) copy the proxy file, /tmp/x509up_whateveritisforyou, to your laptop.
3) change it to pk12 format:

openssl pkcs12 -in yourcopiedoverfile -export -out jans-nersc.p12

It will ask for a password to encrypt it.

4) import it into your browser.  I'll assume Firefox:

Preferences -> Advanced -> Encryption -> View Certificates, then click 'Import'.
It will ask for the password of the file
(it may first ask for your master password if you've locked the browser).

5) Once that's installed (it will be in the list in the preferences panel) go to:

https://vo.racf.bnl.gov:8443/voms/star/

It should see your cert (displayed on the webpage). From the page,
request to be a member of the STAR VO.  You'll need to reply to an automatic email,
but then Jerome will be notified.





Optimization of globus transfer, various suggestions

guc is short for globus-url-copy (acronym invented by Levente).
SRM is a software layer on top of it - it makes sure transfers complete.

There is a -r recursive copy option.

Or else you can do a loop in the shell (csh):

set daq = (1380001.daq 1380003.daq 1380004.daq)
foreach f ($daq)
  # copy each file with 25 parallel streams (paths elided in the original)
  globus-url-copy -p 25 file:///star/bla/bla/$f gsiftp://pdsfgrid.nersc.gov/bla/bla/bla/$f
end


If you specify a source ending in "XXX/", it will treat XXX as a directory
and transfer all files in it.

Use the -p option to get more parallel streams, but isn't 25 overdoing it? You get one file
cut into 25 buffer chunks, and all of this has to be re-assembled on arrival.
I'd try values within +-2 of 8.
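
For instance, the same single-file copy from step 5 with a more modest stream count (paths as before):

globus-url-copy -p 8 file:///star/data13/Magellan/source gsiftp://pdsfdtn1.nersc.gov/project/projectdirs/star/target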

Use a different end-point machine, dtn01.nersc.gov, to avoid STAR/ATLAS conflicts.

There is also the carvergrid.nersc.gov gatekeeper that I have access to.

To pull from carver, you need to load the osg module, and then you have the globus commands available.
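
Roughly (the exact module name on carver is an assumption - check with 'module avail'):

module avail 2>&1 | grep -i osg        # find the exact module name
module load osg                        # assumed name; adjust to whatever 'module avail' shows
which globus-url-copy globus-job-run   # confirm the globus commands are now on the PATH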

globus-job-run works like a remote ssh command.
Look at this page:
http://rcsg-gsir.imsb-dsgi.nrc-cnrc.gc.ca/globus_tutorial/#running
for globus-job-run, globus-job-submit and globus-job-get-output
(job is a command or script, not necessarily something that goes into
the batch system)
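
A rough sketch of the submit-then-fetch pattern described there (the gatekeeper and path follow the earlier examples; globus-job-submit prints a job-contact URL that you pass to the follow-up commands):

globus-job-submit carvergrid.nersc.gov /bin/csh -c "ls /global/scratch/sd/balewski/w2011/"

globus-job-status     <job-contact-URL-printed-by-submit>
globus-job-get-output <job-contact-URL-printed-by-submit>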

dtn01 is a data transfer node. It has a gsiftp server but no gatekeeper.
carvergrid has both, but dtn01 has 10Gb Ethernet and carvergrid only 1Gb.
You can use dtn01 for globus-url-copy and carvergrid for globus-job... type commands.

You can test a gridftp transfer from a stargrid node (01, 02, 03, 04) right now to either gsiftp://dtn01.nersc.gov/global/scratch/sd/balewski/... or to
gsiftp://pdsfdtn1.nersc.gov/project/projectdirs/star/...

Testing globus, by Jeff:

Ok.  Let's do a transfer.

First check for endpoints (assuming your Globus Online user name is the same as on your laptop; otherwise add yourname@ in front of cli.globusonline.org):

ssh -t cli.globusonline.org endpoint-list

should show nothing.  But add the -p:

ssh -t cli.globusonline.org endpoint-list -p

This should produce a public list that includes nersc#dtn & nersc#hpss.
(always just use the -t to hide any passwords you might need ...)

Activate both those endpoints,

ssh -t cli.globusonline.org endpoint-activate nersc#*

This will prompt for a pass phrase - use your NIM password.

Now list again; this should show the endpoints with a time associated with each
(we can increase the time to 11 days as needed).

ssh -t cli.globusonline.org endpoint-list




Jeff