To make efficient use of our computational resources, we ask all group members to follow the guidelines below when planning and running simulations:
Please note that we attempted to implement the OpenPBS queue system on Cyrus1 and Quantum2 in December 2009; the system appeared to work in testing, but did not perform as desired when multiple jobs were submitted. Use of the queuing system on these clusters has been suspended until further notice.
Darius is our newest cluster, with 30 computational nodes, each with 8 virtual processors.
More details will be posted soon.
Hostname: cyrus1.csbi.mit.edu
Cyrus1 is a 24-node cluster, installed in December 2008.
Please note that the CSBi network, on which Cyrus1 is hosted, does not allow access from IP addresses external to MIT. For remote access to Cyrus1, see the MIT IST VPN site.
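Once you are on the MIT network (or connected through the VPN), you can log in over ssh in the usual way; substitute your own account name for the placeholder:
ssh <username>@cyrus1.csbi.mit.edu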
Node 10 (n010) is currently inaccessible.
The table below lists the memory and swap file size of each node on Cyrus1, along with the amount of space used and free in each node's local /scratch/ directory. This information is current as of April 1, 2010.
node | memory (MB) | swap (MB) | scratch used | scratch free
head | 7982        | 16386     | -            | -
n001 | 7982        | 8197      | 147G         | 52G
n002 | 7982        | 8197      | 51G          | 148G
n003 | 7982        | 8197      | 50G          | 149G
n004 | 7982        | 8197      | 102G         | 97G
n005 | 7982        | 8197      | 29G          | 170G
n006 | 7982        | 8197      | 11G          | 188G
n007 | 7982        | 8197      | 47G          | 152G
n008 | 7982        | 8197      | 66G          | 133G
n009 | 7982        | 8197      | 84G          | 115G
n010 | -           | -         | -            | -
n011 | 7982        | 8197      | 27G          | 171G
n012 | 7982        | 8197      | 126G         | 73G
n013 | 7982        | 8197      | 23G          | 176G
n014 | 7982        | 8197      | 50M          | 198G
n015 | 7982        | 8197      | 22G          | 177G
n016 | 7982        | 8197      | 33M          | 198G
n017 | 7982        | 8197      | 63G          | 135G
n018 | 7982        | 8197      | 60G          | 139G
n019 | 7982        | 8197      | 2.1G         | 196G
n020 | 7982        | 8197      | 61G          | 138G
n021 | 7982        | 8197      | 33M          | 198G
n022 | 7982        | 8197      | 90G          | 109G
n023 | 7982        | 8197      | 33G          | 166G
n024 | 7982        | 8197      | 29G          | 170G
Hostname: quantum2.mit.edu
Quantum2 is a 20-node cluster installed in October 2007, which features high-memory nodes for quantum-chemical calculations.
Nodes 4 and 14 (n004 and n014) are currently inaccessible, due to apparent hard disk problems.
The table below lists the memory and swap file size of each node on Quantum2, along with the amount of space used and free in each node's local /scratch/ directory. This information is current as of April 1, 2010.
node | memory (MB) | swap (MB) | scratch used | scratch free
n001 | 7970        | 16386     | 104G         | 90G
n002 | 7970        | 16386     | 163G         | 31G
n003 | 7970        | 16386     | 13G          | 181G
n004 | -           | -         | -            | -
n005 | 7970        | 16386     | 150G         | 44G
n006 | 7970        | 16386     | 138G         | 56G
n007 | 7970        | 16386     | 194G         | 0
n008 | 7970        | 16386     | 182G         | 12G
n009 | 7970        | 16386     | 155G         | 39G
n010 | 7970        | 16386     | 133G         | 61G
n011 | 7970        | 16386     | 143G         | 51G
n012 | 7970        | 16386     | 122G         | 72G
n013 | 7970        | 16386     | 153G         | 41G
n014 | -           | -         | -            | -
n015 | 7970        | 16386     | 6.8G         | 187G
n016 | 7970        | 16386     | 15G          | 180G
n017 | 3942        | 16386     | 22G          | 172G
n018 | 7970        | 16386     | 16G          | 178G
n019 | 16026       | 16386     | 39G          | 155G
Software Available:
A number of software packages have been installed in /home/gpw501/software/
Please contact gwood@mit.edu with any questions; a very limited usage guide is given below.
GAMESS-US quantum chemistry package.
Usage: /home/gpw501/software/gamess/rungms JOB VERNO NCPUS >& JOB.log &
JOB is the name of the input file (JOB.inp) to be executed
VERNO is the version number of GAMESS (01 at the time of writing)
NCPUS is the number of CPUs
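For example, to run an input file h2o.inp on 8 CPUs with version 01 (the job name and CPU count here are illustrative, not prescribed):
# job name "h2o" and CPU count are illustrative
/home/gpw501/software/gamess/rungms h2o 01 8 >& h2o.log &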
CPMD (QM/MM version) plane-wave/pseudopotential quantum Car-Parrinello (CP) and Born-Oppenheimer (BO) dynamics.
Usage: mpirun -n NCPUS cpmd.x JOB PATH-TO-PPs >& JOB.out &
mpirun should be set to /opt/openmpi/tcp-gnu/bin/mpirun in your .bashrc
NCPUS is the number of CPUs
JOB is the name of the job file to be executed
PATH-TO-PPs is the path to a pseudopotential library (see, e.g., /home/gpw501/software/cpmd/pseudos/)
JOB.out is the name of the output file
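As a sketch, one way to pick up the OpenMPI build noted above is to prepend it to your PATH in .bashrc; the job and output names below are illustrative:
# one possible .bashrc line (PATH method is an assumption) and an illustrative 8-CPU run
export PATH=/opt/openmpi/tcp-gnu/bin:$PATH
mpirun -n 8 cpmd.x job.inp /home/gpw501/software/cpmd/pseudos/ >& job.out &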
Amber10 molecular dynamics program.
See the Amber manual for usage; executables are located in /home/gpw501/software/amber10/exe/
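As an illustrative sketch (file names are hypothetical placeholders; consult the manual for the options your job needs), a typical sander run looks like:
# file names (md.in, prmtop, inpcrd, ...) are hypothetical placeholders
/home/gpw501/software/amber10/exe/sander -O -i md.in -o md.out -p prmtop -c inpcrd -r restrt &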
Gromacs molecular dynamics program.
Installed as root, so the MD binaries and the Gromacs tools are located in /usr/local/bin/
See the website for details: www.gromacs.org/
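As a sketch (file names are placeholders, and the exact commands depend on the installed Gromacs version), a typical run preprocesses the system with grompp and then launches mdrun:
# illustrative Gromacs 4-era workflow; file names are placeholders
grompp -f md.mdp -c conf.gro -p topol.top -o topol.tpr
mdrun -deffnm topol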
propka electrostatic and pKa computations for proteins.
Usage: see /home/gpw501/software/propka2.0src/README_PROPKA2.0