This page is under construction

Executive Summary

Version 2 of the MIT core identity provider is based on version 2.1.x of Internet2's Shibboleth IdP package. Including the IdP software itself, the following major components are required:

  • Apache httpd 2.2 (from stock RHEL httpd RPM)
  • mod_ssl (from stock RHEL mod_ssl RPM)
  • Tomcat 6.0
  • JDK 6.0 (from Sun, plus enhanced JCE and security policy)
  • Shibboleth IdP 2.1
  • terracotta 3.1
  • MySQL 5.0 (from stock RHEL RPMs)
  • Shibboleth SP 2.3.x (from Internet2 RPMs)
  • Cams web application
  • cams-ldap (CAMS/LDAP integration)

In the configuration documented below, the Apache web server will listen on the following TCP ports:

  • 80 (HTTP)
  • 443 (SSL virtual host for HTTPS)
  • 8443 (SSL virtual host for SP's back-channel SOAP calls for attributes)

The terracotta server will listen on the following TCP ports (connections should only be allowed from the peer node(s)):

  • 9510 (client-to-server)
  • 9520 (JMX)
  • 9530 (server-to-server)

The MySQL server will listen on the following TCP port:

  • 3306

Note that the terracotta and MySQL listeners only need to accept connections from peer servers in the cluster, so these ports should be configured accordingly in the firewall.

The following certificates/keys need to be created:

  • MIT SSL server certificate (CN idp.mit.edu)
  • daemon keytab (i.e. daemon/idpe.mit.edu@ATHENA.MIT.EDU)
  • application client certificate (CN touchstone-cams.app.mit.edu)

The following log files will be used:

  • Apache httpd log files in /var/log/httpd/:
    • ssl_access_log
    • ssl_request_log
    • ssl_error_log
    • idp-attr-query_access_log
    • idp-attr-query_request_log
    • idp-attr-query_error_log
    • access_log
    • error_log
  • Shibboleth IdP log files in /usr/local/shibboleth-idp/logs/:
    • idp-process.log
    • idp-access.log
    • idp-audit.log
  • Tomcat logs in /usr/local/tomcat/logs/:
    • catalina.out
  • terracotta system logs in /usr/local/terracotta/logs/:
    • terracotta.log
    • run-dgc.cron.log
  • terracotta cluster logs in /usr/local/shibboleth-idp/cluster/:
    • client/logs-127.0.0.1/terracotta-client.log
    • server/logs/terracotta-server.log

SELinux

SELinux must run in Permissive mode. Otherwise, the Shibboleth SP Apache module will not be able to connect to the shibd socket, and mysqld will not be able to load in the shared library used by cams-ldap.

To set SELinux permissive mode at boot time, change the SELINUX setting in /etc/selinux/config:

SELINUX=permissive

To set permissive mode on the running system only:

# setenforce Permissive
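Before rebooting, it can be worth confirming the persistent setting as well as the running mode (getenforce). A minimal sketch of checking the config value, run here against a throwaway copy rather than the real /etc/selinux/config:

```shell
# Sanity check: parse the SELINUX= value from the config file. A throwaway
# copy is used here; point cfg at /etc/selinux/config on the real system.
cfg=/tmp/selinux-config-sample
printf 'SELINUX=permissive\nSELINUXTYPE=targeted\n' > "$cfg"

mode=$(grep '^SELINUX=' "$cfg" | cut -d= -f2)
echo "configured mode: $mode"
```

On the live system, getenforce should report Permissive after the setenforce command above.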

Firewall

Make sure that the additional ports used by the IdP are enabled in the firewall. Use the command "iptables --list -n --line-numbers" to determine the proper rule numbers; the following example assumes we are inserting rules beginning at number 36. Also replace 18.x.y.z with the appropriate IP address of the peer node in the cluster, not the local host.

# iptables --list -n --line-numbers
# iptables -I RH-Firewall-1-INPUT 36 -m state --state NEW -m tcp -p tcp --dport 8443 -j ACCEPT
# iptables -I RH-Firewall-1-INPUT 37 -m state --state NEW -m tcp -p tcp -s 18.x.y.z --dport 3306 -j ACCEPT
# iptables -I RH-Firewall-1-INPUT 38 -m state --state NEW -m tcp -p tcp -s 18.x.y.z --dport 9510 -j ACCEPT
# iptables -I RH-Firewall-1-INPUT 39 -m state --state NEW -m tcp -p tcp -s 18.x.y.z --dport 9520 -j ACCEPT
# iptables -I RH-Firewall-1-INPUT 40 -m state --state NEW -m tcp -p tcp -s 18.x.y.z --dport 9530 -j ACCEPT
# /etc/init.d/iptables save
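Because the starting rule number differs per host, it can help to generate the insert commands mechanically before running them. A sketch (nothing is applied here; the commands are only printed for review, and 18.x.y.z remains this page's placeholder for the peer node):

```shell
# Generate the iptables insert commands from a starting rule number and the
# peer node's IP; review the output before executing it.
start=36
peer=18.x.y.z
n=$start
cmds="iptables -I RH-Firewall-1-INPUT $n -m state --state NEW -m tcp -p tcp --dport 8443 -j ACCEPT"
for port in 3306 9510 9520 9530; do
    n=$((n + 1))
    cmds="$cmds
iptables -I RH-Firewall-1-INPUT $n -m state --state NEW -m tcp -p tcp -s $peer --dport $port -j ACCEPT"
done
echo "$cmds"
```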

Install and configure Apache httpd

We use the native Red Hat RPMs (httpd 2.2).

Install needed RPMs
  • Use stock httpd RPM install (standard NIST install)
  • Install mod_ssl and mod_auth_kerb RPMs:
    # yum install mod_ssl mod_auth_kerb
    
Configure

Current versions of the various httpd configuration files can be obtained in the touchstone locker, in /mit/touchstone/config/idp2-cams/httpd/.

  • Install the server certificate, key, and CA files in /etc/pki/tls/certs/ and /etc/pki/tls/private/, as appropriate, and make sure the paths are correct in ssl.conf and idp-attr-query.conf (see below). The key file should be readable by only the tomcat user, as the idp webapp also uses it.
  • In /etc/httpd/conf/httpd.conf, set ServerName:
    ServerName idp.touchstonenetwork.net:80
    
    and set the UseCanonicalName option to On:
    UseCanonicalName On
    
  • Disable the stock "Welcome" page by commenting out the lines in /etc/httpd/conf.d/welcome.conf
  • In /etc/httpd/conf.d/ssl.conf, set the SSLRandomSeed options:
    SSLRandomSeed startup file:/dev/urandom 1024
    SSLRandomSeed connect file:/dev/urandom 1024
    
    within the VirtualHost block, set the ServerName:
    ServerName idp.touchstonenetwork.net:443
    
    set the SSL cipher suite:
    SSLCipherSuite HIGH:MEDIUM:EXP:!aNULL:!SSLv2:+SHA1:+MD5:+HIGH:+MEDIUM:+EXP
    
    Install the server certificate, key, and CA files in /etc/pki/tls/certs/ and /etc/pki/tls/private/, as appropriate, and set the paths in ssl.conf:
    SSLCertificateFile /etc/pki/tls/certs/idp.touchstonenetwork.net-cert.pem
    SSLCertificateKeyFile /etc/pki/tls/private/idp.touchstonenetwork.net-key.pem
    SSLCertificateChainFile /etc/pki/tls/certs/EquifaxCA.pem
    SSLCACertificateFile /etc/pki/tls/certs/mitCAclient.pem
    
    set the SSL options:
    SSLOptions +StrictRequire
    
    configure custom logging:
    CustomLog logs/ssl_request_log \
        "%t %h %{HTTPS}x %{SSL_PROTOCOL}x %{SSL_CIPHER}x %{SSL_CIPHER_USEKEYSIZE}x %{SSL_CLIENT_VERIFY}x \"%r\" %>s %b \"%{Referer}i\" \"%{User-agent}i\""
    
    ensure that all access is via SSL:
    <Directory />
        SSLRequireSSL
    </Directory>
    
    ensure that all rewrite rules are inherited:
    RewriteEngine On
    RewriteOptions inherit
    
  • Install these additional conf files from the touchstone locker (/mit/touchstone/config/idp2-cams/httpd) in /etc/httpd/conf.d:
    • cams.conf
      This adds configuration to protect Cams application resources appropriately.
    • idp-attr-query.conf
      This sets up the vhosts for back-channel attribute queries on port 8443.
    • idp-rewrite.conf
      This adds various rewrite rules for compatibility, etc.
    • proxy_ajp.conf
      Configures the AJP proxy module for the idp and cams webapps (replaces version installed by httpd).
    • ssl.conf (see above)
    • welcome.conf (see above)
  • Install our standard robots.txt and favicon.ico files in /var/www/html. The robots.txt should disallow all access:
    User-agent: *
    Disallow: /
    
    Current versions of these files may be found in the touchstone locker, in /mit/touchstone/config/htdocs/.
  • Make sure httpd is started at boot time:
    # chkconfig httpd on
    
  • Add the following settings to the stock /etc/logrotate.d/httpd configuration file:
    daily
    rotate 100
    compress
    delaycompress
    
    This will cause the httpd log files in /var/log/httpd/ to be rotated daily and compressed, saving 100 days of old logs (in case we need them for quarterly metrics).
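With those settings added, the logrotate stanza would look roughly like this (the glob and the postrotate section shown here are from the stock RHEL file; verify against your copy):

```
/var/log/httpd/*log {
    daily
    rotate 100
    compress
    delaycompress
    missingok
    sharedscripts
    postrotate
        /sbin/service httpd reload > /dev/null 2>/dev/null || true
    endscript
}
```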

Install JDK and enhanced JCE

  • The IdP uses JDK 1.6; download and install the RPM from Sun, or use the version in the downloads directory in the touchstone locker (jdk-6uNN-linux-amd64.rpm, where NN is the update number):
    # rpm -Uvh jdk-6uNN-linux-amd64.rpm
    
  • To support additional cryptographic algorithms used by the IdP, download and install the Bouncy Castle JCE jar file (http://polydistortion.net/bc/index.html) in the lib/ext directory of the JRE (/usr/java/latest/jre/lib/ext/). For example:
    # cd /usr/java/latest/jre/lib/ext
    # cp /path/to/bcprov-jdk16-145.jar .
    
    (Replace the file version number as needed).
    Add it as a provider in the JRE's lib/security/java.security, e.g.:
    security.provider.9=org.bouncycastle.jce.provider.BouncyCastleProvider
    
    (Replace 9 with the next sequential provider number as needed).
  • We want to ensure that DNS lookups are not cached indefinitely. Set the networkaddress.cache.ttl property in java.security accordingly:
    networkaddress.cache.ttl=30
    
  • To support use of crypto key sizes larger than 2048 bits, we also add the Unlimited Strength Security Policy to the JVM. Download jce_policy-6.zip from the locker downloads directory, or from Sun (http://java.sun.com/javase/downloads/index.jsp, Other Downloads section at the bottom). Unzip the policy zip file and copy local_policy.jar and US_export_policy.jar into the JRE's lib/security directory (replacing the versions installed from the JDK RPM).
    # cd /tmp
    # unzip /path/to/jce_policy-6.zip
    # cd jce
    # cp *.jar /usr/java/latest/jre/lib/security/
    
  • For convenience, install shell profile scripts in /etc/profile.d that define JAVA_HOME, e.g. java.csh:
    setenv JAVA_HOME /usr/java/default
    if ( "${path}" !~ *${JAVA_HOME}/bin* ) then
        set path = ( ${JAVA_HOME}/bin $path )
    endif
    
    java.sh:
    export JAVA_HOME=/usr/java/default
    if ! echo $PATH | grep -q ${JAVA_HOME}/bin ; then
        export PATH=${JAVA_HOME}/bin:$PATH
    fi
    
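The two java.security edits above can be sketched as follows, applied to a throwaway copy (the real file is $JAVA_HOME/jre/lib/security/java.security, and provider slot 9 is only an example; use the next unused number in your file):

```shell
# Append the Bouncy Castle provider and the DNS cache TTL to a sample copy
# of java.security; the sample's existing provider.8 line is illustrative.
sec=/tmp/java.security.sample
printf 'security.provider.8=sun.security.smartcardio.SunPCSC\n' > "$sec"

echo 'security.provider.9=org.bouncycastle.jce.provider.BouncyCastleProvider' >> "$sec"
echo 'networkaddress.cache.ttl=30' >> "$sec"

grep -c '^security.provider' "$sec"
```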

Install Tomcat

  • Download the current Tomcat 6.0 binary distribution (tested with 6.0.20, available in /mit/touchstone/downloads/apache-tomcat-6.0.20.tar.gz), and install it under /usr/local:
    # cd /usr/local
    # tar xzf /path/to/apache-tomcat-6.0.20.tar.gz
    # rm -f tomcat
    # ln -s apache-tomcat-6.0.20 tomcat
    
  • Create the tomcat user, and change the ownership of the tomcat tree:
    # groupadd -g 52 tomcat
    # useradd -u 52 -g tomcat -c "Tomcat User" -d /usr/local/tomcat -M -s /sbin/nologin tomcat
    # chown -R tomcat:tomcat /usr/local/apache-tomcat-6.0.20
    
  • Install our version of conf/server.xml (from /mit/touchstone/config/idp2-core/tomcat), which properly configures the AJP connector on port 8009, and disables the HTTP connector on port 8080.
  • Edit /usr/local/tomcat/conf/context.xml, and add the following Resource element within the <Context> element:
         <Resource
              auth="Container"
              defaultAutoCommit="false"
              maxActive="5"
              maxIdle="2"
              maxWait="5000"
              name="jdbc/CAMS"
              removeAbandoned="true"
              removeAbandonedTimeout="30"
              type="javax.sql.DataSource"
              validationQuery="SELECT 1"
              testOnBorrow="true"
              testWhileIdle="true"
              timeBetweenEvictionRunsMillis="10000"
              minEvictableIdleTimeMillis="60000"
              username="camsusr"
              password="XXX"
              driverClassName="com.mysql.jdbc.Driver"
              url="jdbc:mysql://idp-cams-1:3306/cams"
         />
    
    Set the username and password attributes as needed. Note that the url must point at the host that is the MySQL master.
  • We will require the MySQL connector for Java; download and copy this into the tomcat library directory, e.g.:
    # cp /path/to/mysql-connector-java-5.1.12-bin.jar /usr/local/tomcat/lib/
    
  • Install the tomcat init script (from /mit/touchstone/maint/init/tomcat) in /etc/init.d/, and make sure tomcat is started at boot time:
    # chkconfig --add tomcat
    
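After installing our server.xml, a quick grep can confirm that the AJP connector is present and the HTTP connector is gone. Shown here against a small sample file; on the real system, point at /usr/local/tomcat/conf/server.xml:

```shell
# Sample standing in for /usr/local/tomcat/conf/server.xml
xml=/tmp/server.xml.sample
cat > "$xml" <<'EOF'
<Server port="8005" shutdown="SHUTDOWN">
  <Service name="Catalina">
    <!-- HTTP connector on 8080 intentionally omitted -->
    <Connector port="8009" protocol="AJP/1.3" redirectPort="8443" />
  </Service>
</Server>
EOF

# AJP on 8009 must be present; plain HTTP on 8080 must not be
grep -q 'port="8009"' "$xml" && echo "AJP connector: ok"
grep -q 'port="8080"' "$xml" || echo "HTTP connector: disabled"
```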

MySQL

We use the native Red Hat RPMs (5.0), part of the standard NIST install.

Database initialization

Start up the daemon, and secure the installation:

# /etc/init.d/mysqld start
# mysql_secure_installation

Respond to the prompts to set the root password, remove anonymous users, disallow remote root logins, and remove the test database.

Make sure the daemon starts at boot time:

# chkconfig mysqld on

We use master/slave replication, where all queries go against one MySQL master server (e.g. idp-cams-1), while the other server (e.g. idp-cams-2) operates in slave mode, i.e. with updates to the master replicated to the slave. Set up the master server first, before setting up replication.

To set up the Cams database, restore from the most recent good backup on to the master.

# mysql -u root -p < /path/to/most-recent-backup.sql

If initializing a new database for some reason, process the database schema file, a copy of which can be found in /mit/touchstone/config/idp2-cams/cams/camsdb.sql.

Grant tables

The grant tables will likely need to be adjusted when moving an existing database to a new server host, e.g. if the master or slave host names are changing.

The CAMS application will use the camsusr account to access the CAMS database; the Shibboleth IdP resolver will use the shibresolver account; the database backup cron job uses the backup account; the cams-ldap daemon uses the camsldap account. Create the following accounts as needed (replace <password> with the password for that account):

# mysql
mysql> GRANT ALL ON cams.* TO 'camsusr'@'localhost' IDENTIFIED BY '<password>';
Query OK, 0 rows affected (0.00 sec)

mysql> GRANT ALL ON cams.* TO 'camsusr'@'idp-cams-1.mit.edu' IDENTIFIED BY '<password>';
Query OK, 0 rows affected (0.00 sec)

mysql> GRANT ALL ON cams.* TO 'camsusr'@'idp-cams-2.mit.edu' IDENTIFIED BY '<password>';
Query OK, 0 rows affected (0.00 sec)

mysql> GRANT SELECT ON cams.* TO 'shibresolver'@'localhost' IDENTIFIED BY '<password>';
Query OK, 0 rows affected (0.00 sec)

mysql> GRANT SELECT ON cams.* TO 'shibresolver'@'idp-cams-1.mit.edu' IDENTIFIED BY '<password>';
Query OK, 0 rows affected (0.00 sec)

mysql> GRANT SELECT ON cams.* TO 'shibresolver'@'idp-cams-2.mit.edu' IDENTIFIED BY '<password>';
Query OK, 0 rows affected (0.00 sec)

mysql> GRANT SELECT, LOCK TABLES, FILE, RELOAD ON *.* TO 'backup'@'localhost' IDENTIFIED BY '<password>';
Query OK, 0 rows affected (0.00 sec)

mysql> GRANT SELECT ON cams.ExternalUser TO 'camsldap'@'localhost' IDENTIFIED BY '<password>';
Query OK, 0 rows affected (0.00 sec)

mysql> quit
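Since the camsusr and shibresolver grants are repeated for each host, a small loop can generate the statements for review before feeding them to mysql. The host names below are this page's examples, and <password> stays a placeholder; nothing touches the database here:

```shell
# Emit the per-host GRANT statements to a file for review.
hosts="localhost idp-cams-1.mit.edu idp-cams-2.mit.edu"
for h in $hosts; do
    echo "GRANT ALL ON cams.* TO 'camsusr'@'$h' IDENTIFIED BY '<password>';"
    echo "GRANT SELECT ON cams.* TO 'shibresolver'@'$h' IDENTIFIED BY '<password>';"
done > /tmp/cams-grants.sql
wc -l < /tmp/cams-grants.sql
```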

Replication

Make sure that connections are allowed to port 3306 from the peer server only (see firewall instructions above). For instructions on setting up the MySQL master/slave replication, see https://wikis.mit.edu/confluence/display/ISDA/MySQL+Replication+Configuration+Instructions

Maintaining the CAMS database

The backup-db script should be installed in /usr/local/cams/sbin on both the master and slave servers, and run periodically from cron. It will dump all databases to a compressed timestamped file in /usr/local/cams/backup/local, and also copy this file over to the peer server's /usr/local/cams/backup/remote directory. To set up the procedure, do the following:

  • Create the /usr/local/cams/backup, /usr/local/cams/backup/local, and /usr/local/cams/backup/remote directories.
  • Create the /usr/local/cams/sbin directory, if necessary.
  • Install the backup-db script in /usr/local/cams/sbin on the MySQL master server.
  • Create the backup user in the database on the master, as above.
  • Create /usr/local/cams/conf/backup.cnf, with the username and password for the backup account:
    [client]
    user=backup
    password=<password>
    
  • Install the cams cron file (cams.cron) as /etc/cron.d/cams. This will run the backup every 6 hours. (It also will run the /usr/local/cams/sbin/clean-logs script daily).
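The backup-db script in the locker is the authoritative version; purely as an illustration of what such a dump involves (the filename format below is an assumption, not the script's actual output):

```shell
# Illustrative only: a timestamped, compressed all-databases dump using the
# backup.cnf credentials. The command is printed, not executed.
stamp=$(date +%Y%m%d-%H%M%S)
out="/usr/local/cams/backup/local/cams-$stamp.sql.gz"
echo "mysqldump --defaults-extra-file=/usr/local/cams/conf/backup.cnf --all-databases | gzip > $out"
```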

On the slave server, the check-slave-status script should also be installed in /usr/local/cams/sbin; it should be run once per hour from cron. This should use a special replicatechecker account in MySQL, created as follows (this should be created on the master, after replication has been set up):

# mysql
mysql> GRANT REPLICATION CLIENT ON *.* TO 'replicatechecker'@'localhost' IDENTIFIED BY '<password>';
Query OK, 0 rows affected (0.00 sec)

mysql> quit

where <password> is replaced by the password for the replicatechecker account. Next, create /usr/local/cams/conf/replicatechecker.cnf, with the username and password for the replicatechecker account:

[client]
user=replicatechecker
password=<password>

Install the cams-slave.cron file as /etc/cron.d/cams-slave; this will run the check-slave-status script hourly.

Any problems encountered by either of these procedures will be reported via email to touchstone-support.

cams-ldap

The Cams-to-LDAP integration is done via a trigger library added to the MySQL instance, and a separate Perl daemon which propagates account changes to Moira and LDAP. Set it up as follows:

  • Install the mit-moira RPM; the daemon uses the blanche client.
  • Install perl-LDAP (this will also bring in perl-IO-Socket-SSL and perl-Net-SSLeay as dependencies):
    # yum install perl-LDAP
  • Install perl-Convert-ASN1:
    # yum install perl-Convert-ASN1
  • Unpack the cams-ldap tarball (available in the builds directory in the touchstone locker), and run its install script:
    # mkdir /tmp/cams-ldap
    # cd /tmp/cams-ldap
    # tar xzf /path/to/cams-ldap.tgz
    # ./install.sh
    
    The install script will not overwrite an existing trigger library if mysqld is running, so you may need to move it into place manually. If installing a new version of the library, you may also need to (re)apply the SQL input file which creates the necessary function and triggers (see below).
  • Make sure the trigger library is configured for the run-time linker (required for mysqld to be able to load it):
    # ldconfig
    
  • On the slave only, disable the cron job which purges the Cams Moira list:
    # rm /etc/cron.d/cams-ldap
    
  • Install a daemon keytab in /usr/local/cams/ldap/keytab. It must be owned by the mysql user. The principal must be added to the from-cams-admin list, which is the owner of the from-cams list, e.g.:
    # blanche from-cams-admin -a kerberos:daemon.idpe@ATHENA.MIT.EDU
    
    (For staging, add the daemon.idpe-staging principal to the list on ttsp)
  • Configure /etc/syslog.conf with the following patch; the daemon uses the LOCAL5 facility:
    40,41c40,41
    < # msql apps
    < local5.info					/var/log/db
    ---
    > # cams-ldap (originally msql apps)
    > local5.*				/var/log/cams-ldap.log
    
    Make this change effective:
    # kill -HUP `cat /var/run/syslogd.pid`
    
  • Install the configuration file /usr/local/cams/ldap/cams-ldap.conf. This file is in Perl syntax. Edit it to make sure that the usernames and passwords for DB and LDAP access are set properly ($dbuser, $dbpassword, $ldap_dn, $ldap_password), and that $krb5_princ is set to the daemon principal in the keytab. Make sure that $cafile is set to point at mitCA.pem (install if not already there). Make sure the file is owned by the mysql user and only readable by the owner.
  • Make sure that the camsldap user is added to the MySQL database (see above).
  • Restart mysqld, to make sure that it can link in the trigger library.
  • Start the cams-ldap daemon, and make sure it starts at boot time:
    # /etc/init.d/cams-ldap start
    # chkconfig --add cams-ldap
    
    The daemon should run on both the slave and master servers; the daemon detects when it is running on the slave, and treats the trigger as a no-op.
  • Make sure the trigger function is defined properly in the database. This should be executed on the slave first; check the slave status afterward to make sure it is still running, and restart if necessary.
    # mysql -u root -p cams < /usr/local/cams/ldap/cams-ldap-trigger.sql
    
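The syslog.conf patch above can also be applied mechanically. A sketch with sed, run here against a throwaway copy rather than the real /etc/syslog.conf:

```shell
# Apply the cams-ldap syslog change with sed, against a sample copy.
f=/tmp/syslog.conf.sample
printf '# msql apps\nlocal5.info\t\t/var/log/db\n' > "$f"

sed -i -e 's|^# msql apps|# cams-ldap (originally msql apps)|' \
       -e 's|^local5\.info.*|local5.*\t\t/var/log/cams-ldap.log|' "$f"
cat "$f"
```

Remember to HUP syslogd afterward, as shown above.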

Install Shibboleth IdP

  • Run the idp application installer from our customized binary distribution, available in /mit/touchstone/builds/NIST/idp2-cams/cams-shibboleth-identityprovider-2.x.y-bin.tgz, and the install script contained therein. For example:
    # cd /tmp
    # rm -rf shibboleth-identityprovider-2.*
    # tar xzf /path/to/cams-shibboleth-identityprovider-2.1.5-bin.tgz
    # cd shibboleth-identityprovider-2.1.5
    # ./install.sh
    [There should be no need to override the default responses to the installer's questions.]
    
    By default (because of one of our customizations to the stock Internet2 distribution) this will install under /usr/local/shibboleth-idp/. The installer will not overwrite the configuration files of an existing installation. For a new installation, the installer will generate a keystore, and prompt for its password; currently we do not use this keystore, so the password does not matter. This distribution contains the standard shibboleth-identityprovider binary distribution, from the Internet2 zip file (http://shibboleth.internet2.edu/downloads/shibboleth/idp/latest/), plus the following customizations:
    • camslogin
      This provides the custom login pages for CAMS users. It is available as a tarball (/mit/touchstone/builds/NIST/idp2-cams/camslogin.tgz) which is unpacked into the top-level directory of the binary distribution.
    • CamsLoginModule (cams-jaas-loginmodule-x.y.jar)
      This is the JAAS login module for CAMS. It is available as a .jar file in /mit/touchstone/builds/NIST/cams-jaas-loginmodule-x.y.jar, where x.y is the version number (currently 1.0). It must be copied into the lib subdirectory of the binary distribution.
    • camsutil-1.0.jar
      This is a helper package used by the login module to validate the username/password. It is available in /mit/touchstone/builds/NIST/camsutil-1.0.jar. It must be copied into the lib subdirectory of the binary distribution along with the login module jar file.
  • The installer will create and populate /usr/local/shibboleth-idp; the web application (war) file will be in /usr/local/shibboleth-idp/war/idp.war, but the current version of idp.war is available in the locker (/mit/touchstone/builds/NIST/idp2-mit/idp.war).
  • The idp application, running under Tomcat, needs full access to the install directory, so make sure it is owned by the tomcat user, e.g.:
    # chown -R tomcat:tomcat /usr/local/shibboleth-idp
    
    To ensure that we run the current version of the web application, download the latest idp.war file from the touchstone locker (/mit/touchstone/builds/NIST/idp2-mit/idp.war) and copy it into /usr/local/tomcat/webapps/:
    # cp /path/to/idp.war /usr/local/tomcat/webapps/
    # chown tomcat:tomcat /usr/local/tomcat/webapps/idp.war
    
  • Copy the idp's endorsed jar files to tomcat's endorsed dir:
    # mkdir -p /usr/local/tomcat/endorsed
    # cp -p /usr/local/shibboleth-idp/lib/endorsed/*.jar /usr/local/tomcat/endorsed/
    # chown -R tomcat:tomcat /usr/local/tomcat/endorsed
    
  • Copy in the idp config files for the server, to the conf subdirectory; these include:
    • attribute-filter.xml
    • attribute-resolver.xml
    • handler.xml
    • internal.xml
    • logging.xml
    • login.config
    • relying-party.xml
    • service.xml
    • tc-config.xml (for terracotta clustering)
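A quick way to confirm that all of the listed config files made it into the conf subdirectory is a presence-check loop. Sketch below, with a temp directory standing in for /usr/local/shibboleth-idp/conf (the touch loop only simulates the copied-in files):

```shell
# Verify every expected IdP config file exists; a temp dir stands in for
# /usr/local/shibboleth-idp/conf here.
conf=/tmp/idp-conf-sample
mkdir -p "$conf"
files="attribute-filter.xml attribute-resolver.xml handler.xml internal.xml
logging.xml login.config relying-party.xml service.xml tc-config.xml"
for f in $files; do touch "$conf/$f"; done    # simulate the copied-in files

missing=0
for f in $files; do
    [ -f "$conf/$f" ] || { echo "missing: $f"; missing=$((missing + 1)); }
done
echo "missing files: $missing"
```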

Terracotta

(See https://spaces.internet2.edu/display/SHIB2/IdPCluster)

The terracotta software can be used to cluster the IdP nodes. Note that currently the Cams IdPs are not clustered, so terracotta should not be running. Each node must run the terracotta server, as well as the instrumented client (tomcat, in our case). The terracotta server operates in either the active or passive role; only one server should be in the "active/coordinator" state at a time.

Download the terracotta tarball; our current version is in the touchstone locker, in /mit/touchstone/downloads/terracotta-x.y.z.tar.gz. Extract it under /usr/local, create a logs directory for it, make it owned by the tomcat user, and symlink /usr/local/terracotta to it. For example (replace 3.1.1 with the appropriate terracotta version number):

# cd /usr/local
# tar xzf /path/to/terracotta-3.1.1.tar.gz
# mkdir -p terracotta-3.1.1/logs
# chown -R tomcat:tomcat terracotta-3.1.1
# rm -f terracotta
# ln -s terracotta-3.1.1 terracotta

The IdP requires the installation of a couple of Terracotta Integration Modules, and the generation of a boot jar file for Tomcat, which is specific to the Java version:

# setenv TC_HOME /usr/local/terracotta-3.1.1
# setenv TC_INSTALL_DIR $TC_HOME
# setenv JAVA_HOME /usr/java/default
# $TC_HOME/bin/tim-get.sh install tim-vector 2.5.1 org.terracotta.modules
# $TC_HOME/bin/tim-get.sh install tim-tomcat-6.0 2.0.1
# $TC_HOME/bin/make-boot-jar.sh -f /usr/local/shibboleth-idp/conf/tc-config.xml
# chown -R tomcat:tomcat /usr/local/terracotta-3.1.1

Be sure to regenerate this jar after installing a new JDK.

Install the init script from /mit/touchstone/maint/shibboleth-idp/terracotta/terracotta.init in /etc/init.d, and make sure it is configured to start at boot time. Note that terracotta must be started before tomcat.

# cp /path/to/terracotta.init /etc/init.d/terracotta
# chmod 755 /etc/init.d/terracotta
# chkconfig --add terracotta

To avoid performance impact during business hours, we disable automatic garbage collection of terracotta objects. Instead, we run a nightly cron job to do the garbage collection manually. Since this should only be done on the active/coordinator node, the script, run-dgc-if-active.sh, checks the server mode, then runs the garbage collector if and only if the server is the active node. Both the script and cron file can be obtained in /mit/touchstone/maint/shibboleth-idp/terracotta/; install as follows:

# cp /path/to/run-dgc-if-active.sh /usr/local/shibboleth-idp/bin/
# cp /path/to/run-dgc.cron /etc/cron.d/run-dgc
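The decision run-dgc-if-active.sh makes can be sketched as follows. The actual script is in the locker and is authoritative; the state string below mirrors the "active/coordinator" role described above, but the exact status-query command and its output format are assumptions here:

```shell
# Sketch of the active-node check: only the active/coordinator server should
# run the distributed garbage collector.
run_dgc_if_active() {
    case "$1" in
        ACTIVE-COORDINATOR) echo "run dgc" ;;
        *)                  echo "skip" ;;
    esac
}

run_dgc_if_active ACTIVE-COORDINATOR
run_dgc_if_active PASSIVE-STANDBY
```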

Shibboleth SP

The CAMS application needs to authenticate against our IdPs, and so requires the Shibboleth service provider (SP) software to run, as well as the IdP software.

Installation

We use the stock RHEL 5 64-bit RPMs, available from the Internet2 downloads site; the best way to install the RPMs is to use Shibboleth's yum repository, as described in https://wiki.shibboleth.net/confluence/display/SHIB2/NativeSPLinuxRPMInstall. To configure the repository, install the repository definition file into /etc/yum.repos.d. Once the repository is configured, you can install the current RPMs, including dependencies, using yum, e.g.:

# yum install shibboleth.x86_64

Configuration

The SP configuration files live in /etc/shibboleth:

  • shibboleth2.xml (main SP configuration file)
  • attribute-map.xml (defines our attribute mappings)
  • native.logger (configures Apache module logging – we modify the stock configuration to log under /var/log/shibboleth instead of /var/log/httpd, because the apache user must have write access to the directory)

Create the directory for the native logger, and make it writable by the Apache user:

# mkdir /var/log/shibboleth/httpd
# chown apache /var/log/shibboleth/httpd

The Apache module will log to the native.log file in this directory.

Note: SELinux must be set to permissive mode in order for the SP to function properly; otherwise (without modifying policy) its Apache module will be unable to connect to shibd's Unix socket (which lives in /var/run/shibboleth/). Edit /etc/selinux/config accordingly.

Make sure the Shibboleth daemon is started at boot time:

# chkconfig shibd on

Installing the CAMS application

We will run the CAMS application in the same Tomcat container as the Shibboleth IdP. Copy the CAMS application war file, cams.war, into /usr/local/tomcat/webapps, and make sure it is owned by the tomcat user.

# cd /usr/local/tomcat/webapps
# cp /path/to/cams.war .
# chown tomcat:tomcat cams.war

Create the application's logs directory as needed:

# mkdir /usr/local/cams/logs
# chown tomcat:tomcat /usr/local/cams/logs

Create a Java keystore containing the application client certificate

The CAMS application uses an application client certificate to authenticate to the Roles web service. The subject CN of the certificate for the production server should be touchstone-cams.app.mit.edu (for the staging server, use touchstone-cams-staging.app.mit.edu). When creating the certificate, make sure that it is an Application Client Certificate, not a standard web server certificate; it should be issued by the MIT Client CA, and must be enabled for client usage.

Once you have the application client certificate, you must convert it to PKCS12 format for importing into a Java keystore. Begin by downloading the MIT Client CA certificate:

# wget 'http://ca.mit.edu/mitClient.crt'

To convert to PKCS12 format (assuming the certificate and private key PEM files are in cams-app-cert.pem and cams-app-key.pem, respectively):

# openssl pkcs12 -in cams-app-cert.pem -inkey cams-app-key.pem -export -out cams-app-cert.p12 -nodes -CAfile mitClient.crt

(Supply the export password as prompted; remember the password for use with the keystore.)

cams-app-cert.p12 now contains the certificate in PKCS12 format. To import it into a keystore, obtain a copy of the ISDA PKCS12Import.jar utility, and invoke it as follows:

# $JAVA_HOME/bin/java -jar PKCS12Import.jar cams-app-cert.p12 cams-app.jks PASSWORD

where PASSWORD is replaced by the actual password you supplied above. Install the resulting keystore file into /usr/local/cams/conf/ (create the directory if necessary). Ensure that it is owned and only readable by the tomcat user.

Next, we need to create a server trust store containing the MIT CA certificates. Begin by copying the standard Java CA certificate store from the Java distribution, e.g.:

# cp $JAVA_HOME/jre/lib/security/cacerts /usr/local/cams/conf/serverTrustStore.jks

Download the MIT CA and, if necessary, MIT Client CA certificates (you should already have downloaded the Client CA above), and import them into the trust store:

# wget 'http://ca.mit.edu/mitca.crt'
# wget 'http://ca.mit.edu/mitClient.crt'
# $JAVA_HOME/bin/keytool -import -keystore /usr/local/cams/conf/serverTrustStore.jks -alias mitca -file mitca.crt
# $JAVA_HOME/bin/keytool -import -keystore /usr/local/cams/conf/serverTrustStore.jks -alias mitClient -file mitClient.crt

The password for the server trust store is "changeit". Answer "yes" to the "Trust this certificate?" prompt.

Finally, we set system (global) properties so that the CAMS and IdP applications use these keystores, by adding the following settings to /usr/local/tomcat/conf/catalina.properties:

javax.net.ssl.keyStore=/usr/local/cams/conf/cams-app.jks
javax.net.ssl.keyStorePassword=PASSWORD
javax.net.ssl.trustStore=/usr/local/cams/conf/serverTrustStore.jks
javax.net.ssl.trustStorePassword=changeit

(Replace PASSWORD with the password you used for the application certificate key store above). Make sure that the catalina.properties file is owned and only readable by the tomcat user.

CAMS application configuration properties

The CAMS configuration.properties file should be installed in /usr/local/cams/conf/configuration.properties. It should be readable only by the tomcat user, as it contains a key for the ReCaptcha service used in account registration. Most of the settings in this file should not need to be changed, but there are two settings which may need to be used to address operational issues, allowing us to disable the creation of new accounts (except by admins), and/or to disable the ReCaptcha service (in case of a problem with the latter). Normally, these settings should be:

enable.create = 1
enable.recaptcha = 1

Change the setting to 0 and restart tomcat to disable the function. (If ReCaptcha needs to be disabled, it is likely you will also want to disable account creation, to prevent spammer attacks).
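Flipping account creation off can be done with a one-line sed, shown here against a throwaway copy (the real file is /usr/local/cams/conf/configuration.properties):

```shell
# Disable account creation in a sample copy of configuration.properties.
p=/tmp/configuration.properties.sample
printf 'enable.create = 1\nenable.recaptcha = 1\n' > "$p"

sed -i 's/^enable\.create *= *1/enable.create = 0/' "$p"
grep '^enable' "$p"
```

Don't forget the tomcat restart afterward.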

Other settings in this file include:

  • moira.server and authz.server are the servers to use for the Moira and AuthZ web services, respectively:
    moira.server=ws.mit.edu
    authz.server=authz.mapws.mit.edu
    
  • The ReCaptcha keys and domain:
    recaptcha.publicKey=XXX
    recaptcha.privateKey=XXX
    recaptcha.domain=idp.touchstonenetwork.net
    
  • uploadDir is the directory used for bulk account uploads (used by admins when migrating accounts from old systems):
    uploadDir=/usr/local/cams/uploads
    
    Create this directory, if necessary, and ensure it is writable by the tomcat user.
  • touchstoneSupportMailAddress is the email address to use for Touchstone Support links in the Cams application:
    touchstoneSupportMailAddress=touchstone-support@mit.edu
    

You must restart tomcat in order for any changes to this properties file to take effect.
