This page is under construction

Executive Summary

Version 2 of the MIT core identity provider is based on version 2.1.x of Internet2's Shibboleth IdP package. Including the IdP software itself, the following major components are required:

In the configuration documented below, the Apache web server will listen on the following TCP ports:

The Terracotta server will listen on the following TCP ports (connections should only be allowed from the peer node(s)):

The MySQL server will listen on the following TCP port:

Note that the Terracotta and MySQL listeners only need to accept connections from peer servers in the cluster, so these ports should be configured accordingly in the firewall.

The following certificates/keys need to be created:

The following log files will be used:

SELinux

SELinux must run in Permissive mode. Otherwise, the Shibboleth SP Apache module will not be able to connect to the shibd socket, and mysqld will not be able to load the shared library used by cams-ldap.

To set SELinux permissive mode at boot time, change the SELINUX setting in /etc/selinux/config:

SELINUX=permissive

To set permissive mode on the running system only:

# setenforce Permissive
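
If you are scripting the change (e.g. from a configuration-management tool), the config edit can be done with sed. The following is a sketch that operates on a temporary copy, so the real /etc/selinux/config is untouched:

```shell
#!/bin/sh
# Sketch: set SELINUX=permissive in a copy of /etc/selinux/config.
# A temp file stands in for the real config here.
cfg=$(mktemp)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$cfg"
sed -i 's/^SELINUX=.*/SELINUX=permissive/' "$cfg"
grep '^SELINUX=' "$cfg"
```

On a real system, point the sed command at /etc/selinux/config itself, and use getenforce to confirm the running mode.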

Firewall

Make sure that the additional ports used by the IdP are enabled in the firewall. Use the command "iptables --list -n --line-numbers" to determine the proper rule numbers; the following example assumes we are inserting rules beginning at number 36. Also, replace 18.x.y.z with the appropriate IP address of the peer node in the cluster, not the local host.

# iptables --list -n --line-numbers
# iptables -I RH-Firewall-1-INPUT 36 -m state --state NEW -m tcp -p tcp --dport 8443 -j ACCEPT
# iptables -I RH-Firewall-1-INPUT 37 -m state --state NEW -m tcp -p tcp -s 18.x.y.z --dport 3306 -j ACCEPT
# iptables -I RH-Firewall-1-INPUT 38 -m state --state NEW -m tcp -p tcp -s 18.x.y.z --dport 9510 -j ACCEPT
# iptables -I RH-Firewall-1-INPUT 39 -m state --state NEW -m tcp -p tcp -s 18.x.y.z --dport 9520 -j ACCEPT
# iptables -I RH-Firewall-1-INPUT 40 -m state --state NEW -m tcp -p tcp -s 18.x.y.z --dport 9530 -j ACCEPT
# /etc/init.d/iptables save
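
The rule numbers above simply increment from the chosen starting point. As a sketch, the full set of insert commands can be generated with a small loop (PEER and the starting rule number 36 are the placeholders from the example above; the commands are printed, not executed):

```shell
#!/bin/sh
# Sketch: print the iptables insert commands for the IdP ports.
# PEER is a placeholder for the peer node's IP address.
PEER=18.x.y.z
RULE=36
CMDS="iptables -I RH-Firewall-1-INPUT $RULE -m state --state NEW -m tcp -p tcp --dport 8443 -j ACCEPT"
for PORT in 3306 9510 9520 9530; do
  RULE=$((RULE + 1))
  CMDS="$CMDS
iptables -I RH-Firewall-1-INPUT $RULE -m state --state NEW -m tcp -p tcp -s $PEER --dport $PORT -j ACCEPT"
done
echo "$CMDS"
```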

Install and configure Apache httpd

We use the native Red Hat RPMs (httpd 2.2).

Install needed RPMs
Configure

Current versions of the various httpd configuration files can be obtained in the touchstone locker, in /mit/touchstone/config/idp2-cams/httpd/.

Install JDK and enhanced JCE

Install Tomcat

MySQL

We use the native Red Hat RPMs (5.0), part of the standard NIST install.

Database initialization

Start up the daemon, and secure the installation:

# /etc/init.d/mysqld start
# mysql_secure_installation

Respond to the prompts to set the root password, remove anonymous users, disallow remote root logins, and remove the test database.

Make sure the daemon starts at boot time:

# chkconfig mysqld on

We use master/slave replication, where all queries go against one MySQL master server (e.g. idp-cams-1), while the other server (e.g. idp-cams-2) operates in slave mode, i.e. with updates to the master replicated to the slave. Set up the master server first, before setting up replication.

To set up the CAMS database, restore from the most recent good backup onto the master.

# mysql -u root -p < /path/to/most-recent-backup.sql

If initializing a new database for some reason, process the database schema file, a copy of which can be found in /mit/touchstone/config/idp2-cams/cams/camsdb.sql.

Grant tables

The grant tables will likely need to be adjusted when moving an existing database to a new server host, e.g. if the master or slave host names are changing.

The CAMS application will use the camsusr account to access the CAMS database; the Shibboleth IdP resolver will use the shibresolver account; the database backup cron job uses the backup account; the cams-ldap daemon uses the camsldap account. Create the following accounts as needed (replace <password> with the password for that account):

# mysql
mysql> GRANT ALL ON cams.* TO 'camsusr'@'localhost' IDENTIFIED BY '<password>';
Query OK, 0 rows affected (0.00 sec)

mysql> GRANT ALL ON cams.* TO 'camsusr'@'idp-cams-1.mit.edu' IDENTIFIED BY '<password>';
Query OK, 0 rows affected (0.00 sec)

mysql> GRANT ALL ON cams.* TO 'camsusr'@'idp-cams-2.mit.edu' IDENTIFIED BY '<password>';
Query OK, 0 rows affected (0.00 sec)

mysql> GRANT SELECT ON cams.* TO 'shibresolver'@'localhost' IDENTIFIED BY '<password>';
Query OK, 0 rows affected (0.00 sec)

mysql> GRANT SELECT ON cams.* TO 'shibresolver'@'idp-cams-1.mit.edu' IDENTIFIED BY '<password>';
Query OK, 0 rows affected (0.00 sec)

mysql> GRANT SELECT ON cams.* TO 'shibresolver'@'idp-cams-2.mit.edu' IDENTIFIED BY '<password>';
Query OK, 0 rows affected (0.00 sec)

mysql> GRANT SELECT, LOCK TABLES, FILE, RELOAD ON *.* TO 'backup'@'localhost' IDENTIFIED BY '<password>';
Query OK, 0 rows affected (0.00 sec)

mysql> GRANT SELECT ON cams.ExternalUser TO 'camsldap'@'localhost' IDENTIFIED BY '<password>';
Query OK, 0 rows affected (0.00 sec)

mysql> quit

Replication

Make sure that connections are allowed to port 3306 from the peer server only (see firewall instructions above). For instructions on setting up the MySQL master/slave replication, see https://wikis.mit.edu/confluence/display/ISDA/MySQL+Replication+Configuration+Instructions

Maintaining the CAMS database

The backup-db script should be installed in /usr/local/cams/sbin on both the master and slave servers, and run periodically from cron. It will dump all databases to a compressed timestamped file in /usr/local/cams/backup/local, and also copy this file over to the peer server's /usr/local/cams/backup/remote directory. To set up the procedure, do the following:
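
As an illustration of the naming scheme described above, a minimal sketch of the dump step might look like the following (the file-name prefix and mysqldump options are assumptions for illustration; the backup-db script in the locker is the authoritative version):

```shell
#!/bin/sh
# Sketch only: the real backup-db script is authoritative.
# Build a timestamped output path under the local backup directory.
STAMP=$(date +%Y%m%d-%H%M%S)
OUT=/usr/local/cams/backup/local/cams-backup-$STAMP.sql.gz
echo "$OUT"
# The dump and copy steps would then be (commented out here, since they
# require MySQL credentials and the peer host):
#   mysqldump --all-databases --single-transaction | gzip > "$OUT"
#   scp "$OUT" peer:/usr/local/cams/backup/remote/
```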

On the slave server, the check-slave-status script should also be installed in /usr/local/cams/sbin; it should be run once per hour from cron. It uses a special replicatechecker account in MySQL, created as follows (create it on the master, after replication has been set up):

# mysql
mysql> GRANT REPLICATION CLIENT ON *.* TO 'replicatechecker'@'localhost' IDENTIFIED BY '<password>';
Query OK, 0 rows affected (0.00 sec)

mysql> quit

where <password> is replaced by the password for the replicatechecker account. Next, create /usr/local/cams/conf/replicatechecker.cnf, with the username and password for the replicatechecker account:

[client]
user=replicatechecker
password=<password>

Install the cams-slave.cron file as /etc/cron.d/cams-slave; this will run the check-slave-status script hourly.
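
For reference, the essential check performed by check-slave-status looks roughly like the following sketch (an assumption about the script's internals; the copy in the locker is authoritative). Here the SHOW SLAVE STATUS output is stubbed with sample text; the real script would obtain it via mysql using the replicatechecker credentials:

```shell
#!/bin/sh
# Hedged sketch of the kind of check check-slave-status performs.
# STATUS is stubbed here; the real script queries SHOW SLAVE STATUS.
STATUS='Slave_IO_Running: Yes
Slave_SQL_Running: No'
WARNINGS=""
for THREAD in Slave_IO_Running Slave_SQL_Running; do
  echo "$STATUS" | grep -q "$THREAD: Yes" || WARNINGS="$WARNINGS $THREAD"
done
[ -z "$WARNINGS" ] || echo "replication problem:$WARNINGS"
```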

Any problems encountered by either of these procedures will be reported via email to touchstone-support.

cams-ldap

The CAMS-to-LDAP integration is done via a trigger library loaded into the MySQL instance and a separate Perl daemon that propagates account changes to Moira and LDAP. Set it up as follows:

Install Shibboleth IdP

Terracotta

(See https://spaces.internet2.edu/display/SHIB2/IdPCluster)

The Terracotta software can be used to cluster the IdP nodes. Note that the CAMS IdPs are not currently clustered, so Terracotta should not be running. Each node must run the Terracotta server, as well as the instrumented client (Tomcat, in our case). The Terracotta server operates in either the active or passive role; only one server should be in the "active/coordinator" state at a time.

Download the Terracotta tarball; our current version is in the touchstone locker, in /mit/touchstone/downloads/terracotta-x.y.z.tar.gz. Extract it under /usr/local, create a logs directory for it, make it owned by the tomcat user, and symlink /usr/local/terracotta to it. For example (replace 3.1.1 with the appropriate Terracotta version number):

# cd /usr/local
# tar xzf /path/to/terracotta-3.1.1.tar.gz
# mkdir -p terracotta-3.1.1/logs
# chown -R tomcat:tomcat terracotta-3.1.1
# rm -f terracotta
# ln -s terracotta-3.1.1 terracotta

The IdP requires the installation of a couple of Terracotta Integration Modules, and the generation of a boot jar file for Tomcat, which is specific to the Java version:

# setenv TC_HOME /usr/local/terracotta-3.1.1
# setenv TC_INSTALL_DIR $TC_HOME
# setenv JAVA_HOME /usr/java/default
# $TC_HOME/bin/tim-get.sh install tim-vector 2.5.1 org.terracotta.modules
# $TC_HOME/bin/tim-get.sh install tim-tomcat-6.0 2.0.1
# $TC_HOME/bin/make-boot-jar.sh -f /usr/local/shibboleth-idp/conf/tc-config.xml
# chown -R tomcat:tomcat /usr/local/terracotta-3.1.1

Be sure to regenerate this jar after installing a new JDK.

Install the init script from /mit/touchstone/maint/shibboleth-idp/terracotta/terracotta.init in /etc/init.d, and make sure it is configured to start at boot time. Note that Terracotta must be started before Tomcat.

# cp /path/to/terracotta.init /etc/init.d/terracotta
# chmod 755 /etc/init.d/terracotta
# chkconfig --add terracotta

To avoid performance impact during business hours, we disable automatic garbage collection of Terracotta objects. Instead, we run a nightly cron job to do the garbage collection manually. Since this should only be done on the active/coordinator node, the script, run-dgc-if-active.sh, checks the server mode, then runs the garbage collector if and only if the server is the active node. Both the script and cron file can be obtained in /mit/touchstone/maint/shibboleth-idp/terracotta/; install as follows:

# cp /path/to/run-dgc-if-active.sh /usr/local/shibboleth-idp/bin/
# cp /path/to/run-dgc.cron /etc/cron.d/run-dgc
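
The structure of the script is simple: determine the server's mode, and invoke the garbage collector only in the active/coordinator state. A minimal sketch follows (the state string and how it is obtained are assumptions; the script in the locker is authoritative):

```shell
#!/bin/sh
# Sketch of run-dgc-if-active.sh's logic; STATE is stubbed here.
# The real script determines it from the running Terracotta server.
STATE="ACTIVE-COORDINATOR"
if [ "$STATE" = "ACTIVE-COORDINATOR" ]; then
  ACTION="run-dgc"   # would invoke the Terracotta garbage collector
else
  ACTION="skip"      # passive node: do nothing
fi
echo "$ACTION"
```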

Shibboleth SP

The CAMS application needs to authenticate against our IdPs, and so requires the Shibboleth service provider (SP) software to run, as well as the IdP software.

Installation

We use the stock RHEL 5 64-bit RPMs, available from the Internet2 downloads site; the best way to install the RPMs is to use Shibboleth's yum repository, as described in https://wiki.shibboleth.net/confluence/display/SHIB2/NativeSPLinuxRPMInstall. To configure the repository, install the repository definition file into /etc/yum.repos.d. Once the repository is configured, you can install the current RPMs, including dependencies, using yum, e.g.:

# yum install shibboleth.x86_64

Configuration

The SP configuration files live in /etc/shibboleth:

Create the directory for the native logger, and make it writable by the Apache user:

# mkdir /var/log/shibboleth/httpd
# chown apache /var/log/shibboleth/httpd

The Apache module will log to the native.log file in this directory.

Note: SELinux must be set to permissive mode in order for the SP to function properly; otherwise (without modifying policy) its Apache module will be unable to connect to shibd's Unix socket (which lives in /var/run/shibboleth/). Edit /etc/selinux/config accordingly.

Make sure the Shibboleth daemon is started at boot time:

# chkconfig shibd on

Installing the CAMS application

We will run the CAMS application in the same Tomcat container as the Shibboleth IdP. Copy the CAMS application war file, cams.war, into /usr/local/tomcat/webapps, and make sure it is owned by the tomcat user.

# cd /usr/local/tomcat/webapps
# cp /path/to/cams.war .
# chown tomcat:tomcat cams.war

Create the application's logs directory as needed:

# mkdir /usr/local/cams/logs
# chown tomcat:tomcat /usr/local/cams/logs

Create a Java keystore containing the application client certificate

The CAMS application uses an application client certificate to authenticate to the Roles web service. The subject CN of the certificate for the production server should be touchstone-cams.app.mit.edu (for the staging server, use touchstone-cams-staging.app.mit.edu). When creating the certificate, make sure that it is an Application Client Certificate, not a standard web server certificate; it should be issued by the MIT Client CA, and must be enabled for client usage.

Once you have the application client certificate, you must convert it to PKCS12 format for importing into a Java keystore. Begin by downloading the MIT Client CA certificate:

# wget 'http://ca.mit.edu/mitClient.crt'

To convert to PKCS12 format (assuming the certificate and private key PEM files are in cams-app-cert.pem and cams-app-key.pem, respectively):

# openssl pkcs12 -in cams-app-cert.pem -inkey cams-app-key.pem -export -out cams-app-cert.p12 -nodes -CAfile mitClient.crt

(Supply the export password as prompted; remember the password for use with the keystore.)

cams-app-cert.p12 now contains the certificate in PKCS12 format. To import it into a keystore, obtain a copy of the ISDA PKCS12Import.jar utility, and invoke it as follows:

# $JAVA_HOME/bin/java -jar PKCS12Import.jar cams-app-cert.p12 cams-app.jks PASSWORD

where PASSWORD is replaced by the actual password you supplied above. Install the resulting keystore file into /usr/local/cams/conf/ (create the directory if necessary). Ensure that it is owned by, and readable only by, the tomcat user.
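
If the PKCS12Import utility is not at hand, a possible alternative (assuming a JDK 6 or later keytool, which supports -importkeystore) is to import the PKCS12 file into a JKS keystore directly:

```shell
# Assumption: keytool from JDK 6+, which supports -importkeystore.
# This imports cams-app-cert.p12 directly into a new JKS keystore;
# supply the PKCS12 export password and a keystore password when prompted.
$JAVA_HOME/bin/keytool -importkeystore \
    -srckeystore cams-app-cert.p12 -srcstoretype PKCS12 \
    -destkeystore cams-app.jks -deststoretype JKS
```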

Next, we need to create a server trust store containing the MIT CA certificates. Begin by copying the standard Java CA certificate store from the Java distribution, e.g.:

# cp $JAVA_HOME/jre/lib/security/cacerts /usr/local/cams/conf/serverTrustStore.jks

Download the MIT CA and, if necessary, MIT Client CA certificates (you should already have downloaded the Client CA above), and import them into the trust store:

# wget 'http://ca.mit.edu/mitca.crt'
# wget 'http://ca.mit.edu/mitClient.crt'
# $JAVA_HOME/bin/keytool -import -keystore /usr/local/cams/conf/serverTrustStore.jks -alias mitca -file mitca.crt
# $JAVA_HOME/bin/keytool -import -keystore /usr/local/cams/conf/serverTrustStore.jks -alias mitClient -file mitClient.crt

The password for the server trust store is "changeit". Answer "yes" to the "Trust this certificate?" prompt.

Finally, we set system (global) properties so that the CAMS and IdP applications use these keystores, by adding the following settings to /usr/local/tomcat/conf/catalina.properties:

javax.net.ssl.keyStore=/usr/local/cams/conf/cams-app.jks
javax.net.ssl.keyStorePassword=PASSWORD
javax.net.ssl.trustStore=/usr/local/cams/conf/serverTrustStore.jks
javax.net.ssl.trustStorePassword=changeit

(Replace PASSWORD with the password you used for the application certificate keystore above.) Make sure that the catalina.properties file is owned by, and readable only by, the tomcat user.

CAMS application configuration properties

The CAMS configuration.properties file should be installed as /usr/local/cams/conf/configuration.properties. It should be readable only by the tomcat user, as it contains a key for the ReCaptcha service used in account registration. Most of the settings in this file should not need to be changed, but two settings may need to be changed to address operational issues: one disables the creation of new accounts (except by admins), and the other disables the ReCaptcha service (in case of a problem with that service). Normally, these settings should be:

enable.create = 1
enable.recaptcha = 1

Change a setting to 0 and restart tomcat to disable the corresponding function. (If ReCaptcha needs to be disabled, you will likely also want to disable account creation, to prevent automated spam registrations.)
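
For example, disabling account creation amounts to an edit like the following (shown here against a temporary copy of the file; on the server, edit /usr/local/cams/conf/configuration.properties itself and then restart tomcat):

```shell
#!/bin/sh
# Sketch: flip enable.create to 0. A temp copy stands in for
# /usr/local/cams/conf/configuration.properties here.
f=$(mktemp)
printf 'enable.create = 1\nenable.recaptcha = 1\n' > "$f"
sed -i 's/^enable\.create *=.*/enable.create = 0/' "$f"
grep '^enable\.' "$f"
```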

Other settings in this file include:

You must restart tomcat in order for any changes to this properties file to take effect.