This page is under construction

Executive Summary

Version 2 of the MIT core identity provider is based on version 2.1.x of Internet2's Shibboleth IdP package. Including the IdP software itself, the following major components are required:

In the configuration documented below, the Apache web server will listen on the following TCP ports:

The terracotta server will listen on the following TCP ports:

The following need to be created for use by Kerberos and SSL:

The following log files will be used:

Install and configure Apache httpd

Install needed RPMs
Configure

Current versions of the various httpd configuration files can be obtained in the touchstone locker, in /mit/touchstone/config/idp2-core/httpd/.
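
A sketch of installing them, assuming the files drop into the standard Red Hat conf.d layout (the actual file names and destinations under idp2-core/httpd/ may differ):

# cp /mit/touchstone/config/idp2-core/httpd/*.conf /etc/httpd/conf.d/
# service httpd configtest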

Add the Bouncy Castle provider to the provider list in the JRE's lib/security/java.security, e.g.:

security.provider.9=org.bouncycastle.jce.provider.BouncyCastleProvider

(Replace 9 with the next sequential provider number as needed).

An already-updated cacerts store is available in the touchstone locker, in /mit/touchstone/config/java.
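
If the cacerts store needs to be updated by hand instead, certificates can be imported with keytool; a hypothetical example (the alias and certificate path are illustrative, and the default keystore password is "changeit"):

# keytool -import -trustcacerts -alias mitca -file /path/to/ca-cert.pem -keystore /usr/java/default/jre/lib/security/cacerts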

Install Tomcat

Install Shibboleth IdP

Terracotta

(See https://wiki.shibboleth.net/confluence/display/SHIB2/IdPCluster)

The terracotta software is used to cluster the IdP nodes. Each node must run the terracotta server, as well as the instrumented client (tomcat, in our case). The terracotta server operates in either the active or passive role; only one server should be in the "active/coordinator" state at a time.
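
To check which role a given server currently holds, the server-stat script shipped in the Terracotta kit can be used; a sketch, assuming the /usr/local/terracotta symlink created below:

# /usr/local/terracotta/bin/server-stat.sh

The output should indicate each server's state (active/coordinator vs. passive/standby).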

Download the terracotta tarball; our current version is in the touchstone locker, in /mit/touchstone/downloads/terracotta-x.y.z.tar.gz. Extract it under /usr/local, create a logs directory for it, make it owned by the tomcat user, and symlink /usr/local/terracotta to it. For example (replace 3.1.1 with the appropriate terracotta version number):

# cd /usr/local
# tar xzf /path/to/terracotta-3.1.1.tar.gz
# mkdir -p terracotta-3.1.1/logs
# chown -R tomcat:tomcat terracotta-3.1.1
# rm -f terracotta
# ln -s terracotta-3.1.1 terracotta

The IdP requires the installation of a couple of Terracotta Integration Modules, and the generation of a boot jar file for Tomcat, which is specific to the Java version:

# setenv TC_HOME /usr/local/terracotta-3.1.1
# setenv TC_INSTALL_DIR $TC_HOME
# setenv JAVA_HOME /usr/java/default
# $TC_HOME/bin/tim-get.sh install tim-vector 2.5.1 org.terracotta.modules
# $TC_HOME/bin/tim-get.sh install tim-tomcat-6.0 2.0.1
# $TC_HOME/bin/make-boot-jar.sh -f /usr/local/shibboleth-idp/conf/tc-config.xml
# chown -R tomcat:tomcat /usr/local/terracotta-3.1.1

Be sure to regenerate this jar after installing a new JDK.

Install the init script from /mit/touchstone/maint/shibboleth-idp/terracotta/terracotta.init in /etc/init.d, and make sure it is configured to start at boot time. Note that terracotta must be started before tomcat.

# cp /path/to/terracotta.init /etc/init.d/terracotta
# chmod 755 /etc/init.d/terracotta
# chkconfig --add terracotta

To avoid performance impact during business hours, we disable automatic garbage collection of terracotta objects. Instead, we run a nightly cron job to do the garbage collection manually. Since this should only be done on the active/coordinator node, the script, run-dgc-if-active.sh, checks the server mode, then runs the garbage collector if and only if the server is the active node. Both the script and cron file can be obtained in /mit/touchstone/maint/shibboleth-idp/terracotta/; install as follows:

# cp /path/to/run-dgc-if-active.sh /usr/local/shibboleth-idp/bin/
# cp /path/to/run-dgc.cron /etc/cron.d/run-dgc
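
For reference, the cron file amounts to a nightly, off-hours invocation of the wrapper script; a hypothetical sketch of such an entry in /etc/cron.d format (the actual schedule and user are whatever run-dgc.cron specifies):

30 3 * * * root /usr/local/shibboleth-idp/bin/run-dgc-if-active.sh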

For more information on maintaining the IdP cluster, see https://wikis.mit.edu/confluence/display/TOUCHSTONE/Maintaining+the+terracotta+cluster+on+the+IdPs

Targeted ID MySQL database

The core IdP uses a custom implementation supporting the generation of targeted (or persistent) IDs, backed by a MySQL database. We use the native Red Hat RPMs (5.0), part of the standard NIST install; the MySQL-python RPM is required for the synchronization daemon and supporting scripts.
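
If any of these packages are missing from the base install, they can be added with yum, e.g. (assuming the stock Red Hat 5 package names):

# yum install mysql-server mysql MySQL-python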

MySQL initialization

Start up the daemon, and secure the installation:

# /etc/init.d/mysqld start
# mysql_secure_installation

Respond to the prompts to set the root password, remove anonymous users, disallow remote root logins, and remove the test database.

Make sure the daemon starts at boot time:

# chkconfig mysqld on

Make sure that you set a firewall rule which allows the peer IdP node to connect to the daemon (on TCP port 3306).

Create database users

Create the shib and (optionally) shibadmin database users, e.g.:

# mysql -u root -p
Enter password: [Supply the root password created above]

mysql> CREATE USER 'shib'@'localhost' IDENTIFIED BY 'PASSWORD';
Query OK, 0 rows affected (0.00 sec)

mysql> CREATE USER 'shib'@'idp-1.mit.edu' IDENTIFIED BY 'PASSWORD';
Query OK, 0 rows affected (0.00 sec)

mysql> CREATE USER 'shib'@'idp-2.mit.edu' IDENTIFIED BY 'PASSWORD';
Query OK, 0 rows affected (0.00 sec)

mysql> CREATE USER 'shibadmin'@'localhost' IDENTIFIED BY 'ADMINPASSWORD';
Query OK, 0 rows affected (0.00 sec)

mysql> CREATE USER 'shibadmin'@'idp-1.mit.edu' IDENTIFIED BY 'ADMINPASSWORD';
Query OK, 0 rows affected (0.00 sec)

mysql> CREATE USER 'shibadmin'@'idp-2.mit.edu' IDENTIFIED BY 'ADMINPASSWORD';
Query OK, 0 rows affected (0.00 sec)

Replace PASSWORD and ADMINPASSWORD with the passwords for the shib and shibadmin users, respectively. The shib user will be used by the targeted ID software to access the database. The shibadmin user can be used as an alternative to root to initialize, update, or back up the database. Note that the shib password will also need to be set in the IdP's attribute-resolver.xml file, as well as in the MySQL defaults file tid.cnf (see below). If the shibadmin account is used for the database backup, its password will need to be set in the admin.cnf MySQL defaults file.

Install the Targeted ID software

Install the scripts used to maintain the targeted ID database. This includes the tid-syncd daemon, which is used to synchronize the database between the machines in the cluster, an init script for the daemon, and a database backup script (run out of cron). The software is installed from the source tarball in /mit/touchstone/src/targeted-id-source.tgz.

# mkdir /tmp/targeted-id
# cd /tmp/targeted-id
# tar xzf /path/to/targeted-id-source.tgz
# make install

Create /usr/local/targeted-id/etc/tid.cnf, if necessary, and set the password for the shib database user (from above); you can copy tid.cnf.example in that directory, and simply set the password accordingly. The file should be readable only by root. Also create /usr/local/targeted-id/etc/admin.cnf, if necessary, which is used by the database backup script; the shibadmin MySQL user account can be used for the backup.
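
For illustration, both files are ordinary MySQL defaults files; a hypothetical sketch of tid.cnf (admin.cnf is analogous, using the shibadmin credentials):

[client]
user=shib
password=PASSWORD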

Initialize the Targeted ID database

You must either set up a new (empty) database, or initialize the database from a backup (e.g. a backup created on the peer system).

Initialize a new database

WARNING: THIS STEP WILL DROP THE TABLES OF ANY EXISTING DATABASE. If you intend to initialize the database from a backup instead, skip this step and proceed to the next one.

To create a new (empty) database, process the schema file, e.g.:

# mysql -u root -p < /usr/local/targeted-id/etc/tid-init.sql

Initialize a Targeted ID database from backup

Perform this step to set up the database when adding a new machine to an existing cluster. You should obtain the backup from an existing machine in the cluster.
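
If a sufficiently recent dump is not already on hand, one can be generated on the peer with mysqldump; a hypothetical example (routine backups are normally produced by the backup script installed above, using the admin.cnf credentials):

# mysqldump --defaults-extra-file=/usr/local/targeted-id/etc/admin.cnf --databases targetedID > /path/to/most-recent-backup.sql

Copy the dump to the new node, then load it: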

# mysql -u root -p < /path/to/most-recent-backup.sql

Load the targeted ID stored function into the database

The IdP resolver will call a stored function tid to generate and retrieve the targeted ID as the attribute source. The source for this function is in /usr/local/targeted-id/etc/tid.sql. Note that the function uses 2 hard-coded "secret" strings to randomize the generated IDs. It is imperative that you set these 2 strings in the function source before loading it, and that all nodes in a cluster always use the same 2 secret strings. To set the secret strings, make a copy of tid.sql (to, say, tid.sql.private), edit the copy, locate the declarations of mySecret1 and mySecret2, and replace the secret_1 and secret_2 string literals accordingly.

    DECLARE mySecret1 VARCHAR(255) DEFAULT 'secret_1';
    DECLARE mySecret2 VARCHAR(255) DEFAULT 'secret_2';
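
One way to make the private copy and substitute the secrets, as a sketch (SECRET_1 and SECRET_2 stand in for the actual secret strings, assumed to contain no characters special to sed):

# cd /usr/local/targeted-id/etc
# cp tid.sql tid.sql.private
# sed -i 's/secret_1/SECRET_1/; s/secret_2/SECRET_2/' tid.sql.private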

Make sure the resulting file is only readable by root. Once you have correctly set these strings, you can load the function as follows:

# mysql -u root -p targetedID < /usr/local/targeted-id/etc/tid.sql.private

Set up the grant tables

Once you have loaded the database tables and stored function, you must set up the grant tables for the shib and shibadmin database users. The file /usr/local/targeted-id/etc/tid-grants.sql contains the necessary grants for users on the local machine; you should modify this to add the same grants for the users on the peer machine.
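
For illustration only, the added lines follow the same pattern as the existing localhost entries; something of the form below, with idp-2.mit.edu standing in for the peer's hostname and the actual privilege lists copied from the localhost grants in the file:

GRANT SELECT, INSERT, UPDATE ON targetedID.* TO 'shib'@'idp-2.mit.edu';
GRANT ALL PRIVILEGES ON targetedID.* TO 'shibadmin'@'idp-2.mit.edu';

Then load the grants file: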

# mysql -u root -p < /usr/local/targeted-id/etc/tid-grants.sql

Set up the tid-syncd daemon

In all but exceptional cases, a generated ID is the deterministic result of hashing the username, provider ID, and secret strings, so the ID for a given (user, SP) pair will be the same no matter which node in the cluster generates it. This may not hold, though, in the (highly unlikely) case of a hash collision, or if an ID needs to be revoked. To make sure that the databases in a cluster remain in sync, we therefore run the tid-syncd daemon, which propagates new IDs to the peer(s). The daemon should run on all nodes and be started at boot time. It logs to the LOCAL5 syslog facility, so /etc/syslog.conf should be adjusted accordingly:

# targeted ID sync daemon
local5.*                                       /var/log/tid-syncd
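
After editing /etc/syslog.conf, restart the syslog daemon so that the new facility takes effect:

# /etc/init.d/syslog restart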

Start the daemon, and make sure it is started at boot time:

# /etc/init.d/tid-sync start
# chkconfig --add tid-sync

Entitlements MySQL database

Beginning in the fall of 2012, the IdP attribute resolver can generate an eduPersonEntitlement attribute dynamically for a provider, based on parameters retrieved from a local MySQL database. The following steps for initializing this database assume that MySQL has already been initialized, and that the local targeted ID database has already been set up (see above).

Initialize the entitlements database

# cd /usr/local/shibboleth-idp/conf/entitlements
# mysql --defaults-extra-file=/usr/local/targeted-id/etc/root.cnf < entitlements.sql

Load the stored procedure into the database

# cd /usr/local/shibboleth-idp/conf/entitlements
# mysql --defaults-extra-file=/usr/local/targeted-id/etc/root.cnf < select_SP_params.sql

Set up the grant tables

# cd /usr/local/shibboleth-idp/conf/entitlements
# mysql --defaults-extra-file=/usr/local/targeted-id/etc/root.cnf < grants.sql

Firewall

Make sure that the additional ports used by the IdP are enabled in the firewall. Use the command "iptables --list -n --line-numbers" to determine the proper rule number; the following example assumes we are inserting rules beginning at number 36. Also replace 18.x.y.z with the appropriate IP address of the peer node in the cluster, not the local host.

# iptables --list -n --line-numbers
# iptables -I RH-Firewall-1-INPUT 36 -m state --state NEW -m tcp -p tcp --dport 8443 -j ACCEPT
# iptables -I RH-Firewall-1-INPUT 37 -m state --state NEW -m tcp -p tcp --dport 8444 -j ACCEPT
# iptables -I RH-Firewall-1-INPUT 38 -m state --state NEW -m tcp -p tcp --dport 446 -j ACCEPT
# iptables -I RH-Firewall-1-INPUT 39 -m state --state NEW -m tcp -p tcp --dport 447 -j ACCEPT
# iptables -I RH-Firewall-1-INPUT 40 -m state --state NEW -m tcp -p tcp -s 18.x.y.z --dport 9510 -j ACCEPT
# iptables -I RH-Firewall-1-INPUT 41 -m state --state NEW -m tcp -p tcp -s 18.x.y.z --dport 9520 -j ACCEPT
# iptables -I RH-Firewall-1-INPUT 42 -m state --state NEW -m tcp -p tcp -s 18.x.y.z --dport 9530 -j ACCEPT
# iptables -I RH-Firewall-1-INPUT 43 -m state --state NEW -m tcp -p tcp -s 18.x.y.z --dport 3306 -j ACCEPT
# /etc/init.d/iptables save