Retrenching the FY07 version of the IS&T Measures Initiative.

  • Framework for Performance Measures vs Volume or Capability Measures -- this is a placeholder for work to do in deciding how a service could be cast into a form that could be evaluated for its performance, versus just analyzed for its capacity.  See the CMU SEI site at http://www.sei.cmu.edu/str/taxonomies/view_qm.html

Jerry's Apparent Position after v2 of Q4 FY06 

  1. The measures on that page that, as they stand, already look like performance measures to Jerry include:
    1. Timeliness of Helpdesk Consulting (#1)
    2. Uptime of IS&T Servers (#7) [but state our goal (in text, since the goal line is shown in the graph)][done] -- Jerry would like to see the count of servers that contribute to this number (that much uptime is harder to sustain for a large pool of servers than for just one; see the uptime sketch after this list)
    3. Timeliness of CD Distributions (#4) (but he wonders if it's important enough -- how much volume do we still do on CD?  Rob would say that several key distributions are only done by CD, and those customers would care)
  2. Capability Measures that could be performance measures, since they have stated goals
    1. Collocation utilization, if we have a marketing program and our goal is to increase utilization by x% per year
      (Rob gets the impression Jerry doesn't think that the condition is met; Joanne thinks this too.  Rob thinks that offering the service at all has an implicit assumption that someone, call them early adopters, will want to use it -- but maybe we can't call that a quantifiable goal in a PM sense.)
    2. TSM utilization, if we have a marketing program and our goal is to increase utilization by x% per year
      (Rob gets the impression Jerry doesn't think that the condition is met; Joanne thinks this too.  Rob thinks that offering the service at all has an implicit assumption that someone, call them early adopters, will want to use it -- but maybe we can't call that a quantifiable goal in a PM sense.  I'd say we offer TSM as the Recommended product in this line of business -- that's our marketing.  80-100% of market share must be our goal, or we'd be recommending something else, like external hard drives or CD-RW.)
    3. Copyright Infringement (#10) looks like a volume measure, but Rob said it had a goal of reduction compared to the same time a year before.  Rob would argue that if you're going to have an awareness campaign about infringement, as we do, then we need to set a goal of "some sort of response to the campaign", i.e., make the trend start to go down.
  3. Good Capability Measures to keep as is
    1. Spam email # (add % of total email volume) (#2)
    2. Virus email # (add % of total email volume)  (#3)
    3. Web Self-Service software distributions (#5)
    4. Collocation (#6), if we don't establish it as a performance measure (see above)
    5. TSM Utilization (#8), if we don't establish it as a performance measure (see above)
    6. Network Security incidents (#9)
    7. Copyright infringement (#10), if we don't accept "establish a downward trend" as a legitimate goal (see above)
    8. MIT Mailboxes using SpamScreen Auto-filtering (#11)
    9. Wireless Clients, unique users per day  (#12)
    10. On-campus Utilization of IS&T Self-Help web pages  (#13)
  4. IDEAS for performance measures not reported yet
    1. The degree of shift in software distribution from CD to Web download.  Count of product lines offered in each, and the ratio of CD to Web.  Or count of licenses distributed -- this is a little dodgy because a single "order" in a CD ticket could be multiple CDs and licenses, while the N of Web downloads seems to include a lot of repeated downloads per license (given that you have no CD to draw upon if you need to do a reinstall, say).  (A rough sketch of this ratio appears after this list.)
      We know we'll have a shift in the ratio for FY07 as Filemaker becomes available over download (with CD/DVD manufacturable if you really need one). 
      Reflects Theresa's very clear goal of encouraging 24x7 access worldwide to software when you need it.
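
A rough sketch of how the CD-to-Web ratio in item 4.1 might be computed.  This is only an illustration: the order records, field names, and counts below are made up, and the point is simply that counting distinct licenses per channel avoids the repeated-download inflation noted above.

    # Hypothetical sketch of the CD-vs-Web distribution ratio (item 4.1 above).
    # The records and field names are invented for illustration; real data would
    # come from the CD order tickets and the Web download logs.
    orders = [
        {"channel": "CD",  "license": "product-a-0001"},
        {"channel": "Web", "license": "product-a-0002"},
        {"channel": "Web", "license": "product-a-0002"},  # repeat download of the same license
        {"channel": "Web", "license": "product-b-0003"},
    ]

    # Count distinct licenses per channel so repeated Web downloads of the same
    # license don't inflate the Web side of the ratio.
    licenses_by_channel = {}
    for order in orders:
        licenses_by_channel.setdefault(order["channel"], set()).add(order["license"])

    cd_count = len(licenses_by_channel.get("CD", set()))
    web_count = len(licenses_by_channel.get("Web", set()))
    print(f"Distinct licenses -- CD: {cd_count}, Web: {web_count}, CD:Web ratio = {cd_count}:{web_count}")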
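
And a minimal sketch of the point Jerry raised on the server uptime measure (#7): the same uptime percentage is a tougher goal for a pool of servers than for one machine.  The 99.9% per-server figure and the pool sizes are assumptions for the example, not our actual numbers or goals.

    # Hypothetical illustration: per-server uptime vs. "every server up at once" uptime.
    # The 0.999 figure and the pool sizes are assumptions, not real IS&T targets.
    per_server_uptime = 0.999          # each individual server up 99.9% of the time
    pool_sizes = [1, 5, 20, 100]

    for n in pool_sizes:
        # If outages were independent, the fraction of time *all* n servers are up
        # shrinks as the pool grows -- which is why reporting the server count
        # alongside the pooled uptime number matters.
        all_up = per_server_uptime ** n
        print(f"{n:>4} servers: all-up fraction about {all_up:.3%}")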

Joanne's thoughts

  • By far most of the measures OIS can offer seem to be volume measures, of interest mainly to its managers in deciding when their systems might become overloaded and fail, or need to grow by adding servers, say.  The services are established without particular performance goals other than to go as fast as possible for everyone who wants to use them.  If there actually are goals in the minds of the managers, they are kept private.
  • When we do think of performance measures in OIS, the most popular one is Uptime.
  • Who is the audience for these -- non-MIT?  Senior Leadership?  DLC administrators buying IT services? IT-Partners making recommendations or establishing computing environments for their charges?  Students?  Parents?  
    (Rob would argue that it's all of them, at different times)

Rob's thoughts

OIS client-facing measures are most likely answers to these questions:

  • Is this a service I want?  (It meets a need, and I can afford it.) -- CMU SEI QM 1.1 (Needs Satisfaction -- Effectiveness)
  • Is the service always there when I need it? -- QM 2.1 (Dependability --> Availability/Robustness)
  • Does the service perform well when I try to use it? -- QM 2.2 (Efficiency / Resource Utilization) and maybe QM 2.1.2 (Reliability)