Design

The overall design of our interface (as shown in the screenshots below) was meant to be a clean, simple application for an enterprise audience. We spent a great deal of time researching enterprise design patterns and found that using grays with one highlighting color (red, in our case) made for a professional-looking application. High-level navigation between interfaces was done with a horizontal tab system familiar from many corporate sites. Each tab is generally self-contained in terms of functionality; however, the underlying database of trips is shared across tabs. As the user navigates, we wanted to maintain state in each tab independently, because some users may be using multiple interfaces at once (e.g., a manager approving trips for subordinates while planning a trip of their own).

During the initial development, we decided to restrict our "canvas" to a constant size across tabs for both consistency and re-sizing purposes.

[Screenshot: overall interface design]


The Create Trip interface, inspired by Google's homepage, was designed to optimize learnability while remaining powerful. Originally we had multiple lines of input fields arranged vertically, but paper prototyping revealed that a simpler version with just three boxes would be much better in terms of learnability, leading to a major design overhaul. The page itself used a high-level table structure to keep most elements aligned, with the intentional exception of the "Save Trip" and "Submit Trip" buttons, which were center-justified to stand out.

However, heuristic evaluation revealed that we may have gone too far: with just one line, there was confusion about whether the address was a starting point or a destination. This was alleviated somewhat through better labeling and by letting the user create, reorder, and delete entered locations using direct manipulation, but the initial confusion upon entering the interface persisted through user testing. We considered using two lines, as a heuristic evaluator suggested, but we wanted to see if we could make the one line work, and the verdict is "sort of".

Our general design approach was to have substantial artificial intelligence on the back end aiding the user. Its primary function would be to optimally order legs for the user, who could then adjust the suggestion via direct manipulation (a la Google). We never fully implemented this back end, which somewhat compromised user testing. Other aspects of this design pattern were implemented, however, via "smart" error-prevention techniques such as auto-complete, the calendar widget, and enabling/disabling the "Add" button until all fields were filled out.
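One of those error-prevention techniques, gating the "Add" button until every field is filled in, boils down to a small validation function. A minimal sketch (the field values and element IDs here are illustrative, not our actual markup):

```javascript
// Returns true only when every field value is a non-empty string
// after trimming whitespace.
function allFieldsFilled(fields) {
  return fields.every(function (value) {
    return typeof value === "string" && value.trim() !== "";
  });
}

// In the page itself, jQuery would re-check on every keystroke, e.g.:
// $("#leg-form input").on("keyup", function () {
//   var values = $("#leg-form input")
//     .map(function () { return this.value; }).get();
//   $("#add-leg").prop("disabled", !allFieldsFilled(values));
// });
```

Keeping the check in a pure function like this also made it easy to reuse the same rule when deciding whether "Save" and "Submit" should be enabled.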


The Approve Trip interface was designed around efficiency (while still supporting CRUD operations), because our interviews showed that managers spent very little time, usually on Fridays, approving expenses. Thus, the "front page" is designed for the user to quickly scan high-level details of a trip (who, what, where, when, and how much) and approve or reject from this page alone. This functionality did not change much from paper prototyping beyond minor labeling changes. During heuristic evaluation, an expert recommended adding a "select all" widget, which made a lot of sense given the efficiency goal.
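The "select all" widget amounts to setting a selected flag on every pending trip at once. A minimal model of that behavior (the trip objects and field names are hypothetical):

```javascript
// Toggling the master checkbox sets the selected flag on every trip.
// Returns new objects rather than mutating the originals, so the
// pending list can be re-rendered from scratch.
function setAllSelected(trips, checked) {
  return trips.map(function (trip) {
    return Object.assign({}, trip, { selected: checked });
  });
}
```

The bulk approve/reject buttons would then operate only on the trips whose `selected` flag is set.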

During user testing, an actual sales manager found this interface very easy to use. In general, this was a highly successful design pattern.

The Analyze Trip interface was designed to maximize user control. We found during interviews that auditors used a lot of creativity in their analyses and that their tasks were rarely "standardized," so we wanted the user to be able to jump around and dig into details along a variety of dimensions. In the same vein, we included an option to export all data to Excel in tabular format. We looked to mint.com as inspiration for this interface.
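The Excel export amounts to flattening the trip data into comma-separated text that Excel can open. A hedged sketch of the kind of helper involved (the column names in the usage comment are illustrative):

```javascript
// Builds CSV text from a header row and data rows. Cells containing
// commas, quotes, or newlines are quoted, with embedded quotes doubled.
function toCsv(header, rows) {
  function escapeCell(cell) {
    var s = String(cell);
    return /[",\n]/.test(s) ? '"' + s.replace(/"/g, '""') + '"' : s;
  }
  return [header].concat(rows)
    .map(function (row) { return row.map(escapeCell).join(","); })
    .join("\n");
}

// e.g. toCsv(["Employee", "Cost"], [["Smith, J", 120]])
```

In the page, the resulting string would be offered to the user as a downloadable .csv file.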

Input from paper prototyping led us to set up a vertical tab system on the left, which gave the user a visible indication of mode at all times. It also led us to restrict where clicking was possible: originally we thought users might want to click on the graphs themselves, but we found that users preferred a consistent navigation interface over a more flexible one. Heuristic evaluation of this interface focused mainly on issues such as alignment and hue, which were incorporated into the final design.

Implementation

The web application was implemented using a mixture of HTML, CSS, JavaScript, and jQuery (1.5.2, with jQuery UI 1.8.11). HTML and CSS regulated the static design framework of the application, including tables, forms, fonts, colors, and element sizes.

JavaScript was used extensively to respond to dynamic user input. In general, we changed the HTML using JavaScript, but only within certain elements.

jQuery was used mainly for direct-manipulation features such as the drag-and-drop functionality in Create Trip and the date slider in Analyze Trip. We kept local copies of all library code rather than linking to web-hosted versions.
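Under the plugin's hood, dropping a dragged row just moves one element of the underlying leg array before the trip is re-rendered. A minimal model of that reordering step (the function name is ours, not the plugin's):

```javascript
// Moves the leg at index `from` to index `to`, returning a new array
// so the original ordering is preserved until the drop is committed.
function moveLeg(legs, from, to) {
  var copy = legs.slice();
  var moved = copy.splice(from, 1)[0];
  copy.splice(to, 0, moved);
  return copy;
}
```

The drop callback supplied to the drag-and-drop plugin would call something like this and then redraw the leg table from the new array.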

In terms of code structure, each tab was self-contained, which allowed us as a team to easily work collaboratively.
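A rough sketch of what "self-contained" meant in practice: each tab lived in its own namespace object with its own state and init routine, so team members could work in separate files without collisions (the names and fields here are illustrative):

```javascript
// Each tab owns a namespace with private state and an init routine.
var CreateTrip = {
  state: { legs: [] },
  init: function () { this.state.legs = []; }
};
var ApproveTrip = {
  state: { pending: [] },
  init: function () { this.state.pending = []; }
};

// A tiny router initializes only the tab being activated, leaving the
// other tabs' state alone so each keeps its state independently.
function activateTab(tabs, name) {
  tabs[name].init();
  return tabs[name];
}
```

Because the shared trip database was the only point of contact between tabs, merging each member's work was mostly painless.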

Where relevant, we relied on open-source JavaScript and jQuery plugins. These included:

  • The calendar widget (JavaScript)
  • The date slider (jQuery)
  • Table Drag n' Drop (jQuery)
  • The tabs (jQuery)
  • The data graphing functionality (jQuery)

We also intended to use Google APIs for the calendar and map in the Create Trip interface. However, the Calendar API turned out to be far less robust than assumed. For example, we intended to let the user drag and drop meetings on the calendar, just as in Google Calendar. We discovered after the Computer Prototype stage that the Google Calendar API only lets the user view an embedded calendar on a webpage; changes must be submitted to Google's servers through forms, and there is no direct-manipulation capability. As a result, we soured on the Google API and never implemented the calendar or map. This definitely affected user testing, because we had effectively planned on Google being the "brains" behind our artificial intelligence, which in turn was supposed to make the interface much more learnable and powerful.

If we were to continue development, we would probably have to implement a drag-and-drop calendar ourselves and write a great deal of JavaScript to take Google's place in intelligently optimizing the user's trip. We think the Google Maps API is still usable for display.

In terms of jQuery plugins, we ran into problems with plugins that were built against older versions of jQuery and conflicted with one another. This is a problem with open-source code in general, however, and we were able to work around it adequately.
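The standard workaround for such clashes is jQuery's noConflict mode: load the older jQuery a legacy plugin needs, capture a reference to it, then load the newer version. A minimal simulation of that hand-off (no real jQuery is loaded here; `page` and `needsLegacy` are stand-ins we invented for illustration):

```javascript
var page = {};                        // stand-in for the browser's window
page.$ = { version: "1.4.2" };       // older jQuery a legacy plugin expects
var legacyJQ = page.$;               // capture it before loading the new one
page.$ = { version: "1.5.2" };       // newer jQuery overwrites the global
// In a real page this capture is done with: var legacyJQ = jQuery.noConflict(true);

// Route legacy plugins to the captured copy, everything else to the new $.
function pickJQueryFor(plugin) {
  return plugin.needsLegacy ? legacyJQ : page.$;
}
```

This keeps both versions alive on the page, at the cost of extra download weight, which is why we only resorted to it when a plugin could not simply be upgraded.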

Evaluation

Two user tests were done in person and one over the phone. Each test involved only one developer, who acted as both facilitator and observer. The users were a former salesperson, a sales manager, and a corporate auditor. Two of the three (the auditor and the sales manager) had been interviewed previously, so they were already somewhat familiar with the project.

After a very brief introduction to the purpose of the application, all three were given the same set of tasks (written on a sheet of paper) to perform, including:

  1. Creating a trip with three legs
  2. Deleting a personal trip from "My Trips"
  3. Approving a trip
  4. Rejecting a trip after detailed investigation
  5. Investigating IT reimbursement data
  6. Finding out who approves trips in the Sales Department
  7. Changing the date range on a graph.

We did not include a demo, because we felt the design should be easily learnable right off the bat.

During user evaluation, we were primarily interested in the user's reactions during the test, and we were looking for critical incidents.

Salesperson

Critical Incidents:
Create Trip
- "What am I supposed to do?": could not click "Add"
- Confusion between arrival point and destination
- Did not understand the difference between "Add", "Save", and "Submit"
Approve Trip
- Easily accomplished tasks
Analyze Trip
- "I understand what is being shown"
- "The graph is messy"
- Took a minute to find the date slider; noted that its gray color blended into the background

Design Changes:
Create Trip
- Text-box helper text appeared in black (rather than gray) in Firefox; test on more browsers than just Chrome
- Abandon the one-line experiment; add multiple lines, as previously discussed, so the user inputs one leg by default
- Use more descriptive terms than "Save", "Add", and "Submit"
Approve Trip
- None
Analyze Trip
- Clean up the graphs
- Increase the contrast/hue of the date slider so it stands out more

Sales Manager

Critical Incidents:
Create Trip
- Confusion between "Save" and "Submit"
- Did not notice mileage
Approve Trip
- None
Analyze Trip
- Did not notice the date range on the slider

Design Changes:
Create Trip
- See the labeling changes above
- Add a box showing total mileage for the trip
Approve Trip
- None
Analyze Trip
- Include more labeling on the slider

Auditor

Critical Incidents:
Create Trip
- None
Approve Trip
- None
Analyze Trip
- "Very useful interface to compare histories"
- "Very useful when going through employees"
- "Time bar was confusing"

Design Changes:
Create Trip
- None
Approve Trip
- None
Analyze Trip
- Improve usability of the slider per the comments above

Reflection

Overall, our team was happy with the results of our project, which did not change much from the initial planning stages.

GR1 - Project Proposal and Analysis

We think that this part of the design process was probably the most critical to our eventual success. Factors that worked well for us were:

  • Task Analysis: limiting the scope of the project so that we could accomplish what we needed to in three months.
  • User Analysis: spending a lot of time finding truly representative samples of our user population to interview in depth. This set the design criteria and guided decisions for the rest of the project; we referred back to the interview notes quite a bit over the following months.

Parts that probably could have been done better:

  • Domain Analysis: We could have done a better job thinking about multiplicities. For instance, we missed the entity "legs of a journey," which might have led us to devote more room to it in the final design.

GR2 - Designs

We split up, and each group member came in with two options for each task, giving us a total of six designs per task. The design meeting was spent discussing the pros and cons of each. The final design that emerged from this process was a blend incorporating aspects from all three members: in general, one design "won" for each task, but small embellishments were blended in from the other designs for support.

GR3 - Paper Prototyping

This step was critical in improving the usability of our interface. In particular, the Create Trip and Analyze Trip interfaces underwent major redesigns as a result of this process and became much more efficient, learnable, and visually appealing. We had some difficulties with the process of the user test itself: our briefing was far too detailed to elicit really good feedback from users. In the end, we found that the briefing and task list should strike a balance between guidance and freedom, and that a bad briefing or task list can severely compromise the effectiveness of a test.

GR4 - Computer Prototyping

During our computer prototyping some things went well and others eventually caused the project to suffer.

Things that went well were:

  • Focusing a lot on the colors, fonts, and general layout. This ended up creating an appealing, generally learnable interface for people to use.
  • Focusing on navigation and visibility of mode and state. 

Things that caused headaches:

  • Not researching the usability of the Google Calendar and Maps APIs at all, because we were not focused on the back end. This led to implementation problems.
  • Insufficient attention to multiplicities. This caused our interface to feel shallow and unresponsive under extreme, but still reasonable, conditions (such as a driving trip with 10 legs).

GR5 - Implementation

In the end, we incorporated fixes for most of the "major" design flaws exposed during heuristic evaluation. We did find some irrational attachment within the group to the design after the initial prototype stage, especially to the one-line interface in Create Trip.

We never got to the map and calendar portions of Create Trip, in either the prototype stage or the implementation, which probably would have revealed a whole new set of usability problems. In essence, by putting out an incomplete prototype, we were unable to get feedback on all the "moving parts" of our interface. We can easily see this being a tension in any iterative design process: some aspects are tested while still half-baked, and you just hope you do not have to scrap the whole design if the feedback comes back negative.

We could have done better in terms of version control amongst group members, but this was somewhat mitigated by our modular design. Each member essentially handled one tab. Overall design decisions were made easier by our early focus on the user interviews.

GR6 - User Testing

The "final" user testing was somewhat difficult because the interface was not completely implemented according to our original design. However, we did find that giving the application to "real" people revealed flaws that had not been discovered during heuristic evaluation by experts (for instance, the usability of the slider). On the other hand, we applied what we had learned from paper prototyping about briefing users, which led to better feedback.

Summary

Overall, we found the iterative design process very useful and effective. Involving the user at the very beginning was extremely helpful. Paper prototyping was also a great tool: we would not have been able to redesign the interface to the extent we did had we jumped straight into coding. We found tension between computer "prototyping" and "implementation": inevitably, the elements left shallow in the prototype stage are the ones that end up with usability problems. At the same time, spending too much time on certain aspects of the prototype led to reluctance to change them. If we were managing a group developing applications in the future, that is the area we would focus on in terms of project leadership.


1 Comment

  1. - In the implementation, when you discovered the difficulty of implementing the calendar, you should at least have made an understandable fake one. The calendar in your prototype was not sufficient to show users what it really means and what they can do with it.

    - For user feedback, you missed judging severity for the issues.

    - From the reflection, it is a great finding that a shorter briefing can extract more useful feedback from test users!