Design
Implementation
Evaluation
Reflection (or lessons learned)
Give users as much rope as possible - but don't let them hang themselves.
We found that users do not always know what is best for themselves. Our design once included groups, which were displayed with user-chosen colors. Users could choose any color from a color swatch - their freedom was enormous! We imagined users carefully picking colors that were easy to distinguish.
Instead, this happened:
As at least two of our heuristic evaluators noted, some color choices made the name of the company incredibly difficult to read. Our testers weren't exercising the careful consideration we'd imagined, and our interface lost usability.
We now know that a better way to implement the colors would have been to give our users more help in avoiding this sort of mistake - perhaps by automatically adjusting the font color, restricting the palette to colors that keep a black font readable, or showing the user a preview. User freedom takes more planning than we thought!
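The first two ideas above - adjusting the font color automatically and restricting the swatch - can both be driven by a contrast check. A minimal sketch in Python, using the WCAG 2 relative-luminance and contrast-ratio formulas (the function names here are illustrative, not from our actual implementation):

```python
def relative_luminance(rgb):
    """WCAG 2 relative luminance of an sRGB color given as 0-255 channels."""
    def linearize(c):
        c = c / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """Contrast ratio between two colors; ranges from 1:1 to 21:1."""
    lighter, darker = sorted((relative_luminance(fg), relative_luminance(bg)),
                             reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

def pick_font_color(background):
    """Automatically adjust the font: black or white, whichever contrasts more."""
    black, white = (0, 0, 0), (255, 255, 255)
    if contrast_ratio(black, background) >= contrast_ratio(white, background):
        return black
    return white

def readable_swatch(colors, font=(0, 0, 0), minimum=4.5):
    """Restrict a swatch to colors that keep the font readable (WCAG AA: 4.5:1)."""
    return [c for c in colors if contrast_ratio(font, c) >= minimum]
```

For example, `pick_font_color((255, 255, 0))` returns black for a yellow group, and `readable_swatch` would have silently dropped the dark backgrounds that made our black labels unreadable.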
The more testing, the better.
We noticed that our testers varied dramatically in their ability to interact with the interface. Some sailed through our tasks like they'd been using the interface for months; others made many mistakes. Similarly, some evaluators strongly disliked aspects of the interface that no one else minded. While some individuals raised very helpful points on their own, most of our major changes came from noticing trends in the way people interacted with our interface.
Started with too many features - some not requested by any user
Each iteration brings up new problems
Too many pages – lots of forms
More focus on frontend/design on first round of computer prototype
Didn't think that much about low resolutions
Paper prototype doesn't highlight certain problems: icon size, visibility, etc.
Hard to make links visible enough
Unsaved Data?