GR6 - User Testing

Design

Homepage

Grading

In the paper prototype phase, we focused on testing the grading interface. We went through two iterations. The first displayed only students whose grades had been entered: whenever a new grade was entered, a new row with a blank name box appeared at the bottom of the page. The second iteration displayed all names in a table. Our final design follows the second iteration, since showing every student at once offers better visibility.

Forms (including posting a new assignment)


In the heuristic evaluation, we received a lot of feedback about our long forms. Evaluators also mentioned that they did not know what some of the form fields were for. In the final design, we simplified our forms and added placeholder text to many fields as examples.

Implementation

We decided to build a Web application so that users would not have to install software, but this exposed our UI to the limitations of CSS. For example, we initially designed the grade entry form as a full-screen sheet with horizontal and vertical scroll bars, where the first row and first column stay fixed to provide context. Implementing this in CSS proved hard enough that we ran out of time for it.

We built upon a pre-existing code base, which allowed us to focus on front-end improvements, but might have subconsciously biased us towards preserving existing functionality and interactions.

The site’s stylesheets are written in SCSS and make heavy use of variables, mixins, and nested rules to reduce selector complexity and to separate generally useful rules from page-specific and site-specific ones. The generic rules have been extracted into a Rails plug-in so that they can be reused in other applications, and so that bug fixes and enhancements can easily be back-ported into the course management site. We preferred CSS3 features to JavaScript (for example, CSS3 Transitions instead of jQuery UI’s fade effect) so that we could benefit from hardware acceleration on desktop and mobile devices, at the expense of Internet Explorer users.

The JavaScript is driven by markup, using HTML5 data- attributes. This approach makes it easy to keep widget behavior consistent across features, because the JavaScript for an interaction lives in one place, and because the ease of adding a behavior greatly reduces the temptation to cut corners. However, developing the JavaScript behavior library took a toll on development speed, and in turn on the UI’s polish.
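The markup-driven approach can be sketched as a single registry mapping data- attribute keys to initializer functions. This is an illustrative sketch, not our actual library: the attribute and function names below are hypothetical, and a plain object stands in for a DOM node so the sketch runs outside a browser.

```javascript
// Sketch of markup-driven behavior: each element opts into an
// interaction by carrying a data-* attribute, and one registry holds
// the JavaScript for every such interaction. Names are hypothetical.
const behaviors = {
  // data-max-length="n": truncate the field's value to n characters.
  maxLength(el) {
    el.onChange = () => {
      const limit = parseInt(el.dataset.maxLength, 10);
      if (el.value.length > limit) el.value = el.value.slice(0, limit);
    };
  },
};

// Attach every behavior that an element's markup requests.
function initBehaviors(elements) {
  for (const el of elements) {
    for (const key of Object.keys(el.dataset)) {
      if (behaviors[key]) behaviors[key](el);
    }
  }
}

// In a browser, elements would come from document.querySelectorAll;
// here a plain object stands in for a DOM node.
const field = { dataset: { maxLength: '5' }, value: '', onChange: null };
initBehaviors([field]);
field.value = 'abcdefgh';
field.onChange(); // field.value is now 'abcde'
```

Because the behavior is named in the markup, adding it to a new page is a one-attribute change, which is what keeps interactions consistent across features.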

We make liberal use of AJAX, because our Rails application will be deployed on a 4-core server where each core can answer most requests in under 10 ms, so overloading the server is not a concern. This decision will need to be revisited if the site grows to Stellar’s size.
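The inline-save pattern behind this AJAX use can be illustrated as follows. The endpoint, field names, and helper below are assumptions for the sketch, not our actual routes; the transport is injected so the same logic works against XMLHttpRequest (or jQuery.ajax) in the browser and against a stub elsewhere.

```javascript
// Hypothetical sketch of an immediate inline save: every grade edit is
// POSTed as soon as it happens, instead of batching edits into one
// form submission.
function saveGrade(transport, studentId, score, onDone) {
  const payload = { student_id: studentId, score: score };
  transport('POST', '/grades', payload, (status) => {
    // Report success so the UI can mark the cell as saved (or retry).
    onDone(status >= 200 && status < 300);
  });
}

// Stub transport: records the request and reports success synchronously.
const requests = [];
function stubTransport(method, url, body, callback) {
  requests.push({ method: method, url: url, body: body });
  callback(200);
}

let saved = null;
saveGrade(stubTransport, 42, 95, (ok) => { saved = ok; });
// saved is now true; requests[0] records the POST to '/grades'
```

One small request per edit is exactly the traffic pattern that is cheap on a fast 4-core server but would need batching or debouncing at Stellar’s scale.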

Evaluation

Users

Briefing

We are working on a course management site, something along the lines of Stellar. Students submit psets, TAs get them, grade them, enter the grades, and then students see their grades. Today, we want to test the UI for entering grades. You are a TA, and your tasks will all revolve around posting psets and entering grades for psets.

Tasks

Observations

Reflection

Because we already had a functioning product, we thought we could focus on coding the front-end features that we wanted to perfect through this course. Unfortunately, we failed to account for the risk that our back-end would have to change. It ended up changing considerably to accommodate the features that arose from the user interface design process, which took a lot of time away from front-end work.

We decided at the very beginning to focus on a few core features: assignment creation, grading, and submitting. We wanted to concentrate our attention on this small set of very important features and perfect the user interaction with them. We are happy with the way these features turned out, and plan to work on the UIs of the remaining features, as this website will be in use next semester in 6.006.

The feedback from the paper prototype, from HW2, and from Anh helped us tremendously, although the different rounds of feedback overlapped considerably. This overlap helped us decide which problems to tackle first: when multiple people complain about the same thing, the problem is clearly noticeable.

We evaluated the results through this GR6 and through our own interactions with the final product. The user studies we conducted for GR6 showed that we still need to tweak a few small things, such as allowing the TA to enter grades from the “Teacher” menu as well as the “Grading” menu. We feel that the product is leaps and bounds ahead of our initial prototype, and that it is ready for production use with an MIT algorithms class. That class will provide much more feedback for the next iteration of the project.