PosterBoard - GR6 - User Testing

Manasi Vartak, Tristan Naumann, Chidube Ezeozue

Design

Final Design

Device: We decided to use a SmartBoard for our implementation instead of a web or mobile interface because user studies indicated that users would not go out of their way to get information about events through a website or mobile app. In fact, the events.mit.edu website, which is a central repository for events, is hardly used by students. We envision SmartBoards (or similar devices) being placed at the current locations of poster boards in Stata. We also chose to allow posters to be added through the SmartBoard itself, as opposed to through a separate web interface, for simplicity and uniformity of the interface.

In our final design, we implemented two views to organize posters on the PosterBoard, one view to add a poster and a detailed view to interact with the selected poster. We decided against the "Random view" for viewing posters because it did not aid users in finding relevant information.

Design Rationale

In the course of our design, we made changes driven by feedback from our paper prototyping and heuristic evaluation. Our user testing also shed light on possible future improvements.

Paper Prototype 

Heuristic evaluation

Implementation

We implemented the project using a SmartBoard, an optical touchscreen that plugs into any computer's serial/USB port for input and VGA port for output. To minimize our dependency on the SmartBoard, we implemented the project as a website and projected it onto the SmartBoard. The backend was implemented in Python using the Django web application framework and an SQLite database. We also wrote a standalone websockets server for feeding the input from the RFID card reader to the frontend. The frontend was implemented using HTML, CSS, and JavaScript, with a heavy reliance on the jQuery and jQuery UI libraries as well as a few other libraries.
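As a rough illustration of how the card reader's input reaches the interface, the sketch below shows a frontend WebSocket client of the kind this architecture implies; the URL, port, and message format are assumptions, not our exact code.

```javascript
// Minimal sketch (illustrative URL and payload): listen for card scans pushed
// by the standalone websockets server and hand the scanned ID to the rest of
// the frontend.
var rfidSocket = new WebSocket('ws://localhost:9000/rfid');  // assumed host/port/path

rfidSocket.onmessage = function (event) {
    // Assumed payload: {"cardId": "..."} sent once per swipe.
    var scan = JSON.parse(event.data);
    $(document).trigger('rfid:scan', scan.cardId);  // let any view react to the swipe
};

rfidSocket.onerror = function () {
    console.log('Lost connection to the RFID reader.');
};
```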

The dialog for adding posters was implemented using jQuery UI Dialog; the Calendrical and Chosen JavaScript libraries were used for selecting dates/times and tags respectively.
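A minimal sketch of how such a dialog can be wired up is shown below; the element IDs, options, and the Calendrical method names are illustrative assumptions rather than our exact code.

```javascript
// Illustrative wiring of the add-poster dialog (IDs and options are assumptions).
$('#add-poster-dialog').dialog({ autoOpen: false, modal: true, width: 500 });

// Chosen turns the plain <select multiple> of tags into a searchable picker.
$('#poster-tags').chosen();

// Calendrical attaches date and time pickers to plain text inputs
// (method names assumed from the Calendrical plugin).
$('#poster-date').calendricalDate();
$('#poster-time').calendricalTime();

// The dialog is opened when the user asks to add a poster.
$('#add-poster-button').on('click', function () {
    $('#add-poster-dialog').dialog('open');
});
```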

The Calendar View was implemented using a grid built out of HTML tables and the Similar View was implemented using absolutely positioned image elements.
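The sketch below illustrates the positioning approach used for the Similar view, assuming a relatively positioned container and illustrative poster fields; the logic that decides where each poster goes is omitted.

```javascript
// Sketch: each poster thumbnail is an absolutely positioned <img> inside a
// relatively positioned container. Field names (thumbnailUrl, x, y) are
// illustrative; the layout code that computes x and y is not shown.
function placePoster(container, poster) {
    $('<img>', { src: poster.thumbnailUrl, 'class': 'poster-thumb' })
        .css({ position: 'absolute', left: poster.x + 'px', top: poster.y + 'px' })
        .appendTo(container);
}
```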

Focusing of the posters was implemented using the Colorbox JavaScript library, and the scribbling was implemented using SVGs with the Raphael JavaScript library. Additional libraries were used for displaying the color picker and old sketches.
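The sketch below shows the general shape of the scribbling interaction with Raphael, assuming an illustrative overlay element and mouse events standing in for touch input.

```javascript
// Sketch: extend an SVG path as the pointer moves over the focused poster.
// Element ID and canvas size are illustrative.
var paper = Raphael('scribble-layer', 600, 800);
var currentPath = null;
var pathString = '';

$('#scribble-layer').on('mousedown', function (e) {
    pathString = 'M' + e.offsetX + ',' + e.offsetY;
    currentPath = paper.path(pathString).attr({ stroke: '#ff0000', 'stroke-width': 3 });
});

$('#scribble-layer').on('mousemove', function (e) {
    if (!currentPath) { return; }
    pathString += 'L' + e.offsetX + ',' + e.offsetY;
    currentPath.attr({ path: pathString });  // redraw the stroke with the new point
});

$(document).on('mouseup', function () {
    currentPath = null;  // stroke finished; it is persisted only when the user saves
});
```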

When storing the sketches, we opted to store the coordinates as fractions of the poster's height and width at sketch time, as opposed to absolute coordinates, to permit scaling at display time.
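A minimal sketch of this coordinate scheme, with illustrative helper names:

```javascript
// Store each point as a fraction of the poster's size at sketch time,
// then multiply back out at display time.
function toFraction(x, y, width, height) {
    return { x: x / width, y: y / height };
}

function toPixels(point, width, height) {
    return { x: point.x * width, y: point.y * height };
}

// Example: a point drawn at (150, 300) on a 600x800 poster is stored as
// (0.25, 0.375) and redraws correctly on a 300x400 thumbnail as (75, 150).
```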

We also contemplated displaying old sketches as stacks underneath the main poster, but the Colorbox library was not flexible enough to accommodate this. In retrospect, our approach gives more visibility to the old sketches than a stacked arrangement would have.

We also contemplated saving the sketches continuously as the user sketched instead of waiting until the user was done, but that approach was more bug-prone; given the time limitations, we opted for the less bug-prone approach of saving when the user clicked the Save button.
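A sketch of this save-on-click approach is shown below; the URL, the posterId and strokes variables, and the payload shape are assumptions.

```javascript
// Sketch: serialize all strokes and send them to the backend in a single
// request when the user presses Save. posterId and strokes are assumed to
// have been populated while the user was scribbling.
$('#save-sketch').on('click', function () {
    $.post('/posters/' + posterId + '/sketches/', {
        strokes: JSON.stringify(strokes)  // array of strokes, each a list of normalized points
    }).done(function () {
        $('#scribble-layer').hide();  // dismiss the scribbling layer after a confirmed save
    });
});
```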

To remove the need for typing entirely, we used the RFID card reader as the mechanism for getting user information: rather than typing in their details, users identify themselves by scanning their card, and the reader's input is fed to the frontend through the standalone websockets server described above.

The actual board was not functional until the week before GR5 was due: it was missing a crucial connecting cable that lets the SmartBoard communicate with the computer. After several tens of hours of conversations with Customer Service and three sets of incorrect cables, we finally got the board working. As a result, resolution and fitting the interface to the big screen are not handled as well as we would have liked.

It is also important to point out that the board's touchscreen was not as responsive as we had hoped, and this impacted the usability of the system somewhat. Another board-specific implementation detail is that the board does not support multi-touch, so we abandoned the idea of supporting multiple users working with the board simultaneously.

Getting the RFID reader to work with JavaScript and talk to the Django server also required exploring several approaches. We built three prototypes, two of which failed, before arriving at a viable one. The working prototype came together the week before GR5, limiting our ability to take further advantage of it.

Evaluation

The ideal user test would have involved moving the board to a part of the building with high foot traffic and testing with people who were drawn to the board in the first place. We did not have that luxury, however, because the board was fixed in place in a research group's shared space. We nevertheless managed to conduct a test that was very revealing and helpful.

Choice of users

We were looking for users representative of the bulk of the MIT population: students. We had 3 users: 1 female postdoc who recently completed her PhD in New York, 1 male PhD student working in CSAIL, and **Tristan fill in here**. In order not to focus solely on students, we also performed a demo of the project for a professor and got some helpful comments from him.

User preparation

We utilized our briefings and tasks (outlined below) from the paper prototyping exercise with one slight modification.

Briefing

PosterBoard is a project that aims to increase visibility of event posters by encouraging interaction with the posters.

What you are looking at is an electronic poster board that will be installed in a public place like the ground floor of the Stata Center.

Scenario Tasks

Task 1: You have come across this poster board in the Stata Center. Describe 5 things you can do with it. Interact with it for two minutes. (We used this open-ended task to get a sense for which features of the poster board were discoverable as well as which might be expected.)

Task 2: You have a USB drive in your possession. Add a poster from the USB onto the poster board.

Task 3: Find a poster you like, add it to your calendar and scribble on it.

Observations

User testing: based on the feedback we received, we would make the following changes if we were to iterate further.

Reflection

Risk assessment: We under-estimated the risk involved due to hardware. Specifically, the project turned out to be much more hardware-focused than we expected, since our application was heavily dependent on and limited by the SmartBoard and RFID equipment. As mentioned above, the SmartBoard was not fully functional until the last week of classes. As a result, we could not optimize our functionality for the SmartBoard's resolution and size. Further, since we wanted to completely remove the need for typing, it was essential to have an alternate mechanism for getting user information. We used an RFID reader for this purpose. However, this introduced another hardware dependency, and we had to focus on getting the RFID reader to work with our server. Since the working RFID prototype only came together in the last few weeks, extensions to the functionality using this capability were severely limited. In particular, we wanted to implement personalization of the PosterBoard based on the time a user last visited the board and their preferences for events. However, we cut this feature due to time constraints. This feature would also have introduced a new "personalized" mode in the system that we felt would confuse users, and it could open up security issues such as another user using the previous user's information to post posters or other content. It would also have been beneficial to have backup plans in case the hardware did not come through. We partially mitigated this risk by implementing the project as a webpage, but we could have explored other alternatives.

User testing: Testing the prototypes without the SmartBoard made it difficult for us to judge how interaction with the board would differ from interaction on a computer. The environment in which we tested the prototype was also different from the actual environment. We briefed users about the project, its purpose, and its high-level functionality; however, this meant we could not test the discoverability of the PosterBoard's purpose and functionality. Moreover, we were unable to observe how users would interact with the touch interface, since heuristic evaluation was done with a mouse and keyboard.

- Prototyping: The features we prototyped were a good set, but we should have prototyped more fully. In particular, we should have put the paper prototype on a vertical board, at scale, and let users interact with it.

- Evaluating the results of observation: each user test involved new people, so each time we found new problems but got no feedback on whether the old problems had been fixed.

- Implementation details that caused problems: saving scribbles was very time-consuming, which caused a bug where detailed posters did not appear properly.

- In future iterations, we would focus less on the functionality and more on the interface.