
Design

  • Describe the final design of your interface. Illustrate with screenshots. Point out important design decisions and discuss the design alternatives that you considered. Particularly, discuss design decisions that were motivated by the three evaluations you did (paper prototyping, heuristic evaluation, and user testing).

The final design of the website is focused on the bulletin board: users can efficiently add Buckets and tasks, post notes to their board and arrange them however they want, and easily see what tasks they need to do, all from one main page.

BucketList does not have multiple pages to navigate through; the user remains on the main bulletin board screen at all times, where they can accomplish all key tasks. We chose this design over multiple pages and fill-out forms because it makes the application very efficient: users can quickly accomplish every key function from one place. This decision was motivated by our initial user surveys (see GR1 for more details).


Figure 1: Main Screenshot of BucketList

When users first visit BucketList, they are presented with the login screen shown in Figure 2.

Figure 2: Login

In our initial paper prototypes there was a much smaller focus on the bulletin board, and each Bucket had its own board (instead of one main board with all of the user's information). We got a lot of feedback regarding the bulletin board (see GR3 for details), and the consensus seemed to be that users liked the bulletin board but didn't like how we were implementing it. We tried two different designs of the bulletin board for the paper prototype (splitting the screen between an "all notes" area and the bulletin board, and having tabs to switch between "all notes" and the board), and took all the comments into account when designing the website for GR4.

Based on that feedback we switched to the 'one main board' approach and split the screen into three sections (a side list of buckets, the bulletin board on top, and a list of notes and tasks on the bottom). After coding this, we did another round of user testing before GR4 was due. Many users commented that they wanted more room for the bulletin board, and thought that always displaying the notes and tasks at the bottom and the list of buckets on the side was a waste of space. Based on this feedback, we redesigned the layout to look more like Figure 1.

Figure 3: Drop-down Menus to Sort Tasks

Before the heuristic evaluation we had only the two rightmost drop-down menus in Figure 3, 'My Buckets' and 'My Alerts', to let users sort through their tasks. Based on feedback from the heuristic evaluation, we added the option to sort by due date. These drop-down menus can easily be opened and closed, and do not take up unnecessary room on the bulletin board (as the list of tasks in the initial design did).
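The due-date sort added after the heuristic evaluation amounts to an ordinary comparator over the tasks' dates. The sketch below is illustrative only, not the actual BucketList source; the `Task` shape and `sortTasksByDueDate` name are our own.

```javascript
// Minimal sketch of the 'sort by due-date' drop-down logic.
// The task shape and function name are illustrative, not the
// actual BucketList implementation.

function sortTasksByDueDate(tasks) {
  // Copy first so the caller's array is left untouched.
  return tasks.slice().sort(
    (a, b) => new Date(a.dueDate) - new Date(b.dueDate)
  );
}

const tasks = [
  { name: 'Book flights', dueDate: '2011-05-20' },
  { name: 'Write report', dueDate: '2011-05-05' },
  { name: 'Buy supplies', dueDate: '2011-05-12' },
];

const sorted = sortTasksByDueDate(tasks);
console.log(sorted.map(t => t.name));
// → [ 'Write report', 'Buy supplies', 'Book flights' ]
```

The sorted list would then be used to populate the drop-down menu, while the user's own ordering on the board is left alone.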

Another comment from the heuristic evaluation was that users wanted to be able to open multiple pieces of paper at once, as depicted in Figure 4. Thus, in the final implementation, clicking a task or bucket in a drop-down menu opens a new paper. This makes the action more obvious and lets users view multiple tasks at once, if desired. It's also easy to close these papers with the little 'x' in the corner.
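Supporting several open papers at once mainly means tracking them by identifier so each can be closed independently. A minimal sketch of that bookkeeping, with entirely hypothetical names (`openPaper`, `closePaper`), might look like:

```javascript
// Sketch: tracking multiple open papers, each individually closable.
// All names here are illustrative, not the actual BucketList code.

const openPapers = new Map(); // paper id -> paper data

function openPaper(id, contents) {
  // Clicking a task or bucket in a drop-down opens a new paper;
  // opening the same item twice just returns the existing paper.
  if (!openPapers.has(id)) {
    openPapers.set(id, { id, contents });
  }
  return openPapers.get(id);
}

function closePaper(id) {
  // The little 'x' in a paper's corner removes only that paper.
  return openPapers.delete(id);
}

openPaper('task-1', 'Buy supplies');
openPaper('bucket-2', 'Spring Break trip');
console.log(openPapers.size); // 2
closePaper('task-1');
console.log(openPapers.size); // 1
```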


Figure 4: Multiple Papers

Figure 5: Zoomed-in List of Tasks on a 'Bucket Paper'

Based on the heuristic evaluation and another round of user testing, we also changed the layout and information presented on the paper. Specifically, we added the edit and calendar icons. The edit icon makes editing an explicit action, instead of having the task name always be editable.

We also added a "help" button, based on the heuristic evaluation. This question mark, seen in Figure 6, is always present in the bottom-left corner of the screen. Clicking it opens a new paper with information about the main aspects of BucketList (including 'alerts', 'buckets', 'due-dates', etc.). These topics were chosen by evaluating the website with users and observing which concepts confused them.

Figure 6: Help '?'

Implementation

  • Describe the internals of your implementation, but keep the discussion on a high level. Discuss important design decisions you made in the implementation. Also discuss how implementation problems may have affected the usability of your interface.

Our implementation involved separate objects for all of the various entities in our UI: papers, stickies, the board itself, notes, tasks, and buckets.  These objects contained all necessary identifying information for those entities and allowed us to find the item on the board and manipulate it according to the user's specifications.  Originally our implementation limited the number of papers open at a time to one, but after doing user testing on our computer prototype, we learned that users wanted to be able to open multiple papers at once, so we changed our implementation to allow for many papers, each of which is individually closable.
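The entity objects described above can be sketched as a small class hierarchy. This is a simplified illustration under our own assumptions about field names (`id`, `x`, `y`), not the actual implementation:

```javascript
// Sketch of the entity objects: each item carries identifying
// information so it can be found on the board and manipulated.
// Class and field names are illustrative only.

class BoardItem {
  constructor(id, x, y) {
    this.id = id; // unique identifier used to locate the item
    this.x = x;   // position on the bulletin board
    this.y = y;
  }
  moveTo(x, y) { this.x = x; this.y = y; }
}

class Note extends BoardItem {
  constructor(id, x, y, text) {
    super(id, x, y);
    this.text = text;
  }
}

class Board {
  constructor() { this.items = []; }
  add(item) { this.items.push(item); return item; }
  find(id) { return this.items.find(it => it.id === id); }
}

const board = new Board();
board.add(new Note('note-1', 10, 20, 'Call travel agent'));
board.find('note-1').moveTo(50, 60); // manipulate via lookup by id
console.log(board.find('note-1').x); // 50
```

Papers, stickies, tasks, and buckets would follow the same pattern: look the item up by its identifier, then change it according to the user's action.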

We dealt with consistency between different papers on the same board by having a "refreshBoard" function that refreshed every paper and note on the board, as well as the drop-down menus, whenever anything changed. We also called "refreshBoard" after periodically pulling information from the database backend, ensuring that users would see up-to-date information added by collaborators on other computers. This worked well for our testing purposes, since it guaranteed that the screen never showed inconsistent information about a task or bucket, but it also had the potential to slow the system down considerably if a user had many papers open at once. If this became a common problem for users, we could add logic so that only papers whose information might change get refreshed.
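As a rough sketch (not the actual BucketList source; every name here is illustrative), the refresh-everything approach and the proposed dirty-flag optimization might look like the following. In the real application this would run after each user action and on a timer after pulling from the backend.

```javascript
// Sketch of the refreshBoard approach: naively, every change redraws
// every paper, note, and drop-down. The 'onlyDirty' option shows the
// proposed optimization of redrawing only items whose information
// might have changed. All names are illustrative.

let refreshCount = 0;

function redraw(item) {
  refreshCount += 1; // stand-in for actually redrawing the DOM element
}

function refreshBoard(items, { onlyDirty = false } = {}) {
  for (const item of items) {
    if (onlyDirty && !item.dirty) continue; // skip unchanged items
    redraw(item);
    item.dirty = false;
  }
}

const items = [
  { id: 'paper-1', dirty: true },
  { id: 'paper-2', dirty: false },
  { id: 'note-1',  dirty: false },
];

refreshBoard(items);                      // naive: redraws all 3 items
refreshBoard(items, { onlyDirty: true }); // optimized: nothing is dirty now
console.log(refreshCount); // 3
```

The tradeoff is the one noted above: the naive version is simple and can never show stale data, while the dirty-flag version scales better with many open papers.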

In addition, we had to make some tradeoffs in the interest of time. Unfortunately, we never got around to implementing the "Alert" functionality. Having all of the affordances of an alert system without an actual alert system is obviously a usability drawback of our interface, but those affordances highlight important not-yet-implemented functionality that, once completed, will improve the user experience.

Evaluation

  • Describe how you conducted your user test. Describe how you found your users and how representative they are of your target user population (but don't identify your users by name). Describe how the users were briefed and what tasks they performed; if you did a demo for them as part of your briefing, justify that decision. List the usability problems you found, and discuss how you might solve them.

User tests were conducted on six MIT students who keep to-do lists and must manage group projects (exactly our target audience), whom we found by asking our friends. We began by asking some questions about how they currently manage their to-do lists, and got answers ranging from "I write everything on my mirror or on scraps of paper" to "I use MGSD for group task-management, and then I import everything to Google tasks for due-dates and reminders". This ensured that some test subjects were familiar with shared task-management applications while others just keep their own to-do lists, so we got a wide range of responses.

We decided not to do a demo as part of the briefing: we wanted to see whether users could figure out how to accomplish the key tasks on their own, so we did not show them first.

Reflection

  • Discuss what you learned over the course of the iterative design process. If you did it again, what would you do differently? Focus in this part not on the specific design decisions of your project (which you already discussed in the Design section), but instead on the meta-level decisions about your design process: your risk assessments, your decisions about what features to prototype and which prototype techniques to use, and how you evaluated the results of your observations.

The most important thing we learned is that user testing is really important!

If we had to do it again, a heuristic evaluation of our initial designs would have been useful, to get some very early feedback on our design (GR2) before even doing the paper prototypes. We spoke to some classmates about our design, but a more in-depth review would have pointed out some obvious problems that we could then have avoided in the paper prototypes. We would also have done more rounds of testing with paper prototypes: we ended up throwing away a lot of code after a round of informal user testing on our initial design (before GR4), and a third round of paper-prototype testing would have surfaced these issues before we began coding, saving us a lot of time.

Reviewing the heuristic evaluations and deciding which advice was important and which we should ignore was also valuable. Some of the advice we got in our heuristic evaluation was very useful, but other advice was misguided (perhaps because the back-end functionality wasn't working yet). If we had blindly followed all the advice we got, we would have ended up scrapping several key features that were important to our design but not yet apparent in the GR4 implementation. Having a relatively advanced prototype by GR4 helped a lot in this respect, though, because testers could get a better sense of our application.

Another very important aspect of the design process was the initial project proposal, analysis, and background research. Speaking to users and figuring out what they would actually want was extremely helpful in focusing our application and deciding which features were important. This also let us design our initial prototype with these features in mind, and get feedback on what to include and what to scrap. For example, the bulletin board was initially designed with a separate board for each Bucket; early feedback led us to the single main board that became the centerpiece of the final design.
