Paper Prototyping

Prototype photos:

The photographs below show the first iteration of the paper prototype. The prototype aims to test the interface for the following:

From the perspective of Arjun (a parent back home who wants an immersive visual experience of his son's stories):

  • ability to understand the interface in order to use it for making/receiving calls
  • intuitively switch between different modes of location exploration
  • view/post comments
  • view shared tweets/posts/images, etc.

From the perspective of Raj (a professional who wants to communicate his experience to his parents):

  • ability to understand the interface in order to use it for making/receiving calls
  • view/post comments
  • ability to guide parents in exploring location
  • ability to share tweets/posts/images

Main Screen

  • Raj sees all his contacts with photos
  • Hovering (or clicking the arrow) reveals call/text options
  • Clicking on a photo starts a new call by default


Screen showing an incoming call (for Arjun).

  • Name/Profile Photo shown (Raj is calling)
  • Ability to accept/decline

Conversation Starts (In conversation View):

  • In conversation
  • Raj starts talking about the Charles River near MIT
  • Raj shares the location with Arjun
  • Arjun gets this prompt

Interface for viewing the location.

  • Arjun has clicked OK and is viewing this location
  • This is an interactive 3-D view (see Navigation Control below)
  • Arjun starts navigating by changing the camera angle, height, etc.
  • He also sees comments (if Raj has added any)

Interface for adding a comment to a location.

  • Raj wants to describe some specific things
  • He can add comments and pin them to points of interest (like the entrance to MIT).
  • When Arjun views/explores/navigates the location, he sees them.
  • Arjun can add comments too.

Interface for viewing the notes.

  • Arjun sees all the notes whenever he navigates the location
    • (in real time as well as offline)

Interface with multiple notes.

  • This is what Arjun sees if there are multiple comments
  • A '#' on a sticky note serves as an indicator of comments
  • It can be expanded by clicking

Screen for viewing the tweets and links to the available
articles and maps.

Note: This window is like the Facebook news feed, where Arjun sees tweets
from all the people he follows. The feed can be filtered by entering a
person's name.

  • Raj has connected his Twitter account with Teleport
  • Arjun can now see all of Raj's tweets
  • One of Raj's tweets is about Yosemite National Park
  • The interface automatically asks Arjun
    • whether he wants to see pictures of Yosemite National Park
    • or articles related to this location
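
The feed filter and location suggestion described above can be sketched roughly as follows. This is an illustrative sketch, not the prototype's actual code: `filter_feed`, `suggest_media`, and `KNOWN_LOCATIONS` are hypothetical names, and a real system would detect locations far more robustly than simple substring matching.

```python
# Hypothetical sketch of the tweet feed filter and the automatic
# location suggestion; all names here are illustrative assumptions.

KNOWN_LOCATIONS = {"Yosemite National Park", "Charles River"}

def filter_feed(tweets, person=None):
    """Return the feed, optionally narrowed to one person's tweets."""
    if person is None:
        return list(tweets)
    return [t for t in tweets if t["author"] == person]

def suggest_media(tweet):
    """If a tweet mentions a known location, offer pictures/articles."""
    for place in KNOWN_LOCATIONS:
        if place.lower() in tweet["text"].lower():
            return {"location": place, "options": ["pictures", "articles"]}
    return None
```

In this sketch, a tweet mentioning "Yosemite National Park" would trigger a suggestion offering pictures or related articles, mirroring the prompt Arjun sees.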

Safety prompt window asking for a confirmation to change the viewing
mode.

  • Arjun has clicked on pictures
  • A prompt window ensures it wasn't an accidental click
    • because it will change the mode
    • the user might get lost if we didn't ask for confirmation.
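
The confirmation-before-mode-change behavior can be expressed as a small guard. This is a minimal sketch, assuming the UI layer supplies a `confirm` callback; the function and mode names are hypothetical, not taken from the prototype.

```python
# Minimal sketch of the mode-change safety prompt; `confirm` is a
# UI-supplied callback (hypothetical) that returns True if the user
# accepts the prompt.

def request_mode_change(current_mode, new_mode, confirm):
    """Switch modes only after the user confirms, guarding against
    accidental clicks that would otherwise disorient the user."""
    if new_mode == current_mode:
        return current_mode  # nothing to confirm
    if confirm(f"Switch from {current_mode} to {new_mode} view?"):
        return new_mode
    return current_mode
```

Declining the prompt leaves the user in the current mode, which is exactly the "safety" behavior the prompt window is meant to provide.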

Image viewer.

  • An efficient interface for picture navigation
  • a thumbnail preview strip below
  • clicking a thumbnail shows the image in the main window

Briefing:

A general briefing:

Thank you for participating and taking your time to test our prototype.

This is a very early prototype, and the goal is to evaluate whether our design is easy to use and intuitive. We will ask you to do some basic tasks using our system, which will really help us understand usability and design
issues. At any point, please feel free to tell us about anything you find confusing or difficult to understand, as that will really help us improve the design.

Please think aloud while performing the tasks so that we can understand any assumptions, issues, and inconsistencies with our design.

Thanks again for participating! :)

Briefing for 1st set of users (Arjun's perspective):

We are trying to design an application (similar to Skype) for parents living back home so that when they talk with their children, they can get an immersive visual experience of the stories their loved ones tell.

  • You will act as a 40-year old father named Arjun.
  • Your son is Raj, a college student at MIT in Cambridge, MA.
  • You are to talk to Raj and would really like to visually recreate the stories that Raj tells you.
  • You will be using your home desktop computer to talk to Raj.

Briefing for 2nd set of users (Raj's perspective):

We are trying to design an application (similar to Skype) that allows people like us to share stories with our parents living back home in such a way that they get an immersive visual experience of the stories we tell.

  • You will act as a 20-year old student studying at MIT.
  • Your father is Arjun living in India.
  • You are to talk to Arjun and would really like to communicate your experience of living at MIT.
  • You will be using your home desktop computer to talk to Arjun.

Scenario/Tasks:

1st set of users (Arjun's perspective)

Scenario

You are Arjun! Assume you have a son, Raj, who will call you in a moment. While talking, he will tell you a story about what he did and will share with you a location, tweets, and photos.

Tasks:

  1. Raj is calling -- receive the call from Raj
  2. Raj is sharing a location -- please accept
  3. Try navigating the shared location 
    1. walk ahead
    2. turn left
    3. change camera angle to top view
    4. lower the height of the camera to see more closely
  4. There are a bunch of notes Raj has put on the location
    1. open the note
    2. put a comment
  5. Check if Raj has put some tweets
  6. Read tweets
  7. Check if Raj has put some images
  8. Browse the image gallery

2nd set of users (Raj's perspective)

Scenario

You are Raj! You will call your father Arjun and share with him your experience of your morning walk along the Charles River.

Note: Since our focus is primarily on Arjun, we did not include the more complicated tasks for Raj, such as connecting to Twitter. For Raj, we only tested making calls, sharing locations, adding notes/comments, and replying to a comment.

Tasks:

  1. Make a call to Arjun
  2. Share Charles River
  3. Observe what Arjun is doing 
    1. see Arjun navigate
  4. Arjun is zooming on the boats
    1. put a note about the boats
  5. Arjun has commented on your note
    1. put a comment back to Arjun

User testing observations (Round 1):

User 1 (Acting as Arjun):

Observations
  • User ignored panning and altitude controls. (visibility issue -- learnability)
  • Navigate icon was mistaken for a button. (consistency issue -- learnability)
  • When looking through tweets the user tried clicking on the text to view additional information rather than using the scroll.
  • User ignored aggregation panels: map, articles.
Feedback
  • Was not obvious how to exit a mode.
  • Found the voice transcription natural in the context of the application, though the user generally dislikes transcription.
  • Got irritated with the pop up window.
  • Suggested that direct manipulation with a mouse would seem more natural than using the provided controls.

User 2 (Acting as Arjun):

Observations
  • Likes exploring the interface, testing different navigation controls.
  • Had difficulty finding how to move from the image viewing mode to reading tweets mode. (learnability issue)
  • Does not move directly to address the task but takes extra steps to explore the interface. For instance instead of directly viewing the location, goes to the tweets tab first. (efficiency issue)
Feedback
  • Suggested it would be easier if mode change buttons were located within the main active window, not only from the bar above.
  • No option to decline an invitation to view a comment. Interface becomes overcrowded with small windows.

User 3 (Acting as Arjun):

Observations
  • Confuses navigate symbol for a button (bad internal consistency).
  • Likes navigating, using different control buttons.
  • No feedback that a message has been saved. When adding a comment, the user was concerned about whether it was saved or not. (system status not visible)
  • Easily finds exit buttons.
Feedback
  • Wished there were smoother transitions between top-panel and bottom-panel navigation.
    For instance, when in street view, bottom-panel navigation seems more natural.
  • When hitting exit would like to see a confirmation that the comments added have been saved. (safety)
  • Suggested that dragging with a mouse when exploring the street view would feel more natural than using some of the buttons we provided. (efficiency?)

User 4 (Acting as Raj):

Observations
  • Since tasks were simple, very few surprises
  • No notification that the note has been saved or that Arjun saw it.
  • No notification that Arjun saw the comment.
Feedback
  • Need notification with comments/notes

Feedback summary (What we learned)

Inconsistency with expectations:

  • Users ignored panning and altitude controls.  (learnability issue)
  • Some description elements mistaken for buttons. (learnability issue)
  • Modal navigation in top panel not clear. (learnability issue)
  • Users ignored aggregation panels: map, articles. (learnability issue)
  • No feedback that message has been saved. (safety issue)
  • No feedback on whether Arjun read the note or not (design issue - missing feature)

Consistency with expectations:

  • Users found it easy to receive and accept calls. (efficient + learnable) 
  • Took the time to explore locations in the google earth mode. (engaging)
  • Gave positive feedback about being able to interactively view the location, while talking. (usability validated)

Prototype iteration:

When testing the first prototype, which had some navigation links in the top panel and some in the bottom panel, users found this split confusing. (learnability issue)

Our second prototype aims to:

  • provide a consistent navigation layout for all three exploration modes: Google Earth view, images, and tweets;
  • fix internal inconsistency issues between labels and buttons;
  • add safety pop-up windows for saving messages and exiting modes;
  • provide feedback: notes saved, plus a "seen" indicator when a message is read on the other end;
  • minimise the number of icons and controls.

Although we did not encounter issues with the interface being overcrowded, knowing that our target user group is mainly elderly, we want to maximise ease of navigation by minimising cognitive load.

Changes incorporated before the next round of testing:

  • Labels look significantly different from buttons.
  • A high visibility save button added to the type comments window for saving input.
  • A pop-up window asking "Would you like to save your comment?" appears when user attempts to exit the comments mode.
  • Bottom panel controls for switching between modes are added to the main active window.
  • The left panel showing contacts and history is collapsed when user enters an exploration mode. The panel can be expanded by clicking an icon in the top left corner. The motivation behind this addition is to reduce cognitive load and maximise exploration area. 
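
The save-on-exit change above can be sketched as a small flow. This is a hedged illustration, not the prototype's implementation; `exit_comments_mode`, `confirm`, and `save` are hypothetical names standing in for the UI's prompt and storage hooks.

```python
# Illustrative sketch of the "Would you like to save your comment?"
# pop-up shown when leaving the comments mode with an unsaved draft.
# All names are hypothetical.

def exit_comments_mode(draft, confirm, save):
    """On exit, offer to save an unsaved draft and report what happened."""
    if not draft:
        return "nothing-to-save"   # no pop-up needed
    if confirm("Would you like to save your comment?"):
        save(draft)
        return "saved"
    return "discarded"
```

Returning an explicit status also makes it easy to show the "comments saved" confirmation that Round 1 users asked for.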

User testing observations (Round 2):

User 1 (Acting as Arjun):

Observations
  • The user got confused between camera controls and navigation controls, and had some difficulty deciding between them for the first few minutes.
  • Even after the first few minutes, he repeatedly used the Up arrow to move forward instead of 'W'.
  • He was very good with navigating through images.
Feedback
  • Loved the idea of communicating a location in an interactive way
  • Found it hard to remember 8 different controls (4 for camera, 4 for navigation)

User 2 (Acting as Arjun):

Observations
  • Tried smartphone-like zooming (pinching); we had to remind him that it's a web-based application
  • an 'aha' moment -- "this is f****ing cool! can I save this location view and forward it to my friends?"
Feedback
  • Really liked the idea and the design
  • He said he would definitely use it if something like this was available

User 3 (Acting as Raj):

Observations
  • simple tasks, smoothly done
  • He asked about tagging images (we gave him a hint that he could put "a note", but he emphasized the word "tagging")
Feedback
  • He asked about email subscriptions: could he subscribe to get an email notification when Arjun comments on his notes?

What we learned from Testing Round 2

Issues:

  • Too many controls if the user relies only on the keyboard: 4 keys for the camera + 4 keys for movement. (How can we fix this?)
  • Tagging was confused with notes/comments on images. (learnability/consistency issue)
  • Some additional feature requests, like email subscription.
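
One possible answer to "how can we fix this?" is to alias the arrow keys to the same movement actions as WASD, so users like Round 2's User 1 (who kept pressing Up arrow instead of 'W') only need to remember one scheme. A minimal sketch, with assumed key names and action labels:

```python
# Sketch of a unified key map: arrow keys and WASD trigger the same
# movement actions. Key names and action labels are illustrative.

MOVEMENT = {"w": "forward", "s": "back", "a": "left", "d": "right"}
ARROW_ALIASES = {"Up": "w", "Down": "s", "Left": "a", "Right": "d"}

def key_to_action(key):
    """Resolve a key press to a movement action, accepting either scheme."""
    key = ARROW_ALIASES.get(key, key)  # arrows fall through to WASD
    return MOVEMENT.get(key)           # None for unbound keys
```

Camera controls could then move to mouse dragging (as several users suggested), shrinking the set of keyboard controls the user must memorise.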

Good Feedback:

  • Our fixes after iteration one have definitely improved the user experience a lot.
  • We continuously heard that users find the system very engaging.
  • No labels were mistaken for buttons this time.

1 Comment

  1. Prototype: Great prototype! I appreciate the explanations with each panel; just make sure this functionality is apparent to your users.
    Briefing & scenario tasks: I like how you separated your briefing/tasks based on the user group!
    Overall: Well done!!