Prototype Photos

Prototype A

The first screen displays a list of layouts that currently exist in the system.

Once the user has selected Quad, the main quad layout screen is shown. The users were briefed that they were working with a quadcopter, so this was a natural progression.

When the user clicks either the Distance to Wall graph or the Laser Scan channel, the Distance to Wall chart is presented. Generally, we found that users had difficulty with this screen, and in particular with the drop-down menus.

Distance to ground.

Charts


Note how the Position charts cut off at approximately t = 12. This shows that there was a failure in the positioning system at this time (discovering this is the goal of Task 1).

Layout creation screen.

Layout placement screen.


The publishing-on-a-channel interface. This page received positive feedback and a number of interesting comments on how the data-entry spinners worked.


Prototype B

We used a small notebook as a physical prop to represent an Android phone.

We taped a paper version of the phone to it and swapped small sheets of paper in and out. Here, the home screen is shown.

If a user selects any of the channels, the screen slides left and one of the four screens takes its place (depending on which channel was selected).

On the new screen, the user sees thumbnail plots of the channel's components on the left, along with some statistics about each component (mean, variance, last value).
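
As a rough sketch of how those statistics might be kept up to date as messages arrive, here is a minimal example in Java (the platform we target). The class name ComponentStats and the use of Welford's online algorithm are our own illustrative choices, not part of the prototype itself.

```java
// Hedged sketch of the per-component statistics (mean, variance, last
// value) shown in the thumbnails. Samples are folded in one at a time
// using Welford's online algorithm, so the whole message history never
// needs to be stored. ComponentStats is an illustrative name.
class ComponentStats {
    private long n = 0;
    private double mean = 0.0;
    private double m2 = 0.0;      // running sum of squared deviations
    private double last = Double.NaN;

    /** Fold one new sample from the channel into the statistics. */
    void add(double value) {
        last = value;
        n++;
        double delta = value - mean;
        mean += delta / n;
        m2 += delta * (value - mean);
    }

    double mean()      { return mean; }
    double variance()  { return n > 1 ? m2 / (n - 1) : 0.0; }
    double lastValue() { return last; }
}
```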

From the main screen, users can also tap the "Custom Plot" button, which again slides the screen left and lets them create a custom plot. They can set the custom plot's name (a name is suggested automatically unless they change it) and select components from different channels. Each component appears with its own color and connector line, and the selections are shown immediately in the plot.

Another feature we decided to add was the ability to publish messages on a channel. Once a specific channel is open, the user can tap the Publish button next to the channel name, and again a sliding animation presents a new screen from the right.

This screen shows the specific channel on which the user wants to publish, and offers a starting template (since the values in a channel may look like arbitrary real numbers from a human perspective). There are two types of starting templates (the last message on the channel and the last message the user published), covering two types of use. The values are set via text fields using the on-screen keyboard.


Briefing

Hello,

Our project is a robotic system monitor on a mobile device (we chose Android).

Every robot is, in some sense, a computer that runs multiple processes such as sensors, logic units, and actuators. These processes usually communicate with each other via a shared medium. In the specific types of systems we consider, communication happens on different channels, and every process can decide whether to publish or subscribe to any channel. Messages are sent on every channel, where each message is a certain data structure that differs from channel to channel.
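
To make the channel model concrete, here is a minimal sketch in Java (the platform we chose). All names (Subscriber, PositionMessage, Channel) are hypothetical illustrations of the publish/subscribe idea, not the API of any real robot middleware.

```java
// Hedged sketch of the channel model described above. Each message type
// is specific to its channel; names here are illustrative only.
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

/** A process that wants to observe a channel implements this. */
interface Subscriber<T> {
    void onMessage(T message);
}

/** A position channel might carry (t, x, y) samples, for example. */
class PositionMessage {
    final double t, x, y;
    PositionMessage(double t, double x, double y) {
        this.t = t; this.x = x; this.y = y;
    }
}

/** A named channel: any process may publish on it, and any number of
 *  subscribers (such as our monitor) may observe it. */
class Channel<T> {
    private final String name;
    private final List<Subscriber<T>> subscribers = new CopyOnWriteArrayList<>();

    Channel(String name) { this.name = name; }

    String name() { return name; }

    void subscribe(Subscriber<T> s) { subscribers.add(s); }

    void publish(T message) {
        for (Subscriber<T> s : subscribers) {
            s.onMessage(message);
        }
    }
}

class Demo {
    public static void main(String[] args) {
        Channel<PositionMessage> position = new Channel<>("POSITION");
        // The monitor subscribes; in the real app it would plot samples.
        position.subscribe(m -> System.out.printf(
                "%s: t=%.1f x=%.2f y=%.2f%n", position.name(), m.t, m.x, m.y));
        // A robot process publishes a sample, which the monitor observes.
        position.publish(new PositionMessage(0.0, 1.0, 2.0));
    }
}
```

In the real system the shared medium would be the network rather than in-process calls, but the two operations our interface relies on, subscribing to observe and publishing to alter state, are the same.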

The purpose of our interface is to allow the user to observe the data flowing on these channels, as well as to publish messages in order to alter the robot's state.

Scenario

We sketched a small map to help explain to users the scenario we were trying to introduce them to. We found that spending a minute or two with this map (and moving the "robot" around on it) helped our users provide us with better feedback.

Tasks

Task 1: Debugging the robot

Use the interface to find what is wrong with the robot.
(Successful completion of this task means that the user has found the plot of the position channel, which shows a sharp discontinuity representing the bug.)

Task 2: Plot multiple graphs on the same plot

Combine all the position channel fields into one figure.

Task 3: Publish a message on a channel

Publish a new waypoint at x=10.5, y=0


Observations

User 1 (Wednesday)

(prototype A)

Our first user was confused about the background storyline and only completed the first task. We learned that a more in-depth explanation would be necessary for our future tests.

This user also had difficulty understanding what the drop-down menus were supposed to do and what their options meant.

Finally, when using several drop-down menus with dependencies (i.e., setting the first one changes the options in the second), we must ensure that there are cues to show this connection.

User 2 (Wednesday)

(prototype B)

The user understood the domain description much better, but had some difficulty determining whether a plot was "normal" or "buggy." We learned that the visualizations will need to present the data so well that the user can assess at first glance whether each channel is buggy.

The user also understood the affordances in the screens for tasks 2 and 3, which was encouraging for prototype B, and used the correct navigation paths through the interface almost immediately. He didn't seem to notice many of the interface elements that were not directly on the task-completion path.

User 3 (Friday)

(modified prototype B)

We felt this user was closest to the potential users of the application because she understood the semantics of the scenario. The first thing we learned from observing her was that she tried to make a plot from a channel's info page. It is intuitive to expect that one can make a plot from the channel's information screen, but we didn't have this in our prototype; we plan to include it in further iterations.

In addition, when making the plot, she was confused about the difference between pressing OK and Cancel. We didn't have a very intuitive answer: the new plot should somehow appear on the main screen. This is an aspect of the interface we need to polish.

Another comment from the user was that a generic "Publish" ability would have been useful, instead of having to go into a channel first. We think this is a good suggestion and will work toward adding this functionality.

In this version, we modified the prototype by substituting the publishing interface from prototype B with the publishing interface from prototype A. Our user was able to understand the A interface. We also asked her whether she thought a text field would be better, and through discussion we reached the conclusion that perhaps the best interface would combine a text field with a push-scrolling capability (like a toy-car remote-control button).

User 4 (Friday)

(prototype A)

We decided to test prototype A again with another user, which helped us learn more about the interface's deficiencies.

First, it was clear that the layout screen on paper lacked the affordances we wanted to provide.

Next, we realized that the multi-channel presentation wasn't intuitive to the user. She would start changing a certain component of a channel, and when she changed the channel, the other two drop-down menus changed as well. There were dependencies between the drop-down menus that were not highlighted properly in the interface. In addition, the single plot at the bottom gave no information about the other channels; this was a clear visibility problem.

Prototype iteration

We chose to start with two prototypes, derived from GR2. We began with our two most diverse designs and created a paper prototype based on each of them. Each of us chose to implement the prototype proposed by the other team member. In this way, we understood each other's ideas better and were able to avoid emotional attachment to specific features in the designs. After the tests on Wednesday, we swapped certain parts between prototypes A and B to compare the differences.

Our next prototype will merge the ideas from each prototype that the in-class testing showed to work.

Discussion

We are targeting a narrow user base (robotics researchers and UROPs), so one of our biggest hurdles at this stage was grounding the test users in the task domain. In order to operate our interface, a user needs a certain background that cannot be fully and deeply explained in 5 minutes.

Moreover, we discovered that it is difficult to create some affordances in low-fidelity paper prototypes. Good mobile interfaces rely on such cues and techniques in order to stand out. In short, paper prototyping is useful for brainstorming design ideas for mobile interfaces, but we believe that the gap between a paper prototype and a pixel-perfect prototype is larger than on more traditional devices.


2 Comments

    • Excellent photographs and discussion of the prototype itself
    • Interesting choice to use a more or less to-scale prototype instead of a larger one. I think it makes sense, since the actual interface has to be "finger-friendly" anyway
    • Good effort in trying to develop a briefing to educate your testers
    • You were expected to test 6 users; I only see comments for 4
    1. Thank you for the positive comments. We wish we had tested more subjects, but we were only able to complete 4 during class. Our briefing took significant time, but in return we got very detailed feedback and comments from our subjects, and we are happy with the information we have gathered so far. We will be looking to do more testing with the computer prototype.