Prototype iteration Round 1

This is the original menu for Prototype Round 1. Most people, despite being given the arrow, were confused about what to do with it. They tried clicking on the items in the menu rather than moving the wheel/Lazy Susan affordance.

Prototype iteration Round 2 (this round has a more detailed set of pictures)

The main loading page for both iterations, which appears when people open up menu.io. The rotating circle appears until all the nearby restaurants are loaded.

 

The page lists all the nearby restaurants, with the type of food they offer underneath and their distance from the user. The user can click Flour Cafe to open up the relevant menu as described in Task 1.
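If this flow were built as a simple web front end, a minimal sketch in TypeScript might look like the following; the /api/restaurants endpoint, the field names, and the restaurant-list element are hypothetical, introduced only for illustration.

```typescript
// Sketch of the loading flow: show the spinner, fetch nearby restaurants,
// then list each one with its cuisine and distance. The endpoint, field
// names, and element ID below are hypothetical, not part of the prototype.
interface Restaurant {
  name: string;        // e.g. "Flour Cafe"
  cuisine: string;     // type of food shown under the name
  distanceKm: number;  // distance from the user
}

async function loadNearbyRestaurants(lat: number, lon: number): Promise<Restaurant[]> {
  // The rotating circle stays visible until this promise resolves.
  const res = await fetch(`/api/restaurants?lat=${lat}&lon=${lon}`);
  return res.json();
}

function renderRestaurantList(restaurants: Restaurant[], onSelect: (r: Restaurant) => void): void {
  const list = document.getElementById("restaurant-list")!;
  list.innerHTML = "";
  for (const r of restaurants) {
    const item = document.createElement("li");
    item.textContent = `${r.name} (${r.cuisine}, ${r.distanceKm.toFixed(1)} km away)`;
    // Clicking a restaurant opens its menu, as in Task 1.
    item.addEventListener("click", () => onSelect(r));
    list.appendChild(item);
  }
}
```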

The main menu shows the current selection of items at the top, and the current selection of Drinks (with a summary attached).

Some scrolling along the top brings the user to Sandwiches, which they can click on to perform Task 2.

The unfiltered submenu shows the list of available sandwiches, with chicken currently selected and its details shown (the details were not included in the paper prototype).

Applying both filters, as in Task 3, lets users see a limited set of options. Filters can be applied by clicking a tag and reversed by clicking the X next to the tag, which produces a different set of options.

Removing the gluten-free filter (Task 4) and scrolling through the menu brings the user to the portabello mushroom sandwich to read its details (Task 5).
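As a rough illustration of the filtering behavior described above (clicking a tag applies it, clicking the X removes it), here is a minimal TypeScript sketch; the sandwich names and tag sets are invented for the example rather than taken from the prototype.

```typescript
// Sketch of tag-based filtering: an item stays visible only if it carries
// every active tag. The sandwich names and tag sets are invented examples.
interface MenuItem {
  name: string;
  tags: string[]; // e.g. ["vegetarian", "gluten-free"]
}

const activeFilters = new Set<string>();

// Clicking a tag applies it; clicking the X next to an applied tag removes it.
function applyFilter(tag: string): void {
  activeFilters.add(tag);
}

function removeFilter(tag: string): void {
  activeFilters.delete(tag);
}

// Only items matching all active filters remain in the submenu.
function visibleItems(menu: MenuItem[]): MenuItem[] {
  return menu.filter(item =>
    Array.from(activeFilters).every(tag => item.tags.includes(tag))
  );
}

// Tasks 3-5: apply both filters, then drop gluten-free and look for the
// portabello mushroom sandwich.
const sandwiches: MenuItem[] = [
  { name: "Chicken", tags: [] },
  { name: "Portabello Mushroom", tags: ["vegetarian"] },
  { name: "Caprese", tags: ["vegetarian", "gluten-free"] },
];
applyFilter("vegetarian");
applyFilter("gluten-free");
console.log(visibleItems(sandwiches).map(i => i.name)); // ["Caprese"]
removeFilter("gluten-free");
console.log(visibleItems(sandwiches).map(i => i.name)); // ["Portabello Mushroom", "Caprese"]
```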

 

Welcome to menu.io! We are innovating in the domain of restaurants, specifically menus, trying to address their lack of interactivity and dynamic content. We'll be taking you through one use scenario of our app's basic functionality. Remember that this test isn't testing you, the user, but rather the interface and its usability. Feel free to think out loud so we know how you feel about each element of the interface. You'll have the opportunity to ask questions and give us higher-level feedback when we're done. Thanks!

We gave our users tasks similar to those given in our scenario from GR2.

Task 1
You have just arrived at your favorite restaurant, Flour Cafe, and want to see the menu. Open up the app and select Flour Cafe.

Task 2
You want to eat a sandwich. Open up the menu that only shows sandwich items.

Task 3
Apply filters to the menu so that you only see vegetarian and gluten-free items.

Task 4
You realize that you are not actually interested in gluten-free items. Remove the gluten-free filter.

Task 5
Find the portabello mushroom sandwich and read its ingredients.

User 1

  • Waited carefully on the loading screen
  • Picked Flour Cafe correctly, did not wait much for other options
  • Instead of spinning the design wheel, clicked the food item and expected it to change
  • Was confused, but then realized spinning would make it change
  • Played around with the spinning for a bit, but more out of confusion than exploration
  • Expected the food item to go to the middle when clicked
  • Moved the tags to the center with some hesitation; was surprised when the tag moved to the top of the page.
  • Got the gluten-free filter correct
  • Removed the tags with ease, and once the wheel was learned, could scroll to read the ingredients
  • Passed all the tasks

User 2

  • Waited appropriately for the loading
  • Liked the big affordance of the spinner for loading
  • Got the task for the GPS error correct (we simulated a GPS error here) by typing "Flour" into the keyboard
  • Liked the wheel aesthetically as a metaphor
  • Found the wheel confusing - was not sure what to do with it.
  • Eventually, realized that turning the wheel could result in the correct item appearing.
  • Was not sure what the center does (do tags have to be dragged there, should the food item change, or both?)
  • Liked the ability of filters to adjust food preferences
  • Removed tags quickly and found the portabello mushroom
  • Passed all the tasks

User 3

  • Did not understand the general spinning/loading screen
  • Could not figure out the type of restaurant
  • Dragged the tags naturally - the first user to easily do so
  • Was surprised by the move of the tags to the top
  • Was able to remove filter successfully
  • Was not sure about the wheel click, but was certain of the wheel selection because of the provided down arrow
  • Was the fastest of all 3 users, though it looked like he was waiting for some more detail to pop up
  • Passed all the tasks

For our first iteration, we used our riskiest prototype. We felt that the scroll wheel would be a fun feature for users to play with, and we liked the fact that it would be relatively novel in our user interface. We also liked the idea of a metaphor to the Lazy Susan on a real dinner table. We knew it was a risk, but we wanted to see whether users would be able to figure it out easily. However, each user during our first iteration told us the same thing: the scroll wheel was confusing. The first user said that it was hard to tell that it was a scroll wheel at all, and we figured that was because of the low fidelity of our paper prototype. We tried adding an arrow at the top of the wheel to indicate that it was meant to scroll. Even though this helped users recognize it, they mentioned that it would be frustrating to use on a tiny cell phone because the border icons would be crowded. We ultimately decided that it wasn't worth making a higher-fidelity (i.e. computer) prototype to see whether users could recognize the scroll wheel feature.

Because users liked having the ability to see a larger view of their dishes while still being able to scroll through them, for our second iteration we moved to the grid-display design. Even within this design, we made some changes. Users found it confusing that the large display reflected the middle icon in the scroll wheel, so we added a bubble and arrow from the middle icon to better show this. After this, the grid-display design was very well received.

For our first prototype, we used drag-and-drop tags to filter the menu. We felt this was also a risk because drag-and-drop filtering isn't widely used in other interfaces. Unlike the scroll wheel, however, users appreciated this feature. We made several changes throughout our testing. In our first iteration, we had the tags jump from one part of the screen to another when the user clicked or dragged the tag. Most users just clicked the tags; most did not know whether to drag them or not. We realized that we could better utilize our screen space since users didn't expect the tags to move. For our second iteration, we tried leaving the tags in the same spot, adding an x next to the tags in use, and telling the users that the tags changed color. Users found this confusing, so we decided that it would be better to have the tags move to the top of the screen. We also originally had a tooltip to inform the user that the tags were for filtering, but users in the first iteration told us that this was pretty intuitive, so we removed the tooltip for the second iteration.

The GPS feature was well received by every user. We did one trial with a user where the GPS “glitched” and the user was forced to enter their restaurant into the keypad. This too was received well. We didn’t feel a need to change the GPS feature from one iteration to the next.
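For reference, a minimal TypeScript sketch of how this GPS-with-fallback flow could work in a browser is shown below, using the standard Geolocation API; the callback names and the manual-entry wiring are assumptions for illustration.

```typescript
// Sketch of the GPS flow with a manual fallback: try to get the user's
// position; if geolocation fails (the simulated "glitch"), show a text
// field so the user can type the restaurant name instead (e.g. "Flour").
function locateOrAskUser(
  onLocation: (lat: number, lon: number) => void,
  onManualSearch: (query: string) => void
): void {
  if (!navigator.geolocation) {
    showManualEntry(onManualSearch);
    return;
  }
  navigator.geolocation.getCurrentPosition(
    pos => onLocation(pos.coords.latitude, pos.coords.longitude),
    () => showManualEntry(onManualSearch) // GPS error: fall back to typing
  );
}

function showManualEntry(onManualSearch: (query: string) => void): void {
  const input = document.createElement("input");
  input.placeholder = "Enter a restaurant name";
  input.addEventListener("change", () => onManualSearch(input.value));
  document.body.appendChild(input);
}
```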

User 1

  • Wanted to see the loading progress, rather than just a wheel
  • Picked Flour Cafe correctly
  • Got the scrolling on the food items correct
  • Was initially confused about the submenu, but then the large affordance of the item selection made it possible to find the sandwich correctly
  • Thought the flow was good, but felt the arrows were a bit weird at first
  • Passed all tasks

User 2

  • Selected Flour Cafe correctly
  • A less reactive user
  • Expected a specific list of items to be displayed when clicking on a particular submenu, rather than that submenu opening
  • Correct selection on food
  • Was confused about the particular submenu
  • Felt that interface was simple (in a good way)
  • Wanted a better affordance for what was applied and what was not

User 3

  • A slower user, browsed thoughtfully
  • Wanted to explore
  • Felt that a location with a lot of restaurants might be harder to use and take longer to load
  • Wanted one page with all the items, felt it could be hard to remember selections
  • Was confused about how to rotate through the menu initially, but got the hang of it after he looked at the arrows
  • Wanted price to appear more explicitly in the interface
  • Passed all the tasks

Initially, we were unsure whether our paper prototypes should be higher fidelity. However, we encountered a couple of groups with higher-fidelity prototypes (when testing their prototypes), and it was genuinely hard for us to give much feedback because we felt they had put a lot of work into carefully adjusting certain aspects of their prototypes. It was cool to see a finding from lecture applied in real time.

After the second round, we also asked ourselves several questions, based on general feedback we received outside of the tasks, that we might want to address in our next round of prototyping:

  • Should we use a mixed grid/scrolling layout, so that people can see what they want more easily?
  • Should our menu be ordered by course - just as it is in many restaurants - with appetizers first, the main course second, and dessert third? This might follow a natural progression.
  • Should we have some kind of record feature to allow people to store items they liked so that they can come back to them (or would it burden us too much and distract from the main tasks)?
  • Is price a consideration - should we include some kind of price adder (or would it again burden us too much)?
  • How else can we optimize viewing given the limited screen space? Would perhaps the natural affordance of a grid work better? From our original designs, we used the two riskiest ones for the prototyping rounds. The second one was better received, but perhaps a grid is a more familiar affordance, and users might have an easier time exploring?
  • How can we include more pictures so that users can zoom in to actually view a dish? Picture-based viewing might be more intuitive.
  • What are other details users want to view for a particular dish?

The two rounds of prototyping definitely provided valuable feedback on what worked and what did not.
