Design

Login and Account Creation


The user begins by arriving at the login screen. We aimed for simplicity in this screen: there are a login tab and a create account tab, each of which contains labeled fields. The create account tab has features designed for fast error detection: a message is displayed if the username is already taken, immediately as the user is typing, and likewise, if the repeated password doesn't match the original, a message appears as soon as the user finishes typing (as opposed to only making these errors visible once the user clicks the create account button). During user testing, User 2 noticed the immediate detection of the mistyped password and commented that it was a useful feature. Additionally, both the login and create account tabs allow the keyboard to be used for efficiency: the user can tab through the input fields and press the Enter key instead of needing to click on the widget. During testing, Users 1 and 2 both created their accounts by pressing the Enter key instead of clicking the widget, so this feature turned out to be useful.
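
As an illustration, the repeated-password check amounts to a small event handler in the C# code-behind. The following is a minimal sketch only; the widget names (OriginalPasswordBox, RepeatPasswordBox, PasswordMismatchText) are hypothetical and not our exact code:

    // Requires: using System.Windows; using System.Windows.Controls;
    // Fires whenever the user edits the repeated-password field.
    private void RepeatPasswordBox_PasswordChanged(object sender, RoutedEventArgs e)
    {
        // Warn only once the user has typed something into the repeat field.
        bool mismatch = RepeatPasswordBox.Password.Length > 0
                        && RepeatPasswordBox.Password != OriginalPasswordBox.Password;
        PasswordMismatchText.Visibility = mismatch ? Visibility.Visible : Visibility.Collapsed;
    }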

Main Page

Once the user logs in, he is greeted with the main page, which is split into a left sidebar for displaying and managing the study focus and vocabulary, and a right half which displays sentences. We chose to have both of these displayed at once, as opposed to having the vocabulary management and sentence viewing components in separate tabs, based on the results of the first paper prototyping iteration, where users encountered visibility issues due to having the vocabulary hidden away in a tab, and were generally dissatisfied about having to frequently switch between the two tabs.

Left Side Page

Study Focus

The purpose of the study focus is to highlight the user-selected vocab word that will be used in each new example sentence generated. The design originally displayed the study focus as plain text showing the word that was currently the focus of all the example sentences. However, due to user complaints about both visibility and navigation control, the study focus display was changed to a drop-down selection, which keeps a history of all the previous study focuses. Additionally, the popped-out drop-down selection also improves information scent. Finally, the study focus word is color coded teal, which extends to the color coding of the study focus word in the example sentences.

Vocabulary Search

The purpose of the vocab selection screen is to group each vocabulary word into a "vocabulary block," which presents all of that word's properties. Several improvements were made to this design due to user complaints about visibility and efficiency. First, an alternating color scheme of white and light blue was chosen to mark contrast and boundaries between the different vocab blocks. This makes the overall design "less busy" because information is clearly chunked by color and separation. Additionally, the three elements of each vocab entry (PinYin, English, and Displayed) are clickable buttons that sort the vocabulary alphabetically by the selected element. In this picture, the vocabulary is sorted alphabetically by PinYin, as marked by the downward black arrow. In addition to assigning a vocab word as the study focus, users can also check the "May appear in sentences" option for familiar vocab words, changing their color coding in sentences to red (see Sentence Viewing).

Right Side Page 

Sentence Viewing

We chose buttons for displaying the vocab words, as opposed to underlines, because buttons appeared to indicate the clickability affordance best during user testing. Because some users had difficulty going back to previously viewed sentences during paper prototyping (where we initially displayed only one sentence at a time), we instead made all previously fetched sentences display in a scrollable list. However, during computer testing, users complained that once they had fetched a sentence, it wasn't possible to remove it from the list of displayed sentences, so we introduced a Close button to remove it from the list. However, because users in our own independent testing during GR5 were confused as to whether closing a sentence simply removed it from immediate view or prevented it from ever being fetched again, and because the error of accidentally closing a sentence was not easily recoverable, we introduced a "Closed Sentences" tab where closed sentences can be found and restored. Notice how in all the sentence blocks, the study focus word is highlighted in blue, the "checked" vocab words (see Vocabulary Search) are displayed in red, and non-selected words are in grey.

 

The "Currently Reading" tab gives users the option of fetching new sentences for the current study focus vocab (highlighted in blue both in the study focus category and in the current sentences). The yellow highlight greatly increases the locus of attention to certain warnings or instructions under the "fetch next sentence" button. Note that each sentence block is separated by alternating blue/white colors and the "Close" button for each sentence block. Clicking on the "Close" button will move the corresponding sentence block to the "Closed Sentences" tab, in a stack-like fashion (chose for better feedback).
The decision was made to allow all words in each sentence to have the affordance of a button and be clickable to display vocab information and manipulation options. The goal was to increase both the efficiency and learnability of the design, allowing users to find out information about vocab words with a simple click (learnability) and allowing users to efficiently make a vocab the study focus or "allow word to be displayed in sentences" (efficiency) without having to search for the word on the left hand side. 

The "Closed Sentences" tab serves almost like a recycling bin. Creating a place for users to dispose and restore sentence blocks. Additionally, to improve efficiency in terms of scalability, a search option is added to allow users to filter through closed sentences, by either PinYin, English, or Chinese. The reason a "Closed Sentences" tab was implemented, as explained above, is due to overwhelming reader request for efficiency and navigation. Although this increases the complexity of the design, it offers better visibility and user control since it allows the user to "clean up" the "Currently Reading" list of sentences. 

Sentence Contribution

The Sentence Contribution tab allows the user to enter a sentence; once submitted, the contributed sentence can be viewed below (and deleted). The reason we display contributed sentences below is that during our paper prototyping, where contributing a sentence was simply a modal popup dialog, users were unsure where the sentence they had contributed had gone, or how they could delete it from the database if they had accidentally typed the wrong thing.

Implementation

Our user interface runs in-browser using Silverlight, and is implemented using C#. It uses Silverlight's layout system to organize the widgets, and is therefore resizable.

One problem that we had to circumvent was the long draw time of the word search in the left sidebar. In spite of the clipping stage of the renderer, the sheer volume of off-canvas widgets that needed to be laid out (due to the large size of the lexicon) meant the GUI would block for long periods of time when using stock widgets and layouts. We therefore implemented an algorithm that avoids most of this off-screen work by computing the visible contents based on the scrollbar position and adding only those widgets to the layout, which alleviated this issue.
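
The following is a minimal sketch of that approach, with hypothetical names (allWords, WordScrollBar, WordListPanel, CreateWordBlock) and an assumed fixed row height; on every scroll event, only the rows that fall within the viewport are materialized as widgets:

    // Requires: using System; using System.Windows;
    private void WordScrollBar_ValueChanged(object sender, RoutedPropertyChangedEventArgs<double> e)
    {
        const double RowHeight = 24.0;   // assumed fixed height per vocab block
        int firstVisible = (int)(WordScrollBar.Value / RowHeight);
        int visibleCount = (int)(WordListPanel.ActualHeight / RowHeight) + 1;
        int last = Math.Min(firstVisible + visibleCount, allWords.Count);

        WordListPanel.Children.Clear();   // discard off-screen widgets
        for (int i = firstVisible; i < last; i++)
            WordListPanel.Children.Add(CreateWordBlock(allWords[i]));   // build only what is visible
    }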

With regard to our sentence segmentation code, we use a lexicon-based segmentation approach. Thus, if a user contributes a new sentence, then provided that its words are in the lexicon, the sentence will be properly segmented, and the sentence can be found by making any of the words it is segmented into the study focus. Because our lexicon is extremely large and comprehensive (several megabytes), the words in a sentence are usually found in the lexicon, so the sentence is properly segmented. Additionally, this means that when a user contributes a sentence, he doesn't need to segment the sentence himself or provide any tagging for the words - he simply inputs the sentence, and it is automatically segmented and annotated with its vocab words. However, this automatic lexicon-based segmentation and word extraction also prevents users from using out-of-lexicon terms in their contributed sentences (they can use them, but the sentence will not be annotated with that vocab word).
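
As an illustration, a common way to implement lexicon-based segmentation is greedy longest match; the following minimal sketch assumes that strategy (the write-up above does not pin down the exact matching rule), with lexicon and maxWordLength as illustrative parameters:

    // Requires: using System; using System.Collections.Generic;
    // Splits a sentence into lexicon words, preferring the longest match at each position;
    // characters covered by no lexicon entry fall out as single-character tokens.
    private static List<string> SegmentSentence(string sentence, HashSet<string> lexicon, int maxWordLength)
    {
        var words = new List<string>();
        int pos = 0;
        while (pos < sentence.Length)
        {
            int len = Math.Min(maxWordLength, sentence.Length - pos);
            while (len > 1 && !lexicon.Contains(sentence.Substring(pos, len)))
                len--;   // shrink the candidate until it matches a lexicon entry
            words.Add(sentence.Substring(pos, len));
            pos += len;
        }
        return words;
    }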

The server-side code, which stores login credentials and the study focus, study focus history, words that are currently allowed to be displayed, sentences that are currently displayed, and contributed sentences for each user, is implemented as a set of ASP.NET scripts which communicate with a MySQL database instance.
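
As a rough illustration of this architecture (not our exact code: the table, column, and parameter names are hypothetical, and the MySQL Connector/NET library is assumed), one such script might look like the following handler, which records the current study focus for a user:

    using System.Web;
    using MySql.Data.MySqlClient;   // MySQL Connector/NET, assumed available on the server

    public class SetStudyFocus : IHttpHandler
    {
        private const string ConnectionString = "...";   // placeholder; real value omitted

        public void ProcessRequest(HttpContext context)
        {
            string user = context.Request["user"];
            string word = context.Request["word"];
            using (var conn = new MySqlConnection(ConnectionString))
            {
                conn.Open();
                var cmd = new MySqlCommand(
                    "UPDATE users SET study_focus = @word WHERE username = @user", conn);
                cmd.Parameters.AddWithValue("@word", word);
                cmd.Parameters.AddWithValue("@user", user);
                cmd.ExecuteNonQuery();
            }
        }

        public bool IsReusable { get { return true; } }
    }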

One design decision we made to simplify implementation was that all information - namely, the list of sentences, the words, and their translations and romanizations - would be retrieved from the server at login time. This was meant to ensure that no latency from communicating with the server would be seen once the user has logged in and is using the main interface. However, one resulting usability issue was that login times tend to be long due to all the downloading taking place. We managed to alleviate this by having the shared portions of the data that all users need (for example, the word database) download in the background while the user is still entering his login credentials at the login screen. Additionally, we have a loading screen with an animated progress bar to provide feedback to the user while the page is loading. Another issue resulting from our decision to download all data from the server at login time is that if another user contributes a sentence, the sentence won't be seen until the next time the user logs in.
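
A minimal sketch of the background pre-fetch, assuming the shared word database is exposed at a hypothetical relative URL and using Silverlight's asynchronous WebClient (the field and handler names are illustrative):

    // Requires: using System; using System.Net;
    // Called as soon as the login screen is shown, so the download overlaps with the user typing.
    private void BeginWordDatabaseDownload()
    {
        var client = new WebClient();
        client.DownloadStringCompleted += (s, e) =>
        {
            if (e.Error == null)
                cachedWordDatabase = e.Result;   // parsed later, once the user has logged in
        };
        client.DownloadStringAsync(new Uri("WordDatabase.ashx", UriKind.Relative));
    }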

Another design decision we made is that changes made by the user (the study focus history, the words that are allowed to be displayed, the lists of displayed sentences, and the contributed sentences) are sent immediately to the server. This was done to ensure that the user doesn't have to press a "save" or "logout" button to preserve changes (which would have been a source of error if the user closed the browser without saving changes or logging out). However, because there would have been high latency (leading to an unresponsive interface) if the application waited for a response from the server on the GUI thread, these requests are instead queued by the application and sent from a separate thread. An unintended consequence is that if the user makes a change (say, selects a new study focus) and immediately quits the browser before the server communication thread can send the update, that change will be lost when the user next logs in. However, because the system is still in a consistent state upon login (namely, the state it was in immediately before the action occurred), and the user can easily observe the system state and redo the action, this is not a severe issue.
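
A minimal sketch of this queueing scheme (the names are illustrative, and SendToServerAndWait stands in for whatever blocking server call the worker makes): GUI code enqueues an update and returns immediately, while a dedicated background thread drains the queue.

    // Requires: using System.Collections.Generic; using System.Threading;
    private readonly Queue<string> pendingRequests = new Queue<string>();

    // Called from the GUI thread whenever the user changes something; returns immediately.
    public void QueueUpdate(string requestUrl)
    {
        lock (pendingRequests) { pendingRequests.Enqueue(requestUrl); }
    }

    // Runs on a dedicated background thread started at login.
    private void ServerCommunicationLoop()
    {
        while (true)
        {
            string next = null;
            lock (pendingRequests)
            {
                if (pendingRequests.Count > 0) next = pendingRequests.Dequeue();
            }
            if (next != null)
                SendToServerAndWait(next);   // hypothetical blocking server call, off the GUI thread
            else
                Thread.Sleep(100);           // nothing queued; check again shortly
        }
    }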

Evaluation

We found our users by asking people who we knew were not experts in the language but had at least academic exposure. These people are part of our target population because they had a desire to review the vocabulary they already knew.

Briefing

Thank you for participating in our user interface test. Your participation will allow us to find problems with our interface and will help us to build a more user-friendly interface.

In the following user testing, you will be reviewing Mandarin Chinese. The review uses the textbook "Learning Chinese: A Foundation Course for Mandarin Chinese", and you want to practice reading sentences that use the words found in this book. To do so, you're using Reading Practice, a web application that helps members practice reading in their language of study by providing a database of sentences in that language to read.

In order to help us understand your experience, we ask that you think aloud and ask about any uncertainties that you may have. Remember, this test is entirely voluntary, and you may stop at any time.

Scenario Tasks

1. Create an account and log in.

2. Enable all vocabulary from Chapter 1 of your textbook, "Learning Chinese: A Foundation Course for Mandarin Chinese" to be displayed in sentences.

3. Find a sentence containing 好 (hao4), read it, and find out its English translation.

4. This sentence contains 忙. Find out its pronunciation and definition, and find another sentence containing it. (In the event that the displayed sentence didn't contain this word, one of the words present in the currently displayed sentence was used for this task).

5. Prevent 我 (wo3) from appearing in subsequently fetched sentences.

6. Make it such that the 2 sentences currently being displayed in the "Currently Reading" tab are no longer displayed.

7. Restore one of the sentences that you just removed from display back to the "Currently Reading" tab.

8. Find another sentence containing 好. Note that this was the previous study focus.

9. Now you want to return to a general review of all words you know, and aren't focused on studying a particular word. Switch to a general review of all words, and fetch 2 sentences for review.

10. Contribute a sentence.

Observations during User Testing

User 1:

This user grew up in China for part of her life and moved to America when she was 12 years old. She said that she hadn't practiced reading in a long time but wanted to be able to review. Hence, the user could benefit from using this application, and thus matches the target audience.

Task 1

The user had no trouble creating an account, although at login she complained about waiting too long at the loading screen.

Task 2

The user selected the correct book and chapter. However, she did not initially understand how to enable the words. She first attempted to press the "Make study focus" buttons, but then realized that clicking multiple buttons undid the actions of previous button clicks. After realizing this, she found the "Check All" button and used it.

Task 3

The user did not know how to complete this task on her own. She did not know what the "Make study focus" button was supposed to do. When told what the button does, she did not know how to fetch a sentence. She complained that the panel below the "fetch next sentence" button was blank, making it unintuitive what the panel was used for. She also complained about the wording used on the button, saying that it wasn't descriptive of the task of getting new sentences using checked words. She suggested the wording "find sentences with checked words".

Task 4

The user had no problem looking up the definition and translation using the clickable word buttons in the sentence. She made the word the study focus by clicking the button on the popup.

Task 5

The user didn't have a problem forbidding the word from being shown.

Task 6

The user had no problem closing the sentences.

Task 7

The user had some trouble locating the Closed Sentences tab and restoring the sentence. She also complained about cosmetic issues; in particular, she thought there should be more horizontal spacing between sentences.

Task 8

The user had trouble noticing the study focus history button, so she searched for the desired focus word instead in order to change the focus. When asked why she didn't notice it, she said that it was too small and not obviously placed. This suggests that we need to better draw the user's locus of attention when the study focus is changed, so that users will understand where the button is.

Task 9

The user had no problem with this task after having clicked the study focus history button in the previous step. However, the user suggested making the state of the system more obvious. For example, she suggested using tabs to indicate whether the system is in "General Review" mode or "Study Focus" mode.

Task 10

The user had no problem with this step.

User 2:

This user grew up in a Chinese-speaking environment, and has basic reading proficiency. He was able to read the sentences that he encountered while doing the tasks for this user test. However, after the user test, when he enabled the remainder of the vocabulary, there were words that he needed to click on to discover their pronunciation and meaning. Hence, the user could benefit from using this application, and thus matches the target audience.

Task 1

The user had no difficulty going to the Create Account tab, entering his username and password (switching between the fields using Tab), and having the account created and logged in after pressing Enter. The user initially typed the repeated password incorrectly (apparently deliberately), and was pleased to see the immediate feedback indicating that the entered passwords were different.

Task 2

The user had no difficulty locating and using the Textbook and Chapter selection combo-boxes on the left side. However, after the chapter was selected, the user wasn't entirely sure what to do. He first clicked on the "Displayed" sorting button (which sorts the words according to whether they are allowed to be displayed or not). Then he checked the "May appear in sentences" option for some individual words. At that point he believed he had accomplished the task, and started fetching sentences. After being informed that he hadn't actually accomplished the task, the user noticed the "check all" button and clicked it (which is how we had expected the task to be done).

The user later cited that part of his difficulty was that the "May appear in sentences" label was cut off (displaying only "May appear") because his browser window was not maximized, and was at the time smaller than the minimum size we had designed for. However, the larger usability issue in this case was that the "check all" button's location did not make it discoverable, or that its label did not carry sufficient information scent.

Task 3

The user first tried pressing Ctrl-F to search for the word using the browser's built-in search. In our case, however, because we had implemented the site using Silverlight rather than HTML, the browser can't search the page. After noticing that he wasn't getting results via the browser's built-in search, he quickly switched to the search box. The user didn't have any issues locating the search box or using it. After typing in the pinyin for the word, he found the word in the search results, but initially he checked the "May appear in sentences" checkbox and didn't click the "Make study focus" button. Only after fetching a few sentences and noticing that the desired word was not present did he turn his attention back to the left sidebar and click the "Make study focus" button. He clearly noticed the change this time, as he hovered his mouse around the (now highlighted in teal) combobox displaying the word in the "study focus" section and faintly mumbled "ah, that's it". He then pressed the "fetch next sentence" button to fetch the sentence, as expected.

Task 4

The user clicked on the word in the sentence to display its pronunciation and translation. He clicked the "make this word the study focus" as expected to make it the study focus, and then pressed the "fetch next sentence" button to fetch the next sentence.

Task 5

The user typed the pinyin in the search box to locate the word and unchecked the "May appear in sentences" checkbox. He then pressed the "fetch next sentence" button a few times to convince himself that the word indeed no longer showed up and that he had actually accomplished the task.

Task 6

The user had no difficulty locating the close button for the sentence and clicking it.

Task 7

Immediately after accomplishing the previous task and before the user was shown this task, the user switched to the Closed Sentences tab and pressed the "Remove sentence" button repeatedly until all the sentences were removed. Thus, because there were no sentences left to restore, the facilitator asked the user to first fetch a new sentence and redo the previous task before proceeding. After fetching the new sentence and clicking the Close button, the user switched to the Closed Sentences tab and pressed the Restore Sentence button. He then switched back to the Currently Reading tab, saw the sentence he had just restored, and repeated the process of closing it and restoring it from the Closed Sentences tab to verify that its behavior indeed matched his mental model.

Task 8

The user went to the Study Focus sidebar, clicked the combo-box, and selected the word in the drop-down menu as expected.

Task 9

Because the user had seen the General Review option in the Study Focus combo-box while doing the previous task, he used the combo-box to return to General Review, and was able to fetch the sentences.

Task 10

The user had no issue with this task - as expected, he went to the Contribute Sentences tab, inputted a sentence using his computer's Chinese IME along with its English translation, and submitted the sentence.

User 3:

This user lived in China until her middle school years. She is fairly proficient in reading Chinese, but she still regularly encounters words she cannot read. Thus, she could benefit from using this application to gain exposure to and master more difficult vocabulary, and is thus part of the target user population.

Task 1

The user had no difficulty with this task. She used tab to switch through the fields, and pressed Enter to create her account.

Task 2

The user quickly selected the textbook and chapter from the combobox. However, after doing this, she was at a loss as to what to do. She explored the interface, first checking the "May appear in sentences" checkbox for a few words, then pressing the "Make study focus" button for a few words. She then clicked the buttons controlling the sort options. Then, she fetched a few sentences, but she still (correctly) believed that she hadn't accomplished the task. At this point, she gave up on the task, and we proceeded to the next one.

After the user experiment, we asked the user what the "May appear in sentences" checkbox did, and she indeed had the correct model of what it did. However, she said that the "Check words below" / "Uncheck words below" buttons were rather far away from the checkboxes that they manipulated, so the relation wasn't apparent to her.

Task 3

Upon seeing the word we had asked for, 好, the user remarked that this was a very common word, so she first began clicking "fetch sentences" repeatedly, knowing that a sentence containing 好 would eventually appear. After a few sentences, however, she realized on her own that this wasn't the way we had intended her to accomplish the task, so she clicked on the search box, typed in the pinyin, pressed the "Make study focus" button for the word, and fetched the sentence.

Task 4

As expected, the user clicked on the word to find out its pronunciation and definition via the popup. She then clicked on the "Make study focus" button in the popup to make it the study focus, and fetched a sentence.

Task 5

As expected, the user typed in the pinyin into the search box, and unchecked the "May appear in sentences" checkbox.

Task 6

The user clicked on the "Close" button for the sentence, as expected.

Task 7

The user went to the "Closed Sentences" tab, and clicked the "Restore" button as expected.

Task 8

The user remembered that she had previously fetched a sentence containing 好 and that it was in the "Closed Sentences" tab, so she went to that tab, located the word in the sentence, clicked it, and used the "Make study focus" button to make it the study focus. Immediately after she did so, however, her attention was drawn to the top-left side, where the study focus had just changed. She noticed that the widget displaying the study focus was actually a combobox, so she clicked it, saw that the history of study focuses was displayed there, and (correctly) remarked that using that combobox was probably the way we expected users to do the task. Thus, the user was able to discover the intended way to display previous study focuses on her own.

Task 9

The user clicked on the study focus combobox and selected General Review as expected.

Task 10

The user went to the Contribute Sentences tab and contributed a sentence as expected.

Usability Problems Identified during Testing

"Check words below" button has poor information scent

The largest usability issue we encountered was that all 3 of our users had difficulty discovering the "Check words below" button in Task 2 - the first two users took a long time until they located and used it, and the third user never found it at all and gave up on the task. During our earlier tests in the paper and computer prototyping stages, however, users didn't have such difficulties locating the buttons or discovering their functionality. There were 2 changes to which we ascribe this regression. Firstly, because evaluators had remarked that the label "Allow all words below to be displayed in sentences" was too verbose, we had shortened it to "Check words below". However, this label has less information scent, because the button itself doesn't say what clicking it actually does, but rather which checkboxes it manipulates; the user has to read the label on the checkboxes, "May appear in sentences", in order to understand what the button itself does. The second issue is that we introduced the "Sort by" buttons, again following the advice of our evaluators that items should be sortable by metrics other than just Pinyin. However, because the sort-by buttons sit immediately above the list of words, they further separated the "Check words below" button from the "May appear in sentences" checkboxes that it manipulates.

Our solution to this problem would thus be to adopt a label which has better information scent and describes what clicking the button actually does (for example, "Allow all words below to be displayed in sentences", or perhaps a less verbose version of this), rather than requiring users to figure out that the button manipulates the checkboxes, and then read the labels of the checkboxes below, before they can determine what the button actually does. We should also remove the "Sort by" buttons for the vocabulary - the buttons conflict with our goal of simplicity in the left sidebar, and they are only useful if the user already knows which word he's searching for, in which case it would be more efficient to simply type the word in and locate it via the incremental search functionality.

"Make study focus" button label assumes user knows what "study focus" means

As we saw from how User 2 didn't initially press the "Make study focus" button in Task 3, he apparently didn't understand what "study focus" meant (namely, that the study focus must appear in fetched sentences). We could perhaps replace the term "study focus" in our interface with a term which better describes what it means for a word to be the study focus - for example, "word which must appear in fetched sentences".

Study focus history functionality has poor information scent

Although all our users were eventually able to discover that the study focus combobox could be pressed to display the study focus history (Task 8), Users 1 and 3 took a while to discover that it was a combobox which contained the previous study focuses as well, not just the current study focus. This is likely because the only affordance that is provided is a small arrow indicating that it's a combobox - this certainly doesn't pass the squint test. The arrow should be made larger and more apparent; additionally, the label "study focus" above the combobox should be changed to reflect that it can also be used to display the previous study focuses - for example, "current and past study focuses".

Ctrl-F is inconsistent with usual browser search functionality

As seen with User 2 on Task 3, our interface is inconsistent with the usual browser search functionality. This is of course an artifact of our decision to implement it in Silverlight rather than HTML. Although we can't have the browser search functionality work as expected without rewriting the interface, we could intercept the Ctrl-F shortcut and have it focus the search box, as searching for words is the most likely reason why the user would press Ctrl-F on the interface.
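
A minimal sketch of that interception, with illustrative widget names, attached as a KeyDown handler on the root Silverlight control:

    // Requires: using System.Windows.Input;
    private void RootControl_KeyDown(object sender, KeyEventArgs e)
    {
        bool ctrlHeld = (Keyboard.Modifiers & ModifierKeys.Control) == ModifierKeys.Control;
        if (ctrlHeld && e.Key == Key.F)
        {
            SearchTextBox.Focus();   // jump to the vocabulary search box
            e.Handled = true;        // keep the keystroke from triggering anything else
        }
    }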

Reflection

Our design process was flexible, and we revisited even our most core assumptions based on observations made in prototyping. For example, one of the decisions we made early in the design process was our model for which sentences would be displayed. Initially, we would not display a sentence if it contained words that the user had not selected as allowed to be displayed; later, we allowed sentences containing such words, though these sentences would be displayed last and the novel words would be highlighted - this noticeably reduced the frustration users felt when fetching new sentences. Thus, our design process was holistic and considered all elements of the interface as up for revision, avoiding setting any element of the design in stone.

The paper prototyping stage helped us establish our initial risk assessments and the features to focus our prototyping efforts on - namely, that the main difficulty was conveying to the user our model that sentences would be displayed only if the words they contained were specified as being displayable in sentences, and included the study focus if there was one. Thus, we focused our prototyping efforts mainly on the vocab and textbook selection, and less on logging in, reading and understanding the sentences themselves or contributing sentences, with which users had little difficulty in early prototyping stages. Of course, we still did include these tasks in our later user testing stages as well to ensure that no regressions were occurring.

The prototyping technique we used initially was paper, due to its ease of use. Because we got our computer prototype into a mostly-working state well before the initial GR4 deadline, we were also able to use computer prototyping often during the implementation stage, asking users to attempt to accomplish a specific task with the interface. Because we were using stock widgets and a WYSIWYG designer for the interface, we were able to iterate quickly even with the computer prototype, making this technique feasible without costing a great deal of time. Sometimes we would make two implementations of the same C# interface with radically different GUIs (particularly when experimenting with the sentence viewer and sidebar) and compare them side-by-side in tasks. Had we been coding in HTML and Javascript, where the lack of decent WYSIWYG designers, the lack of code modularity, and the high time cost of implementation would have made such experiments prohibitively expensive, we would have been forced to use lower-quality prototyping techniques.

In evaluating the results of our user tests, we asked ourselves two questions: did the user understand the model, and did the user locate the appropriate widget for interacting with it? In the event that a user failed a task, our first goal was to determine which of these was the cause: was our model fundamentally being misunderstood, or was the widget simply placed in a bad location? We did this by asking users questions about our model after we finished the main tasks of the user test - for example, "What does it mean for a word to be the study focus?"

Perhaps one thing we should have focused on more during prototyping was the specific labels for items. Because item labels are easy to change after implementation, we focused little on them during early stages of testing. However, though we had intended to revise them in later stages, we often pushed this aside, and thus many of the usability issues we encountered were partly related to users misunderstanding the meaning of labels.

