...

For sentence segmentation, we use a lexicon-based approach. If a user contributes a new sentence and its words are in the lexicon, the sentence is segmented correctly and can then be found by making any of the words it is segmented into the study focus. Because our lexicon is large and comprehensive (several megabytes), the words in a contributed sentence are usually found in it, so the sentence is usually segmented properly. This also means that a user contributing a sentence does not need to segment it or tag its words himself - he simply enters the sentence, and it is automatically segmented and annotated with its vocab words. However, this automatic lexicon-based segmentation and word extraction also means that users cannot effectively use out-of-lexicon terms in their contributed sentences (they can type them, but the sentence will not be annotated with those vocab words).
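
As a rough illustration, the sketch below shows one common lexicon-based strategy (greedy longest match); the class and method names and the maxWordLength limit are illustrative assumptions rather than our actual implementation.

    using System;
    using System.Collections.Generic;

    // Illustrative greedy longest-match segmenter; not our actual implementation.
    public static class LexiconSegmenter
    {
        public static List<string> Segment(string sentence, HashSet<string> lexicon, int maxWordLength)
        {
            var words = new List<string>();
            int i = 0;
            while (i < sentence.Length)
            {
                // Try the longest candidate first and shrink until the lexicon matches.
                int len = Math.Min(maxWordLength, sentence.Length - i);
                while (len > 1 && !lexicon.Contains(sentence.Substring(i, len)))
                {
                    len--;
                }
                // Out-of-lexicon text falls through as single-character tokens,
                // which is why unlisted terms are never annotated as vocab words.
                words.Add(sentence.Substring(i, len));
                i += len;
            }
            return words;
        }
    }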

The server-side code, which stores each user's login credentials, study focus, study focus history, words currently allowed to be displayed, currently displayed sentences, and contributed sentences, is implemented as a set of ASP.NET scripts that communicate with a MySQL database instance.
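
For concreteness, the sketch below shows roughly what one such script could look like as a generic ASP.NET handler querying MySQL through Connector/NET; the handler name, table, columns, and connection string are hypothetical placeholders rather than our actual schema or code.

    using System.Web;
    using MySql.Data.MySqlClient;

    // Hypothetical example of a server-side endpoint: returns a user's current
    // study focus. Table, column, and connection-string details are placeholders.
    public class GetStudyFocusHandler : IHttpHandler
    {
        public bool IsReusable { get { return true; } }

        public void ProcessRequest(HttpContext context)
        {
            string user = context.Request["user"];
            using (var conn = new MySqlConnection("server=localhost;database=vocab;uid=app;pwd=secret"))
            {
                conn.Open();
                var cmd = new MySqlCommand(
                    "SELECT focus_word FROM study_focus WHERE username = @user", conn);
                cmd.Parameters.AddWithValue("@user", user);
                object result = cmd.ExecuteScalar();
                context.Response.ContentType = "text/plain";
                context.Response.Write(result == null ? "" : result.ToString());
            }
        }
    }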

One design decision we made to simplify implementation was that all information - namely, the list of sentences, the words, and their translations and romanizations - would be retrieved from the server at login time. This ensures that the user sees no latency from server communication once he has logged in and is using the main interface. One resulting usability issue, however, is that login times tend to be long because of all the downloading taking place. We alleviated this by downloading the portions of the data shared by all users (for example, the word database) in the background while the user is still entering his login credentials at the login screen, and by showing a loading screen with an animated progress bar to give the user feedback while the page loads. Another consequence of downloading all data at login time is that sentences contributed by other users will not be seen until the next time the user logs in.
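
The background download can begin as soon as the login screen appears. The sketch below shows one way to do this with Silverlight's WebClient; the URL and the cache field are placeholders, and the real client downloads several such resources.

    using System;
    using System.Net;

    // Sketch of prefetching shared data (e.g. the word database) while the user
    // is still typing credentials. The URL and cache field are placeholders.
    public class SharedDataPrefetcher
    {
        public string CachedWordDatabase { get; private set; }

        public void BeginPrefetch()
        {
            var client = new WebClient();
            client.DownloadStringCompleted += (sender, e) =>
            {
                if (e.Error == null)
                {
                    // Usually ready by the time the user finishes logging in.
                    CachedWordDatabase = e.Result;
                }
            };
            client.DownloadStringAsync(new Uri("http://example.com/data/words.txt"));
        }
    }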

Another design decision we made is that changes made by the user (to the study focus history, the words allowed to be displayed, the lists of displayed sentences, and the contributed sentences) are sent to the server immediately. This ensures that the user never has to press a "save" or "logout" button to preserve changes (a likely source of error if the user closed the browser without saving or logging out). However, waiting for a server response in the GUI thread would have introduced high latency and made the interface unresponsive, so these requests are instead queued by the application and sent from a separate thread. An unintended consequence is that if the user makes a change (say, selects a new study focus) and immediately quits the browser before the communication thread can send the update to the server, that change is lost by the next login. Because the system is still in a consistent state upon login (namely, the state it was in immediately before the action occurred), and the user can easily observe that state and redo the action, this is not a severe issue.
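
The queue-and-send pattern can be sketched as follows; this is an illustrative version using a lock-protected queue and a background worker thread, with the actual HTTP call and error handling omitted (SendToServer is a placeholder).

    using System.Collections.Generic;
    using System.Threading;

    // Illustrative version of the update queue: the GUI thread enqueues changes
    // and returns immediately; a background thread drains the queue and performs
    // the blocking server calls. SendToServer is a placeholder.
    public class UpdateQueue
    {
        private readonly Queue<string> pending = new Queue<string>();
        private readonly object sync = new object();

        public UpdateQueue()
        {
            var worker = new Thread(SendLoop) { IsBackground = true };
            worker.Start();
        }

        // Called from the GUI thread; never blocks on the network.
        public void Enqueue(string update)
        {
            lock (sync)
            {
                pending.Enqueue(update);
                Monitor.Pulse(sync);
            }
        }

        private void SendLoop()
        {
            while (true)
            {
                string update;
                lock (sync)
                {
                    while (pending.Count == 0) Monitor.Wait(sync);
                    update = pending.Dequeue();
                }
                SendToServer(update);  // blocking HTTP request; retries omitted
            }
        }

        private void SendToServer(string update) { /* placeholder */ }
    }

Because the worker is a background thread, anything still queued when the browser closes is simply dropped, which is exactly the lost-update behavior described above.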

Evaluation

Describe how you conducted your user test. Describe how you found your users and how representative they are of your target user population (but don't identify your users by name). Describe how the users were briefed and what tasks they performed; if you did a demo for them as part of your briefing, justify that decision. List the usability problems you found, and discuss how you might solve them.

We found our users by asking people who we knew were not experts in the language but had at least some academic exposure to it. These people fit our target population because they wanted to review vocabulary they already knew.

...

As seen with User 2 on Task 3, our interface is inconsistent with the browser's usual search functionality. This is, of course, an artifact of our decision to implement the interface in Silverlight rather than HTML. Although we cannot make the browser's search work as expected without rewriting the interface, we could intercept the Ctrl-F shortcut and move focus to the search box, since searching for words is the most likely reason a user would press Ctrl-F in the interface.
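
A sketch of that interception is shown below; "SearchBox" stands in for the actual search text box, the handler would be attached to the root element's KeyDown event, and whether the plugin sees Ctrl-F before the browser does depends on the browser and on where focus currently is.

    using System.Windows.Controls;
    using System.Windows.Input;

    // Sketch: redirect Ctrl-F to the in-app search box. The search TextBox is
    // passed in; in the real page the handler would live in the code-behind.
    public class CtrlFInterceptor
    {
        private readonly TextBox searchBox;

        public CtrlFInterceptor(TextBox searchBox)
        {
            this.searchBox = searchBox;
        }

        public void OnRootKeyDown(object sender, KeyEventArgs e)
        {
            if (e.Key == Key.F && (Keyboard.Modifiers & ModifierKeys.Control) != 0)
            {
                searchBox.Focus();   // send the familiar shortcut to in-app search
                e.Handled = true;    // keep the keystroke from triggering anything else
            }
        }
    }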

Reflection

Our design process was flexible and revisited even our core assumptions based on observations made during prototyping. For example, one decision we made early in the design process was our model for which sentences would be displayed. Initially, we displayed a sentence only if all of its words had been selected by the user as allowed to be displayed; later, we also allowed sentences containing words the user had not selected, displaying them last and highlighting the novel words - this noticeably reduced users' frustration when fetching new sentences. Our design process was thus holistic: it treated every element of the interface as open to revision rather than setting any part of the design in stone.

The paper prototyping stage helped us establish our initial risk assessments and decide which features to focus our prototyping efforts on - namely, it showed that the main difficulty was conveying our model to the user: a sentence is displayed only if the words it contains have been marked as displayable in sentences, and it includes the study focus if there is one. We therefore focused our prototyping efforts mainly on vocab and textbook selection, and less on logging in, reading and understanding the sentences, or contributing sentences, with which users had little difficulty in the early prototyping stages. We still included these tasks in later rounds of user testing to ensure that no regressions occurred.

...