
...

For sentence segmentation, we use a lexicon-based approach. If a user contributes a new sentence, then provided its words are in the lexicon, the sentence is segmented correctly, and it can then be found by setting any of the words it is segmented into as the study focus. Because our lexicon is large and comprehensive (several megabytes), the words in a sentence are usually present in it, so sentences are almost always segmented correctly. This also means that a user contributing a sentence does not need to segment or tag it themselves: they simply input the sentence, and it is automatically segmented and annotated with its vocab words. However, this automatic lexicon-based segmentation and word extraction also prevents users from using out-of-lexicon terms effectively in their contributed sentences: the sentence can still be submitted, but it will not be annotated with the out-of-lexicon vocab word.
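The report does not spell out the matching strategy, and our actual code is Silverlight/C#, but a common way to implement lexicon-based segmentation is greedy longest-match. The Python sketch below is purely illustrative (the function and parameter names are ours, not the project's); note how an out-of-lexicon term falls back to single characters and therefore never becomes a vocab word:

```python
def segment(sentence, lexicon, max_word_len=8):
    """Greedy longest-match segmentation: at each position, take the
    longest substring found in the lexicon, else a single character."""
    words, i = [], 0
    while i < len(sentence):
        match = sentence[i]  # fallback: lone character, not a vocab word
        for length in range(min(max_word_len, len(sentence) - i), 1, -1):
            candidate = sentence[i:i + length]
            if candidate in lexicon:
                match = candidate
                break
        words.append(match)
        i += len(match)
    return words

lexicon = {"我们", "学习", "中文"}  # toy lexicon
print(segment("我们学习中文", lexicon))  # ['我们', '学习', '中文']
```

Any word this produces can then serve as a study focus for retrieving the sentence.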

The server-side code is implemented as a set of ASP.NET scripts that communicate with a MySQL database instance. For each user, it stores the login credentials, the study focus and study focus history, the words currently allowed to be displayed, the sentences currently displayed, and the user's contributed sentences.
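To make the per-user state concrete, the following Python dataclass models roughly what the server tracks for each user. The field names are our assumptions for illustration, not the actual MySQL schema:

```python
from dataclasses import dataclass, field
from typing import List, Optional, Set

@dataclass
class UserRecord:
    """Illustrative model of per-user server state; field names are
    assumptions, not the project's actual database columns."""
    username: str
    password_hash: str
    study_focus: Optional[str] = None
    focus_history: List[str] = field(default_factory=list)
    allowed_words: Set[str] = field(default_factory=set)
    displayed_sentences: List[str] = field(default_factory=list)
    contributed_sentences: List[str] = field(default_factory=list)

record = UserRecord(username="alice", password_hash="hashed")
```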

One design decision we made to simplify implementation was that all information - namely, the list of sentences, the words, and their translations and romanizations - would be retrieved from the server at login time. This ensures that no latency from communicating with the server is visible once the user has logged in and is using the main interface. The resulting usability issue is that login times tend to be long due to all the downloading taking place. We alleviated this in two ways: portions of the data shared by all users (for example, the word database) are downloaded in the background while the user is still entering their login credentials, and a loading screen with an animated progress bar provides feedback while the rest downloads. Another consequence of downloading all data at login time is that sentences contributed by other users are not seen until the next time the user logs in.
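The background-prefetch idea can be sketched as follows. Our implementation lives in the Silverlight client, so this Python version is only a sketch of the pattern: start fetching shared data on a background thread as soon as the login screen appears, and block on the result only after the credentials are accepted (`fetch_shared` is a hypothetical stand-in for the real download):

```python
import threading

def start_prefetch(fetch_shared):
    """Begin downloading user-independent data (e.g. the word database)
    immediately; return a function that waits for and yields the result."""
    result = {}

    def worker():
        result["shared"] = fetch_shared()

    t = threading.Thread(target=worker, daemon=True)
    t.start()

    def wait_for_shared():
        t.join()  # blocks only if the download hasn't finished yet
        return result["shared"]

    return wait_for_shared

# Prefetch runs while the user is still typing their credentials.
get_shared = start_prefetch(lambda: {"words": ["你好", "谢谢"]})
shared = get_shared()  # called after login succeeds
```

The user-specific data (study focus, contributed sentences) still has to wait for login, which is why the progress bar remains necessary.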

...

As seen with User 2 on Task 3, our interface is inconsistent with the usual browser search functionality. This is of course an artifact of our decision to implement it in Silverlight rather than HTML. Although we can't have the browser search functionality work as expected without rewriting the interface, we could intercept the Ctrl-F shortcut and have it focus the search box, as searching for words is the most likely reason why the user would press Ctrl-F on the interface.
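The Ctrl-F interception would be wired up in Silverlight's key-event handling; as a language-neutral sketch of the idea (all names here are ours, for illustration only), the handler consumes the shortcut and redirects focus, while reporting other keys as unhandled so they keep their defaults:

```python
def make_key_handler(focus_search_box):
    """Intercept Ctrl-F and redirect it to the in-app search box,
    since the browser's own find dialog cannot search our interface."""
    def on_key_down(key, ctrl):
        if ctrl and key.lower() == "f":
            focus_search_box()
            return True   # handled: suppress the browser default
        return False      # unhandled: let other shortcuts through
    return on_key_down

focused = []
handler = make_key_handler(lambda: focused.append(True))
handler("f", ctrl=True)   # Ctrl-F focuses the search box
handler("a", ctrl=True)   # other shortcuts are untouched
```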

Reflection

Our design process was flexible: we revisited even our core assumptions based on observations made during prototyping. For example, one decision made early in the design process was the model for which sentences would be displayed. Initially, we did not display a sentence unless every word in it had been selected by the user as allowed to be displayed. Later, we also allowed sentences containing words the user had not selected, displaying them last and highlighting the novel words; this noticeably reduced users' frustration in fetching new sentences. Our design process was thus holistic: every element of the interface was open to revision, and none was set in stone.
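The revised display model can be sketched as a simple ordering pass. This Python sketch is illustrative only (the real logic is in the Silverlight client): sentences made up entirely of allowed words come first, and sentences with novel words come last, with the novel words flagged for highlighting:

```python
def order_sentences(sentences, allowed):
    """Return (words, novel_words) pairs: fully-allowed sentences first,
    then sentences containing novel words, which will be highlighted."""
    known, novel = [], []
    for words in sentences:
        extra = [w for w in words if w not in allowed]
        if extra:
            novel.append((words, extra))  # displayed last, extras highlighted
        else:
            known.append((words, []))
    return known + novel

allowed = {"我", "喜欢", "猫"}
sents = [["我", "喜欢", "狗"], ["我", "喜欢", "猫"]]
ordered = order_sentences(sents, allowed)
```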

The paper prototyping stage helped us establish our initial risk assessments and the features on which to focus our prototyping efforts. The main difficulty was conveying our model to the user: a sentence is displayed only if the words it contains are marked as displayable in sentences and, when a study focus is set, the sentence includes it. We therefore focused our prototyping mainly on vocab and textbook selection, and less on logging in, reading and understanding the sentences themselves, or contributing sentences, all of which users handled with little difficulty in early prototyping stages. Of course, we still included these tasks in our later user testing stages to ensure that no regressions were occurring.

...