...

The server-side backend is written using ASP.NET scripts that communicate with a MySQL database instance. Our original computer prototype simply read the sentence and word databases from built-in text files instead of communicating with the server, so we did not support user accounts until late in the implementation stage. With the exception of the login screen, however, this did not pose a problem for testing most of the functionality of the user interface.
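To illustrate the prototype's file-based data loading, here is a minimal sketch in C#; the file name, the tab-separated format, and the WordEntry type are assumptions made for this example and are not taken from the actual code.

    // Load the built-in word database the way the prototype did,
    // before the ASP.NET/MySQL backend was wired in.
    using System.Collections.Generic;
    using System.IO;

    public class WordEntry
    {
        public string Word;          // the vocabulary word itself
        public string Romanization;  // e.g. a pronunciation guide
        public string Translation;   // English gloss
    }

    public static class PrototypeData
    {
        // Assumed format: one entry per line, "word<TAB>romanization<TAB>translation".
        public static List<WordEntry> LoadWords(string path)
        {
            var words = new List<WordEntry>();
            foreach (var line in File.ReadAllLines(path))
            {
                var parts = line.Split('\t');
                if (parts.Length < 3) continue;  // skip malformed lines
                words.Add(new WordEntry
                {
                    Word = parts[0],
                    Romanization = parts[1],
                    Translation = parts[2]
                });
            }
            return words;
        }
    }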

For sentence segmentation we use a lexicon-based approach. If a user contributes a new sentence, then as long as its words are in the lexicon, the sentence is segmented correctly, and it can be found by making any of the words it is segmented into the study focus. Because our lexicon is extremely large and comprehensive (several megabytes), the words in a sentence are almost always found in the lexicon, so sentences are usually segmented properly. This also means that when users contribute a sentence, they do not need to segment it themselves or provide any tagging for the words - they simply input the sentence, and it is automatically segmented and annotated with its vocab words. However, this automatic lexicon-based segmentation and word extraction also prevents users from using out-of-lexicon terms in their contributed sentences (the sentence can still contain such a term, but it will not be annotated with that vocab word). A simple lexicon-based strategy is sketched below.
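The report does not specify the exact matching strategy, so the following sketch shows greedy longest-match (maximum-matching) segmentation against a lexicon, which is one common lexicon-based approach; the Segment function and its parameters are illustrative rather than the actual implementation.

    // Greedy longest-match segmentation against a word lexicon.
    using System;
    using System.Collections.Generic;

    public static class Segmenter
    {
        public static List<string> Segment(string sentence, HashSet<string> lexicon, int maxWordLength)
        {
            var tokens = new List<string>();
            int i = 0;
            while (i < sentence.Length)
            {
                // Try the longest possible lexicon word starting at position i.
                int len = Math.Min(maxWordLength, sentence.Length - i);
                while (len > 1 && !lexicon.Contains(sentence.Substring(i, len)))
                    len--;
                // Out-of-lexicon characters fall through as single-character tokens,
                // which is why unknown terms never become vocab-word annotations.
                tokens.Add(sentence.Substring(i, len));
                i += len;
            }
            return tokens;
        }
    }

Under this scheme, the tokens that do appear in the lexicon would then serve as the contributed sentence's vocab-word annotations.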

One design decision we made to simplify implementation was that all information - namely, the list of sentences, the words, and their translations and romanizations - would be retrieved from the server at login time. This was meant to ensure that no latency from communicating with the server would be seen while the interface was actually in use. One usability issue that resulted, however, was that login times tend to be long because of all the downloading taking place. We alleviated this by loading the shared portions of the data that every user needs (for example, the word database) in the background while the user is still at the login screen. Additionally, we show a loading screen with an animated progress bar to provide feedback while the page is loading. Another consequence of downloading all data from the server at login time is that if another user contributes a sentence, it will not appear for a user who is already logged in until the next time that user logs in.
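A minimal sketch of the background-preloading idea follows; the LoginFlow class and the injected download delegates are hypothetical stand-ins for the actual server requests, shown only to illustrate the ordering of the shared and per-user downloads.

    // Shared data (the word database) starts downloading as soon as the login
    // screen appears; per-user data is fetched only after credentials are submitted.
    using System;
    using System.Threading.Tasks;

    public class LoginFlow
    {
        // Hypothetical stand-ins for the real server requests.
        private readonly Func<Task<string[]>> _downloadSharedWordDatabase;
        private readonly Func<string, string, Task<string[]>> _downloadUserSentences;

        private Task<string[]> _preload;

        public LoginFlow(Func<Task<string[]>> downloadSharedWordDatabase,
                         Func<string, string, Task<string[]>> downloadUserSentences)
        {
            _downloadSharedWordDatabase = downloadSharedWordDatabase;
            _downloadUserSentences = downloadUserSentences;
        }

        // Called when the login screen first appears.
        public void OnLoginScreenShown()
        {
            _preload = _downloadSharedWordDatabase();
        }

        // Called when the user submits credentials: fetch per-user data, then
        // wait for the shared download (usually already finished by now).
        public async Task<(string[] Words, string[] Sentences)> OnLoginSubmitted(string user, string password)
        {
            var sentences = await _downloadUserSentences(user, password);
            var words = await _preload;
            return (Words: words, Sentences: sentences);
        }
    }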

...