Design

We will present the final design of our interface by walking through a sample user interaction.

The user is first presented with the splash screen that has persisted since our first iteration. This splash screen is consistent with other Android applications and generally serves as a confirmation to the user that they clicked the right application and that everything is working correctly.

The user clicks the button to continue to their music library, which lists the compositions stored in the phone’s internal memory. The compositions are sorted alphabetically. When the library is empty, an indication (“User Library (Empty)”) is shown on screen, based on feedback from the heuristic evaluations.

Another heuristic-evaluation request was “long-click” functionality that shows additional information about the selected song.

These details can only be edited in Edit Mode.

The new song settings screen is displayed when the user chooses to create a new composition. Default values were requested in the heuristic evaluations.

In Edit Mode, the user can add and remove notes as well as change clef, meter, key signature, title, and tempo. The ability to modify all of these song fields was requested in heuristic evaluation.

The user is given feedback when buttons are clicked to add notes or flats/sharps/naturals to the screen. This was an original design choice: the feedback hints at the application’s current mode.

When the user adds notes to the song, they snap vertically and horizontally for compact and consistent spacing. Accidentals can only be added to notes (not rests) and snap to a location near their note.

When the user places enough notes on the screen, they can move to a subsequent screen and continue placing notes. We originally wanted the user to scroll the score to the left with a finger swipe, but we feel the current implementation is more consistent with the modal note placement.

The application supports menu screens that are consistent with all other Android application screens and are prompted by the phone’s physical menu button. The “Back” button is disabled in both Playback and Edit modes to preserve implementation invariants.

Placing dynamics (p, pp, mp, etc.) is achieved by clicking the dynamics button, choosing a particular dynamic from the list, and placing it on the screen. The screen greys out the areas where a dynamic is not allowed to be placed, making it very obvious to the user where to press.

Playback Mode gives a simplified view consistent with Edit Mode. An arrow points to each note as it is played.

Implementation

Android Limitations

In Android development, each screen is a separate Activity in the code. A defined Activity Lifecycle gives each of these files a general structure: some code executes when the screen opens, other code when it closes. All other actions are defined by listeners attached to on-screen View objects, so the application can react to clicks, drags, etc. The interfaces themselves are statically defined in XML files and can be updated dynamically at runtime.
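The open/close structure plus attached listeners can be sketched in plain Java (a hypothetical illustration: the class, method names, and log messages below mirror Android's onCreate/onDestroy and OnClickListener shapes but are stand-ins, not our actual code or the Android SDK):

```java
import java.util.ArrayList;
import java.util.List;

// ClickListener stands in for android.view.View.OnClickListener.
interface ClickListener { void onClick(); }

// SketchActivity mimics the Activity Lifecycle: code on open, code on close,
// and everything else driven by listeners.
class SketchActivity {
    final List<String> log = new ArrayList<>();
    ClickListener playButtonListener;

    void onCreate() {   // runs when the screen opens
        log.add("onCreate: inflate layout from XML, attach listeners");
        playButtonListener = () -> log.add("click: start playback");
    }

    void onDestroy() {  // runs when the screen closes
        log.add("onDestroy: release resources, persist song");
    }
}
```

Between onCreate and onDestroy the screen is idle; only listener callbacks run in response to user events.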

Implementation Specifics

The application consumes minimal resources when idle (i.e., when the user is not actively clicking or listening). The heavy lifting happens during Activity switches and immediately when user actions fire events on Views with attached listeners.

We loosely followed the Model-View-Controller pattern. Our Model (data representation) is completely blind to the fact that it holds data displayed to the user: it stores only the song itself (title, tempo, clef, meter, key signature, and the notes).
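A minimal sketch of such a presentation-blind Model might look like the following (hypothetical class and field names chosen for illustration, not our actual classes):

```java
import java.util.ArrayList;
import java.util.List;

// A Note knows nothing about pixels or screens: just musical data.
class Note {
    final int pitch;         // e.g. a MIDI-style number: 60 = middle C
    final int duration;      // e.g. in sixteenth-note ticks
    final String accidental; // "sharp", "flat", "natural", or null

    Note(int pitch, int duration, String accidental) {
        this.pitch = pitch;
        this.duration = duration;
        this.accidental = accidental;
    }
}

// A Song holds the editable fields plus an ordered list of notes.
class Song {
    String title;
    int tempo;               // beats per minute
    String clef;             // "treble" or "bass"
    String keySignature;
    final List<Note> notes = new ArrayList<>();

    void addNote(Note n) { notes.add(n); }

    void removeLastNote() {
        if (!notes.isEmpty()) notes.remove(notes.size() - 1);
    }
}
```

Because nothing here references drawing or screen coordinates, the same model can back the score view, playback, and any saved-file format.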

The View is mostly handled by the Android OS, which is responsible for drawing and updating the screen when items are added, removed, or modified. The OS dispatches events when the user interacts with the device.

Our Controller is divided between listeners that wait for user events and the code that updates the View and Model as necessary. The application is one giant state machine, which makes the listeners completely state-dependent: each is implemented as a large case block with a different action for every possible state.
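The state-dependent listener pattern can be sketched like this (a hypothetical simplification: the mode names and actions are illustrative, not our actual state machine):

```java
// The modes the tap handler can be in; each arm of the switch below
// corresponds to one state.
enum Mode { EDIT, PLACING_NOTE, PLACING_DYNAMIC, PLAYBACK }

// One screen-tap listener whose behavior branches entirely on mode.
class TapHandler {
    Mode mode = Mode.EDIT;

    String onScreenTap(int x, int y) {
        switch (mode) {                 // the "large case block"
            case PLACING_NOTE:
                return "snap note to staff near (" + x + "," + y + ")";
            case PLACING_DYNAMIC:
                return "place dynamic if region is allowed";
            case PLAYBACK:
                return "ignore tap";    // screen is read-only during playback
            default:                    // plain EDIT mode
                return "select existing symbol";
        }
    }
}
```

The same tap means something different in each mode, which is exactly why the listeners cannot be written without consulting the current state.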

Our Thoughts

The biggest implementation problems we had were due to the Android API. The Android Developer’s site was a lifesaver, but some of the quirks were difficult to work around. For example, when the screen is rotated on the device, the current Activity is killed and restarted. This means that our screens need to be robust and display consistent information even after being restarted.
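The robustness-under-restart requirement amounts to a save/restore pattern, sketched here in plain Java (a HashMap stands in for Android's Bundle; the class and key names are hypothetical):

```java
import java.util.HashMap;
import java.util.Map;

// Before the Activity is killed on rotation, save what the screen needs;
// the fresh instance restores it so the display stays consistent.
class ScoreScreen {
    String songTitle;
    int scrollPage;

    Map<String, Object> saveState() {   // ~ Android's onSaveInstanceState
        Map<String, Object> state = new HashMap<>();
        state.put("title", songTitle);
        state.put("page", scrollPage);
        return state;
    }

    void restoreState(Map<String, Object> state) { // ~ onCreate(savedInstanceState)
        songTitle = (String) state.get("title");
        scrollPage = (Integer) state.get("page");
    }
}
```

Any field not round-tripped this way silently resets on rotation, which is how these bugs usually surface.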

We also ran into trouble with multiple API levels. Our first implementation used methods that are only available on the newest version of the OS, which no phones run currently. This meant that we had to scrap our drag-and-drop functionality and rewrite most of the code to use older method calls.
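An alternative to rewriting for the lowest common denominator is a runtime version guard, sketched here in plain Java (the sdkInt parameter stands in for android.os.Build.VERSION.SDK_INT; drag-and-drop listeners were introduced in Android 3.0, API level 11):

```java
// Choose an input strategy based on the device's API level instead of
// assuming the newest OS. All names here are illustrative stand-ins.
class InputStrategy {
    static String choose(int sdkInt) {
        final int HONEYCOMB = 11;   // first level with the drag-and-drop API
        if (sdkInt >= HONEYCOMB) {
            return "drag-and-drop";         // newer phones
        }
        return "touch-based placement";     // fallback for older phones
    }
}
```

Guarding calls this way lets one build support old phones while still using newer APIs where available, at the cost of maintaining both code paths.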

In all, we are very happy with the implementation and believe that after implementing what we learned in the user testing, our application will be up to par with publicly available applications.

Code is available on GitHub here.

Evaluation

Our Users

We initially scheduled three Berklee students (Aubrey’s friends) to test our app, but two of the appointments fell through. At that point, we contacted MIT friends who had music composition experience.
Berklee student: A junior in Film Scoring and Music Composition.
MIT students: Two sophomores, both of whom had taken Harmony and Counterpoint I. The Harmony and Counterpoint classes teach music theory and require a composition as a final project. One student is in Concert Choir and taking Harmony and Counterpoint II. The other is an aspiring music minor and one of the directors of Syncopasian, an a cappella group for which she also arranges music.

The Berklee student is representative of our target user population: busy music students who compose a lot of music. The MIT students are also representative because they know a lot about music and composition and could see uses for the app in their own lives.

User Test

Users sat with an Android phone and were asked to complete three tasks, each given after the previous was completed, while observers took notes on their think-aloud comments as they used the app.

Briefing:
After explaining the purposes of a usability test, the students were told that they were busy music students who did not know when inspiration would strike. Suddenly they had an idea come to them, but they didn’t have access to their computer or a piece of paper. They did have their Android phone.

Tasks:
1. Create a piece of music
2. Browse through files to find the sketch
3. Listen to the sketch

Usability problems

Ranked in order of importance, followed by proposed solutions:

Reflection

What We Learned

We learned how to program for Android and use the different APIs. We should have better applied what we learned in 6.005: designing the implementation before jumping straight in. Though we did sketch data structures and different classes and methods that we would need, we did not complete the code design before we started writing.
We learned how to design GUIs and how important human interaction with the product is during the design process. Iterative design is really important, and we benefitted a lot from the feedback we received from users. We liked prototyping our product, showing people what we were working on, and letting them play with it.

We enjoyed writing an app for a different user population. In the past, most of the software we have written was built to teach concepts and is not very useful in the world, or we were our own users. Designing for someone else made us stretch; it was especially interesting to design for music students, who use computers very differently from how we use computers. Additionally, we learned how to properly administer a user test to efficiently get the most information. The tests emphasized that the user is always right, and we had to take their input into account when making design decisions.

External consistency is very important! We made several design decisions for our app that were not consistent with other music software suites. Though our app was specifically designed for sketching and not composing symphonies, we cut out some features that ultimately were very important to our users (such as measures).

Also, some of our less musically-inclined group members (Stephen) learned a lot about music!

What We Would Do Differently: Research and Testing

One of our major challenges was working with the Android development environment, and we would have benefitted from looking into code from other complete Android applications, as well as reading through the developer’s guides before beginning the implementation process. In the same vein, it would have been very helpful to have ~10 different Android phones for testing on different versions of the OS, screen sizes, etc. We were rudely awakened the first time we deployed the application to a phone and found that the emulator was a very weak simulator for the actual phone environment. During the development process we only had reliable access to one Android phone, which severely hindered our progress.
We also would have benefitted from another iteration in the design process. Our user tests pointed out several usability problems that we could have fixed if we had another week to develop the final implementation.

All in all, this project was a very positive experience for our group. We got the chance to go through a few iterations of the spiral model and learned the struggles of pleasing a user population we know nothing about. The app that we created is something that we are all proud of and would definitely download for our own personal use. Several people (friends, family, professors) have expressed interest in the app because of its novel idea. This class taught us about the iterative design process for user interfaces and we had the chance to use that knowledge in our project.