ArtBark is designed with two user groups in mind: artists who are looking for detailed, meaningful feedback on a work, and reviewers who provide that feedback. Because many users will play both roles, the artist’s upload interface, the reviewer’s interface, and the artist’s review interface were designed to flow together seamlessly for learnability and consistency.
We decided to use a very simple account model for ArtBark. The account design is similar to that of other highly efficient web applications such as Imgur, where users do not need to register an account in order to share a photo; instead, content is shared through unique URLs. In our final user evaluations, most users appreciated this straightforward, no-nonsense security model.
The login page was designed to be simple and straightforward. In our user testing and heuristic evaluations, a number of users reported being confused by the multiple login options we had provided (we had originally given users the option of viewing existing art by manually entering their email address and the art’s name, in addition to the option of uploading new art). As a result, we chose to present only the most basic option on the login page: existing artists and reviewers can only access existing works through personalized URLs.
As described, once the artist has finished uploading his/her art, he/she is presented with links for each group that will be viewing the art.
When a reviewer navigates to one of the aforementioned login URLs, he/she will be presented with a customized interface that minimizes confusion. As shown below, a reviewer only sees the single option of viewing the specified work of art. Logging in will take the reviewer to the reviewer interface.
If an artist navigates to the URL for his/her uploaded work, he/she will immediately be taken to the artist interface.
The artist who wants feedback on their art must first submit that art to our system. After logging in and indicating that they are going to upload some art, they are taken to the upload page. The upload page features two ways to upload an image: drag and drop, and an upload button. Either way, after optionally giving their art a title and watching an image upload status bar, they are whisked away from the upload page to the Review Group Privacy Setup page.
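Both upload paths might feed a single submission routine along the lines of the sketch below. This is illustrative only: the element IDs (#drop-zone, #file-input, #title, #upload-progress), the /upload endpoint, and the redirect target are assumptions, not our exact markup or routes.

// Hypothetical sketch: drag-and-drop and the upload button share one
// sendFile() helper, which reports progress so the status bar can update.
function sendFile(file, title) {
  var data = new FormData();
  data.append("art", file);
  data.append("title", title || "");

  var xhr = new XMLHttpRequest();
  xhr.upload.onprogress = function (e) {
    if (e.lengthComputable) {
      $("#upload-progress").css("width", (100 * e.loaded / e.total) + "%");
    }
  };
  xhr.onload = function () {
    window.location = "/privacy-setup";   // continue to Review Group Privacy Setup
  };
  xhr.open("POST", "/upload");            // assumed endpoint name
  xhr.send(data);
}

// Drag-and-drop path
$("#drop-zone").on("dragover", function (e) { e.preventDefault(); });
$("#drop-zone").on("drop", function (e) {
  e.preventDefault();
  sendFile(e.originalEvent.dataTransfer.files[0], $("#title").val());
});

// Upload-button path
$("#file-input").on("change", function () {
  sendFile(this.files[0], $("#title").val());
});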
The Review Group Privacy Setup page is a page whose existence fell directly out of our users’ needs, wants, and demands. Its basic purpose is to let artists retain control over who gets to see others’ comments on their work. This helps resolve the social dilemmas that can occur when reviewer groups have different levels of social status and power, by letting the artist sidestep any potential social friction.
The page features draggable “pills” that represent the different target user groups of our application. Where a pill is dropped determines the inter-group visibility of the comments.
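This interaction maps naturally onto jQuery UI’s draggable/droppable API. The sketch below is only an approximation of our code; the .group-pill and .visibility-zone selectors and the privacySettings object are assumed names.

// Hypothetical sketch of the privacy "pills" using jQuery UI.
// Each pill represents a reviewer group; each drop zone represents a
// visibility level (e.g. "shared" vs. "private").
var privacySettings = {};                          // assumed in-memory settings object, keyed by group

$(".group-pill").draggable({ revert: "invalid" }); // snap back if dropped outside a zone

$(".visibility-zone").droppable({
  accept: ".group-pill",
  drop: function (event, ui) {
    var group = ui.draggable.data("group");        // e.g. "mentors", "peers"
    var visibility = $(this).data("visibility");   // e.g. "shared", "private"
    privacySettings[group] = visibility;
    ui.draggable.appendTo(this);                   // visually snap the pill into the zone
  }
});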
Once the user continues on from the privacy settings page, they arrive at the Tags page. This page fulfills users’ need to direct the type and sort of feedback that they receive from their peers and mentors. Tags are simply short words or phrases, much like those seen on sites like Flickr or Twitter, that suggest a theme for feedback.
The reviewer is expected to receive a link from the artist requesting a review; they simply enter their email address, and an account is created on the fly if one does not already exist, keeping the barrier to entry low.
The user is presented with the most recent version of the work (user testing showed that having access to multiple versions was confusing), and any public comments that have already been entered (in this case there aren’t any yet). Reviewers can provide a high-level rating, general comments, or more specific annotations, covering all the levels of feedback the artists we interviewed were interested in.
Rating stars are directly manipulated: they highlight on hover and fill in on click.
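A sketch of how that hover/click behavior might be wired up with jQuery; the .star markup, the CSS class names, and the saveRating() helper are assumptions for illustration.

// Hypothetical sketch: stars highlight up to the hovered star, and clicking
// fills them in permanently and records the rating.
$(".star").on("mouseenter", function () {
  $(this).prevAll(".star").addBack().addClass("highlighted");
});
$(".star").on("mouseleave", function () {
  $(".star").removeClass("highlighted");
});
$(".star").on("click", function () {
  var rating = $(this).index() + 1;                 // 1-based position among sibling stars
  $(".star").removeClass("filled");
  $(this).prevAll(".star").addBack().addClass("filled");
  saveRating(rating);                               // assumed helper that sends the rating to the server
});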
Users can make general comments about the piece that will appear in the container on the right side. The artist-defined tags help to guide the reviewer’s feedback without restricting it. We had originally required users to categorize their comments using a predefined set, but in prototyping we discovered that reviewers found this limiting and stressful. Tags offer a more flexible, yet still guided, user experience.
When the comment appears in the comment panel, the user is also presented with some embedded editing tools. Icons give a quick information scent: the pencil lets users edit the comment text, and the X icon lets users delete comments. When the user deletes a comment (the X is highlighted on hover below), a message appears at the top of the interface giving the user the option to undo the action, as users requested in testing. This message does not interrupt user interaction, as a dialog would, but fades after some time, again for flexibility. The user can also dismiss the notification manually. This behavior is very similar to the email deletion behavior in Gmail.
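The undo notification might work roughly as follows. This is a sketch under assumed names (#notifications container, removeCommentFromServer()); the real deletion and rendering logic is not shown.

// Hypothetical sketch of the Gmail-style undo notification: deleting a comment
// hides it, shows a dismissible message, and only removes the data for good
// once the message has faded without the user clicking "Undo".
function deleteComment($comment) {
  $comment.hide();

  var $notice = $('<div class="notice">Comment deleted. ' +
                  '<a href="#" class="undo">Undo</a> ' +
                  '<a href="#" class="dismiss">&times;</a></div>')
    .prependTo("#notifications");

  // Fade out automatically after a few seconds unless the user acts first.
  var timer = setTimeout(function () { $notice.fadeOut(finalize); }, 5000);

  $notice.on("click", ".undo", function (e) {
    e.preventDefault();
    clearTimeout(timer);
    $comment.show();                                // restore the comment
    $notice.remove();
  });
  $notice.on("click", ".dismiss", function (e) {
    e.preventDefault();
    clearTimeout(timer);
    $notice.fadeOut(finalize);
  });

  function finalize() {
    $(this).remove();
    removeCommentFromServer($comment.data("id"));   // assumed server call
  }
}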
Preceding each comment is a pin. On hover, a pointer cursor reveals it to be draggable; users can drop this pin anywhere on the image to associate the comment with a particular part of the art.
In this sense, a pin is an optional feature of a comment. We had originally treated “general comments” and “annotations” as separate entities, but early prototyping revealed that users found this model confusing and inflexible, and that annotations were hard to discover. This integrated, flexible approach proved much more intuitive in user studies.
Users can also create pinned comments directly by clicking anywhere on the image; a crosshair cursor indicates this option when hovering over the image.
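The two ways a pin can come into being might look like the sketch below. The #art-image container, the .pin element, and the createComment() helper are hypothetical names used only for illustration.

// Hypothetical sketch: clicking the image creates a pinned comment at the
// click position; existing pins can be dragged to (re)position them.
$("#art-image").css("cursor", "crosshair").on("click", function (e) {
  var offset = $(this).offset();
  var x = e.pageX - offset.left;
  var y = e.pageY - offset.top;

  var comment = createComment("");                 // assumed helper: creates an empty comment entry
  placePin(comment, x, y);
});

function placePin(comment, x, y) {
  $('<div class="pin"></div>')
    .appendTo("#art-image")
    .css({ position: "absolute", left: x, top: y })
    .attr("data-comment-id", comment.id)
    .draggable({
      containment: "#art-image",                   // keep pins on the artwork
      stop: function (event, ui) {
        comment.pin = { x: ui.position.left, y: ui.position.top };
      }
    });
}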
The user can edit pinned comments in-place by double-clicking (shown below) or in the comment display for flexibility and consistency.
When the user hovers over pinned comments in the comment panel, the corresponding pin will highlight, so the user need not remember the correspondences (recommended by heuristic evaluation). Likewise, the pinned comment in the panel will highlight when the user hovers over the pin.
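The two-way highlighting could be implemented along these lines, assuming that each pin and each panel comment share a data-comment-id attribute (an assumption, not necessarily our real markup).

// Hypothetical sketch: hovering a comment highlights its pin, and vice versa.
function pinFor($comment) {
  return $('.pin[data-comment-id="' + $comment.data("comment-id") + '"]');
}
function commentFor($pin) {
  return $('.comment[data-comment-id="' + $pin.data("comment-id") + '"]');
}

$("#comment-panel").on("mouseenter", ".comment", function () {
  pinFor($(this)).addClass("highlighted");
}).on("mouseleave", ".comment", function () {
  pinFor($(this)).removeClass("highlighted");
});

$("#art-image").on("mouseenter", ".pin", function () {
  commentFor($(this)).addClass("highlighted");
}).on("mouseleave", ".pin", function () {
  commentFor($(this)).removeClass("highlighted");
});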
Feedback is automatically visible to the artist, but can be edited at any time. Originally we made users “Push” or “Post” comments to the artist, but in testing they found the automatic model more efficient and less confusing.
The artist can arrive at this interface through the direct link provided at the end of the upload flow. The comments and an average rating appear on the artist’s panel, which is designed as a static version of the reviewer interface, without the capability to add comments.
We allow the artist to filter the comments on his/her art using toggle buttons, as seen on the right. While our original design used tabs to allow users to filter the comments by group, we received feedback during paper prototyping that the tabbed interface was confusing and inconsistent with the toggle buttons that we used for the tags. Users in subsequent stages of testing felt that the filters were easy to understand and use.
Our interface was implemented in JavaScript and HTML. We used the Bootstrap framework for layout and styling, and jQuery for ease of implementation. Drag-and-drop features (i.e. reviewer group set-up and pin manipulation) were implemented using the jQuery UI library.
In general, implementing the interface was straightforward. We did run into some limitations, however, when it came to manipulating the work of art. We originally wanted to allow reviewers to magnify sections of the art, but this proved difficult to implement. We were also unable to go back and make pin positions relative so that the interface could handle window resizes. Proper alignment of draggable objects also proved challenging.
Our server is implemented in NodeJS. Its primary purpose is to store the data of various users while their sessions are ongoing and to share that data between the people seeking feedback and the people giving feedback. All of the user data is stored in a JSON object, except for the uploaded art file, which is stored on the regular file system. The JSON data store is periodically written to the file system as well, to provide some degree of fault tolerance. Static files are served using a NodeJS module, rather than a heavier approach like Apache.
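A condensed sketch of this data-handling core is shown below; the file names, the flush interval, and the routing stub are assumptions rather than our exact code.

// Hypothetical sketch of the server's in-memory JSON store (Node.js).
var http = require("http");
var fs = require("fs");

var store = {};                                    // all session data lives in this JSON object
try { store = JSON.parse(fs.readFileSync("store.json", "utf8")); } catch (e) {}

// Periodically flush the in-memory store to disk for fault tolerance.
setInterval(function () {
  fs.writeFile("store.json", JSON.stringify(store), function (err) {
    if (err) console.error("Failed to persist store:", err);
  });
}, 30 * 1000);

http.createServer(function (req, res) {
  // Real routing (uploads, comment updates, static files) is omitted here;
  // uploaded images are written straight to the file system, not into `store`.
  res.writeHead(200, { "Content-Type": "application/json" });
  res.end(JSON.stringify(store));
}).listen(3000);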
Our account model is very simple. It is assumed that reviewers and artists who have the direct URL for a piece of art have permission to view it. A direct URL to an uploaded work contains the work’s title and the artist’s name (which are assumed to be a unique combination) as parameters in the URL. Once a reviewer logs in through this URL, the corresponding image and comments are retrieved from the server to populate the reviewer’s review page. Similarly, when an artist logs in through a direct URL, the appropriate art image, comments, and filters are retrieved from the server to populate the artist review page.
While this approach limits the security of our application and makes it open to the possibility of unrelated users guessing the link to a given piece of art, we decided that our design should be simple, efficient and unrestrictive. A more secure design may use some sort of encryption for the art title and artist name, such that an individual who knows the title and the name cannot easily guess the correct URL.
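The direct-URL lookup might look something like this on the server side; the /view route, the query-parameter names, and the shape of the stored records are assumptions for illustration.

// Hypothetical sketch: resolve a direct URL like
//   /view?artist=Jane%20Doe&title=Sunset%20Study
// to the stored work, its comments, and the image path.
var url = require("url");

var store = {};                                    // stands in for the in-memory JSON store above

function handleView(req, res) {
  var query = url.parse(req.url, true).query;
  var key = query.artist + "/" + query.title;      // assumed to be a unique combination
  var work = store[key];

  if (!work) {
    res.writeHead(404);
    return res.end("Unknown artist/title combination");
  }
  res.writeHead(200, { "Content-Type": "application/json" });
  res.end(JSON.stringify({
    image: work.imagePath,                          // the image itself lives on the file system
    comments: work.comments,
    rating: work.averageRating
  }));
}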
Our filtering model is implemented through the use of classes on each comment. When a comment is tagged or created by a specific group, the name of the tag or group is added as a class to the comment, allowing us to efficiently and easily change the visibility of relevant comments using the filter buttons.
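A sketch of those filter toggles; the button markup, data attributes, and class names are assumed.

// Hypothetical sketch: each comment carries its tag and group names as CSS
// classes (e.g. <div class="comment mentors composition">), so filtering is
// just toggling visibility on a class selector.
$(".filter-button").on("click", function () {
  $(this).toggleClass("active");

  var active = $(".filter-button.active").map(function () {
    return $(this).data("filter");                 // e.g. "mentors" or "composition"
  }).get();

  if (active.length === 0) {
    $(".comment").show();                          // no filters selected: show everything
    return;
  }
  $(".comment").hide();
  active.forEach(function (name) {
    $(".comment." + name).show();
  });
});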
All actions that generate, edit, or destroy data visible to other users of the system, or to that same user later, are saved to the application server’s data store. This means that this data is immediately available to all other users of the application. In the future, we would plan to implement live updating on the review and reviewer pages, so that updates to things like comments, scores, and annotations would show up without needing to refresh the page.
Users were recruited through personal contacts and are representative of our target populations.
We decided not to give a demo, so as not to bias the learnability of the interface beyond the briefing and tasks.
Artists' briefing
"You are requesting feedback about a digitized visual art work that you have been working on. You use ArtBark to collect tangible, actionable feedback to improve your art piece. This comprises two stages that are separated in time: 1) uploading the work and requesting the customized feedback from specific individuals and 2) browsing through the feedback after it has been received."
Artists' tasks
Commenters' briefing
"Your feedback about a visual art work is being solicited. You want to help the requester improve the art piece you are about to see."
Commenters' tasks:
User 1: Director of Artist’s Resource Center and professor at the School of the Museum of Modern Arts in Boston.
User 2: An artist who mostly draws digitally (on an iPad).
Artist setup
Reviewer
Artist review
User 3: Advanced digital photographer (very computer/tech savvy); tested in Chrome on a MacBook
Setup
Reviewers
Artist view:
User 4: RISD-educated Digital Media Artist and Painter with a programming background
Set-up:
Reviewer Interface:
Artist Interface:
From users in the class (studio)
General (both artist and commenter views):
Setup
Artist Interface
Reviewer Interface
The various stages of prototyping that we performed were very useful for arriving at our final design. We removed many features and introduced others based on user feedback. We encouraged our prototype testers to think aloud and give us any feedback that came to mind, which resulted in widely varying comments that were very useful for improving all aspects of ArtBark. Overall, we learned that multiple stages of prototyping are extremely useful for testing features.
One of the hindrances in our design process was the difficulty of prototyping our interface, particularly on paper. ArtBark is intended to be a highly interactive application, with features such as drag-and-drop, hover effects, and filtering. We found that such features were difficult to represent accurately and efficiently in a paper prototype. During evaluation sessions for our paper prototype, we often had to explain to our testers the various hover, click, and drag effects that were possible in our interface. While all of our users understood our explanations and were able to try out each effect, showing each interaction using slips of paper was slow and clumsy. Such an approach did not allow our testers to freely explore our interface as a new user would. While paper prototyping was still useful, we feel that ArtBark may have benefited from another, longer round of computer prototyping, where we would have been able to better represent the possible interactions with our interface.
We feel that our approach for incorporating feedback was very effective. We received a lot of feedback at regular intervals throughout our design process, so we regularly compiled it into a list and triaged each item. This way, we were able to implement the most impactful and time-efficient changes first; by the same token, we were also able to cut the less important, more time-consuming features. An indication of our success in incorporating feedback is that testers in subsequent stages of user testing rarely raised complaints about the same set of features.
We found the iterative design process to be very useful. We were able to continually update and improve ArtBark based on user feedback, and see that the results of our updates were well-received by our testers. It was very rewarding to look back on our first GR assignments and see how much our app had improved.