User-Centered Design for Research Data Management Software: Evaluating and Improving Designs By Testing with Users (Early and Often)

Friday, May 13, 2016: 10:00 AM-1:30 PM
Hall A (Baltimore Convention Center)
H. Agnew1, J. A. Salim2, T. Schantz1 and L. Rozenblit1, (1)Prometheus Research, LLC, New Haven, CT, (2)Marcus Autism Center, Atlanta, GA
Background: The Marcus Autism Center uses RexDB, an open-source platform for data management, to collect, organize, and analyze research data. The center sought to tailor its RexDB application to support specific needs for participant enrollment and visit tracking.

Objectives: The design team identified several constraints: (1) a short development schedule, (2) a modest budget, and (3) an established technology framework. We sought to design a technically feasible user interface that was demonstrably (a) useful (exclusive of unnecessary features, and inclusive of all necessary features), (b) usable (as measured by efficiency, learnability, memorability, error prevalence, and user satisfaction) [Nielsen, J. (2012) Usability 101: Introduction to Usability. http://www.nngroup.com/], and (c) appealing to users.

We aimed to build confidence in the design's likelihood of success by testing prototypes before beginning the more costly and less flexible programming phase.

Methods:

After defining the client's needs, we brainstormed potential screen layouts and interaction patterns. We generated several alternative designs using inexpensive paper drawings, selecting the most promising based on their technical feasibility and similarity to previously validated design patterns. We converted the selected designs into digital prototypes.

We then conducted user testing [Nielsen, J. (1994) Usability inspection methods. Conference Companion on Human Factors in Computing Systems, pp. 413-414.], in which prospective users completed realistic tasks with the prototypes. We observed their interactions and asked questions to better understand their mindset and expectations (e.g., "What do you think you can do here?"; "What would you do next?"). We noted the frequency and cause of user errors.

At the end of each session, we asked:

  1. How useful is this workflow to you?

  2. Is there anything missing?

  3. Is there anything here that you don't need?

  4. How would you rate the application's ease of use?

Following each round, we updated our design and repeated the testing process until we believed the design was strong enough to move to the programming phase.

Results:

After programming and launching the interface, we measured perceptions of the application's usefulness, usability, and appeal ("look and feel"). We asked 52 users to rate the following three aspects on a 0 (worst) to 10 (best) scale (average responses shown):

1. Available features (Does it have the features you need?): 7.5 / 10

2. Ease of use: 7.7 / 10

3. Look and feel: 8.6 / 10

Conclusions:

By iterating and testing with users, we designed a useful and usable interface. The application was successfully launched and adopted without the need for a costly redesign.

The final design was substantially different from the initial version. For example, some screens were removed entirely because testing showed they were unnecessary, and the application's navigation was redesigned from scratch.

Teams designing complex web applications for domain-specific uses, such as autism research data management, may benefit from the user testing approach described here.