Design and Efficacy of a Wearable Device for Social Affective Learning in Children with Autism

Saturday, May 13, 2017: 12:00 PM-1:40 PM
Golden Gate Ballroom (Marriott Marquis Hotel)
N. Haber1, A. Kline2, C. Voss1, J. Daniels1, P. Washington1, A. Fazel1, T. De1, C. Feinstein1, T. Winograd3 and D. Wall4, (1)Stanford University, Stanford, CA, (2)Pediatrics, Stanford University, Stanford, CA, (3)Computer Science, Stanford University, Stanford, CA, (4)Stanford University, Palo Alto, CA
Background: Children with autism struggle to recognize facial expressions, make eye contact, and engage in social interactions. The best-known intervention, applied behavior analysis, teaches these skills in a clinician’s office, removed from the settings where they will actually be used, and relies on artificial tools such as flashcards. Its delivery is increasingly bottlenecked as the number of available therapists lags well behind the number of children in need of care.

Objectives: We have developed a tool for automatic facial expression recognition that runs on smart glasses and delivers social cues to people with autism. The system uses the glasses’ outward-facing camera to capture the faces of the wearer’s conversation partners, streams the video to an Android app that performs machine learning-based emotion classification, and returns real-time social cues to the child wearing the device. Through an in-lab pilot and an at-home design trial, we sought to refine both the interaction experience and the outcome measures that can later be used to track progress in a more controlled trial.
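
As a rough illustration only, the Python sketch below shows the general shape of such a pipeline: frames from an outward-facing camera are scanned for faces, each detected face is passed to an emotion classifier, and the resulting label is surfaced as a cue. The face detector here is an off-the-shelf OpenCV cascade, and classify_emotion and deliver_cue are hypothetical placeholders; this is not the system described above, which runs on smart glasses paired with an Android app.

```python
# Hypothetical sketch of a frame-by-frame expression-recognition loop.
# It only illustrates the camera -> face detection -> emotion
# classification -> cue flow; it is not the device's actual pipeline.
import cv2

# Off-the-shelf OpenCV face detector as a stand-in for the real detector.
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def classify_emotion(face_image):
    """Placeholder for a trained emotion classifier (e.g., a CNN)."""
    return "happy"  # stub label

def deliver_cue(label):
    """Placeholder for the heads-up display / audio feedback channel."""
    print(f"Cue: {label}")

capture = cv2.VideoCapture(0)  # stand-in for the outward-facing camera
while capture.isOpened():
    ok, frame = capture.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_detector.detectMultiScale(gray, 1.3, 5):
        label = classify_emotion(frame[y:y + h, x:x + w])
        deliver_cue(label)
capture.release()
```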

Methods:

For the in-lab pilot, we tested an interface mockup with 20 participants with autism and 20 controls. Each participant was fitted with the mockup and a custom-built head-mounted pupil tracker while sitting in front of a computer screen. The screen showed faces for 6 seconds each, alongside two alternating standardized non-social “distractor” images. Participants attempted to identify the emotion of the faces on the screen first without emotion feedback, then with feedback provided via the unit’s heads-up display and/or audio system, and finally without feedback again.
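
Purely for illustration, the three-block session structure just described (feedback enabled only in the middle block) could be encoded as in the sketch below; all names and values are hypothetical rather than taken from the study protocol.

```python
# Illustrative encoding of the in-lab session: three blocks of
# face-identification trials, with feedback only in the middle block.
from dataclasses import dataclass

@dataclass
class Block:
    name: str
    feedback: bool           # heads-up display / audio cues on or off
    stimulus_seconds: float  # how long each face is shown

SESSION = [
    Block("baseline", feedback=False, stimulus_seconds=6.0),
    Block("feedback", feedback=True, stimulus_seconds=6.0),
    Block("post", feedback=False, stimulus_seconds=6.0),
]

for block in SESSION:
    state = "on" if block.feedback else "off"
    print(f"{block.name}: feedback {state}, {block.stimulus_seconds}s per face")
```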

For the at-home design trial, we worked exclusively with children with ASD. We asked 14 families to take the working prototype home and use it for at least 20 minutes, at least 3 times per week. We tracked behavioral progression through the continuously gathered device data and the Social Responsiveness Scale (SRS), a parent-report measure.

Results:

In-lab results showed that children adapted quickly to wearing the device, and audio feedback shortened the learning curve. Both groups showed improvement in the second and third blocks. Preliminary qualitative analysis of the eye-tracking data collected in this study was consistent with the established finding that children with autism direct their gaze toward the mouth rather than the eyes when looking at faces.

For the at-home trial, six participants moved from one SRS severity range to a less severe one (4 from “severe” to “moderate”, 1 from “moderate” to “mild”, and 1 from “mild” to “normal”). The mean total SRS score (higher scores indicate greater ASD severity) at the on-boarding sessions was 79.36 (SD=34.92), while the mean total SRS score at the conclusion of the trial was 72.69 (SD=10.67). This preliminary exploration produced over 9,000 minutes of social video and sensor data, the largest dataset of its kind.

Conclusions: These trials provided valuable data on the design and use of such a tool. Together, the results leave us with a therapeutic tool and a set of strong hypotheses about its efficacy that can now be tested in a more controlled trial.