21391
Wearable Devices for Reading Facial Expression and Detecting Face-to-Face Behavior of Children with ASD

Thursday, May 12, 2016: 11:30 AM-1:30 PM
Hall A (Baltimore Convention Center)
K. Suzuki, Center for Cybernics Research, University of Tsukuba, Tsukuba, Japan
Background:

Social imaging technologies that identify and represent social behaviors are introduced. We have previously reported several wearable devices that measure interactions among people, e.g., physical touch (Iida et al., 2011) and group dynamics (Miura et al., 2013). In this study, we introduce a wearable device that detects smiles from biosignals and report a feasibility study with children with ASD. Monitoring signals from facial muscles can be a way to measure emotional information (Niedenthal et al., 2009), but children with ASD may show limited observable expression even when they are enjoying an activity (Welch, 2012). A preliminary study demonstrated that, as measured by our wearable device, the positive social behaviors of a child with ASD increased and the negative social behaviors decreased as smiles increased during an Animal-Assisted Activities (AAA) session (Funahashi et al., JADD 2013).

Objectives:

We proposed a wearable device (Gruebler et al., 2014) that detects the smiles of children with ASD from electromyographic (EMG) signals recorded at the side of the face. We conducted a two-year study to analyze the children's facial expressions and to compare the device's measurements with a specialized medical examiner's evaluation of the interventions while the children took part in AAA sessions with small dogs.

Methods:

The device is compact enough for children to wear for long periods and allows them to move freely in their environment. Independent component analysis (ICA) and an artificial neural network (ANN) are used to classify the signals from different muscle groups of the face. Thirteen children with ASD (10 boys, 3 girls; mean age = 12.8) and 8 typically developing children (3 boys, 5 girls; mean age = 12.3) participated voluntarily. Four children with ASD did not want to put on the wearable device during the sessions, but the remaining 9 children with ASD and all 8 control children wore it without difficulty. Suitable EMG segments from typical smiles and from the baseline were selected as the training set used to perform ICA and to train the ANN, as sketched below. The change in the total duration of each child's smiles was analyzed for both groups.
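A minimal sketch of an ICA-plus-ANN smile classifier of this kind is shown below. The channel count, sampling rate, window length, RMS features, and network size are assumptions for illustration, not the authors' exact pipeline, and the data and labels are placeholders.

# Hypothetical sketch: ICA + ANN classification of facial EMG into smile / non-smile.
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.neural_network import MLPClassifier

def rms_features(signal, fs=1000, win_s=0.2):
    """Root-mean-square per channel over non-overlapping windows (assumed feature)."""
    win = int(fs * win_s)
    n = (signal.shape[0] // win) * win
    segs = signal[:n].reshape(-1, win, signal.shape[1])
    return np.sqrt((segs ** 2).mean(axis=1))

emg_train = np.random.randn(10000, 4)                # placeholder EMG, (samples, channels)
ica = FastICA(n_components=4, random_state=0)
sources = ica.fit_transform(emg_train)               # separate sources from different muscle groups
X = rms_features(sources)                            # one feature vector per window
y = np.random.randint(0, 2, len(X))                  # placeholder labels: 1 = smile, 0 = baseline
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
clf.fit(X, y)
# New recordings are unmixed with the same fitted ICA and classified window by window.

In practice the ICA unmixing matrix and the trained network would be fitted per participant, matching the per-participant training sets described above.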

Results:

A training set was prepared for each participant, and the average detection accuracy was 99.84% (SD = 0.22) for smiling and 99.21% (SD = 0.96) for non-smiling. We found a positive correlation between the medical examiner (ME) and the device for threshold levels between 0.65 and 0.95 in all participants across 4 successive sessions; the average number of smiles of each group was also examined.
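The sketch below illustrates one way such a threshold sweep could be computed: per-session smile counts from the device are correlated with the ME's counts for each candidate threshold. The per-window probabilities, the ME counts, and the use of Pearson correlation are placeholders and assumptions, not the study's reported procedure.

# Hypothetical sketch: device-vs-examiner correlation across detection thresholds.
import numpy as np
from scipy.stats import pearsonr

ann_prob = [np.random.rand(600) for _ in range(4)]   # placeholder per-window smile probabilities, 4 sessions
me_counts = np.array([12, 18, 22, 25])               # placeholder examiner-coded smile counts per session

for thr in np.arange(0.65, 0.96, 0.05):
    device_counts = np.array([(p >= thr).sum() for p in ann_prob])
    r, _ = pearsonr(device_counts, me_counts)
    print(f"threshold {thr:.2f}: r = {r:+.2f}")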

Conclusions:

Our findings are consistent with our previous finding from a single child with ASD under the same experimental conditions. The wearable device can also be used to code facial expressions during other interactions, such as face-to-face interaction, and to quantify people's facial expressions on the go, regardless of context. We have also developed a wearable device with an IR sensor that detects the timing and duration of face-to-face behavior based on children's head orientations. Further investigations include the analysis of spontaneous smiles and of behaviors such as touching, reaching, and face-to-face interaction.
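As a rough illustration of how IR-based face-to-face detection could be summarized, the sketch below turns timestamped mutual detections from two worn devices into face-to-face episodes and a total duration. The event format, the mutual-detection rule, and the gap-merging parameter are assumptions, not the device's actual processing.

# Hypothetical sketch: face-to-face episodes from mutual IR detections.
def face_to_face_episodes(hits_a, hits_b, max_gap=1.0):
    """hits_a, hits_b: sorted timestamps (s) at which each device sensed the other.
    Returns (start, end) intervals during which both devices detected each other,
    merging detections separated by less than max_gap seconds."""
    def to_intervals(hits):
        out = []
        for t in hits:
            if out and t - out[-1][1] <= max_gap:
                out[-1][1] = t
            else:
                out.append([t, t])
        return out

    episodes = []
    for a0, a1 in to_intervals(hits_a):
        for b0, b1 in to_intervals(hits_b):
            lo, hi = max(a0, b0), min(a1, b1)
            if lo < hi:
                episodes.append((lo, hi))
    return episodes

# Total face-to-face duration:
# sum(end - start for start, end in face_to_face_episodes(hits_a, hits_b))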