Individuals with Autism Spectrum Disorders (ASD) frequently engage in stereotypical motor movements, yet these behaviors remain poorly understood. One reason stereotypical motor movements may be understudied is that efficient and accurate tools for measuring them are not available to the research community.
We previously demonstrated that wireless 3-axis accelerometers and pattern recognition algorithms could detect stereotypical hand flapping and body rocking with high accuracy (.90) in six children with ASD in both laboratory and classroom settings (Goodwin, Intille, Albinali, & Velicer, 2010). While promising, this work relied on offline video annotations by clinical experts to train the recognition algorithms. In real-world environments, expert annotators are rarely available. Thus, in the current work, we sought to assess how well a non-expert could use software on a mobile phone to annotate stereotypical motor movements in real time for classifier training, and to evaluate the impact these annotations have on algorithm performance.
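As a point of reference for readers unfamiliar with this kind of pipeline, the sketch below shows one common way accelerometer streams are turned into classifier training data: sliding windows of 3-axis samples are reduced to simple per-axis statistics. The window size, step, and feature set here are illustrative assumptions, not details taken from the study.

```python
# Hedged sketch of windowed feature extraction for 3-axis accelerometer
# data. Window length, step size, and the mean/std feature set are
# assumptions for illustration; the study's actual features may differ.
import statistics

def window_features(samples, window=90, step=45):
    """samples: list of (x, y, z) tuples.
    Returns one feature vector (per-axis mean and standard deviation)
    per sliding window."""
    feats = []
    for start in range(0, len(samples) - window + 1, step):
        win = samples[start:start + window]
        vec = []
        for axis in range(3):
            vals = [s[axis] for s in win]
            vec.append(statistics.fmean(vals))   # per-axis mean
            vec.append(statistics.pstdev(vals))  # per-axis std dev
        feats.append(vec)
    return feats
```

Each feature vector would then be paired with the annotation label (e.g., "flapping" or "no stereotypy") covering that window to form a training example.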
We ran four 30-minute data collection sessions alternating between the laboratory and classroom (Lab1, Class1, Lab2, Class2) with one of the participants from Goodwin et al. (2010). We collected data in both laboratory and classroom settings to determine annotation accuracy and recognition performance across both constrained and real-world environments. During these sessions, one expert annotator (a trained behavioral scientist familiar with the child) and one non-expert annotator (a teacher not familiar with the child) simultaneously coded the start time, end time, and type of stereotypical motor movement using a Windows Mobile phone running custom annotation software. Three behaviors were coded: flapping, rocking, and simultaneous flapping and rocking. Pressing a button once on the phone marked the start of the corresponding movement; pressing the button a second time marked its end.
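The press-to-start, press-again-to-stop labeling scheme described above can be sketched as a small toggle-based event logger. This is a minimal illustration of the interaction model, not the actual annotation software; the class and method names are hypothetical.

```python
# Minimal sketch of the toggle annotation scheme: first button press
# records a movement onset, second press records its offset. Names and
# structure are illustrative assumptions, not the study's software.
import time

BEHAVIORS = ("flapping", "rocking", "flap-rock")

class AnnotationLogger:
    def __init__(self, clock=time.time):
        self._clock = clock  # injectable clock for testing
        self._open = {}      # behavior -> onset timestamp, while in progress
        self.events = []     # completed (behavior, start, end) records

    def press(self, behavior):
        """Toggle a behavior label: open an interval if none is in
        progress for this behavior, otherwise close it."""
        now = self._clock()
        if behavior in self._open:
            start = self._open.pop(behavior)
            self.events.append((behavior, start, now))
        else:
            self._open[behavior] = now
```

A session's completed events would then supply the start/end timestamps used to label the accelerometer stream for classifier training.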
Overall accuracy (A), precision (P), and recall (R) for the algorithm across all sessions when training and testing with expert annotations (A: .82, P: .80, R: .79) and non-expert annotations (A: .78, P: .79, R: .76) were relatively high. Accuracy achieved in the individual sessions for both expert (Lab1: .85, Lab2: .89, Class1: .82, Class2: .74) and non-expert annotations (Lab1: .79, Lab2: .86, Class1: .81, Class2: .69) was also relatively high. Compared to the expert, the non-expert annotator had an average delay of 3.25 seconds in labeling movement onsets and 1.23 seconds in labeling movement offsets. The frequency of delays at onset (5.25) was much lower than the frequency of delays at offset (9.25). However, this did not seem to impact performance of the activity recognition algorithm.
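For clarity, the accuracy, precision, and recall figures above can be computed from per-interval binary labels (1 = stereotypy present), treating the expert annotations as ground truth. The helper below is an illustrative reconstruction of these standard definitions, not code from the study.

```python
# Standard accuracy/precision/recall from aligned binary label sequences,
# with expert labels treated as ground truth. Illustrative only.
def accuracy_precision_recall(truth, pred):
    """truth, pred: equal-length sequences of 0/1 labels."""
    tp = sum(t and p for t, p in zip(truth, pred))            # true positives
    fp = sum((not t) and p for t, p in zip(truth, pred))      # false positives
    fn = sum(t and (not p) for t, p in zip(truth, pred))      # false negatives
    tn = sum((not t) and (not p) for t, p in zip(truth, pred))  # true negatives
    acc = (tp + tn) / len(truth)
    prec = tp / (tp + fp) if (tp + fp) else 0.0
    rec = tp / (tp + fn) if (tp + fn) else 0.0
    return acc, prec, rec
```

Onset and offset delays, by the same logic, are simply the differences between the non-expert's and the expert's timestamps for matched events.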
Our preliminary results suggest that a non-expert conducting real-time annotation using mobile phones is a viable approach for gathering person-dependent data needed to train accurate pattern recognition algorithms in this domain. Enabling non-experts to easily train pattern recognition algorithms capable of automatically detecting stereotypical motor movements could advance autism research and enable new intervention tools for the classroom that help children and their caregivers monitor, understand, and cope with this potentially problematic class of behavior.