
Can Internal Metrics of Reporting Bias Enhance Early Screening Measures?

Thursday, 2 May 2013: 15:15
Chamber Hall (Kursaal Centre)
C. M. Taylor1, A. Vehorn2, H. Noble2, J. A. Crittendon1, W. A. Loring2, C. R. Newsom3, A. Nicholson2 and Z. Warren1, (1)Vanderbilt Kennedy Center, Nashville, TN, (2)Vanderbilt University, Nashville, TN, (3)Pediatrics, Psychiatry, & Psychology, Vanderbilt University, Nashville, TN
Background: The use of screening tools to identify children at risk for an Autism Spectrum Disorder (ASD) is recommended by numerous practice organizations, including the American Academy of Pediatrics (Johnson & Myers, 2007). The Modified Checklist for Autism in Toddlers [M-CHAT] (Robins et al., 2001) is one of the most widely used screeners. Despite wide use and growing data suggesting that this instrument captures many children at risk for ASD, concerns remain regarding the sensitivity and specificity of the M-CHAT in certain contexts (Zwaigenbaum, 2011). Specifically, parent-reported scores on ASD instruments at later ages have been shown to relate to non-specific behavioral concerns (Warren et al., 2012) and parenting stress (Weitlauf et al., 2012). Thus, the psychometric value of ASD screening instruments may be affected by reporting characteristics or response biases. If so, internal metrics indicating potential reporting concerns (e.g., validity questions that flag over- or under-reporting patterns), already used on many self-report instruments (e.g., the MMPI), may aid early screening initiatives.

Objectives: The goal of this study was to examine the potential value of validity questions (divided into faking-good/under-reporting and faking-bad/over-reporting categories) for improving accurate identification of children at risk for ASD with the M-CHAT.

Methods: Participants were caregivers of children (n=145), 36 months of age or younger, seen for first-time diagnostic appointments at our clinical research center. Caregivers were asked to complete an M-CHAT as well as an additional questionnaire containing six response-pattern/validity questions. The validity questions, drawn from other parent-report screening instruments, included items that most parents answer in the same way regardless of child diagnosis. Clinical diagnosis was made by a research-reliable, licensed clinician with a specialty focus in autism. Diagnostic assessment included a clinical interview, cognitive assessment, adaptive behavior assessment, and information from the Autism Diagnostic Observation Schedule (ADOS; Lord et al., 2000). As a result of the evaluation, eighty-six children were diagnosed with an ASD and fifty-nine children were not given an ASD diagnosis.

Results: Eighteen children were identified as typically developing, all of whom passed the M-CHAT. Of the eighty-six children diagnosed with an ASD, 14% (n=12) passed the M-CHAT (false negatives). Forty-one children received a developmental diagnosis other than ASD; 63% (n=26) of these children failed the M-CHAT (false positives). When faking-good questions were taken into account, false negatives within the ASD sample decreased by 50%. When faking-bad questions were taken into account, false positives within the other-developmental-diagnosis sample decreased by 34%.
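The reported rates follow from simple arithmetic on the sample counts above. A minimal sketch (in Python, purely illustrative): the adjusted counts are derived here from the reported percentage reductions and are not stated directly in the abstract.

```python
# Counts reported in the Results section.
asd_n = 86        # children diagnosed with an ASD
false_neg = 12    # ASD children who passed the M-CHAT
other_dx_n = 41   # children with a non-ASD developmental diagnosis
false_pos = 26    # of those, children who failed the M-CHAT

# Baseline error rates (match the reported 14% and 63%).
fn_rate = false_neg / asd_n
fp_rate = false_pos / other_dx_n
print(f"False-negative rate: {fn_rate:.0%}")  # 14%
print(f"False-positive rate: {fp_rate:.0%}")  # 63%

# Effect of validity items as reported: faking-good items halved
# false negatives; faking-bad items cut false positives by 34%.
fn_adjusted = false_neg * (1 - 0.50)
fp_adjusted = false_pos * (1 - 0.34)
print(f"Adjusted false negatives: {fn_adjusted:.0f}")  # 6
print(f"Adjusted false positives: {fp_adjusted:.0f}")  # 17
```

The adjusted figures imply roughly 6 remaining false negatives and about 17 remaining false positives, under the assumption that the percentage reductions apply directly to the baseline counts.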

Conclusions: The addition of validity items significantly decreased false positives and false negatives in toddlers undergoing assessment for autism. While screeners are useful for identifying children who may have autism, concerns remain about the sensitivity and specificity of these measures. Future work should consider adding validity questions to ASD screeners to identify parents who may be over- or under-reporting symptoms.
