Are We Failing the M-CHAT? Self-Assessment in a Diverse Community Sample
Objectives: To investigate the implementation fidelity of autism screening with the Modified Checklist for Autism in Toddlers with Follow-Up (M-CHAT/F) and/or its revision (M-CHAT-R/F) conducted in the primary care clinics of a large urban hospital.
Methods: Electronic health records for pediatric well-child visits at 18 and 24/30 months during a one-month study period were manually reviewed to extract autism screening implementation parameters.
Results: The review yielded a sample of 281 eligible clinic visits serving children who were majority male (60.9%) and racially and ethnically diverse (42.7% African American, 31.7% Hispanic, 10.3% White). Primary care providers documented a positive M-CHAT screen in 4.3% of visits; in contrast, re-scoring of the parent-completed M-CHATs yielded a positive screen in 13.7% of visits (based on both critical-item and total-score approaches). No visit documented use of the structured M-CHAT follow-up interview or any component of the M-CHAT-R/F. Providers' sensitivity with the M-CHAT was 0.214 (identifying 6 of 28 positive screens). Providers documented a referral for early intervention or evaluation services in 50% of cases (6 of 12) in which a positive screen was identified in clinic; of children who screened positive on re-scoring of the parent M-CHAT, only 14.3% (4 of 28) were referred.
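The detection and referral figures above follow from simple proportions over the reported counts; a minimal Python sketch (using only the counts stated in this abstract, with illustrative variable names) reproduces the arithmetic:

```python
# Counts reported in the abstract (illustrative re-computation only).
rescored_positive = 28      # positive screens on re-scoring of parent M-CHATs
provider_identified = 6     # of those, also flagged as positive by providers
provider_positive = 12      # all visits providers documented as positive screens
referred_of_provider = 6    # referrals among provider-identified positives
referred_of_rescored = 4    # referrals among all re-scored positives

sensitivity = provider_identified / rescored_positive
referral_rate_provider = referred_of_provider / provider_positive
referral_rate_rescored = referred_of_rescored / rescored_positive

print(f"Provider sensitivity: {sensitivity:.3f}")                            # 0.214
print(f"Referral rate (provider-identified): {referral_rate_provider:.1%}")  # 50.0%
print(f"Referral rate (all re-scored positives): {referral_rate_rescored:.1%}")  # 14.3%
```

Treating the re-scored parent M-CHATs as the reference standard is what allows sensitivity to be computed at the provider level rather than at the instrument level.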
Conclusions: Manual chart review allowed for direct evaluation of the clinical implementation and interpretation of the M-CHAT in a large urban hospital. Despite routine administration of the M-CHAT at 18 and 24/30 months, providers failed to identify over 75% of children who screened positive on the measure. Pediatrician surveillance (i.e., clinical judgment without the aid of standardized tools) is well known to identify only 20-30% of children with developmental delays, and employing a standardized tool incorrectly does not appear to add incremental value in clinical practice. This pilot project demonstrates the viability of quantifying implementation and interpretation fidelity for autism screening. Efforts are underway to use this methodology to monitor quality improvement activities focused on provider education and training, as well as systems-level changes to facilitate standardized autism screening. Future studies are needed to determine the extent to which other hospitals that use measures like the M-CHAT fail to monitor implementation fidelity, and how such monitoring improves functional adherence to AAP guidelines. Research with developmental screening tools may also need to routinely report implementation fidelity data to better characterize results in community samples, given the consistent (and even explicit) omission of the structured follow-up interview in recent publications.