Scaling up Fidelity Measurement of Autism Interventions in Schools

Friday, May 13, 2016: 10:55 AM
Room 310 (Baltimore Convention Center)
M. Pellecchia1, M. Seidman1 and D. S. Mandell2, (1)University of Pennsylvania, Philadelphia, PA, (2)University of Pennsylvania School of Medicine, Philadelphia, PA
Background: Most evidence-based interventions for children with autism were developed in highly controlled laboratory settings, as were the fidelity measures that accompany them. A growing body of research describes the importance of determining the best ways to scale up evidence-based practices in communities so that they are effective and sustained; less attention has been paid to the challenges and opportunities in measuring intervention fidelity on a large scale. Inexpensive, regular fidelity measurement is an important component of any effort to test intervention effectiveness or to engage in quality improvement and assurance efforts.

Objectives: To present scalable strategies for measuring common components of behavioral interventions for children with autism; to describe direct observation fidelity measures used to assess implementation fidelity on a large scale; and to discuss methods for improving fidelity measurement and barriers to measuring implementation fidelity in real-world settings.

Methods: Direct observation fidelity measures were developed to assess implementation fidelity to common components of comprehensive behavioral treatment packages for children with autism (discrete trial training, pivotal response training, visual schedules, positive reinforcement, and data collection) within a large community-based randomized trial. Implementation fidelity for each behavioral component was measured bimonthly by direct observation in 73 classrooms over an academic year by trained research assistants. Observers were trained to 90% reliability with a master coder, with continued field-based reliability checks throughout the year. Inter-rater reliability data also were collected at least three times during the year for each observer as an additional validity check.
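The abstract does not specify the agreement statistic behind the 90% reliability criterion. As a minimal sketch, assuming observers score a binary per-step fidelity checklist (component step implemented or not), reliability could be computed as percent agreement against the master coder, with Cohen's kappa as a chance-corrected check; the checklist data below are hypothetical.

```python
# Illustrative sketch only: assumes binary checklist codes (1 = step
# implemented, 0 = not implemented) from two observers on the same session.

def percent_agreement(coder_a, coder_b):
    """Proportion of checklist items on which the two observers agree."""
    assert len(coder_a) == len(coder_b), "coders must score the same items"
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)

def cohens_kappa(coder_a, coder_b):
    """Chance-corrected agreement for two binary coders."""
    n = len(coder_a)
    p_o = percent_agreement(coder_a, coder_b)       # observed agreement
    p_a1 = sum(coder_a) / n                         # coder A's rate of 1s
    p_b1 = sum(coder_b) / n                         # coder B's rate of 1s
    p_e = p_a1 * p_b1 + (1 - p_a1) * (1 - p_b1)     # agreement expected by chance
    return (p_o - p_e) / (1 - p_e) if p_e < 1 else 1.0

# Hypothetical 10-item checklist scored by a trainee and a master coder.
trainee = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1]
master  = [1, 1, 0, 1, 1, 1, 1, 1, 1, 1]

print(f"Percent agreement: {percent_agreement(trainee, master):.0%}")  # 90%
print(f"Cohen's kappa: {cohens_kappa(trainee, master):.2f}")           # 0.62
```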

Results: Data collection and analyses are ongoing. Research assistants achieved and maintained reliability estimates of at least 90% following a half-day training and practice coding videos. Qualitative comparisons with more intensive fidelity measures indicate that reliability was achieved faster using these measures. Field-based direct observation fidelity measurement for each component was conducted quickly, using a 10-minute sample of the targeted intervention, with high validity.

Conclusions: Preliminary data indicate that implementation fidelity for complex behavioral interventions can be measured accurately in natural settings using inexpensive, brief measures, and that novice staff can be trained in these data collection procedures with relative ease. Accurate field-based fidelity measurement is a critical step in evaluating the effectiveness and sustainability of evidence-based interventions delivered in community settings. The procedures described offer a model for accurately measuring intervention fidelity at scale that can be replicated in other large-scale community-based implementation trials.