Examining Treatment Implementation in Secondary Education Settings
Objectives: (Research Question) Do CSESA fidelity measures discriminate between sites implementing and not implementing program features?
Methods: As part of model development, investigators created implementation fidelity measures. Using a variation of the classic multitrait-multimethod matrix approach that Campbell and Fiske (1959) designed for establishing the construct validity of assessments, the authors created an evaluation design to assess the construct validity of the fidelity measures. In this design, staff in each high school implemented two features of the CSESA program (see Table 1) and served as controls for the other two. In most cases a feature had more than one measure (e.g., the social feature had peer support, peer network, and Social Competence Intervention components). Research staff collected implementation fidelity data on all features/components in all settings. The rationale for the design is that the measures should yield high fidelity ratings in schools where a specific feature is being implemented and low ratings in schools where it is not. All pairwise combinations of the four components were implemented across the six schools¹ (e.g., transition and academics, social competence and independence, academics and independence). The fidelity measures were comparable in format, consisting of four-point Likert rating scales (0-3) that documented the degree to which individual practices were implemented with fidelity. Six high schools across the country (CA, NC [two schools], TN, TX, WI) were enrolled in the study, involving 6-8 students with ASD at each school (N=43). Research staff monitored fidelity of all components through observations of students during instruction; the observed students were randomly selected from the pool of students in the study receiving a given intervention. ¹One school was scheduled to implement the academic feature but did not.
Results: The small number of schools precluded statistical analysis; descriptive comparisons of mean fidelity ratings appear in Table 2. As hypothesized, the fidelity measures were sensitive to implementation occurring in intervention schools and appeared to discriminate those schools from schools in the control condition.
Conclusions: This poster presentation provides a case example of a program development process that focuses on the measurement of implementation fidelity. Implications for the iterative design of program development, the measurement of fidelity, and the use of implementation science will be discussed at the poster session.