Reliability of Direct Behavior Ratings – Social Competence (DBR-SC) Data: How Many Ratings Are Necessary?
Background:
Currently in the field of special education, federal mandates require the implementation of evidence-based practices (EBPs) for students with disabilities (Cook & Cook, 2011). One area in which the need for clarity regarding appropriate EBPs has been increasingly felt is the education of students with autism. As professionals seek to identify and implement strong manualized interventions for this population, it becomes increasingly necessary to monitor students' progress effectively, with implications for tailoring dosage to individualized needs. Systematic direct observation (SDO) is a common method of ongoing assessment used to monitor behavior and associated intervention effects. Although SDO is generally considered the most appropriate and accepted means of measuring student behavior, it presents a number of limitations. A related measure, Direct Behavior Rating (DBR), provides a direct rating of target behaviors immediately following observation of a student on domains identified as significant moderators of student success. DBR is intended to be brief and immediate, allowing for repeated use over time to inform practice, similar to SDO. As EBP implementation increases, a feasible means of effectively and efficiently monitoring progress and informing practice is vital.
Objectives:
The Social Competence Intervention for Adolescents (SCI-A) is a targeted EBP designed to meet the specific social needs of youth with ASD or similar challenges (Stichter et al., 2010). The current study examined the reliability of DBR data used to evaluate student progress in response to SCI-A. Of particular interest was the number of DBR data points required to reach an adequate level of reliability for low-stakes decisions (.80).
Methods:
This project examined the reliability of five DBR targets: disruption, engagement, respect, and appropriate social interaction with teachers and with peers. Participants were 57 middle school students enrolled in a larger randomized controlled trial (RCT) evaluating the efficacy of SCI-A. Classroom teachers completed daily ratings of student behavior on each target. Ratings were collected at pre-, mid-, and post-intervention phases.
Results:
Findings were consistent with previous DBR research suggesting that a small number of ratings is needed to approximate adequate levels of reliability (Briesch, Chafouleas, & Riley-Tillman, 2010; Chafouleas et al., 2010). Specifically, the number of ratings required to reach .80 reliability ranged from 2 to 6 for engagement, 2 to 10 for respect, 5 to 9 for disruption, and 1 to 7 and 1 to 3 for social interactions with teachers and peers, respectively. Follow-up analyses revealed that reliabilities were not equal across phases for four of the five targets, with reliability increasing over time.
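As a point of reference, the DBR literature cited above typically derives the "number of ratings needed to reach a target reliability" from generalizability-theory decision studies; the abstract does not report the exact analytic approach or the underlying variance components. The sketch below therefore illustrates only the closely related Spearman-Brown extrapolation, using made-up single-rating reliabilities rather than study data.

    # Minimal illustrative sketch (not the study's analysis): solve the
    # Spearman-Brown prophecy formula, r_k = k*r1 / (1 + (k - 1)*r1),
    # for the smallest k that reaches a target reliability.
    import math

    def ratings_needed(single_rating_reliability: float, target: float = 0.80) -> int:
        """Smallest number of ratings whose average reaches the target reliability."""
        r1 = single_rating_reliability
        k = (target * (1.0 - r1)) / (r1 * (1.0 - target))
        return math.ceil(k)

    # Placeholder single-rating reliabilities, chosen only for illustration:
    for label, r1 in [("engagement", 0.55), ("disruption", 0.35)]:
        print(label, ratings_needed(r1))  # prints 4 and 8, respectively

Under this formulation, a target with higher single-rating reliability requires fewer ratings to reach the .80 criterion, which is consistent with the pattern of fewer ratings being needed as behavior (and thus rating consistency) stabilizes over time.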
Conclusions:
The current study supports the promise of DBR as a brief indicator of student response to a social competence intervention. Specifically, findings suggested that only a few ratings were necessary to reliably estimate student behavior within individual intervention phases. Furthermore, reliability increased over time, such that fewer data points were needed as student behavior stabilized. Although the limited sample size warrants caution in interpreting these findings, the initial results support continued examination of DBR in this particular application.