Coding Joint Engagement Live in School-Based Research: Reliability and Psychometric Considerations

Saturday, May 19, 2012
Sheraton Hall (Sheraton Centre Toronto)
11:00 AM
J. R. Dykstra1, B. Boyd2, L. Watson1, C. McCarty2, G. T. Baranek2 and E. Crais1, (1)Speech and Hearing Sciences, University of North Carolina at Chapel Hill, Chapel Hill, NC, (2)Occupational Science, University of North Carolina at Chapel Hill, Chapel Hill, NC
Background: Active engagement has been identified as a critical component of many interventions for children with autism (NRC, 2001), so tools are needed to measure engagement across a variety of intervention environments. Previous engagement measures used with children with disabilities have focused on on-task behavior or academic engagement (McWilliam et al., 1985; Kishida et al., 2008) or have relied on video coding of joint engagement (Adamson et al., 2009; Kasari et al., 2010). The Advancing Social-communication and Play (ASAP) study needed a tool to measure joint engagement, but methodological constraints required live (in vivo) coding in school settings. Live coding of behavior requires careful training and monitoring of coders to attain and maintain acceptable reliability. Further, there are multiple ways to estimate reliability for observational measurement of behavior, including point-by-point agreement and intraclass correlations (Yoder & Symons, 2010).

Objectives: To describe the joint engagement coding scheme we adapted for live classroom coding, to discuss the process of measuring and attaining reliability, and to examine the psychometric characteristics of the coding system.

Methods: The joint engagement video coding manual (Adamson et al., 1998) was adapted for live, school-based coding of six engagement states: unengaged, onlooking, object only, person only, supported joint engagement, and coordinated joint engagement. The coding system was also expanded to include two qualitative ratings, a code for each classroom activity, and a record of the number of adults and peers present. Coders were trained in the joint engagement coding and practiced using a combination of live and video coding. The research team used both point-by-point percent agreement and total proportions for each engagement category to monitor reliability. Once trained, coders conducted three 5-minute observations of joint engagement for 24 preschool students with autism during typical classroom activities. A reliability coder independently coded 16 (22%) of the 5-minute segments. Data from additional participants in elementary classrooms will be collected in the coming months.
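As a rough illustration of the two reliability checks described above, the sketch below computes point-by-point agreement (exact and with a tolerance window) and per-state proportions of time from two coders' records. It assumes engagement states are coded as one label per 1-second interval; the state labels, interval length, and function names are illustrative assumptions, not the ASAP study's actual implementation.

```python
# Minimal sketch of the two reliability checks described above. Assumes
# engagement states are recorded as one label per 1-second interval;
# labels, interval length, and the windowing rule are assumptions.
from collections import Counter

STATES = ["unengaged", "onlooking", "object_only", "person_only",
          "supported_joint", "coordinated_joint"]

def pointwise_agreement(primary, reliability, radius=0):
    """Proportion of intervals where the reliability coder recorded the
    same state as the primary coder, counting a match anywhere within
    +/- `radius` intervals (a window correction for small timing offsets)."""
    n = len(primary)
    assert n == len(reliability)
    hits = 0
    for i, state in enumerate(primary):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        if state in reliability[lo:hi]:
            hits += 1
    return hits / n

def state_proportions(codes):
    """Proportion of total observation time spent in each engagement state."""
    counts = Counter(codes)
    return {s: counts[s] / len(codes) for s in STATES}

# Example: for two coders' records of a 5-minute (300-interval) observation,
# exact and window-corrected agreement would be compared as:
# exact = pointwise_agreement(primary, reliability)
# corrected = pointwise_agreement(primary, reliability, radius=3)
```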

Results: Participants exhibited a range of joint engagement levels, with the proportion of time in coordinated joint engagement ranging from 0 to .74. Point-by-point percent agreement ranged from .49 to .98, with a mean of .77. With a 3-second radius (a window of plus or minus 3 seconds) allowed for timing error, corrected percent agreement ranged from .50 to .99, with a mean of .83. Kappa values, however, ranged from .07 to .80, with a mean of .53. The correlation between corrected percent agreement and kappa estimates was .741 (p = .001). Intraclass correlation coefficients (ICCs) were calculated between primary and reliability coders for the proportion of time spent in each engagement category; these ranged from .33 to .98, with an average of .83. Psychometric characteristics will be analyzed further to examine patterns across participants and activities.
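The gap between percent agreement and kappa reflects kappa's correction for chance agreement. With observed agreement p_o and chance-expected agreement p_e (derived from each coder's marginal base rates), Cohen's kappa is

kappa = (p_o − p_e) / (1 − p_e)

so when one engagement state dominates an observation, p_e is high and kappa is depressed even when raw agreement is substantial. This may partly explain why the kappa estimates above run well below the corresponding corrected percent agreement values.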

Conclusions: The joint engagement coding produced a wide range of scores across participants, and it proved feasible to measure engagement reliably in classroom settings. Challenges remain, however, in determining which metrics to use to assess reliability, and whether different metrics should be used for training versus actual coding.
