Objectives: To describe the joint engagement coding scheme we adapted for live classroom coding, to examine and discuss the process of measuring and attaining reliability, and to examine the psychometric characteristics of the coding system.
Methods: The joint engagement video coding manual (Adamson et al., 1998) was adapted for live, school-based coding of six engagement states: unengaged, onlooking, object only, person only, supported joint engagement, and coordinated joint engagement. The coding system was also expanded to include two qualitative ratings, codes for each classroom activity, and a record of the number of adults and peers present. Coders were trained on the joint engagement codes and practiced with a combination of live and video coding. The research team used both point-by-point percent agreement and total proportions of time in each engagement category to monitor reliability. Once trained, coders conducted three 5-minute observations of joint engagement for 24 preschool students with autism during typical classroom activities. The reliability coder collected data on sixteen (22%) of the 5-minute segments. Data from additional participants in elementary classrooms will be collected in the coming months.
Results: Participants exhibited a range of joint engagement levels, with proportions of coordinated joint engagement ranging from 0 to .74. Overall point-by-point percent agreement ranged from .49 to .98 with a mean of .77. With a 3-second radius window for error, corrected percent agreement ranged from .50 to .99 with a mean of .83. Kappa values, however, ranged from .07 to .80, with a mean of .53. The correlation between corrected percent agreement and kappa estimates was .741 (p = .001). Intraclass correlation coefficients (ICCs) were calculated between primary and reliability coders for the proportion of time spent in each engagement category; ICCs ranged from .33 to .98, with an average of .83. Psychometric characteristics will be analyzed to examine patterns across participants and activities.
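The reliability metrics reported above can be illustrated with a minimal sketch. The six states and the 3-second error window come from the abstract; the function names and the per-second code sequences are hypothetical, and a published analysis would likely use an established statistics package rather than hand-rolled functions.

```python
from collections import Counter

# Six engagement states from the coding scheme (labels abbreviated here).
STATES = ["unengaged", "onlooking", "object", "person", "supported", "coordinated"]

def point_by_point_agreement(a, b):
    """Proportion of observation intervals on which the two coders agree."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def windowed_agreement(a, b, radius=3):
    """Corrected agreement: coder A's code at second t counts as a match if
    coder B assigns the same state anywhere within +/- radius seconds."""
    hits = 0
    for t, code in enumerate(a):
        lo, hi = max(0, t - radius), min(len(b), t + radius + 1)
        if code in b[lo:hi]:
            hits += 1
    return hits / len(a)

def cohens_kappa(a, b):
    """Chance-corrected agreement: (p_o - p_e) / (1 - p_e)."""
    n = len(a)
    p_o = point_by_point_agreement(a, b)
    ca, cb = Counter(a), Counter(b)
    p_e = sum(ca[s] * cb[s] for s in set(ca) | set(cb)) / (n * n)
    return (p_o - p_e) / (1 - p_e)
```

Because kappa corrects for chance agreement, it can be low even when raw percent agreement is high, for example when one engagement state dominates the observation; this divergence is one reason the abstract reports both metrics.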
Conclusions: The joint engagement coding produced a wide range of scores across participants, and it proved feasible to reliably measure engagement in classroom settings. However, challenges remain in determining which metrics to use to assess reliability, and whether different metrics should be used for training versus actual coding.