Simons Simplex Collection: A Model of Quality Assessment in Multi-Site Phenotyping Research

Saturday, May 19, 2012
Sheraton Hall (Sheraton Centre Toronto)
10:00 AM
E. Brooks1,2, J. E. Olson2,3, L. Green-Synder2,3, S. Risi2, J. Tjernagel1,2, L. C. White1,2, R. K. Rumsey4, A. Gallego1 and M. Greenup1, (1)Simons Foundation, New York, NY, (2)University of Michigan Autism & Communication Disorders Center, Ann Arbor, MI, (3)Children's Hospital Boston/Harvard Medical Center, Boston, MA, (4)University of Minnesota, Minneapolis, MN, United States
Background: Quality assurance measures, retrospective chart reviews, and meta-analyses have been a mainstay of pharmacological research for years, yet they emerge only sporadically in the autism literature. Significant progress has been made in establishing multi-site surveillance networks (e.g., the CDC's ADDM and CADDRE networks), international collaborations including the Autism Genome Project (AGP) and CIHR Pathways projects, and other high-quality partnerships including the Autism Genetic Resource Exchange (AGRE) and the Autism Treatment Network. Alliances exploring emerging research initiatives are no longer the exception but the norm. This progress presents new methodological challenges, including the systematic and rigorous application of consistent criteria and standardized assessment measures across sites. By requiring rigorous phenotyping of its permanent genetic sample repository, the Simons Simplex Collection (SSC), the Simons Foundation Autism Research Initiative (SFARI) has raised the standards of data quality assurance within autism research.

Objectives: To characterize the findings of a random chart review conducted as part of the Simons Foundation's ongoing data quality assurance program for the SSC.

Methods: After completion of data collection for the SSC, a repository of genetic, phenotypic, and biological data from 2,663 ‘simplex’ families collected across 12 sites, chart reviews were conducted on 10% of each site’s total collection. Review teams comprised at least one clinician and one administrative staff member, all credentialed by the SSC PI. Teams visited each site and manually reviewed charts for completeness, appropriate documents and authorization (i.e., consent forms), adherence to inclusion/exclusion criteria, scoring, confirmed reliability of diagnosticians, and documentation of any validation issues. Cases were selected at random by computer to reflect a distribution across each quarter in which the site participated. Sites were notified in advance of the team’s scheduled arrival dates but were not informed of the case IDs that would be reviewed. Checklists were completed for each chart and then used to create statistical reports for each site, each of which was shared with the local Principal Investigators.
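The case-selection step described above, random sampling of roughly 10% of each site's charts, distributed across the quarters in which the site participated, can be sketched as follows. This is a minimal illustration only: the function name, quarter labels, and case-ID format are hypothetical and not part of the SSC protocol.

```python
import random
from collections import defaultdict

def select_review_sample(cases, rate=0.10, seed=None):
    """Randomly select ~`rate` of a site's cases, stratified by the
    quarter in which each case was collected, so the sample reflects
    the site's full participation period.

    `cases` is a list of (case_id, quarter) tuples; names are illustrative.
    """
    rng = random.Random(seed)
    by_quarter = defaultdict(list)
    for case_id, quarter in cases:
        by_quarter[quarter].append(case_id)

    sample = []
    for quarter, ids in sorted(by_quarter.items()):
        # Take ~10% from each quarter, but at least one chart per quarter.
        k = max(1, round(len(ids) * rate))
        sample.extend(rng.sample(ids, min(k, len(ids))))
    return sample

# Example: 40 hypothetical cases spread evenly over 4 quarters
cases = [(f"SSC{i:04d}", f"2010-Q{i % 4 + 1}") for i in range(40)]
print(len(select_review_sample(cases, seed=1)))  # -> 4 (one per quarter)
```

Stratifying by quarter, rather than drawing a single simple random sample, is what guarantees the reviewed charts span the site's entire period of participation.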

Results: A total of 277 charts (10.4% of the entire SSC collection) were reviewed at 12 SSC sites over 4 months. Of these 277 cases, 228 (82%) were ‘complete’, having all primary required measures, and 254 (92%) included appropriately completed authorization (consent forms). In 23 charts (8%), examiner protocol notes were insufficient for the reviewing clinician to confidently confirm that the correct ADOS module had been administered. A total of 48 charts (17%) were flagged for clinical follow-up to ensure proper adherence to inclusion/exclusion criteria, yet 99% of charts reviewed showed that the measures had been administered by an examiner maintaining SSC research reliability standards on the ADOS and ADI.

Conclusions: These results reveal a program that effectively provided remote support to multiple sites and identified possible protocol improvements. Specifically, the data speak to the strength of the research reliability maintenance plan put in place by the SSC, while also offering insight into the complexity of this study design and the rigor of our sample selection.
