Publication bias in studies of an applied behavior-analytic intervention: an initial analysis.
Published PRT studies overstate effectiveness by 22 PND points—grab dissertations before you claim strong evidence.
01 Research in Context
What this study did
The team looked at every PRT single-case study they could find, comparing 21 published journal articles with 10 unpublished dissertations.
They used PND scores to measure how well PRT worked in each study. PND (percentage of nonoverlapping data) is the share of treatment-phase data points that fall beyond the most extreme baseline point.
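To make the metric concrete, here is a minimal sketch of a PND calculation in Python. The function name and the sample numbers are invented for illustration; the paper reports PND but does not prescribe any code.

    def pnd(baseline, treatment, expect_increase=True):
        # Percentage of nonoverlapping data (PND): the share of
        # treatment-phase points that fall beyond the most extreme
        # baseline point, expressed as a percentage.
        if expect_increase:
            threshold = max(baseline)
            nonoverlap = sum(1 for x in treatment if x > threshold)
        else:
            threshold = min(baseline)
            nonoverlap = sum(1 for x in treatment if x < threshold)
        return 100 * nonoverlap / len(treatment)

    # Hypothetical session data: 4 of the 5 treatment points exceed
    # the highest baseline point (4), so PND = 80%.
    print(pnd([2, 3, 3, 4], [5, 6, 4, 7, 8]))  # 80.0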
What they found
Published PRT studies scored, on average, 22 PND points higher than dissertation studies (95% CI: 4 to 38 points). In other words, the published literature alone makes PRT look more effective than the full body of evidence suggests.
Even with dissertations included, PRT still appeared effective (mean PND = 62%), but the gap is wide enough to change how strong the evidence looks.
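As a rough illustration of how a gap like this and its confidence interval can be computed, here is a sketch using a normal approximation on made-up study-level PND values; the paper's actual data and CI method may differ.

    import math
    import statistics

    def mean_diff_ci(a, b, z=1.96):
        # Difference in mean PND between two groups of studies, with an
        # approximate 95% CI (normal approximation on the group means).
        diff = statistics.mean(a) - statistics.mean(b)
        se = math.sqrt(statistics.variance(a) / len(a)
                       + statistics.variance(b) / len(b))
        return diff, (diff - z * se, diff + z * se)

    # Hypothetical per-study PND scores, not the paper's data.
    published = [95, 88, 80, 92, 70, 85]
    unpublished = [60, 72, 55, 68, 64]
    diff, (low, high) = mean_diff_ci(published, unpublished)
    print(f"mean difference = {diff:.1f} PND points, "
          f"95% CI ({low:.1f}, {high:.1f})")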
How this fits with other research
Davison et al. (1995) is one of the favorable published studies the target paper flags. Its strong PRT results likely sit in that inflated 22-point zone.
Tincani et al.'s (2016) meta-analysis of pacing studies faces the same problem: its positive effect sizes may also be inflated because dissertations were left out.
As far back as 1988, M et al. showed that developmental-disabilities journals omit key details: 97% of the articles they surveyed failed to report medication use. Sham and Smith (2014) extend that line of work, showing that publication status itself is another hidden variable.
Why it matters
When you write an evidence review, always hunt for dissertations and theses. Counting only published studies made PRT look about 22 PND points stronger than the full evidence supports, and other interventions may show similar gaps. Share this number with supervisors and IEP teams so they get a realistic picture of PRT's strength.
Add one dissertation database (ProQuest) to your next PRT evidence search and compare PND scores with the journal articles you already have.
02 At a glance
03 Original abstract
Publication bias arises when studies with favorable results are more likely to be reported than are studies with null findings. If this bias occurs in studies with single-subject experimental designs (SSEDs) on applied behavior-analytic (ABA) interventions, it could lead to exaggerated estimates of intervention effects. Therefore, we conducted an initial test of bias by comparing effect sizes, measured by percentage of nonoverlapping data (PND), in published SSED studies (n = 21) and unpublished dissertations (n = 10) on 1 well-established intervention for children with autism, pivotal response treatment (PRT). Although published and unpublished studies had similar methodologies, the mean PND in published studies was 22% higher than in unpublished studies, 95% confidence interval (4%, 38%). Even when unpublished studies are included, PRT appeared to be effective (PND M = 62%). Nevertheless, the disparity between published and unpublished studies suggests a need for further assessment of publication bias in the ABA literature.
Journal of Applied Behavior Analysis, 2014 · doi:10.1002/jaba.146