Assessment & Research

Publication bias in studies of an applied behavior-analytic intervention: an initial analysis.

Sham et al. (2014) · Journal of Applied Behavior Analysis, 2014
★ The Verdict

Published PRT studies overstate effectiveness by 22 PND points—grab dissertations before you claim strong evidence.

✓ Read this if you're a BCBA who writes literature reviews, makes evidence-based practice decisions, or trains staff on PRT.
✗ Skip if you only read journal articles and never check theses or dissertations.

01 Research in Context

01

What this study did

The team looked at every PRT single-case study they could find. They compared published journal articles with unpublished dissertations.

They used PND scores to measure how well PRT worked in each study. PND is the percentage of treatment-phase data points that do not overlap with the most extreme baseline point.
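The PND calculation itself is simple. Here's a minimal sketch in Python (the function name and the sample data are illustrative, not from the paper):

```python
def pnd(baseline, treatment, higher_is_better=True):
    """Percentage of non-overlapping data: the share of treatment-phase
    points that exceed (or, for decrease targets, fall below) the most
    extreme baseline point."""
    if higher_is_better:
        cutoff = max(baseline)
        nonoverlap = sum(1 for x in treatment if x > cutoff)
    else:
        cutoff = min(baseline)
        nonoverlap = sum(1 for x in treatment if x < cutoff)
    return 100 * nonoverlap / len(treatment)

# Hypothetical data: correct responses per session, baseline then treatment
print(pnd([10, 12, 11, 9], [14, 13, 18, 12, 16]))  # 80.0
```

Four of the five treatment points clear the best baseline point (12), so PND is 80%.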

02

What they found

Published PRT studies scored, on average, 22 PND points higher than dissertation studies (95% CI: 4 to 38 points). This means the published papers make PRT look more effective than it really is.

The gap is big enough to change how strong the evidence looks.
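To see why leaving dissertations out matters, here's a back-of-envelope sketch. The mean PND values below are hypothetical, chosen only to be consistent with the paper's reported figures (a 22-point gap and a pooled mean near 62% across 21 published and 10 unpublished studies):

```python
# Hypothetical group means illustrating the reported pattern
published = {"n": 21, "mean_pnd": 69.0}    # hypothetical value
unpublished = {"n": 10, "mean_pnd": 47.0}  # hypothetical value

# Gap between published and unpublished means
gap = published["mean_pnd"] - unpublished["mean_pnd"]

# Pooled mean when both groups are included, weighted by study count
pooled = (published["n"] * published["mean_pnd"]
          + unpublished["n"] * unpublished["mean_pnd"]) \
         / (published["n"] + unpublished["n"])

print(gap)                # 22.0
print(round(pooled, 1))   # 61.9
```

A review that reads only the published column would report a mean PND around 69%; including the dissertations pulls the estimate down toward 62%, which is where the authors land even while still calling PRT effective.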

03

How this fits with other research

Davison et al. (1995) is one of the upbeat published studies this paper warns about. Its glowing PRT results likely sit in that inflated 22-point zone.

Tincani et al.'s (2016) meta-analysis of pacing studies faces the same problem: its positive effect sizes may also be inflated because dissertations were left out.

Back in 1988, M et al. had already shown that developmental-disabilities journals omit key details: 97% of articles failed to report medication use. Sham et al. (2014) extend that line of work, showing that publication status itself is another hidden variable.

04

Why it matters

When you write an evidence review, always search for dissertations and theses. Counting only published studies can make an intervention look roughly 22 PND points more effective than it is. Share this figure with supervisors and IEP teams so they get a realistic picture of PRT's evidence strength.

→ Action — try this Monday

Add one dissertation database (ProQuest) to your next PRT evidence search and compare PND scores with the journal articles you already have.

02 At a glance

Intervention
not applicable
Design
other
Population
autism spectrum disorder
Finding
positive
Magnitude
medium

03 Original abstract

Publication bias arises when studies with favorable results are more likely to be reported than are studies with null findings. If this bias occurs in studies with single-subject experimental designs (SSEDs) on applied behavior-analytic (ABA) interventions, it could lead to exaggerated estimates of intervention effects. Therefore, we conducted an initial test of bias by comparing effect sizes, measured by percentage of nonoverlapping data (PND), in published SSED studies (n = 21) and unpublished dissertations (n = 10) on 1 well-established intervention for children with autism, pivotal response treatment (PRT). Although published and unpublished studies had similar methodologies, the mean PND in published studies was 22% higher than in unpublished studies, 95% confidence interval (4%, 38%). Even when unpublished studies are included, PRT appeared to be effective (PND M = 62%). Nevertheless, the disparity between published and unpublished studies suggests a need for further assessment of publication bias in the ABA literature.

Journal of Applied Behavior Analysis, 2014 · doi:10.1002/jaba.146