Practitioner Development

On science and the discriminative law of effect.

Davison et al. (2005) · Journal of the Experimental Analysis of Behavior
★ The Verdict

Science rewards flashy results and buries replications—start rewarding the repeats.

✓ Read this if you're a BCBA who publishes or supervises single-case research.

✗ Skip if you're a practitioner who only reads journals and never writes for them.

01Research in Context

01

What this study did

Davison and colleagues wrote a think-piece. They asked why behavior analysts rarely repeat each other's work.

They mapped the pay-offs. Journals reward new, flashy results. They ignore boring replications. That skews the record.

02

What they found

The field chases novelty like a pigeon pecking for grain. Replication gets no pellets.

The authors warned that our science looks strong only because the failures stay in the drawer.

03

How this fits with other research

Manolov et al. (2022) gave us a free plot tool. Now you can see if a single-case effect repeats across kids.

Jacobs (2019) adds randomization tests. These give clean p-values for tiny-N designs without leaning on distributional assumptions.
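The idea behind a randomization test is simple enough to sketch by hand. Here is a minimal version for an AB design, assuming the intervention start point was (or could have been) randomly assigned among eligible session numbers; the function name and `min_phase` parameter are illustrative, not from Jacobs (2019):

```python
# Phase-randomization test for an AB single-case design (illustrative sketch).
# Null hypothesis: the intervention had no effect, so shifting the phase
# boundary to any other eligible start point could produce a mean
# difference at least as large as the one observed.

def randomization_test(data, actual_start, min_phase=3):
    """p-value for the B-minus-A mean difference, permuting the
    intervention start point over all eligible positions."""
    def mean_diff(start):
        a, b = data[:start], data[start:]
        return sum(b) / len(b) - sum(a) / len(a)

    observed = mean_diff(actual_start)
    # Every start point leaving at least `min_phase` sessions per phase.
    starts = range(min_phase, len(data) - min_phase + 1)
    diffs = [mean_diff(s) for s in starts]
    # p = share of eligible start points with an effect this extreme.
    extreme = sum(1 for d in diffs if abs(d) >= abs(observed))
    return extreme / len(diffs)

# Four baseline sessions, then a jump after the intervention at session 4:
p = randomization_test([2, 3, 2, 3, 7, 8, 9, 8], actual_start=4, min_phase=2)
```

With only a handful of eligible start points the smallest attainable p-value is limited (here 1/5 = .2), which is exactly why these designs need replication across cases to be convincing.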

Horner et al. (2022) tighten the rules. Better graphs and clearer reports make future replications easier.

Together the later papers answer Michael’s call. They hand practitioners the gear to value and check replication.

04

Why it matters

Next time you run an AB design, graph it, then test it with the free Manolov tool. If the effect holds, write it up even if it’s ‘just’ a repeat. Each small replication is one more pellet for honest science.

→ Action — try this Monday

Pick one past client, re-graph the data with the free Manolov tool, and note whether the effect replicates across phases.
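If you want a quick number alongside the graph, a common single-case effect size is Nonoverlap of All Pairs (NAP), which the Manolov tool also reports. A hand-rolled sketch (not the tool itself, just the standard definition):

```python
# Nonoverlap of All Pairs (NAP) for an AB comparison.
# NAP = share of all (A, B) session pairs where the B-phase point
# improves on the A-phase point; ties count as half an improvement.

def nap(phase_a, phase_b):
    pairs = [(a, b) for a in phase_a for b in phase_b]
    wins = sum(1.0 if b > a else 0.5 if b == a else 0.0 for a, b in pairs)
    return wins / len(pairs)

# Baseline vs. intervention sessions for one client:
effect = nap([2, 3, 2], [3, 4, 5])
```

Values near 1.0 mean the phases barely overlap; values near 0.5 mean chance-level separation. Comparing NAP across clients is one concrete way to ask whether an effect replicates.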

02At a glance

Intervention · not applicable
Design · theoretical
Finding · not reported

03Original abstract

This article considers the process of the dissemination of scientific findings from the point of view of the discriminative law of effect. We assume that the purpose of science is to describe the state of the world in an unbiased and accurate manner. We then consider a number of challenges to the unbiased consensual development of science that arise from differences between science that is done, submitted for publication, and published. These challenges arise from the differential reinforcers for both research and publication delivered by journals and editors for novel results, the undervaluation of systematic replication and findings of invariance, and general lack of reinforcers for failed replications. All these challenges bias science toward searching for, reporting, and valuing novel results and consequently lead to a biased and erroneous view of the world. We suggest that science should be approached more conservatively, and that a reevaluation of the value of replication, and especially failed replication, is in order.

Journal of the Experimental Analysis of Behavior, 2005 · doi:10.1901/jeab.2005.27-04