Assessment & Research

Rational skepticism: A scientific review of Witts' (2018) criticisms of the PEAK relational training system

Belisle et al. (2020) · Journal of Applied Behavior Analysis
★ The Verdict

Witts' broad critique of the PEAK research base is largely rebutted, so keep the assessment in your toolbox and shore it up with feedback-based coaching.

✓ Read this if you are a BCBA who gives PEAK assessments or trains staff to use them.
✗ Skip if you do not use PEAK or any relational-training curriculum.

01 Research in Context

01

What this study did

Belisle and colleagues read every point Witts raised against PEAK studies. They checked the math, the stats, and the logic. Then they wrote a line-by-line reply.

The paper is a narrative review, not new data. It is a defense brief for the PEAK Relational Training System.

02

What they found

Of the 30 criticisms, the authors judged all but 2 to be wrong or overblown: resting on untrue assumptions (7), not novel (5), logically invalid (7), or better aimed at applied behavior-analytic research as a whole (9). The 2 that stand are minor, not fatal.

In plain words: keep using PEAK; the critique does not sink it.
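The breakdown of criticisms reported in the paper's abstract can be tallied as a quick sanity check. A minimal sketch (counts taken from the abstract; category labels paraphrased):

```python
# Criticism categories and counts as reported in Belisle et al.'s abstract
categories = {
    "untrue assumptions": 7,
    "not novel": 5,
    "logically invalid": 7,
    "criticisms of ABA research generally": 9,
    "minor but supported": 2,
}

total = sum(categories.values())
assert total == 30  # matches the 30 criticisms identified in Witts (2018)

for name, n in categories.items():
    print(f"{name}: {n}/{total} ({n / total:.0%})")
```

Rounding each share to whole percentages reproduces the abstract's figures: 23%, 17%, 23%, 30%, and 7%.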

03

How this fits with other research

Yaw et al. (2014) showed that quick feedback after in-service training doubles staff data accuracy. Belisle et al. lean on the same kind of single-case, feedback-rich design that Yaw's team showed works. The two papers echo each other: tight feedback loops are a feature, not a flaw.

Ruiz (1998) complained that college behavior-analysis courses lack solid outcome data. Belisle et al. face a mirror-image worry: PEAK studies accused of resting on weak evidence. Both papers push the field to measure better, not to throw tools away.

Moss et al.'s (2009) meta-analysis found that coaching plus in-service training beats lectures alone. Future PEAK staff-training studies that want to land inside that evidence base should use the same mix: brief lecture, then on-the-job coaching with praise and correction. Belisle's defense opens the door for those stronger designs.

04

Why it matters

If you run PEAK lessons, you can breathe easier—the main attacks on its research base have been answered. Keep collecting data, but also add the ingredients that work: live coaching and brief feedback. You will guard both client progress and your own evidence quality.

→ Action — try this Monday

Run your next PEAK trial as usual, but add two minutes of specific verbal feedback to the staff helper right after the session.

02 At a glance

Intervention: not applicable
Design: narrative review
Finding: 28 of 30 criticisms rejected; the 2 that stand are minor

03 Original abstract

Witts' (2018) review of the peer-reviewed research on the PEAK-Direct Training Module (Dixon, 2014) yielded a divergent conclusion from that of previous reviews (Reed & Luiselli, 2016; Dixon, Belisle, McKeel et al., 2017). Witts advocates for skepticism of this research due to methodological shortcomings, hyperclaiming of results, and inappropriate statistical testing procedures. We identified 30 criticisms in Witts' review, respond to each, and argue that all but 2 (7%) contain untrue assumptions (7, 23%), are not novel (5, 17%), are logically invalid (7, 23%), or are more appropriately framed as criticisms of applied behavior analytic research more generally (9, 30%). The two criticisms that support Witts' purpose in writing his review are minor and not fatal. We discuss all of Witts' criticisms both specifically and broadly to illustrate that most of his suggestions about applied behavior analytic research may actually serve to hinder progress in a discipline moving toward larger-scale research.

Journal of Applied Behavior Analysis, 2020 · doi:10.1002/jaba.654