Synthesizing the Multiple-Probe Experimental Design With the PEAK Relational Training System in Applied Settings
You can track PEAK lessons with built-in staggered probes and still finish on time.
01 Research in Context
What this study did
Belisle et al. (2021) tested a new way to track PEAK lessons. They wove a staggered baseline probe design into one child's regular PEAK sessions.
The child stayed in the regular clinic room, and no extra session time was added. Quick probes simply ran before teaching began on each new module.
What they found
The embedded probes gave clear yes-or-no answers about skill change. The team could see learning without running separate baseline days.
The data paths were stable and convincing, and the method fit real-life clinical pacing.
How this fits with other research
Paliliunas et al. (2022) used PEAK with three autistic kids and got strong skill jumps. Belisle's method shows how you can prove those jumps inside everyday sessions.
Wing (1981) found that early probes speed up prompt fading. Belisle moves that idea to PEAK: start probes right away, not after long baselines.
Ma (2006) offered PEM stats for single-case reviews. Belisle gives a fresh probe layout that could feed those same stats later.
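Ma's PEM (Percentage of data points Exceeding the Median) is simple enough to compute from a probe graph in a few lines. A minimal Python sketch, assuming the target behavior is expected to increase (the scores below are illustrative, not from the study):

```python
from statistics import median

def pem(baseline, treatment):
    """Percentage of Exceeding the Median (Ma, 2006): the share of
    treatment-phase data points that fall above the baseline median.
    Assumes the target behavior is expected to increase."""
    if not baseline or not treatment:
        raise ValueError("both phases need at least one data point")
    baseline_median = median(baseline)
    exceeding = sum(1 for score in treatment if score > baseline_median)
    return exceeding / len(treatment)

# Illustrative probe scores (correct responses out of 10):
# four baseline probes, then five post-teaching probes.
print(pem([2, 3, 2, 1], [6, 8, 9, 7, 10]))  # → 1.0
```

A PEM near 1.0 suggests a consistent treatment effect; values near 0.5 suggest data overlapping the baseline median.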
Why it matters
You can run PEAK and collect solid evidence without extra chairs, time, or forms. Slip in quick probes before each new module. One clear graph will show parents and funders that your teaching works.
Before starting the next PEAK module, run a quick probe on the last three targets.
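That staggered-probe routine can be sketched as a simple scheduler. This is only an illustration of the temporal-staggering idea, not the authors' procedure; the module names and the three-target window are made up:

```python
# Sketch of a staggered multiple-probe schedule: before teaching
# begins on each new module, probe the most recently introduced
# targets. Module names and the three-target window are illustrative.

def probe_schedule(modules, window=3):
    """Yield (module_to_teach, targets_to_probe_first) pairs.
    Each new module's baseline probe covers the last `window`
    targets introduced so far, staggering probes over time."""
    introduced = []
    for module in modules:
        yield module, list(introduced[-window:])
        introduced.append(module)

modules = ["DT-1A", "DT-1B", "G-2A", "E-3A"]
for module, probes in probe_schedule(modules):
    print(f"Teach {module}; probe first: {probes if probes else 'none (first module)'}")
```

Because each module's probes happen at a different point in the sequence, the design stays temporally staggered as the learner advances.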
02 At a glance
03 Original abstract
The scientist-practitioner model necessitates embedding experimental designs within applied practice. This technical report describes a procedure for embedding a multiple-probe experimental design within the PEAK Relational Training System across all four PEAK modules. Baseline probes provide a direct test of target skills negatively endorsed within the PEAK assessment battery and can provide an estimate of skill acquisition in the absence of direct training. Temporal staggering of the probes maintains the fidelity of the experimental design and allows for the design to evolve along with learner skill acquisition. Achievement of mastery criteria demonstrates the efficacy of programming, and failure to achieve mastery can be remedied through programming adjustments that can be captured within the design. We additionally conducted a field test of the design with a child with disabilities, supporting the viability of this procedure within applied settings.
Behavior Analysis in Practice, 2021 · doi:10.1007/s40617-020-00520-0