Assessment & Research

A Gentle Introduction to Bayesian Posterior Predictive Checking for Single-Case Researchers

Grekov et al. (2026) · Journal of Behavioral Education
★ The Verdict

A five-minute graph can save you from using the wrong statistical model in single-case work.

✓ Read this if you're a BCBA who runs or reviews single-case studies and uses any stats beyond visual analysis.
✗ Skip if you rely only on visual inspection and never plan to touch Bayesian methods.

01 Research in Context

01 What this study did

Grekov et al. (2026) wrote a how-to guide for single-case researchers. They show you how to use Bayesian posterior predictive checking (PPC): you simulate fake data from your fitted model and compare it to your real data.

The authors walk through two real data sets step by step. They use simple graphs, not scary math. The whole process runs in free software.
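
The core loop is short enough to sketch in a few lines. Here is a minimal, hypothetical Python version using a toy Gamma-Poisson model and made-up session counts (NumPy only). The paper's own examples use real single-case data sets and richer models, so treat this as an illustration of the idea, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical counts for one phase (responses per session).
observed = np.array([4, 6, 5, 7, 3, 6, 8, 5])

# Conjugate Gamma prior on a Poisson rate gives a Gamma posterior.
a_post = 1.0 + observed.sum()   # prior shape 1.0 + total count
b_post = 1.0 + len(observed)    # prior rate 1.0 + number of sessions

# Step 1: draw plausible rates from the posterior.
rates = rng.gamma(shape=a_post, scale=1.0 / b_post, size=1000)

# Step 2: simulate one fake data set per posterior draw.
fake = rng.poisson(lam=rates[:, None], size=(1000, len(observed)))

# Step 3: compare a feature of the fake data to the real data.
obs_sd = observed.std()
fake_sd = fake.std(axis=1)
ppp = (fake_sd >= obs_sd).mean()   # posterior predictive p-value
print(f"observed SD = {obs_sd:.2f}, PPP = {ppp:.2f}")
# A PPP near 0 or 1 flags a feature the model fails to reproduce.
```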

02 What they found

The tutorial shows that a quick picture can tell you whether your model fits. When the fake data look like the real data, you keep the model. When they do not, you fix the model.

The authors show that this check catches problems that summary numbers alone miss. It works even with the small data sets common in single-case work.
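
To make the check match what visual analysts look for, you point it at a data feature such as trend. Below is a hedged, self-contained sketch using the same toy Gamma-Poisson setup and made-up counts as above; the slope statistic is our choice for illustration, not a prescription from the paper.

```python
import numpy as np

rng = np.random.default_rng(7)
observed = np.array([4, 6, 5, 7, 3, 6, 8, 5])

# Same toy Gamma-Poisson posterior as in the earlier sketch.
a_post, b_post = 1.0 + observed.sum(), 1.0 + len(observed)
rates = rng.gamma(shape=a_post, scale=1.0 / b_post, size=1000)
fake = rng.poisson(lam=rates[:, None], size=(1000, len(observed)))

def slope(y):
    """Ordinary least-squares slope of y against session number."""
    return np.polyfit(np.arange(len(y)), y, deg=1)[0]

obs_slope = slope(observed)
fake_slopes = np.apply_along_axis(slope, 1, fake)

# Where does the real slope fall among the fake slopes?
ppp = (fake_slopes >= obs_slope).mean()
print(f"observed slope = {obs_slope:.2f}, PPP = {ppp:.2f}")
# A PPP near 0 or 1 means the model misses a trend your eyes
# would catch in visual analysis.
```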

03 How this fits with other research

Dowdy et al. (2025) extend the same idea. They add real-time pooling across labs. Their method uses earlier cases to sharpen estimates for later ones, sharing Grekov's check-first mindset.

Manolov et al. (2022) give a different visual tool. They offer modified Brinley plots to judge replication. Grekov’s PPC graphs and Manolov’s plots both aim to make visual inference easier, but they solve different problems.

Tyrer et al. (2006) warned that visual and statistical views can clash. Grekov’s method bridges the gap by letting you see model fit visually before trusting the numbers.

04 Why it matters

You can run this check in under five minutes after you fit any Bayesian model. If the fake data bands do not line up with your real line, you know the model needs work before you write the report. It is a fast safety net that costs nothing and saves you from publishing a bad fit.
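
A minimal version of that five-minute graph, again with the hypothetical Gamma-Poisson setup from the sketches above: draw many fake data sets, turn them into a session-by-session predictive band, and overlay the real series.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
observed = np.array([4, 6, 5, 7, 3, 6, 8, 5])
sessions = np.arange(1, len(observed) + 1)

# Toy Gamma-Poisson posterior, as in the earlier sketches.
a_post, b_post = 1.0 + observed.sum(), 1.0 + len(observed)
rates = rng.gamma(shape=a_post, scale=1.0 / b_post, size=2000)
fake = rng.poisson(lam=rates[:, None], size=(2000, len(observed)))

# 90% posterior predictive band for each session.
lo, hi = np.percentile(fake, [5, 95], axis=0)

plt.fill_between(sessions, lo, hi, alpha=0.3, label="90% predictive band")
plt.plot(sessions, observed, "ko-", label="observed")
plt.xlabel("Session")
plt.ylabel("Responses per session")
plt.legend()
plt.title("Posterior predictive check")
plt.show()
# If the observed line keeps escaping the band, revise the model
# before you write the report.
```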

→ Action — try this Monday

After you fit your next single-case model, simulate a handful of fake data sets and overlay them on your real data; if the simulated bands miss your points, tweak the model before you write.

02 At a glance

Intervention: not applicable
Design: methodology paper
Finding: not reported

03 Original abstract

Although researchers typically rely on visual analysis to draw conclusions about functional relations in single-case designs, a growing collection of statistical methods has been proposed to augment visual assessment. With the introduction of increasingly complex statistical models, single-case researchers need strategies for evaluating their plausibility and utility. One potentially useful method for such model assessment is Bayesian posterior predictive checking (PPC), which involves simulating artificial data based on an estimated model and comparing the features of the simulated data to features of actual data. We provide a non-technical introduction to the use of PPCs for assessing the plausibility of statistical models for SCD data. We propose that PPCs should focus on data features that are of central interest in visual analysis. We demonstrate how PPCs can be represented in graphical form. We illustrate use of PPCs by re-analyzing data from two previously conducted studies: an across-participant multiple-baseline design assessing an oral reading fluency intervention and a reversal design investigating the effects of a group contingency intervention on inappropriate verbalizations.

Journal of Behavioral Education, 2026 · doi:10.1007/s10864-025-09613-8