Assessment & Research

Applying mixed‐effects modeling to single‐subject designs: An introduction

DeHart et al. (2019) · Journal of the Experimental Analysis of Behavior
★ The Verdict

Mixed-effects modeling gives you legit inferential stats without squashing individual data.

✓ Read this if: you're a BCBA who writes single-subject reports for schools, journals, or insurance.
✗ Skip if: you're a practitioner who relies only on visual analysis and never needs p-values.

01 Research in Context

01 What this study did

DeHart et al. (2019) wrote a how-to paper. They show BCBAs a new way to crunch single-subject data.

The method is called mixed-effects modeling. It keeps each client’s data separate. No averaging across kids.

The authors walk through the steps and give a sample data set. You can follow along in Excel or R.
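The paper's walkthrough uses R. As a rough sketch of the same idea in Python (an assumption on my part: statsmodels' MixedLM as a stand-in for the paper's code, and simulated data rather than the authors' sample data set), a random-intercept model looks like this:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated ABAB-style data: 6 hypothetical clients, 20 sessions each.
# Swap in your own data frame with the same columns.
rng = np.random.default_rng(42)
rows = []
for subj in range(6):
    level = rng.normal(5, 1)  # each client starts at a different level
    for session in range(20):
        phase = (session // 5) % 2  # A-B-A-B, 5 sessions per phase
        y = level + 4 * phase + rng.normal(0, 1)
        rows.append({"subject": subj, "phase": phase, "y": y})
df = pd.DataFrame(rows)

# Random intercept per subject: each client's own level stays in the model
model = smf.mixedlm("y ~ phase", df, groups=df["subject"])
result = model.fit()
print(result.summary())
```

The fixed effect for `phase` estimates the average jump you see on the graph; the random intercept keeps each client's individual level in the model instead of averaging it away.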

02 What they found

The model lines up with what you see on the graph. If the line jumps up, the stats say the same.

You also get a p-value and a confidence interval. That lets you talk about certainty with parents or funders.

03 How this fits with other research

Older papers said, “Stick to visual only.” Michael (1974) and Iversen (2021) warn that stats hide individual change. DeHart answers them by keeping every client in the model.

Early ANOVA fans like Christophersen et al. (1972) had to mash data together. Lydersen et al. (1974) showed that mashing breaks the test's assumptions. Mixed-effects fixes the same problem without the mash.

Bayesian fans reach the same goal. Barnard-Brak et al. (2020) use Bayes; DeHart uses mixed-effects. Both let you pool studies yet keep each person’s story clear.

04 Why it matters

You can now back up your visual call with a clean stat line. Write “mixed-effects modeling” in your report and reviewers smile. Try it on your next reversal graph — it takes one extra column in R and you keep every data point you love.

Free CEUs

Want CEUs on This Topic?

The ABA Clubhouse has 60+ free CEUs — live every Wednesday. Ethics, supervision & clinical topics.

Join Free →
→ Action — try this Monday

Run your last ABAB graph through the free mixed-effects code in the paper and compare the output to your visual call.

02 At a glance

Intervention: not applicable
Design: methodology paper
Finding: not reported

03 Original abstract

Behavior analysis and statistical inference have shared a conflicted relationship for over fifty years. However, a significant portion of this conflict is directed toward statistical tests (e.g., t-tests, ANOVA) that aggregate group and/or temporal variability into means and standard deviations and as a result remove much of the data important to behavior analysts. Mixed-effects modeling, a more recently developed statistical test, addresses many of the limitations of more basic tests by incorporating random effects. Random effects quantify individual subject variability without eliminating it from the model, hence producing a model that can predict both group and individual behavior. We present the results of a generalized linear mixed-effects model applied to single-subject data taken from Ackerlund Brandt, Dozier, Juanico, Laudont, & Mick, 2015, in which children chose from one of three reinforcers for completing a task. Results of the mixed-effects modeling are consistent with visual analyses and importantly provide a statistical framework to predict individual behavior without requiring aggregation. We conclude by discussing the implications of these results and provide recommendations for further integration of mixed-effects models in the analyses of single-subject designs.

Journal of the Experimental Analysis of Behavior, 2019 · doi:10.1002/jeab.507