The influence of the design matrix on treatment effect estimates in the quantitative analyses of single-subject experimental design research.
Pick the design matrix that matches your SSED type, or your meta-analytic effect size will mislead you.
01Research in Context
What this study did
Moeyaert et al. (2014) built a how-to guide for turning single-case graphs into numbers you can pool in a meta-analysis.
They wrote out the exact 0-1 columns you need for multiple-baseline (MBL), reversal, and alternating treatments (ATD) designs so the regression spits out a clean effect size.
The paper is all code and matrices—no kids, no therapy rooms, just the math.
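To make the "0-1 columns" idea concrete, here is a minimal sketch (not the authors' own code) of the simplest level-change model for a single AB phase contrast: the outcome is regressed on an intercept and a 0/1 treatment-phase dummy, and the dummy's coefficient is the effect estimate. The data values are made up for illustration.

```python
import numpy as np

# Hypothetical data: 5 baseline and 5 treatment-phase observations
y = np.array([2, 3, 2, 3, 2, 7, 8, 7, 8, 7], dtype=float)
D = np.array([0] * 5 + [1] * 5, dtype=float)   # 0-1 treatment-phase dummy
X = np.column_stack([np.ones_like(D), D])      # design matrix: [intercept, D]

# Ordinary least squares fit
b, *_ = np.linalg.lstsq(X, y, rcond=None)
print(b)  # b[0] = baseline level (2.4), b[1] = level change (5.0)
```

Richer designs just add columns: one dummy per case for MBL, one per phase reversal for ABAB, one per condition for ATD. The coefficient on each column is the corresponding effect.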
What they found
The right design matrix keeps the effect size honest; the wrong one hides real change or invents fake change.
They give copy-paste templates so you can plug your data in and get a number journals will accept.
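The "wrong matrix invents fake change" claim is easy to demonstrate with a toy example (fabricated numbers, not from the paper): data with a steady baseline trend and zero true treatment effect look like a large effect if the design matrix omits a time predictor.

```python
import numpy as np

t = np.arange(10, dtype=float)
D = (t >= 5).astype(float)        # 0-1 treatment-phase dummy
y = 1.0 + 0.5 * t                 # pure trend, zero true treatment effect

# Misspecified matrix: level change only; the trend masquerades as an effect
X_bad = np.column_stack([np.ones(10), D])
b_bad, *_ = np.linalg.lstsq(X_bad, y, rcond=None)

# Better-specified matrix: a time column keeps the level-change estimate honest
X_ok = np.column_stack([np.ones(10), t, D])
b_ok, *_ = np.linalg.lstsq(X_ok, y, rcond=None)

print(b_bad[1], b_ok[2])  # 2.5 (spurious) vs 0.0 (correct)
```

The reverse failure also happens: omit a needed column and a real effect gets absorbed into the residuals, shrinking the estimate toward zero.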
How this fits with other research
Forty years earlier, Périkel et al. (1974) told us to ditch ANOVA and use Bonferroni-corrected t tests on correlated operant data.
Moeyaert et al. (2014) swap the t test for regression, but the goal is the same: stop the stats from lying to you.
Alsop (2004) and Wirth et al. (2014) also ran computer simulations to warn analysts: Alsop about zero-error bias, Wirth about interval sampling error.
Together they form a family of papers that use fake data to show where real data can trip you up.
Why it matters
If you ever plan to meta-analyze your SSED projects, the design matrix is the hidden lever that decides whether your effect looks huge or null.
Use the ready-made columns from Moeyaert et al. (2014) and you skip the guesswork—and the reviewer headaches.
Download the paper, copy the MBL matrix into your R script, and rerun last quarter's data to check whether the effect size changes.
02At a glance
03Original abstract
The quantitative methods for analyzing single-subject experimental data have expanded during the last decade, including the use of regression models to statistically analyze the data, but still a lot of questions remain. One question is how to specify predictors in a regression model to account for the specifics of the design and estimate the effect size of interest. These quantitative effect sizes are used in retrospective analyses and allow synthesis of single-subject experimental study results which is informative for evidence-based decision making, research and theory building, and policy discussions. We discuss different design matrices that can be used for the most common single-subject experimental designs (SSEDs), namely, the multiple-baseline designs, reversal designs, and alternating treatment designs, and provide empirical illustrations. The purpose of this article is to guide single-subject experimental data analysts interested in analyzing and meta-analyzing SSED data.
Behavior Modification, 2014 · doi:10.1177/0145445514535243