Multilevel Model Selection Applied to Single-Case Experimental Design Data
Write your multilevel model recipe before you open the data file.
01 Research in Context
What this study did
Manolov et al. (2025) wrote a how-to guide for choosing multilevel models in single-case experimental design (SCED) research.
They list the model components you should decide on before you see the data.
These include autocorrelation, nonlinear time trends, and participant-level moderator terms.
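To make the pre-specification idea concrete, here is a minimal, hypothetical Python sketch (every term name and rationale below is invented for illustration, not taken from the paper) that records each candidate model component alongside the reason it belongs, then assembles a model formula from only those pre-registered terms:

```python
# Hypothetical pre-registered components for a two-level SCED model
# (measurements nested within participants). All names are illustrative.
preregistered_terms = {
    "phase":      "Immediate level change at intervention onset (core question)",
    "time":       "Baseline trend expected from prior studies of this behavior",
    "phase:time": "Intervention may change the slope, not just the level",
    "I(time**2)": "Pilot data suggested a curved (nonlinear) time trend",
    "age":        "Participant age hypothesized to moderate effectiveness",
}

# Residual-structure choices are written down in advance too,
# not picked after peeking at the data.
residual_structure = "AR(1) autocorrelation within participants"

def build_formula(outcome, terms):
    """Assemble a model formula string from the pre-registered terms only."""
    return f"{outcome} ~ " + " + ".join(terms)

formula = build_formula("score", preregistered_terms)
print(formula)
print("Residual structure:", residual_structure)
for term, rationale in preregistered_terms.items():
    print(f"  {term}: {rationale}")
```

Pasting the printed plan into a pre-registration or manuscript gives reviewers the term-by-term justification the paper asks for; the formula string itself could then be handed to whatever mixed-model routine the analyst prefers.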
What they found
The paper gives a checklist, not new numbers.
It tells analysts to write down each term and say why it belongs.
This keeps you from data-dredging and boosts reviewer trust.
How this fits with other research
Carlin et al. (2022) also ran simulations on SCED analysis tools.
They recommended Tau for judging whether an effect exists and RD/g for gauging how big it is.
Manolov et al. add: pick your model family first, then pick your effect-size measure.
Lanovaz et al. (2020) tried machine learning to outperform visual analysis.
Manolov et al. do not oppose such tools; they just want the model pre-registered first.
Dowdy et al. (2021) reviewed meta-analysis choices.
Manolov’s rules make the data you feed into any meta-analysis cleaner.
Why it matters
Next time you run a single-case study, open a blank doc.
List every model term you plan to test.
Add one sentence of theory for each.
Paste this plan into your IRB protocol and your manuscript.
Reviewers will smile, your stats will be transparent, and future meta-analysts will thank you.
Draft a one-page analysis plan that lists each model term and your reason for it.
02 At a glance
03 Original abstract
Multilevel modeling is a promising approach that can be applied to evaluate intervention effectiveness and explain variability in intervention effectiveness in single-case experimental design research. This approach is recommended as it accounts for the nested data structure and allows to model complexities such as autocorrelation, nonlinear time trends and the inclusion of participant characteristics as moderators to account for variability in intervention effectiveness. Therefore, choices need to be made to select the most appropriate multilevel model given the research question and the need to model (some or all) complexities. This brief commentary provides criteria that can be used to inform appropriate model selection and ends with a recommendation for best practices for model building. Our hope is to further enhance the understanding of the appropriateness and applicability of the multilevel modeling approach to analyze single-case experimental data.
Journal of Behavioral Education, 2025 · doi:10.1007/s10864-025-09593-9