Violations of Assumptions in School-Based Single-Case Data: Implications for the Selection and Interpretation of Effect Sizes.
Always screen single-case data for autocorrelation and trend before quoting an effect size.
01Research in Context
What this study did
Solomon (2014) looked at real school data from single-case studies. He checked if the numbers followed the rules most effect-size formulas need.
The study hunted for two rule-breakers: autocorrelation (each score echoing the ones just before it) and trend (a steady climb or fall across sessions).
What they found
Lots of school data sets broke both rules. When assumptions break, effect sizes can lie.
In plain words, your shiny number may say “huge win” when the change is really small, or miss a true win entirely.
How this fits with other research
Malone (1999) already told us to welcome statistics, not fear them. Solomon shows the next step: test the numbers before you trust them.
Kyonka et al. (2019) saw more stats creeping into JEAB papers, but Solomon warns that using the stats wrongly is worse than not using them.
McCabe et al. (2025) found another shaky assumption (interactive effects in synthesized functional analyses), showing that rule-checking matters across all kinds of behavior data.
Why it matters
Before you drop an effect size into a report, plot your data. Run a quick autocorrelation check. If you see drift or slope, pick a tool built for messy data or note the limit. Your readers, and future meta-analyses, will thank you.
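That quick check can be scripted. A minimal sketch in plain Python (the data are hypothetical; the cutoffs are illustrative, not from the paper): compute the lag-1 autocorrelation and the least-squares slope of a phase before picking an effect size.

```python
# Screen one phase of single-case data for autocorrelation and trend.
# Pure-Python sketch with hypothetical session scores.

def lag1_autocorrelation(scores):
    """Lag-1 autocorrelation: how strongly each score echoes the previous one."""
    n = len(scores)
    mean = sum(scores) / n
    num = sum((scores[t] - mean) * (scores[t - 1] - mean) for t in range(1, n))
    den = sum((s - mean) ** 2 for s in scores)
    return num / den

def slope(scores):
    """Least-squares slope of scores regressed on session number (trend)."""
    n = len(scores)
    x_mean = (n - 1) / 2
    y_mean = sum(scores) / n
    num = sum((x - x_mean) * (y - y_mean) for x, y in enumerate(scores))
    den = sum((x - x_mean) ** 2 for x in range(n))
    return num / den

baseline = [4, 5, 5, 6, 7, 8, 8, 9]  # hypothetical baseline phase
r1 = lag1_autocorrelation(baseline)
b = slope(baseline)
print(f"lag-1 autocorrelation = {r1:.2f}, slope = {b:.2f} per session")
```

If either value looks nontrivial, note it in the report and reach for an effect size validated under those conditions.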
Quick check: open your last single-case graph in Excel and run =CORREL() between session number and outcome. If r > .20, flag the trend before calculating any effect size.
02At a glance
03Original abstract
A wide variety of effect sizes (ESs) has been used in the single-case design literature. Several researchers have "stress tested" these ESs by subjecting them to various degrees of problem data (e.g., autocorrelation, slope), resulting in the conditions by which different ESs can be considered valid. However, on the back end, few researchers have considered how prevalent and severe these problems are in extant data and as a result, how concerned applied researchers should be. The current study extracted and aggregated indicators of violations of normality and independence across four domains of educational study. Significant violations were found in total and across fields, including low levels of autocorrelation and moderate levels of absolute trend. These violations affect the selection and interpretation of ESs at the individual study level and for meta-analysis. Implications and recommendations are discussed.
Behavior Modification, 2014 · doi:10.1177/0145445513510931