Assessment & Research

A survey of residual analysis and a new test of residual trend

McDowell et al. (2016) · Journal of the Experimental Analysis of Behavior
★ The Verdict

Run the new cubic-polynomial sign-test to be sure the leftover points around your trend line are random, not hiding a second trend.

✓ Read this if you're a BCBA who publishes or reviews single-case graphs.
✗ Skip if you're a practitioner who only uses group designs.

01 · Research in Context

01

What this study did

McDowell et al. (2016) built a new test for single-case graphs. It checks whether the leftover points around your trend line (the residuals) are truly random.

They call it the cubic-polynomial sign-test. You run it after fitting any model to your data.
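The logic, as the abstract below describes it, can be sketched in Python. This is a minimal illustration assuming numpy; the function names and simulation count are my own, not the authors' published code. The idea: fit a cubic polynomial to a residual series, take its R² as the effect size, build a benchmark by simulating random standardized residuals of the same length, then compare with a sign test.

```python
# Sketch of the cubic-polynomial residual test (illustrative names,
# not the authors' code).
from math import comb
import numpy as np

def cubic_r2(residuals):
    """Effect size (R^2) of a cubic polynomial fitted to a residual series."""
    x = np.arange(len(residuals))
    fitted = np.polyval(np.polyfit(x, residuals, 3), x)
    ss_res = np.sum((residuals - fitted) ** 2)
    ss_tot = np.sum((residuals - np.mean(residuals)) ** 2)
    return 1 - ss_res / ss_tot

def random_median_r2(n, n_sims=2000, seed=None):
    """Median R^2 of cubic fits to random standardized residuals of size n."""
    rng = np.random.default_rng(seed)
    return float(np.median([cubic_r2(rng.standard_normal(n))
                            for _ in range(n_sims)]))

def sign_test_p(n_above, n_sets):
    """One-sided binomial p-value: chance that n_above or more of n_sets
    effect sizes exceed the random median under the null (p = 0.5)."""
    return sum(comb(n_sets, k) for k in range(n_above, n_sets + 1)) / 2 ** n_sets
```

With several sets of experimental residuals, you count how many have a cubic R² above the random-residual median and feed that count to `sign_test_p`; a small p-value suggests a leftover trend the model missed.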

02

What they found

The test catches subtle leftover trends that visual inspection and older rules of thumb often miss.

If the test is significant, your model is not done—you need more predictors or a different curve.

03

How this fits with other research

Older papers like Dodd (1984) already warned that classic stats can mislead. McDowell gives you a concrete next step to fix the problem.

Manolov et al. (2022) also offer new single-case tools, but they focus on how consistent your data look across phases. McDowell focuses on what is left after the model, not the raw pattern.

DeHart et al. (2019) show mixed-effects modeling as another path. You can use both: mixed-effects for the main fit, McDowell’s test to double-check the leftovers.

04

Why it matters

Next time you graph a client’s behavior, run the cubic-polynomial sign-test before you call the intervention a success. It takes five minutes in Excel and shields you from false positives. Clean residuals mean you can trust your visual sweep and defend the data in treatment reviews.

→ Action — try this Monday

After you fit any trend line, fit a cubic polynomial to the residuals and compare its effect size to the benchmark for random residuals—if the sign test is significant, tweak the model before you show the graph.
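Here is a self-contained end-to-end sketch of that Monday workflow, on made-up session data with a hidden curvature. All data and names are hypothetical; the paper compares median effect sizes across multiple residual sets via a sign test, while this shows a single set for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical session data: a linear trend plus hidden curvature.
sessions = np.arange(20)
behavior = 2.0 * sessions + 0.05 * (sessions - 10) ** 3 + rng.normal(0, 2, 20)

# Step 1: fit the model you would normally report (a straight line here).
slope, intercept = np.polyfit(sessions, behavior, 1)
residuals = behavior - (slope * sessions + intercept)

# Step 2: effect size (R^2) of a cubic polynomial fitted to the residuals.
def cubic_r2(vals):
    x = np.arange(len(vals))
    fit = np.polyval(np.polyfit(x, vals, 3), x)
    return 1 - np.sum((vals - fit) ** 2) / np.sum((vals - vals.mean()) ** 2)

r2 = cubic_r2(residuals)

# Step 3: benchmark -- median R^2 of cubic fits to random residuals of
# the same length.
random_median = np.median(
    [cubic_r2(rng.standard_normal(len(residuals))) for _ in range(2000)]
)

print(f"cubic R^2 on residuals: {r2:.3f}, random benchmark: {random_median:.3f}")
```

If your R² sits well above the random benchmark (as it does here, because the straight line missed the curvature), the residuals are hiding structure and the model needs another term before the graph goes in a report.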

02 · At a glance

Intervention: not applicable
Design: methodology paper
Finding: not reported

03 · Original abstract

A survey of residual analysis in behavior-analytic research reveals that existing methods are problematic in one way or another. A new test for residual trends is proposed that avoids the problematic features of the existing methods. It entails fitting cubic polynomials to sets of residuals and comparing their effect sizes to those that would be expected if the sets of residuals were random. To this end, sampling distributions of effect sizes for fits of a cubic polynomial to random data were obtained by generating sets of random standardized residuals of various sizes, n. A cubic polynomial was then fitted to each set of residuals and its effect size was calculated. This yielded a sampling distribution of effect sizes for each n. To test for a residual trend in experimental data, the median effect size of cubic-polynomial fits to sets of experimental residuals can be compared to the median of the corresponding sampling distribution of effect sizes for random residuals using a sign test. An example from the literature, which entailed comparing mathematical and computational models of continuous choice, is used to illustrate the utility of the test.

Journal of the Experimental Analysis of Behavior, 2016 · doi:10.1002/jeab.208