Assessment & Research

From Bayes through marginal utility to effect sizes: a guide to understanding the clinical and statistical significance of the results of autism research findings.

Cicchetti et al. (2011) · Journal of Autism and Developmental Disorders
★ The Verdict

Effect sizes tell you if an autism intervention matters in real life—demand them in every paper you read.

✓ Read this if you're a BCBA who picks interventions or trains staff
✗ Skip if you're an RBT who only runs already-approved plans

01 Research in Context

01

What this study did

Cicchetti et al. (2011) wrote a how-to guide. They show how to turn autism study numbers into real-world meaning.

The paper walks readers from p-values to effect sizes to clinical payoff. No new data—just clear rules.

02

What they found

The authors found that most autism papers stop at “significant.” They say that is not enough.

You need effect size, cost, and client value to judge if an intervention is worth using.

03

How this fits with other research

Tromans et al. (2018) looked at 529 autism trials and saw most are tiny. Cicchetti et al. (2011) already warned that tiny samples yield unstable effect-size estimates; same worry, different angle.

Provenzani et al. (2020) counted 327 different outcome tools in 406 trials. Cicchetti et al. (2011) say this chaos hides true effect sizes; the two papers together push for common metrics.

Stewart et al.'s (2018) meta-analysis shows parent training yields small but real gains. Cicchetti et al. (2011) give the ruler that turns those small numbers into "yes, clients will notice" decisions.

04

Why it matters

Next time you read an autism study, skip the p-value paragraph. Jump straight to the effect-size table. If authors did not report it, email them. Demand Cohen’s d or percentage of non-overlapping data. Tell supervisors, “Small effect with big cost equals don’t adopt.” Make effect size your gatekeeper for new programs.
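If a paper reports group means and standard deviations but no effect size, you can compute Cohen's d yourself. Here is a minimal sketch (the function name and sample scores are hypothetical, not from the paper): d is the difference between group means divided by the pooled standard deviation.

```python
import math

def cohens_d(group1, group2):
    """Cohen's d: mean difference divided by the pooled standard deviation."""
    n1, n2 = len(group1), len(group2)
    m1 = sum(group1) / n1
    m2 = sum(group2) / n2
    # Sample variances (n - 1 in the denominator)
    v1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Hypothetical post-intervention scores
treatment = [12, 15, 14, 16, 13]
control = [10, 11, 9, 12, 10]
print(round(cohens_d(treatment, control), 2))  # 2.61
```

By Cohen's rough conventions, 0.2 is small, 0.5 medium, and 0.8 large; pair the number with cost and client value before deciding to adopt.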

→ Action — try this Monday

Open your last five journal articles and circle the effect-size number—if you can’t find it, flag the study as weak evidence.
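For single-case designs, the effect-size number to circle is often percentage of non-overlapping data (PND): the share of treatment-phase points that beat the most extreme baseline point. A minimal sketch, with hypothetical session data:

```python
def pnd(baseline, treatment, increase_is_good=True):
    """Percentage of non-overlapping data: share of treatment-phase
    points beyond the most extreme baseline point."""
    if increase_is_good:
        threshold = max(baseline)
        non_overlap = sum(1 for x in treatment if x > threshold)
    else:
        threshold = min(baseline)
        non_overlap = sum(1 for x in treatment if x < threshold)
    return 100 * non_overlap / len(treatment)

# Hypothetical per-session counts of a target skill
baseline = [2, 3, 2, 4]
treatment = [5, 6, 4, 7, 8]
print(pnd(baseline, treatment))  # 80.0: 4 of 5 points exceed the best baseline
```

A common rule of thumb reads PND above 90 as highly effective and below 50 as ineffective, but as with Cohen's d, weigh it against cost and client value.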

02 At a glance

Intervention
not applicable
Design
theoretical
Finding
not reported

03 Original abstract

The objectives of this report are: (a) to trace the theoretical roots of the concept clinical significance that derives from Bayesian thinking, Marginal Utility/Diminishing Returns in Economics, and the "just noticeable difference", in Psychophysics. These concepts then translated into: Effect Size (ES), strength of agreement, clinical significance, and related concepts, and made possible the development of Power Analysis; (b) to differentiate clinical significance from statistical significance; and (c) to demonstrate the utility of measures of ES and related concepts for enhancing the meaning of Autism research findings. These objectives are accomplished by applying criteria for estimating clinical significance, and related concepts, to a number of areas of autism research.

Journal of Autism and Developmental Disorders, 2011 · doi:10.1007/s10803-010-1035-6