From Bayes through marginal utility to effect sizes: a guide to understanding the clinical and statistical significance of the results of autism research findings.
Effect sizes tell you whether an autism intervention matters in real life. Demand them in every paper you read.
01 Research in Context
What this study did
Ekas et al. (2011) wrote a how-to guide. They show how to turn autism study numbers into real-world meaning.
The paper walks readers from p-values to effect sizes to clinical payoff. No new data—just clear rules.
What they found
The authors found that most autism papers stop at “significant.” They say that is not enough.
You need effect size, cost, and client value to judge if an intervention is worth using.
How this fits with other research
Tromans et al. (2018) looked at 529 autism trials and found that most were tiny. Ekas et al. (2011) had already warned that tiny trials can flunk effect-size tests—same worry, different angle.
Provenzani et al. (2020) counted 327 different outcome tools in 406 trials. Ekas et al. (2011) say this chaos hides true effect sizes; the two papers together push for common metrics.
Stewart et al.'s (2018) meta-analysis shows that parent training yields small but real gains. Ekas et al. (2011) provide the ruler that turns those small numbers into "yes, clients will notice" decisions.
Why it matters
Next time you read an autism study, skip the p-value paragraph. Jump straight to the effect-size table. If authors did not report it, email them. Demand Cohen’s d or percentage of non-overlapping data. Tell supervisors, “Small effect with big cost equals don’t adopt.” Make effect size your gatekeeper for new programs.
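The two metrics named above take only a few lines to compute. Here is a minimal sketch in Python with made-up illustrative data; the function names and numbers are my own, not drawn from Ekas et al. (2011):

```python
import statistics

def cohens_d(treatment, control):
    """Cohen's d: mean difference divided by the pooled standard deviation."""
    n1, n2 = len(treatment), len(control)
    s1, s2 = statistics.stdev(treatment), statistics.stdev(control)
    pooled = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (statistics.mean(treatment) - statistics.mean(control)) / pooled

def pnd(baseline, intervention):
    """Percentage of non-overlapping data: share of intervention points
    above the highest baseline point (assumes higher scores are better)."""
    ceiling = max(baseline)
    return 100 * sum(x > ceiling for x in intervention) / len(intervention)

# Hypothetical session scores, for illustration only
print(cohens_d([6, 7, 8], [3, 4, 5]))   # large effect: d = 3.0
print(pnd([2, 3, 4], [5, 6, 4, 7]))     # 3 of 4 points clear baseline: 75.0
```

By common rules of thumb, d ≈ 0.2 is small, 0.5 medium, and 0.8 large; a small d attached to a costly program is exactly the "don't adopt" case described above.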
Open your last five journal articles and circle the effect-size number; if you can't find it, flag the study as weak evidence.
02 At a glance
03 Original abstract
The objectives of this report are: (a) to trace the theoretical roots of the concept clinical significance that derives from Bayesian thinking, Marginal Utility/Diminishing Returns in Economics, and the "just noticeable difference", in Psychophysics. These concepts then translated into: Effect Size (ES), strength of agreement, clinical significance, and related concepts, and made possible the development of Power Analysis; (b) to differentiate clinical significance from statistical significance; and (c) to demonstrate the utility of measures of ES and related concepts for enhancing the meaning of Autism research findings. These objectives are accomplished by applying criteria for estimating clinical significance, and related concepts, to a number of areas of autism research.
Journal of Autism and Developmental Disorders, 2011 · doi:10.1007/s10803-010-1035-6