Evaluating evidence-based practice in light of the boundedness and proximity of outcomes: Capturing the scope of change.
Stop calling an intervention 'effective' until the data show the skill works outside the teaching setting.
Research in Context
What this study did
Sandbank et al. (2021) wrote a commentary for autism researchers. They warn that most studies track only the skills that were directly taught. The paper presents no new data; instead, it urges reviewers to look for change that spreads beyond the training room.
The authors propose two quick checks: Is the outcome 'bounded' to the exact task that was taught? Is it 'proximal' to the teaching setting? If the answer to both is yes, the evidence for meaningful change is weak.
What they found
The authors argue that current evidence-based practice (EBP) criteria reward narrow scores. A child may master a picture card in the clinic yet show no gain at recess, but that card score still counts as 'success'. The paper warns that this keeps autism services stuck in small boxes.
How this fits with other research
Castro et al. (2019) back the warning. They read 1,500 English school plans and rated most goals 'non-functional'. The plans echo Sandbank et al.'s concern: teams write goals they can measure, not goals that matter.
Hume et al. (2018) offer a fix. Their survey shows that parents, teachers, and teens each see different skill gaps. Using all three views widens the target, matching Sandbank et al.'s call for broader outcomes.
Bottema-Beutel et al. (2021) add a twist. They found that 84% of the ABA autism studies they examined carried clinical conflicts of interest, most of them undisclosed. Sandbank et al. ask for cleaner outcomes; Bottema-Beutel et al. show that the same literature obscures messy motives. Together, the papers push readers to question both the data and the reasons it was collected.
Why it matters
Next time you pick an 'evidence-based' program, pause and ask: Did the study show the skill in use in the lunch room? Did parents see change at home? If the only win is a perfect score during table-top trials, keep looking. Demand goals that travel. Train for recess, not just the clinic corner.
Open your last three goal sheets. Circle any goal that can only be measured at the teaching table. Rewrite at least one so it can be tracked in a new place, with new people.
Original abstract
Evidence-based practice (EBP) reviews abound in early childhood autism intervention research. These reviews seek to describe and evaluate the evidence supporting the use of specific educational and clinical practices, but give little attention to evaluating intervention outcomes in terms of the extent to which they reflect change that extends beyond the exact targets and contexts of intervention. We urge consideration of these outcome characteristics, which we refer to as "proximity" and "boundedness," as key criteria in evaluating and describing the scope of change effected by EBPs, and provide an overview and illustration of these concepts as they relate to early childhood autism intervention research. We hope this guidance will assist future researchers in selecting and evaluating intervention outcomes, as well as in making important summative determinations of the evidence base for this population. LAY SUMMARY: Recent reviews have come to somewhat different conclusions regarding the evidence base for interventions geared toward autistic children, perhaps because such reviews vary in the degree to which they consider the types of outcome measures used in past studies testing the effects of treatments. Here, we provide guidance regarding characteristics of outcome measures that research suggests are particularly important to consider when evaluating the extent to which an intervention constitutes "evidence-based practice."
Autism Research: Official Journal of the International Society for Autism Research, 2021