Assessment & Research

Development of the Evaluative Method for Evaluating and Determining Evidence-Based Practices in Autism.

Reichow et al. (2008) · Journal of Autism and Developmental Disorders
★ The Verdict

Use this rating tool to screen autism intervention studies in minutes.

✓ Read this if you're a BCBA who picks interventions and trains staff.
✗ Skip if you're an RBT who only runs already-written programs.

01 Research in Context

01

What this study did

Reichow et al. (2008) built a new rating tool. The tool scores how strong the evidence is for any autism intervention.

It gives clear rules. You plug in study details. The tool tells you if the evidence is weak, adequate, or strong.
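To make that "plug in details, get a rating" step concrete, here is a rough sketch of that kind of rating rule in code. The indicator counts and thresholds below are illustrative placeholders, not the paper's exact rubric; check Reichow et al. (2008) for the real criteria.

```python
# Hypothetical sketch of a Reichow-style rating rule.
# Thresholds here are illustrative, not the paper's exact cutoffs.

def rate_study(primary_scores, secondary_present):
    """primary_scores: ratings of 'high' / 'acceptable' / 'unacceptable'
    for each primary quality indicator.
    secondary_present: count of secondary quality indicators shown."""
    # Strong evidence: every primary indicator rated high,
    # plus several secondary indicators present.
    if all(s == "high" for s in primary_scores) and secondary_present >= 4:
        return "strong"
    # Adequate: mostly high primaries, none unacceptable,
    # and at least a couple of secondary indicators.
    if (sum(s == "high" for s in primary_scores) >= 4
            and "unacceptable" not in primary_scores
            and secondary_present >= 2):
        return "adequate"
    # Anything else falls to weak.
    return "weak"

print(rate_study(["high"] * 6, 5))                        # strong
print(rate_study(["high", "high", "high", "high",
                  "acceptable"], 3))                      # adequate
print(rate_study(["high", "unacceptable", "high"], 5))    # weak
```

The point is the shape of the tool, not the numbers: fixed indicators, fixed cutoffs, one of three verdicts — no guessing.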

02

What they found

The paper does not test kids. It tests the tool itself. The authors show, step by step, how to use it.

03

How this fits with other research

Lord et al. (2005) came first. That team said, "We need tougher standards." Reichow's tool answers that call.

Robinson et al. (2011) used the tool and added a warning. They said, "Adult therapy rules don’t fit kids with autism." The two papers work as a pair: one gives the ruler, the other tells you where not to use it.

LaPoint et al. (2025) push the idea further. They now want every autism study to register its plan before it starts. Reichow's 2008 tool is still useful, but you should add LaPoint's new check-boxes for 2025-level quality.

04

Why it matters

You now have a quick score sheet instead of guessing if a study is good. Before you try a new intervention, run the article through Reichow's steps. If the score is low, keep looking. Your clients deserve practices that pass a clear bar.

Free CEUs

Want CEUs on This Topic?

The ABA Clubhouse has 60+ free CEUs — live every Wednesday. Ethics, supervision & clinical topics.

Join Free →
→ Action — try this Monday

Print the Reichow checklist and score the next autism intervention article you planned to share with parents.

02 At a glance

Intervention
Not applicable
Design
Methodology paper
Finding
Not reported

03 Original abstract

Although research in autism has grown more sophisticated, the gap between research knowledge and applicability of research in real world settings has grown. There have been a number of different reviews of evidence-based practices of treatments for young children with autism. Reviews which have critically evaluated the empirical evidence have not found any treatments that can be considered evidence-based. Reasons for this shortcoming are explored, and a new method for the evaluation of empirical evidence is provided. Future uses of this evaluative method are provided as well as a discussion of how this tool might aid in narrowing the research to practice gap.

Journal of Autism and Developmental Disorders, 2008 · doi:10.1007/s10803-007-0517-7