Service Delivery

Autism, Insurance, and Discrimination: The Effect of an Autism Diagnosis on Behavior-Analytic Services

Trump et al. (2020) · Behavior Analysis in Practice
★ The Verdict

Use interrupted time-series on your clinic’s own data to see if program changes really shift client trends—no control group needed.

✓ Read this if you're a BCBA who runs a clinic or supervises billing and wants a cheap, strong way to prove program impact.

✗ Skip if you're an RBT looking for direct-intervention tricks; this is an analytics guide, not a teaching protocol.

01 Research in Context

01

What this study did

Trump et al. (2020) show how to run an interrupted time-series analysis on everyday clinic data. You mark the week a new policy started, then compare client metrics before and after that change point.

The paper uses charts from an outpatient severe-behavior clinic. No extra control group is needed; your own baseline becomes the comparison.
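A minimal sketch of the idea in Python, using only the standard library: fit one trend line to the weeks before the change and another to the weeks after, then compare the slopes and the jump at the change point. The function names and the two-separate-lines shortcut are illustrative only; the paper itself applies a formal interrupted time-series regression, which this simplified stand-in approximates.

```python
from statistics import mean

def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept for one segment."""
    mx, my = mean(xs), mean(ys)
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

def its_summary(values, change_week):
    """Compare the trend before vs. after a policy change.

    values: one metric per week, in order.
    change_week: index of the first post-change week.
    """
    pre_x = list(range(change_week))
    post_x = list(range(change_week, len(values)))
    pre_slope, pre_int = fit_line(pre_x, values[:change_week])
    post_slope, post_int = fit_line(post_x, values[change_week:])
    # Level change: gap at the change point between the projected
    # pre-change trend and the fitted post-change line.
    level_change = (post_int + post_slope * change_week) - \
                   (pre_int + pre_slope * change_week)
    return {"pre_slope": pre_slope, "post_slope": post_slope,
            "slope_change": post_slope - pre_slope,
            "level_change": level_change}
```

A non-zero `slope_change` or `level_change` is the graphical signal the paper describes: the trend line bends or jumps at the week the policy started, rather than drifting the same way it did during baseline.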

02

What they found

The authors do not report new client outcomes. Instead they give a step-by-step recipe you can copy to judge your own program changes.

Graphs in the paper show how trends, not just single scores, reveal whether things really improved after the change point.

03

How this fits with other research

Garikipati et al. (2024), da Silva et al. (2023), and Sappok et al. (2024) all looked back at clinic charts and saw positive gains. Each study used simple pre-post stats; Trump et al. give you a stronger lens—interrupted time-series—to test the same kind of data.

Ostrovsky et al. (2022) ran a quasi-experiment and also found gains, but could not link progress to hours. Plugging Trump’s method into their weekly Vineland points might show exactly when improvement sped up or flattened.

Papatola et al. (2016) and Kornack et al. (2017) teach you how to win insurance approval. Trump et al. complete the circle: once services are funded, you can use interrupted time-series to prove the hours you fought for are actually working.

04

Why it matters

You already collect daily data. With this free tool you can turn those numbers into a publishable graph that payers, parents, and administrators trust. No extra subjects, no wait-list ethics headaches—just honest feedback on whether your policy tweak helped or flopped.

→ Action — try this Monday

Pick one clinic metric, pick one policy start date, graph 8 weeks before and 8 weeks after—see if the slope changes.
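The Monday exercise above can be sketched in plain Python. The weekly numbers here are made up for illustration; swap in your own metric for the 8 weeks on each side of the policy date.

```python
from statistics import mean

def slope(ys):
    """OLS slope of a weekly metric against week number (units per week)."""
    xs = range(len(ys))
    mx, my = mean(xs), mean(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
           sum((x - mx) ** 2 for x in xs)

pre = [12, 12, 11, 13, 12, 13, 12, 13]   # 8 weeks before the policy date (hypothetical)
post = [11, 10, 10, 9, 8, 8, 7, 6]       # 8 weeks after (hypothetical)

print(f"pre slope:  {slope(pre):+.2f} per week")
print(f"post slope: {slope(post):+.2f} per week")
```

If the two slopes differ clearly, and the change lines up with your policy date rather than a gradual drift that was already under way, that is the pattern worth graphing and sharing.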

02 At a glance

Intervention
not applicable
Design
methodology paper
Finding
not reported

03 Original abstract

Program evaluation is an essential practice for providers of behavior analytic services, as it helps providers understand the extent to which they are achieving their intended mission to the community they serve. A proposed method for conducting such evaluations, is through the use of a consecutive case series design, for which cases are sequentially gathered following the onset of a specific occurrence. Given the sequential nature in which data are collected within a consecutive case series, analytic techniques that adopt a time-series framework may be particularly advantageous. Although such methods are commonly used for program evaluation in medicine and economics, their application within the field of applied behavior analysis is largely absent. To serve as a model for providers undertaking evaluation efforts, I conducted a program evaluation of an outpatient severe behavior clinic, in which I employed quasi-experimental methods using an interrupted time-series analysis.

Behavior Analysis in Practice, 2020 · doi:10.1007/s40617-018-00327-0