Assessment & Research

Probability and Visual Aids for Assessing Intervention Effectiveness in Single-Case Designs: A Field Test.

Manolov et al. (2015) · Behavior Modification
★ The Verdict

Two free R tools give you instant, plain-English labels for single-case effects.

✓ Read this if you're a BCBA who writes reports with single-case graphs.
✗ Skip if you're a practitioner who only runs group designs.

01 Research in Context

01

What this study did

Manolov et al. (2015) built two free R tools for single-case graphs.

One tool draws a trend line and confidence band. The other turns any effect metric into a probability label like "small" or "large."

They applied both tools and unaided visual inspection to the same set of graphs to see how closely the methods agreed.
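The split-middle trend line behind the visual aid is a standard single-case technique: halve the baseline, find the median point of each half, and connect them. Here is a minimal Python sketch of that idea; the function name and details are mine, not the authors' R code, which also adds a variability band around the projected line.

```python
import statistics

def split_middle_trend(y):
    """Fit a split-middle trend line to a baseline series.

    Splits the series into halves (dropping the middle point when the
    length is odd), takes the median point (median session index,
    median value) of each half, and returns the slope and intercept of
    the line through those two points.
    """
    n = len(y)
    first, second = y[:n // 2], y[(n + 1) // 2:]
    x1 = statistics.median(range(n // 2))           # median index, first half
    y1 = statistics.median(first)
    x2 = statistics.median(range((n + 1) // 2, n))  # median index, second half
    y2 = statistics.median(second)
    slope = (y2 - y1) / (x2 - x1)
    intercept = y1 - slope * x1
    return slope, intercept

# Project the baseline trend into the intervention phase
baseline = [4, 5, 3, 6, 5, 7]
slope, intercept = split_middle_trend(baseline)
projected = [intercept + slope * t for t in range(len(baseline), len(baseline) + 4)]
```

Intervention-phase points falling clearly above (or below) the projected line suggest a change beyond the baseline trend; the published tool formalizes "clearly" with a band based on data variability.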

02

What they found

The probability converter distinguished smaller from larger effects somewhat better than the visual aid.

The two tools agreed more with each other and with the computed effect indices than with visual inspection, so either one gives you a consistent quantitative check.

The tools are still online and still free.

03

How this fits with other research

Wolfe et al. (2023) extend the same idea with new free software. Their modified Brinley plot helps you judge whether an effect replicates across children or behaviors.

Saini et al. (2018) and Sunde et al. (2022) push the visual-inspection route instead. They show that tight checklists can cut functional analysis (FA) length by about 12 sessions while keeping accuracy near 98%.

So one stream gives you numbers; the other gives you faster stop rules. You can mix both.

04

Why it matters

You no longer have to eyeball alone. Download the R tools when you need a quick effect label for a report or graph. Pair them with structured visual-inspection checklists if you want to stop data collection early. Both paths are free, fast, and peer-reviewed.

→ Action — try this Monday

Run the probability converter on last week’s graph and paste the effect label into your note.
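To get a feel for the labeling step before downloading the authors' R code, here is a toy Python version. The cutpoints below are placeholders of my own, not the thresholds Manolov et al. publish; use their tool for report-ready labels.

```python
def effect_label(probability):
    """Map an effect metric expressed as a probability (0-1) to a
    plain-English label.

    Cutpoints are illustrative only, not the published thresholds.
    """
    if probability < 0.60:
        return "no/small effect"
    elif probability < 0.75:
        return "medium effect"
    return "large effect"

print(effect_label(0.82))  # large effect
```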

02 At a glance

Intervention
not applicable
Design
methodology paper
Finding
not reported

03 Original abstract

Single-case data analysis still relies heavily on visual inspection, and, at the same time, it is not clear to what extent the results of different quantitative procedures converge in identifying an intervention effect and its magnitude when applied to the same data; this is the type of evidence provided here for two procedures. One of the procedures, included due to the importance of providing objective criteria to visual analysts, is a visual aid fitting and projecting split-middle trend while taking into account data variability. The other procedure converts several different metrics into probabilities making their results comparable. In the present study, we explore to what extent these two procedures coincide in the magnitude of intervention effect taking place in a set of studies stemming from a recent meta-analysis. The procedures concur to a greater extent with the values of the indices computed and with each other and, to a lesser extent, with our own visual analysis. For distinguishing smaller from larger effects, the probability-based approach seems somewhat better suited. Moreover, the results of the field test suggest that the latter is a reasonably good mechanism for translating different metrics into similar labels. User friendly R code is provided for promoting the use of the visual aid, together with a quantification based on nonoverlap and the label provided by the probability approach.

Behavior Modification, 2015 · doi:10.1177/0145445515593512