Assessment & Research

Assessing functional relations in single-case designs: quantitative proposals in the context of the evidence-based movement.

Manolov et al. (2014) · Behavior Modification, 2014
★ The Verdict

Drop Manolov's split-middle trend and bounce check onto your next single-case graph to back up your visual call.

✓ Read this if you're a BCBA writing single-case studies for publication or grant review.
✗ Skip if you're a clinician who only reads graphs and never writes them.

01Research in Context

01

What this study did

Manolov et al. (2014) wrote a how-to paper for single-case researchers. They asked, "How can we add numbers to our graphs so journals trust our visual calls?"

The team reviewed four quantification options and tested them on simulated ABAB and multiple-baseline data. The best performer paired a split-middle trend line projected from baseline with a simple check of how much the data bounce around it, using a variability band borrowed from exploratory data analysis. They also supply ready-to-use code for open-source software so any single-case researcher can run the numbers.
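To make the idea concrete, here is a minimal Python sketch of the trend-plus-bounce check. It is not the authors' published code: the split-middle line follows the standard half-medians construction, and the variability band is assumed here to be the interquartile range of the baseline residuals, a stand-in for the exploratory-data-analysis band the paper describes. The AB data are hypothetical.

```python
import numpy as np

def split_middle_trend(baseline):
    """Split-middle (half-medians) trend line fitted to the baseline phase."""
    y = np.asarray(baseline, dtype=float)
    x = np.arange(1, len(y) + 1)
    half = len(y) // 2
    # Median session index and median value for each half of the baseline
    x1, y1 = np.median(x[:half]), np.median(y[:half])
    x2, y2 = np.median(x[half:]), np.median(y[half:])
    slope = (y2 - y1) / (x2 - x1)
    intercept = y1 - slope * x1
    return slope, intercept

def bounce_check(baseline, treatment, band_width=None):
    """Project the baseline trend into treatment and count points outside a band.

    band_width defaults to the interquartile range of the baseline residuals,
    an assumed stand-in for the paper's exploratory-data-analysis band.
    """
    slope, intercept = split_middle_trend(baseline)
    nb, nt = len(baseline), len(treatment)
    x_base = np.arange(1, nb + 1)
    x_treat = np.arange(nb + 1, nb + nt + 1)
    residuals = np.asarray(baseline, dtype=float) - (intercept + slope * x_base)
    if band_width is None:
        q75, q25 = np.percentile(residuals, [75, 25])
        band_width = q75 - q25
    projected = intercept + slope * x_treat
    outside = np.abs(np.asarray(treatment, dtype=float) - projected) > band_width
    return int(outside.sum()), nt, band_width

# Hypothetical AB data: responses per session
baseline = [8, 7, 9, 8, 10, 9]
treatment = [6, 5, 4, 4, 3, 2, 2]
n_out, n_total, band = bounce_check(baseline, treatment)
print(f"{n_out} of {n_total} treatment points fall outside a band of about ±{band:.1f}")
```

Because the baseline trend is projected forward, the check is conservative: a treatment point only counts as change if it escapes both the trend you would have expected anyway and the normal bounce around it.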

02

What they found

No technique was flawless, but the combination of a projected split-middle trend plus a variability band gave the clearest yes/no answers across most simulated data patterns. The authors walk through the steps and supply code so readers can copy them.

03

How this fits with other research

Barton et al. (2019) later tried the same idea on real parent-FA graphs. They found NAP and Tau-U often disagreed with expert visual calls. Their message: pick one label system and stick to it, just as Manolov and colleagues urged.

Dowdy et al. (2022) took the next step. They built R scripts that spot publication bias in single-case meta-analyses. Both papers share one goal: make our evidence harder to dismiss.

Matson et al. (2011) mapped 173 FA studies that could all benefit from Manolov's trend-plus-bounce check. None had used it, leaving a gap you can fill.

04

Why it matters

Journals now want numbers alongside your visual analysis. Adding Manolov's split-middle trend and bounce check takes only a few minutes. It gives reviewers a clear rule and saves you from endless revise-and-resubmit loops.

→ Action — try this Monday

Open your last AB graph, fit a split-middle trend line to the baseline, project it into the intervention phase, count the intervention points that fall outside the baseline bounce range, and note that range in your session notes.
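If you want a number for that bounce range rather than an eyeballed one, a few lines of Python will do it (again assuming an IQR-style band around the split-middle trend, which may differ from the paper's exact rule):

```python
import numpy as np

baseline = [8, 7, 9, 8, 10, 9]  # replace with your own baseline data points
y = np.asarray(baseline, dtype=float)
x = np.arange(1, len(y) + 1)
half = len(y) // 2

# Split-middle trend through the medians of each half of the baseline
slope = (np.median(y[half:]) - np.median(y[:half])) / (np.median(x[half:]) - np.median(x[:half]))
intercept = np.median(y[:half]) - slope * np.median(x[:half])

# Bounce range: interquartile range of the baseline residuals around the trend
residuals = y - (intercept + slope * x)
q75, q25 = np.percentile(residuals, [75, 25])
print(f"Bounce range around the split-middle trend: about {q75 - q25:.1f} units")
```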

02At a glance

Intervention
not applicable
Design
methodology paper
Finding
split-middle trend projection plus a variability band performed best for most simulated data patterns (no single numeric effect size reported)

03Original abstract

In the context of the evidence-based practices movement, the emphasis on computing effect sizes and combining them via meta-analysis does not preclude the demonstration of functional relations. For the latter aim, we propose to augment the visual analysis to add consistency to the decisions made on the existence of a functional relation without losing sight of the need for a methodological evaluation of what stimuli and reinforcement or punishment are used to control the behavior. Four options for quantification are reviewed, illustrated, and tested with simulated data. These quantifications include comparing the projected baseline with the actual treatment measurements, on the basis of either parametric or nonparametric statistics. The simulated data used to test the quantifications include nine data patterns in terms of the presence and type of effect and comprise ABAB and multiple-baseline designs. Although none of the techniques is completely flawless in terms of detecting a functional relation only when it is present but not when it is absent, an option based on projecting split-middle trend and considering data variability as in exploratory data analysis proves to be the best performer for most data patterns. We suggest that the information on whether a functional relation has been demonstrated should be included in meta-analyses. It is also possible to use as a weight the inverse of the data variability measure used in the quantification for assessing the functional relation. We offer an easy to use code for open-source software for implementing some of the quantifications.

Behavior Modification, 2014 · doi:10.1177/0145445514545679