Assessment & Research

An Algorithm to Evaluate Methodological Rigor and Risk of Bias in Single-Case Studies.

Perdices et al. (2023) · Behavior Modification
★ The Verdict

A new RoBiNT algorithm sorts single-case studies into six rigor levels—use it to weed out weak evidence before you trust the results.

✓ Read this if you're a BCBA who writes evidence reviews or picks interventions for school or clinic.
✗ Skip if you only use group-design findings.

01 Research in Context

01

What this study did

Perdices et al. (2023) built an algorithmic checklist that scores how strong any single-case study is.

They fed 46 published single-case designs into the tool.

The algorithm sorts each study into one of six rigor levels, from "Very Low" to "Very High," using the RoBiNT rules you already know.
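To make the idea concrete, here is a minimal sketch of a six-level classifier. The level labels come from the paper's abstract, but the scoring scale and the threshold logic below are invented for illustration; the published algorithm weights individual internal-validity items differentially, which this toy version does not reproduce.

```python
# Hypothetical illustration of a six-level rigor classifier.
# The six labels are from Perdices et al. (2023); the cut points
# here are evenly spaced bins chosen for demonstration only.

LEVELS = ["Very Low", "Low", "Fair", "Moderate", "High", "Very High"]

def classify_rigor(internal_validity_score: int, max_score: int = 12) -> str:
    """Map a total internal-validity score to one of six rigor levels.

    Thresholds are hypothetical, evenly spaced cut points; the real
    RoBiNT-derived algorithm uses differential item weighting instead.
    """
    if not 0 <= internal_validity_score <= max_score:
        raise ValueError("score out of range")
    # Evenly spaced bins: the lowest scores map to "Very Low",
    # the highest to "Very High".
    index = min(internal_validity_score * len(LEVELS) // (max_score + 1),
                len(LEVELS) - 1)
    return LEVELS[index]
```

For example, under these made-up bins a study scoring 12 of 12 would land in "Very High" and a study scoring 0 in "Very Low"; the real tool's cut points and weights are defined in the paper itself.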

02

What they found

Almost half the studies landed in the “Very Low” box.

The algorithm's ratings showed strong agreement with the What Works Clearinghouse step-by-step review.

A short checklist now delivers roughly the same verdict as that longer standards-based process.

03

How this fits with other research

Tincani et al. (2016) meta-analyzed 18 pacing studies. Those 18 trials are exactly the type this algorithm can now grade in minutes.

Ball et al. (1985) warned that creativity studies have "methodological flaws." The new tool turns that old warning into a clear six-level rating.

Sham et al. (2014) showed published PRT studies over-rate themselves by 22 PND points. The algorithm can flag the weak ones before you count them in a review.

04

Why it matters

Next time you shop the literature for an evidence-based choice, run the article through the RoBiNT algorithm first. If it lands in “Low” or “Very Low,” treat the data as tentative. Share the score with parents and teams so everyone sees why you are cautious. The tool keeps you from building treatment on shaky ground.

→ Action — try this Monday

Run your last three favorite SCD articles through the RoBiNT algorithm and note the rigor level in your reference list.

02 At a glance

Intervention: not applicable
Design: methodology paper
Finding: not reported

03 Original abstract

Critical appraisal scales play an important role in evaluating methodological rigor (MR) of between-groups and single-case designs (SCDs). For intervention research this forms an essential basis for ascertaining the strength of evidence. Yet, few such scales provide classifications that take into account the differential weighting of items contributing to internal validity. This study aimed to develop an algorithm derived from the Risk of Bias in N-of-1 Trials (RoBiNT) Scale to classify MR and risk of bias magnitude in SCDs. The algorithm was applied to 46 SCD experiments. Two experiments (4%) were classified as Very High MR, 14 (30%) as High, 5 (11%) as Moderate, 2 (4%) as Fair, 2 (4%) as Low, and 21 (46%) as Very Low. These proportions were comparable to the What Works Clearinghouse classifications: 13 (28%) met standards, 8 (17%) met standards with reservations, and 25 (54%) did not meet standards. There was strong association between the two classification systems.

Behavior Modification, 2023 · doi:10.1177/0145445519863035