A Comparison of Single-Case Effect Measures Using Check-In Check-Out Data.
Your choice of effect ruler can flip the same CICO data from small to large even when everyone agrees it works.
01 Research in Context
What this study did
The team gathered every Check-In Check-Out (CICO) single-case study they could find: 22 studies covering 95 cases. From that pool they pulled seven common ways to measure how well CICO works.
They ran the same numbers through each formula. The goal was to see if one method would call CICO "huge" while another called it "tiny."
What they found
All seven formulas agreed CICO helps. Yet the size of the benefit jumped from "small" to "large" just by picking a different ruler.
That means two BCBAs could read the same kid’s chart and walk away with opposite ideas about how strong the intervention is.
How this fits with other research
Sottilare et al. (2023) is one of the CICO studies included in this meta-analysis. Their classroom data look strong, but the new paper shows the same data could be reported as either a medium or a huge win depending on the formula you pick.
McIntyre et al. (2017) warned us that CICO evidence needs tighter standards. Andrews et al. (2024) answers that call by showing the standards must include picking one effect measure and sticking to it.
Veenman et al. (2018) found small benefits for other classroom programs. Their work used group designs, so the size jump seen here may be a single-case issue, not a CICO issue.
Why it matters
If you write an evaluation report, pick one effect measure and name it. Tell the IEP team whether you used Tau-U, PND, or another ruler so they know why your number differs from last year’s. When you read someone else’s CICO study, flip to the method section first. If they used a ruler that tends to inflate size, take the "large" claim with a grain of salt. Consistency beats sparkle.
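The gap between rulers is easy to see with two of the simplest nonoverlap measures. Below is a minimal sketch using made-up daily point percentages for one hypothetical student (higher is better). PND and NAP are real single-case measures, but the data and the interpretive labels in the comments are illustrative only, not taken from the study.

```python
def pnd(baseline, treatment):
    """Percentage of Nonoverlapping Data: share of treatment points
    that exceed the single best baseline point."""
    best = max(baseline)
    return 100 * sum(t > best for t in treatment) / len(treatment)

def nap(baseline, treatment):
    """Nonoverlap of All Pairs: proportion of all baseline-treatment
    pairs where the treatment point is higher (ties count half)."""
    pairs = [(b, t) for b in baseline for t in treatment]
    score = sum(1.0 if t > b else 0.5 if t == b else 0.0 for b, t in pairs)
    return score / len(pairs)

# Hypothetical percentage-of-points-earned data; note one lucky baseline day (62).
baseline = [40, 45, 62, 42, 48]
treatment = [55, 60, 52, 65, 70, 58]

print(f"PND = {pnd(baseline, treatment):.1f}%")  # 33.3% -- looks weak by common PND guidelines
print(f"NAP = {nap(baseline, treatment):.2f}")   # 0.87  -- looks solid by common NAP guidelines
```

Same chart, same upward shift, yet one ruler calls the intervention weak because a single baseline outlier swallows almost all the nonoverlap, while the other barely notices that outlier. That is the paper's finding in miniature.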
Pick one single-case effect measure and use it for every CICO chart this month so your reports stay consistent.
02 At a glance
03 Original abstract
There are numerous effect measures researchers can select when conducting a meta-analysis of single-case experimental design research. These effect measures model different characteristics of the data, so it is possible that a researcher's choice of an effect measure could lead to different conclusions about the same intervention. The current study investigated the impact of effect measure selection on conclusions about the effectiveness of check-in check-out (CICO), a commonly used intervention within School-Wide Positive Behavior Interventions and Supports. Using a multilevel meta-analysis of seven different effect measures across 95 cases in 22 studies, findings suggested that all effect measures indicated statistically significant results of CICO in improving student behavior. However, the magnitude of the effects varied when comparing the results to interpretive guidelines, suggesting that the selection of effect measures may impact conclusions regarding the extent to which an intervention is effective. Implications, limitations, and future directions are discussed.
Behavior Modification, 2024 · doi:10.1177/01454455241233738