Performance criteria-based effect size (PCES) measurement of single-case experimental designs: A real-world data study
Use PCES to measure progress against the exact performance goal you promised, not just against baseline.
Research in Context
What this study did
The authors built a new ruler for single-case data. They call it PCES.
PCES scores each case against the exact performance goal you set before treatment.
They ran PCES on 88 real ABA graphs. Then they compared the numbers to older overlap rules like PND and PEM.
What they found
PCES gave only weak-to-moderate agreement with the old indices.
The gap shows that meeting a preset goal and simply beating baseline are two different things.
In short, a graph can look "good" by old math yet still miss the client’s true target.
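To make that gap concrete, here is a minimal sketch in Python using hypothetical session data. The PND calculation below follows the standard definition (percent of treatment points above the best baseline point); the "goal ratio" is the simplified goal-referenced score described later in this article, not necessarily the full PCES model from the published paper.

```python
# Hypothetical session data (percent of trials correct).
baseline = [10, 15, 20]
treatment = [30, 35, 40, 45]
goal = 90  # mastery criterion written in the plan before treatment

# PND: percent of treatment points exceeding the highest baseline point.
pnd = 100 * sum(x > max(baseline) for x in treatment) / len(treatment)

# Goal-referenced score: mean treatment performance relative to the criterion
# (a simplified stand-in for PCES, used here for illustration only).
goal_ratio = 100 * (sum(treatment) / len(treatment)) / goal

print(f"PND = {pnd:.0f}%")               # every treatment point beats baseline
print(f"Goal ratio = {goal_ratio:.1f}%")  # yet performance is well below the goal
```

Here PND comes out at 100% because every treatment point clears the baseline, while the goal-referenced score is only about 42%: a "perfect" graph by overlap math that still misses the client's target.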
How this fits with other research
Cook et al. (2020) also fixed SCED math. They showed how to catch drift in momentary time-sampling. PCES adds another layer: judge the data against the aim you wrote in the BSP.
Newland (2024) fixed risk-ratio formulas for the same reason—old stats can mislead. PCES follows that trail by replacing pure overlap with goal-linked numbers.
Rider (1977) warned us to pick the right reliability tool for the right level. PCES extends that spirit: pick an effect ruler that matches the clinical criterion, not just the one that is easy to compute.
Why it matters
You already write mastery criteria in the plan. PCES lets you turn that sentence into a number you can graph, report, and replicate. Next time you review a case, ask: "Did we hit the real goal, or just beat baseline?" If the answer matters to the family, PCES gives you the math to prove it.
Add one extra row to your Excel sheet: divide client performance by the preset goal and multiply by 100. That quick PCES tells you if you are truly done.
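The same spreadsheet row can be written as a tiny helper function. This is a sketch of the quick tip above, not the full published PCES model; the function name and signature are my own.

```python
def quick_pces(performance: float, goal: float) -> float:
    """Observed performance as a percentage of the preset mastery
    criterion -- a simplified, goal-referenced score mirroring the
    'one extra Excel row' tip (see the published paper for the
    complete PCES model)."""
    return 100 * performance / goal

# Example: a client averaging 85% correct against a 90% mastery goal.
print(f"{quick_pces(85, 90):.1f}%")
```

Anything at or above 100% means the preset goal was met; values below 100% show how far the client still is from the criterion, regardless of how the graph compares to baseline.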
Original abstract
Visual analysis and nonoverlap-based effect sizes are predominantly used in analyzing single case experimental designs (SCEDs). Although they are popular analytical methods for SCEDs, they have certain limitations. In this study, a new effect size calculation model for SCEDs, named performance criteria-based effect size (PCES), is proposed considering the limitations of 4 nonoverlap-based effect size measures, widely accepted in the literature and that blend well with visual analysis. In the field test of PCES, actual data from published studies were utilized, and the relations between PCES, visual analysis, and the 4 nonoverlap-based methods were examined. In determining the data to be used in the field test, 1,052 tiers (AB phases) were identified from 6 journals. The results revealed a weak or moderate relation between PCES and nonoverlap-based methods due to its focus on performance criteria. Although PCES has some weaknesses, it promises to eliminate the causes that may create issues in nonoverlap-based methods, using quantitative data to determine socially important changes in behavior and to complement visual analysis.
Journal of Applied Behavior Analysis, 2022 · doi:10.1002/jaba.928