Assessment & Research

From Percentages to Precision: Using Response Rates to Advance Analyses of Procedural Fidelity

St. Peter et al. (2025) · Perspectives on Behavior Science
★ The Verdict

Track how fast you run trials, not just whether you tick every step, to catch hidden fidelity drift.

✓ Read this if you're a BCBA who supervises RBTs or trains staff in clinics and schools.

✗ Skip if you're a practitioner looking for ready-made client interventions rather than measurement tips.

01 Research in Context

01

What this study did

St. Peter et al. (2025) wrote a how-to paper, not an experiment. They looked at how BCBAs usually track fidelity and said, 'Add speed counts.'

The authors explain why percent-correct scores can hide drift. They give formulas for trials-per-minute and responses-per-minute so you catch slow, sloppy delivery.
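Those rate formulas are just count divided by observation time. A minimal sketch, with hypothetical numbers that are not from the paper:

```python
# Rate metric for procedural fidelity: rate = count / minutes observed.
# The session numbers below are hypothetical, for illustration only.

def rate_per_minute(count: int, seconds: float) -> float:
    """Convert a raw count and an observation time (seconds) into a per-minute rate."""
    return count / (seconds / 60)

# A 10-minute session with 24 trials delivered and 30 therapist responses:
trials_per_min = rate_per_minute(24, 600)      # 2.4 trials per minute
responses_per_min = rate_per_minute(30, 600)   # 3.0 responses per minute
print(trials_per_min, responses_per_min)
```

The same checklist that yields percent correct already contains the counts; only the session timer is new.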

02

What they found

The paper shows two hypothetical graphs. One uses only percent fidelity and looks perfect at 100%. The same data with a rate line reveals the teacher waited ten seconds between prompts.

Their point: rate metrics spotlight timing errors that percentages miss.

03

How this fits with other research

Abuin et al. (2026) ran an experiment that backs this up: 50% fidelity delivered at a faster rate outperformed 100% fidelity delivered slowly. The rate lens the target paper recommends would have caught that speed advantage.

Faso et al. (2016) offered a success-rate rule for choosing interventions. St. Peter et al. add a rate rule for judging how well you carry those interventions out.

Stolz (1977) warned that old studies rarely checked reliability each phase. The new paper widens the same worry to fidelity: a flat percent agreement can still mask slow, uneven delivery.

04

Why it matters

Next time you score fidelity, add one quick column: count how many learning trials occur in each five-minute block. If the count drops but percent correct stays high, you have silent drift to fix on the spot.

→ Action: try this Monday

During your next session, time ten trials and write trials-per-minute next to your fidelity checklist.

02 At a glance

Intervention: not applicable
Design: theoretical
Finding: not reported

03 Original abstract

In some domains of behavior analysis, summarizing data as a percentage is nearly ubiquitous. This is certainly the case when behavior analysts report data about procedural fidelity (the extent to which procedures are implemented as designed); fidelity data were reported solely as percentage in 423 of 425 recent studies published in the Journal of Applied Behavior Analysis. In this article, we critically examine the use of percentage, especially in the context of analyzing procedural-fidelity data. We demonstrate how exclusive reliance on percentage can obscure important nuances in fidelity data and how adding response rate as a metric offers a more precise understanding. To illustrate our points, we include reanalyzed data from a recent evaluation of procedural fidelity in public schools. We conclude with practical recommendations for adopting rate as a metric in the analysis of procedural-fidelity data, thereby building on contributions of notable behavior analysts like Henry Pennypacker, who prioritized continuous, dimensional approaches to measurement. The online version contains supplementary material available at 10.1007/s40614-025-00433-9.

Perspectives on Behavior Science, 2025 · doi:10.1007/s40614-025-00433-9