Assessment & Research

Interobserver agreement and procedural fidelity: An odd asymmetry

Essig et al. (2023) · Journal of Applied Behavior Analysis
★ The Verdict

Nearly half of recent ABA studies skip procedural fidelity entirely, and those that do track it rarely check observer agreement on the fidelity data, leaving a blind spot you can fix today.

✓ Read this if you are a BCBA who writes, reviews, or supervises single-case research in any setting.
✗ Skip if you only consume packaged trainings and never read or write studies.

01 Research in Context

01

What this study did

Essig et al. (2023) reviewed every experimental article published in JABA and Behavior Analysis in Practice from 2017 through 2021. They counted how many papers reported procedural fidelity and, when fidelity was reported, how many checked whether two observers agreed on the fidelity scores.

They also noted how often studies reported interobserver agreement (IOA) on client behavior. The goal was to see if researchers treat fidelity data the same way they treat behavior data.

02

What they found

Only 54.7% of the experiments reported procedural fidelity at all. When fidelity was tracked, just 17.7% of those papers provided IOA on the fidelity data.

In contrast, 96.4% of papers reported IOA on client behavior. The gap shows we guard against observer error when measuring behavior but not when measuring how well we ran the procedure.

03

How this fits with other research

Bergmann et al. (2023) reviewed the same journals and years and also found weak fidelity reporting. Their paper adds a handy checklist, while Essig et al. highlight the missing IOA layer. Together they give a fuller picture of the problem.

Jones et al. (1977) warned decades ago that IOA reports were too thin. Essig et al. show the field still skips IOA on fidelity, proving the old warning still bites.

Cox et al. (2025) offer new IOA tools such as precision and recall. These metrics could close the gap Essig et al. found, making fidelity checks as rigorous as behavior checks.
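Precision- and recall-style agreement treats each checklist step as something one observer scored as implemented or not, then asks how well the second observer's record lines up. The sketch below is a generic illustration with hypothetical records, not the exact computation from Cox et al.; treating the primary observer as the reference is an arbitrary choice made only so the two metrics are defined.

```python
# Hypothetical interval-by-interval fidelity records from two observers
# (1 = step scored as implemented, 0 = scored as not implemented).
primary = [1, 1, 0, 1, 1, 0, 1, 1, 1, 0]
secondary = [1, 1, 0, 0, 1, 0, 1, 1, 1, 1]

# Count agreements and disagreements on scored occurrences.
true_pos = sum(1 for p, s in zip(primary, secondary) if p == 1 and s == 1)
false_pos = sum(1 for p, s in zip(primary, secondary) if p == 0 and s == 1)
false_neg = sum(1 for p, s in zip(primary, secondary) if p == 1 and s == 0)

# Precision: of the steps the second observer scored as implemented,
# how many did the primary observer also score as implemented?
precision = true_pos / (true_pos + false_pos)
# Recall: of the steps the primary observer scored as implemented,
# how many did the second observer also catch?
recall = true_pos / (true_pos + false_neg)

print(f"precision = {precision:.2f}, recall = {recall:.2f}")
```

Unlike simple percent agreement, these metrics are not inflated by long runs of intervals where nothing happened, which is exactly the situation high-fidelity sessions produce.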

04

Why it matters

If we do not check fidelity IOA, we cannot be sure the treatment was done the same way every time. That weakens both internal and external validity. Next time you write or review a study, add a second observer to at least 20% of fidelity sessions and report the agreement. The extra step takes minutes and saves your data from silent drift.
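The agreement check itself is simple arithmetic: count the step-by-step matches between the two observers' fidelity checklists and divide by the total number of steps. A minimal sketch in Python, using a hypothetical eight-step checklist:

```python
def percent_agreement(obs_a, obs_b):
    """Point-by-point IOA: agreements / total steps * 100."""
    if len(obs_a) != len(obs_b):
        raise ValueError("Observers must score the same number of steps.")
    agreements = sum(a == b for a, b in zip(obs_a, obs_b))
    return 100.0 * agreements / len(obs_a)

# Hypothetical fidelity checklists (1 = step done correctly, 0 = missed).
observer_1 = [1, 1, 1, 0, 1, 1, 0, 1]
observer_2 = [1, 1, 0, 0, 1, 1, 0, 1]

print(f"Fidelity IOA: {percent_agreement(observer_1, observer_2):.1f}%")
```

Note that agreements include steps both observers scored as missed; the two observers disagree on only one of the eight steps here, so the IOA is 87.5%.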

→ Action — try this Monday

Pair a second staff member to score your next session’s fidelity sheet and compare answers before the day ends.

02 At a glance

Intervention
not applicable
Design
systematic review
Finding
54.7% of experiments reported procedural fidelity; only 17.7% of those provided IOA on fidelity data, versus 96.4% providing IOA on client behavior

03 Original abstract

We examined articles with experiments published in the Journal of Applied Behavior Analysis and in Behavior Analysis in Practice from 2017 through 2021 to determine how frequently procedural fidelity was assessed. When procedural fidelity was assessed, we determined how often a measure of interobserver agreement for those fidelity data was provided. We also determined how often a measure of interobserver agreement for participants' behavior was provided. Across both journals and all years, 54.7% of relevant articles provided a measure of procedural fidelity. Of them, 17.7% provided a measure of interobserver agreement for procedural fidelity. In marked contrast, 96.4% provided interobserver agreement data for participants' behavior. It is unfortunate that applied behavior analysts frequently fail to provide procedural fidelity data and, when they do, often fail to provide interobserver agreement data for the fidelity data. Reviewers for, and editors of, behavior‐analytic journals are encouraged to strongly consider the relative value of procedural fidelity and agreement on procedural fidelity measures when rendering recommendations on the suitability of a given submission.

Journal of Applied Behavior Analysis, 2023 · doi:10.1002/jaba.961