Procedural Fidelity in Behavior Intervention: Clinical Questions for BCBAs and Supervisors

Source & Transformation

These answers draw in part from “Invited Address: Detecting and Managing Effects of Procedural Fidelity Errors” by Claire St. Peter (BehaviorLive), and extend it with peer-reviewed research from our library of 27,900+ ABA research articles. Clinical framing, BACB ethics code references, and cross-links below are synthesized by Behaviorist Book Club.

View the original presentation →
Questions Covered
  1. What is procedural fidelity and why does it matter for interpreting behavioral data?
  2. What are the most common types of procedural fidelity errors in ABA behavior-reduction procedures?
  3. How does extremely low fidelity affect treatment outcomes differently depending on reinforcement function?
  4. What does an operationally defined fidelity checklist look like and how is it developed?
  5. How can BCBAs make fidelity assessment feasible when caseloads are large?
  6. What should a BCBA do when fidelity data shows a staff member is implementing a behavior-reduction procedure with very low fidelity?
  7. What is the difference between component fidelity and session fidelity, and which is more clinically meaningful?
  8. How does fidelity measurement fit into the written behavior plan?
  9. Can high fidelity ever be clinically problematic?
  10. What does the research say about the relationship between fidelity training and clinical outcomes?

1. What is procedural fidelity and why does it matter for interpreting behavioral data?

Procedural fidelity refers to the degree to which an intervention is implemented as designed — specifically, the proportion of procedure components that are implemented correctly across all applicable opportunities. It matters for data interpretation because behavioral data reflects the intervention actually delivered, not the intervention designed. When fidelity is low, a client's behavioral data may show no change because the intervention was ineffective, or because the intervention was not implemented consistently enough to test its effectiveness. Without fidelity data, these two interpretations are indistinguishable, which means treatment decisions — to continue, modify, or abandon an intervention — may be made on an incorrect premise.
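The "proportion of components implemented correctly across all applicable opportunities" definition can be made concrete with a minimal Python sketch. This is illustrative only — the function name and the component labels are invented, not from the presentation:

```python
# Hypothetical sketch: procedural fidelity as the proportion of procedure
# components implemented correctly across all applicable opportunities.

def procedural_fidelity(observations):
    """observations: list of (component_name, implemented_correctly) pairs,
    one entry per applicable opportunity in the session."""
    if not observations:
        return None  # no applicable opportunities were observed
    correct = sum(1 for _, ok in observations if ok)
    return correct / len(observations)

# Illustrative session with four scored opportunities
session = [
    ("deliver_sd_as_written", True),
    ("allow_response_interval", True),
    ("reinforce_within_2s", False),
    ("implement_error_correction", True),
]
print(f"Fidelity: {procedural_fidelity(session):.0%}")  # Fidelity: 75%
```

A score like 75% is only interpretable against a pre-specified criterion for adequate fidelity, which is why the criterion belongs in the written plan (see question 8).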

2. What are the most common types of procedural fidelity errors in ABA behavior-reduction procedures?

Common fidelity errors in behavior-reduction procedures include:

  - implementing extinction inconsistently (providing access to the reinforcer identified in the FA on some occurrences of the target behavior);
  - delivering the reinforcer intended for the alternative response following instances of the target behavior;
  - failing to apply the extinction schedule to all topographies of the target behavior (responding to some forms while extinguishing others);
  - implementing prompt fading steps incorrectly (fading too quickly or staying at a prompt level too long); and
  - failing to conduct trials during high-MO states when the assessment specified MO-dependent implementation.

Errors of omission (failing to implement a component) and errors of commission (implementing an incorrect procedure) have different clinical consequences and may require different supervisory responses.

3. How does extremely low fidelity affect treatment outcomes differently depending on reinforcement function?

St. Peter's research has documented that extremely low fidelity produces different outcome patterns depending on whether behavior is maintained by positive or negative reinforcement. For positively reinforced behavior under extinction, very low fidelity effectively creates intermittent reinforcement, which can increase behavioral persistence relative to baseline. For negatively reinforced behavior, the clinical picture is more complex — errors that inadvertently allow escape from aversives may reinforce the behavior on an intermittent schedule, or may inadvertently reinforce low-intensity precursor behaviors that function as mands for escape. The practical implication is that behavior maintained by negative reinforcement may be more vulnerable to fidelity errors in specific ways, which should inform how supervisors prioritize fidelity monitoring across their caseloads.

4. What does an operationally defined fidelity checklist look like and how is it developed?

An operationally defined fidelity checklist specifies each component of the procedure with enough behavioral specificity that two independent observers would consistently agree on whether it was implemented correctly. For a discrete trial instruction (DTI) fidelity checklist, this means specifying that:

  - the SD was delivered exactly as written (not paraphrased);
  - a 3-second response interval was allowed before consequence delivery;
  - reinforcement was delivered within 2 seconds of a correct response, using the specified reinforcer; and
  - the correction procedure was implemented within 2 seconds of an error and followed the specified error correction format.

Each component should be observable — visible to an external observer, not inferred from outcome. Development involves working through the procedure step by step, writing an observable description of what correct implementation looks like at each step, and pilot testing inter-observer agreement before deploying the checklist.
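The final development step — pilot testing inter-observer agreement — can be computed point by point once two observers have scored the same checklist independently. A minimal sketch, with invented observer data and an assumed one-score-per-component format:

```python
# Hypothetical sketch: point-by-point inter-observer agreement (IOA) for
# pilot testing a fidelity checklist before deployment.

def interobserver_agreement(obs_a, obs_b):
    """obs_a, obs_b: lists of booleans, one score per checklist component,
    recorded by two independent observers watching the same session."""
    if len(obs_a) != len(obs_b) or not obs_a:
        raise ValueError("observers must score the same set of components")
    agreements = sum(a == b for a, b in zip(obs_a, obs_b))
    return agreements / len(obs_a)

# Illustrative pilot: observers disagree on one of four components
observer_1 = [True, True, False, True]
observer_2 = [True, True, True, True]
print(f"IOA: {interobserver_agreement(observer_1, observer_2):.0%}")  # IOA: 75%
```

Low IOA on a component usually means the operational definition for that component needs tightening, not that an observer is wrong.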

5. How can BCBAs make fidelity assessment feasible when caseloads are large?

Several strategies make systematic fidelity monitoring feasible without direct observation of every session:

  - interval-based sampling: observe 10 consecutive trials at two points during a session rather than all trials;
  - technology-assisted review: session recordings reviewed against a fidelity checklist in less time than a full live observation;
  - permanent-product fidelity assessment: reviewing data sheets, session notes, or completed checklists as proxies for direct observation where appropriate; and
  - fidelity priority tiering: allocating the most intensive observation to new staff, new procedures, or clients showing unexpected treatment response.

The goal is not perfect fidelity data for every session — it is sufficient data to detect meaningful implementation patterns and respond before low fidelity significantly affects client outcomes.
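The first strategy — interval-based sampling — amounts to selecting a few non-overlapping windows of consecutive trials to score. A hypothetical Python sketch; the window size, session length, and function name are illustrative, not prescriptive:

```python
import random

def sample_trial_windows(n_trials, window=10, n_windows=2, seed=None):
    """Pick up to n_windows non-overlapping windows of `window` consecutive
    trials to observe, instead of scoring fidelity on every trial."""
    rng = random.Random(seed)
    candidates = list(range(0, n_trials - window + 1))
    starts = []
    while candidates and len(starts) < n_windows:
        s = rng.choice(candidates)
        starts.append(s)
        # drop candidate starts that would overlap the window just chosen
        candidates = [c for c in candidates if abs(c - s) >= window]
    return sorted((s, s + window) for s in starts)  # (start, end) trial indices

# e.g., a 40-trial session: score fidelity during two 10-trial windows
windows = sample_trial_windows(40, seed=7)
```

Randomizing the window placement (rather than always observing the start of a session) guards against staff behavior that is accurate only when observation is predictable.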

6. What should a BCBA do when fidelity data shows a staff member is implementing a behavior-reduction procedure with very low fidelity?

The response follows the same performance analysis logic as any staff performance problem: determine why fidelity is low before selecting an intervention. Is it a skill deficit (the correct procedure was not fluently trained)? An antecedent problem (the procedure steps are not accessible during the session)? A consequence problem (incorrect implementation has not produced a consequence differential)? Match the intervention to the cause. If low fidelity is producing clinical harm — creating intermittent reinforcement schedules for a dangerous behavior — the intervention should also include a temporary clinical response: consider whether to place the procedure on hold until adequate fidelity can be established, or increase supervision intensity to provide more immediate corrective feedback.

7. What is the difference between component fidelity and session fidelity, and which is more clinically meaningful?

Component fidelity refers to the accuracy of specific procedure elements across the instances where they applied — what proportion of extinction trials were implemented correctly, what proportion of FCT prompts were delivered at the correct interval. Session fidelity is a summary score across all procedure components for a session — what percentage of all procedure steps were completed correctly. Component fidelity is generally more clinically meaningful because it allows identification of which specific elements are being implemented incorrectly, which is necessary for targeted feedback. Session fidelity is useful as a global indicator but can obscure critical errors: a session where most components are implemented correctly but extinction is applied with 50% fidelity would show an acceptable session score while masking a clinically serious implementation failure.
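The masking problem can be made concrete with a small sketch: a session where most components are implemented correctly but extinction runs at 50% fidelity still yields a high session-level score. All component names and counts below are invented for illustration:

```python
# Hypothetical sketch: session-level fidelity can mask a clinically
# serious component-level failure.
from collections import defaultdict

def fidelity_scores(observations):
    """observations: list of (component, correct) pairs across one session.
    Returns (session_fidelity, {component: component_fidelity})."""
    by_component = defaultdict(list)
    for component, correct in observations:
        by_component[component].append(correct)
    component_fid = {c: sum(v) / len(v) for c, v in by_component.items()}
    session_fid = sum(ok for _, ok in observations) / len(observations)
    return session_fid, component_fid

obs = (
    [("prompting", True)] * 20
    + [("reinforcement_delivery", True)] * 20
    + [("extinction", True)] * 5 + [("extinction", False)] * 5
)
session, components = fidelity_scores(obs)
print(f"Session: {session:.0%}")                      # Session: 90%
print(f"Extinction: {components['extinction']:.0%}")  # Extinction: 50%
```

A 90% session score would pass most global criteria, which is exactly why targeted feedback needs the component-level breakdown.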

8. How does fidelity measurement fit into the written behavior plan?

Fidelity measurement should be specified in the behavior plan itself, not treated as an external quality assurance activity. Including fidelity checklists as plan components, specifying the fidelity monitoring schedule, and identifying the criterion for adequate fidelity (the threshold below which plan components should not be interpreted as having been tested) makes fidelity an explicit part of the clinical design. Plans that specify only procedures without specifying measurement and monitoring protocols are incomplete from both a scientific and a supervisory standpoint. The fidelity monitoring specification also provides the basis for honest communication with families and funding sources about what implementation monitoring actually looks like in the clinical setting.

9. Can high fidelity ever be clinically problematic?

Yes. Rigid adherence to a written procedure that is producing adverse outcomes — when flexibility would produce better results — is a form of treatment failure even if fidelity is technically high. Procedures that include built-in decision rules for how to respond to specific clinical events (what to do when the client escalates, when the MO appears absent, when a novel behavior topography emerges) require practitioner judgment alongside procedural fidelity. A practitioner who implements the written procedure with high fidelity but fails to exercise the judgment the plan requires is technically compliant but clinically suboptimal. This is why fidelity assessment should measure fidelity to the entire written plan, including its decision rules, not only to its prescribed steps.

10. What does the research say about the relationship between fidelity training and clinical outcomes?

The research literature supports a direct relationship between training for fidelity — including BST-based implementation training, fidelity checklists, and supervised feedback — and both implementation fidelity scores and client behavioral outcomes. Studies have documented that programs implementing staff training specifically designed to increase procedural fidelity produce faster skill acquisition and more rapid behavior reduction compared to programs with less structured implementation training. The mechanism is straightforward: higher fidelity means the intervention is tested more consistently, the data more accurately reflects the intervention effect, and clinical decisions are made on more valid information. St. Peter's work extends this by specifying the conditions under which low fidelity is most clinically consequential, allowing for more targeted application of training and monitoring resources.

FREE CEUs

Get CEUs on This Topic — Free

The ABA Clubhouse has 60+ on-demand CEUs including ethics, supervision, and clinical topics like this one. Plus a new live CEU every Wednesday.

60+ on-demand CEUs (ethics, supervision, general)
New live CEU every Wednesday
Community of 500+ BCBAs
100% free to join
Join The ABA Clubhouse — Free →

Earn CEU Credit on This Topic

Ready to go deeper? This course covers this topic with structured learning objectives and CEU credit.

Invited Address: Detecting and Managing Effects of Procedural Fidelity Errors — Claire St. Peter · 1.5 BACB Supervision CEUs · $25

Take This Course →
📚 Browse All 60+ Free CEUs — ethics, supervision & clinical topics in The ABA Clubhouse

Research: Explore the Evidence

We extended these answers with research from our library — dig into the peer-reviewed studies behind the topic, in plain-English summaries written for BCBAs.

Measurement and Evidence Quality

279 research articles with practitioner takeaways

View Research →

Symptom Screening and Profile Matching

258 research articles with practitioner takeaways

View Research →

Brief Behavior Assessment and Treatment Matching

252 research articles with practitioner takeaways

View Research →

Related Topics

CEU Course: Invited Address: Detecting and Managing Effects of Procedural Fidelity Errors

1.5 BACB Supervision CEUs · $25 · BehaviorLive

Guide: Detecting and Managing Effects of Procedural Fidelity Errors — What Every BCBA Needs to Know

Research-backed educational guide with practice recommendations

Decision Guide: Comparing Approaches

Side-by-side comparison with clinical decision framework

CEU Buddy

No scramble. No surprises.

You earn CEUs from a dozen different places. Upload any certificate — from here, your employer, conferences, wherever — and always know exactly where you stand. Learning, Ethics, Supervision, all handled.

Upload a certificate, everything else is automatic
Works with any ACE provider
$7/mo to protect $1,000+ in earned CEUs
Try It Free for 30 Days →

No credit card required. Cancel anytime.

Clinical Disclaimer

All behavior-analytic intervention is individualized. The information on this page is for educational purposes and does not constitute clinical advice. Treatment decisions should be informed by the best available published research and individualized assessment, and made with the informed consent of the client or their legal guardian. Behavior analysts are responsible for practicing within the boundaries of their competence and adhering to the BACB Ethics Code for Behavior Analysts.
