By Matt Harrington, BCBA · Behaviorist Book Club · Research-backed answers for behavior analysts
Procedural integrity (also called treatment integrity or implementation fidelity) refers to the degree to which an intervention is implemented according to its written design. In clinical ABA, it matters because behavior change data collected under low implementation fidelity is difficult or impossible to interpret: if a client is not making progress, low integrity means the clinician cannot determine whether the procedure failed or whether a degraded version of the procedure failed. Integrity data disambiguates this question, allowing data-based decisions about whether to modify the procedure, provide additional staff training, or adjust environmental conditions. Without it, clinical decision-making is fundamentally compromised.
The primary measurement approaches are: direct observation (a trained observer records adherence to each procedural step in real time during a session), self-monitoring (the implementer records their own adherence using a procedural checklist), permanent product review (reviewing session notes, data sheets, or video recordings for evidence of correct implementation), and structured integrity probes (brief observations conducted on a scheduled or unscheduled basis). Each approach trades off accuracy against feasibility. Direct observation provides the most accurate data but requires trained observers with available time. Permanent product review provides scalable measurement but is limited to what the product actually captures about implementation behavior.
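However it is collected, step-level adherence data is typically summarized as the percentage of procedural steps implemented correctly. A minimal sketch of that calculation, with hypothetical step names and scores (not drawn from any specific protocol):

```python
# Sketch: summarizing a direct-observation integrity checklist as the
# percentage of procedural steps implemented correctly.
# Step names and session data below are hypothetical examples.

def integrity_percent(observations: dict[str, bool]) -> float:
    """Return the percentage of procedural steps scored as implemented correctly."""
    if not observations:
        raise ValueError("checklist is empty")
    correct = sum(observations.values())  # True counts as 1
    return 100.0 * correct / len(observations)

session = {
    "delivered_instruction_as_written": True,
    "waited_full_prompt_delay": True,
    "used_specified_prompt_type": False,
    "delivered_reinforcer_within_3s": True,
}
print(f"Session integrity: {integrity_percent(session):.0f}%")  # Session integrity: 75%
```

The same summary applies whether the checklist was scored live, from video, or from permanent products; what differs across methods is how trustworthy each True/False entry is.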
Prioritize based on clinical consequence and implementation risk. Highest priority should go to procedures with direct client safety implications (blocking, extinction, crisis protocols), recently trained or modified procedures where drift is most likely, and procedures associated with treatment areas where client progress has stalled. Secondary priority goes to high-frequency procedures on which drift is likely to accumulate, and procedures implemented by staff whose training history or performance data suggests elevated risk. The goal is not comprehensive coverage initially but targeted monitoring where integrity failures carry the highest clinical cost.
Frame integrity data as clinical information rather than evidence of wrongdoing. When presenting integrity data to a staff member, do so with the explicit frame that you are trying to understand what environmental variables are contributing to the implementation pattern, not assigning blame. Follow this with a structured functional assessment: Is the procedure unclear? Were training opportunities sufficient? Are materials available? Is there a consequence misalignment? Address the identified variable first. Only after environmental variables have been addressed and integrity remains low should the response shift toward performance consequence frameworks. The OPI Task Force explicitly recommends compassionate, functionally grounded correction.
Implementation drift is the gradual departure from procedural fidelity that occurs as novel procedures become routine. As procedures become automatized, implementers tend to omit steps they perceive as less critical, simplify complex sequences, and adapt procedures to their own style. Drift is not typically deliberate but is a predictable behavioral outcome when reinforcement for precise implementation decreases over time. Prevention requires scheduled integrity monitoring that does not rely solely on reactive assessment: periodic probes at regular intervals detect drift before it becomes severe. When probes reveal emerging drift, brief targeted retraining or increased feedback for the drifting components arrests the trend before it requires extensive intervention.
Self-monitoring can be a useful integrity measurement tool when it is structured carefully. Reliability of self-monitoring improves when: the self-monitoring form uses specific, observable behavioral definitions rather than general ratings; the staff member has received training on how to use the form; a comparison between self-monitoring and direct observation data has been established to assess concordance; and self-monitoring records are reviewed by a supervisor on a regular schedule (creating an accountability contingency). Unstructured self-monitoring — asking staff to report whether they implemented the procedure correctly — produces systematically inflated estimates. The form design and the accountability structure determine whether self-monitoring is useful.
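One way to establish the comparison between self-monitoring and direct observation mentioned above is to compute step-by-step agreement between the two records for the same session. A hedged sketch, with hypothetical step keys and values:

```python
# Sketch: assessing concordance between a staff member's self-monitoring
# record and a direct observer's record for the same session.
# Step keys and scores below are hypothetical examples.

def concordance_percent(self_report: dict[str, bool],
                        observer: dict[str, bool]) -> float:
    """Return the percentage of shared procedural steps on which both records agree."""
    shared_steps = self_report.keys() & observer.keys()
    if not shared_steps:
        raise ValueError("no overlapping steps to compare")
    agreements = sum(self_report[s] == observer[s] for s in shared_steps)
    return 100.0 * agreements / len(shared_steps)

self_report = {"step_1": True, "step_2": True, "step_3": True}
observer    = {"step_1": True, "step_2": False, "step_3": True}
print(f"Concordance: {concordance_percent(self_report, observer):.0f}%")  # Concordance: 67%
```

Persistently low concordance (here, the self-report scored step_2 as correct while the observer did not) suggests the self-monitoring form or training needs revision before self-report data can stand in for observation.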
Multiple 2022 Ethics Code standards implicate procedural integrity. Standard 2.01 (Providing Effective Treatment) requires BCBAs to use evidence-based procedures and monitor their effectiveness — validity requires knowing what was actually implemented. Standard 2.04 (Ongoing Data Collection) requires continuous data collection throughout service delivery, which includes integrity data as a component of comprehensive monitoring. Standard 4.07 (Performance Feedback) requires behavior-analytic feedback to supervisees — integrity data provides the specific behavioral basis for this feedback. Standard 1.01 (Being Truthful) is implicated when BCBAs report on intervention outcomes without disclosing that integrity was not assessed.
Build integrity assessment into existing supervisory activities rather than adding separate measurement occasions. Supervision observations can double as integrity probes when a structured observation form is used. Build brief integrity checklists into session note templates so that implementers document completion of key procedural steps as part of their standard documentation. Use video recordings when feasible for asynchronous review — one supervisor can review multiple sessions without being physically present. Develop a tiered measurement approach: intensive observation for high-priority procedures, permanent product review for stable procedures, and periodic brief probes for all others. The goal is a sustainable system, not a comprehensive one.
The distinction requires integrity data. When a client's progress graph shows a plateau or deteriorating trend, the clinical decision-making process should first assess whether the procedure is being implemented as designed. If integrity data shows implementation is at criterion, the interpretation is that the procedure itself may be insufficient or mismatched to the current function, and a programming decision is indicated. If integrity data shows implementation is below criterion, the interpretation is that the procedure has not been adequately tested, and a training or environmental support decision is indicated. Without this data, a common clinical error is to revise the program when the real problem is low integrity, which delays effective intervention.
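The branch described above can be stated compactly. The 80% criterion in this sketch is a hypothetical threshold, not a recommendation from the source:

```python
# Sketch of the decision logic above: given stalled client progress,
# integrity data determines which kind of decision is indicated.
# The 80% criterion is a hypothetical example threshold.

def next_decision(integrity_pct: float, criterion: float = 80.0) -> str:
    if integrity_pct >= criterion:
        # The procedure was tested as designed and still did not produce
        # progress: a programming decision is indicated.
        return "programming decision"
    # The procedure has not been adequately tested: address training or
    # environmental supports before revising the program.
    return "training or environmental support decision"

print(next_decision(92.0))  # programming decision
print(next_decision(60.0))  # training or environmental support decision
```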
Organizations that systematically support integrity monitoring typically share several structural features: written integrity monitoring protocols specifying which procedures require monitoring, at what frequency, and by whom; trained integrity observers who are distinct from the primary service providers; electronic data systems that flag when scheduled integrity probes have not occurred; aggregate integrity reporting to clinical directors that enables organizational-level trend analysis; and supervision processes that routinely include integrity data review alongside client outcome data. Integrity monitoring is most consistent when it is built into organizational workflows rather than depending on individual supervisors to initiate it voluntarily.
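The flagging feature described above (an electronic system that surfaces overdue integrity probes) can be sketched as a simple date comparison. Field names and the probe schedule below are hypothetical:

```python
# Sketch: flagging scheduled integrity probes that have not occurred.
# Procedure names, dates, and probe intervals are hypothetical examples.
from datetime import date, timedelta

def overdue_probes(last_probe: dict[str, date],
                   interval_days: dict[str, int],
                   today: date) -> list[str]:
    """Return procedures whose next scheduled probe date has already passed."""
    flagged = []
    for procedure, last in last_probe.items():
        due = last + timedelta(days=interval_days[procedure])
        if today > due:
            flagged.append(procedure)
    return flagged

last = {"crisis_protocol": date(2024, 5, 1), "token_economy": date(2024, 5, 20)}
interval = {"crisis_protocol": 14, "token_economy": 30}  # probe every N days
print(overdue_probes(last, interval, date(2024, 6, 1)))  # ['crisis_protocol']
```

In practice the intervals themselves would come from the written integrity monitoring protocol, so that higher-priority procedures are probed more often.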
The ABA Clubhouse has 60+ on-demand CEUs including ethics, supervision, and clinical topics like this one. Plus a new live CEU every Wednesday.
Ready to go deeper? This course covers this topic with structured learning objectives and CEU credit.
Prioritizing Procedural Integrity in Service Settings: Insights from the OPI Task Force — Florence DiGennaro Reed · 1 BACB Supervision CEU · $0
All behavior-analytic intervention is individualized. The information on this page is for educational purposes and does not constitute clinical advice. Treatment decisions should be informed by the best available published research and individualized assessment, and made with the informed consent of the client or their legal guardian. Behavior analysts are responsible for practicing within the boundaries of their competence and for adhering to the BACB Ethics Code for Behavior Analysts.