By Matt Harrington, BCBA · Behaviorist Book Club · April 2026 · 12 min read
Procedural integrity, also known as treatment fidelity or procedural fidelity, refers to the degree to which an intervention is implemented as designed. In applied behavior analysis, where the effectiveness of interventions depends on precise implementation of behavioral procedures, procedural integrity is not merely a methodological concern but a clinical and ethical imperative that directly affects client outcomes.
The clinical significance of procedural integrity spans multiple dimensions of behavior analytic practice. At the most fundamental level, when interventions are not implemented with fidelity, the data collected during those interventions cannot be reliably interpreted. A behavior analyst reviewing graphed data showing a lack of progress cannot determine whether the intervention is ineffective or simply was not implemented correctly. This ambiguity undermines data-based decision-making, which is the cornerstone of behavior analytic practice.
Beyond data interpretation, procedural integrity directly affects the outcomes that clients experience. Research consistently demonstrates that variations in implementation fidelity are associated with variations in treatment outcomes. When reinforcement schedules are delivered inconsistently, when prompting hierarchies are not followed accurately, or when antecedent modifications are not maintained across all relevant contexts, the effectiveness of the intervention is compromised. Clients may acquire skills more slowly, experience more exposure to error and failure, or fail to maintain gains that were achieved under conditions of high fidelity.
The connection between procedural integrity and ethical practice is multifaceted. The BACB Ethics Code (2022) requires behavior analysts to provide effective treatment, to base their clinical decisions on data, and to ensure that the professionals they supervise are competently implementing interventions. Each of these requirements presupposes that interventions are being implemented as designed. Without procedural integrity, treatment is unlikely to be maximally effective, data do not accurately reflect the impact of the intervention, and the quality of supervised implementation cannot be evaluated.
For behavior analysts, supervisees, and organizations, investing in procedural integrity systems yields benefits that extend well beyond individual client outcomes. It strengthens the quality of supervision by providing objective measures of implementation quality. It supports professional development by identifying specific areas where practitioners need additional training. It protects organizations from liability by documenting that evidence-based practices were implemented faithfully. And it advances the credibility of the field by demonstrating that behavioral interventions work when they are implemented correctly.
The concept of procedural integrity has been part of the applied behavior analysis literature since the field's earliest days, though it has not always received the attention it deserves in practice. The seminal article defining the dimensions of applied behavior analysis (Baer, Wolf, & Risley, 1968) identified the technological dimension, which requires that procedures be described in sufficient detail to allow replication, as fundamental to the science. Procedural integrity is the practical realization of this dimension: it is the measurement of whether procedures are actually being replicated as described.
Despite its conceptual importance, surveys of published behavior analytic research have consistently shown that procedural integrity data are reported in a minority of studies. This gap between the recognized importance of procedural integrity and the frequency with which it is measured and reported has been noted as a significant limitation of the field's evidence base. If the published literature often neglects procedural integrity measurement, it is perhaps unsurprising that clinical practice often does as well.
In clinical settings, several barriers contribute to inadequate attention to procedural integrity. Time constraints make it challenging to conduct regular fidelity checks in addition to all the other responsibilities of supervision. Development of procedural integrity checklists requires investment in operationally defining each step of an intervention in observable and measurable terms. Training observers to reliably assess procedural integrity requires additional resources. And the culture of some organizations may not prioritize fidelity assessment as a core component of service delivery.
The supervision literature in behavior analysis has increasingly emphasized procedural integrity as a key component of effective supervision. Supervisors who regularly assess the procedural integrity of their supervisees' implementation can identify skill deficits early, provide targeted feedback, and document the competency development of practitioners in training. Without procedural integrity data, supervision risks becoming focused on what the supervisee reports rather than what they actually do, which limits its effectiveness.
The relationship between procedural integrity and treatment outcomes has been investigated across a range of interventions and populations. These studies generally confirm what behavioral theory would predict: that higher fidelity is associated with better outcomes, that the relationship between fidelity and outcomes may be nonlinear with critical thresholds below which outcomes deteriorate markedly, and that different components of multi-component interventions may have different fidelity-outcome relationships.
Organizational factors play a significant role in determining whether procedural integrity is prioritized. Organizations that invest in clear protocol development, systematic training, regular fidelity assessment, and performance feedback systems tend to achieve higher levels of procedural integrity than organizations that leave implementation quality to the individual practitioner. This organizational investment reflects a recognition that procedural integrity is a systems-level variable, not just an individual-level one.
Implementing robust procedural integrity systems in clinical practice requires attention to several interconnected processes: protocol development, training, measurement, feedback, and ongoing monitoring. Each of these processes has specific clinical implications that behavior analysts should consider.
Protocol development is the foundation of procedural integrity. Every intervention should have a written protocol that specifies each step of the procedure in observable, measurable terms. These protocols should be detailed enough that a trained implementer can follow them without ambiguity, but practical enough that they can be used in real-world clinical contexts. Common protocol elements include the setting and materials required, the antecedent conditions that should be in place, the specific responses to target behavior and non-target behavior, the reinforcement schedule and delivery method, the prompting hierarchy and prompt fading criteria, and the data collection procedures.
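To make a protocol directly usable as an observation checklist, each step can be stored as structured data that an observer scores during a session. The sketch below is illustrative only; the step wording, field names, and program name are hypothetical examples, not a standard format.

```python
# Illustrative protocol checklist for a hypothetical discrete-trial program.
# Step wording and field names are examples, not a prescribed standard.
protocol = {
    "intervention": "Discrete-trial tacting program",
    "steps": [
        "Required materials are present and within reach before the trial",
        "Implementer secures client attending before presenting the antecedent",
        "Instruction is delivered once, using the scripted wording",
        "Correct responses are reinforced per the current schedule",
        "Errors are followed by the prescribed prompt level in the hierarchy",
        "Trial data are recorded immediately after the trial",
    ],
}

# Print the checklist in a form an observer could score in real time.
for number, step in enumerate(protocol["steps"], start=1):
    print(f"{number}. [ correct / incorrect / N/A ]  {step}")
```

Keeping the checklist in one structured source means the training materials, the observation form, and the fidelity score all reference the same operationally defined steps.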
Training implementers to criterion is the next critical step. Before any implementer begins delivering an intervention with a client, they should demonstrate mastery of the intervention protocol through role-play and practice. Training should continue until the implementer achieves a predetermined fidelity criterion, often 80 to 90 percent accuracy across all protocol steps. This criterion-based training approach ensures that implementation quality starts at an acceptable level and reduces the likelihood that initial low-fidelity implementation will produce poor client outcomes or confound early data.
Measurement of procedural integrity should be conducted regularly throughout the course of treatment, not just during initial training. Fidelity checks should be conducted using direct observation, ideally by a supervisor or trained observer who is not the implementer. The frequency of fidelity checks should be determined by the complexity and risk level of the intervention, the experience level of the implementer, and any recent changes in the protocol or the implementer's performance.
Procedural integrity data should be collected using structured checklists or rating scales that correspond to the steps of the intervention protocol. Each step should be scored as implemented correctly, implemented incorrectly, or not applicable. The overall procedural integrity score is typically expressed as the percentage of steps implemented correctly out of the total steps applicable.
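The scoring rule described above, correct steps divided by applicable steps, can be sketched as a small function. The example scores below are hypothetical; note that steps marked not applicable are excluded from the denominator, not counted as errors.

```python
def fidelity_percentage(step_scores):
    """Percent of applicable protocol steps implemented correctly.

    step_scores: list of "correct", "incorrect", or "na" (not applicable).
    Steps scored "na" are excluded from the denominator.
    """
    applicable = [s for s in step_scores if s != "na"]
    if not applicable:
        return None  # no applicable steps were observed
    correct = sum(1 for s in applicable if s == "correct")
    return 100.0 * correct / len(applicable)

# Hypothetical observation: 10 steps, one not applicable,
# 8 of the 9 applicable steps implemented correctly.
scores = ["correct"] * 8 + ["incorrect", "na"]
print(round(fidelity_percentage(scores), 1))  # → 88.9
```

Handling the not-applicable case explicitly matters in practice: dividing by the total number of checklist steps rather than the applicable steps would understate fidelity whenever a session does not occasion every step.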
Feedback based on procedural integrity data is the mechanism through which measurement translates into improved practice. Feedback should be specific, timely, and focused on both strengths and areas for improvement. Research on performance feedback in organizational behavior management suggests that feedback is most effective when it is delivered soon after the observation, includes specific examples of correct and incorrect implementation, provides a clear plan for addressing any deficits, and is delivered in a supportive rather than punitive manner.
When procedural integrity data reveal persistent deficits, the behavior analyst should investigate the causes rather than simply providing additional training. Common causes of low fidelity include insufficient understanding of the protocol rationale, protocols that are too complex for the implementer's skill level, environmental barriers to correct implementation, competing contingencies in the work environment that reinforce shortcuts, and emotional or motivational factors such as burnout or disagreement with the intervention approach. Each of these causes requires a different response, and addressing the root cause is more effective than repeated retraining.

Procedural integrity intersects with multiple provisions of the BACB Ethics Code (2022), establishing it as a core ethical responsibility rather than an optional quality improvement measure.
Code 2.01 (Providing Effective Treatment) is perhaps the most directly relevant standard. Behavior analysts cannot claim to be providing effective, evidence-based treatment if the interventions are not being implemented as designed. The evidence base for behavioral interventions was established under conditions of high procedural integrity. When interventions are implemented with low fidelity in clinical practice, the outcomes cannot be expected to match those reported in the research literature. A behavior analyst who does not assess procedural integrity has no basis for knowing whether their treatment is being delivered as intended, which calls into question whether they are meeting this ethical standard.
Code 4.07 (Supervision Conditions) and related supervision standards require behavior analysts to provide competent supervision that ensures the quality of services delivered by supervisees. Procedural integrity measurement is arguably the most objective and direct method of assessing whether supervisees are implementing interventions correctly. Supervisors who rely solely on supervisee self-report, record review, or indirect measures may miss significant implementation errors that would be captured by direct observation and fidelity assessment.
Code 2.13 (Selecting Behavior-Change Interventions) requires that interventions be selected based on empirical evidence and implemented in accordance with the research from which they were derived. This standard implies that the implementation should match the procedures described in the evidence base, which is precisely what procedural integrity ensures. An intervention that was effective in published research but is implemented differently in clinical practice may not produce the expected outcomes, and the practitioner cannot evaluate whether the intervention is appropriate without knowing whether it was implemented correctly.
Code 3.01 (Behavior-Analytic Assessment) and related standards require accurate data collection and interpretation. When procedural integrity is low, the data collected during intervention sessions are compromised because the independent variable, the intervention, is not being delivered consistently. Interpreting these data as if the intervention were being implemented with fidelity leads to erroneous conclusions about treatment effectiveness. This has ethical implications for data-based decision-making, including decisions about whether to continue, modify, or discontinue interventions.
The ethical dimension of documentation is also relevant. Procedural integrity data provide objective documentation that evidence-based practices were implemented as designed. In the event of a complaint, an audit, or a legal proceeding, having systematic procedural integrity data demonstrates a commitment to quality and accountability that protects both the practitioner and the client.
Code 2.14 requires behavior analysts to prioritize positive reinforcement-based approaches. This applies not only to client interventions but also to the systems used to maintain procedural integrity among staff. Punitive approaches to fidelity failures, such as disciplinary actions without supportive feedback and retraining, are less likely to produce sustained improvement and may create a culture of concealment rather than transparency. Ethical integrity management uses the same reinforcement-based principles that guide client intervention.
Finally, there is an ethical obligation of transparency with clients and families regarding procedural integrity. Families have a right to know that the interventions being delivered to their family member are being monitored for quality and that systems are in place to ensure consistent implementation. Sharing procedural integrity data with families, when appropriate, demonstrates accountability and builds trust in the therapeutic relationship.
Effective decision-making about procedural integrity requires behavior analysts to develop systems for when, how, and how often to assess fidelity, and to establish clear decision rules for responding to the data these systems produce.
The first decision is which interventions to prioritize for procedural integrity assessment. In an ideal world, every intervention for every client would be assessed regularly. In practice, resources are limited, and behavior analysts must prioritize. Factors that increase the priority for fidelity assessment include the complexity of the intervention protocol, the risk level of the intervention such as procedures involving extinction, prompting, or response blocking, the experience level of the implementer, recent changes to the protocol, and any concerns about implementation quality raised by data patterns or staff reports.
The frequency of fidelity assessment should be guided by a risk-based framework. New implementers, new protocols, and high-risk interventions warrant more frequent assessment, potentially every session initially and then fading to weekly or biweekly as performance stabilizes. Experienced implementers working with well-established protocols may require less frequent assessment, but should never go entirely unmonitored as procedural drift can occur gradually over time.
Decision rules for responding to procedural integrity data should be established in advance and communicated to all team members. A common framework includes maintaining current procedures when fidelity is above the predetermined criterion (often 80 to 90 percent), providing targeted feedback and brief retraining when fidelity falls below criterion but above a critical threshold, and providing comprehensive retraining with increased monitoring when fidelity falls below the critical threshold. These thresholds should be informed by the research on fidelity-outcome relationships and by the specific risk profile of each intervention.
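A decision rule of the kind described above can be expressed as a simple function so that every team member applies the same thresholds. The 90 and 80 percent defaults here are illustrative; as noted, they should be set per intervention based on its risk profile and the relevant fidelity-outcome research.

```python
def integrity_decision(fidelity, criterion=90.0, critical=80.0):
    """Map a fidelity percentage to a supervision response.

    criterion: the predetermined mastery threshold (illustrative default).
    critical: the floor below which comprehensive retraining is triggered.
    """
    if fidelity >= criterion:
        return "maintain current procedures and monitoring schedule"
    if fidelity >= critical:
        return "targeted feedback and brief retraining on missed steps"
    return "comprehensive retraining with increased monitoring"

print(integrity_decision(94.0))  # above criterion
print(integrity_decision(85.0))  # below criterion, above critical threshold
print(integrity_decision(72.0))  # below critical threshold
```

Writing the rule down in advance, whether in code, in a policy document, or on the fidelity form itself, is what keeps responses to low-fidelity data consistent rather than dependent on who happens to review the observation.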
When treatment data show poor client progress, the first diagnostic question should be whether the intervention is being implemented with adequate fidelity. Before concluding that an intervention is ineffective and needs to be changed, the behavior analyst should verify that the intervention was actually delivered as designed. If fidelity data are not available, this verification is impossible, and the risk of prematurely abandoning an effective intervention increases.
Conversely, when treatment data show good client progress, procedural integrity data confirm that the improvement is attributable to the intervention rather than to other variables. This attribution is important for making generalization decisions, for training new staff, and for building the evidence base for particular interventions within the practice setting.
Self-assessment of procedural integrity is another important component of decision-making. Behavior analysts should periodically assess the fidelity of their own practice, including the consistency of their supervision activities, the accuracy of their data analysis, and the fidelity of their assessment procedures. This self-assessment supports professional growth and models the importance of fidelity assessment for supervisees.
Organizational decision-making should incorporate aggregate procedural integrity data. Patterns of low fidelity across multiple implementers or interventions may indicate systemic issues such as inadequate training resources, unrealistic caseload demands, or organizational policies that create barriers to correct implementation. Addressing these systemic factors produces more durable improvement than addressing individual performance alone.
Implementing procedural integrity in your daily practice does not have to be overwhelming. Start with one concrete step and build from there.
If you do not currently have written protocols for your interventions, begin by selecting one or two high-priority interventions per client and developing detailed, step-by-step protocols. Format these protocols as checklists that can be used during direct observation. Share these protocols with all implementers and discuss them during supervision.
If you have protocols but do not regularly assess fidelity, begin by scheduling one direct observation per week per supervisee. Use the protocol checklist to score implementation during the observation, calculate a fidelity percentage, and provide specific feedback. Even this modest level of assessment will dramatically improve your understanding of what is actually happening during sessions.
Integrate procedural integrity data into your treatment review process. When reviewing client progress data, include the corresponding fidelity data and consider whether implementation quality may be contributing to the patterns you observe. This practice will improve the accuracy of your data-based decisions.
Use procedural integrity assessment as a professional development tool. When fidelity checks reveal consistent errors in a particular area, this identifies a specific training need that you can address through targeted coaching, modeling, and practice. This is far more efficient and effective than generic continuing education.
Finally, share the purpose and results of procedural integrity assessment with your team in a way that frames it as a support tool rather than a surveillance tool. When implementers understand that fidelity assessment is designed to help them do their jobs well and to ensure that clients receive the best possible services, they are more likely to engage with the process openly and constructively.
All behavior-analytic intervention is individualized. The information on this page is for educational purposes and does not constitute clinical advice. Treatment decisions should be informed by the best available published research and individualized assessment, and made with the informed consent of the client or their legal guardian. Behavior analysts are responsible for practicing within the boundaries of their competence and for adhering to the BACB Ethics Code for Behavior Analysts.