These answers draw in part from “Lead with Procedural Integrity: The Importance of Investing in Performance Management for ABA Service Providers” by Patricia Glick, BCBA (BehaviorLive), and extend it with peer-reviewed research from our library of 27,900+ ABA research articles. Clinical framing, BACB ethics code references, and cross-links below are synthesized by Behaviorist Book Club.
View the original presentation →

The terms are often used interchangeably in the behavioral literature, but some authors distinguish them. Procedural integrity broadly refers to the degree to which a procedure is implemented as designed, while treatment fidelity emphasizes the implementation of a complete treatment protocol including all its components. In ABA clinical contexts, procedural integrity typically refers to the accurate execution of specific instructional or behavior management procedures, while fidelity may refer to the full implementation of a client's behavior intervention plan. For practical purposes in most ABA organizations, both terms describe the same measurement priority: verifying that procedures are executed as intended.
The organizational performance integrity (OPI) score is an aggregate measure of procedural integrity across providers, programs, and time within an organization. It is calculated by compiling individual integrity observation data from across the clinical team and producing a composite score that reflects the overall fidelity level of the organization's service delivery. Organizations can calculate OPI scores overall, by program type, by supervisor, or by client population depending on the specificity of information needed. The score provides a system-level view of implementation quality that is invisible in case-by-case integrity reviews.
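For teams that track integrity observations in software, the aggregation described above can be sketched in a few lines. The record shape, field names, and simple-mean compositing below are illustrative assumptions, not a published formula; organizations may weight observations by recency, risk, or sample size.

```python
# Hypothetical sketch: computing an OPI score from individual integrity
# observations, overall or broken out by a grouping field.
from collections import defaultdict

observations = [
    # each record: (provider, supervisor, program, integrity %)
    ("rbt_01", "bcba_a", "manding", 92.0),
    ("rbt_02", "bcba_a", "manding", 78.0),
    ("rbt_03", "bcba_b", "tacting", 85.0),
    ("rbt_04", "bcba_b", "tacting", 96.0),
]

def opi_score(records, key=None):
    """Mean integrity overall, or grouped by the record index `key`
    (1 = supervisor, 2 = program in this illustrative layout)."""
    if key is None:
        return sum(r[3] for r in records) / len(records)
    groups = defaultdict(list)
    for r in records:
        groups[r[key]].append(r[3])
    return {g: sum(v) / len(v) for g, v in groups.items()}

print(opi_score(observations))         # overall OPI: 87.75
print(opi_score(observations, key=1))  # by supervisor
```

The same grouping call with `key=2` yields the by-program view, which is the disaggregation most organizations start with.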
When a client's graphed data show stalled or reversed progress, the integrity variable must be examined before considering program modifications. If recent integrity data show implementation levels below the threshold required for the procedure to function as designed — typically 80% or above is used as a minimum benchmark, though higher thresholds may be appropriate for specific procedures — the clinical response should be a return to training and supervision, not a program change. Only after ruling out implementation failure as the cause of stalled progress is it appropriate to consider modifying the intervention itself.
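The decision rule above can be expressed as a minimal sketch. The 80% benchmark is the one named in the text; the function name and configurable threshold are assumptions for illustration.

```python
# Minimal sketch of the "check integrity before changing the program"
# decision rule. Threshold defaults to the 80% benchmark from the text;
# specific procedures may warrant a higher bar.
def next_step(integrity_pct, threshold=80.0):
    """Return the clinical response when client progress has stalled."""
    if integrity_pct < threshold:
        # Implementation failure not yet ruled out: retrain first.
        return "return to training and supervision"
    # Integrity is adequate, so the program itself may need modification.
    return "consider modifying the intervention"

print(next_step(72.0))  # below threshold
print(next_step(91.0))  # at or above threshold
```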
The most frequently cited barriers are time constraints, lack of trained observers, absence of standardized measurement tools, and organizational cultures that treat integrity observation as punitive rather than supportive. Time constraints can be addressed by stratifying measurement priority and using permanent product review for lower-risk procedures. Observer training is required but is a one-time investment that pays dividends across the full measurement system. Standardized tools can be developed from existing procedure task analyses. The cultural barrier — staff perception of integrity observation as surveillance — is addressed through transparent communication about the purpose of measurement and consistent use of data for coaching rather than punishment.
BACB Ethics Code (2022) Standards 2.05 and 2.06 require BCBAs to provide adequate training and to evaluate supervisee performance on an ongoing basis. Standard 2.19 requires that behavior change programs be implemented in a manner consistent with the scientific evidence. These standards together establish that BCBAs have an ethical obligation to monitor implementation quality — not just design effective programs. Without procedural integrity measurement, a BCBA cannot fulfill the obligation to ensure that programs are being implemented as designed and that supervisee performance meets the standards required for competent service delivery.
When OPI data are reviewed at the aggregate level, patterns emerge that are invisible in case-by-case analysis. If 60% of providers show low fidelity on a specific procedure step, this indicates a systemic training gap rather than isolated individual performance problems. OPI data can be disaggregated by procedure type, supervisor, program, or service location to identify where training resources should be targeted. This approach is more efficient than addressing implementation problems through individual performance management and more accurate in attributing problems to their actual source — often a training curriculum gap rather than individual staff inadequacy.
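A simple sketch of the disaggregation described above: flag procedure steps where a large share of providers score low, which points to a curriculum gap rather than individual performance. The data shape and cutoffs are assumptions; the majority-share idea mirrors the 60%-of-providers example in the text.

```python
# Illustrative sketch: flagging systemic training gaps from step-level
# integrity data.
from collections import defaultdict

# each record: (provider, procedure_step, integrity %)
step_data = [
    ("rbt_01", "prompt_delay", 60.0),
    ("rbt_02", "prompt_delay", 55.0),
    ("rbt_03", "prompt_delay", 90.0),
    ("rbt_01", "reinforcer_delivery", 95.0),
    ("rbt_02", "reinforcer_delivery", 92.0),
    ("rbt_03", "reinforcer_delivery", 88.0),
]

def systemic_gaps(records, low=80.0, share=0.5):
    """Steps where more than `share` of providers fall below `low`."""
    by_step = defaultdict(list)
    for provider, step, pct in records:
        by_step[step].append(pct < low)
    return [s for s, flags in by_step.items()
            if sum(flags) / len(flags) > share]

print(systemic_gaps(step_data))  # systemic gap, not an individual problem
```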
High-volume settings typically require a tiered measurement approach. Direct observation by a trained supervisor or designated integrity observer is the gold standard but is resource-intensive. Permanent product review — examining data sheets and session documentation for adherence indicators — provides lower-fidelity data but can be conducted at high volume without scheduling observer time. Video-assisted review, where session recordings are reviewed against procedure task analyses, offers a middle ground. Most well-designed integrity systems use direct observation for high-risk procedures and permanent product review for routine programming, reserving observer resources for the highest-priority measurement targets.
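The tiering above amounts to a mapping from a procedure's risk level to a measurement method. The risk labels below are illustrative assumptions; the method assignments follow the text.

```python
# Sketch of a tiered measurement assignment: reserve direct observation
# for high-risk procedures, use video review as the middle ground, and
# permanent product review for routine programming.
def measurement_method(risk):
    """Map a procedure's clinical risk level to a measurement method."""
    return {
        "high": "direct observation",
        "medium": "video-assisted review",
        "low": "permanent product review",
    }[risk]

print(measurement_method("high"))
```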
Performance feedback grounded in integrity data should be specific, immediate when possible, and delivered in a format that identifies the exact procedure step in question, describes what was observed, and models the correct implementation if correction is needed. Feedback should acknowledge correct implementation explicitly — not just address errors — to establish and maintain the reinforcement history for accurate implementation. Framing feedback as data-driven coaching rather than evaluation reduces the aversive properties of corrective feedback and increases the likelihood that it will be welcomed rather than avoided by the receiving staff member.
The frequency of integrity observation should be proportional to the clinical risk of the procedure and the implementation history of the provider. For new staff implementing high-risk procedures, weekly direct observation is appropriate. For experienced staff with established high-integrity implementation, monthly observations may be sufficient. At the organizational level, enough data should be collected to detect trends reliably — typically a minimum of one observation per provider per program per month for the OPI score to reflect genuine organizational performance rather than sampling noise. When OPI scores decline, observation frequency should increase to identify the source of the decline.
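The minimum-sampling rule above can be checked mechanically: compare the roster of provider-program pairs against the month's observation log. Roster and record shapes here are illustrative assumptions.

```python
# Hedged sketch: finding provider-program pairs with no integrity
# observation this month, i.e. where the OPI score would reflect
# sampling noise rather than genuine performance.
def undersampled(roster, observed):
    """Return (provider, program) pairs missing an observation."""
    return sorted(set(roster) - set(observed))

roster = [("rbt_01", "manding"), ("rbt_01", "tacting"),
          ("rbt_02", "manding")]
observed = [("rbt_01", "manding"), ("rbt_02", "manding")]

print(undersampled(roster, observed))  # pairs needing an observation
```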
When integrity measurement systems are implemented with transparent communication about their purpose — supporting training, identifying systemic gaps, informing program decisions — they tend to improve staff satisfaction rather than diminish it. Staff in organizations with robust integrity systems report greater confidence in their own implementation, clearer performance expectations, and a stronger sense that clinical quality is taken seriously by organizational leadership. The key determinant of cultural impact is whether data are used for coaching or for surveillance. Organizations that consistently use integrity data to provide supportive, specific feedback create a clinical culture where measurement is associated with professional growth rather than evaluation anxiety.
The ABA Clubhouse has 60+ on-demand CEUs including ethics, supervision, and clinical topics like this one. Plus a new live CEU every Wednesday.
Ready to go deeper? This course covers this topic with structured learning objectives and CEU credit.
Lead with Procedural Integrity: The Importance of Investing in Performance Management for ABA Service Providers — Patricia Glick · 1 BACB Supervision CEU · $19.99
Take This Course →

We extended these answers with research from our library — dig into the peer-reviewed studies behind the topic, in plain-English summaries written for BCBAs.
You earn CEUs from a dozen different places. Upload any certificate — from here, your employer, conferences, wherever — and always know exactly where you stand. Learning, Ethics, Supervision, all handled.
No credit card required. Cancel anytime.
All behavior-analytic intervention is individualized. The information on this page is for educational purposes and does not constitute clinical advice. Treatment decisions should be informed by the best available published research and individualized assessment, and made with the informed consent of the client or their legal guardian. Behavior analysts are responsible for practicing within the boundaries of their competence and adhering to the BACB Ethics Code for Behavior Analysts.