By Matt Harrington, BCBA · Behaviorist Book Club · April 2026 · 12 min read
Treatment integrity — the degree to which an intervention is implemented as designed — is the link between what is written in a behavior intervention plan and what actually happens during a session. A treatment plan may be technically excellent: grounded in a valid functional assessment, aligned with the research literature, individualized to the client's needs, and written with precision. But if that plan is implemented inconsistently, inaccurately, or incompletely, the outcomes it predicts will not be obtained. Treatment integrity is the bridge between the plan and the outcome.
Despite its fundamental importance, treatment integrity is inconsistently measured and reported in both the research literature and clinical practice. Published studies in JABA frequently omit treatment integrity data or report it in ways that preclude meaningful interpretation. In clinical settings, treatment integrity measurement is often either absent or limited to informal supervisor observation without systematic data collection. This gap matters because we cannot distinguish the effects of the treatment from the effects of how the treatment was implemented unless we measure both.
This course, presented by Dr. Kerry Ann Conde and Dr. Florence DiGennaro Reed — both BCBA-Ds and experts in organizational behavior management — approaches treatment integrity from a systems perspective. Their focus is not just on why treatment integrity matters (which most BCBAs intuitively understand) but on how to build organizational systems that make accurate measurement and management of treatment integrity feasible in real clinical settings where time, resources, and staff attention are all constrained.
The barriers to measuring treatment integrity in practice are real and varied: insufficient time, staff concerns about being evaluated, lack of clarity about what should be measured and how, and limited technology for efficient data collection. Addressing these barriers requires both technical solutions (efficient measurement systems) and cultural ones (creating a team environment where treatment integrity data is viewed as useful information rather than a surveillance tool).
The treatment integrity literature in ABA has its roots in both the basic research on procedural fidelity and the applied research on staff training and performance management. Early work by DiGennaro Reed and colleagues established the relationship between treatment integrity levels and client outcomes — demonstrating empirically that higher integrity is associated with better treatment effects, faster skill acquisition, and more stable behavior change. This relationship is not surprising from a behavioral standpoint: if a reinforcement-based procedure is implemented on only 50% of opportunities, the client contacts reinforcement on a thin, unpredictable schedule that may sustain existing responding rather than build the new skill the plan targets.
The OBM literature provides the framework for understanding treatment integrity as a systems problem rather than an individual performance problem. When treatment integrity is chronically low across a team, the solution is rarely simply to instruct staff to implement procedures more carefully. The solution involves examining the antecedent conditions that support accurate implementation (clear protocols, adequate training, accessible materials), the consequence conditions that maintain accurate implementation (specific and timely feedback, acknowledgment of accurate performance), and the environmental conditions that may interfere with implementation (interruptions, unclear expectations, competing task demands).
Dr. DiGennaro Reed's research program has been particularly influential in identifying the conditions under which performance feedback is most effective for improving treatment integrity. Key findings include that immediate, specific, and behavior-based feedback is more effective than delayed, general, or outcome-based feedback; that graphic feedback (showing staff their own treatment integrity data over time) can be as effective as verbal feedback under some conditions; and that performance feedback systems need to be designed for sustainability — they must be feasible to implement consistently given real-world constraints.
The measurement dimension of treatment integrity is itself a technical challenge. Whole-interval, partial-interval, and momentary time-sampling systems each have different properties when applied to treatment integrity assessment. Discrete-trial-based integrity measures may focus on specific step accuracy, whereas naturalistic teaching integrity measures may require different observation strategies. Selecting the right measurement approach requires both technical knowledge and practical judgment about what is feasible in the clinical setting.
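The differences among these recording methods can be made concrete with a short sketch. This is an illustrative Python comparison, not a tool from the course; the function name, the 10-second interval, and the observation stream are all assumptions. The point it demonstrates is that the same observation yields different integrity estimates depending on the scoring rule chosen.

```python
# Hypothetical sketch: scoring one observation stream with three
# interval-based recording methods. `stream` is a per-second record of
# whether the target implementation behavior was occurring (True/False).

def score(stream, interval=10):
    """Return the percent of intervals scored positive by each method."""
    chunks = [stream[i:i + interval] for i in range(0, len(stream), interval)]
    whole = sum(all(c) for c in chunks)     # whole-interval: present the entire interval
    partial = sum(any(c) for c in chunks)   # partial-interval: present at any point
    momentary = sum(c[-1] for c in chunks)  # momentary: present at the interval's end
    n = len(chunks)
    return {m: round(100 * v / n, 1) for m, v in
            [("whole", whole), ("partial", partial), ("momentary", momentary)]}

# 30 s of observation: behavior present for seconds 0-14, absent for 15-29.
stream = [True] * 15 + [False] * 15
print(score(stream))
# → {'whole': 33.3, 'partial': 66.7, 'momentary': 33.3}
```

True occurrence here is 50% of the session, yet whole-interval scoring underestimates it and partial-interval scoring overestimates it — which is why the choice of measurement system is itself a clinical decision.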
The most direct clinical implication of treatment integrity research is that behavior analysts should measure it — routinely, systematically, and using data that can inform clinical decisions. A BCBA who is seeing inconsistent client progress should consider treatment integrity as a hypothesis about the controlling variable before concluding that the treatment plan needs to be changed. Modifying a technically sound plan because of implementation problems compounds the problem: the new plan will also be implemented with low integrity if the systemic conditions that produced the first problem have not been addressed.
Treatment integrity data also inform supervision priorities. A supervisor who reviews treatment integrity data alongside outcome data can identify which specific procedural steps are most frequently implemented incorrectly, and can therefore target performance feedback with precision. Rather than telling a therapist to 'implement the procedure more carefully,' the supervisor can show data indicating that consequence delivery is accurate on 92% of trials but antecedent presentation is accurate on only 67% of trials — and can then provide targeted modeling and rehearsal for the specific deficit.
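The step-level summary described above is straightforward to compute from trial-by-trial scoring. The following Python sketch assumes each trial is recorded as a set of pass/fail judgments per procedural step; the step names and data are illustrative, not taken from the course.

```python
# Hypothetical sketch: summarize per-step treatment integrity from
# trial-by-trial observation data. Step names and values are illustrative.
from collections import defaultdict

def step_integrity(trials):
    """Percent of trials on which each procedural step was implemented correctly."""
    correct, total = defaultdict(int), defaultdict(int)
    for trial in trials:
        for step, ok in trial.items():
            total[step] += 1
            correct[step] += ok  # True counts as 1, False as 0
    return {step: round(100 * correct[step] / total[step]) for step in total}

trials = [
    {"antecedent_presentation": True, "consequence_delivery": True},
    {"antecedent_presentation": False, "consequence_delivery": True},
    {"antecedent_presentation": True, "consequence_delivery": False},
]
print(step_integrity(trials))
# → {'antecedent_presentation': 67, 'consequence_delivery': 67}
```

A summary like this is what lets the supervisor replace "implement the procedure more carefully" with feedback targeted at the specific step that is drifting.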
For clinical teams, treatment integrity monitoring creates accountability without surveillance. When all staff members collect treatment integrity data on each other's implementation as a routine part of practice — not as a punitive evaluation — the data become a shared resource for quality improvement. Staff who see their own treatment integrity data over time have a direct mechanism for tracking their own skill development. Staff who see the team's aggregate data understand how their performance contributes to the overall consistency of care.
The barriers that Conde and DiGennaro Reed address — staff time constraints, concerns about evaluation, data collection logistics — are important to acknowledge because they explain why treatment integrity measurement is so often omitted in practice. Building feasible, low-burden measurement systems is as important as understanding why measurement matters. A technically sophisticated integrity measure that takes 30 minutes to complete per session will not be implemented; a five-minute structured observation system that captures the critical implementation steps will.
The ABA Clubhouse has 60+ on-demand CEUs including ethics, supervision, and clinical topics like this one. Plus a new live CEU every Wednesday.
The BACB Ethics Code (2022) addresses treatment integrity through several provisions. Code 2.01 requires BCBAs to provide services that are effective and consistent with the research evidence. A BCBA who designs an evidence-based treatment plan and then allows it to be implemented with chronic low integrity is not fulfilling this obligation — the treatment being delivered to the client is not the evidence-based treatment that was prescribed. The ethics of treatment integrity are therefore not merely procedural; they go to the heart of the BCBA's obligation to provide effective services.
Code 2.19 requires that BCBAs ensure that others implement their recommendations accurately. This provision directly implicates treatment integrity monitoring: a BCBA cannot fulfill the obligation to ensure accurate implementation without having some mechanism for measuring whether implementation is accurate. The provision does not specify a particular measurement approach, but it does establish that monitoring is an ethical requirement, not merely a best practice.
Code 4.06 requires BCBAs to evaluate the effects of supervision, which includes evaluating the degree to which supervision produces accurate procedural implementation by supervisees. Treatment integrity data are one of the most direct measures of supervisory effectiveness: if supervisees are implementing procedures accurately, supervision is working; if they are not, something in the supervisory process — training, feedback, expectation-setting — needs adjustment.
The power dynamics of treatment integrity monitoring also have ethical dimensions. Staff members who implement treatment plans are in a position of less institutional power than the BCBAs and supervisors who evaluate them. Treatment integrity monitoring systems need to be designed in ways that are transparent (staff know what is being measured and why), fair (data are used to inform training and support, not exclusively to punish), and supportive (monitoring creates opportunities for coaching and feedback, not just documentation of errors). Systems that are experienced as punitive will produce avoidance behavior from staff rather than improvement in implementation.
Assessing treatment integrity requires decisions at several levels. At the measurement design level, the BCBA must determine what to measure (which procedural steps are most critical to treatment effects), how to measure it (direct observation, permanent product review, video review), how frequently to measure it (every session, weekly, monthly), and who will measure it (the supervisor, a peer, a trained independent observer). Each of these decisions involves tradeoffs between measurement precision, feasibility, and cost.
At the clinical decision-making level, treatment integrity data inform three primary decisions: whether the treatment plan is being implemented as designed, whether performance feedback is needed for specific staff or specific procedural steps, and whether observed treatment effects (or their absence) are attributable to the treatment itself or to implementation variability. A rule of thumb from the OBM literature suggests that treatment integrity above 80% is generally considered acceptable for most ABA procedures, though specific procedures (particularly those involving safety or behavior reduction) may warrant higher criteria.
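The screening decision implied by that rule of thumb can be sketched in a few lines. This is a hypothetical helper, not a validated clinical tool: the 80% default comes from the rule of thumb above, while the stricter 95% criterion for safety-related procedures, the procedure names, and the scores are illustrative assumptions.

```python
# Hypothetical decision helper applying the 80% integrity rule of thumb,
# with an assumed stricter criterion for safety/behavior-reduction
# procedures. All names and thresholds below are illustrative.

DEFAULT_CRITERION = 80
STRICT_CRITERION = 95  # assumed stricter bar for safety-related procedures

def flag_low_integrity(scores, strict=()):
    """Return procedures whose mean integrity falls below their criterion."""
    flagged = {}
    for procedure, pct in scores.items():
        criterion = STRICT_CRITERION if procedure in strict else DEFAULT_CRITERION
        if pct < criterion:
            flagged[procedure] = (pct, criterion)
    return flagged

scores = {"mand training": 84, "response blocking": 90, "token delivery": 72}
print(flag_low_integrity(scores, strict={"response blocking"}))
# → {'response blocking': (90, 95), 'token delivery': (72, 80)}
```

Note that the flagged output pairs the observed score with the criterion it missed, which is exactly the information a supervisor needs to prioritize feedback.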
When treatment integrity is low, the decision tree involves identifying the specific variables controlling the implementation deficit. Is the staff member failing to implement the procedure correctly because they do not know what correct implementation looks like (a training deficit)? Because the correct implementation is effortful or time-consuming and competing task demands make it difficult to complete (an environmental deficit)? Because accurate implementation has not been reinforced or inaccurate implementation has not been corrected (a consequence deficit)? Each of these has a different solution, and applying the wrong solution wastes time while the implementation problem persists.
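The branching logic in that decision tree can be summarized as a simple mapping. This Python sketch uses illustrative shorthand for the three deficit types and their matching interventions; the labels and questions are an assumed simplification, not the course's terminology.

```python
# Hypothetical sketch of the diagnostic branching described above: each
# implementation-deficit type maps to a different intervention.

DEFICIT_SOLUTIONS = {
    "training": "retrain with modeling, rehearsal, and a competency check",
    "environmental": "reduce competing demands; simplify materials and protocol",
    "consequence": "add specific, timely feedback on implementation accuracy",
}

def recommend(can_demonstrate_correctly, barriers_present):
    """Classify the deficit, then look up the matching solution."""
    if not can_demonstrate_correctly:
        deficit = "training"       # staff cannot show correct implementation
    elif barriers_present:
        deficit = "environmental"  # they can, but the setting gets in the way
    else:
        deficit = "consequence"    # they can and nothing blocks it: check contingencies
    return deficit, DEFICIT_SOLUTIONS[deficit]

print(recommend(can_demonstrate_correctly=True, barriers_present=False))
# → ('consequence', 'add specific, timely feedback on implementation accuracy')
```

The order of the questions matters: ruling out a training deficit first prevents the common error of delivering feedback for a skill the staff member has never actually acquired.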
For managers, treatment integrity data also inform hiring, onboarding, and ongoing training decisions. A clinic where treatment integrity data are systematically collected and reviewed can identify which new staff members need additional training before independent implementation, which experienced staff members have developed procedural drift that requires corrective feedback, and which procedures are systemically difficult to implement accurately — possibly indicating that the written protocol needs clarification.
If you are currently not collecting treatment integrity data in your practice, the starting point is not to design an elaborate measurement system but to identify the one or two procedures on each client's program that have the greatest impact on treatment outcomes and measure implementation of those procedures consistently. Perfect measurement of everything is not achievable; consistent measurement of the most critical elements is.
For supervisors, treatment integrity monitoring should be integrated into the regular supervision cycle, not added as a separate burden. Every supervision observation is an opportunity to collect treatment integrity data. Every performance feedback session should reference specific treatment integrity data rather than general impressions. Over time, this integration makes treatment integrity monitoring the default mode of supervision rather than an additional task.
For clinical directors and program managers, the systemic approach that Conde and DiGennaro Reed advocate requires examining the organizational conditions that either support or undermine accurate implementation. This means looking beyond individual staff performance to the systems — training, protocols, supervision structure, workload, feedback mechanisms — that determine whether accurate implementation is feasible and reinforced. A clinic that consistently struggles with treatment integrity across staff members has a systems problem, not a personnel problem, and the solution is organizational rather than individual.
Ready to go deeper? This course covers this topic in detail with structured learning objectives and CEU credit.
Treatment Integrity Matters! — Kerry Ann Conde · 1 BACB Supervision CEU · $0
Take This Course →

All behavior-analytic intervention is individualized. The information on this page is for educational purposes and does not constitute clinical advice. Treatment decisions should be informed by the best available published research and individualized assessment, and made with the informed consent of the client or their legal guardian. Behavior analysts are responsible for practicing within the boundaries of their competence and adhering to the BACB Ethics Code for Behavior Analysts.