By Matt Harrington, BCBA · Behaviorist Book Club · April 2026 · 12 min read
Behavioral Skills Training (BST) is the gold standard of staff training in applied behavior analysis, and for good reason — its four-component structure (instruction, modeling, rehearsal, and feedback) reliably produces faster skill acquisition and better fidelity than instruction-only approaches. But BST, when used as the sole training strategy regardless of context, has limitations that become apparent when ABA organizations try to scale. Technicians who demonstrate mastery on BST-assessed skills during training sometimes show inconsistent implementation weeks later. Staff who perform correctly in controlled role-plays make errors when confronted with novel student behavior. Training time invested does not always translate proportionately to improved client outcomes.
This course argues that these limitations are not failures of BST as a method — they are failures to apply the broader science of learning design to staff training systems. Instructional design, the field that studies how to optimize learning environments for transfer and retention, has identified principles that complement and extend BST. Organizational Behavior Management (OBM) provides frameworks for ensuring that organizational contingencies support trained behaviors once training ends. Human Performance Technology (HPT) offers tools for diagnosing whether performance problems are actually training problems or whether they arise from other organizational variables.
The clinical significance is straightforward: the quality of ABA service delivery depends on how well staff can implement behavior support procedures in real environments, under real conditions, with real client variability. Training systems that produce short-lived competence demonstrations without building genuine fluency and generalization ultimately fail clients. BCBAs who supervise staff training programs have a professional responsibility to apply the full scope of evidence on learning and performance to that work.
Instructional design as a field emerged from cognitive and behavioral psychology in the mid-twentieth century, with Gagné's Nine Events of Instruction providing an early and enduring framework. Gagné's model describes a sequence of instructional conditions — gaining attention, informing learners of objectives, stimulating recall, presenting content, providing guidance, eliciting performance, providing feedback, assessing performance, and enhancing retention and transfer — that correspond to the cognitive and behavioral processes involved in learning. Each event addresses a different aspect of the learning experience, and omitting events predictably produces gaps in learner performance.
BST maps well onto several of Gagné's events but does not explicitly address all of them. In particular, Gagné's emphasis on enhancing retention and transfer — through spaced practice, varied examples, and explicit generalization training — is often absent from BST protocols as implemented in ABA organizations. This omission is consequential: without systematic attention to retention and transfer, trained skills decay faster and generalize less reliably to naturalistic service conditions.
Kirkpatrick's four-level evaluation model — reaction, learning, behavior, and results — provides a framework for evaluating training effectiveness that is highly relevant to ABA supervision contexts. Most ABA organizations evaluate training at the learning level (did the trainee pass the post-training assessment?) without evaluating behavior (did the trainee implement the skill correctly in the natural environment?) or results (did client outcomes improve?). Organizations that evaluate only at the learning level are systematically blind to the most clinically important questions about their training systems.
OBM contributes the insight that individual training outcomes are embedded in organizational performance systems. A technician who is trained to implement extinction correctly will not maintain that implementation if the organizational environment — supervisor feedback patterns, peer norms, administrative priorities — inadvertently reinforces alternative behavior. OBM analysis asks what contingencies are operating in the natural work environment and whether they support or undermine the behaviors training was designed to produce.
BCBAs who supervise technician training programs can apply instructional design principles concretely to improve training outcomes. The most actionable addition from Gagné's framework is systematic attention to retention and transfer — the events most commonly absent from BST-based training.
Enhancing retention involves spaced practice (revisiting trained skills at intervals after initial training rather than treating certification as the endpoint), cumulative review (building new training on explicit recall of prior skills rather than treating each training module as independent), and varied practice (using multiple examples and client scenarios rather than a single representative case). These additions do not require a complete training system overhaul; they can be added to existing BST protocols as supplementary components.
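As a concrete illustration of the spaced-practice component, the sketch below generates follow-up review dates at expanding intervals after a technician's initial training. The specific intervals (1, 2, 4, and 8 weeks) are illustrative assumptions, not a schedule prescribed by the research discussed here; an organization would tune them to its own supervision cadence.

```python
from datetime import date, timedelta

def spaced_review_dates(training_end, interval_weeks=(1, 2, 4, 8)):
    """Generate follow-up skill-review dates at expanding intervals
    after initial training, so certification is not the endpoint."""
    return [training_end + timedelta(weeks=w) for w in interval_weeks]

# Example: training completed April 1, 2026
for review_date in spaced_review_dates(date(2026, 4, 1)):
    print(review_date)
```

Each generated date is a prompt to revisit the trained skill (ideally with a different client scenario each time, folding in the varied-practice principle as well).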
Enhancing transfer involves analyzing the gap between training conditions and natural work conditions and designing training that bridges it deliberately. If technicians are trained in a quiet clinical space using scripted role-plays, the training conditions are maximally different from a loud classroom with unpredictable student behavior. Progressively more naturalistic practice conditions, graduated real-client contact with coaching support, and explicit instruction on how to adapt trained procedures to novel situations all improve transfer without requiring indefinite supervised practice.
The Kirkpatrick evaluation perspective changes how BCBAs design their ongoing supervision after training ends. Rather than viewing post-training supervision as primarily supportive or relationship-based, applying Kirkpatrick level 3 (behavior) and level 4 (results) evaluation means continuing to ask: is this technician actually implementing the trained procedures correctly in the field, and is that implementation producing the expected client outcomes? These questions produce more useful clinical information than periodic check-ins about job satisfaction.
BACB Ethics Code (2022) Section 2.11 requires that training programs result in genuine competence in the skills trained. A BCBA who delivers BST, obtains a passing role-play performance, and considers the training complete — without evaluating whether the trained skills transfer to and maintain in actual service delivery conditions — has met the technical minimum of training delivery without fulfilling the substantive obligation to build genuine competence.
Section 4.07 addresses the responsibility to take on supervisory and training responsibilities only when you have the capacity and competence to fulfill them. For BCBAs responsible for technician training, this includes competence in training design — knowing not just how to deliver BST but how to design training systems that produce durable, generalizable skill outcomes. The growing evidence base on instructional design and OBM in ABA training contexts makes competence in these areas increasingly relevant to meeting Section 4.07 obligations.
Staff training quality has direct client welfare implications that create a link between training system design and the Code's core client protection provisions. Clients whose technicians received inadequate training — training that produced short-lived or non-generalizing competence — are exposed to higher risk of implementation errors, program fidelity failures, and delayed skill acquisition. BCBAs who design the training systems those technicians go through are responsible for the adequacy of those systems.
OBM's organizational analysis lens also raises ethical questions about whether BCBAs are positioned to influence the organizational contingencies that affect trained behavior. When organizational systems reinforce behavior that conflicts with trained procedures — for example, rewarding speed of service delivery in ways that reduce implementation fidelity — BCBAs have an obligation to raise these conflicts rather than accepting them as fixed constraints.
Applying Kirkpatrick's evaluation framework to ABA technician training starts with identifying what is currently being measured and what is not. Most organizations have reaction data (exit surveys), some have learning data (post-training assessments), and few have systematic behavior data (fidelity monitoring in natural conditions) or results data (client outcome analysis linked to training quality). Identifying which levels are currently absent is the first step toward a more complete evaluation system.
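The audit step described above — listing which Kirkpatrick levels currently have a data source and which do not — can be sketched in a few lines. The level names come from the model itself; how an organization codes its data sources is up to it.

```python
# Kirkpatrick's four evaluation levels, in model order.
KIRKPATRICK_LEVELS = ["reaction", "learning", "behavior", "results"]

def missing_levels(measured):
    """Return the Kirkpatrick levels with no current data source."""
    return [level for level in KIRKPATRICK_LEVELS if level not in measured]

# A typical org with exit surveys and post-training assessments only:
print(missing_levels({"reaction", "learning"}))
```

For most ABA organizations, this audit surfaces "behavior" and "results" as the absent levels — exactly the two the article identifies as most clinically important.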
For instructional design audits of existing training programs, a useful diagnostic question is: where do technicians most commonly make implementation errors after training, and what does that error pattern tell us about the training design? Errors concentrated in the first two weeks post-training suggest a transfer problem. Errors that appear only under specific conditions (particular client behavior, particular session times) suggest a generalization gap. Errors on specific procedural steps suggest those steps were undertaught or undertrained in role-play.
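The error-pattern heuristics above can be captured as a simple lookup. The pattern labels and diagnosis wording below are hypothetical codings introduced for illustration; the mapping itself follows the audit logic just described.

```python
# Hypothetical mapping from observed post-training error patterns
# to candidate training-design diagnoses, per the audit heuristics above.
ERROR_PATTERN_DIAGNOSES = {
    "early_post_training": "transfer problem: training conditions differ too much from field conditions",
    "condition_specific": "generalization gap: add varied examples and naturalistic practice",
    "step_specific": "undertrained steps: increase rehearsal and feedback on those steps",
}

def diagnose(pattern):
    """Map an observed error pattern to a candidate design diagnosis."""
    return ERROR_PATTERN_DIAGNOSES.get(
        pattern, "pattern unclear: collect more fidelity data before revising training"
    )
```

A diagnosis from this screen is a hypothesis to test against fidelity data, not a conclusion.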
OBM performance analysis asks four questions before attributing performance problems to training gaps: Does the person know what they are expected to do? Do they have the skills to do it? Are organizational consequences supporting the desired behavior? Are there environmental obstacles preventing performance? When performance problems reflect antecedent or consequence issues rather than skill deficits, additional training is not the solution. Adjusting feedback systems, clarifying expectations, or removing environmental obstacles is more effective and more efficient.
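The four diagnostic questions can be expressed as a minimal decision sketch. The ordering of checks (antecedent and consequence fixes before additional training) reflects the analysis above; the field names and recommendation wording are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class PerformanceCheck:
    """Answers to the four OBM performance-analysis questions."""
    knows_expectations: bool   # Does the person know what is expected?
    has_skills: bool           # Do they have the skills to do it?
    consequences_support: bool # Do organizational consequences support it?
    obstacles_removed: bool    # Are environmental obstacles out of the way?

def recommended_action(check):
    """Return the non-training fix indicated, reserving training for confirmed skill deficits."""
    if not check.knows_expectations:
        return "clarify expectations (antecedent fix)"
    if not check.consequences_support:
        return "adjust feedback and reinforcement systems (consequence fix)"
    if not check.obstacles_removed:
        return "remove environmental obstacles"
    if not check.has_skills:
        return "provide additional training (skill deficit confirmed)"
    return "no intervention indicated by this screen"
```

Note how training appears last: only when expectations are clear, consequences are supportive, and obstacles are removed does a performance problem plausibly reflect a skill deficit.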
Decision rules for training system revision should include trigger criteria: specific outcome measures that, when not met within a defined timeframe, prompt a training design review. A trigger might be average fidelity below 85 percent at four weeks post-training, or client outcome data showing no measurable change despite consistently scheduled sessions. These triggers make training system evaluation data-driven rather than anecdote-driven.
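A trigger rule like the fidelity example above is straightforward to operationalize. The sketch below uses the article's example values (85 percent average fidelity at four weeks post-training); the data representation is an assumption.

```python
def training_review_triggered(fidelity_scores, weeks_post_training,
                              fidelity_threshold=0.85, review_week=4):
    """Flag a training design review when average fidelity at or after
    the review week falls below the threshold (85% at week 4, per the
    example trigger)."""
    eligible = [score for score, week in zip(fidelity_scores, weeks_post_training)
                if week >= review_week]
    if not eligible:
        return False  # not enough post-training data to evaluate yet
    return sum(eligible) / len(eligible) < fidelity_threshold

# Fidelity observations at weeks 2, 4, and 6 post-training:
print(training_review_triggered([0.90, 0.80, 0.70], [2, 4, 6]))
```

Running the rule on each technician's fidelity data at a fixed review point makes the revision decision data-driven by construction.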
For BCBAs responsible for technician training, the immediate practical step is to add a post-training fidelity assessment at four to six weeks using naturalistic observation rather than relying solely on training-setting performance data. This single addition creates Kirkpatrick level 3 data and reveals whether your training program is producing transfer or only acquisition.
For BCBAs interested in deepening their instructional design knowledge, Gagné's Nine Events and Kirkpatrick's evaluation model are accessible frameworks that translate directly to ABA training contexts without requiring extensive additional study. Reading Gagné's Conditions of Learning and applying the framework to one current training module as a design audit is a manageable development activity.
For organizations reviewing their entire technician training infrastructure, the HPT diagnostic question — is this a training problem, or an antecedent or consequence problem? — should precede any decision to add training components. Many technician performance problems that appear to be training gaps are actually clarity problems (technicians do not know specifically what is expected), feedback problems (technicians do not receive timely information about their performance), or reinforcement problems (correct implementation produces no organizational recognition while alternative behavior does). Adding training to these problems does not fix them.
Finally, BCBAs who develop genuine competence in instructional design and OBM have an increasingly rare and valuable expertise within ABA. Organizations seeking to systematically improve their training outcomes and build scalable quality assurance systems need practitioners who can apply the full science of learning and performance — not just BST protocols — to complex organizational training challenges.
Ready to go deeper? This course covers this topic in detail with structured learning objectives and CEU credit.
Don't Just Train - Design: Elevating ABA Supervision Through OBM & Instructional Design — Shannon Biagi · 1.5 BACB Supervision CEUs · $0
Take This Course →

All behavior-analytic intervention is individualized. The information on this page is for educational purposes and does not constitute clinical advice. Treatment decisions should be informed by the best available published research and individualized assessment, and implemented with the informed consent of the client or their legal guardian. Behavior analysts are responsible for practicing within the boundaries of their competence and for adhering to the BACB Ethics Code for Behavior Analysts.