These answers draw in part from “Training School Staff - Part 3: Implementing BST & Evaluating Training Effectiveness” by Katie Conrado, BCBA, M.Ed. in Special Education, CA Credentialed Teacher (BehaviorLive), and extend it with peer-reviewed research from our library of 27,900+ ABA research articles. Clinical framing, BACB ethics code references, and cross-links below are synthesized by Behaviorist Book Club.
The four BST components are instruction, modeling, rehearsal, and feedback. Instruction provides the trainee with a conceptual and procedural description of the target skill — what to do, why to do it, and what correct performance looks like. Without instruction, the trainee has no basis for discriminating correct from incorrect performance. Modeling shows the skill being performed correctly in a realistic context, adding the visual and auditory stimulus properties that instruction alone cannot convey. Rehearsal provides the opportunity for skill practice under controlled conditions, building the fluency and contextual familiarity needed for accurate field implementation. Feedback closes the loop by confirming correct performance or correcting errors immediately — before incorrect forms of the skill have the opportunity to strengthen. Research consistently shows that omitting any single component — particularly rehearsal and feedback — substantially reduces fidelity in natural settings.
Start with a task analysis of the target procedure: break it into every observable component step. Each step becomes one item on the checklist, written in observable language (e.g., 'delivers specific verbal praise within 3 seconds of target behavior' rather than 'reinforces appropriately'). The checklist should be long enough to capture all essential components and short enough to complete during a typical classroom observation without missing live implementation. Score each item as implemented or not implemented — avoid partial credit ratings that complicate data interpretation. After each observation, review the checklist with the staff member item by item, noting which components were present and which need additional practice. The checklist becomes the shared data reference that makes feedback specific, the coaching conversation organized, and progress measurable over time.
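The binary scoring rule above can be sketched in a few lines. This is an illustrative sketch only — the checklist items and the `fidelity_score` helper are hypothetical examples, not part of any published protocol:

```python
def fidelity_score(checklist: dict[str, bool]) -> float:
    """Percent of task-analysis steps scored as implemented.

    Each step is binary (implemented / not implemented) — no partial
    credit, consistent with the all-or-nothing scoring described above.
    """
    if not checklist:
        raise ValueError("checklist is empty")
    return 100 * sum(checklist.values()) / len(checklist)

# Hypothetical observation of a four-step procedure
observation = {
    "gains student attention before delivering instruction": True,
    "delivers specific verbal praise within 3 seconds": False,
    "records trial data immediately after the trial": True,
    "uses the planned error-correction procedure": True,
}
print(fidelity_score(observation))  # 75.0
```

Keeping the score as a simple percentage of binary items makes the post-observation review concrete: the staff member sees exactly which steps were missed rather than an aggregate rating.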
Corrective feedback is most effective when it is descriptive rather than evaluative, specific rather than general, and proximal in time to the observed behavior. Describe what you observed: 'In the last five minutes, I noticed the reinforcer was delivered after a 10-second delay from the target behavior on three occasions.' Connect the observation to the procedure rationale: 'A delay of that length weakens the contingency between the target behavior and the reinforcer.' Then offer a clear alternative: 'Let's practice delivering it within 3 seconds — I'll prompt the first few in tomorrow's observation.' Separating observation from evaluation — describing behavior rather than judging the person — keeps the feedback focused on what can be changed rather than on what is wrong with the staff member. Feedback delivered this way is more likely to be received, acted on, and associated with a collaborative rather than adversarial relationship.
Data-based coaching follows a recurring cycle: observe implementation and record fidelity data, review data with the staff member collaboratively, identify patterns (which components are stable, which are drifting, which have never reached criterion), generate hypotheses about barriers to consistent implementation, agree on an action plan, and schedule the next observation. In a school year timeline, this cycle typically runs on a two to four week rotation for each staff member being coached. The data record provides both parties with an objective basis for the conversation and makes progress visible over time — a critical source of reinforcement for the behavior analyst's investment in coaching and for the staff member's sustained implementation effort. Make the trend graphs accessible and easy to interpret so the data function as feedback for both parties.
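The pattern-identification step of the cycle (stable vs. drifting vs. never at criterion) can be sketched as a small classifier over a series of binary observations. The function name, labels, and two-observation stability window are assumptions for illustration, not prescribed criteria:

```python
def classify_components(history: list[dict[str, bool]]) -> dict[str, str]:
    """Label each checklist component from a series of observations (oldest first).

    'stable'          : implemented on both of the two most recent observations
    'drifting'        : implemented at least once, but not consistently of late
    'not yet acquired': never scored as implemented
    """
    if len(history) < 2:
        raise ValueError("need at least two observations to assess stability")
    labels = {}
    for step in history[0]:
        scores = [obs[step] for obs in history]
        if all(scores[-2:]):
            labels[step] = "stable"
        elif any(scores):
            labels[step] = "drifting"
        else:
            labels[step] = "not yet acquired"
    return labels

# Hypothetical three-observation history for a three-step procedure
history = [
    {"praise within 3s": True,  "data recorded": False, "error correction": False},
    {"praise within 3s": True,  "data recorded": True,  "error correction": False},
    {"praise within 3s": True,  "data recorded": False, "error correction": False},
]
print(classify_components(history))
```

Output like this gives the coaching conversation its agenda: stable components get maintenance checks, drifting components get hypothesis generation about barriers, and unacquired components get renewed rehearsal.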
Role plays should be as ecologically valid as possible: use scenarios drawn from the actual student profiles on the staff member's caseload, set the role play in the physical classroom space when possible, include the contextual variables that will be present during real implementation (background noise, instructional demands, other students needing attention). The behavior analyst or a trained confederate plays the student role, including realistic variations in the student's behavior — not just the ideal presentation assumed in the BSP, but the messier real-world versions including partial responses, off-task behavior, and escalation. After the rehearsal, provide immediate feedback before the scenario is repeated with corrections. Multiple iterations of rehearsal with successive feedback produce substantially higher fidelity than a single practice trial followed by general discussion.
The decision rule is straightforward: assess fidelity before modifying the intervention. If fidelity is below 80% (or whatever criterion was established in the plan), the intervention has not been adequately tested. Modifying a BSP under these conditions risks abandoning an effective procedure because it was not implemented correctly. If fidelity is at or above criterion for at least two consecutive observation periods and the student's target behavior is not responding as predicted, the clinical rationale for the intervention should be reviewed — was the functional assessment accurate, is the reinforcer still functional, has the environment changed in ways that affect the behavioral contingencies? The clear sequence is: confirm adequate implementation, then evaluate clinical effectiveness.
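The decision rule above is simple enough to state as a function. This is a minimal sketch assuming the 80% criterion and two-consecutive-observation rule from the text; the function name and return strings are illustrative, not standardized terminology:

```python
def next_step(fidelity_scores: list[float], behavior_improving: bool,
              criterion: float = 80.0) -> str:
    """Fidelity-first decision rule: confirm implementation before judging the plan."""
    # Fewer than two observations, or either of the last two below criterion:
    # the intervention has not yet been adequately tested.
    if len(fidelity_scores) < 2 or any(s < criterion for s in fidelity_scores[-2:]):
        return "retrain"       # address implementation before modifying the BSP
    if behavior_improving:
        return "continue"      # adequate fidelity and behavior responding
    return "review rationale"  # adequate fidelity, no response: revisit the
                               # functional assessment, reinforcer, and environment
```

For example, `next_step([65.0, 72.0], behavior_improving=False)` returns `"retrain"` rather than prompting a plan change — exactly the sequencing the paragraph describes.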
Group supervision for school staff is most effective when it includes active learning activities rather than primarily didactic presentation. Structure each session around a specific skill target, deliver brief instruction on that skill, then engage the group in structured practice: role plays in pairs with the behavior analyst rotating to provide feedback, case scenario discussions that require each participant to produce a written or verbal response before group sharing, or data interpretation exercises using real or realistic data sets. Video modeling of correct and incorrect implementation provides efficient group exposure to modeling exemplars. The behavior analyst's feedback during group practice should be specific and occur during the role play, not just afterward. Group settings allow for more modeling exemplars than individual training alone and provide peer feedback as an additional source of corrective information.
The most common breakdown points are: inadequate rehearsal time (training compressed into a single professional development session with no follow-up practice), feedback delivered too late after observation to be connected to the specific behaviors observed, role plays that are too idealized to prepare staff for the real implementation challenges, fidelity monitoring that is reactive (triggered by student regression) rather than proactive (scheduled regardless of apparent outcome), and coaching conversations that are not grounded in specific data. Prevention requires planning: build rehearsal and in-classroom feedback into the training protocol from the start, schedule fidelity observations before training even begins to establish a baseline, create role play scenarios that include realistic complications, and design the coaching structure before the initial training session so it is treated as standard practice rather than remediation.
IEP goals establish the socially significant outcomes that behavior support procedures are designed to produce. BST training is the implementation mechanism — it ensures that the IEP-mandated procedures are actually delivered as designed. The connection should be explicit in staff training: each BST session should reference the specific IEP goals that the trained procedure serves, and fidelity monitoring should track implementation of the procedures that the IEP specifies. Student IEP progress monitoring data then serve as the ultimate outcome measure for the BST program: are the trained staff producing the implementation conditions that the IEP goals require? When student progress data indicate goals are not being met, fidelity data are the first step in the clinical investigation — before conclusions about the adequacy of the goals or procedures themselves are drawn.
Performance evaluation is unidirectional — the evaluator assesses the staff member against a standard and records a rating. It typically occurs at low frequency (annual or semi-annual), uses evaluative language, and is tied to employment consequences. Collaborative coaching is bidirectional — both the behavior analyst and the staff member contribute to identifying what is working, what needs adjustment, and what support is needed. It occurs at high frequency, uses descriptive language, and is tied to professional development rather than employment status. The distinction matters functionally: staff who experience coaching as evaluative reduce transparency — they perform differently when observed and are less likely to disclose implementation problems. Staff who experience coaching as collaborative seek out observation and consultation. The coaching structure should be designed to maintain the discriminative stimuli that signal support rather than surveillance.
All behavior-analytic intervention is individualized. The information on this page is for educational purposes and does not constitute clinical advice. Treatment decisions should be informed by the best available published research and individualized assessment, and made with the informed consent of the client or their legal guardian. Behavior analysts are responsible for practicing within the boundaries of their competence and adhering to the BACB Ethics Code for Behavior Analysts.