This guide draws in part from “Training School Staff - Part 3: Implementing BST & Evaluating Training Effectiveness” by Katie Conrado, BCBA, M.Ed. in Special Education, CA Credentialed Teacher (BehaviorLive), and extends it with peer-reviewed research from our library of 27,900+ ABA research articles. Citations, clinical framing, and cross-links below are synthesized by Behaviorist Book Club.
View the original presentation →

Behavior Skills Training (BST) is the most empirically validated method for teaching behavioral procedures to staff and caregivers. Its four components — instruction, modeling, rehearsal, and feedback — are grounded in behavioral learning principles and produce higher initial acquisition and better maintenance than instruction-only or observation-only approaches. In school settings, where behavior analysts typically work through other staff rather than delivering direct services, the quality of BST implementation is the primary determinant of whether students receive the behavior support they need.
Katie Conrado's Part 3 of the school staff training series focuses on the implementation and evaluation dimensions of BST — the components that determine whether trained skills actually generalize to the classroom and maintain over time. Instruction and modeling can be delivered in a professional development session. Rehearsal, feedback, fidelity monitoring, and collaborative coaching structures require sustained engagement across the school year, and this is where many behavior support programs lose their clinical impact.
The school context presents specific implementation challenges that generic BST literature does not always address. School staff are implementing behavior support procedures while simultaneously managing 20 to 30 other students, adhering to instructional schedules, navigating IEP requirements, and working within institutional hierarchies that affect how they receive and respond to professional feedback. BST delivered without attention to these ecological variables will not transfer from the training room to the classroom.
For school BCBAs, this course addresses a core clinical responsibility: ensuring that the behavior support plans developed from functional assessments are actually implemented as designed. A technically excellent BSP that staff cannot or do not implement with fidelity is clinically worthless. Implementation science is not peripheral to behavior analysis — it is the bridge between the analytic work and the student outcomes that the analytic work is meant to produce.
The BST literature originated in the training of direct care and intervention staff in human services settings. Early research published in JABA demonstrated that the standard training model of the era — lecture and written materials — produced poor implementation fidelity, while BST produced substantially higher fidelity and better generalization to real-world implementation conditions. These findings have been replicated across populations, roles, and settings, and BST is now the standard recommendation in BACB supervision training materials and in implementation science literature on behavioral interventions in schools.
The evolution of BST research has moved beyond establishing its superiority over instruction-only approaches to refining how its components are delivered for specific populations, roles, and skill targets. For school staff training, relevant refinements include: role plays that use realistic school scenarios with accurate contextual constraints, feedback delivered in classroom settings rather than training rooms, and coaching structures that maintain staff performance over school-year timelines rather than just at post-training assessment.
Fidelity monitoring in schools has a dual function: it provides data for evaluating staff training effectiveness and it generates the implementation data that the behavior analyst needs to interpret student outcome data meaningfully. When a student is not responding to a behavior support plan, the first clinical question is whether the plan is being implemented as designed. Without fidelity data, this question cannot be answered — and without the answer, the BCBA cannot determine whether the intervention is ineffective or simply untested.
Collaborative coaching has emerged as a preferred model for sustaining staff performance in school settings because it positions the behavior analyst as a resource and problem-solver rather than an evaluator. Staff who experience performance monitoring as supportive rather than surveillance-like show higher maintenance of trained skills, greater willingness to disclose implementation difficulties, and higher engagement with ongoing training activities. The relational structure of coaching is not soft — it is a systematic approach to maintaining the discriminative stimuli and reinforcement contingencies that support sustained implementation.
Building BST plans tailored to specific school roles requires role analysis — identifying the exact procedures each staff member is responsible for implementing, the contexts in which they will implement them, and the constraints that will affect their implementation. A special education teacher implementing a token economy across a full classroom has different training needs than a 1:1 paraprofessional implementing a differential reinforcement procedure during lunch. The same skill — delivering contingent reinforcement — requires different training scenarios and different implementation supports depending on the role context.
Modeling is the component of BST most often shortchanged in school staff training. Written procedures and verbal instruction can describe what to do; modeling shows what it looks, sounds, and feels like in the actual implementation context. Effective modeling in school settings means demonstrating the procedure in the classroom environment, with a real student (or a confederate in a realistic role play), under conditions that match the actual implementation demands. Watching a video demonstration or a modeling exercise in a conference room produces lower transfer than in-situ modeling.
Descriptive feedback is the component that differentiates BST from performance review. Descriptive feedback specifies the observed behavior and its functional relationship to outcomes without evaluative framing: 'When you narrated what the student was doing as you delivered the token, he looked at you 4 of 5 times — that's different from last week when he averaged 2 of 5. The narration seems to be increasing attending.' This is different from evaluative feedback ('good job today') and different from corrective feedback delivered without behavioral specificity ('you need to be more consistent'). Descriptive feedback teaches the staff member to observe their own behavior as data — which is the foundation of sustainable self-monitoring.
Data-based coaching structures give the behavior analyst and the staff member shared information to work from. When both parties can see the fidelity trend over time — and when that trend is framed as a collaborative problem rather than a performance evaluation — conversations about implementation challenges are more productive and more likely to result in genuine problem-solving. Coaching conversations should move from data review to collaborative hypothesis generation to action planning in each meeting.
BACB Ethics Code 2.01 requires BCBAs to provide competent services, which in consultation contexts means ensuring that the procedures they design are actually implemented with adequate fidelity to produce the expected outcomes. A school BCBA who develops a BSP and trains staff in a single professional development session, then monitors implementation only through student outcome data, is not meeting the implementation oversight standard that competent behavior-analytic consultation requires.
Code 2.14 addresses the responsibility to discontinue or modify services when treatment is not producing benefit. In school consultation contexts, this requires distinguishing between an ineffective intervention and an under-implemented one. Modifying a BSP without first confirming implementation fidelity may result in abandoning an effective procedure because it was not implemented correctly rather than because it was clinically inadequate. Fidelity monitoring is the assessment tool that makes this distinction possible — it is an ethics compliance mechanism, not just a quality assurance add-on.
Code 5.06 requires supervisors to provide feedback to supervisees in a timely, accurate, and constructive manner. When the school BCBA is providing performance coaching to school staff, these obligations apply: feedback should be delivered close in time to the observed behavior, should accurately describe what was observed, and should be designed to produce improvement rather than simply to document deficiency. Punitive feedback that suppresses staff willingness to be observed or to disclose implementation problems undermines the entire coaching system and creates indirect harm to students.
Student confidentiality and privacy obligations extend to fidelity monitoring and coaching activities. Data collected on staff implementation behavior should be maintained appropriately and not shared beyond those with a professional need to know. When coaching data indicate that a staff member is consistently failing to implement a behavior support plan and the student's welfare may be at risk, there is an obligation to escalate through appropriate channels — which may include the school principal or special education coordinator — following the professional reporting structures of the school system.
Designing fidelity monitoring tools starts with a task analysis of the target procedure. Each observable step in the implementation sequence becomes an item on the fidelity checklist. The checklist should be observable (each item can be independently verified by a trained observer), complete (all essential components are captured), and actionable (each item that falls below criterion generates a specific coaching response). Generic implementation quality checklists that are not derived from the specific procedure being monitored produce data that cannot drive targeted feedback.
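As a minimal sketch of this structure (the procedure and step names below are hypothetical illustrations, not taken from the course), a fidelity checklist can be represented as the list of observable steps from the task analysis, with session fidelity computed as the percentage of steps implemented correctly and each failed step flagged as a specific coaching target:

```python
# A fidelity checklist derived from a task analysis: each observable
# step in the procedure becomes one independently verifiable item.
# These step names are illustrative placeholders.
TOKEN_ECONOMY_CHECKLIST = [
    "Delivers token within 5 seconds of target behavior",
    "Pairs token delivery with brief descriptive praise",
    "Records token delivery on the data sheet",
    "Offers exchange at the scheduled exchange time",
]

def fidelity_score(observed: dict[str, bool]) -> float:
    """Percentage of checklist steps implemented correctly in one observation."""
    correct = sum(observed[step] for step in TOKEN_ECONOMY_CHECKLIST)
    return 100.0 * correct / len(TOKEN_ECONOMY_CHECKLIST)

def coaching_targets(observed: dict[str, bool]) -> list[str]:
    """Steps below criterion -- each should map to a specific coaching response."""
    return [step for step in TOKEN_ECONOMY_CHECKLIST if not observed[step]]

# One observation session scored against the checklist.
session = {
    "Delivers token within 5 seconds of target behavior": True,
    "Pairs token delivery with brief descriptive praise": True,
    "Records token delivery on the data sheet": False,
    "Offers exchange at the scheduled exchange time": True,
}
print(fidelity_score(session))    # 75.0
print(coaching_targets(session))  # ['Records token delivery on the data sheet']
```

Because every item is tied to one observable step, a sub-criterion score immediately identifies which step to coach, which is exactly what a generic quality checklist cannot do.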
Deciding how often to monitor fidelity should be calibrated to the stage of staff training. In the immediate post-training phase, frequent observation (two to three times per week) provides the data needed to confirm acquisition and identify early drift. As fidelity stabilizes at criterion, observation can be reduced to a maintenance schedule. If fidelity drops below criterion at any maintenance check, the protocol returns to a more intensive schedule. This is the same logic as fading supervision intensity — based on performance data, not elapsed time.
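The performance-based schedule described above can be sketched as a simple rule (the specific frequencies and the 80% criterion here are assumptions for illustration, not values prescribed by the course):

```python
def observation_frequency(phase: str, latest_fidelity: float,
                          criterion: float = 80.0) -> str:
    """Return a monitoring schedule based on training phase and performance.

    Illustrative only: the criterion and frequencies are assumed values.
    The key property is that intensity is driven by fidelity data, not
    by how much time has elapsed since training.
    """
    if phase == "post-training":
        # Immediate post-training: frequent observation to confirm
        # acquisition and catch early drift.
        return "2-3 observations per week"
    # Maintenance phase: stay lean only while fidelity holds at criterion.
    if latest_fidelity >= criterion:
        return "maintenance check every 2-4 weeks"
    # Any drop below criterion returns staff to the intensive schedule.
    return "return to 2-3 observations per week"
```

The design choice worth noting is that the maintenance branch is conditional on the latest data point, so a single below-criterion check automatically reverses the fading, mirroring how supervision intensity is faded and reinstated.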
Group supervision structures for school staff should be designed to replicate the learning conditions of individual BST. Presenting a case scenario and asking the group to discuss implementation is an accuracy-level activity. Having staff members role-play the procedure with each other while the behavior analyst provides immediate feedback is a fluency-level activity. Group settings also allow peer modeling — staff members who have achieved high fidelity can model for those who are still developing, which adds modeling exemplars and peer-delivered feedback to the learning environment.
Decision-making about when to modify a behavior support plan versus when to intensify implementation support requires a clear decision rule established in advance. The proposed rule: if fidelity has been at or above 80% for at least two consecutive observation periods and the target behavior is not responding, the intervention warrants clinical modification. If fidelity is below 80%, implementation support should be intensified before any clinical modification is considered. This rule prevents the common error of abandoning effective procedures due to poor implementation.
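The decision rule above is mechanical enough to express directly. A minimal sketch (function and argument names are mine, not the course's):

```python
def next_step(fidelity_history: list[float], behavior_improving: bool,
              criterion: float = 80.0) -> str:
    """Apply the modify-vs-intensify decision rule from the guide.

    Modify the intervention clinically only after fidelity has been at or
    above criterion for at least two consecutive observations with no
    behavior change; otherwise intensify implementation support first.
    """
    if len(fidelity_history) < 2:
        return "collect more fidelity data"
    at_criterion = all(f >= criterion for f in fidelity_history[-2:])
    if not at_criterion:
        # Low fidelity means the intervention is untested, not ineffective.
        return "intensify implementation support"
    if behavior_improving:
        return "continue current plan"
    return "modify the intervention clinically"
```

Note that the "intensify" branch is checked before any clinical modification is considered, which encodes the guard against abandoning an effective procedure that was simply under-implemented.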
For every BSP you develop in school settings, create a matching implementation fidelity checklist before you begin training. The checklist forces you to be precise about what you are actually asking staff to do — and frequently reveals that the written procedure is less clear than you thought it was. Ambiguities in the BSP that are invisible when writing become obvious when trying to operationalize them as checkable implementation steps.
Plan for classroom-embedded feedback from the start. If your BST plan does not include in-classroom observation and feedback in the natural implementation context, you are relying on transfer that the research does not support. This may require coordination with school administrators to protect observation time, and that conversation should happen before training begins, not as an afterthought when fidelity data indicate staff need additional support.
Build follow-up coaching into the standard service delivery model rather than treating it as a remediation response. Staff who receive scheduled, collaborative coaching sessions — even brief ones — maintain implementation fidelity at higher rates than those who receive coaching only when problems are detected. The scheduled structure also signals that monitoring is a routine component of the support relationship, not a consequence of poor performance, which changes the interpersonal context of observation.
When coaching conversations surface systemic barriers to implementation — schedules that prevent consistent procedure delivery, IEP mandates that conflict with behavioral recommendations, insufficient paraprofessional coverage to implement the plan as designed — treat these as clinical problems that require systems-level problem-solving. The school BCBA who accepts implementation constraints without attempting to change them is allowing organizational variables to determine student outcomes rather than clinical ones.
Ready to go deeper? This course covers this topic in detail with structured learning objectives and CEU credit.
Training School Staff - Part 3: Implementing BST & Evaluating Training Effectiveness — Katie Conrado · 1 BACB Supervision CEU · $24.99
Take This Course →

We extended this guide with research from our library — dig into the peer-reviewed studies behind the topic, in plain-English summaries written for BCBAs.
All behavior-analytic intervention is individualized. The information on this page is for educational purposes and does not constitute clinical advice. Treatment decisions should be informed by the best available published research and individualized assessment, and made with the informed consent of the client or their legal guardian. Behavior analysts are responsible for practicing within the boundaries of their competence and adhering to the BACB Ethics Code for Behavior Analysts.