By Matt Harrington, BCBA · Behaviorist Book Club · Research-backed answers for behavior analysts
Standard competency-based approaches identify skills that trainees must demonstrate but often present them as a flat list without an explicit instructional sequence. The job-based model adds two layers: it organizes competencies into a developmental phase structure that reflects the actual architecture of BCBA practice, and it ties phase transitions to direct assessment rather than hour accumulation. The result is a system where the supervisor's instructional choices are guided by a clear logic about what the trainee needs to do next and why, rather than what happens to be clinically convenient in the moment.
Conduct an entry-point assessment in the first supervision session using brief probes of phase-one skills: have the trainee demonstrate a DTT procedure, describe their data collection experience, and show you examples of their session notes. If phase-one skills are fluent — meaning consistent and accurate across a few demonstrations — confirm with a direct observation in session and then proceed to assess phase-two readiness. The goal is not to repeat training the trainee doesn't need; it is to find the actual baseline so you are not filling gaps that don't exist while missing the ones that do.
Phase duration should be determined by competency attainment, not calendar time. That said, in practice, trainees who begin fieldwork with strong RBT backgrounds often move through phase one in 2-3 months. Phase two — developing the scientific-practitioner repertoire — is typically the longest phase, often spanning 6-12 months, because data-based clinical reasoning takes time to develop across varied cases. Phase three begins as the trainee approaches the end of their hours and should involve progressively independent ownership of cases. Supervisors should resist the temptation to delay phase advancement because of scheduling comfort or caseload needs.
For data interpretation, BST begins with the supervisor explicitly describing what a data path showing a decelerating trend looks like, what constitutes a level change versus a trend change, and what decision rules trigger a program modification. Modeling means the supervisor works through an actual data set from a current case, narrating their reasoning aloud. Rehearsal means the trainee works through a different data set while the supervisor observes. Feedback addresses specific errors — for example, 'You identified the trend direction correctly but did not consider the variability in the last three data points before making a decision.' This process is repeated across multiple data sets until the trainee meets criterion.
Frame phase advancement as contingent on skill demonstration, not time. When introducing the model at the start of fieldwork, establish the criteria clearly: 'You will move to phase two when you consistently demonstrate these specific skills at these levels.' When a trainee is not advancing, return to the criteria rather than making a judgment about the trainee's effort or attitude. Ask: 'What is the specific skill that is not yet at criterion? What additional practice or instruction would help?' This keeps the focus on behavior and skill-building, not on evaluation of the person, and it gives both supervisor and trainee a clear target.
Yes, with adaptations. Phase-one skill assessment is more challenging remotely because procedural integrity observation requires seeing the trainee's implementation with clients. Synchronous video observation during client sessions is the most direct solution. Trainees can also record sessions for asynchronous review, though this shifts feedback timing. Phase-two and phase-three skills — data review, program writing, clinical reasoning — are actually well-suited to telehealth because they can be conducted through screen share, collaborative document review, and structured discussion. Supervisors should document which competencies have been assessed via direct observation versus remote review.
Use a structured rubric that separates the components of a high-quality FBA: was the interview comprehensive and did it address setting events and MOs as well as antecedents and consequences? Was the direct observation method appropriate to the behavior topography? Is the function-based hypothesis specific and testable? Is the treatment plan directly derived from the hypothesis? Score each component independently and provide specific written feedback. Then ask the trainee to revise based on your feedback rather than rewriting yourself. The revision process is where the learning happens — seeing your corrections does not build the trainee's skill the way generating a corrected version under guidance does.
Maintain a supervision log that records, for each session: the phase the trainee is currently in, which competencies were targeted, which assessment methods were used, the trainee's performance level, and the feedback provided. Keep completed competency checklists for each phase with dates and assessment conditions. Retain samples of the trainee's written work with your feedback. If the BACB audits a supervision relationship, this documentation demonstrates that supervision was active and systematic rather than informal. It also protects the supervisor if questions arise later about the quality of the training they provided.
The course specifically addresses supervisor motivation as a variable. Assessing your own MOs means honestly examining what currently functions as a reinforcer for your supervision behavior: is it clinical case discussion, administrative efficiency, or genuine investment in trainee skill development? It also means identifying competing contingencies — the demands of a full caseload, administrative tasks, and billing requirements that compete with supervision quality. Supervisors who identify these competing contingencies can design structural solutions: protecting supervision time in their calendar, using structured protocols that make sessions efficient, and tracking their own supervision behavior the way they would track a clinical target.
Late discovery of significant gaps is a supervision problem as much as a trainee problem — it indicates the structured assessment process was not in place or was not generating accurate information. When gaps are found late, the supervisor faces a genuine ethical tension between the trainee's timeline and the public protection function of the certification process. The practical response is to document the gap clearly, design targeted remediation with explicit criteria, and communicate honestly with the trainee about what achieving competency will require. If the hours will run out before competency is demonstrated, the supervisor must consider whether they can ethically sign off — Code 4.05 requires that the supervision provided was adequate, not that it was attempted.
The ABA Clubhouse has 60+ on-demand CEUs including ethics, supervision, and clinical topics like this one. Plus a new live CEU every Wednesday.
Ready to go deeper? The course below covers this topic in depth, with structured learning objectives and CEU credit.
Teaching to the Job: A Systematic Approach to BCBA Supervision — Nicole Stewart · 1 BACB Supervision CEU · $15
Take This Course → 1 BACB Supervision CEU · $15 · BehaviorLive
All behavior-analytic intervention is individualized. The information on this page is for educational purposes and does not constitute clinical advice. Treatment decisions should be informed by the best available published research and individualized assessment, and made with the informed consent of the client or their legal guardian. Behavior analysts are responsible for practicing within the boundaries of their competence and adhering to the BACB Ethics Code for Behavior Analysts.