An Evaluation of a Staff Management Strategy to Minimize Reactivity in Procedural Fidelity of Intervention Implementers
Surprise feedback after silent checks keeps staff accurate when no supervisor is in the room.
01 Research in Context
What this study did
Reyes et al. (2025) tested a simple twist on feedback. They watched staff run social-skills sessions, but the staff never knew when the check happened. After the silent visit, the supervisor gave feedback. The study used a multiple-baseline design across three staff members.
What they found
All three staff members reached higher fidelity for positive social engagements after feedback tied to the unannounced observations began. The gains appeared only once that feedback started, not before, and fidelity stayed high in the observer-absent condition.
How this fits with other research
Akers et al. (2024) ran a similar test with BCBA trainees, but delivered the feedback over Zoom after remote sessions. Both studies point to the same core idea: feedback on sessions staff did not know were observed keeps fidelity high even when no supervisor appears to be watching.
Ruppel et al. (2023) added a quick Zoom feedback call after standard preference-assessment training. Their results match Reyes et al. (2025): a short feedback session fixes small errors and keeps fidelity high.
Shapiro et al. (2016) used a self-instruction manual plus brief feedback for preference assessments. The older paper and the new one both find that feedback, not more training, is the key final step.
Why it matters
You can scale back daily in-room coaching. Slip in unannounced, record a five-minute probe, and give feedback later. Staff learn you might check at any time, so they stay sharp even when the door is closed. One quiet visit per week can protect treatment integrity without extra pay or prep. The study's generalization probes add one caveat: deliver feedback in every context where you expect fidelity, not just one.
Pick one staff member, watch a 5-minute video from last week, and email two specific praise points plus one fix.
02 At a glance
03 Original abstract
The purpose of the current study was to evaluate reactivity to observation and increase procedural fidelity in observer-absent conditions by delivering feedback to participants following observer-absent observation sessions. Two of the three participants increased their rate of positive social engagements above criterion level following behavioral skills training (BST) and feedback in the observer-present condition, but this increase was not seen in the observer-absent condition. After the delivery of feedback in the observer-absent condition, the participants exhibited an increase in procedural fidelity. The third participant responded above criterion in the observer-present condition during baseline, so went straight into feedback in the observer-absent condition following BST, and also showed an increase in performance. Responding during generalization probes suggested that feedback should be delivered in all contexts in which procedural fidelity is expected of implementers.
Journal of Organizational Behavior Management, 2025 · doi:10.1080/01608061.2024.2326020