Preference assessment training via self-instruction: A replication and extension
A short self-instruction packet, plus one brief feedback list when the packet alone is not enough, brings nearly all staff to mastery-level fidelity on paired-stimulus preference assessments.
Research in Context
What this study did
Shapiro et al. (2016) built a self-instruction manual for paired-stimulus preference assessments.
Staff read the packet, then tried the assessment with a learner. A supervisor gave a short feedback list to anyone who missed steps.
The team used a multiple-baseline design across participants to see if the package worked.
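For readers who want a concrete picture of what trainees were scored on, here is a minimal sketch of the paired-stimulus logic: every pair of items is presented once, left/right sides are counterbalanced, and each item's preference score is the percentage of its presentations on which it was chosen. The item names and the single-presentation-per-pair choice are illustrative assumptions, not details taken from the study or the manual.

```python
# Hypothetical sketch of paired-stimulus scoring logic (not code from the study).
import random
from itertools import combinations

def build_trial_order(items):
    """Return every unique pair once, in random order, with random left/right sides."""
    pairs = [list(pair) for pair in combinations(items, 2)]
    for pair in pairs:
        random.shuffle(pair)      # counterbalance which item appears on the left
    random.shuffle(pairs)         # randomize the order of trials
    return [tuple(pair) for pair in pairs]

def preference_percentages(results):
    """results: list of (left_item, right_item, chosen_item) tuples, one per trial."""
    presented, chosen = {}, {}
    for left, right, pick in results:
        for item in (left, right):
            presented[item] = presented.get(item, 0) + 1
        chosen[pick] = chosen.get(pick, 0) + 1
    return {item: 100 * chosen.get(item, 0) / presented[item] for item in presented}

# Example: 4 items yield 6 trials; the trainee records the chosen item on each trial.
items = ["bubbles", "tablet", "puzzle", "crackers"]   # made-up items for illustration
trials = build_trial_order(items)
results = [(left, right, random.choice([left, right])) for left, right in trials]
print(preference_percentages(results))
```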
What they found
Most staff (5 of 7 undergraduate students and 4 of 5 in-home behavior technicians) reached the mastery criterion after the manual alone.
The remaining staff got there after brief modeling and feedback sessions. A second experiment found that a quick feedback sheet alone, listing the target steps plus information about response accuracy, was enough for 5 of 6 participants.
Everyone kept the skill the next week.
How this fits with other research
Izquierdo-Gomez et al. (2015) did something very similar one year earlier, using a short video with embedded prompts instead of a paper manual. Participants in both studies reached mastery, so you can pick the medium your staff prefer.
Ruppel et al. (2023) took the same idea online, replacing the paper manual with emailed slides and giving live Zoom feedback. Staff still reached fidelity above 90%, showing the package transfers well to telehealth.
Ausenhus et al. (2019) added real-time coaching through a webcam. That extra layer may help if staff struggle after the paper or video route.
Why it matters
You now have a menu: paper, video, or Zoom. Start with the cheapest manual. If scores are clean, you are done. If not, add one round of feedback or switch to live coaching. Either way you protect client time and agency money while keeping assessment quality high.
Email your new hires the manual, watch one trial, and hand out the feedback sheet if any step is missed.
Original abstract
We examined the effects of a self-instructional and feedback package on participants' implementation of a paired-stimulus preference assessment. Specifically, in Experiment 1, we used a multiple baseline design across participants to replicate and extend the results of Graff and Karsten (2012) by evaluating the effectiveness of their self-instructional manual. A majority of the participants (i.e., 5 of 7 undergraduate students and 4 of 5 in-home behavior technicians) achieved mastery with the self-instructional package. The remaining participants met the mastery criterion after brief modeling and feedback sessions. In Experiment 2, we identified the most effective component of the feedback condition from Experiment 1 when a self-instructional package was not sufficient. Brief feedback sessions in which participants received a list of the targeted responses plus information regarding accuracy of emitted responses was sufficient for 5 of 6 participants to achieve mastery.
Journal of Applied Behavior Analysis, 2016 · doi:10.1002/jaba.339