Practitioner Development

Evaluating Artificial Intelligence on the Efficacy of Preference Assessments for Preservice Speech-Language Pathologists.

Griffen et al. (2024) · Journal of Developmental and Physical Disabilities
★ The Verdict

An AI coach lets new SLPs run faster, more accurate MSWO assessments than a paper manual alone.

✓ Read this if you're a BCBA who trains staff or students to conduct preference assessments in clinics or schools.
✗ Skip if you're a practitioner looking for child-level reinforcer validation data.

01 · Research in Context

01

What this study did

Griffen et al. (2024) tested an AI coach that teaches new speech-language pathologists how to run a multiple-stimulus-without-replacement (MSWO) preference assessment. Five preservice SLPs first learned the steps from a paper manual. Then they used an AI app that watched them work and gave live tips. The team tracked how long each assessment took and how many steps each trainee got right.

All participants worked with children who had intellectual or developmental disabilities. The study ran in the children's regular therapy rooms. No extra staff stood behind the trainees; the AI handled all coaching.
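If you want the scoring mechanics behind an MSWO, here is a minimal Python sketch of the usual math (an item's score is the number of trials it was chosen divided by the number of trials it was available, per DeLeon & Iwata, 1996, cited in the abstract below). The toy names and session orders are invented for illustration, not the study's data.

```python
# Minimal sketch of MSWO scoring in the style of DeLeon & Iwata (1996):
# score = trials on which the item was chosen / trials on which it was
# available x 100. An item picked on trial k was available on trials 1..k.
from collections import defaultdict

def mswo_scores(sessions: list[list[str]]) -> dict[str, float]:
    """sessions: each inner list is the order items were picked in one
    MSWO session (first pick = most preferred that session)."""
    chosen = defaultdict(int)     # trials on which the item was picked
    available = defaultdict(int)  # trials on which the item was in the array
    for order in sessions:
        for trial, item in enumerate(order, start=1):
            chosen[item] += 1
            available[item] += trial  # stayed in the array through trial k
    return {item: 100 * chosen[item] / available[item] for item in available}

# Invented example: three sessions with four toys
sessions = [
    ["bubbles", "car", "blocks", "puzzle"],
    ["bubbles", "blocks", "car", "puzzle"],
    ["car", "bubbles", "blocks", "puzzle"],
]
for item, pct in sorted(mswo_scores(sessions).items(), key=lambda kv: -kv[1]):
    print(f"{item}: {pct:.0f}%")
```

Run on the invented sessions above, this ranks bubbles first (75%) and the puzzle last (25%), which is the hierarchy a trainee would hand to the treatment team.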

02

What they found

With the AI coach, trainees finished the MSWO faster and made fewer scoring errors than at their own paper-manual baseline. Social-validity surveys showed the new SLPs liked the AI coach and felt more confident, and both the adult and child participants rated it as more acceptable than traditional methods. The gains held across different children and different toy sets.

In short, swapping paper for an AI coach cut assessment time and boosted accuracy for brand-new clinicians.

03

How this fits with other research

Hansard et al. (2018) and Al-Nasser et al. (2019) already showed that short self-instruction packets or a single video can teach novices to run preference assessments with near-perfect fidelity. Griffen et al. (2024) extend that line: they keep the self-paced format but swap static pictures and voice-over for an interactive AI that talks back.

Snyder et al. (2012) showed that a 2-minute video MSWO can match the results of a live MSWO for most autistic children. Griffen et al. (2024) do not test child outcomes; instead they target the adult's implementation skill, filling in the other half of the puzzle: how to get clinicians to run the tool correctly.

Lionello-DeNolf et al. (2025) also used computer-based training, but they taught staff to observe discrete trials. Griffen moves the same idea to preference assessment and adds live AI feedback, showing the tech can travel across ABA tasks.

04

Why it matters

If you train RBTs, graduate students, or new hires, you can now replace long lectures with an AI coach that fits on a tablet. The app shortens training time and still hits high fidelity, freeing you to supervise other cases. Try loading an MSWO script into a simple AI feedback program during your next staff orientation and track minutes saved per assessment, as in the sketch below; you may reclaim an entire billable hour each day.
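Here is a back-of-the-envelope sketch of that minutes-saved tracking. All durations are made-up placeholders, not numbers from the study.

```python
# Rough way to quantify "minutes saved": log each trainee's assessment
# durations during orientation and compare paper-manual vs. AI-coached runs.
# All numbers below are made-up placeholders, not study data.
from statistics import mean

durations_min = {
    "paper_manual": [22.0, 19.5, 24.0],  # minutes per MSWO at baseline
    "ai_coach":     [12.5, 11.0, 13.5],  # minutes per MSWO with the app
}

saved = mean(durations_min["paper_manual"]) - mean(durations_min["ai_coach"])
print(f"~{saved:.1f} min saved per assessment")
print(f"~{saved * 4:.0f} min saved over four assessments a day")
```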

Free CEUs

Want CEUs on This Topic?

The ABA Clubhouse has 60+ free CEUs — live every Wednesday. Ethics, supervision & clinical topics.

Join Free →
→ Action — try this Monday

Film your best MSWO demo, load it into an AI feedback app, and have your next trainee test it live while you measure time and fidelity.
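To score that live test run, here is a minimal sketch of checklist-based fidelity. The six-step task analysis is an illustrative MSWO checklist I wrote for the example, not the study's actual scoring form.

```python
# Minimal sketch of scoring a trainee's fidelity against a step checklist.
# This six-step task analysis is illustrative, not the study's form.
CHECKLIST = [
    "arrange all items in a line, equidistant",
    "prompt the learner to pick one item",
    "allow brief access to the chosen item",
    "remove the chosen item from the array",
    "rotate the remaining items before the next trial",
    "record the selection order",
]

def fidelity(steps_correct: list[bool]) -> float:
    """Percent of checklist steps implemented correctly."""
    assert len(steps_correct) == len(CHECKLIST)
    return 100 * sum(steps_correct) / len(steps_correct)

# Example: the trainee missed the rotation step
print(f"fidelity: {fidelity([True, True, True, True, False, True]):.0f}%")
```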

02 · At a glance

Intervention: behavioral skills training
Design: single-case (other)
Sample size: 7 (5 preservice SLPs, 2 children)
Population: intellectual disability, developmental delay
Finding: positive
Magnitude: medium

03 · Original abstract

Individuals with intellectual and developmental disabilities (IDD) face many barriers to meaningful inclusion, including limited language and communication skills. Professionals, such as speech-language pathologists (SLPs), can provide personalized instruction to promote skill development and inclusion. Providing opportunities for individuals to express preferences and choice, such as the multiple stimulus without replacement preference assessment (MSWO; DeLeon & Iwata 1996), within these programs, further increases skill acquisition and social interaction. However, limitations in professionals’ knowledge and skills in performing assessments can be another barrier to meaningful inclusion for individuals with IDD and traditional training methods can be challenging and time consuming. The purpose of the current study was to compare the use of artificial intelligence with traditional pen and paper self-instructional MSWO training methods for five preservice SLPs. Fidelity of implementation and duration of assessment were measured. Results demonstrated a large increase in implementation fidelity for two participants, a moderate increase for two participants and a slight increase for the remaining participant while using artificial intelligence. All participants demonstrated a decrease in scoring errors using artificial intelligence. Regarding duration of implementation, artificial intelligence resulted in a significant reduction for four participants and a moderate reduction for the remaining participant. Results of the follow-up survey suggest that all adult participants and both child participants found that artificial intelligence had a higher treatment acceptability and was more effective at producing socially significant outcomes than traditional methods. Recommendations for clinicians and future research are discussed.

Journal of Developmental and Physical Disabilities, 2024 · doi:10.1007/s10882-024-09976-2