Improvement of procedural fidelity in discrete‐trial programs using computer‐based instruction to teach skilled observation
A short computer-based lesson with adaptive prompts quickly raises staff procedural fidelity in discrete trials.
01 Research in Context
What this study did
Lionello‐DeNolf et al. (2025) tested a computer-based training program called Train-to-Code. The goal was to help staff deliver discrete-trial programs correctly and consistently.
Participants first practiced on the computer by coding video examples and non-examples of a teacher delivering discrete trials. The program adjusted its prompts and gave immediate feedback based on how accurately they coded. Before and after training, the same participants delivered discrete-trial programs in role plays with a scripted research assistant to see whether the lesson carried over.
What they found
After the computer lesson, participants followed the trial steps much more accurately. Errors dropped and correct steps stayed high in the posttest role plays.
The gains showed up quickly and did not require extra coaching from a live trainer.
How this fits with other research
Machado et al. (2021) also used a computer to train observers. Their trainees cut scoring errors while watching fast-paced videos. Together, the two studies suggest that a short, automated module can lift fidelity: one for scoring, one for teaching.
Al-Nasser et al. (2019) and Hansard et al. (2018) reported similar gains with written packets or self-paced videos. Train-to-Code adds dynamic prompts that adjust as you work, an update to those earlier static tools.
Spanoudis et al. (2011) warned that first-trial data can miss mistakes. The new program sidesteps this by logging every response during practice, answering that older caution with tighter measurement.
Why it matters
Staff can complete a Train-to-Code module in a single short session and deliver cleaner trials soon after, with no extra trainer time and no printouts. Use it during onboarding or as a quick tune-up before starting new programs.
Have each staff member complete the Train-to-Code module, then run five practice trials and score themselves with the same checklist.
03 Original abstract
Procedural fidelity is an important component of behavioral intervention programs. The Train-to-Code software was used to teach skilled observation of implementation of three types of discrete-trial programs, and improvement to procedural fidelity was assessed. Participants completed a training package that involved coding video examples and non-examples of a teacher delivering each discrete trial program. The degree of prompting given to the trainee increased or decreased dynamically during training sessions based on participants' coding accuracy. The efficacy of the training was tested within subjects via pre- and posttest role plays in which participants delivered discrete-trial programs to a scripted research assistant. Results indicated substantial improvement in discrete trial delivery at posttest. These results suggest that Train-to-Code may be an effective method for training delivery of discrete trial programs in applied settings.
Journal of Applied Behavior Analysis, 2025 · doi:10.1002/jaba.70027