A parametric analysis of procedural fidelity errors following mastery of a task: A translational study
Keep procedural fidelity high even after learners master a skill—errors still degrade performance.
Research in Context
What this study did
Falakfarsa et al. (2023) asked what happens when you let small mistakes slip in after a learner has already mastered a task.
They worked with college students who had just learned a matching task. The team then ran short sessions where the experimenter followed the script perfectly, made tiny errors, or made bigger errors.
The students’ accuracy and speed were tracked to see if post-mastery slip-ups mattered.
What they found
Even after the students hit mastery, higher fidelity kept their performance high.
When the experimenter drifted—even a little—the students started to match more slowly and make more errors.
The takeaway: mastery does not protect against the damage of sloppy delivery.
How this fits with other research
Sarber et al. (1983) observed a similar fade-out effect, driven by probe difficulty rather than fidelity. They warned that difficult probe trials after training can produce false errors; Falakfarsa et al. show that errors can also come from the teacher's side.
Parry-Cruwys et al. (2022) showed that short, mastery-based online modules can quickly bring graduate students to 90%+ accuracy. Falakfarsa et al. add the next step: once that accuracy is reached, you have to keep delivering the procedure exactly as scripted or performance will drop.
Together the three studies draw one clear line: mastery is not a finish line—it is a maintenance zone that still needs precision.
Why it matters
If you run fluency checks, maintenance trials, or parent training, keep your own behavior tight. A drift in prompt timing, wording, or reinforcement size can quietly undo the skill you just taught. Build a simple fidelity checklist and use it every time you revisit "mastered" targets.
Pull out one mastered program, run five trials, and score yourself on a three-item fidelity sheet; fix any drift on the spot.
Original abstract
Procedural fidelity is defined as the extent to which the independent variable is implemented as prescribed. Research using computerized tasks has shown that fidelity errors involving consequences for behavior can hinder skill acquisition. However, studies examining the effects of these errors once skills have been mastered are lacking. Thus, this translational study investigated the effects of varying levels of fidelity following mastery of a computerized arbitrary matching-to-sample task. A group design (consisting of five groups) was used in which college students initially completed 250 trials during which no programmed errors (i.e., perfect fidelity) were arranged, followed by an additional 250 trials with consequences delivered across various levels of fidelity (i.e., 20, 40, 60, 80, and 100% of trials administered without errors). The results showed that participants assigned to higher fidelity conditions performed better (on average). These results extended the findings of previous studies by demonstrating how errors involving consequences affect behavior across various stages of learning.
Journal of Applied Behavior Analysis, 2023 · doi:10.1002/jaba.992