Comparing Multiple Methods to Measure Procedural Fidelity of Discrete-trial Instruction
Likert and global fidelity checklists can hide teaching errors—use component-by-component occurrence recording when you need true adherence data.
01 Research in Context
What this study did
Bergmann et al. (2023) tested several ways to score how well behavior technicians run discrete-trial lessons with a child with autism. They compared a detailed occurrence–nonoccurrence (step-by-step) data sheet against global ratings, an all-or-nothing method, and 3-point and 5-point Likert scales.
The team watched the same set of teaching clips and recorded errors with each tool. They wanted to know which method catches real mistakes without taking forever.
What they found
At the component level, the global and Likert methods overestimated fidelity and masked errors, giving scores that looked too good. The all-or-nothing method rarely masked component errors but underestimated fidelity at the trial level. The step-by-step occurrence–nonoccurrence sheet gave the truest picture of each missed prompt or extra cue, but it took the longest to finish.
In short, fast tools feel good but hide problems. Slow tools feel clunky but tell the truth.
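To make the contrast concrete, here is a minimal sketch of how the same session data can produce different fidelity scores under different methods. The component names, trial data, and the 80% "global" threshold are all illustrative assumptions, not the study's actual scoring rules:

```python
# Hypothetical fidelity-scoring sketch: one session scored three ways.
# Components, data, and the 80% global cutoff are made up for illustration.

# Each trial records whether each component was implemented correctly.
trials = [
    {"instruction": True,  "prompt": True,  "consequence": True},
    {"instruction": True,  "prompt": False, "consequence": True},   # missed prompt
    {"instruction": True,  "prompt": True,  "consequence": False},  # wrong consequence
    {"instruction": True,  "prompt": True,  "consequence": True},
]

def occurrence_fidelity(trials):
    """Occurrence-nonoccurrence: % of all component opportunities correct."""
    scores = [v for t in trials for v in t.values()]
    return 100 * sum(scores) / len(scores)

def all_or_nothing_by_trial(trials):
    """A trial counts only if every component in it was implemented correctly."""
    perfect = [all(t.values()) for t in trials]
    return 100 * sum(perfect) / len(perfect)

def global_rating(trials, threshold=80):
    """One yes/no judgment for the whole session (a crude stand-in for an
    observer's overall impression)."""
    return "acceptable" if occurrence_fidelity(trials) >= threshold else "needs work"

print(occurrence_fidelity(trials))      # 10 of 12 components correct
print(all_or_nothing_by_trial(trials))  # only 2 of 4 trials were error-free
print(global_rating(trials))            # the session still "passes" overall
```

Notice how a session with two real teaching errors can still clear a global threshold, while the trial-level all-or-nothing score drops to 50%: the same data, three very different stories.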
How this fits with other research
Paden et al. (2025) extend this work. They showed that letting staff watch and score their own videos keeps fidelity high even when you are not watching. Pair their self-monitoring with Bergmann's detailed sheet and you get honest data that stays honest.
Lam et al. (2011) found a cousin problem. They showed that partial-interval recording inflates agreement scores for duration events, just like Likert scales inflate fidelity here. Both papers warn that easy scoring can trick you into thinking everything is fine.
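The partial-interval trap is easy to demonstrate with the classic overestimation effect for brief behaviors. This sketch uses made-up interval lengths and event times, and shows the inflation of the behavior estimate itself rather than Lam et al.'s agreement statistics:

```python
# Hypothetical sketch: partial-interval recording overstates brief behavior.
# Interval length and event times are invented for illustration.

INTERVAL_S = 10  # score the 60-s observation in 10-s intervals
SESSION_S = 60

# Behavior onset/offset times in seconds: five 1-s bursts.
events = [(2, 3), (12, 13), (25, 26), (41, 42), (55, 56)]

# True duration: the behavior actually fills only 5 of 60 seconds.
true_pct = 100 * sum(off - on for on, off in events) / SESSION_S

# Partial-interval rule: an interval is scored if the behavior occurs
# at ANY point within it.
n_intervals = SESSION_S // INTERVAL_S
scored = set()
for on, off in events:
    for i in range(n_intervals):
        start, end = i * INTERVAL_S, (i + 1) * INTERVAL_S
        if on < end and off > start:  # any overlap scores the interval
            scored.add(i)

pi_pct = 100 * len(scored) / n_intervals
print(round(true_pct, 1))  # actual share of the session
print(round(pi_pct, 1))    # partial-interval estimate, far higher
```

Five one-second bursts touch five of the six intervals, so the easy method reports the behavior in most of the session even though it filled a sliver of it, which is the same flavor of flattery the Likert scales showed for fidelity.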
Levin et al. (2014) and Eldevik et al. (2013) showed that quick computer-based training can raise fidelity fast. Bergmann's work suggests that once staff are trained, you still need the picky score sheet to prove the skills stick.
Why it matters
If you use a single Likert item to check DTI, you may miss errors that slow learner progress. Swap to component-by-component occurrence recording at least once a month, or any time you see flat data. It takes a few extra minutes, but you catch drift before it hurts the child.
Pick one learner program, watch a session, and score it with the step-by-step sheet to see if errors are hiding.
02 At a glance
03 Original abstract
Procedural fidelity is the extent to which an intervention is implemented as designed and is an important component of research and practice. There are multiple ways to measure procedural fidelity, and few studies have explored how procedural fidelity varies based on the method of measurement. The current study compared adherence to discrete-trial instruction protocols by behavior technicians with a child with autism when observers used different procedural-fidelity measures. We collected individual-component and individual-trial fidelity with an occurrence–nonoccurrence data sheet and compared these scores to global fidelity and all-or-nothing, 3-point Likert scale, and 5-point Likert scale measurement methods. The all-or-nothing method required all instances of a component or trial be implemented without error to be scored correct. The Likert scales used a rating system to score components and trials. At the component level, we found that the global, 3-point Likert, and 5-point Likert methods were likely to overestimate fidelity and mask component errors, and the all-or-nothing method was unlikely to mask errors. At the trial level, we found that the global and 5-point Likert methods approximated individual-trial fidelity, the 3-point Likert method overestimated fidelity, and the all-or-nothing method underestimated fidelity. The occurrence–nonoccurrence method required the most time to complete, and all-or-nothing by trial required the least. We discuss the implications of measuring procedural fidelity with different methods of measurement, including false positives and false negatives, and provide suggestions for practice and research. The online version contains supplementary material available at 10.1007/s43494-023-00094-w.
Education & Treatment of Children, 2023 · doi:10.1007/s43494-023-00094-w