Assessment & Research

A comparison of methods for collecting data on performance during discrete trial teaching.

Lerman et al. (2011) · Behavior Analysis in Practice, 2011
★ The Verdict

Logging only the first trial each session can fool you into thinking a child has mastered a skill—add prompt notes or sample more trials before you move on.

✓ Read this if you are a BCBA or RBT running discrete-trial sessions and want faster data collection without losing accuracy.
✗ Skip if you already use full trial-by-trial data sheets with prompt-level recording.

01 Research in Context

01

What this study did

Lerman et al. (2011) watched therapists teach children with autism using discrete trials. They compared two ways of recording data: writing down only the first trial of each session, or writing down every trial along with how much prompting the child needed.

The team wanted to know if the quick first-trial method still caught when a skill was truly mastered.

02

What they found

First-trial data gave a rough picture of the child’s progress. It often said "mastered" too soon. Adding prompt-level notes caught the real picture better.

In short, the fast way saved time but risked moving on too early.
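The sampling problem above can be shown with a tiny worked example. The data below are hypothetical, invented only to illustrate the pattern the study describes: a child often gets the first, well-primed trial right and then errs later in the same session, so first-trial-only scoring over-reports mastery.

```python
# Hypothetical session data: each inner list is one session,
# True = correct response, False = incorrect response.
sessions = [
    [True, False, False, True, False],  # first trial correct, 2/5 overall
    [True, True, False, True, False],   # first trial correct, 3/5 overall
    [True, True, True, False, True],    # first trial correct, 4/5 overall
]

# Score using only the first trial of each session.
first_trial_pct = 100 * sum(s[0] for s in sessions) / len(sessions)

# Score using every trial of every session.
all_trials = [t for s in sessions for t in s]
all_trials_pct = 100 * sum(all_trials) / len(all_trials)

print(f"first-trial estimate: {first_trial_pct:.0f}% correct")  # 100%
print(f"all-trials estimate:  {all_trials_pct:.0f}% correct")   # 60%
```

With these made-up numbers, a 90%-correct mastery rule would pass on first-trial data alone while the full record shows the skill is far from mastered.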

03

How this fits with other research

Giunta‐Fede et al. (2016) ran a similar test and also saw that sparse data can miss learning. They found that recording every trial helped kids learn tacts a bit faster than probe-only trials. This backs up the 2011 warning.

Lionello‐DeNolf et al. (2025) took the problem one step further. They built a computer course called Train-to-Code that teaches staff to watch and record trials accurately. After the course, therapists made fewer mistakes when running DTT. The 2011 paper showed the danger of sloppy notes; the 2025 paper gives a fix.

Carroll et al. (2018) and Peters et al. (2013) looked at other quick checks in DTT—short tests that pick the best error-correction tactic. All these studies share the same goal: save staff time without hurting child learning.

04

Why it matters

If you only log the first trial, you might think a child has mastered a skill and move on. Check at least three trials or add prompt notes before you fade teaching. The extra minute protects you from giving the child harder work too soon.

→ Action — try this Monday

Pick one program, record prompt level for every trial this week, then compare the mastery decision with last week’s first-trial-only sheet.
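The weekly comparison can be sketched in code. Everything here is an assumption for illustration: the 90% criterion, the trial format `(correct, prompt_level)`, and the convention that prompt level 0 means an independent response. Substitute your own program's mastery rule and prompt codes.

```python
# Sketch of comparing two mastery calls on the same session,
# assuming a hypothetical rule of >= 90% independent correct responses.
# Each trial is (correct: bool, prompt_level: int); 0 = independent.

def mastered_first_trial_only(session):
    """Mastery call from the first trial alone (the fast method)."""
    correct, _prompt = session[0]
    return correct

def mastered_all_trials(session, threshold=0.9):
    """Mastery call from every trial, counting only independent corrects."""
    independent = [correct and prompt == 0 for correct, prompt in session]
    return sum(independent) / len(session) >= threshold

# Hypothetical session: first trial correct, but two later
# trials still needed a partial prompt (level 1).
session = [(True, 0), (True, 1), (True, 0), (True, 1), (False, 0)]

print(mastered_first_trial_only(session))  # True  -> would move on
print(mastered_all_trials(session))        # False -> keep teaching
```

When the two calls disagree, the study's finding suggests trusting the all-trials-plus-prompt-level record.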

02 At a glance

Intervention: not applicable
Design: other
Sample size: 11
Population: autism spectrum disorder
Finding: mixed

03 Original abstract

Therapists of children with autism use a variety of methods for collecting data during discrete-trial teaching. Methods that provide greater precision (e.g., recording the prompt level needed on each instructional trial) are less practical than methods with less precision (e.g., recording the presence or absence of a correct response on the first trial only). However, few studies have compared these methods to determine if less labor-intensive systems would be adequate to make accurate decisions about child progress. In this study, precise data collected by therapists who taught skills to 11 children with autism were reanalyzed several different ways. For most of the children and targeted skills, data collected on just the first trial of each instructional session provided a rough estimate of performance across all instructional trials of the session. However, the first-trial data frequently led to premature indications of skill mastery and were relatively insensitive to initial changes in performance. The sensitivity of these data was improved when the therapist also recorded the prompt level needed to evoke a correct response. Data collected on a larger subset of trials during an instruction session corresponded fairly well with data collected on every trial and revealed similar changes in performance.

Behavior analysis in practice, 2011 · doi:10.1007/BF03391775