Assessment & Research

Effects of data sampling on graphical depictions of learning.

Carey et al. (2014) · Journal of Applied Behavior Analysis
★ The Verdict

Sampling only the first few trials or every nth session gives misleading mastery timelines; collect every trial if you need accurate visual decisions.

✓ Read this if you are a BCBA who graphs acquisition programs in clinics or schools.
✗ Skip if you only collect fidelity checklists, not learning graphs.

01 Research in Context

01

What this study did

Carey et al. (2014) reanalyzed archival discrete-trial data sets, thinning them in two ways: trial sampling (keeping only the first 5, first 3, or first trial of each session) and session sampling (keeping every other, every 3rd, or every 5th session). They then compared when mastery appeared to occur on the thinned graphs versus the full records.

The team also measured how much data-collection time each sampling method would save, to see whether less data could still support the same visual decisions.

02

What they found

Short samples lied. Trial sampling (e.g., first-three-trial slices) made mastery look faster than it really was, while session sampling (e.g., every-third-session slices) made it look slower. Both also distorted when the first independent response appeared.

The time saved was tiny, only minutes. The wrong decisions were big.
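Both biases can be reproduced in a toy simulation. This sketch uses synthetic data, not the study's archival data sets; the learning curve, session count, and mastery criterion are illustrative assumptions. The intuition: an all-correct run on just 3 trials happens by chance earlier than 9-of-10 correct, so trial sampling flags mastery early, while skipping sessions delays the point at which mastery can even be detected.

```python
import random

def simulate_session(p, n_trials=10, rng=None):
    # One session of discrete trials; each trial correct with probability p.
    return [1 if rng.random() < p else 0 for _ in range(n_trials)]

def sessions_to_mastery(sessions, criterion=0.9, consecutive=2, step=1):
    """1-indexed session at which mastery (>=criterion on `consecutive`
    checked sessions in a row) is first met, checking every `step`-th
    session only. Returns the record length if never met."""
    streak = 0
    for i in range(0, len(sessions), step):
        trials = sessions[i]
        if sum(trials) / len(trials) >= criterion:
            streak += 1
            if streak == consecutive:
                return i + 1
        else:
            streak = 0
    return len(sessions)

rng = random.Random(0)
full, first3, every3rd = [], [], []
for _ in range(500):
    # Illustrative learner: accuracy climbs from 20% toward 95% over 30 sessions.
    data = [simulate_session(min(0.95, 0.2 + 0.05 * s), rng=rng) for s in range(30)]
    full.append(sessions_to_mastery(data))                          # every trial
    first3.append(sessions_to_mastery([sess[:3] for sess in data])) # trial sampling
    every3rd.append(sessions_to_mastery(data, step=3))              # session sampling

mean = lambda xs: sum(xs) / len(xs)
print(f"full record:       {mean(full):.1f} sessions to mastery")
print(f"first-3 trials:    {mean(first3):.1f} (looks faster)")
print(f"every 3rd session: {mean(every3rd):.1f} (looks slower)")
```

Averaged over many simulated learners, the first-3-trials records reach the mastery criterion earlier than the full records, and the every-3rd-session records reach it later, matching the direction of bias the study reports.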

03

How this fits with other research

Cook et al. (2020) later showed the same risk with momentary time sampling, recommending periodic spot-checks against continuous duration recording so that measurement drift does not go unnoticed.

Suhrheinrich et al. (2020) seems to disagree: they found a 3-point fidelity checklist matched trial-by-trial coding and saved hours. The difference is purpose. Carey et al. tested learning graphs for mastery timing; Suhrheinrich et al. tested fidelity scores for staff feedback. Different jobs, different rules.

Jessel et al. (2020) echoed the warning. Truncating a 10-min IISCA to 3-5 min weakened experimental control. Less data, weaker truth.

04

Why it matters

If you graph progress for insurance or parents, sample every trial. A skipped point can hide a dip or spike and push you to move on too soon—or stay too long. The few minutes you save are not worth the wrong call.

→ Action — try this Monday

Plot every trial from last Friday’s session before you decide to bump the criterion.

02 At a glance

Intervention: not applicable
Design: other
Population: not specified
Finding: negative

03 Original abstract

Continuous and discontinuous data-collection methods were compared in the context of discrete-trial programming. Archival data sets were analyzed using trial sampling (1st 5 trials, 1st 3 trials, and 1st trial only) and session sampling (every other session, every 3rd session, and every 5th session). Results showed that trial sampling systematically underestimated the number of sessions and days to mastery and overestimated the number of sessions and days to the 1st independent response. Session sampling systematically overestimated both sessions and days to mastery and sessions and days to the 1st independent response. A time-savings analysis was included to evaluate empirically how much time would be saved by using each sampling method. Results suggested that data sampling would produce relatively minimal time savings.

Journal of Applied Behavior Analysis, 2014 · doi:10.1002/jaba.153