Assessment & Research

Mastery Criteria and Maintenance: a Descriptive Analysis of Applied Research Procedures

McDougale et al. (2020) · Behavior Analysis in Practice
★ The Verdict

Use 90-100% accuracy over fewer trials, and always run a maintenance probe the following week.

✓ Read this if you're a BCBA writing skill-acquisition programs in clinics or schools.

✗ Skip if you're a researcher focused only on behavior reduction or functional analysis.

01 Research in Context

01

What this study did

The team read 167 behavior-analytic studies that taught new skills.

They wrote down how each study said a learner had "mastered" the skill.

They also checked if the study later tested whether the skill stuck around.

02

What they found

Most studies set modest mastery bars: lower accuracy levels held over many trials.

But the few that set the bar higher (90-100% correct over fewer trials) saw skills last longer.

One in three studies never told readers what mastery meant.

Half never checked if the skill was still there weeks later.

03

How this fits with other research

Schaaf et al. (2015) also found fuzzy criteria at work: they showed that changing DSM autism cut-offs can drop diagnoses by 25% or more. Both papers carry the same warning: unclear criteria change outcomes.

Sanders (2009) argued that tiny label shifts create big real-world gaps. McDougale et al. (2020) show the same happens in mastery rules—small changes in accuracy or trial count decide if a child keeps the skill.

Pathak et al. (2019) found that high-IQ autistic kids often look fine on paper but struggle in daily life. Together, these studies push you to measure what matters and spell out exactly how you decide success.

04

Why it matters

Raise your mastery bar to 90-100% over 2-3 short blocks. Then probe the skill the following week. Clear rules and quick checks help kids keep what they learn—and help you prove it.

→ Action — try this Monday

Pick one current program, bump the mastery criterion to 90% or higher over two consecutive sets of five trials, and schedule a cold probe seven days later.
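As a minimal sketch, the Monday action above can be expressed as a simple data check. The threshold, block size, and function name here are illustrative assumptions for this example, not values prescribed by McDougale et al. (2020):

```python
# Hypothetical mastery check: has the learner hit >= 90% correct
# across two consecutive blocks of trials? Defaults are illustrative.
def met_mastery(blocks, threshold=0.9, consecutive=2):
    """blocks: list of trial blocks, each a list of outcomes (True = correct)."""
    streak = 0
    for block in blocks:
        accuracy = sum(block) / len(block)
        # Count consecutive blocks at or above the threshold.
        streak = streak + 1 if accuracy >= threshold else 0
        if streak >= consecutive:
            return True
    return False

# Example session data: 4/5 (80%), then 5/5, then 5/5.
sessions = [
    [True, True, True, True, False],
    [True, True, True, True, True],
    [True, True, True, True, True],
]
print(met_mastery(sessions))  # True: mastery met on the third block
```

Logging trial data in this shape also makes the follow-up cold probe easy to score against the same threshold a week later.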

02 At a glance

Intervention: not applicable
Design: other
Finding: not reported

03 Original abstract

Behavioral practitioners and researchers often define skill acquisition in terms of meeting specific mastery criteria. Only 2 studies have systematically evaluated the impact of any dimension of mastery criteria on skill maintenance. Recent survey data indicate practitioners often adopt lower criterion levels than are found to reliably produce maintenance. Data regarding the mastery criteria adopted by applied researchers are not currently available. This study provides a descriptive comparison of mastery criteria reported in behavior-analytic research with that utilized by practitioners. Results indicate researchers are more likely to adopt higher levels of accuracy across fewer observations, whereas practitioners are more likely to adopt lower levels of accuracy across more observations. Surprisingly, a large amount of research (a) lacks technological descriptions of the mastery criterion adopted and (b) does not include assessments of maintenance following acquisition. We discuss implications for interpretations within and across research studies.

Behavior Analysis in Practice, 2020 · doi:10.1007/s40617-019-00365-2