Assessment & Research

Modeling Modeling

Killeen (1999) · Journal of the Experimental Analysis of Behavior
★ The Verdict

Pick the simplest model that still explains the behavior—extra features rarely pay for themselves.

✓ Read this if you're a BCBA who builds data sheets, trains staff, or reviews assessment tools.

✗ Skip if you're a clinician who only uses publisher-made protocols and never touches the back-end data.

01 Research in Context

01

What this study did

Killeen (1999) wrote a short, sharp essay. It tells us to stop asking, “Does my model have the newest bells and whistles?”

Instead, ask, “Does the model explain a lot while staying simple?” Count the benefit, then count the cost.

The paper uses no data. It is a call to think before you add another variable to your spreadsheet.
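Killeen's criterion can be sketched as a rough ratio. The additive form below is my own shorthand; the abstract only says the numerator and denominator are each "a function of" these ingredients:

```latex
\text{model value} \;\approx\;
\frac{\text{range explained} + \text{goodness of fit} + \text{consistency with nearby models} + \text{beauty}}
     {\text{complexity} + \text{phenomena ignored} + \text{effort to adopt}}
```

A model earns its keep only when the top of the ratio grows faster than the bottom.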

02

What they found

Killeen reports no new numbers. He offers a rule: pick the cheapest model that still tells the story.

A plain ABC log that predicts 80% of outbursts beats a 50-column monster that predicts 82%.
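To make that trade-off concrete, here is a toy score in Python. The 0.01-per-column penalty and the column counts are my own illustrative assumptions, not figures from the paper:

```python
# Toy benefit-cost comparison between two hypothetical trackers.
# Accuracies (80% vs 82%) come from the example above; the per-column
# cost is an arbitrary stand-in for staff time, training, and error risk.

def benefit_cost_score(accuracy, n_columns, cost_per_column=0.01):
    """Predictive benefit minus a simple complexity penalty."""
    return accuracy - cost_per_column * n_columns

plain_abc = benefit_cost_score(accuracy=0.80, n_columns=4)
monster = benefit_cost_score(accuracy=0.82, n_columns=50)

print(f"plain ABC log:   {plain_abc:.2f}")  # 0.80 - 0.04 = 0.76
print(f"50-column sheet: {monster:.2f}")    # 0.82 - 0.50 = 0.32
print("winner:", "plain ABC" if plain_abc > monster else "monster")
```

With any nonzero cost per column, two extra points of accuracy cannot justify 46 extra columns.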

03

How this fits with other research

Singh et al. (1993) said the same thing six years earlier. They told us to drop long personality tests and watch real behavior. Killeen (1999) widens the lens to any model you might use.

Christopher et al. (1991) warned that social-validity surveys became empty rituals. Killeen (1999) agrees: if the survey adds pages but no insight, cut it.

Anonymous (2018) built a tidy FBA ontology years later. The ontology keeps only the fields that matter. That is the benefit-cost idea in code.

Dougherty et al. (1994) ran separate FAs for each behavior shape. Extra sessions cost time, but clearer data save weeks of wrong treatment. Again, benefit beats cost.

04

Why it matters

Next time you open a data sheet, count the columns. If one extra column only adds a sliver of insight, delete it. Your future self—and every RBT who has to fill the sheet—will thank you. Simple models make fast decisions, and fast decisions help clients sooner.

→ Action — try this Monday

Open your current ABC tracker, hide one column you rarely use, and see if the story stays clear.
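A minimal sketch of that Monday check, assuming your tracker lives in a list of dicts. All column names and records here are invented; the "model" is a leave-one-out majority vote, scored with and without the extra columns:

```python
from collections import Counter

# Invented ABC records for illustration only.
records = [
    {"antecedent": "demand", "setting": "class", "weather": "sunny", "behavior": "outburst"},
    {"antecedent": "demand", "setting": "class", "weather": "rain", "behavior": "outburst"},
    {"antecedent": "praise", "setting": "class", "weather": "sunny", "behavior": "on-task"},
    {"antecedent": "praise", "setting": "home", "weather": "rain", "behavior": "on-task"},
]

def accuracy(columns):
    """Leave-one-out accuracy of a majority-vote predictor using `columns`."""
    hits = 0
    for i, row in enumerate(records):
        key = tuple(row[c] for c in columns)
        votes = Counter(
            r["behavior"] for j, r in enumerate(records)
            if j != i and tuple(r[c] for c in columns) == key
        )
        if votes and votes.most_common(1)[0][0] == row["behavior"]:
            hits += 1
    return hits / len(records)

full = accuracy(["antecedent", "setting", "weather"])
trimmed = accuracy(["antecedent"])  # "setting" and "weather" hidden
print(f"all columns: {full:.2f}, antecedent only: {trimmed:.2f}")
# → all columns: 0.00, antecedent only: 1.00
```

In this toy data the extra columns make every record unique, so the fuller model has nothing to generalize from. If hiding a column leaves the story just as clear, the column was cost without benefit.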

02 At a glance

Intervention: not applicable
Design: theoretical
Finding: not reported

03 Original abstract

Models are tools; they need to fit both the hand and the task. Presence or absence of a feature such as a pacemaker or a cascade is not in itself good or bad. Criteria for model evaluation involve benefit-cost ratios, with the numerator a function of the range of phenomena explained, goodness of fit, consistency with other nearby models, and intangibles such as beauty. The denominator is a function of complexity, the number of phenomena that must be ignored, and the effort necessary to incorporate the model into one's parlance. Neither part of the ratio can yet be evaluated for MTS, whose authors provide some cogent challenges to SET.

Journal of the Experimental Analysis of Behavior, 1999 · doi:10.1901/jeab.1999.71-275