ABA Fundamentals

A computational model of selection by consequences.

McDowell (2004) · Journal of the Experimental Analysis of Behavior
★ The Verdict

Reinforcement works like natural selection inside a computer, yielding the same matching law we see in living organisms.

✓ Read this if: you're a BCBA who teaches the matching law or needs a fresh metaphor for caregivers.
✗ Skip if: you're a clinician looking for direct treatment protocols; this is pure theory.

01 Research in Context

01

What this study did

The team built a digital organism inside a computer. The program let tiny agents reproduce, mutate, and die based on how many reinforcers they earned on random-interval (RI) schedules. No animals or people were used—just code that lived or died by its own choices.
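The mechanics can be sketched in a few dozen lines. What follows is a minimal illustrative toy, not McDowell's actual parameterization: it assumes behaviors are plain integers, a "target class" of values counts as the measured response, reinforcement "sets up" with a fixed per-tick probability (a crude RI schedule), and a reinforced emission triggers fitness-weighted reproduction with mutation. All names and parameter values here are invented for illustration.

```python
import random

POP_SIZE = 100
GENOME_MAX = 1023        # behaviors are integers 0..1023 (toy assumption)
TARGET = range(0, 41)    # values counted as the target response class
MUTATION_RATE = 0.1

def reproduce(pop, emitted):
    """Fitness-proportional reproduction: behaviors closer to the
    reinforced emission are likelier parents (selection by consequences)."""
    weights = [1.0 / (1 + abs(b - emitted)) for b in pop]
    parents = random.choices(pop, weights=weights, k=POP_SIZE * 2)
    children = []
    for i in range(POP_SIZE):
        p1, p2 = parents[2 * i], parents[2 * i + 1]
        child = (p1 + p2) // 2              # crude "recombination": midpoint
        if random.random() < MUTATION_RATE:
            child = random.randrange(GENOME_MAX + 1)
        children.append(child)
    return children

def run(ticks=5000, ri_prob=0.05):
    """Crude RI-like schedule: reinforcement sets up with probability
    ri_prob per tick and is collected by the next target response."""
    pop = [random.randrange(GENOME_MAX + 1) for _ in range(POP_SIZE)]
    reinforcer_ready = False
    responses = reinforcers = 0
    for _ in range(ticks):
        if random.random() < ri_prob:
            reinforcer_ready = True
        b = random.choice(pop)              # emit one behavior
        if b in TARGET:
            responses += 1
            if reinforcer_ready:
                reinforcers += 1
                reinforcer_ready = False
                pop = reproduce(pop, b)     # consequences reshape the repertoire
        elif random.random() < MUTATION_RATE:
            pop[random.randrange(POP_SIZE)] = random.randrange(GENOME_MAX + 1)
    return responses, reinforcers, pop
```

Even in this stripped-down form, the repertoire drifts toward the reinforced class without any rule or representation inside the "organism" — the population composition simply changes because reinforced variants out-reproduce the rest.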

02

What they found

After many generations, the virtual creatures produced the same hyperbolic matching curve seen in real pigeons, rats, and humans. Selection, reproduction, and mutation alone were enough to create the matching law without any mental math or rules.
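The hyperbolic curve in question is Herrnstein's equation, R = k·r / (r + r_e), where R is response rate, r is obtained reinforcement rate, k is the asymptotic response rate, and r_e is background reinforcement. A quick sketch shows its signature shape — steep gains at low reinforcement rates, then saturation (parameter values below are illustrative, not from the paper):

```python
def herrnstein(r, k=100.0, r_e=20.0):
    """Herrnstein's hyperbola: predicted response rate R for reinforcement
    rate r. k and r_e values are illustrative, not fitted to any data."""
    return k * r / (r + r_e)

# Quadrupling reinforcement yields diminishing returns as R approaches k:
for r in (5, 20, 80, 320):
    print(round(herrnstein(r), 1))   # → 20.0, 50.0, 80.0, 94.1
```

Because the simulated data were fit better by this hyperbola than by similar competing function forms, the result is a strong hint that matching is an emergent property of selection dynamics rather than a computation the organism performs.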

03

How this fits with other research

Kulubekova et al. (2013) later showed that the same digital organism also replicates preference shifts on concurrent schedules, extending the 2004 RI result to choice situations. Rojahn et al. (2012) added reinforcement magnitude as a second variable and still obtained matching, showing the model is robust. Baum (2017) formalized the idea mathematically by restating selection by consequences in terms of the Price equation. Together, these papers turn a single simulation into a growing family of models that predict live-organism data.

04

Why it matters

You can now explain the matching law to teachers or parents with a simple story: behaviors that produce more reinforcers multiply; the rest fade out. When you see a client dividing time between two tasks, picture tiny virtual variants competing—reinforcement is the habitat that lets some survive. Use this framing to clarify why increasing reinforcement for target responses naturally crowds out problem behavior.

→ Action — try this Monday

Tell your team: 'Behaviors are like digital organisms—only the ones that get fed survive; let's feed the skills we want to see more of.'

02 At a glance

Intervention
Not applicable
Design
Theoretical / computational simulation
Finding
Hyperbolic (matching-law) relation between response and reinforcement rates

03 Original abstract

Darwinian selection by consequences was instantiated in a computational model that consisted of a repertoire of behaviors undergoing selection, reproduction, and mutation over many generations. The model in effect created a digital organism that emitted behavior continuously. The behavior of this digital organism was studied in three series of computational experiments that arranged reinforcement according to random-interval (RI) schedules. The quantitative features of the model were varied over wide ranges in these experiments, and many of the qualitative features of the model also were varied. The digital organism consistently showed a hyperbolic relation between response and reinforcement rates, and this hyperbolic description of the data was consistently better than the description provided by other, similar, function forms. In addition, the parameters of the hyperbola varied systematically with the quantitative, and some of the qualitative, properties of the model in ways that were consistent with findings from biological organisms. These results suggest that the material events responsible for an organism's responding on RI schedules are computationally equivalent to Darwinian selection by consequences. They also suggest that the computational model developed here is worth pursuing further as a possible dynamic account of behavior.

Journal of the Experimental Analysis of Behavior, 2004 · doi:10.1901/jeab.2004.81-297