ABA Fundamentals

Computational model of selection by consequences: patterns of preference change on concurrent schedules.

Kulubekova et al. (2013) · Journal of the Experimental Analysis of Behavior, 2013
★ The Verdict

A computer organism that keeps successful responses and drops the rest reproduces the preference shifts seen in live animals, giving you a cheap test-bed for schedule changes.

✓ Read this if you're a BCBA who tweaks concurrent reinforcement schedules in classrooms or clinics.
✗ Skip if you're a clinician looking for ready-made client programs; this is a lab model, not a treatment manual.

01 Research in Context

01

What this study did

Kulubekova and colleagues built a computer 'organism' that picks one of two levers.

Each lever pays off on its own timer, like two vending machines with different restock speeds.

When the payoff rates change, the program 'mutates' and keeps the moves that earn more points.

The team asked: will this digital creature shift its choices the same way real animals do?
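The setup above can be sketched in a few lines of Python. This is a minimal toy version, not the authors' actual genetic algorithm: a "population" of candidate behaviors (0 = left lever, 1 = right lever) is sampled on each step, and a reinforced behavior seeds most of the next population (selection) while a small fraction flips levers (mutation). Probabilistic payoffs stand in for the paper's variable-interval timers, and the function name and parameter values are assumptions for illustration.

```python
import random

def run_selectionist_agent(rates=(0.10, 0.05), steps=20000, pop=80,
                           mutation=0.05, seed=0):
    """Toy sketch of selection by consequences (not the published model).
    rates: probability each lever pays off when pressed.
    Returns total presses on each lever."""
    rng = random.Random(seed)
    population = [rng.randint(0, 1) for _ in range(pop)]  # start undecided
    counts = [0, 0]
    for _ in range(steps):
        choice = rng.choice(population)        # emit one behavior at random
        counts[choice] += 1
        if rng.random() < rates[choice]:       # reinforcer delivered?
            # selection: the reinforced behavior parents the next population,
            # with a small mutation rate keeping variation alive
            population = [choice if rng.random() > mutation else 1 - choice
                          for _ in range(pop)]
    return counts

# Example: lever 0 pays twice as often as lever 1
presses = run_selectionist_agent(rates=(0.10, 0.05))
```

With the rates above, the richer lever ends up collecting roughly twice the presses over a long run, which is the qualitative pattern the paper tests quantitatively.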

02

What they found

Yes. The virtual player moved its time and responses to match the new payoff ratio.

The size and speed of the shift looked almost identical to pigeon and rat data.

A simple rule—keep what works, drop what doesn’t—was enough to copy the matching law.

03

How this fits with other research

McDowell (2004) ran the first test like this on single schedules; Kulubekova et al. (2013) widened it to two-lever choice.

Rojahn et al. (2012) later added reward size as well as rate; together the three papers show the same Darwin-like engine handles richer tasks.

Avellaneda et al. (2025) went back to live rats and found the sensitivity number itself drifts when overall payoff rises.

Their update does not kill the 2013 model; it just lets the ‘keenness’ dial move, giving you a smoother curve for your Excel sheet.
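Letting the "keenness" dial move can be sketched as a sensitivity parameter that depends on overall reinforcement rate. The linear form and every number below are placeholder assumptions for illustration, not Avellaneda et al.'s fitted equation:

```python
def drifting_sensitivity(total_rate, base=0.8, slope=0.002):
    """Hypothetical drift: sensitivity rises with overall payoff rate.
    base and slope are illustrative placeholders, not published fits."""
    return base + slope * total_rate

def response_ratio(r1, r2, bias=1.0):
    """Generalized matching law with a sensitivity that moves as the
    combined reinforcement rate (r1 + r2) changes."""
    a = drifting_sensitivity(r1 + r2)
    return bias * (r1 / r2) ** a

# Same 2:1 payoff ratio, but a richer overall environment steepens preference:
lean = response_ratio(40, 20)   # total rate 60
rich = response_ratio(80, 40)   # total rate 120
```

The practical upshot: in a richer overall environment, the same payoff ratio predicts a sharper preference than a fixed-sensitivity fit would.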

04

Why it matters

If a three-line code rule can mimic your client’s choice patterns, you can test schedule tweaks on a laptop before touching the classroom.

Try running a quick simulation when you plan to fade richer to leaner reinforcement. If the virtual kid stalls, your real kid probably will too—so adjust the steps before session one.
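A fading plan like that can be previewed with the matching law directly. The sketch below is a back-of-the-envelope tool, not a clinical instrument: it takes a list of planned (target, alternative) reinforcer rates and returns the predicted share of responding on the target at each step. The function name is made up here, and the sensitivity and bias defaults are placeholders you would fit to your own baseline data first.

```python
def preview_fade(schedule_steps, sensitivity=0.9, bias=1.0):
    """For each planned fading step (r_target, r_other reinforcers per
    hour), predict the proportion of responding allocated to the target
    alternative using the generalized matching law."""
    shares = []
    for r_target, r_other in schedule_steps:
        ratio = bias * (r_target / r_other) ** sensitivity
        shares.append(ratio / (1 + ratio))  # convert ratio to a proportion
    return shares

# Fading the target from rich (60/hr) to lean (15/hr) against a constant
# 20/hr competing alternative:
shares = preview_fade([(60, 20), (40, 20), (25, 20), (15, 20)])
```

If the predicted share drops below 0.5 at some step, the competing alternative is expected to win; that is the step to split into smaller moves before session one.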

→ Action — try this Monday

Open the free virtual-organism spreadsheet, plug in your current two-lever schedule values, and preview how long the shift to the new ratio should take before you run it with your learner.

02 At a glance

Intervention · not applicable
Design · other
Finding · positive

03 Original abstract

The computational model of selection by consequences is an ontogenetic dynamic account of adaptive behavior based on the Darwinian principle of selection by consequences. The model is a virtual organism based on a genetic algorithm, a class of computational algorithms that instantiate the principles of selection, fitness, reproduction and mutation. The computational model has been thoroughly tested in experiments with a variety of single alternative and concurrent schedules. A number of published reports demonstrate that the model generates patterns of behavior that are quantitatively equivalent to the findings from live organisms. The experiments and analyses in this study assess the behavior of the computational model for evidence of preference change phenomena in environments with rapidly changing reinforcement rate ratios. Molar and molecular effects of behavioral adjustment were consistent with those observed in live organisms. The results of this study provide strong evidence supporting the selectionist account of adaptive behavior.

Journal of the Experimental Analysis of Behavior, 2013 · doi:10.1002/jeab.40