An implementation of punishment in the evolutionary theory of behavior dynamics
Punishment can be modeled as gene-like mutation, and newer equations now make that insight plug-and-play.
01 Research in Context
What this study did
McDowell et al. (2019) built tiny computer organisms. Each organism carried genes that could mutate. The team added punishment to the mix. They watched how the genes reshaped the organisms' choices. The model ran for thousands of generations. It had to match three classic lab findings: punishing one option suppresses it and boosts the other; equal punishment on both options sharpens preference; and punishment proportional to reinforcement leaves preference unchanged.
What they found
The digital critters reproduced real punishment curves. Their choices shifted just like pigeons in classic operant chambers. The key was treating punishment as context-sensitive mutation: a punished behavior was more likely to mutate when reinforcement was lean than when it was rich. Bad outcomes reshaped the gene pool, not just the next peck.
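That lean-versus-rich mutation rule can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the population size, behavior encoding, and mutation probabilities below are all made up for demonstration.

```python
import random

# Illustrative sketch: each "organism" is a population of integer-coded
# behaviors. Punishment mutates behaviors, and does so with a higher
# probability in a lean reinforcement context than in a rich one.
# All constants here are invented for the example.

POP_SIZE = 100          # behaviors per organism (illustrative)
RICH_MUTATION_P = 0.1   # mutation probability after punishment, rich context
LEAN_MUTATION_P = 0.5   # mutation probability after punishment, lean context

def punish(population, context):
    """Mutate punished behaviors; lean contexts mutate more often."""
    p = LEAN_MUTATION_P if context == "lean" else RICH_MUTATION_P
    return [random.randrange(1024) if random.random() < p else b
            for b in population]

random.seed(1)
pop = [random.randrange(1024) for _ in range(POP_SIZE)]
lean = punish(pop, "lean")
rich = punish(pop, "rich")

def changed(new):
    """Count behaviors that mutated relative to the original population."""
    return sum(a != b for a, b in zip(pop, new))

# The lean context churns far more of the gene pool than the rich one.
print(changed(lean), changed(rich))
```

The point of the sketch: the same punisher does more genetic damage when reinforcement is scarce, which is how the model produces context-dependent suppression.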
How this fits with other research
Klapes et al. (2025) now give you a ready-made equation. Their cGML model beats five rivals on 30 human data sets. You can skip the evolution talk and still fit punishment curves with 99% support.
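Klapes et al.'s cGML builds on the generalized matching law, log(B1/B2) = a·log(R1/R2) + log b, which fits as a one-line log-log regression. The sketch below fits the classic law, not the cGML itself, and the data are invented purely to show the shape of the workflow.

```python
import numpy as np

# Generalized matching law: log10(B1/B2) = a * log10(R1/R2) + log10(b),
# where a is sensitivity and b is bias. The cGML adds punishment terms;
# the numbers below are made up for demonstration.

R1 = np.array([10., 20., 40., 80.])   # reinforcers/hr, alternative 1 (made up)
R2 = np.array([80., 40., 20., 10.])   # reinforcers/hr, alternative 2 (made up)
B1 = np.array([12., 30., 70., 150.])  # responses, alternative 1 (made up)
B2 = np.array([140., 65., 28., 14.])  # responses, alternative 2 (made up)

x = np.log10(R1 / R2)
y = np.log10(B1 / B2)
a, log_b = np.polyfit(x, y, 1)   # slope = sensitivity a, intercept = log10(b)
print(f"sensitivity a = {a:.2f}, bias log b = {log_b:.2f}")
```

A sensitivity near 1 means choice tracks relative reinforcement almost perfectly; punishment terms in the cGML adjust those effective reinforcement rates before the same kind of fit.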
Hagopian et al. (2023) took the same ETBD (evolutionary theory of behavior dynamics) engine and ran it on head-banging and escape cases. The same code that played pigeon games also forecast clinical treatment success.
Higginbotham et al. (2025) used the engine for delay discounting. One framework now handles both punishment and impulsive choices, showing the model keeps stretching.
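Delay discounting is usually summarized by the hyperbolic form V = A / (1 + kD): a reward's subjective value shrinks with delay. A tiny sketch of that formula follows; the k value is illustrative, not a parameter from Higginbotham et al.

```python
# Hyperbolic delay discounting: V = A / (1 + k*D).
# A = amount, D = delay, k = discount rate (k below is illustrative).

def discounted_value(amount, delay, k=0.05):
    """Subjective value of `amount` received after `delay` time units."""
    return amount / (1 + k * delay)

# With enough delay, a smaller-sooner reward beats a larger-later one:
print(discounted_value(100, 30))  # delayed $100 is worth 40.0
print(discounted_value(50, 0))    # immediate $50 is worth 50.0
```

That preference reversal, choosing $50 now over $100 later, is the "impulsive choice" the unified framework has to reproduce alongside punishment effects.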
Why it matters
You now have layers. Use the 2025 cGML spreadsheet for quick data fitting. Use the 2019 evolutionary story when you need to explain why punishment side-effects linger. Both come from the same roots, so you can switch between fast math and deep theory without contradiction.
02 Original abstract
An implementation of punishment in the evolutionary theory of behavior dynamics is proposed, and is applied to responding on concurrent schedules of reinforcement with superimposed punishment. In this implementation, punishment causes behaviors to mutate, and to do so with a higher probability in a lean reinforcement context than in a rich one. Computational experiments were conducted in an attempt to replicate three findings from experiments with live organisms. These are (1) when punishment is superimposed on one component of a concurrent schedule, response rate decreases in the punished component and increases in the unpunished component, (2) when punishment is superimposed on both components at equal scheduled rates, preference increases over its no-punishment baseline, and (3) when punishment is superimposed on both components at rates that are proportional to the scheduled rates of reinforcement, preference remains unchanged from the baseline preference. Artificial organisms animated by the theory, and working on concurrent schedules with superimposed punishment, reproduced all of these findings. Given this outcome, it may be possible to discover a steady-state mathematical description of punished choice in live organisms by studying the punished choice behavior of artificial organisms animated by the evolutionary theory.
Journal of the Experimental Analysis of Behavior, 2019 · doi:10.1002/jeab.543