A contemporary quantitative model for continuous choice under reinforcing and punishing contingencies
Use the concatenated GML (cGML) model when you need to quantify how punishment shifts choice; it fits human data far better than older additive or subtractive forms.
01 Research in Context
What this study did
Klapes et al. (2025) built new math models for how people choose when both rewards and penalties are in play.
They tested five new models, plus the standard generalized matching law (GML), against 30 sets of human choice data. The winner is built on the concatenated GML (cGML); the key equations are sketched below.
Adults in a lab picked between two buttons. One side paid more but also took money away if they picked it too much.
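For reference, here are the classic equations these models build on, plus a hedged sketch of a cGML-style punishment equation (B_i = responses, R_i = reinforcer rates, P_i = punisher rates; the paper's exact parameterizations may differ):

```latex
% Generalized matching law (Baum, 1974):
\log\frac{B_1}{B_2} = a\,\log\frac{R_1}{R_2} + \log b

% Deluty-style additive punishment (classic form): punishers for one
% alternative act like reinforcers for the other.
\frac{B_1}{B_2} = \frac{R_1 + P_2}{R_2 + P_1}

% de Villiers-style subtractive punishment (classic form): punishers
% subtract from same-side reinforcement.
\frac{B_1}{B_2} = \frac{R_1 - P_1}{R_2 - P_2}

% cGML-style sketch (an assumption, not necessarily the paper's exact
% form): a separately weighted punishment log-ratio term.
\log\frac{B_1}{B_2} = a_R\,\log\frac{R_1}{R_2} - a_P\,\log\frac{P_1}{P_2} + \log b
```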
What they found
The cGML-based model won by a mile. Its Akaike weight was 0.99, which means the model comparison put 99% of the evidence in the candidate set behind it.
Older additive or subtractive versions of the matching law scored far lower.
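That 0.99 comes from a standard calculation. AICc is AIC with a small-sample correction, and Akaike weights turn AICc differences into a probability-like share of evidence for each model:

```latex
% Small-sample corrected AIC for model i (k parameters, n data points):
\mathrm{AICc}_i = \mathrm{AIC}_i + \frac{2k(k+1)}{n-k-1}

% Akaike weight: the share of evidence model i gets within the set.
\Delta_i = \mathrm{AICc}_i - \min_j \mathrm{AICc}_j, \qquad
w_i = \frac{e^{-\Delta_i/2}}{\sum_j e^{-\Delta_j/2}}
```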
How this fits with other research
Ayvaci et al. (2024) reviewed 30 single-case studies and said punishment works best when you also front-load reinforcement. Klapes gives you the exact equation to measure that combo.
McDowell et al. (2019) modeled punishment as a mutation inside digital organisms. Klapes keeps the math but swaps fake creatures for real human data, so the numbers now match what you see on the clinic floor.
Ganz et al. (2009) used the plain matching law to show alcohol makes people ignore penalties. Klapes keeps the matching frame but adds a punishment term, so you can track the same behavior with a sharper lens.
Why it matters
If you run concurrent-operant assessments, plug your data into the cGML instead of the old GML. You will see how much each penalty, not just each reward, drives your client’s choices. That lets you set reinforcement rates and response-cost fines with real numbers instead of guesswork.
Open your last concurrent-operant Excel file, add a cGML column with the new punishment term, and compare the fit to your old GML line.
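If you would rather script the comparison than build it in Excel, here is a minimal Python sketch (the session data, column layout, and the cGML-style punishment term are illustrative assumptions, not the paper's code):

```python
import numpy as np

def fit_ols(X, y):
    """Ordinary least squares; returns coefficients and residual sum of squares."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(np.sum((y - X @ beta) ** 2))
    return beta, rss

def aicc(rss, n, k):
    """Small-sample AIC for a least-squares fit (k counts all estimated
    parameters, including the error variance)."""
    aic = n * np.log(rss / n) + 2 * k
    return aic + 2 * k * (k + 1) / (n - k - 1)

# Hypothetical session data: responses (B), reinforcer rates (R),
# and punisher rates (P) on each of two alternatives.
B1 = np.array([55, 40, 70, 25, 60, 35, 48, 52.0])
B2 = np.array([30, 45, 20, 50, 22, 40, 33, 28.0])
R1 = np.array([12, 8, 15, 5, 14, 7, 10, 11.0])
R2 = np.array([6, 10, 4, 12, 5, 9, 7, 6.0])
P1 = np.array([3, 1, 5, 1, 4, 2, 3, 2.0])
P2 = np.array([1, 2, 1, 3, 1, 2, 1, 1.0])

y = np.log(B1 / B2)          # behavior log ratio
x_r = np.log(R1 / R2)        # reinforcement log ratio
x_p = np.log(P1 / P2)        # punishment log ratio
ones = np.ones_like(y)       # intercept column (log b, bias)
n = len(y)

# GML:        log(B1/B2) = a*log(R1/R2) + log b            -> k = 3
# cGML-style: adds a separately weighted punishment ratio  -> k = 4
_, rss_gml = fit_ols(np.column_stack([x_r, ones]), y)
_, rss_cgml = fit_ols(np.column_stack([x_r, x_p, ones]), y)
scores = {"GML": aicc(rss_gml, n, 3), "cGML-style": aicc(rss_cgml, n, 4)}

# Akaike weights: relative evidence for each model in this set.
best = min(scores.values())
raw = {m: np.exp(-(s - best) / 2) for m, s in scores.items()}
total = sum(raw.values())
for m, s in scores.items():
    print(f"{m}: AICc = {s:.1f}, Akaike weight = {raw[m] / total:.2f}")
```

A lower AICc and a higher Akaike weight for the cGML-style column is the pattern the paper reports at the group level; your own client data may or may not show the same split.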
02 Original abstract
We developed five novel quantitative models of punishment based on the generalized matching law (GML). Two of the new models were based on Deluty's additive theory of punishment, two were based on de Villiers's subtractive theory of punishment, and the last was based on the concatenated GML (cGML). Using information criteria, we compared the descriptive accuracies of these models against each other and against the GML. To obtain a data set that fairly compared these complex models, we exposed 30 human participants to 36 concurrent random-interval random-interval reinforcement schedules via a recently developed rapid-acquisition operant procedure (procedure for rapidly establishing steady-state behavior). This experimental design allowed us to fit the models to 30 data sets ranging from 22 to 36 data points each, comparing the models' descriptive accuracy using Akaike information criteria, corrected for small samples (AICc). The punishment model based on the cGML had the lowest AICc value of the set, with an Akaike weight of 0.99. Thus, this cGML-based punishment model is presumed to be the best contemporary quantitative model of punishment. We discuss the theoretical strengths and weaknesses of these models and future directions of GML-based punishment model development.
Journal of the Experimental Analysis of Behavior, 2025 · doi:10.1002/jeab.70009