Selection dynamics in joint matching to rate and magnitude of reinforcement.
One equation predicts choice when both the rate and the size of the reinforcer change.
Research in Context
What this study did
The team built a digital organism. It lived inside a computer and could 'choose' between two levers.
Each lever paid off at a different rate and with a different amount of food. The program let the virtual creature's candidate behaviors reproduce when they earned food.
The study asked: will the virtual animal match its choices to the combined payoff the way real animals do?
What they found
The digital critter followed the matching law: it picked levers in the same ratio as the combined rate and size of payoff.
A single joint equation fit the data without residual trends. The same curve that describes live pigeons also describes the code.
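The joint equation fitted in the paper is an extension of the power-function (generalized) matching law with a reinforcer-magnitude term. A minimal sketch, using the mean exponents reported in the abstract (0.83 for rate, 0.68 for magnitude); the bias parameter `b = 1` and the helper name `behavior_ratio` are illustrative assumptions:

```python
# Extended power-function matching law:
#   B1/B2 = b * (r1/r2)**a_r * (m1/m2)**a_m
# Exponents are the mean values reported in the abstract;
# bias b = 1 is an assumption for illustration.

def behavior_ratio(r1, r2, m1, m2, a_r=0.83, a_m=0.68, b=1.0):
    """Predicted ratio of responding B1/B2 given reinforcement
    rates (r1, r2) and reinforcer magnitudes (m1, m2)."""
    return b * (r1 / r2) ** a_r * (m1 / m2) ** a_m

# Lever 1 pays twice as often but delivers half as much food:
# the predicted ratio comes out just above 1, a slight preference
# for lever 1 because undermatching is stronger for magnitude.
ratio = behavior_ratio(r1=2, r2=1, m1=1, m2=2)
```

Because both exponents are below 1, the predicted preference is always somewhat weaker than the raw payoff ratios would suggest (undermatching), which is what live animals show as well.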
How this fits with other research
McDowell (2004) built the first virtual organism that showed hyperbolic matching with rate only. The new study adds magnitude and still gets matching, so the model now covers richer payoffs.
Fantino (1969) saw matching break down when payoff rates were extreme. The 2012 model keeps matching stable because it blends rate and size into one value, smoothing out the extremes.
Hoch et al. (2007) applied matching equations to children with disabilities in classroom settings. The 2012 paper gives those practitioners confidence that the same math holds even when both the rate and the size of the reinforcer change at once.
Why it matters
You can trust the matching law when both rate and size of reinforcement vary. That happens daily in schools and clinics: small frequent praise versus big rare treats. The study says you do not need two separate rules; one joint equation predicts choice. Use it to set reinforcement schedules that keep behavior steady even when payoff size must change.
Plot your client's current rate and size of reinforcement for each option; check whether the behavior ratio matches the combined payoff ratio, and adjust one factor if it does not.
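The check suggested above can be sketched in a few lines. The helper name `matching_check`, the session numbers, and the 25% tolerance threshold are all illustrative assumptions, not values from the paper:

```python
# Compare an observed behavior ratio against the combined payoff ratio
# (rate ratio times magnitude ratio). Tolerance is an arbitrary
# illustrative threshold for "close enough to matching".

def matching_check(responses_a, responses_b, rate_a, rate_b,
                   mag_a, mag_b, tolerance=0.25):
    """Return (behavior_ratio, payoff_ratio, within_tolerance)."""
    behavior = responses_a / responses_b
    payoff = (rate_a / rate_b) * (mag_a / mag_b)
    within = abs(behavior - payoff) / payoff <= tolerance
    return behavior, payoff, within

# Example: option A earns praise 6 times/hour at size 1 unit,
# option B earns a bigger treat (2 units) 2 times/hour.
b, p, ok = matching_check(responses_a=45, responses_b=30,
                          rate_a=6, rate_b=2, mag_a=1, mag_b=2)
```

In this made-up example the behavior ratio (1.5) equals the combined payoff ratio (3 × 0.5 = 1.5), so the check passes; a mismatch would suggest adjusting either the rate or the size of one option.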
Original abstract
Virtual organisms animated by a selectionist theory of behavior dynamics worked on concurrent random interval schedules where both the rate and magnitude of reinforcement were varied. The selectionist theory consists of a set of simple rules of selection, recombination, and mutation that act on a population of potential behaviors by means of a genetic algorithm. An extension of the power function matching equation, which expresses behavior allocation as a joint function of exponentiated reinforcement rate and reinforcer magnitude ratios, was fitted to the virtual organisms' data, and over a range of moderate mutation rates was found to provide an excellent description of their behavior without residual trends. The mean exponents in this range of mutation rates were 0.83 for the reinforcement rate ratio and 0.68 for the reinforcer magnitude ratio, which are values that are comparable to those obtained in experiments with live organisms. These findings add to the evidence supporting the selectionist theory, which asserts that the world of behavior we observe and measure is created by evolutionary dynamics.
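The selectionist theory summarized in the abstract can be sketched as a small genetic algorithm acting on a population of potential behaviors. A toy version follows; the population size, bit length, mutation rate, and fitness rule are illustrative assumptions, not the paper's actual parameterization:

```python
import random

# Toy sketch of selection, recombination, and mutation acting on a
# population of potential behaviors, each coded as an 8-bit integer.
BITS = 8              # bits per behavior (values 0-255)
POP = 100             # population size (assumed)
MUTATION_RATE = 0.05  # per-bit flip probability (assumed)

def recombine(p1, p2):
    """Single-point crossover on the bit representations."""
    point = random.randint(1, BITS - 1)
    mask = (1 << point) - 1
    return (p1 & ~mask) | (p2 & mask)

def mutate(x):
    """Flip each bit independently with probability MUTATION_RATE."""
    for i in range(BITS):
        if random.random() < MUTATION_RATE:
            x ^= 1 << i
    return x

def step(population, lo, hi):
    """One generation: behaviors inside (or near) the reinforced
    region [lo, hi] are likelier to be chosen as parents."""
    def fitness(x):
        if lo <= x <= hi:
            return 1.0
        return 1.0 / (1.0 + min(abs(x - lo), abs(x - hi)))
    weights = [fitness(x) for x in population]
    parents = random.choices(population, weights=weights,
                             k=2 * len(population))
    return [mutate(recombine(parents[2 * i], parents[2 * i + 1]))
            for i in range(len(population))]

# Over repeated generations the population typically drifts into the
# reinforced region — the sense in which observed behavior is
# "created by evolutionary dynamics".
population = [random.randrange(2 ** BITS) for _ in range(POP)]
for _ in range(30):
    population = step(population, lo=100, hi=120)
```

Moderate mutation rates keep the population from collapsing onto a single behavior, which is the parameter range in which the abstract reports the best fits to the joint matching equation.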
Journal of the Experimental Analysis of Behavior, 2012 · doi:10.1901/jeab.2012.98-199