On the Evidence for Interactive Effects During and Following Synthesized Contingency Assessments
Synthesized contingency assessments rarely show true interactive reinforcement—test individual contingencies before assuming synergy.
01 Research in Context
What this study did
McCabe et al. (2025) looked at three past studies that used synthesized contingency assessments.
These assessments mix two or more reinforcers at once to see if they work better together.
The team asked a simple question: do the reinforcers truly boost each other, or do they just add up?
What they found
Only a small share of cases showed a real interactive effect.
In most cases, the combined reinforcers worked no better than each one alone.
The idea that synergy is common in these tests was not supported.
How this fits with other research
Logue et al. (1986) first urged analysts to hunt for natural contingencies that keep behavior strong after treatment ends. Their call set the stage for later combo tests, but they never promised synergy would appear.
Huntington et al. (2023) also used a systematic review to audit how we measure things. They counted social-validity reports, while McCabe counted synergy reports. Both papers show gaps in what we assume our assessments tell us.
Cividini-Motta et al. (2024) found that tweaking schedule parameters helps skill acquisition. Their focus on single, clear contingencies lines up with McCabe’s warning: test each reinforcer alone before you blend them.
Why it matters
If you run a synthesized functional analysis (FA) and see a jump in problem behavior, do not assume the reinforcers are working as a team. Test each one separately first. This extra step can save you from picking a bulky, unnecessary intervention package. Keep it simple until the data prove synergy.
Run a brief alone condition for each suspected reinforcer before you bundle them in your FA.
02 At a glance
03 Original abstract
Synthesized contingency assessments often arrange multiple stimulus changes (e.g., terminating instructions and providing interactive toy play) to follow problem behavior and to occur response independently across test and control conditions, respectively. A central premise of this approach to functional behavior assessment is that individual contingencies interact when delivered together, producing a reinforcing effect greater than the sum of its parts (i.e., the reinforcing effects of the individual contingencies programmed). Across three studies, we evaluated how often within‐participant evaluations from the published literature are consistent with this assumption during (Studies 1 and 2) and following (Study 3) the assessment process. Our results suggest that although such interaction can occur, it appears to do so only in a minority of cases. Implications of these findings for practice are discussed.
Behavioral Interventions, 2025 · doi:10.1002/bin.2074