Resolving Barriers to an Applied Science of the Human Condition: Rule Governance and the Verbal Behavior of Applied Scientists.
Your favorite research rule might be blinding you—check it against the client’s data right now.
Research in Context
What this study did
Ivancic et al. (2019) looked at how behavior analysts talk about rules. They asked, "Do our own words block us from seeing what clients actually do?"
The paper is a think-piece, not an experiment. It uses Skinner’s idea of rule-governed behavior to explain why applied scientists keep using weak methods even when data say stop.
What they found
The team says we follow verbal rules like "use RCTs" or "p < .05" even when those rules hurt our work with real people.
This rule-following keeps us from noticing when a client does better with a simpler, non-standard plan.
How this fits with other research
Israel (1978) saw the same problem forty years earlier. That paper said fights inside behavior analysis are really fights between theory talk and tech talk. The present paper updates the story: our talk itself is the trap.
Sorrell et al. (2025) and Allen et al. (2016) show the fix. Both used flexible video or mixed-reality coaching so teachers could see child data right away. Their trainees broke old rules and still got good outcomes.
Hinson (1988) and Joslyn et al. (2024) also warn about rigid rules, but they target stats instead of words. Together these papers form one big warning: any rule you can’t test against today’s data will slow you down.
Why it matters
Next time you catch yourself saying, "We have to run five baseline points," stop. Ask, "Did the last two points already show the trend?" Check your rule against the client in front of you. If the data say move, move. That small pause keeps science tied to people, not to slogans.
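The "check your rule against the data" step can be made concrete. Below is a minimal sketch of one way to do it, assuming an ordinary least-squares slope over the most recent baseline points as the trend criterion; the function name, window size, and threshold are illustrative placeholders, not anything specified in the paper or a clinical standard.

```python
def baseline_stable(points, window=3, slope_threshold=0.5):
    """Return True if the trend across the last `window` baseline
    points is flat enough to stop collecting baseline data.

    Uses an ordinary least-squares slope as the trend estimate.
    The threshold is an illustrative placeholder, not a clinical
    standard -- set it from your own decision criteria.
    """
    recent = points[-window:]
    if len(recent) < 2:
        return False  # not enough data to estimate a trend

    n = len(recent)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(recent) / n
    # Least-squares slope: cov(x, y) / var(x)
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, recent))
    den = sum((x - mean_x) ** 2 for x in xs)
    slope = num / den
    return abs(slope) <= slope_threshold

# A flat baseline after three points: no need to run five.
print(baseline_stable([10, 10.4, 9.8]))  # → True
# A steep upward trend: keep collecting baseline data.
print(baseline_stable([1, 3, 5, 7]))  # → False
```

The point is not this particular formula; it is that the stopping rule is stated as a testable criterion you can run against today's data instead of a fixed session count.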
Try it
Pick one default rule you use (e.g., "10-trial sessions"). Run the next case until the data meet your decision criteria, even if that takes only 5 trials. Note whether outcomes change.
Original abstract
Several articles have recently questioned the distinction between acceptance and commitment therapy (ACT) and traditional cognitive therapy (CT). This study presents a reanalysis of data from Zettle and Rains that compared 12 weeks of group CT with group ACT. For theoretical reasons, Zettle and Rains also included a modified form of CT that did not include distancing, and no intent-to-treat analysis was included. Particularly because that unusual third condition did somewhat better than the full CT package, it contaminated the direct comparison of ACT and CT, which has of late become theoretically interesting. In the present study, data from participants in the ACT and CT conditions were reanalyzed. ACT was shown to produce greater reductions in levels of self-reported depression using an intent-to-treat analysis. Posttreatment levels of cognitive defusion mediated this effect at follow-up. The occurrence of depressogenic thoughts and level of dysfunctional attitudes did not function as mediators. This study adds additional evidence that ACT works through distinct and theoretically specified processes that are not the same as CT.
The Analysis of Verbal Behavior, 2019 · doi:10.1177/0145445511398344