ABA Fundamentals

Modeling arbitrarily applicable relational responding with the non-axiomatic reasoning system: a Machine Psychology approach

Johansson (2025) · Frontiers in Robotics and AI 2025
★ The Verdict

An AI system can model human arbitrarily applicable relational responding, giving BCBAs a potential quick way to pre-test stimulus-equivalence protocols.

✓ Read this if: you're a BCBA who builds stimulus-equivalence or relational-frame programs for school or clinic settings.

✗ Skip if: your practice focuses only on reducing problem behavior, with no language or academic goals.

01 Research in Context

01

What this study did

Johansson (2025) used an AI reasoning system called NARS, short for Non-Axiomatic Reasoning System.

He told it only A = B and B = C.

Then he asked if A = C.

The program said yes, even though no one taught it that pair.

He ran two theoretical demonstrations: one on stimulus equivalence and transfer of function, one on more complex opposition frames.

In both, the system derived the untaught relations.

02

What they found

The AI acted like a human doing arbitrarily applicable relational responding.

It showed mutual entailment: if A = B, then B = A.

It showed combinatorial entailment: if A = B and B = C, then A = C.

The system also showed transformation of stimulus functions: after a neutral stimulus was related to a liked or disliked one, its function shifted to match, just as people show in stimulus-function tests.
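The three properties above can be illustrated with a short sketch in plain Python (not NARS itself, just the entailment logic): trained "same as" pairs are merged into equivalence classes, untaught relations fall out of shared class membership, and a function attached to one member transfers to the rest.

```python
def equivalence_classes(trained_pairs):
    """Merge stimuli that are directly or indirectly trained as
    equivalent (mutual + combinatorial entailment)."""
    classes = []
    for pair in trained_pairs:
        group = set(pair)
        rest = []
        for c in classes:
            if c & group:
                group |= c          # merge overlapping classes
            else:
                rest.append(c)
        classes = rest + [group]
    return classes

def same(x, y, classes):
    # Mutual entailment (order does not matter) and combinatorial
    # entailment both reduce to shared class membership here.
    return any(x in c and y in c for c in classes)

def function_of(x, classes, functions):
    """Transformation of stimulus functions: a function paired with
    one class member is inherited by every member."""
    for c in classes:
        if x in c:
            for member in c:
                if member in functions:
                    return functions[member]
    return None

classes = equivalence_classes([("A", "B"), ("B", "C")])
print(same("A", "C", classes))                     # untaught, yet derived: True
print(same("C", "A", classes))                     # mutual entailment: True
print(function_of("C", classes, {"A": "good"}))    # "good" transfers to C
```

This is only a toy closure over crisp relations; NARS additionally tracks uncertainty and context, which a dictionary of sets cannot capture.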

03

How this fits with other research

Ninness et al. (2018) did the same trick earlier with a neural net named EVA.

Their model learned relations by weight changes; Johansson swaps in NARS logic rules.

The new paper keeps the goal but upgrades the engine, extending the 2018 work with an explicit reasoning system in place of weight changes.

Ellingsen et al. (2014) showed pigeons can treat two keys as the same after shared food.

That animal study sits at a simpler level: associative, not derived.

Together the papers form a ladder: pigeons match keys, people derive relations, computers now copy the people.

04

Why it matters

You now have a free pilot-testing tool.

Before running a stimulus-equivalence lesson with a learner, plug the targets into NARS.

If the AI fails to derive the relation, your training sequence probably needs more baseline pairs.

This five-minute check can save you hours of faulty trials and frustrated clients.

Free CEUs

Want CEUs on This Topic?

The ABA Clubhouse has 60+ free CEUs — live every Wednesday. Ethics, supervision & clinical topics.

Join Free →
→ Action — try this Monday

Download the open-source NARS package, enter your next equivalence set, and run the derivation check before your first learner session.
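For concreteness, here is roughly what such an equivalence set looks like in Narsese, the input language of NARS (an illustrative sketch; exact input and output conventions vary across NARS implementations):

```
<A <-> B>.
<B <-> C>.
<A <-> C>?
```

The first two lines are the trained baseline pairs, stated as judgments (the period marks a belief); the last line is a question (the question mark asks the system whether the untaught relation holds). A positive answer suggests your baseline pairs are sufficient; no answer suggests the sequence needs more pairs before the learner session.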

02 At a glance

Intervention: not applicable
Design: theoretical
Finding: not reported

03 Original abstract

Arbitrarily Applicable Relational Responding (AARR) is a cornerstone of human language and reasoning, referring to the learned ability to relate symbols in flexible, context-dependent ways. In this paper, we present a novel theoretical approach for modeling AARR within an artificial intelligence framework using the Non-Axiomatic Reasoning System (NARS). NARS is an adaptive reasoning system designed for learning under uncertainty. We introduce a theoretical mechanism called acquired relations, enabling NARS to derive symbolic relational knowledge directly from sensorimotor experiences. By integrating principles from Relational Frame Theory—the behavioral psychology account of AARR—with the reasoning mechanisms of NARS, we conceptually demonstrate how key properties of AARR (mutual entailment, combinatorial entailment, and transformation of stimulus functions) can emerge from NARS’s inference rules and memory structures. Two theoretical demonstrations illustrate this approach: one modeling stimulus equivalence and transfer of function, and another modeling complex relational networks involving opposition frames. In both cases, the system logically demonstrates the derivation of untrained relations and context-sensitive transformations of stimulus functions, mirroring established human cognitive phenomena. These results suggest that AARR—long considered uniquely human—can be conceptually captured by suitably designed AI systems, emphasizing the value of integrating behavioral science insights into artificial general intelligence (AGI) research. Empirical validation of this theoretical approach remains an essential future direction.

Frontiers in Robotics and AI, 2025 · doi:10.3389/frobt.2025.1586033