ABA Fundamentals

The Emergence of Stimulus Relations: Human and Computer Learning

Ninness et al. (2018) · Perspectives on Behavior Science
★ The Verdict

Run the lesson through a neural-network model first; if the computer forms equivalence classes, your human learners probably will too.

✓ Read this if you're a BCBA who runs stimulus-equivalence or language programs in clinics or schools.
✗ Skip if you only do direct-care reduction of problem behavior.

01 · Research in Context

01

What this study did

Ninness and colleagues built three computer models that learn like people do. The programs—RELNET, EVA, and a compound-stimulus tool—were trained to form stimulus classes without being told every match.

The team ran hundreds of simulated trials first. They wanted to see whether the models would pick the right comparisons after only a few taught links, just like humans in equivalence experiments.
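As a rough illustration of the compound-stimulus approach (this is a sketch, not the authors' EVA or RELNET code; the stimulus names, network size, and training values below are invented), a small feedforward network can be trained to judge whether two stimuli belong to the same class, using only a few taught pairs per class:

```python
import numpy as np

# Hypothetical compound-stimulus sketch: the network sees two concatenated
# one-hot stimuli and learns to output 1 for within-class pairs and 0 for
# cross-class pairs. Only A-B and B-C links are trained directly.
rng = np.random.default_rng(0)
stimuli = ["A1", "B1", "C1", "A2", "B2", "C2"]
idx = {s: i for i, s in enumerate(stimuli)}

def encode(pair):
    x = np.zeros(12)
    x[idx[pair[0]]] = 1.0
    x[6 + idx[pair[1]]] = 1.0
    return x

trained = [("A1", "B1", 1), ("B1", "C1", 1), ("A2", "B2", 1), ("B2", "C2", 1),
           ("A1", "B2", 0), ("B1", "C2", 0), ("A2", "B1", 0), ("B2", "C1", 0)]
X = np.array([encode((a, b)) for a, b, _ in trained])
y = np.array([[t] for _, _, t in trained], dtype=float)

# One hidden layer, sigmoid units, gradient descent on cross-entropy loss.
W1 = rng.normal(0, 0.5, (12, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1));  b2 = np.zeros(1)
sig = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(8000):
    h = sig(X @ W1 + b1)
    out = sig(h @ W2 + b2)
    d_out = out - y                      # BCE gradient at the output
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.1 * h.T @ d_out; b2 -= 0.1 * d_out.sum(0)
    W1 -= 0.1 * X.T @ d_h;   b1 -= 0.1 * d_h.sum(0)

def relate(a, b):
    h = sig(encode((a, b)) @ W1 + b1)
    return float(sig(h @ W2 + b2))

# Trained pairs should now score near their targets; the untrained A1-C1
# probe is where "emergence" would show up (often, though not always, > 0.5).
print(relate("A1", "B1"), relate("A1", "B2"), relate("A1", "C1"))
```

The interesting readout is the last probe: A1-C1 was never trained, so any systematic tendency of the network to accept it mirrors the derived-relation tests given to human participants.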

02

What they found

The simulations passed equivalence tests almost every time. After being trained on A-B and B-C relations, the networks derived the untrained A-C and C-A relations without extra training.

Error patterns looked like college-student data: more mistakes when the chain was longer or when stimuli looked alike.
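The test logic itself can be stated independently of any network: given the trained pairs, the relations a learner "should" derive are the reflexive, symmetric, and transitive closure. A plain-Python sketch (stimulus names are illustrative):

```python
# Which relations *should* emerge from trained pairs under reflexivity,
# symmetry, and transitivity. Stimulus names here are illustrative.
def equivalence_closure(trained_pairs):
    members = {s for pair in trained_pairs for s in pair}
    derived = {(s, s) for s in members}              # reflexivity
    derived |= set(trained_pairs)
    changed = True
    while changed:
        changed = False
        for a, b in list(derived):
            if (b, a) not in derived:                # symmetry
                derived.add((b, a)); changed = True
        for a, b in list(derived):
            for c, d in list(derived):
                if b == c and (a, d) not in derived:  # transitivity
                    derived.add((a, d)); changed = True
    return derived

# Train only A-B and B-C; A-C and C-A appear among the derived relations.
rels = equivalence_closure([("A", "B"), ("B", "C")])
print(sorted(rels))
```

Comparing a model's probe responses against this closure is exactly the kind of scoring an equivalence test performs, whether the learner is a student or a simulation.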

03

How this fits with other research

Johansson (2025) goes further. Where Ninness models simple equivalence, Johansson’s NARS engine shows full arbitrarily applicable relational responding—mutual entailment and transformation of function—so the idea now covers metaphor and rule-governed behavior, too.

Winett et al. (1991) warned that we still need tightly controlled human experiments. Ninness answers that call by letting the computer do the pilot work, saving live sessions for the final check.

Fradet et al. (2025) use a different algorithm, but both papers offload teaching choices to code. One shapes motor sequences; the other shapes stimulus relations—same labor-saving spirit.

04

Why it matters

You can test a full equivalence protocol on your laptop before a single participant shows up. If the model fails, tweak the training order, stimulus size, or feedback schedule until it passes. Then run the polished version with real learners and collect cleaner data in fewer sessions. Think of it as a digital pilot subject that never gets tired.
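One concrete pre-pilot check (a hypothetical sketch, not the authors' procedure; the structures and names are invented here) is to compare candidate training structures by their worst-case "nodal distance", since longer derivation chains predicted more errors in both the simulations and the human data:

```python
from collections import deque

# Compare training structures by the longest chain a learner must bridge.
# Trained links are treated as bidirectional, since symmetry is tested too.
def max_nodal_distance(taught):
    nodes = {s for pair in taught for s in pair}
    adj = {n: set() for n in nodes}
    for a, b in taught:
        adj[a].add(b); adj[b].add(a)
    worst = 0
    for start in nodes:                  # BFS from every member
        dist = {start: 0}
        queue = deque([start])
        while queue:
            cur = queue.popleft()
            for nxt in adj[cur]:
                if nxt not in dist:
                    dist[nxt] = dist[cur] + 1
                    queue.append(nxt)
        worst = max(worst, max(dist.values()))
    return worst

linear = [("A", "B"), ("B", "C"), ("C", "D")]        # linear series
one_to_many = [("A", "B"), ("A", "C"), ("A", "D")]   # one-to-many
print(max_nodal_distance(linear), max_nodal_distance(one_to_many))
```

Both structures teach three links, but the one-to-many arrangement caps the chain at two nodes while the linear series stretches it to three, so a model-based pilot would likely predict cleaner emergence from the former before any client ever sits down.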

→ Action — try this Monday

Download the free EVA script, plug in your next A-B-C stimulus set, and teach only the A-B and B-C links; let the model tell you whether emergence looks likely before you start trials with your client.

02 · At a glance

Intervention
stimulus equivalence training
Design
theoretical
Population
neurotypical
Finding
not reported

03 · Original abstract

Traditionally, investigations in the area of stimulus equivalence have employed humans as experimental participants. Recently, however, artificial neural network models (often referred to as connectionist models [CMs]) have been developed to simulate performances seen among human participants when training various types of stimulus relations. Two types of neural network models have shown particular promise in recent years. RELNET has demonstrated its capacity to approximate human acquisition of stimulus relations using simulated matching-to-sample (MTS) procedures (e.g., Lyddy & Barnes-Holmes Journal of Speech and Language Pathology and Applied Behavior Analysis, 2, 14–24, 2007). Other newly developed connectionist algorithms train stimulus relations by way of compound stimuli (e.g., Tovar & Chavez The Psychological Record, 62, 747–762, 2012; Vernucio & Debert The Psychological Record, 66, 439–449, 2016). What makes all of these CMs interesting to many behavioral researchers is their apparent ability to simulate the acquisition of diversified stimulus relations as an analogue to human learning; that is, neural networks learn over a series of training epochs such that these models become capable of deriving novel or untrained stimulus relations. With the goal of explaining these quickly evolving approaches to practical and experimental endeavors in behavior analysis, we offer an overview of existing CMs as they apply to behavior–analytic theory and practice. We provide a brief overview of derived stimulus relations as applied to human academic remediation, and we argue that human and simulated human investigations have symbiotic experimental potential. Additionally, we provide a working example of a neural network referred to as emergent virtual analytics (EVA). 
This model demonstrates a process by which artificial neural networks can be employed by behavior–analytic researchers to understand, simulate, and predict derived stimulus relations made by human participants. The online version of this article (doi:10.1007/s40614-017-0125-6) contains supplementary material, which is available to authorized users.

Perspectives on Behavior Science, 2018 · doi:10.1007/s40614-017-0125-6