Natural language processing, pragmatics, and verbal behavior.
Skinner’s verbal operants give AI (and you) a ready-made map for making language fit context and consequences.
01 Research in Context
What this study did
Cherpas (1992) asked a bold question: can Skinner’s Verbal Behavior make AI talk like a person? The paper maps each verbal operant to a computer module. The goal is a chatbot that cares about consequences, not just word strings.
No kids, no trials—just a blueprint. The author shows how mands, tacts, and autoclitics could guide a program to pick words that matter to the listener.
What they found
The article argues that pragmatics—why we say things—already lives inside Skinner’s boxes. Feed that structure into code and you get context-smart replies without hidden mental rules.
A sample architecture is sketched: sensors fire tacts, user desire fires mands, autoclitic frames glue the sentence together. The bot learns by consequences, same as a child.
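The sketched architecture can be made concrete with a minimal Python sketch. Everything here (the class names, the context matching, the strength-update rule) is an illustrative assumption for the example, not code from the paper:

```python
# Toy sketch of an operant-based verbal agent: responses are selected
# by current context and response strength, and strength is shaped by
# listener consequences. All names and numbers are illustrative.

class Operant:
    """A candidate verbal response tied to its controlling context."""
    def __init__(self, kind, context, response):
        self.kind = kind          # 'mand', 'tact', 'intraverbal', ...
        self.context = context    # the stimulus condition that evokes it
        self.response = response  # the utterance emitted
        self.strength = 1.0       # shaped by consequences, not fixed rules

class VerbalAgent:
    def __init__(self, operants):
        self.operants = operants

    def speak(self, context):
        """Emit the strongest operant evoked by the current context."""
        candidates = [o for o in self.operants if o.context == context]
        if not candidates:
            return None
        return max(candidates, key=lambda o: o.strength)

    def consequate(self, operant, reinforced):
        """Listener feedback strengthens or weakens the emitted operant."""
        operant.strength *= 1.2 if reinforced else 0.8

agent = VerbalAgent([
    Operant("tact", "sees_red_block", "red block"),
    Operant("mand", "wants_water", "water please"),
    Operant("mand", "wants_water", "gimme"),
])

# The listener ignores the first mand; its strength drops, so the
# agent's next utterance in the same context shifts to the alternative.
choice = agent.speak("wants_water")
agent.consequate(choice, reinforced=False)
```

The point of the sketch is that nothing "inside" the words carries meaning: selection depends only on evoking context and a history of consequences, which is the pragmatic stance the paper attributes to Skinner.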
How this fits with other research
Nangle et al. (1993) extend the idea. They show that intraverbals stay functionally separate from tacts in real preschoolers, giving the AI model a clean, independent operant to copy.
Hursh et al. (1974) point the same way. Toddlers produced brand-new sentences after simple imitation plus reinforcement, showing that Skinnerian procedures can create generative language without grammar modules, exactly what the AI paper hopes to code.
Dosen (2005) is a later commentary. It warns that Skinner eventually shed his early stimulus-response (S-R) framing. If you build the bot with rigid stimulus-response links, you repeat a mistake Skinner already corrected.
Why it matters
You probably won’t code a chatbot this afternoon, but you can borrow the lens. When you write programs or teach staff, think in operants plus consequences instead of ‘meanings inside words.’ Check that your intraverbal drills really are intraverbals—see Nangle et al. (1993). And remember reinforcement can create novel language, so give kids room to combine words you never directly taught, as shown in Hursh et al. (1974).
Label your next intraverbal lesson as ‘intraverbal,’ not ‘tact,’ and reinforce novel combinations to spark generative language.
02 At a glance
03 Original abstract
Natural Language Processing (NLP) is that part of Artificial Intelligence (AI) concerned with endowing computers with verbal and listener repertoires, so that people can interact with them more easily. Most attention has been given to accurately parsing and generating syntactic structures, although NLP researchers are finding ways of handling the semantic content of language as well. It is increasingly apparent that understanding the pragmatic (contextual and consequential) dimension of natural language is critical for producing effective NLP systems. While there are some techniques for applying pragmatics in computer systems, they are piecemeal, crude, and lack an integrated theoretical foundation. Unfortunately, there is little awareness that Skinner's (1957) Verbal Behavior provides an extensive, principled pragmatic analysis of language. The implications of Skinner's functional analysis for NLP and for verbal aspects of epistemology lead to a proposal for a "user expert"—a computer system whose area of expertise is the long-term computer user. The evolutionary nature of behavior suggests an AI technology known as genetic algorithms/programming for implementing such a system.
The Analysis of Verbal Behavior, 1992 · doi:10.1007/BF03392880
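The abstract closes by suggesting genetic algorithms as an implementation technology. A toy "weasel"-style sketch shows the mechanism: a fixed target string stands in for listener reinforcement, and candidate utterances evolve under that selection pressure. Every detail here (alphabet, mutation rate, population size) is an assumption for the demo, not from the paper:

```python
import random

random.seed(0)

# The target utterance is a stand-in for what the listener reinforces.
TARGET = "water please"
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def fitness(utterance):
    # Number of character positions matching the reinforced target.
    return sum(a == b for a, b in zip(utterance, TARGET))

def mutate(utterance, rate=0.1):
    # Each character has a small chance of being replaced at random.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in utterance)

def evolve(generations=300, pop_size=50):
    # Start from random strings, then repeat: rank, select, mutate.
    pop = ["".join(random.choice(ALPHABET) for _ in TARGET)
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if pop[0] == TARGET:
            break
        parents = pop[: pop_size // 5]  # keep the top fifth as parents
        # Elitism: carry the best candidate forward unchanged.
        pop = [pop[0]] + [mutate(random.choice(parents))
                          for _ in range(pop_size - 1)]
    return max(pop, key=fitness)

best = evolve()
```

Selection here plays the role the paper assigns to consequences: variants that "work" on the listener propagate, which is why the abstract links the evolutionary character of behavior to genetic algorithms.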