An experimental analysis of gender‐biased verbal behavior and self‐editing using an online chat analog
An online chat replay can reveal hidden self-editing that gender cues trigger, giving BCBAs a new lens on social verbal behavior.
01 Research in Context
What this study did
Oda’s team built a simulated online chat room. Adults typed messages to partners they believed were male or female.
The software logged every keystroke, deletion, and rewrite, catching self-editing that never reached the partner; the sketch below shows the idea.
None of the participants had autism; the goal was to see whether a plain gender label alone could shape what people chose to say.
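A minimal sketch of the logging idea, assuming a simple event log of typed characters and backspaces. The article does not spell out the software at this level, so the event format here is an assumption, not the authors’ implementation:

```python
# Illustrative only: rebuild the sent message and recover covertly
# deleted text from a stream of (kind, char) keystroke events.

def replay(events):
    """Rebuild the sent message and collect text that was typed, then erased."""
    buffer, deleted = [], []
    for kind, char in events:
        if kind == "char":
            buffer.append(char)
        elif kind == "backspace" and buffer:
            deleted.append(buffer.pop())  # text the partner never saw
    return "".join(buffer), "".join(reversed(deleted))

# Example: a participant types "no way", erases it, and sends "maybe".
events  = [("char", c) for c in "no way"]
events += [("backspace", "")] * len("no way")
events += [("char", c) for c in "maybe"]

sent, erased = replay(events)
print(sent)    # -> maybe
print(erased)  # -> no way
```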
What they found
Some participants did delete more and chose softer wording with partners labeled female. Others disagreed and pushed harder with partners labeled male.
Yet the statistics (Fisher's exact test) showed no clear, across-the-board difference. The chat tool works, but the bias signal stayed weak.
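For readers who want to see what that group-level test looks like, here is a toy example using scipy's Fisher's exact test. The counts are invented for demonstration and are not the study's data:

```python
from scipy.stats import fisher_exact

#                    disagreed   did not disagree
# "female" audience      4             10
# "male" audience        9              5
table = [[4, 10],
         [9, 5]]

odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")
# With these toy counts, p lands above .05, mirroring the study's
# nonsignificant group-level result.
```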
How this fits with other research
Mates (1990) found female reviewers accepted papers by female authors six times more often, a strong real-world bias. Oda’s lab task may have lacked the stakes that make bias bloom.
Critchfield (1996) showed people hide failure reports when given the chance. Oda extends this by showing that hidden edits can be tracked keystroke by keystroke.
Rotta et al. (2022) charted women’s rising voice in behavior-analysis journals. Their review catalogs new tools for studying gender and verbal behavior, so Oda’s chat analog now sits on that shelf.
Why it matters
You now have a cheap, ready-made way to let clients safely see their own social editing. Run the chat in a social-skills group, show the replay (a loop like the sketch below), and teach where they softened, argued, or hid their voice. The data may not scream “bias,” but the visual of deleted text opens eyes and gives you a concrete place to start self-advocacy drills.
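Showing the replay can be as simple as stepping through the same kind of event log and redrawing the message in place; a minimal sketch, again with an assumed event format:

```python
import time

events  = [("char", c) for c in "you're wrong"]       # first impulse
events += [("backspace", "")] * len("you're wrong")   # second thoughts
events += [("char", c) for c in "tell me more?"]      # what got sent

buffer = []
for kind, char in events:
    if kind == "char":
        buffer.append(char)
    elif kind == "backspace" and buffer:
        buffer.pop()
    # carriage return redraws the line; padding erases leftover characters
    print("\r" + "".join(buffer).ljust(20), end="", flush=True)
    time.sleep(0.15)  # slow enough for the group to follow
print()
```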
Low-tech alternative: open a shared Google Doc, have two clients type a debate, then replay the version history to count deletions and discuss why they edited.
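If you want to script the scoring step, Python’s standard-library difflib can count the spans that changed between two drafts. The drafts below are invented examples, and copying revision snapshots out of the version history by hand is assumed:

```python
import difflib

draft_early = "That idea is wrong and it will never work."
draft_sent  = "I see it differently; can you walk me through it?"

matcher = difflib.SequenceMatcher(None, draft_early, draft_sent)
edited_spans = [
    draft_early[i1:i2]
    for tag, i1, i2, j1, j2 in matcher.get_opcodes()
    if tag in ("delete", "replace")  # text dropped or softened
]

print(f"{len(edited_spans)} edited span(s): {edited_spans}")
```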
02 At a glance
- Participants: 28 typically developing adults
- Setting: an online chat analog with partners whose perceived gender (female or male) was manipulated
- Measures: overt interrupting, plus overt and covert disagreeing, pressuring, and self-editing
- Result: visual inspection showed differentiated responding for some participants; Fisher's exact test found no significant group-level differences
03 Original abstract
The current study investigated the effects of female and male audiences on gender-biased verbal behavior and self-editing using an online chat environment analog. The chat analog allowed access to self-editing behaviors, which are frequently covert, thus providing additional information about verbal episodes. We examined whether the strength and the dimensions of verbal responses differentially varied across the female and male audience conditions using visual inspection and statistical analysis. Participants were 28 typically developing adults. Overt responses were recorded for interrupting, and both overt and covert responses were recorded for disagreeing, pressuring, and self-editing. Visual inspection revealed differentiated overt and covert disagreeing, pressuring, and interrupting for some participants, while statistical analysis using Fisher's exact test did not reveal significant differences in the dependent variables between the audience's perceived gender and the participants' gender. Differentiated responding between female and male audiences suggests that perceived gender can exert stimulus control over a speaker's behavior. Although we did not observe consistent gender-biased responding for all the participants, our experimental evaluation functions as a proof-of-concept study that can encourage the use of this methodology to study complex social behavior.
Journal of the Experimental Analysis of Behavior, 2022 · doi:10.1002/jeab.763