Practitioner Development

ChatGPT versus clinician responses to questions in ABA: Preference, identification, and level of agreement

Peck et al. (2025) · Journal of Applied Behavior Analysis
★ The Verdict

In a blind comparison, behavior analysts preferred and agreed more with ChatGPT-4's answers than with expert clinicians' answers to ABA questions, a sign that AI can already produce credible-sounding practice advice.

✓ Read this if you are a BCBA who fields daily questions from parents, teachers, or supervisees.
✗ Skip if you are a clinician who already uses AI tools and wants outcome data instead of preference data.

01 Research in Context

01

What this study did

The team asked 51 behavior analysts to read answers to common ABA questions.

Half the answers came from expert clinicians. Half came from ChatGPT-4.

Each rater judged which source they liked more, guessed who wrote it, and scored how much they agreed with the advice.

02

What they found

Raters picked the ChatGPT-4 answers significantly more often.

They also agreed with the AI answers more often than with the human ones.

Most telling: they could not reliably tell which answers were written by a machine.

03

How this fits with other research

Normand et al. (2022) showed that fancy ABA words do not scare people off. Peck et al. (2025) adds that AI can use those same words and still win fans.

Wilson et al. (2024) found parents see ABA as cold and robotic. This new study complicates that worry: when the "robot" is ChatGPT-4, clinicians themselves prefer its answers.

Rojahn et al. (2012) warned that robot tools for autism are still shaky. Peck et al. (2025) suggests that shakiness is fading for text-based advice; hardware is another story.

04

Why it matters

You can now test ChatGPT-4 as a second opinion for tricky cases. Draft a question, compare the AI answer to your plan, and see if it sparks a better idea.

→ Action — try this Monday

Pick one tough parent question you got last week. Ask ChatGPT-4 the same question and compare its answer to what you said.

02 At a glance

Intervention
not applicable
Design
survey
Sample size
51
Population
behavior analysts
Finding
positive

03 Original abstract

The potential utility of artificial intelligence (AI) in applied behavior analysis (ABA) is an emerging discussion. There has been limited investigation on the current use, acceptability, or limitations of common AI tools within the field. The current study contributes to these topics by comparing expert clinician and AI (ChatGPT-4) responses to questions specific to ABA. Fifty-one behavior analysts were recruited as participants and indicated their preference for and level of agreement with ChatGPT-4 versus human clinical team responses in a blind assessment. Next, participants' distinctions between the two response sources were evaluated. Finally, participants were asked about their current use of AI to aid in their behavior-analytic work. Participants significantly preferred and agreed more with ChatGPT-4 responses than with human responses. Participants could not reliably discriminate between ChatGPT-4 and human responses. Some of the participants (15.69% of sample) indicated they have used AI to assist in aspects of behavior-analytic work.

Journal of Applied Behavior Analysis, 2025 · doi:10.1002/jaba.70029