Practitioner Development

Starting the Conversation Around the Ethical Use of Artificial Intelligence in Applied Behavior Analysis

Jennings et al. (2024) · Behavior Analysis in Practice
★ The Verdict

The BACB Code doesn’t cover AI—start discussing ethical guardrails now before AI tools become commonplace in ABA practice.

✓ Read this if you're a BCBA who plans to buy, build, or bill for any AI-assisted service.
✗ Skip if you practice in a setting that bans all digital tools.

01 Research in Context

01

What this study did

Jennings et al. (2024) wrote a position paper. They asked: where does the BACB Ethics Code fall short on artificial intelligence?

The authors scanned every code item. They listed places where AI tools could clash with current rules. They ended with a call to start the conversation now, before AI becomes everyday equipment in ABA practice.

02

What they found

The Code is silent on data privacy, algorithmic bias, and who is responsible when software makes clinical decisions. These gaps could put clients and BCBAs at risk.

The paper gives a starter list of questions teams can ask before buying or building any AI product.

03

How this fits with other research

Cox et al. (2024) paint the upside. Their review shows AI can help at every step of service delivery. Jennings et al. add the guardrails that keep that same AI safe and fair.

Yagafarova et al. (2025) give real data. Their single-case study shows an AI coach raised procedural fidelity for most novice therapists. Jennings et al. supply the ethical checklist you should run before you switch that coach on.

Contreras et al. (2022) argued that evidence-based practice should guide all ethical choices. Jennings et al. extend that idea to new tech: use data, client values, and clinical skill to judge any AI tool.

04

Why it matters

AI sales reps are already knocking. If your organization lacks an AI policy, you are the policy. Use the questions in Jennings et al. to write a one-page stance. Cover data storage, consent, bias checks, and human override. Add it to your staff handbook this month.

→ Action — try this Monday

Draft a three-item AI ethics screen: Who owns the data? How will we test for bias? Who signs off on final decisions?

02 At a glance

Intervention: not applicable
Design: theoretical (position paper)
Finding: not reported

03 Original abstract

Artificial intelligence (AI) is increasingly a part of our everyday lives. Though much AI work in healthcare has been outside of applied behavior analysis (ABA), researchers within ABA have begun to demonstrate many different ways that AI might improve the delivery of ABA services. Though AI offers many exciting advances, absent from the behavior analytic literature thus far is conversation around ethical considerations when developing, building, and deploying AI technologies. Further, though AI is already in the process of coming to ABA, it is unknown the extent to which behavior analytic practitioners are familiar (and comfortable) with the use of AI in ABA. The purpose of this article is twofold. First, to describe how existing ethical publications (e.g., BACB Code of Ethics) do and do not speak to the unique ethical concerns with deploying AI in everyday, ABA service delivery settings. Second, to raise questions for consideration that might inform future ethical guidelines when developing and using AI in ABA service delivery. In total, we hope this article sparks proactive dialog around the ethical use of AI in ABA before the field is required to have a reactionary conversation.

Behavior Analysis in Practice, 2024 · doi:10.1007/s40617-023-00868-z