Ethical Use of AI in Behavior Analytic Practice: Frequently Asked Questions

Source & Transformation

These answers draw in part from “A Developing Framework for Ethical Use of Artificial Intelligence in Behavior Analytic Practice” by Mahin Para-Cremer, M.Ed., BCBA, LBA (BehaviorLive), and extend it with peer-reviewed research from our library of 27,900+ ABA research articles. Clinical framing, BACB ethics code references, and cross-links below are synthesized by Behaviorist Book Club.

View the original presentation →
Questions Covered
  1. What are the main ethical risks of using AI in behavior analytic practice?
  2. Am I required to tell clients when I use AI in their care?
  3. Can I use AI to write behavior intervention plans?
  4. What about using AI for session note documentation?
  5. How does AI affect the supervisory relationship?
  6. What is the Consortium for Ethical AI in ABA?
  7. What data privacy concerns should I have about AI tools?
  8. How can organizations develop ethical AI policies?
  9. Will AI replace behavior analysts?
  10. What should I do if I discover that an AI tool I have been using produces unreliable output?

1. What are the main ethical risks of using AI in behavior analytic practice?

The primary risks fall into four areas: compromised truthfulness, when AI-generated content is presented without disclosure; reduced accountability, when practitioners defer to AI output without critical evaluation; confidentiality violations, when client data is processed through AI platforms with inadequate data protections; and threats to client welfare, when AI-generated recommendations are inaccurate or not individualized. Additional risks include competence concerns, when practitioners use AI tools they do not understand well enough to evaluate, and erosion of clinical skills, when AI automation reduces the practitioner's direct engagement with clinical reasoning. Each of these risks can be mitigated through deliberate policies and practices, but none of them can be ignored.

2. Am I required to tell clients when I use AI in their care?

The Ethics Code's informed consent and transparency requirements strongly support disclosing AI use when it plays a meaningful role in assessment, treatment planning, or documentation. Code 2.11 requires informing clients about factors that may affect service delivery. While the current Ethics Code does not specifically mention AI, the principle of transparency applies. If AI is generating content that influences clinical decisions about a client's care, that client has a reasonable interest in knowing. The Consortium's framework likely provides more specific guidance on the form and timing of disclosure. As a general principle, err on the side of transparency — failing to disclose when disclosure was warranted damages trust far more than proactive disclosure ever does.

3. Can I use AI to write behavior intervention plans?

You can use AI as a drafting tool, but the final product must reflect your independent clinical judgment and be fully reviewed for accuracy, individualization, and evidence-based content. A behavior intervention plan generated by AI may use correct terminology and follow standard formatting but contain recommendations that are not based on the specific functional assessment data for that client, include strategies that are not appropriate for the client's age or context, or omit important individualized considerations. You must be competent to identify and correct all of these issues. If the AI-drafted plan saves time because you are editing rather than writing from scratch, that is a reasonable use. If the AI plan is being submitted with only superficial review, that is an ethical problem.

4. What about using AI for session note documentation?

AI-assisted session notes carry specific risks around truthfulness and accuracy. AI may generate plausible-sounding content that does not accurately reflect what occurred during the session — adding details that did not happen, omitting clinically important observations, or characterizing interactions in ways that do not match your direct experience. Every AI-generated note must be reviewed line by line against your actual clinical observations before signing. Additionally, consider the data privacy implications: if you are entering session details into an AI platform, you need to know where that data is stored, who has access, and whether it could be exposed. The convenience of AI-assisted documentation does not justify compromising client confidentiality.

5. How does AI affect the supervisory relationship?

AI can significantly complicate supervision if not managed carefully. If a supervisee uses AI to prepare case conceptualizations, analyze data, or develop treatment plans, the supervisor may receive products that reflect the AI's capabilities rather than the supervisee's actual competence. This makes it difficult for the supervisor to accurately assess the supervisee's skills and provide appropriate feedback. Supervisors should establish clear expectations about AI use in supervisory tasks, ask supervisees to demonstrate their reasoning process rather than just presenting finished products, and consider the implications for competency assessment. The goal of supervision is to develop the supervisee's clinical repertoire, which requires the supervisor to see the supervisee's authentic work.

6. What is the Consortium for Ethical AI in ABA?

The Consortium is an organized group of behavior analysis professionals that has developed an ethical framework specifically for AI use in applied behavior analysis. The framework addresses core ethical issues including truthfulness, accountability, transparency, and client welfare as they apply to AI integration. The Consortium's work represents the first formal effort within the profession to provide practitioners and organizations with structured guidance for navigating the ethical challenges that AI creates. Mahin Para-Cremer presents the Consortium's framework in this course, making it accessible to practitioners who need practical guidance for their current use of AI tools. The framework is designed to complement existing Ethics Code requirements rather than replace them.

7. What data privacy concerns should I have about AI tools?

When you enter client information into an AI platform, that data may be stored on external servers, used to train the AI model, accessible to the platform's employees, or vulnerable to data breaches. Code 2.04 requires behavior analysts to protect client confidentiality, and this obligation extends to digital systems. Before using any AI tool with client data, investigate the platform's data handling policies: Does it store the data you enter? Is data used for model training? Can data be deleted upon request? Is the platform compliant with relevant healthcare data protection standards? Consider using AI tools only with de-identified data, or using platforms that offer enterprise-grade privacy protections specifically designed for healthcare applications.

8. How can organizations develop ethical AI policies?

Organizations should develop written AI use policies that specify which AI tools are approved for use, which tasks they may be used for, what level of human review is required for AI-generated content, how AI use will be disclosed to clients, what data privacy protections are required, and how compliance will be monitored. These policies should be developed with input from clinical staff, reviewed by someone with expertise in both AI and behavior analytic ethics, and updated regularly as technology and professional guidance evolve. Training should accompany the policy so that all staff understand not just the rules but the reasoning behind them. Organizations should also monitor the quality of AI-assisted work products to ensure that standards are being maintained.

9. Will AI replace behavior analysts?

AI is unlikely to replace the core functions of behavior analysts — clinical judgment, therapeutic relationships, individualized assessment, and ethical decision-making — because these functions require contextual understanding, empathy, and professional accountability that AI cannot provide. However, AI will likely change how behavior analysts work by automating routine tasks, providing decision support, and handling administrative functions. The practitioners who will thrive in an AI-augmented field are those who develop the competence to use AI tools ethically and effectively while maintaining the clinical and interpersonal skills that AI cannot replicate. The risk is not that AI will replace behavior analysts but that behavior analysts who use AI uncritically may provide lower-quality care than those who maintain rigorous professional standards.

10. What should I do if I discover that an AI tool I have been using produces unreliable output?

Take immediate action to assess the scope of the problem. Review past work products that were generated or assisted by the AI tool to identify any errors that may have been incorporated into clinical documents, treatment plans, or other materials. Correct any errors found and, if the errors affected clinical decisions, consider whether additional corrective action is needed for affected clients. Discontinue use of the tool for the purpose where it proved unreliable. Document what you found and what corrective actions you took. If the unreliable output was shared with clients, families, or other stakeholders, consider whether disclosure of the error is warranted. Use the experience to refine your AI evaluation processes so that similar problems are caught earlier in the future.

FREE CEUs

Get CEUs on This Topic — Free

The ABA Clubhouse has 60+ on-demand CEUs including ethics, supervision, and clinical topics like this one. Plus a new live CEU every Wednesday.

60+ on-demand CEUs (ethics, supervision, general)
New live CEU every Wednesday
Community of 500+ BCBAs
100% free to join
Join The ABA Clubhouse — Free →

Earn CEU Credit on This Topic

Ready to go deeper? This course covers this topic with structured learning objectives and CEU credit.

A Developing Framework for Ethical Use of Artificial Intelligence in Behavior Analytic Practice — Mahin Para-Cremer · 1 BACB Ethics CEU · $20

Take This Course →
📚 Browse All 60+ Free CEUs — ethics, supervision & clinical topics in The ABA Clubhouse

Research: Explore the Evidence

We extended these answers with research from our library — dig into the peer-reviewed studies behind the topic, in plain-English summaries written for BCBAs.

Measurement and Evidence Quality

279 research articles with practitioner takeaways

View Research →

Symptom Screening and Profile Matching

258 research articles with practitioner takeaways

View Research →

Brief Behavior Assessment and Treatment Matching

252 research articles with practitioner takeaways

View Research →

Related Topics

CEU Course: A Developing Framework for Ethical Use of Artificial Intelligence in Behavior Analytic Practice

1 BACB Ethics CEU · $20 · BehaviorLive

Guide: A Developing Framework for Ethical Use of Artificial Intelligence in Behavior Analytic Practice — What Every BCBA Needs to Know

Research-backed educational guide with practice recommendations

Decision Guide: Comparing Approaches

Side-by-side comparison with clinical decision framework

CEU Buddy

No scramble. No surprises.

You earn CEUs from a dozen different places. Upload any certificate — from here, your employer, conferences, wherever — and always know exactly where you stand. Learning, Ethics, Supervision, all handled.

Upload a certificate, everything else is automatic
Works with any ACE provider
$7/mo to protect $1,000+ in earned CEUs
Try It Free for 30 Days →

No credit card required. Cancel anytime.

Clinical Disclaimer

All behavior-analytic intervention is individualized. The information on this page is for educational purposes and does not constitute clinical advice. Treatment decisions should be informed by the best available published research and individualized assessment, and made with the informed consent of the client or their legal guardian. Behavior analysts are responsible for practicing within the boundaries of their competence and adhering to the BACB Ethics Code for Behavior Analysts.
