These answers draw in part from “A Developing Framework for Ethical Use of Artificial Intelligence in Behavior Analytic Practice” by Mahin Para-Cremer, M.Ed., BCBA, LBA (BehaviorLive), and extend it with peer-reviewed research from our library of 27,900+ ABA research articles. Clinical framing, BACB ethics code references, and cross-links below are synthesized by Behaviorist Book Club.
View the original presentation →

The primary risks include compromised truthfulness when AI-generated content is presented without disclosure, reduced accountability when practitioners defer to AI output without critical evaluation, confidentiality violations when client data is processed through AI platforms with inadequate data protections, and threats to client welfare when AI-generated recommendations are inaccurate or not individualized. Additional risks include competence concerns when practitioners use AI tools they do not understand well enough to evaluate, and erosion of clinical skills when AI automation reduces the practitioner's direct engagement with clinical reasoning. Each of these risks can be mitigated through deliberate policies and practices, but they cannot be ignored.
The Ethics Code's informed consent and transparency requirements strongly support disclosing AI use when it plays a meaningful role in assessment, treatment planning, or documentation. Code 2.11 requires informing clients about factors that may affect service delivery. While the current Ethics Code does not specifically mention AI, the principle of transparency applies. If AI is generating content that influences clinical decisions about a client's care, that client has a reasonable interest in knowing. The Consortium's framework likely provides more specific guidance on the form and timing of disclosure. As a general principle, err on the side of transparency — failing to disclose when disclosure was warranted damages trust far more than proactive disclosure ever does.
You can use AI as a drafting tool, but the final product must reflect your independent clinical judgment and be fully reviewed for accuracy, individualization, and consistency with the evidence base. A behavior intervention plan generated by AI may use correct terminology and follow standard formatting but contain recommendations that are not based on the specific functional assessment data for that client, include strategies that are not appropriate for the client's age or context, or omit important individualized considerations. You must be competent to identify and correct all of these issues. If the AI-drafted plan saves time because you are editing rather than writing from scratch, that is a reasonable use. If the AI plan is being submitted with only superficial review, that is an ethical problem.
AI-assisted session notes carry specific risks around truthfulness and accuracy. AI may generate plausible-sounding content that does not accurately reflect what occurred during the session — adding details that did not happen, omitting clinically important observations, or characterizing interactions in ways that do not match your direct experience. Every AI-generated note must be reviewed line by line against your actual clinical observations before signing. Additionally, consider the data privacy implications: if you are entering session details into an AI platform, you need to know where that data is stored, who has access, and whether it could be exposed. The convenience of AI-assisted documentation does not justify compromising client confidentiality.
AI can significantly complicate supervision if not managed carefully. If a supervisee uses AI to prepare case conceptualizations, analyze data, or develop treatment plans, the supervisor may receive products that reflect the AI's capabilities rather than the supervisee's actual competence. This makes it difficult for the supervisor to accurately assess the supervisee's skills and provide appropriate feedback. Supervisors should establish clear expectations about AI use in supervisory tasks, ask supervisees to demonstrate their reasoning process rather than just presenting finished products, and consider the implications for competency assessment. The goal of supervision is to develop the supervisee's clinical repertoire, which requires the supervisor to see the supervisee's authentic work.
The Consortium is an organized group of behavior analysis professionals that has developed an ethical framework specifically for AI use in applied behavior analysis. The framework addresses core ethical issues including truthfulness, accountability, transparency, and client welfare as they apply to AI integration. The Consortium's work represents the first formal effort within the profession to provide practitioners and organizations with structured guidance for navigating the ethical challenges that AI creates. Mahin Para-Cremer presents the Consortium's framework in this course, making it accessible to practitioners who need practical guidance for their current use of AI tools. The framework is designed to complement existing Ethics Code requirements rather than replace them.
When you enter client information into an AI platform, that data may be stored on external servers, used to train the AI model, accessible to the platform's employees, or vulnerable to data breaches. Code 2.04 requires behavior analysts to protect client confidentiality, and this obligation extends to digital systems. Before using any AI tool with client data, investigate the platform's data handling policies: Does it store the data you enter? Is data used for model training? Can data be deleted upon request? Is the platform compliant with relevant healthcare data protection standards? Consider using AI tools only with de-identified data, or using platforms that offer enterprise-grade privacy protections specifically designed for healthcare applications.
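To make the de-identification step concrete, here is a minimal sketch of how a practice might scrub obvious identifiers from a session note before it is sent to any external AI service. Everything in it (the KNOWN_CLIENTS roster, the patterns, the deidentify helper) is hypothetical and invented for illustration; a few regex substitutions do not by themselves meet HIPAA de-identification standards, which cover many more identifier categories.

```python
import re

# Hypothetical roster mapping real client names to neutral codes.
# In practice this would come from your practice-management system.
KNOWN_CLIENTS = {"Jane Doe": "CLIENT-001", "John Roe": "CLIENT-002"}

# Patterns for a few common identifiers (dates, phone numbers, emails).
# Illustrative only; HIPAA Safe Harbor covers 18 identifier categories.
PATTERNS = [
    (re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"), "[DATE]"),
    (re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"), "[PHONE]"),
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),
]

def deidentify(note: str) -> str:
    """Replace known client names and common identifier patterns
    with neutral placeholders before the text leaves your systems."""
    for name, code in KNOWN_CLIENTS.items():
        note = note.replace(name, code)
    for pattern, placeholder in PATTERNS:
        note = pattern.sub(placeholder, note)
    return note

if __name__ == "__main__":
    raw = "Met with Jane Doe on 3/14/2025; mother reachable at 555-867-5309."
    print(deidentify(raw))
    # -> "Met with CLIENT-001 on [DATE]; mother reachable at [PHONE]."
```

Even with an automated scrub like this in place, the vetting questions above still apply: scrubbing supplements, but does not replace, choosing a platform with appropriate contractual and technical privacy protections.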
Organizations should develop written AI use policies that specify which AI tools are approved for use, which tasks they may be used for, what level of human review is required for AI-generated content, how AI use will be disclosed to clients, what data privacy protections are required, and how compliance will be monitored. These policies should be developed with input from clinical staff, reviewed by someone with expertise in both AI and behavior analytic ethics, and updated regularly as technology and professional guidance evolve. Training should accompany the policy so that all staff understand not just the rules but the reasoning behind them. Organizations should also monitor the quality of AI-assisted work products to ensure that standards are being maintained.
AI is unlikely to replace the core functions of behavior analysts — clinical judgment, therapeutic relationships, individualized assessment, and ethical decision-making — because these functions require contextual understanding, empathy, and professional accountability that AI cannot provide. However, AI will likely change how behavior analysts work by automating routine tasks, providing decision support, and handling administrative functions. The practitioners who will thrive in an AI-augmented field are those who develop the competence to use AI tools ethically and effectively while maintaining the clinical and interpersonal skills that AI cannot replicate. The risk is not that AI will replace behavior analysts but that behavior analysts who use AI uncritically may provide lower-quality care than those who maintain rigorous professional standards.
Take immediate action to assess the scope of the problem. Review past work products that were generated or assisted by the AI tool to identify any errors that may have been incorporated into clinical documents, treatment plans, or other materials. Correct any errors found and, if the errors affected clinical decisions, consider whether additional corrective action is needed for affected clients. Discontinue use of the tool for the purpose where it proved unreliable. Document what you found and what corrective actions you took. If the unreliable output was shared with clients, families, or other stakeholders, consider whether disclosure of the error is warranted. Use the experience to refine your AI evaluation processes so that similar problems are caught earlier in the future.
The ABA Clubhouse has 60+ on-demand CEUs including ethics, supervision, and clinical topics like this one. Plus a new live CEU every Wednesday.
Ready to go deeper? This course covers the topic in depth, with structured learning objectives and CEU credit.
A Developing Framework for Ethical Use of Artificial Intelligence in Behavior Analytic Practice — Mahin Para-Cremer · 1 BACB Ethics CEU · $20
Take This Course →

We extended these answers with research from our library: dig into the peer-reviewed studies behind the topic, in plain-English summaries written for BCBAs.
You earn CEUs from a dozen different places. Upload any certificate — from here, your employer, conferences, wherever — and always know exactly where you stand. Learning, Ethics, Supervision, all handled.
No credit card required. Cancel anytime.
All behavior-analytic intervention is individualized. The information on this page is for educational purposes and does not constitute clinical advice. Treatment decisions should be informed by the best available published research and individualized assessment, and made with the informed consent of the client or their legal guardian. Behavior analysts are responsible for practicing within the boundaries of their competence and adhering to the BACB Ethics Code for Behavior Analysts.