By Matt Harrington, BCBA · Behaviorist Book Club · Research-backed answers for behavior analysts

Frequently Asked Questions About Using Generative AI in Behavior-Analytic Practice

Questions Covered
  1. Can I use AI to write behavior intervention plans?
  2. Is it an ethical violation to use AI for clinical documentation?
  3. How do I protect client confidentiality when using AI tools?
  4. Can AI replace the need for functional behavior assessment?
  5. What are the most common ethical pitfalls when behavior analysts use AI?
  6. How does generative AI actually produce its output?
  7. Should I disclose to clients when I use AI in my practice?
  8. Can AI help with data analysis in behavior-analytic practice?
  9. How can I learn to use AI tools more effectively?
  10. What should I do if my organization has no policy on AI use?

1. Can I use AI to write behavior intervention plans?

You can use AI as a drafting aid for behavior intervention plans, but you must not rely on AI to make clinical decisions. AI-generated plans should be treated as rough drafts that require substantial review and editing based on your clinical assessment of the specific client. Verify that all recommendations are appropriate for the individual, consistent with the assessment data, and supported by the evidence base. The behavior analyst, not the AI, is responsible for the clinical quality and appropriateness of the plan.

2. Is it an ethical violation to use AI for clinical documentation?

Using AI as a documentation aid is not inherently an ethical violation, but it can become one if the documentation is not reviewed for accuracy or if client confidentiality is compromised. Every piece of clinical documentation must accurately reflect the services provided and the practitioner's professional judgment. AI-generated documentation that includes fabricated information, describes services not rendered, or contains inappropriate recommendations would violate documentation and truthfulness standards. Always review and edit AI-generated documentation before finalizing it.

3. How do I protect client confidentiality when using AI tools?

Never input client-identifying information into AI tools that transmit data to external servers without ensuring adequate data protection. De-identify all client information before using cloud-based AI tools. Review the privacy policies and terms of service of any AI tool to understand how your data is handled. Consider using locally hosted AI models for sensitive clinical work. When in doubt, err on the side of not sharing client information with AI systems. The behavior analyst is responsible for maintaining confidentiality regardless of what tools are used.
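As a minimal illustration of the de-identification step, the sketch below strips a few common identifier formats from a note before it is sent anywhere. The patterns, function name, and sample note are all invented for this example; real clinical records contain many more identifier types (addresses, facility names, dates of birth) and would need a far more thorough process, ideally a validated de-identification tool.

```python
import re

# Minimal de-identification sketch: replace obvious identifiers with
# placeholder tokens before text reaches any external AI tool.
# These patterns are illustrative only -- real clinical records require
# a much more comprehensive approach.
PATTERNS = {
    "[DATE]": r"\b\d{1,2}/\d{1,2}/\d{2,4}\b",   # e.g. 3/14/2024
    "[PHONE]": r"\b\d{3}[-.]\d{3}[-.]\d{4}\b",  # e.g. 555-123-4567
    "[EMAIL]": r"\b[\w.+-]+@[\w-]+\.[\w.]+\b",
}

def deidentify(text: str, client_names: list[str]) -> str:
    """Replace known client names, then common identifier formats."""
    for name in client_names:
        text = re.sub(re.escape(name), "[CLIENT]", text, flags=re.IGNORECASE)
    for placeholder, pattern in PATTERNS.items():
        text = re.sub(pattern, placeholder, text)
    return text

note = "Jordan Smith attended the 3/14/2024 session; call 555-123-4567."
print(deidentify(note, ["Jordan Smith"]))
# -> [CLIENT] attended the [DATE] session; call [PHONE].
```

Even with a script like this, the practitioner remains responsible for confirming that no identifying information survives before anything is shared.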

4. Can AI replace the need for functional behavior assessment?

No. Functional behavior assessment requires direct observation, environmental analysis, and clinical judgment that AI cannot provide. AI can assist with reviewing relevant literature or generating interview questions, but it cannot observe behavior in context, identify maintaining variables, or make clinical determinations about function. Using AI to generate a functional assessment summary without conducting the actual assessment would be a serious ethical violation. The assessment must be conducted by a qualified behavior analyst using established methods.

5. What are the most common ethical pitfalls when behavior analysts use AI?

The most common pitfalls include: accepting AI-generated content without adequate review, resulting in inaccurate clinical documentation; inputting client-identifying information into AI tools without adequate data protection; representing AI-generated work as one's own original analysis; using AI-generated citations or references without verifying their accuracy (AI frequently fabricates realistic-looking but non-existent references); and using AI as a shortcut that undermines the thoroughness of clinical assessment and decision-making.

6. How does generative AI actually produce its output?

Generative AI models, particularly large language models, produce output through statistical prediction. The model has been trained on vast amounts of text data and has learned patterns in how words, sentences, and concepts relate to each other. Given an input prompt, the model predicts the most likely sequence of text to follow, one token at a time. The output sounds fluent and knowledgeable because the training data includes fluent, knowledgeable text. However, the model does not understand the content it produces and can generate confident-sounding output that is factually wrong.
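The token-by-token prediction loop can be sketched with a toy bigram model: it counts which word follows which in a tiny "training corpus," then generates text by sampling each next word in proportion to those counts. The corpus and function names here are invented stand-ins; a real large language model does the same kind of statistical prediction with billions of parameters and far richer context.

```python
import random
from collections import defaultdict

# Toy illustration of next-token prediction. Like a real LLM (at vastly
# smaller scale), it picks each next word from statistics learned from
# training text -- with no understanding of what the words mean.
corpus = (
    "the client engaged in target behavior the client met the goal "
    "the analyst reviewed the data the analyst revised the plan"
).split()

# Count how often each word follows each other word.
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def generate(start: str, length: int, seed: int = 0) -> list[str]:
    """Generate text one token at a time, sampling each continuation
    in proportion to how often it appeared in training."""
    rng = random.Random(seed)
    tokens = [start]
    for _ in range(length):
        options = counts[tokens[-1]]
        if not options:
            break
        words = list(options)
        weights = [options[w] for w in words]
        tokens.append(rng.choices(words, weights=weights)[0])
    return tokens

print(" ".join(generate("the", 6)))
```

The output reads fluently because every transition was seen in training, yet the model has no notion of whether what it produces is true. That same property, scaled up, is why LLM output can be confident and wrong at once.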

7. Should I disclose to clients when I use AI in my practice?

Transparency about AI use is an emerging professional norm and aligns with the ethical principles of truthfulness and informed consent. At minimum, clients should be informed about the nature of any AI tools used in their care and the safeguards in place to protect their information. The specifics of disclosure may evolve as professional organizations develop formal guidelines. When in doubt, err on the side of transparency. Disclosing AI use demonstrates professional integrity and helps build trust with clients.

8. Can AI help with data analysis in behavior-analytic practice?

AI can assist with certain aspects of data analysis, such as creating visual displays, performing statistical computations, or identifying patterns in large datasets. However, the behavior analyst must understand the analysis being performed and be able to evaluate whether the results are valid. AI can produce convincing-looking but methodologically flawed analyses. Use AI as a computational aid, not as a substitute for your own understanding of the data. Visual analysis of single-case data, the cornerstone of behavior-analytic decision-making, still requires the trained eye of the practitioner.
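As one small example of AI-assisted computation the analyst can still verify by hand, the sketch below summarizes hypothetical A-B (baseline/intervention) frequency data by phase. The session values and phase labels are invented for illustration; the point is that descriptive numbers like these supplement, and never replace, visual analysis.

```python
from statistics import mean

# Hypothetical single-case A-B data: frequency of target behavior
# across 10 sessions, with a 5-session baseline (A) and a
# 5-session intervention phase (B). Values are invented.
values = [8, 9, 7, 8, 9, 4, 3, 3, 2, 2]
phases = ["A"] * 5 + ["B"] * 5

def phase_level(phase: str) -> float:
    """Mean level of the target behavior within one phase."""
    return mean(v for v, p in zip(values, phases) if p == phase)

baseline = phase_level("A")
intervention = phase_level("B")
print(f"Baseline mean: {baseline:.1f}")        # 8.2
print(f"Intervention mean: {intervention:.1f}")  # 2.8
print(f"Level change: {baseline - intervention:.1f}")
```

A summary like this is easy to check against the raw data, which is exactly the standard any AI-generated analysis should meet before it informs a clinical decision.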

9. How can I learn to use AI tools more effectively?

Effective AI use is a skill that improves with practice. Start by learning to write clear, specific prompts that guide the AI toward useful output. Experiment with different prompting strategies and evaluate the results critically. Stay current with developments in AI technology through professional continuing education, technology publications, and peer discussion. Join professional communities where behavior analysts discuss AI applications. Most importantly, approach AI with the same data-based, skeptical mindset you apply to any clinical tool.

10. What should I do if my organization has no policy on AI use?

If your organization lacks an AI policy, take the initiative to raise the issue with leadership. In the meantime, develop and follow a personal policy based on the ethical principles outlined in the BACB Ethics Code. Prioritize client confidentiality, documentation accuracy, and professional integrity. Document your AI use practices so that you can demonstrate responsible use if questions arise. Consider proposing a formal organizational policy that addresses data privacy, quality assurance, disclosure, and the specific clinical applications for which AI may or may not be used.

FREE CEUs

Get CEUs on This Topic — Free

The ABA Clubhouse has 60+ on-demand CEUs including ethics, supervision, and clinical topics like this one. Plus a new live CEU every Wednesday.

60+ on-demand CEUs (ethics, supervision, general)
New live CEU every Wednesday
Community of 500+ BCBAs
100% free to join
Join The ABA Clubhouse — Free →

Earn CEU Credit on This Topic

Ready to go deeper? This course covers this topic with structured learning objectives and CEU credit.

Sometimes the Question IS Whether Machines Think: An AI Tutorial for Behavior Analysts — Kaitlynn Gokey · 1 BACB Ethics CEU · $20

Take This Course →

Related Topics

CEU Course: Sometimes the Question IS Whether Machines Think: An AI Tutorial for Behavior Analysts

1 BACB Ethics CEU · $20 · BehaviorLive

Guide: Sometimes the Question IS Whether Machines Think: An AI Tutorial for Behavior Analysts — What Every BCBA Needs to Know

Research-backed educational guide with practice recommendations

Decision Guide: Comparing Approaches

Side-by-side comparison with clinical decision framework

Clinical Disclaimer

All behavior-analytic intervention is individualized. The information on this page is for educational purposes and does not constitute clinical advice. Treatment decisions should be informed by the best available published research and individualized assessment, and made with the informed consent of the client or their legal guardian. Behavior analysts are responsible for practicing within the boundaries of their competence and adhering to the BACB Ethics Code for Behavior Analysts.
