ChatGPT for BCBAs: Practical Use Cases, Ethical Boundaries, and Responsible AI Integration

Source & Transformation

This guide draws in part from “ChatGPT: Practical Use Cases for the Everyday BCBA” by Mellanie Page (BehaviorLive), and extends it with peer-reviewed research from our library of 27,900+ ABA research articles. Citations, clinical framing, and cross-links below are synthesized by Behaviorist Book Club.

View the original presentation →
In This Guide
  1. Overview & Clinical Significance
  2. Background & Context
  3. Clinical Implications
  4. Ethical Considerations
  5. Assessment & Decision-Making
  6. What This Means for Your Practice

Overview & Clinical Significance

Generative artificial intelligence tools like ChatGPT have rapidly entered professional workflows across healthcare, education, and human services, and applied behavior analysis is no exception. BCBAs face expanding professional demands that extend well beyond direct clinical work, encompassing documentation, supervision preparation, caregiver training materials, data interpretation narratives, and organizational tasks. The question is no longer whether AI tools will be used in ABA practice but how they can be used responsibly, effectively, and ethically.

The clinical significance of this topic lies in its dual nature: AI tools offer genuine potential to reduce administrative burden and improve the efficiency of non-clinical tasks, while simultaneously introducing risks related to data privacy, clinical accuracy, professional competence, and ethical practice. A BCBA who uses ChatGPT to draft a parent-friendly summary of assessment findings may save valuable time that can be redirected to direct clinical activities. However, that same BCBA must ensure that the AI-generated content is clinically accurate, that no protected health information was entered into the AI system, and that the final product reflects their own professional judgment rather than an uncritical adoption of AI-generated text.

Mellanie Page's workshop addresses this tension by providing practical guidance grounded in real-world BCBA workflows. Rather than discussing AI in abstract terms, the course focuses on specific use cases: planning supervision meetings, streamlining caregiver communication, building training resources, and enhancing visual supports. This specificity is valuable because it helps practitioners understand exactly where AI can add value and where its limitations require human oversight.

The issue of burnout is particularly relevant. BCBAs consistently report that administrative demands consume a disproportionate amount of their professional time, reducing their availability for the clinical activities that require their specialized expertise and that they find most professionally fulfilling. If AI tools can genuinely reduce the time spent on tasks like drafting templates, organizing supervision agendas, creating training materials, and generating first drafts of reports, the resulting time savings could meaningfully improve both practitioner wellbeing and client access to direct clinical services.

However, the enthusiasm for AI-driven efficiency must be tempered by careful attention to what AI tools cannot do. They cannot exercise clinical judgment, maintain confidentiality without deliberate safeguards by the user, produce reliably accurate technical content without expert review, or replace the nuanced professional reasoning that defines competent BCBA practice. The clinical significance of this course lies in helping practitioners navigate between these opportunities and limitations.

Your CEUs are scattered everywhere. Between what you earn here, your employer, conferences, and other providers, it adds up fast. Upload any certificate and always know where you stand.
Try Free for 30 Days

Background & Context

The emergence of large language models like ChatGPT represents a paradigm shift in how professionals interact with technology. Unlike previous software tools that performed specific, predefined functions (spreadsheet calculations, database queries, word processing), generative AI produces novel text based on patterns learned from vast training datasets. This capability makes it remarkably versatile but also fundamentally different from tools that behavior analysts are accustomed to using.

The context for AI use in ABA practice includes several factors. First, the field has experienced rapid growth in demand for services, particularly in autism-related ABA, without commensurate growth in the workforce. This supply-demand mismatch creates pressure on existing practitioners to serve more clients while maintaining quality, making efficiency tools particularly attractive.

Second, documentation requirements have increased as insurance coverage for ABA has expanded. BCBAs now spend significant time producing authorization requests, progress reports, treatment plans, and session notes that must satisfy both clinical standards and managed care requirements. AI tools that can assist with first drafts, formatting, or template creation for these documents address a real pain point in daily practice.

Third, the supervision demands on BCBAs have grown as the number of RBTs and trainees has expanded. Preparing individualized supervision agendas, creating training materials, and developing feedback summaries all consume time that AI tools might help reduce.

Fourth, the technology itself has matured rapidly. Early language models produced text that was frequently inaccurate or incoherent. Current models produce text that is often impressively fluent yet still contains occasional subtle errors that require expert review to detect. This improvement makes the tools more useful but also more dangerous, because the apparent quality of the output can create false confidence in its accuracy.

The regulatory and ethical landscape for AI in healthcare is evolving but remains largely uncodified. The BACB has not issued specific guidance on the use of AI tools in ABA practice, meaning that practitioners must apply existing ethical principles to a novel technology. HIPAA regulations governing protected health information apply to AI tools just as they apply to any other data processing system, meaning that entering client information into a commercial AI platform raises significant privacy concerns.

The use of AI in professional practice also raises questions about professional identity and competence. If a BCBA uses AI to generate the first draft of a behavior intervention plan, whose professional judgment does that plan reflect? The answer depends entirely on the degree to which the BCBA reviews, modifies, and takes responsibility for the final product. AI-generated content that is adopted without critical review does not reflect the BCBA's professional judgment, regardless of whose name appears on the document.

Clinical Implications

The practical integration of AI tools into BCBA workflows has clinical implications that depend heavily on how the tools are used. Responsible use can enhance efficiency and quality. Irresponsible use can compromise accuracy, confidentiality, and professional competence.

For supervision planning, AI can assist by generating structured agendas based on the supervisee's current goals, suggesting discussion topics related to specific task list items, creating case study scenarios for skill development, and drafting competency evaluation templates. The clinical benefit is that supervision sessions become more intentional and comprehensive. The clinical risk is minimal if the BCBA reviews the generated content against their knowledge of the supervisee's actual needs and modifies it accordingly.

For caregiver communication, AI can help draft parent-friendly explanations of assessment findings, create behavior plan summaries in accessible language, develop FAQ documents for common caregiver questions, and generate home programming guides. The clinical benefit is that caregivers receive clearer, more consistent communication, which supports treatment integrity at home. The clinical risk is that AI-generated content may contain inaccuracies or may not adequately reflect the specific client's situation. Every AI-generated communication must be reviewed by the BCBA for clinical accuracy and individualization before being shared with caregivers.

For training materials, AI can assist with creating RBT training presentations, developing competency assessment rubrics, generating role-play scenarios for behavioral skills training, and drafting procedural guides for specific interventions. The clinical benefit is that training materials are produced more quickly, allowing BCBAs to devote more time to the actual delivery of training. The clinical risk is that AI-generated training content may contain errors in behavioral terminology or procedural descriptions that could lead to implementation errors if not caught and corrected.

For documentation, AI can help with structuring progress notes, generating first drafts of treatment plan sections, formatting data summaries for reports, and creating template language for common documentation needs. The clinical benefit is reduced documentation time and more consistent formatting. The clinical risk is significant: AI-generated clinical documentation may contain fabricated details, incorrect technical language, or generic content that does not reflect the specific client's situation. Clinical documentation must always represent the BCBA's actual observations and professional judgment.

The most critical clinical implication is the absolute necessity of human review. AI tools are pattern-matching systems that produce plausible-sounding text, not reasoning systems that understand behavior analysis. They can and do generate content that sounds authoritative but is clinically incorrect. A BCBA who uses AI-generated content without thorough review is outsourcing their professional judgment to a system that has no clinical competence, which is both clinically dangerous and ethically problematic.

FREE CEUs

Get CEUs on This Topic — Free

The ABA Clubhouse has 60+ on-demand CEUs including ethics, supervision, and clinical topics like this one. Plus a new live CEU every Wednesday.

60+ on-demand CEUs (ethics, supervision, general)
New live CEU every Wednesday
Community of 500+ BCBAs
100% free to join
Join The ABA Clubhouse — Free →

Ethical Considerations

The ethical use of AI tools in ABA practice is governed by existing standards in the BACB Ethics Code (2022) that apply to all professional activities, even those involving novel technology. Several standards are particularly relevant.

Code 2.03 (Confidentiality) is perhaps the most immediately critical. Commercial AI platforms like ChatGPT process user inputs on external servers and, depending on the platform's privacy policies and settings, may retain or use that data for model training. Entering any protected health information (client names, dates of birth, diagnostic information, behavioral data, or any combination that could identify an individual) into a commercial AI platform constitutes a potential breach of confidentiality. BCBAs must use de-identified or hypothetical information when interacting with AI tools, or use enterprise-grade AI platforms that offer appropriate data protection agreements.

Code 2.01 (Providing Effective Treatment) requires that all services, including documentation and communication, reflect competent professional practice. When AI is used to generate clinical content, the BCBA remains responsible for the accuracy and appropriateness of that content. Using AI-generated treatment plans, assessment reports, or progress notes without thorough review and modification would not meet this standard because the AI has no understanding of the individual client's needs.

Code 1.04 (Integrity) requires honesty and transparency in professional activities. There is an emerging question about whether BCBAs should disclose to clients, organizations, or insurance companies when AI has been used to assist with clinical documentation. While there is no specific standard requiring such disclosure currently, the spirit of integrity suggests that transparency about AI involvement in the clinical process is appropriate, particularly if the AI's contribution is substantial.

Code 1.05 (Practicing Within One's Scope of Competence) has an important corollary here: BCBAs must be competent to evaluate the accuracy of AI-generated content in their area of practice. If a BCBA uses AI to generate content about a topic they do not fully understand (for example, generating a functional analysis protocol for a population they have not served), they may be unable to detect errors in the AI output, effectively allowing the AI to practice beyond the BCBA's competence.

Code 2.13 (Accuracy in Billing and Reporting) requires that all representations to third parties be accurate. AI-generated content that is submitted to insurance companies, regulatory bodies, or other entities must accurately reflect the client's actual situation. Generic or fabricated content, even if generated by AI and superficially plausible, does not meet this standard.

Code 4.06 (Providing Supervision and Training) is relevant when BCBAs use AI to create supervision or training materials. The supervisor remains responsible for the quality and accuracy of those materials. Using AI-generated training content that contains errors and then delivering that training to supervisees would constitute a failure of supervisory responsibility.

A broader ethical consideration involves the question of where the line falls between using a tool to enhance productivity and using a tool to avoid doing one's job. AI that helps a BCBA organize their thoughts, structure a document, or brainstorm ideas is a productivity tool. AI that generates clinical documents that the BCBA signs without meaningful contribution or review is a substitute for professional practice, which raises fundamental questions about competence and integrity.

Assessment & Decision-Making

Deciding how to integrate AI tools into your practice requires a systematic assessment of your current workflow, the specific tasks where AI might add value, the risks associated with each potential use, and the safeguards needed to mitigate those risks.

Begin by mapping your weekly time allocation across different professional activities. Identify the tasks that consume the most time, that are most amenable to AI assistance, and that carry the lowest clinical risk if AI-generated content contains errors. Administrative and organizational tasks (scheduling, agenda creation, template development) are generally low-risk and high-value targets for AI assistance. Clinical documentation and treatment planning are higher-risk applications that require more rigorous human oversight.

For each potential AI application, conduct a risk assessment that considers three factors. First, what is the consequence if the AI generates inaccurate content? For a supervision agenda, the consequence is minimal because the BCBA will be present to adjust the agenda in real time. For a treatment plan, the consequence could be significant if inaccurate content guides clinical decisions. Second, can you reliably detect errors in the AI's output? If the task involves your core area of expertise, your ability to detect errors is high. If it involves peripheral knowledge, your error detection may be less reliable. Third, will protected information need to be included in the prompt? If so, additional safeguards are required.
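The three-factor screen above can be sketched as a simple triage function. This is an illustrative sketch only: the factor names, labels, and decision rules are assumptions for demonstration, not an established instrument.

```python
# Hedged sketch of the three-factor risk screen described above.
# The thresholds and returned labels are illustrative assumptions.

def ai_use_risk(consequence_of_error: str,  # "low" or "high"
                can_detect_errors: bool,
                needs_phi_in_prompt: bool) -> str:
    """Return a rough triage label for a proposed AI use case."""
    if needs_phi_in_prompt:
        return "do not use without de-identification and data safeguards"
    if consequence_of_error == "high" and not can_detect_errors:
        return "avoid: outside reliable review competence"
    if consequence_of_error == "high":
        return "use with rigorous line-by-line review"
    return "low risk: review and adapt before use"

# A supervision agenda: low consequence, reviewable, no PHI needed.
print(ai_use_risk("low", True, False))
# A treatment plan draft: high consequence, reviewable, no PHI.
print(ai_use_risk("high", True, False))
```

The point of formalizing the screen, even roughly, is that the PHI question gates everything else: no efficiency gain justifies entering identifiable client data into a commercial platform.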

Develop a personal AI use policy that specifies which tasks you will and will not use AI for, what privacy safeguards you will implement, what review process you will follow for all AI-generated content, and how you will document your use of AI tools. This policy should be informed by your organization's policies (if they exist) and by the ethical standards discussed above.

When crafting prompts for AI tools, several strategies improve the quality and safety of the output. Use de-identified or hypothetical scenarios rather than real client information. Provide context about your role and audience ("I am a BCBA creating a parent-friendly handout about..."). Be specific about what you need ("generate a structured agenda for a supervision meeting focusing on...") rather than vague ("help me with supervision"). Request output in a specific format ("provide this as a bulleted list with..."). Always specify that the output should not include citations, references, or claims about specific research unless you can verify them, as AI tools frequently fabricate academic references.
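The prompting strategies above can be captured in a reusable template. The field names and wording below are illustrative assumptions, not a prescribed format; the structure simply bakes in the role/audience context, task specificity, output format, de-identification, and no-citations rules described in the paragraph.

```python
# Illustrative prompt template following the strategies above.
# Field names and boilerplate wording are assumptions, not a standard.

def build_prompt(role: str, audience: str, task: str, output_format: str) -> str:
    """Assemble a specific, de-identified prompt from labeled components."""
    return (
        f"I am a {role} creating material for {audience}. "
        f"{task} "
        f"Provide the output as {output_format}. "
        "Use only a hypothetical, de-identified scenario. "
        "Do not include citations, references, or claims about specific research."
    )

prompt = build_prompt(
    role="BCBA",
    audience="caregivers of a young child",
    task="Draft a plain-language explanation of positive reinforcement.",
    output_format="a bulleted list with a one-sentence summary",
)
print(prompt)
```

Keeping the safety clauses in the template, rather than retyping them each time, makes it harder to forget them on a busy day.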

Assess the quality of each AI output against your professional knowledge before using it. Read every word with a critical eye. Modify content to reflect your actual professional judgment and the specific situation. Add clinical detail that only you can provide. Remove generic or inaccurate content. The final product should represent your professional work enhanced by AI efficiency, not AI work approved by your signature.

What This Means for Your Practice

Start with low-risk, high-value applications. Use AI to generate supervision meeting agendas, create template structures for common documents, brainstorm ideas for caregiver training topics, or draft parent-friendly explanations of behavioral concepts. As you develop confidence in managing AI's limitations, you can gradually expand to more complex applications while maintaining rigorous review processes.

Establish an absolute rule about confidentiality: never enter identifiable client information into a commercial AI platform. Develop a habit of creating de-identified or hypothetical versions of scenarios before using AI. If you need AI assistance with a specific clinical situation, describe the situation in general terms without any identifying details.
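One way to build the de-identification habit is to run every scenario description through a substitution pass before it goes anywhere near a prompt. The sketch below is illustrative only: the identifiers and placeholders are hypothetical, and a regex pass alone cannot guarantee that no protected health information remains, so human review of the redacted text is still required.

```python
import re

# Hypothetical mapping from identifiers to neutral placeholders.
# Regex redaction is a first pass, not a guarantee of de-identification.
REPLACEMENTS = {
    r"\bJordan\b": "the client",               # client name (hypothetical)
    r"\bSunrise Elementary\b": "the school",   # setting (hypothetical)
    r"\b\d{1,2}/\d{1,2}/\d{4}\b": "[date]",    # dates such as 3/14/2024
}

def redact(text: str) -> str:
    """Replace known identifiers with neutral placeholders before prompting."""
    for pattern, placeholder in REPLACEMENTS.items():
        text = re.sub(pattern, placeholder, text)
    return text

note = "On 3/14/2024, Jordan eloped from Sunrise Elementary during transitions."
print(redact(note))
# -> "On [date], the client eloped from the school during transitions."
```

Reading the redacted output before pasting it into an AI tool is the step that actually protects confidentiality; the script only makes the habit easier to keep.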

Build a review habit. Every piece of AI-generated content should pass through your professional filter before it reaches anyone else. Read it carefully, check technical accuracy, ensure it reflects the specific situation (not just generic best practices), and modify it until it represents your professional judgment. If you cannot improve upon the AI's output, consider whether you fully understand the content well enough to take professional responsibility for it.

Stay current with developments in both AI technology and professional guidance. The capabilities and limitations of AI tools are changing rapidly, and the BACB and other professional organizations may issue specific guidance on AI use in the future. Position yourself as an informed, responsible adopter of technology rather than either an uncritical enthusiast or a reflexive skeptic.

Earn CEU Credit on This Topic

Ready to go deeper? This course covers this topic in detail with structured learning objectives and CEU credit.

ChatGPT: Practical Use Cases for the Everyday BCBA — Mellanie Page · 1.5 BACB Ethics CEUs · $14.99

Take This Course →

Explore the Evidence

We extended this guide with research from our library — dig into the peer-reviewed studies behind the topic, in plain-English summaries written for BCBAs.

Social Cognition and Coherence Testing

280 research articles with practitioner takeaways

View Research →

Measurement and Evidence Quality

279 research articles with practitioner takeaways

View Research →

Symptom Screening and Profile Matching

258 research articles with practitioner takeaways

View Research →
CEU Buddy

No scramble. No surprises.

You earn CEUs from a dozen different places. Upload any certificate — from here, your employer, conferences, wherever — and always know exactly where you stand. Learning, Ethics, Supervision, all handled.

Upload a certificate, everything else is automatic
Works with any ACE provider
$7/mo to protect $1,000+ in earned CEUs
Try It Free for 30 Days →

No credit card required. Cancel anytime.

Clinical Disclaimer

All behavior-analytic intervention is individualized. The information on this page is for educational purposes and does not constitute clinical advice. Treatment decisions should be informed by the best available published research and individualized assessment, and made with the informed consent of the client or their legal guardian. Behavior analysts are responsible for practicing within the boundaries of their competence and adhering to the BACB Ethics Code for Behavior Analysts.
