A Comprehensive Guide to Ethical AI Prompting for Behavior Analysts Using the D.A.N.C.E. Method

Source & Transformation

This guide draws in part from “Prompting with ABA: The D.A.N.C.E.™ Method” (Do Better Collective), and extends it with peer-reviewed research from our library of 27,900+ ABA research articles. Citations, clinical framing, and cross-links below are synthesized by Behaviorist Book Club.

View the original presentation →
In This Guide
  1. Overview & Clinical Significance
  2. Background & Context
  3. Clinical Implications
  4. Ethical Considerations
  5. Assessment & Decision-Making
  6. What This Means for Your Practice

Overview & Clinical Significance

Artificial intelligence tools, particularly large language models like ChatGPT, have rapidly entered the professional landscape of behavior analysis, offering unprecedented capabilities for generating clinical materials, supporting decision-making, developing parent training resources, and streamlining administrative tasks. However, the integration of AI into behavior analytic practice raises significant questions about ethical use, professional responsibility, accuracy, bias, and the appropriate role of automated tools in clinical decision-making.

The D.A.N.C.E. Method represents a structured, values-based approach to AI prompting designed specifically for behavior analysts. Rather than treating AI as a tool to be used without critical examination, the D.A.N.C.E. Method provides a framework for iterative interaction with AI that maintains the behavior analyst's professional judgment, ethical obligations, and clinical expertise at the center of the process.

The clinical significance of this topic is substantial and growing. Behavior analysts are already using AI tools in their practice, whether or not they have received formal guidance on doing so. Without a structured framework, practitioners may over-rely on AI-generated content without adequate critical evaluation, use AI in ways that compromise client confidentiality, accept AI outputs that contain inaccuracies or biases, or fail to recognize the limitations of AI tools in clinical contexts.

The risks of unstructured AI use in behavior analysis are real. AI language models can generate plausible-sounding but factually incorrect information, a phenomenon known as hallucination. They can produce content that reflects systematic biases present in their training data. They cannot access current research, verify clinical facts, or account for the specific context of an individual client's situation. And they can create a false sense of efficiency that leads practitioners to skip critical thinking steps that are essential for ethical, effective practice.

At the same time, the potential benefits of thoughtful AI integration are significant. AI can serve as a brainstorming partner, helping behavior analysts generate ideas for intervention strategies, assessment approaches, or parent training materials that they can then evaluate and refine using their clinical expertise. AI can assist with time-consuming tasks like drafting session notes, creating visual supports, or generating data summary templates, freeing practitioners to focus more time on direct clinical work. AI can help translate complex behavioral concepts into accessible language for parents, teachers, and other stakeholders.

The D.A.N.C.E. Method positions AI as a collaborative assistant rather than an autonomous decision-maker. The behavior analyst drives the process, providing context, evaluating outputs, and making all clinical decisions. The AI contributes raw material that the behavior analyst shapes into clinically appropriate, ethically sound products. This division of labor leverages the strengths of both human expertise and machine capability while mitigating the risks of each.


Background & Context

The emergence of AI tools in professional practice represents one of the most significant technological shifts behavior analysts have encountered. While the field has long incorporated technology into service delivery, from video modeling to telehealth platforms, the introduction of generative AI presents qualitatively different challenges and opportunities.

Generative AI tools like ChatGPT work by predicting likely sequences of text based on patterns learned from vast amounts of training data. They do not understand the content they generate in the way humans do. They cannot verify facts, exercise clinical judgment, or account for context that was not included in the prompt. Understanding these fundamental limitations is essential for ethical use.
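As a toy illustration of this prediction principle, the sketch below "trains" on a few words and returns the most frequent next word it has seen. This is an assumption-laden simplification for teaching purposes only; real models use neural networks over enormous token vocabularies, not word counts, but the core point holds either way.

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, which words follow it in the training text."""
    words = text.lower().split()
    following = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1
    return following

def predict_next(model, word):
    """Return the most frequent observed continuation, or None if the word is unseen."""
    counts = model.get(word.lower())
    return counts.most_common(1)[0][0] if counts else None

# Toy "training data" invented for this illustration.
corpus = ("reinforcement increases behavior and punishment decreases "
          "behavior and reinforcement increases responding")
model = train_bigrams(corpus)
print(predict_next(model, "reinforcement"))  # "increases": the likeliest continuation, not a verified fact
```

The point of the toy: the model emits whatever continuation was most common in its data, with no mechanism for checking whether the resulting sentence is true.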

The D.A.N.C.E. Method addresses these limitations through a structured prompting process. While the specific steps of the method are proprietary to the training, the general approach reflects best practices in human-AI interaction. It emphasizes defining the context and constraints for the AI clearly, iterating through multiple rounds of prompting to refine outputs, maintaining the human as the final decision-maker, and evaluating all AI-generated content against professional standards before use.

The background for this approach includes several key developments in the field. First, the rapid adoption of AI tools by behavior analysts in practice, driven by the promise of efficiency gains and the availability of tools like ChatGPT, has outpaced the development of professional guidance on appropriate use. Second, the growing body of research on AI limitations, including hallucination, bias, and inconsistency, has highlighted the need for structured frameworks that prevent these limitations from affecting clinical practice. Third, the BACB Ethics Code (2022), while not specifically addressing AI, provides ethical principles that clearly apply to any tool used in professional practice.

The concept of iterative prompting, central to the D.A.N.C.E. Method, reflects the understanding that a single prompt rarely produces an optimal result. Just as a behavior analyst would not accept the first draft of a behavior intervention plan without review and refinement, they should not accept the first output of an AI tool without critical evaluation and iterative improvement. Each round of interaction provides an opportunity to add context, correct inaccuracies, adjust tone, and ensure alignment with professional standards.

The values-based component of the method is equally important. By anchoring the prompting process in the values and ethical principles of behavior analysis, the D.A.N.C.E. Method ensures that the practitioner's professional identity remains central to the interaction. The AI is a tool in the service of these values, not a replacement for them.

Clinical Implications

The clinical implications of AI integration into behavior analytic practice span the full range of professional activities. Understanding where AI can add value, where it presents risks, and how structured prompting mitigates those risks helps practitioners make informed decisions about when and how to use these tools.

In clinical documentation, AI can assist with drafting session notes, progress reports, treatment plans, and correspondence. The clinical implication is that while AI can produce efficient first drafts, the behavior analyst must review all documentation for accuracy, appropriateness, and alignment with the specific client's situation. AI-generated documentation that contains inaccuracies, misrepresents client progress, or uses language that does not reflect the behavior analyst's clinical observations could compromise the legal and clinical record.

In resource development, AI can help create parent training materials, visual supports, social stories, data collection tools, and educational handouts. The clinical benefit is significant time savings, but the risk is that AI-generated materials may contain inaccurate information about behavioral principles, use language that is not developmentally appropriate for the audience, or include recommendations that do not align with the client's specific program. All AI-generated clinical materials must be reviewed by the behavior analyst before distribution.

In clinical decision-making, AI can serve as a brainstorming partner for generating hypotheses about behavior function, identifying potential intervention strategies, or considering variables that the behavior analyst might not have initially considered. This is one of AI's most valuable applications because it can help overcome the confirmation bias and anchoring effects that affect all human decision-makers. However, the critical clinical implication is that AI recommendations must never be implemented without independent verification by the behavior analyst. The AI does not know the client, cannot observe their behavior, and cannot account for contextual variables that are essential for accurate clinical decision-making.

In professional communication, AI can assist with translating complex behavioral concepts into accessible language for parents, educators, and other professionals. This application can significantly improve the quality of stakeholder communication, but it requires careful review to ensure that the translated language accurately represents the behavioral concepts and does not introduce misunderstandings.

Data privacy is a critical clinical implication that cuts across all applications. When using AI tools, behavior analysts must never input identifiable client information into the system. AI tools typically store and may use input data for training purposes, which means that client information entered into the system may not remain confidential. The structured prompting approach should include protocols for de-identifying all information before it is entered into the AI system.
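A de-identification protocol of the kind described above can be sketched as a simple substitution pass. This is a naive illustration only: the client name, school, and date in the mapping are hypothetical, and real de-identification must catch far more than a hand-maintained list (dates of birth, locations, rare diagnoses) and follow organizational policy and applicable privacy law.

```python
import re

def deidentify(text, replacements):
    """Replace each known identifier with a neutral placeholder (whole-word, case-insensitive)."""
    for identifier, placeholder in replacements.items():
        text = re.sub(rf"\b{re.escape(identifier)}\b", placeholder, text, flags=re.IGNORECASE)
    return text

# Hypothetical identifiers, invented for illustration.
mapping = {
    "Marcus Lee": "[CLIENT]",
    "Lincoln Elementary": "[SCHOOL]",
    "03/14/2024": "[DATE]",
}
note = "Marcus Lee engaged in elopement at Lincoln Elementary on 03/14/2024."
print(deidentify(note, mapping))
# "[CLIENT] engaged in elopement at [SCHOOL] on [DATE]."
```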

The iterative nature of the D.A.N.C.E. Method has specific clinical implications for quality. Each iteration provides an opportunity to increase the specificity, accuracy, and relevance of the AI output. A behavior analyst who uses a single, general prompt will receive a general, potentially generic response. A behavior analyst who uses the iterative process will arrive at a more specific, clinically relevant output that requires less post-generation editing.


Ethical Considerations

The ethical considerations surrounding AI use in behavior analysis are numerous and evolving. While the BACB Ethics Code (2022) does not specifically address artificial intelligence, its principles provide clear guidance for responsible AI integration.

Section 1.06 on competence is directly relevant. Behavior analysts are required to practice within their areas of competence, and this extends to the tools they use. Using AI effectively requires understanding its capabilities, limitations, and potential risks. A behavior analyst who uses AI without understanding how it works, what it can and cannot do, and how to evaluate its outputs is practicing with a tool they are not competent to use. Professional development in AI literacy is becoming an essential component of maintaining competence.

Section 2.01 on prioritizing client welfare establishes that the behavior analyst's primary obligation is to the client. This means that AI should only be used when doing so serves the client's interests, not merely the practitioner's convenience. If using AI to draft a treatment plan produces a lower-quality document than the behavior analyst would have created independently, the convenience of AI does not justify its use in that situation.

Section 2.04 on explaining behavior analytic services requires that clients and families understand the services they receive. If AI is used in the development of treatment materials, communication with families, or clinical decision-making, there may be an obligation to disclose this use, particularly if the family has expectations about the level of personal attention their case receives.

Section 3.01 on behavior analytic assessment requires that assessments be conducted in a manner that is appropriate to the client's specific circumstances. AI cannot conduct assessments. It can generate assessment templates, suggest assessment approaches, or help organize assessment data, but the actual assessment, which requires direct observation, clinical judgment, and contextual interpretation, must be conducted by the behavior analyst.

Data privacy, addressed in Section 2.06, is perhaps the most pressing ethical concern. AI tools operated by third-party companies process and potentially store user inputs. Entering identifiable client information into these systems could constitute a breach of confidentiality. Behavior analysts must develop protocols for de-identifying all information before using AI tools and must be familiar with the data handling policies of the specific AI tools they use.

The risk of overreliance on AI is an ethical concern that connects to multiple Code provisions. When behavior analysts become accustomed to AI-generated content, they may gradually reduce the level of critical evaluation they apply to that content. This drift toward automation complacency can result in accepting inaccurate information, using generic rather than individualized approaches, or missing clinical nuances that would be apparent with more effortful analysis. The D.A.N.C.E. Method's emphasis on iterative, critical interaction is designed to counteract this tendency.

Bias in AI outputs is another significant ethical concern. AI language models reflect the biases present in their training data, which can include racial, cultural, socioeconomic, and diagnostic biases. Behavior analysts must evaluate AI outputs for bias and correct any content that reflects stereotypes, assumptions, or perspectives that do not serve the interests of their clients.

Assessment & Decision-Making

Deciding when and how to use AI in behavior analytic practice requires a structured decision-making framework that balances potential benefits against potential risks. The following assessment process can guide practitioners in making thoughtful choices about AI integration.

The first assessment question is whether the task is appropriate for AI assistance. Tasks that involve generating ideas, drafting text, organizing information, or creating templates are generally appropriate for AI assistance with adequate human review. Tasks that involve clinical judgment, direct assessment, ethical decision-making, or the creation of individualized treatment recommendations based on client-specific data are generally not appropriate for AI assistance, although AI might support components of these tasks.

The second assessment question involves the stakes of the task. For low-stakes tasks such as brainstorming social skills activity ideas or drafting a general parent information handout, the consequences of an AI error are minimal and easily corrected. For high-stakes tasks such as writing a behavior intervention plan for a client with dangerous behavior or drafting a report that will be used in legal proceedings, the consequences of an error are significant, and the level of human review must be correspondingly rigorous.

The third assessment question involves confidentiality. Can the task be completed using AI without entering any identifiable client information? If not, the task is not appropriate for AI assistance unless the information can be fully de-identified while retaining its clinical relevance.

The fourth assessment question involves the practitioner's ability to evaluate the AI output. Do you have sufficient knowledge and expertise to identify errors, inaccuracies, or biases in the AI's output? If you are using AI to generate content in an area where you lack expertise, you cannot adequately evaluate the result, and the risk of accepting inaccurate information increases substantially.
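The four screening questions above can be expressed as a short checklist function. The parameter names and messages below are illustrative, not a standardized instrument; the decision itself always rests with the behavior analyst.

```python
def ai_use_check(task_is_drafting_or_brainstorming, stakes,
                 needs_identifiable_info, can_evaluate_output):
    """Apply the four screening questions in order; return (proceed, rationale)."""
    if not task_is_drafting_or_brainstorming:
        return False, "Task requires clinical judgment; complete it without AI."
    if needs_identifiable_info:
        return False, "Cannot proceed unless the information is fully de-identified."
    if not can_evaluate_output:
        return False, "Outside your competence to verify; risk of accepting errors."
    review = "rigorous line-by-line review" if stakes == "high" else "standard review"
    return True, f"Proceed with {review}."

# Example: brainstorming social skills activities (low stakes, no client data).
print(ai_use_check(True, "low", False, True))
```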

Once the decision to use AI has been made, the iterative prompting process guides the interaction. Start with a prompt that provides clear context about who you are, what you are trying to accomplish, who the audience is, and what constraints apply. Evaluate the first output critically, noting areas of strength and areas that need improvement. Refine the prompt to address the weaknesses, providing additional context, correcting inaccuracies, and specifying the desired changes. Repeat this process until the output meets your professional standards.
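A minimal sketch of the prompt-construction and refinement steps, assuming each round's output is pasted back for revision. The field names and wording are illustrative assumptions, not a prescribed template.

```python
def build_prompt(role, task, audience, constraints):
    """Compose an initial prompt from the context elements named above."""
    lines = [
        f"You are assisting a {role}.",
        f"Task: {task}",
        f"Audience: {audience}",
        "Constraints:",
    ]
    lines += [f"- {c}" for c in constraints]
    return "\n".join(lines)

def refine(previous_output, corrections):
    """Build a follow-up prompt that addresses weaknesses in the last output."""
    fixes = "\n".join(f"- {c}" for c in corrections)
    return f"Revise the draft below. Apply these corrections:\n{fixes}\n\nDraft:\n{previous_output}"

prompt = build_prompt(
    role="Board Certified Behavior Analyst",
    task="Draft a general handout explaining positive reinforcement",
    audience="Parents with no behavioral training",
    constraints=["Plain language, no jargon", "No client-specific details", "One page maximum"],
)
```

Each call to `refine` corresponds to one iteration of the critique-and-revise cycle; the loop ends when the output meets your professional standards, not when the tool stops producing changes.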

After each AI-assisted task, reflect on the process. Did AI add value, or would you have been better served by completing the task independently? Was the iterative refinement process efficient, or did it take longer than direct creation would have? Did the AI output contain any errors or biases that you might have missed without careful review? These reflections inform your ongoing decisions about when and how to use AI in your practice.

Documenting your AI use can also support accountability and continuous improvement. Keeping a log of which tasks you use AI for, how many iterations were required, and what types of corrections you needed to make helps you develop a realistic understanding of AI's strengths and limitations in your specific practice context.
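One way to keep such a log is a plain CSV file. The columns and the example entry below are illustrative assumptions; the sketch writes to an in-memory buffer so it is self-contained, but in practice you would open a file instead.

```python
import csv
import datetime
import io

def log_ai_use(writer, task_type, iterations, corrections_needed):
    """Append one row: date, task type, iteration count, corrections made."""
    writer.writerow([datetime.date.today().isoformat(), task_type,
                     iterations, "; ".join(corrections_needed)])

buffer = io.StringIO()  # stand-in for an open log file
writer = csv.writer(buffer)
writer.writerow(["date", "task_type", "iterations", "corrections"])
log_ai_use(writer, "parent handout draft", 3,
           ["removed jargon", "fixed reinforcement definition"])
```

Reviewing this log periodically shows which task types consistently need few corrections (good AI candidates) and which need many (better done directly).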

What This Means for Your Practice

AI tools are not going away, and their capabilities will continue to expand. Developing a structured, ethical approach to AI integration now positions you to benefit from these tools while maintaining the professional integrity and clinical quality that your clients deserve.

Start by learning how the AI tools you use actually work. Understanding that these tools generate text by predicting likely word sequences rather than by understanding content helps calibrate your expectations and your level of critical evaluation. These tools are remarkably adept at producing plausible text but have no mechanism for ensuring that the text is factually accurate or clinically appropriate.

Develop a personal protocol for AI use that addresses confidentiality, critical evaluation, and documentation. Decide in advance which types of tasks you will and will not use AI for, what de-identification procedures you will follow, and how you will review AI outputs before incorporating them into your practice.
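Such a protocol can be written down as a small, reviewable configuration. Every entry below is a hypothetical example for illustration, not a recommended standard; the value is in deciding these boundaries in advance rather than case by case.

```python
# Hypothetical personal AI-use protocol, expressed as a reviewable config.
AI_PROTOCOL = {
    "permitted_tasks": ["brainstorming", "template drafting", "plain-language translation"],
    "prohibited_tasks": ["assessment interpretation", "treatment decisions", "legal reports"],
    "deidentification": "Replace names, dates, and locations with placeholders before any prompt",
    "review": "Check every output against clinical records and current literature before use",
    "logging": "Record task type, iterations, and corrections after each AI-assisted task",
}

def task_permitted(task):
    """Screen a task against the pre-decided lists; unknown tasks default to prohibited."""
    return task in AI_PROTOCOL["permitted_tasks"]
```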

Practice iterative prompting. Resist the temptation to accept the first output and instead invest in the refinement process. Each iteration improves the output and deepens your understanding of how to communicate effectively with AI tools. Over time, you will develop prompt patterns that consistently produce high-quality outputs for your most common use cases.

Finally, maintain your own clinical skills. AI is a supplement to your expertise, not a replacement for it. The behavior analyst who can independently write a strong behavior intervention plan and chooses to use AI to expedite the process is in a very different position from the behavior analyst who cannot write the plan without AI assistance. Ensure that your reliance on AI enhances your efficiency without eroding your competence.



Clinical Disclaimer

All behavior-analytic intervention is individualized. The information on this page is for educational purposes and does not constitute clinical advice. Treatment decisions should be informed by the best available published research and individualized assessment, and made with the informed consent of the client or their legal guardian. Behavior analysts are responsible for practicing within the boundaries of their competence and adhering to the BACB Ethics Code for Behavior Analysts.
