These answers draw in part from “Prompting with ABA: The D.A.N.C.E.™ Method” (Do Better Collective), and extend it with peer-reviewed research from our library of 27,900+ ABA research articles. Clinical framing, BACB ethics code references, and cross-links below are synthesized by Behaviorist Book Club.
The D.A.N.C.E. Method is a values-based, iterative prompting framework designed specifically for behavior analysts. Unlike general AI prompting, which focuses on getting useful outputs from AI tools, the D.A.N.C.E. Method integrates professional values and ethical considerations into every step of the prompting process. It emphasizes the behavior analyst's role as the decision-maker, the importance of iterative refinement rather than accepting first outputs, and the need to evaluate all AI-generated content against professional and ethical standards. The method provides a structured process that helps practitioners maintain their clinical judgment and professional identity while leveraging AI's capabilities.
AI can assist with drafting components of behavior intervention plans, such as generating intervention strategy descriptions, formatting templates, or suggesting antecedent modifications. However, the clinical content of a behavior intervention plan must be based on the specific client's assessment data, functional analysis results, and individual circumstances, none of which the AI has access to. The behavior analyst must provide the clinical substance and use AI only for support tasks like organizing, formatting, or generating language options. All AI-generated content must be thoroughly reviewed and modified to ensure it accurately reflects the client's specific situation.
The most significant ethical risks include confidentiality breaches when identifiable client information is entered into AI systems, overreliance on AI that leads to reduced critical thinking and clinical judgment, acceptance of inaccurate AI-generated information without adequate verification, bias in AI outputs that reflects stereotypes or assumptions harmful to clients, and the potential for AI efficiency to replace the thoughtful, individualized analysis that clients deserve. Each of these risks can be mitigated through structured protocols, critical evaluation habits, and a clear understanding that AI is a tool to be used under professional supervision, not an autonomous clinical resource.
The primary protection is never entering identifiable client information into AI systems. This includes names, dates of birth, locations, diagnoses combined with other identifying details, and any information that could be used to identify the individual. Before using AI for any clinical task, de-identify all information by replacing names with generic labels, removing specific demographic details, and using general descriptions rather than specific client histories. Be aware that some AI tools store input data and may use it for training purposes. Review the privacy policies of any AI tool you use and select tools that offer the strongest data protection when possible.
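The de-identification step above can be sketched in code. This is a minimal, illustrative pass only: the `deidentify` function, its name list, and its date patterns are assumptions for the example, not a complete safeguard. Regex cannot catch every quasi-identifier (schools, rare diagnoses, neighborhood names), so manual review before anything is sent to an AI tool remains essential.

```python
import re

# Illustrative de-identification pass, run on any text BEFORE it is
# entered into an AI tool. A real workflow would still require manual
# review; patterns like these miss many quasi-identifiers.
def deidentify(text: str, known_names: list[str]) -> str:
    # Replace each known client name with a generic label.
    for i, name in enumerate(known_names, start=1):
        text = re.sub(re.escape(name), f"Client {i}", text, flags=re.IGNORECASE)
    # Redact common date formats such as 03/14/2019 or 2019-03-14.
    text = re.sub(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b", "[DATE]", text)
    text = re.sub(r"\b\d{4}-\d{2}-\d{2}\b", "[DATE]", text)
    return text

note = "Marcus, born 03/14/2019, attends Lincoln Elementary."
print(deidentify(note, ["Marcus"]))
# The school name survives this pass, which is exactly why automated
# scrubbing supplements, rather than replaces, human review.
```

The deliberate gap in the example (the school name passes through untouched) mirrors the point in the text: tooling helps, but the behavior analyst remains responsible for confirming that no identifying detail leaves their control.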
AI hallucination refers to the tendency of language models to generate plausible-sounding but factually incorrect information. The AI may cite nonexistent research, describe procedures that do not exist, state incorrect statistics, or provide recommendations that sound reasonable but are not supported by evidence. Behavior analysts should be particularly concerned because hallucinated clinical information, if accepted without verification, could lead to inappropriate assessment procedures, ineffective interventions, or misinformation shared with families and other professionals. The iterative prompting approach helps mitigate this risk by providing multiple opportunities to identify and correct inaccuracies.
Yes, with appropriate safeguards. AI can be a valuable tool for creating accessible parent training materials, translating behavioral concepts into plain language, and generating visual supports or activity suggestions. The ethical requirements are that all materials be reviewed by the behavior analyst for accuracy and appropriateness before distribution, that the materials be individualized to the specific family's needs rather than used generically, that the materials be free of inaccuracies or misleading information, and that the family be informed, if they ask, about how the materials were developed. The time savings from AI-assisted material development can be reinvested in direct interaction with families.
AI can support clinical decision-making by serving as a brainstorming partner. For example, you might describe a behavioral scenario to the AI and ask it to suggest potential maintaining variables or intervention approaches. The AI may generate ideas you had not considered, helping you think more broadly about the case. However, these suggestions are hypotheses to be evaluated against your clinical data and knowledge, not recommendations to be implemented. The AI cannot observe behavior, conduct assessments, or account for contextual variables that are essential for accurate clinical decisions. Your professional judgment remains the final authority.
Address the error promptly and transparently. Contact the recipient of the document, identify the specific error, provide the corrected information, and explain that you are implementing stronger review procedures to prevent similar errors. If the error has clinical implications, such as an incorrect procedure description in a parent training handout, ensure that the corrected information is understood and that any incorrect implementations are addressed. Use the experience as a learning opportunity to strengthen your AI review protocols. Document the incident and the corrective actions taken.
Iterative prompting improves output quality by progressively narrowing and refining the AI's response through multiple rounds of interaction. The first prompt provides general direction, and subsequent prompts add specificity, correct errors, adjust tone, incorporate constraints, and align the output with professional standards. Each iteration is informed by your evaluation of the previous output, creating a feedback loop that mirrors the shaping process familiar to behavior analysts. Over time, this iterative approach produces outputs that are significantly more accurate, relevant, and clinically useful than single-prompt outputs, while also deepening the practitioner's skill in communicating effectively with AI tools.
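The feedback loop described above can be sketched as a simple structure. Everything here is hypothetical scaffolding: `ask_model` is a stand-in for whatever AI tool you actually use (stubbed so the sketch runs), and the refinement notes are examples. The point is the shape of the process, where each prompt builds on a reviewed previous draft rather than starting over.

```python
# Stand-in for a real AI call; stubbed so the loop's structure is runnable.
def ask_model(prompt: str) -> str:
    return f"[draft responding to: {prompt}]"

def iterate(initial_prompt: str, refinements: list[str]) -> list[str]:
    """Run one initial prompt, then one refinement round per reviewer note.

    Each refinement is written AFTER the practitioner reviews the previous
    draft, mirroring the shaping process familiar to behavior analysts.
    """
    drafts = [ask_model(initial_prompt)]
    for note in refinements:
        drafts.append(ask_model(f"Revise this draft: {drafts[-1]} {note}"))
    return drafts

drafts = iterate(
    "Draft a plain-language description of differential reinforcement.",
    ["Shorten to two sentences.", "Replace jargon with everyday terms."],
)
# Every draft in the list, including the last, still requires clinical
# review before anything reaches a client or family.
```

Keeping all drafts (rather than only the last) also gives you a record of how the output was shaped, which is useful if you later need to explain or audit how an AI-assisted document was produced.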
While there is no current BACB requirement specifically addressing AI disclosure, the ethical principles of transparency and informed consent suggest that disclosure is appropriate in many situations. If AI is used to generate materials that families will use, to draft communications that will be shared with families, or to support clinical decisions, families have a reasonable interest in knowing that AI played a role. The disclosure need not be alarming. A simple statement that you use AI tools to support certain administrative and resource development tasks, that all AI-generated content is reviewed by you before use, and that the AI never has access to their family's identifying information is usually sufficient.
Prompting with ABA: The D.A.N.C.E.™ Method — Do Better Collective · 2 BACB Ethics CEUs · $50
We extended these answers with research from our library — dig into the peer-reviewed studies behind the topic, in plain-English summaries written for BCBAs.
All behavior-analytic intervention is individualized. The information on this page is for educational purposes and does not constitute clinical advice. Treatment decisions should be informed by the best available published research and individualized assessment, and made with the informed consent of the client or their legal guardian. Behavior analysts are responsible for practicing within the boundaries of their competence and adhering to the BACB Ethics Code for Behavior Analysts.