These answers draw in part from “Ethical Integration of Artificial Intelligence in ABA: A Framework for Subject Matter Expert Involvement in Software Development” by Shannon Hill, PhD, BCBA-D, LBA (BehaviorLive), and extend it with peer-reviewed research from our library of 27,900+ ABA research articles. Clinical framing, BACB ethics code references, and cross-links below are synthesized by Behaviorist Book Club.
View the original presentation →

A behavior analyst serving as an SME in AI development provides clinical guidance throughout the software development process. This may include defining clinical requirements for the tool, reviewing and validating training data for accuracy, evaluating AI outputs against clinical standards, identifying edge cases and potential failure modes, ensuring that user interfaces support clinical workflow, advocating for ethical safeguards such as data privacy and human oversight, and participating in validation studies. The role requires translating clinical concepts into terms that software engineers and product managers can understand and implement. The time commitment and formality of the role vary from occasional consultation to ongoing involvement throughout the development lifecycle.
Behavior analysts bring unique expertise that is essential for developing AI tools that serve the field effectively. They understand the principles of behavior analysis that should guide clinical features, the ethical standards that govern practice, the realities of clinical workflows, and the diverse needs of the populations served. Without behavior analyst input, software developers may make assumptions about ABA practice that lead to clinically inappropriate features, may not understand the ethical implications of design decisions, and may create tools that do not fit how practitioners actually work. Other professionals such as psychologists or educators may contribute valuable perspectives, but the ABA-specific clinical and ethical expertise can only come from trained behavior analysts.
Behavior analysts who serve as AI SMEs benefit from developing several supplementary skills. These include basic AI literacy (a conceptual understanding of how machine learning works); communication skills for translating clinical concepts to non-clinical audiences; project management awareness to understand software development timelines and processes; critical evaluation skills for assessing AI outputs against clinical standards; ethical reasoning skills applied to novel technological contexts; and an understanding of data governance and privacy regulations. These skills supplement existing clinical expertise rather than replacing it. Many can be developed through continuing education, self-directed learning, and mentorship from colleagues who have experience in technology roles.
Complete elimination of bias from AI systems is not currently achievable, but behavior analysts can advocate for and verify several bias mitigation strategies. These include ensuring that training data represents the diversity of populations the tool will serve, testing AI outputs across different demographic groups to identify disparities, establishing ongoing monitoring for bias after deployment, including diverse perspectives in the development team and testing process, and creating mechanisms for end users to report potential bias in AI outputs. Behavior analysts should be particularly attentive to biases related to race, ethnicity, socioeconomic status, language, and disability that could affect the quality of clinical recommendations.
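Testing outputs across demographic groups can be as simple as comparing the tool's agreement with clinician-validated labels group by group. A minimal sketch, using entirely hypothetical data and group names (in practice the groups, outputs, and labels would come from your own validation records):

```python
# Sketch: check an AI tool's outputs for accuracy disparities across
# demographic groups. All records below are hypothetical; real use would
# compare the tool's outputs against clinician-validated labels.
from collections import defaultdict

def subgroup_accuracy(records):
    """records: list of (group, ai_output, clinician_label) tuples."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, ai_output, label in records:
        total[group] += 1
        if ai_output == label:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

records = [
    ("group_a", "flag", "flag"), ("group_a", "flag", "flag"),
    ("group_a", "no_flag", "no_flag"), ("group_a", "flag", "no_flag"),
    ("group_b", "no_flag", "flag"), ("group_b", "no_flag", "flag"),
    ("group_b", "flag", "flag"), ("group_b", "no_flag", "no_flag"),
]
rates = subgroup_accuracy(records)
disparity = max(rates.values()) - min(rates.values())
print(rates, disparity)  # group_a outperforms group_b by 0.25
```

A large gap between groups is exactly the kind of finding that should trigger the reporting mechanisms and ongoing monitoring described above.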
Financial relationships with AI companies can create conflicts of interest that compromise objectivity. A behavior analyst who is paid by a technology company may face pressure, explicit or implicit, to endorse the company's product even if clinical concerns exist. To manage these conflicts, behavior analysts should maintain clear contractual boundaries that protect their right to express clinical concerns, disclose financial relationships when recommending or evaluating AI tools, separate their SME role from any product endorsement or marketing activities, and be prepared to withdraw from the relationship if the company does not adequately address clinical or ethical concerns. The BACB Ethics Code (2022) requires behavior analysts to identify and address conflicts of interest proactively.
Apply the same critical evaluation standards you would use for any clinical intervention. Look for peer-reviewed research or independent validation studies rather than relying on vendor-provided statistics. Examine the validation methodology, including sample size, population characteristics, comparison conditions, and outcome measures. Ask whether the validation was conducted with populations representative of your clients. Consider whether the reported accuracy metrics are clinically meaningful, not just statistically significant. Be skeptical of tools that claim high accuracy without providing transparent information about their validation process. If a tool cannot provide adequate evidence of clinical validity, treat its outputs as preliminary rather than authoritative.
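One concrete reason to look past a headline accuracy figure: when the target behavior is rare, a tool can post high overall accuracy while missing most true cases. A brief sketch with a hypothetical confusion matrix (the session counts are invented for illustration):

```python
# Sketch: why reported accuracy alone can mislead. With a rare target
# event, overall accuracy looks strong while sensitivity -- the number
# that matters clinically -- is poor.
def metrics(tp, fp, tn, fn):
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)   # of true cases, how many were caught
    specificity = tn / (tn + fp)   # of non-cases, how many were correctly cleared
    return accuracy, sensitivity, specificity

# Hypothetical validation: 1000 sessions, only 50 contain the target event.
acc, sens, spec = metrics(tp=20, fp=10, tn=940, fn=30)
print(f"accuracy={acc:.2f} sensitivity={sens:.2f} specificity={spec:.2f}")
# accuracy=0.96 sensitivity=0.40 specificity=0.99
```

A vendor could truthfully advertise "96% accuracy" here even though the tool misses 60% of the sessions that actually contain the target event, which is why the full confusion matrix and population-level base rates belong in any validation report you review.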
When an AI tool makes a clinical error, responsibility falls on the behavior analyst who accepted and implemented the AI's output. The BACB Ethics Code (2022) places responsibility for clinical decisions on the certified behavior analyst, regardless of what tools were used in the decision-making process. This is why human oversight and review of all AI-generated clinical content is essential. When errors are identified, the immediate priority is addressing any impact on client care. The error should then be documented, reported to the AI vendor, and analyzed to determine whether it represents a systematic issue requiring changes to how the tool is used. Organizations should have protocols for managing AI-related errors before they occur.
Behavior analysts with appropriate technical skills can certainly develop their own AI tools, and some are doing so. However, the competence requirement of the BACB Ethics Code (2022) applies. Behavior analysts developing AI tools should have genuine competence in both behavior analysis and software development, or they should partner with software engineers who provide the technical expertise while the behavior analyst provides clinical direction. The same ethical standards for clinical validity, data privacy, bias mitigation, and human oversight apply regardless of whether the tool is developed by a behavior analyst or a technology company. Self-developed tools should be validated as rigorously as commercially developed ones.
Key data privacy considerations include how client data is collected, stored, and transmitted; whether data is encrypted at rest and in transit; whether the AI system uses client data to train or improve its models and whether this is disclosed; who has access to client data within the AI vendor's organization; how long data is retained and how it is disposed of; whether the system complies with HIPAA and applicable state privacy laws; what happens to data if the vendor is acquired or ceases operations; and whether clients can request deletion of their data. Behavior analysts should insist that these questions are answered satisfactorily before any client data enters an AI system.
As AI integration in ABA accelerates, the SME role is likely to become more formalized and specialized. Professional organizations may develop guidelines specifically for behavior analyst involvement in technology development. Graduate training programs may incorporate AI literacy into their curricula. New career paths may emerge for behavior analysts who specialize in health technology, with roles such as clinical AI specialist or behavioral technology consultant. The field may also develop certification or credential programs for behavior analysts who serve in technology advisory roles. Practitioners who develop AI literacy and SME skills now will be well-positioned for these emerging opportunities.
Ethical Integration of Artificial Intelligence in ABA: A Framework for Subject Matter Expert Involvement in Software Development — Shannon Hill · 1 BACB Ethics CEU · $30
All behavior-analytic intervention is individualized. The information on this page is for educational purposes and does not constitute clinical advice. Treatment decisions should be informed by the best available published research and individualized assessment, and made with the informed consent of the client or their legal guardian. Behavior analysts are responsible for practicing within the boundaries of their competence and adhering to the BACB Ethics Code for Behavior Analysts.