This guide draws in part from “Ethical Integration of Artificial Intelligence in ABA: A Framework for Subject Matter Expert Involvement in Software Development” by Shannon Hill, PhD, BCBA-D, LBA (BehaviorLive), and extends it with peer-reviewed research from our library of 27,900+ ABA research articles. Citations, clinical framing, and cross-links below are synthesized by Behaviorist Book Club.
As artificial intelligence increasingly influences how ABA services are designed, delivered, and evaluated, the role of behavior analysts extends beyond clinical practice into the realm of technology development. This course examines the critical need for behavior analysts to serve as Subject Matter Experts (SMEs) in AI software development, ensuring that the tools built for the field reflect clinical best practices, ethical standards, and the nuanced realities of ABA service delivery.
The clinical significance of this topic is both immediate and far-reaching. AI tools that are developed without adequate input from behavior analysts risk embedding misconceptions about behavior analysis, making clinically inappropriate recommendations, and creating workflows that do not align with how services are actually delivered. When behavior analysts are absent from the development process, software engineers and product managers make decisions about clinical features based on incomplete understanding, and the resulting tools may do more harm than good.
Conversely, when behavior analysts actively participate in software development as SMEs, the resulting tools are more likely to support effective clinical practice, align with ethical standards, and address genuine needs in the field. This participation requires behavior analysts to develop new skills, including the ability to communicate clinical concepts to non-clinical audiences, evaluate technical proposals through a clinical lens, and advocate for client welfare in technology design decisions.
The course provides a comprehensive framework for ethical AI implementation and SME involvement, addressing both the macro-level question of how the field should approach AI and the micro-level question of how individual behavior analysts can contribute to technology development. This dual focus makes the content relevant to practitioners who may find themselves consulted by technology companies, behavior analysts working in organizations that are developing proprietary AI tools, and clinical leaders who must evaluate AI products for their organizations.
The clinical significance extends to client safety. AI tools that process sensitive client data, generate clinical recommendations, or automate aspects of care delivery must be designed with safeguards that protect client welfare. Behavior analysts, as the professionals most familiar with ABA clinical standards and ethics, are uniquely positioned to identify potential risks and advocate for appropriate protections during the development process.
This topic also has workforce implications. As AI tools become more prevalent in ABA, practitioners who understand the technology and can bridge the gap between clinical practice and software development will be increasingly valuable to organizations and the field as a whole.
The intersection of AI and healthcare is not new, but its application to ABA is at a relatively early stage. In fields such as radiology, dermatology, and pathology, AI tools have been in clinical use for years, supported by substantial research on their accuracy, reliability, and clinical impact. ABA is now following this trajectory, but with the added complexity that behavioral interventions are inherently more individualized and context-dependent than many medical applications.
The software development process for healthcare AI typically involves several phases: problem identification, data collection and preparation, model development, testing and validation, deployment, and ongoing monitoring. At each phase, clinical expertise is essential for ensuring that the tool serves its intended purpose and does not introduce unintended risks. In fields where SME involvement is robust, AI tools tend to be more clinically useful and safer. In fields where SME involvement is minimal, tools are more likely to produce errors, misalign with clinical practice, or fail to gain practitioner adoption.
The Subject Matter Expert role in software development is well-established in other industries. In healthcare, SMEs provide guidance on clinical workflows, validate algorithm outputs against clinical standards, identify edge cases and potential failure modes, and ensure that user interfaces support rather than hinder clinical decision-making. For behavior analysts, this role requires translating behavioral concepts into terms that software engineers can understand and implement.
The current landscape of AI in ABA includes both established companies developing AI features for existing practice management platforms and startups creating dedicated AI tools for behavior analysis. The quality and clinical validity of these tools vary considerably. Some are developed with extensive clinical input, while others are built primarily by technologists with limited understanding of ABA practice.
The BACB has not yet issued specific guidance on AI in behavior analysis, leaving practitioners to navigate this territory using existing ethical principles. The profession is at a critical juncture where the decisions made about AI integration in the next several years will shape the field's relationship with technology for decades to come. Behavior analysts who engage with this process now have the opportunity to influence that trajectory.
The framework presented in this course addresses a genuine gap in the field's resources. While there is growing awareness that AI needs ethical oversight, there has been limited practical guidance on how behavior analysts should participate in the technology development process. This course provides that guidance through a structured approach that can be adapted to various organizational contexts.
The clinical implications of behavior analyst involvement in AI development extend to every practitioner who uses AI-enhanced tools in their practice. When SME input shapes the development process, the resulting tools are more likely to support high-quality clinical care.
One of the most direct clinical implications concerns the accuracy of AI-generated clinical content. AI systems that generate treatment plan language, session note templates, or progress reports can only be as accurate as their training data and the clinical framework that guided their development. When behavior analysts serve as SMEs, they can review and correct training data, validate output against clinical standards, and identify scenarios where the AI's recommendations would be clinically inappropriate. Without this oversight, AI tools may produce plausible-sounding but clinically inaccurate content that practitioners might not catch during routine review.
The design of user interfaces for AI tools has clinical implications that are easy to overlook. An interface that presents AI recommendations with high confidence scores may subtly discourage practitioners from questioning those recommendations. An interface that requires explicit practitioner approval before AI outputs are finalized encourages critical evaluation. These design choices, made during the development process, directly influence how practitioners interact with AI and whether human judgment is maintained or eroded.
Data quality and representation in AI training sets have clinical implications for equity. If an AI tool for ABA is trained primarily on data from one type of service provider, one geographic region, or one population, its outputs may not generalize well to other contexts. Behavior analysts serving as SMEs can advocate for diverse and representative training data that ensures the tool works effectively across the populations it will serve.
The clinical implications also extend to the validation process. Behavior analysts who understand research methodology can help design appropriate validation studies for AI tools, ensuring that claims of accuracy and clinical utility are supported by evidence rather than marketing assertions. This validation is essential for practitioners who must make informed decisions about which tools to adopt.
Workflow integration is a clinical concern because poorly integrated tools can disrupt clinical practice rather than enhance it. An AI tool that requires practitioners to enter data in a different format, switch between multiple platforms, or complete additional steps may actually reduce efficiency and introduce errors. Behavior analysts who understand clinical workflows can ensure that AI tools are designed to fit naturally into existing practice rather than requiring practitioners to reorganize their work around the technology.
Finally, the SME role includes advocating for safeguards that protect clients. This includes data privacy protections, mechanisms for flagging and correcting AI errors, clear documentation of when AI was involved in clinical decisions, and protocols for overriding AI recommendations when clinical judgment indicates a different course of action.
The ethical framework for behavior analyst involvement in AI software development draws on multiple provisions of the BACB Ethics Code (2022) and extends them into new territory that the code's authors likely did not anticipate but that its principles clearly address.
Code 1.05 (Practicing Within Scope of Competence) has dual application in this context. First, behavior analysts who serve as SMEs must understand enough about software development to contribute meaningfully, which may require developing new competencies. Second, the SME role itself must remain within the behavior analyst's areas of expertise, which means providing guidance on clinical practice, ethical standards, and client needs rather than making technical decisions about algorithms or software architecture.
Code 2.01 (Providing Effective Treatment) extends to the tools used in treatment delivery. When a behavior analyst endorses or contributes to an AI tool that is subsequently used in clinical practice, they bear some responsibility for ensuring that the tool supports effective treatment. This responsibility requires ongoing involvement beyond initial development, including participation in validation studies and post-deployment monitoring.
The ethical principle of non-maleficence (doing no harm) is central to the SME role in AI development. Behavior analysts must anticipate potential harms that AI tools could cause, including clinical errors, privacy breaches, algorithmic bias, and the erosion of clinical judgment. The SME's role is to identify these potential harms before they reach clients and to advocate for design choices that minimize risk.
Code 2.06 (Maintaining Confidentiality) creates specific obligations for behavior analysts involved in AI development. They must ensure that any client data used in development is properly de-identified, that data storage and processing meet HIPAA requirements, and that the AI system's data practices protect client privacy throughout the tool's lifecycle.
Conflict of interest considerations arise when behavior analysts serve as paid consultants to AI companies. Financial relationships with technology vendors can create incentives to endorse tools that may not be in clients' best interest. The BACB Ethics Code (2022) requires behavior analysts to identify and manage conflicts of interest, which in this context means maintaining independence in clinical evaluations of AI tools regardless of financial relationships with their developers.
The ethical obligation of transparency extends to the SME's relationship with both the technology developer and the end users of the tool. Behavior analysts should be transparent about the limitations of their input, the areas where their expertise does not extend, and any concerns they have about the tool's design or implementation. They should also advocate for transparency to end users about how the AI works, what its limitations are, and when it was involved in generating clinical content.
There is also an ethical dimension to inaction. Behavior analysts who choose not to engage with AI development leave the field's representation to others who may not share their clinical expertise or ethical commitments. While not every behavior analyst needs to serve as an SME, the profession as a whole has a responsibility to ensure that AI tools built for ABA reflect the field's values and standards.
Behavior analysts who serve as SMEs in AI development need a structured framework for evaluating AI proposals, assessing tool outputs, and making recommendations about deployment. Similarly, behavior analysts who evaluate AI tools for adoption in their organizations need a systematic approach to assessment.
The framework presented in this course provides a structured evaluation approach organized around several key dimensions. The first is clinical alignment: does the AI tool's intended function align with established ABA principles and practices? A tool that claims to identify the function of behavior based solely on topography data, for example, contradicts the fundamental ABA principle that function is determined by the relationship between behavior and its environment, not by the form of the behavior alone.
Evidence evaluation is the second dimension. What evidence supports the AI tool's accuracy and clinical utility? This includes the quality of the training data, the validation methodology, the reported accuracy metrics, and whether the tool has been tested with populations representative of its intended users. Behavior analysts should apply the same standards they would use to evaluate any clinical tool or intervention.
Ethical compliance assessment examines whether the AI tool meets the ethical standards outlined in the BACB Ethics Code (2022) and relevant regulations. This includes data privacy practices, informed consent mechanisms, bias mitigation strategies, and provisions for human oversight. Tools that cannot demonstrate ethical compliance should not be endorsed or adopted regardless of their technical capabilities.
Usability assessment considers whether the tool can be realistically integrated into clinical practice. The best AI tool from a technical perspective may fail if practitioners cannot easily learn and use it, if it does not integrate with existing systems, or if it disrupts established workflows. Behavior analysts should evaluate usability through the lens of the practitioners who will use the tool daily.
Risk assessment identifies potential negative consequences of the tool and evaluates whether safeguards are adequate to mitigate those risks. This includes considering worst-case scenarios, such as what happens if the AI makes a significant clinical error, if client data is breached, or if the tool's developer ceases operations. Adequate risk mitigation should be a prerequisite for deployment.
Post-deployment monitoring plans should be established before a tool is adopted. How will the tool's accuracy be tracked over time? How will user feedback be collected and addressed? What metrics will determine whether the tool is achieving its intended benefits? Who is responsible for ongoing oversight? These questions should be answered before deployment, not after problems emerge.
Decision-making about whether to serve as an SME should also follow a structured process. Consider whether the development project aligns with your values, whether you have the necessary expertise, whether you can maintain independence from financial pressures, and whether the project has a reasonable chance of producing a tool that benefits the field and its clients.
The growing intersection of AI and ABA creates opportunities and responsibilities for every behavior analyst, not just those who will serve as formal SMEs in software development.
For practitioners in direct clinical roles, developing AI literacy is increasingly important. Understanding the basics of how AI works, what it can and cannot do, and what questions to ask about AI tools will help you evaluate the technologies your organization adopts and use them effectively while maintaining appropriate skepticism.
If you are approached to serve as an SME by a technology company or if your organization is developing AI tools internally, the framework from this course provides a systematic approach. Evaluate the project's clinical alignment, ethical compliance, and potential impact on client welfare. Contribute your clinical expertise while recognizing the boundaries of your technical knowledge. Maintain independence in your clinical evaluations regardless of financial relationships.
For clinical leaders and organizational decision-makers, establish processes for evaluating AI tools before they are adopted. Require evidence of clinical validity, ethical compliance, and data security. Involve clinical staff in the evaluation process and respect their concerns about tool limitations or risks. Create ongoing monitoring systems to ensure that adopted tools continue to meet clinical and ethical standards.
Advocate within professional organizations for the development of field-specific standards for AI in ABA. The current absence of standards creates risk for the entire field. By contributing to the development of guidelines, position papers, and ethical standards for AI use, you help shape the technology landscape in ways that protect clients and support practitioners.
Remember that your clinical expertise is valuable and irreplaceable. AI tools cannot replicate the judgment, empathy, and ethical reasoning that behavior analysts bring to their work. Your role in AI development is to ensure that technology amplifies these human qualities rather than diminishing them.
Ready to go deeper? This course covers this topic in detail with structured learning objectives and CEU credit.
Ethical Integration of Artificial Intelligence in ABA: A Framework for Subject Matter Expert Involvement in Software Development — Shannon Hill · 1 BACB Ethics CEU · $30
All behavior-analytic intervention is individualized. The information on this page is for educational purposes and does not constitute clinical advice. Treatment decisions should be informed by the best available published research, individualized assessment, and obtained with the informed consent of the client or their legal guardian. Behavior analysts are responsible for practicing within the boundaries of their competence and adhering to the BACB Ethics Code for Behavior Analysts.