By Matt Harrington, BCBA · Behaviorist Book Club · Research-backed answers for behavior analysts
Augmented intelligence refers to AI systems designed to enhance human decision-making rather than replace it. In the FBA context, augmented intelligence tools assist the behavior analyst with tasks like organizing indirect assessment data, identifying patterns across informant reports, and generating preliminary hypotheses. The behavior analyst retains full authority over clinical decisions. This contrasts with autonomous AI, which would independently generate and act on conclusions without human oversight. The augmented model preserves the clinician's role while leveraging computational advantages for data processing tasks.
Indirect assessment data synthesis is the strongest candidate because it involves processing large volumes of verbal report data where pattern recognition adds efficiency. Organizing ABC data into visual displays and frequency summaries is another suitable application. Hypothesis generation can benefit from AI input when treated as a preliminary suggestion. Direct observation and functional analysis, which depend on real-time clinical judgment and environmental manipulation, are less suited for current AI involvement because they require contextual reasoning that algorithms do not reliably replicate.
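To make the data-organization role concrete, here is a minimal sketch of the kind of frequency summary such a tool might automate. The record structure and field names are hypothetical, not taken from any specific platform: each ABC record is a dict with `antecedent`, `behavior`, and `consequence` keys, and the function tallies antecedent–consequence pairings across episodes.

```python
from collections import Counter

def summarize_abc(records):
    """Tally antecedent/consequence pairings across ABC records.

    Each record is a dict with hypothetical keys:
    'antecedent', 'behavior', 'consequence'.
    Returns each pairing's share of all recorded episodes.
    """
    pairs = Counter((r["antecedent"], r["consequence"]) for r in records)
    total = sum(pairs.values())
    return {pair: count / total for pair, count in pairs.most_common()}

records = [
    {"antecedent": "demand placed", "behavior": "elopement", "consequence": "task removed"},
    {"antecedent": "demand placed", "behavior": "elopement", "consequence": "task removed"},
    {"antecedent": "attention diverted", "behavior": "yelling", "consequence": "attention given"},
]
summary = summarize_abc(records)
# e.g. ('demand placed', 'task removed') accounts for 2 of 3 episodes
```

A summary like this only surfaces candidate patterns; interpreting them as evidence for a behavioral function remains the analyst's job.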
Examine the tool's transparency about its methodology and the training data used to develop its model. Look for validation studies comparing the tool's outputs to human clinical judgment. Review data security practices including storage location, encryption, data retention policies, and whether client data is used to further train the model. Check whether the tool's stated scope matches your intended use. Ask the vendor directly about algorithmic bias testing and representation in the training dataset. If the vendor cannot provide satisfactory answers to these questions, exercise caution before integrating the tool into clinical practice.
Key concerns include where client data is stored, whether data is encrypted in transit and at rest, who within the platform vendor can access the data, whether data is used to train or improve the AI model, and whether the data can be adequately de-identified. HIPAA compliance is a baseline requirement, but behavior analysts should look beyond minimum compliance to understand the full data lifecycle. Some platforms retain data indefinitely or share aggregated data with third parties. These practices should be evaluated against both legal requirements and professional ethical standards for confidentiality.
Yes. AI models are trained on datasets that may not represent the full diversity of clients behavior analysts serve. If the training data overrepresents certain populations, diagnoses, or behavioral functions, the model's outputs will reflect those biases. For example, a model trained primarily on data from center-based programs may generate less accurate hypotheses for home-based or school-based contexts. Behavior analysts should inquire about training data composition and treat AI hypotheses with additional scrutiny when working with populations that may be underrepresented in the tool's development.
Traditional IOA measures agreement between two human observers on the same behavioral event. When applied to AI-human comparisons, IOA-like metrics measure convergence between algorithmic pattern recognition and clinical judgment. High agreement suggests the AI is capturing functional relationships consistent with human expertise. Low agreement signals either an AI limitation or a discrepancy worth investigating through additional direct assessment. This comparison is useful for validating AI tools, but it should not be interpreted as equivalence between human and algorithmic reasoning processes.
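The IOA-style comparison described above can be sketched with the standard point-by-point formula, agreements divided by total opportunities times 100. The interval-by-interval function labels below are a hypothetical framing for illustration, not a validated protocol:

```python
def ioa_percent(ai_labels, human_labels):
    """Point-by-point agreement: agreements / total opportunities * 100."""
    if len(ai_labels) != len(human_labels):
        raise ValueError("Label sequences must cover the same intervals")
    agreements = sum(a == h for a, h in zip(ai_labels, human_labels))
    return 100.0 * agreements / len(ai_labels)

ai    = ["escape", "escape", "attention", "escape", "tangible"]
human = ["escape", "escape", "attention", "attention", "tangible"]
print(ioa_percent(ai, human))  # → 80.0 (4 of 5 intervals agree)
```

Note that a single percentage hides where the disagreements fall; in this example the one mismatch (escape vs. attention) is exactly the kind of discrepancy worth following up with additional direct assessment.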
Ethical practice strongly supports disclosure. Families have a right to understand how their assessment data is being processed, including whether it is entered into third-party technology platforms. Prepare a clear, non-technical explanation of the tool's role, emphasizing that it assists with data organization while you retain all clinical decision-making authority. Address data privacy proactively and give families the opportunity to ask questions. While regulatory requirements around AI disclosure are still evolving, proactive transparency aligns with informed consent obligations in the ethics code.
Reviewing an AI-generated hypothesis before conducting direct observation may prime the observer to attend selectively to data that confirms the suggested function. For example, if the AI suggests escape-maintained behavior, the observer may unconsciously give greater weight to demand-related antecedents while underweighting other environmental events. To mitigate this, use structured observation protocols with predetermined recording criteria, consider having a second observer who is blind to the AI hypothesis, and explicitly document observations that contradict the AI's preliminary suggestion.
Supervisors should ensure trainees can independently conduct all FBA components before introducing AI tools. When AI is incorporated, require trainees to articulate their own clinical reasoning separately from the AI output and explain any discrepancies between their hypothesis and the tool's suggestion. Use supervision sessions to probe whether the trainee understands the behavioral principles underlying their conclusions or is simply accepting algorithmic output. Establishing these practices prevents the erosion of foundational clinical skills while preparing trainees for a technology-integrated practice landscape.
Currently, the evidence base for AI-assisted FBA is in its early stages. Most available data comes from proof-of-concept demonstrations and developer case studies rather than controlled outcome research. The field would benefit from studies comparing treatment effectiveness, assessment efficiency, and family satisfaction between AI-assisted and traditional FBA processes. Until such evidence is available, behavior analysts should approach AI tools as potentially useful adjuncts while maintaining the proven assessment methods that constitute the field's established standard of care.
The ABA Clubhouse has 60+ on-demand CEUs including ethics, supervision, and clinical topics like this one. Plus a new live CEU every Wednesday.
Ready to go deeper? The course below covers this topic with structured learning objectives and CEU credit.
THE AUGMENTED ASSESSOR: Conducting FBAs with AI — Adam Ventura · 1.5 BACB Ethics CEUs · $0
All behavior-analytic intervention is individualized. The information on this page is for educational purposes and does not constitute clinical advice. Treatment decisions should be informed by the best available published research and individualized assessment, and made with the informed consent of the client or their legal guardian. Behavior analysts are responsible for practicing within the boundaries of their competence and adhering to the BACB Ethics Code for Behavior Analysts.