By Matt Harrington, BCBA · Behaviorist Book Club · Research-backed answers for behavior analysts

Frequently Asked Questions About Using AI in Functional Behavior Assessments

Questions Covered
  1. What is augmented intelligence and how does it differ from artificial intelligence in the FBA context?
  2. Which components of the FBA process are most appropriate for AI assistance?
  3. How should I evaluate an AI tool before using it in clinical assessments?
  4. What are the data privacy concerns with entering FBA data into an AI platform?
  5. Can AI-generated functional hypotheses be biased?
  6. How does interobserver agreement apply when comparing AI outputs to human assessments?
  7. Do I need to disclose AI use to families during the FBA process?
  8. What risks does AI involvement introduce for confirmation bias during direct observation?
  9. How should supervisors handle trainee use of AI-assisted FBA tools?
  10. Is there evidence that AI-assisted FBAs produce better clinical outcomes than traditional FBAs?

1. What is augmented intelligence and how does it differ from artificial intelligence in the FBA context?

Augmented intelligence refers to AI systems designed to enhance human decision-making rather than replace it. In the FBA context, augmented intelligence tools assist the behavior analyst with tasks like organizing indirect assessment data, identifying patterns across informant reports, and generating preliminary hypotheses. The behavior analyst retains full authority over clinical decisions. This contrasts with autonomous AI, which would independently generate and act on conclusions without human oversight. The augmented model preserves the clinician's role while leveraging computational advantages for data processing tasks.

2. Which components of the FBA process are most appropriate for AI assistance?

Indirect assessment data synthesis is the strongest candidate because it involves processing large volumes of verbal report data where pattern recognition adds efficiency. Organizing ABC data into visual displays and frequency summaries is another suitable application. Hypothesis generation can benefit from AI input when treated as a preliminary suggestion. Direct observation and functional analysis, which depend on real-time clinical judgment and environmental manipulation, are less suited for current AI involvement because they require contextual reasoning that algorithms do not reliably replicate.
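The data-organization task described above can be sketched in a few lines. This is an illustrative example only: the record fields, behavior labels, and data are hypothetical, not drawn from any particular assessment platform.

```python
from collections import Counter

# Hypothetical ABC (antecedent-behavior-consequence) records; field names
# and values are illustrative, not from any specific data system.
abc_records = [
    {"antecedent": "demand presented", "behavior": "elopement", "consequence": "task removed"},
    {"antecedent": "demand presented", "behavior": "elopement", "consequence": "task removed"},
    {"antecedent": "attention diverted", "behavior": "elopement", "consequence": "reprimand"},
    {"antecedent": "demand presented", "behavior": "aggression", "consequence": "task removed"},
]

def summarize(records):
    """Tally antecedent-consequence pairings per behavior so candidate
    functional patterns surface for clinician review."""
    counts = Counter(
        (r["behavior"], r["antecedent"], r["consequence"]) for r in records
    )
    return counts.most_common()

for (behavior, antecedent, consequence), n in summarize(abc_records):
    print(f"{behavior}: {antecedent} -> {consequence} ({n}x)")
```

A frequency summary like this is exactly the kind of mechanical aggregation where software adds efficiency without displacing clinical judgment: the output is a starting point for hypothesis development, not a conclusion.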

3. How should I evaluate an AI tool before using it in clinical assessments?

Examine the tool's transparency about its methodology and the training data used to develop its model. Look for validation studies comparing the tool's outputs to human clinical judgment. Review data security practices including storage location, encryption, data retention policies, and whether client data is used to further train the model. Check whether the tool's intended scope matches your intended use. Ask the vendor directly about algorithmic bias testing and representation in the training dataset. If the tool cannot provide satisfactory answers to these questions, exercise caution before integrating it into clinical practice.

4. What are the data privacy concerns with entering FBA data into an AI platform?

Key concerns include where client data is stored, whether data is encrypted in transit and at rest, who within the platform vendor can access the data, whether data is used to train or improve the AI model, and whether the data can be adequately de-identified. HIPAA compliance is a baseline requirement, but behavior analysts should look beyond minimum compliance to understand the full data lifecycle. Some platforms retain data indefinitely or share aggregated data with third parties. These practices should be evaluated against both legal requirements and professional ethical standards for confidentiality.
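One practical safeguard implied above is stripping direct identifiers from narrative data before it ever reaches a third-party platform. The sketch below shows a minimal regex-based redaction pass; the patterns and sample note are hypothetical, and this is far from sufficient for true de-identification, which under the HIPAA Safe Harbor method must address all 18 identifier categories.

```python
import re

# Illustrative redaction pass to run before any FBA narrative leaves the
# clinician's system. A sketch only: real de-identification must cover
# all 18 HIPAA Safe Harbor identifier categories, which this does not.
PATTERNS = [
    (re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"), "[DATE]"),   # dates like 3/14/2024
    (re.compile(r"\b\d{3}-\d{3}-\d{4}\b"), "[PHONE]"),        # US-style phone numbers
    (re.compile(r"\b[\w.+-]+@[\w-]+\.\w+\b"), "[EMAIL]"),     # email addresses
]

def redact(text: str) -> str:
    for pattern, token in PATTERNS:
        text = pattern.sub(token, text)
    return text

note = "Observed on 3/14/2024; contact caregiver at 555-867-5309 or mom@example.com."
print(redact(note))
# -> "Observed on [DATE]; contact caregiver at [PHONE] or [EMAIL]."
```

Even with redaction in place, the vendor-side questions above (retention, model training, third-party sharing) still apply, because de-identified data can sometimes be re-identified when combined with other sources.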

5. Can AI-generated functional hypotheses be biased?

Yes. AI models are trained on datasets that may not represent the full diversity of clients behavior analysts serve. If the training data overrepresents certain populations, diagnoses, or behavioral functions, the model's outputs will reflect those biases. For example, a model trained primarily on data from center-based programs may generate less accurate hypotheses for home-based or school-based contexts. Behavior analysts should inquire about training data composition and treat AI hypotheses with additional scrutiny when working with populations that may be underrepresented in the tool's development.

6. How does interobserver agreement apply when comparing AI outputs to human assessments?

Traditional IOA measures agreement between two human observers on the same behavioral event. When applied to AI-human comparisons, IOA-like metrics measure convergence between algorithmic pattern recognition and clinical judgment. High agreement suggests the AI is capturing functional relationships consistent with human expertise. Low agreement signals either an AI limitation or a discrepancy worth investigating through additional direct assessment. This comparison is useful for validating AI tools, but it should not be interpreted as equivalence between human and algorithmic reasoning processes.
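The agreement comparison described above can be computed with standard metrics. The sketch below uses simple percent agreement plus Cohen's kappa (which corrects for chance agreement); the function labels and case data are hypothetical.

```python
from collections import Counter

# Hypothetical function codes assigned to the same 10 cases by a
# clinician and by an AI tool; labels and data are illustrative.
clinician = ["escape", "attention", "escape", "tangible", "escape",
             "attention", "escape", "automatic", "tangible", "escape"]
ai_tool   = ["escape", "attention", "attention", "tangible", "escape",
             "attention", "escape", "escape", "tangible", "escape"]

def percent_agreement(a, b):
    """Proportion of cases where the two coders assigned the same function."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Chance-corrected agreement: (p_o - p_e) / (1 - p_e)."""
    n = len(a)
    p_o = percent_agreement(a, b)
    ca, cb = Counter(a), Counter(b)
    p_e = sum(ca[k] * cb[k] for k in ca) / (n * n)  # expected chance agreement
    return (p_o - p_e) / (1 - p_e)

print(f"agreement: {percent_agreement(clinician, ai_tool):.0%}")
print(f"kappa:     {cohens_kappa(clinician, ai_tool):.2f}")
```

Kappa is worth the extra step here because raw percent agreement can look high simply when one function (e.g., escape) dominates the caseload; the chance correction guards against overstating AI-human convergence.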

7. Do I need to disclose AI use to families during the FBA process?

Ethical practice strongly supports disclosure. Families have a right to understand how their assessment data is being processed, including whether it is entered into third-party technology platforms. Prepare a clear, non-technical explanation of the tool's role, emphasizing that it assists with data organization while you retain all clinical decision-making authority. Address data privacy proactively and give families the opportunity to ask questions. While regulatory requirements around AI disclosure are still evolving, proactive transparency aligns with informed consent obligations in the ethics code.

8. What risks does AI involvement introduce for confirmation bias during direct observation?

Reviewing an AI-generated hypothesis before conducting direct observation may prime the observer to attend selectively to data that confirms the suggested function. For example, if the AI suggests escape-maintained behavior, the observer may unconsciously give greater weight to demand-related antecedents while underweighting other environmental events. To mitigate this, use structured observation protocols with predetermined recording criteria, consider having a second observer who is blind to the AI hypothesis, and explicitly document observations that contradict the AI's preliminary suggestion.

9. How should supervisors handle trainee use of AI-assisted FBA tools?

Supervisors should ensure trainees can independently conduct all FBA components before introducing AI tools. When AI is incorporated, require trainees to articulate their own clinical reasoning separately from the AI output and explain any discrepancies between their hypothesis and the tool's suggestion. Use supervision sessions to probe whether the trainee understands the behavioral principles underlying their conclusions or is simply accepting algorithmic output. Establishing these practices prevents the erosion of foundational clinical skills while preparing trainees for a technology-integrated practice landscape.

10. Is there evidence that AI-assisted FBAs produce better clinical outcomes than traditional FBAs?

Currently, the evidence base for AI-assisted FBA is in its early stages. Most available data comes from proof-of-concept demonstrations and developer case studies rather than controlled outcome research. The field would benefit from studies comparing treatment effectiveness, assessment efficiency, and family satisfaction between AI-assisted and traditional FBA processes. Until such evidence is available, behavior analysts should approach AI tools as potentially useful adjuncts while maintaining the proven assessment methods that constitute the field's established standard of care.

FREE CEUs

Get CEUs on This Topic — Free

The ABA Clubhouse has 60+ on-demand CEUs including ethics, supervision, and clinical topics like this one. Plus a new live CEU every Wednesday.

60+ on-demand CEUs (ethics, supervision, general)
New live CEU every Wednesday
Community of 500+ BCBAs
100% free to join
Join The ABA Clubhouse — Free →

Earn CEU Credit on This Topic

Ready to go deeper? This course covers this topic with structured learning objectives and CEU credit.

THE AUGMENTED ASSESSOR: Conducting FBAs with AI — Adam Ventura · 1.5 BACB Ethics CEUs · $0

Take This Course →

Related Topics

CEU Course: THE AUGMENTED ASSESSOR: Conducting FBAs with AI

1.5 BACB Ethics CEUs · $0 · BehaviorLive

Guide: THE AUGMENTED ASSESSOR: Conducting FBAs with AI — What Every BCBA Needs to Know

Research-backed educational guide with practice recommendations

Decision Guide: Comparing Approaches

Side-by-side comparison with clinical decision framework

Clinical Disclaimer

All behavior-analytic intervention is individualized. The information on this page is for educational purposes and does not constitute clinical advice. Treatment decisions should be informed by the best available published research and individualized assessment, and made with the informed consent of the client or their legal guardian. Behavior analysts are responsible for practicing within the boundaries of their competence and adhering to the BACB Ethics Code for Behavior Analysts.
