Frequently Asked Questions About Validity in Functional Assessment

Source & Transformation

These answers draw in part from “Validity in Functional Assessment” by Jeffrey Tiger, Ph.D., BCBA (BehaviorLive), and extend it with peer-reviewed research from our library of 27,900+ ABA research articles. Clinical framing, BACB ethics code references, and cross-links below are synthesized by Behaviorist Book Club.

View the original presentation →
Questions Covered
  1. What is discriminant validity in the context of functional assessment?
  2. What is outcome validity and why is it considered the most important validity dimension?
  3. How do sensitivity and specificity differ in evaluating functional assessment?
  4. What is the difference between synthesized and isolated reinforcement contingencies in functional analysis?
  5. What are the validity limitations of indirect functional assessment methods?
  6. How does descriptive assessment validity compare to functional analysis validity?
  7. When should practitioners consider that their functional assessment may have produced invalid results?
  8. How does multiply maintained behavior affect functional assessment validity?
  9. What role does treatment validation play in confirming assessment validity?
  10. How can practitioners improve the validity of their functional assessments?

1. What is discriminant validity in the context of functional assessment?

Discriminant validity refers to the ability of a functional assessment to correctly distinguish between different behavioral functions. An assessment with high discriminant validity can reliably differentiate between behavior maintained by escape, attention, tangible access, and automatic reinforcement. When discriminant validity is poor, the assessment may incorrectly identify the function, leading to mismatched interventions. For example, an assessment that consistently identifies attention as a function when behavior is actually maintained by escape has poor discriminant validity for these two functions. Discriminant validity can be evaluated by examining whether assessment results lead to accurate identification of functions that are confirmed through subsequent treatment validation.

2. What is outcome validity and why is it considered the most important validity dimension?

Outcome validity evaluates whether assessment results lead to effective treatment. An assessment has high outcome validity when interventions designed based on its findings successfully reduce problem behavior. It is considered the most important validity dimension because it directly connects assessment accuracy to clinical outcomes, which is the ultimate purpose of conducting the assessment. An assessment that produces interesting data but does not improve treatment effectiveness has limited clinical value. Outcome validity can be evaluated by tracking whether function-based interventions produce the expected behavior change, with successful treatment serving as confirmation that the assessment correctly identified the maintaining variables.

3. How do sensitivity and specificity differ in evaluating functional assessment?

Sensitivity measures the ability to correctly identify a function when it is actually present, essentially the true positive rate. An assessment with high sensitivity for escape-maintained behavior will correctly identify escape as a function in most cases where it is truly operating. Specificity measures the ability to correctly identify when a function is absent, the true negative rate. An assessment with high specificity will rarely produce false positive identifications. The two metrics are complementary: high sensitivity means few missed functions (low false negatives) while high specificity means few incorrect identifications (low false positives). Most assessment methods involve tradeoffs between these two properties.
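The true positive and true negative rates described above can be made concrete with a little arithmetic. The sketch below is illustrative only: the function name and the ten hypothetical cases (screening results vs. functions later confirmed by a full functional analysis) are assumptions, not data from the presentation.

```python
def sensitivity_specificity(predicted, confirmed):
    """Compute sensitivity and specificity for one candidate function.

    predicted: list of bools -- did the screening assessment flag the function?
    confirmed: list of bools -- was the function confirmed (e.g., by a full FA)?
    """
    tp = sum(p and c for p, c in zip(predicted, confirmed))          # true positives
    tn = sum(not p and not c for p, c in zip(predicted, confirmed))  # true negatives
    fp = sum(p and not c for p, c in zip(predicted, confirmed))      # false positives
    fn = sum(not p and c for p, c in zip(predicted, confirmed))      # false negatives
    sensitivity = tp / (tp + fn) if (tp + fn) else float("nan")
    specificity = tn / (tn + fp) if (tn + fp) else float("nan")
    return sensitivity, specificity

# Hypothetical: ten cases screened for an escape function
predicted = [True, True, False, True, False, False, True, False, True, False]
confirmed = [True, True, True,  True, False, False, False, False, True, False]
sens, spec = sensitivity_specificity(predicted, confirmed)
# One missed function (false negative) and one false alarm (false positive),
# so both sensitivity and specificity come out to 0.80 here.
```

Note the tradeoff the paragraph describes: a looser screening criterion raises sensitivity at the cost of more false positives, and vice versa.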

4. What is the difference between synthesized and isolated reinforcement contingencies in functional analysis?

Isolated contingencies test each potential reinforcer separately across distinct test conditions. For example, an escape condition arranges only escape contingencies for problem behavior, while an attention condition arranges only attention contingencies. Synthesized contingencies combine multiple reinforcement variables within a single test condition to better represent the complex contingency arrangements found in natural environments. For example, a synthesized condition might arrange both escape and attention as consequences for problem behavior. Isolated contingencies provide cleaner experimental control and may be easier to interpret, while synthesized contingencies may better capture behavior maintained by multiple variables or behavior that only occurs under complex environmental conditions.

5. What are the validity limitations of indirect functional assessment methods?

Indirect methods such as interviews and rating scales rely on informant report, which introduces several validity concerns. Informants may have limited observation of the behavior, leading to incomplete information. They may be influenced by their own interpretations and biases when describing behavioral patterns. They may have difficulty distinguishing between different behavioral functions, leading to inaccurate function identification. Interrater agreement on indirect assessments is often moderate at best. These limitations mean that indirect methods should generally be used as hypothesis-generating tools rather than definitive assessments, with results confirmed through direct observation or experimental methods before guiding intervention design.

6. How does descriptive assessment validity compare to functional analysis validity?

Descriptive assessment, which involves direct observation and recording of antecedents, behaviors, and consequences in the natural environment, generally has stronger validity than indirect methods because it relies on direct observation rather than informant report. However, it has weaker validity than functional analysis because it is correlational rather than experimental. Descriptive data show which environmental events co-occur with behavior, but they cannot establish that those events are causally related to behavior. Variables that precede or follow behavior may be coincidental rather than functional. For these reasons, descriptive assessment is best used to supplement indirect assessment in generating hypotheses that can be tested through experimental functional analysis.
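One common way to quantify a descriptive-assessment correlation is to compare the probability that a consequence follows the behavior with its background probability in the same observation. A minimal sketch, assuming interval-recorded data; the function name and the interval values are hypothetical:

```python
def conditional_probability(events):
    """events: list of (behavior_occurred, consequence_followed) per interval.

    Returns P(consequence | behavior) and P(consequence | no behavior).
    A large gap between the two suggests a possible contingency; a small
    gap suggests the consequence may be coincidental rather than functional.
    """
    with_behavior = [c for b, c in events if b]
    without_behavior = [c for b, c in events if not b]
    p_given_behavior = sum(with_behavior) / len(with_behavior) if with_behavior else float("nan")
    p_given_no_behavior = sum(without_behavior) / len(without_behavior) if without_behavior else float("nan")
    return p_given_behavior, p_given_no_behavior

# Hypothetical 10-interval observation: did attention follow each interval?
events = [(True, True), (True, True), (True, False), (False, True), (False, False),
          (False, False), (False, True), (True, True), (False, False), (False, False)]
p_b, p_nb = conditional_probability(events)
# Attention follows behavior in 3 of 4 intervals (0.75) but occurs in only
# 2 of 6 intervals without behavior (about 0.33).
```

Even a large probability difference remains correlational, which is why the text recommends confirming such hypotheses through experimental functional analysis.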

7. When should practitioners consider that their functional assessment may have produced invalid results?

Several indicators suggest potential validity problems: assessment results that are ambiguous or show undifferentiated responding across conditions, results that conflict with information from other assessment methods, treatment designed based on assessment results that fails to produce expected behavior change, behavior that changes in unexpected ways following intervention, and clinical observations that are inconsistent with the identified function. When any of these indicators are present, practitioners should consider reassessment rather than assuming the initial results are correct. The willingness to question assessment validity and pursue additional data when warranted is a hallmark of competent clinical practice.

8. How does multiply maintained behavior affect functional assessment validity?

Multiply maintained behavior, where a single response is reinforced by more than one consequence, presents significant validity challenges. Functional analyses that test each function in isolation may fail to detect functions that only operate in combination with other reinforcers. Assessment results may show elevated responding across multiple conditions, making it difficult to determine whether the behavior is multiply maintained or whether the assessment has poor discriminant validity. Synthesized contingencies may be particularly useful for detecting multiply maintained behavior because they allow multiple reinforcers to operate simultaneously. When multiply maintained behavior is suspected, practitioners should design comprehensive interventions that address all identified functions rather than targeting a single function.

9. What role does treatment validation play in confirming assessment validity?

Treatment validation is the process of using intervention outcomes to confirm that the functional assessment accurately identified the maintaining variables. When a function-based intervention successfully reduces problem behavior, this provides strong evidence that the assessment results were valid. When treatment fails, this may indicate that the assessment produced invalid results and that the actual maintaining variables were not correctly identified. Treatment validation should be a routine component of clinical practice, with practitioners monitoring intervention outcomes closely and being prepared to reassess when outcomes do not match expectations. This iterative process of assessment, treatment, and validation produces increasingly accurate understanding of the variables controlling behavior.
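A simple metric often used when monitoring intervention outcomes is percent reduction from baseline. The sketch below uses hypothetical session rates; the function name and numbers are illustrative, not from the presentation:

```python
def percent_reduction(baseline_rates, treatment_rates):
    """Percent reduction in mean response rate from baseline to treatment."""
    baseline_mean = sum(baseline_rates) / len(baseline_rates)
    treatment_mean = sum(treatment_rates) / len(treatment_rates)
    return 100 * (baseline_mean - treatment_mean) / baseline_mean

baseline = [12.0, 10.0, 14.0]   # responses per minute, hypothetical baseline sessions
treatment = [3.0, 2.0, 1.0]     # responses per minute under the function-based intervention
reduction = percent_reduction(baseline, treatment)
# Mean rate drops from 12 to 2 responses per minute, a reduction of about 83%.
```

A large, sustained reduction supports the assessment's validity; a small or unstable one is the kind of outcome that should prompt reassessment.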

10. How can practitioners improve the validity of their functional assessments?

Several strategies improve assessment validity: using multiple assessment methods and seeking converging evidence across methods, ensuring that functional analysis conditions adequately represent the contingencies operating in the natural environment, conducting assessments with sufficient session length and number of sessions to produce stable data, controlling for extraneous variables that might confound results, considering the possibility of multiple maintaining variables, conducting interobserver agreement checks to ensure reliable data collection, and using treatment validation to confirm assessment results. Additionally, staying current with the research literature on assessment methodology helps practitioners select the most valid methods and interpret their results with appropriate sophistication.
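The interobserver agreement check mentioned above is, at its simplest, the percentage of intervals on which two observers agree. A minimal sketch of interval-by-interval IOA with hypothetical records (1 = behavior scored, 0 = not scored):

```python
def interval_ioa(obs1, obs2):
    """Interval-by-interval IOA: percent of intervals where both observers agree."""
    agreements = sum(a == b for a, b in zip(obs1, obs2))
    return 100 * agreements / len(obs1)

# Hypothetical 10-interval records from two independent observers
obs1 = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
obs2 = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1]
ioa = interval_ioa(obs1, obs2)
# The observers disagree on one of ten intervals, so IOA is 90%.
```

Low IOA signals a measurement problem that undermines every downstream validity judgment, which is why agreement checks belong in the list of strategies above.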

FREE CEUs

Get CEUs on This Topic — Free

The ABA Clubhouse has 60+ on-demand CEUs including ethics, supervision, and clinical topics like this one. Plus a new live CEU every Wednesday.

60+ on-demand CEUs (ethics, supervision, general)
New live CEU every Wednesday
Community of 500+ BCBAs
100% free to join
Join The ABA Clubhouse — Free →

Earn CEU Credit on This Topic

Ready to go deeper? This course covers this topic with structured learning objectives and CEU credit.

Validity in Functional Assessment — Jeffrey Tiger · 3 BACB Ethics CEUs · $30

Take This Course →
📚 Browse All 60+ Free CEUs — ethics, supervision & clinical topics in The ABA Clubhouse

Research Explore the Evidence

We extended these answers with research from our library — dig into the peer-reviewed studies behind the topic, in plain-English summaries written for BCBAs.

Social Cognition and Coherence Testing

280 research articles with practitioner takeaways

View Research →

Measurement and Evidence Quality

279 research articles with practitioner takeaways

View Research →

Brief Behavior Assessment and Treatment Matching

252 research articles with practitioner takeaways

View Research →

Related Topics

CEU Course: Validity in Functional Assessment

3 BACB Ethics CEUs · $30 · BehaviorLive

Guide: Validity in Functional Assessment — What Every BCBA Needs to Know

Research-backed educational guide with practice recommendations

Decision Guide: Comparing Approaches

Side-by-side comparison with clinical decision framework

CEU Buddy

No scramble. No surprises.

You earn CEUs from a dozen different places. Upload any certificate — from here, your employer, conferences, wherever — and always know exactly where you stand. Learning, Ethics, Supervision, all handled.

Upload a certificate, everything else is automatic Works with any ACE provider $7/mo to protect $1,000+ in earned CEUs
Try It Free for 30 Days →

No credit card required. Cancel anytime.

Clinical Disclaimer

All behavior-analytic intervention is individualized. The information on this page is for educational purposes and does not constitute clinical advice. Treatment decisions should be informed by the best available published research and individualized assessment, and carried out with the informed consent of the client or their legal guardian. Behavior analysts are responsible for practicing within the boundaries of their competence and adhering to the BACB Ethics Code for Behavior Analysts.