This guide draws in part from “Validity in Functional Assessment” by Jeffrey Tiger, Ph.D., BCBA (BehaviorLive), and extends it with peer-reviewed research from our library of 27,900+ ABA research articles. Citations, clinical framing, and cross-links below are synthesized by Behaviorist Book Club.
View the original presentation →

Validity in functional assessment is a critically important but often underexamined dimension of behavior-analytic practice. While behavior analysts are trained extensively in the procedures of functional assessment, from indirect methods through descriptive analysis and functional analysis, the psychometric properties of these procedures receive comparatively less attention. Understanding validity concepts as they apply to functional assessment enables practitioners to critically evaluate their assessment outcomes and make better-informed clinical decisions.
The clinical significance of assessment validity cannot be overstated. Every intervention designed by a behavior analyst is built upon the foundation of their functional assessment. If the assessment produces invalid results, meaning it fails to accurately identify the variables maintaining problem behavior, then the resulting intervention will be misaligned with the actual behavioral function. Misaligned interventions not only fail to produce behavior change; they may worsen the problem by introducing irrelevant contingencies that complicate the behavioral picture.
Discriminant validity refers to the ability of an assessment to distinguish between different behavioral functions. A functional assessment with high discriminant validity can reliably differentiate between behavior maintained by escape, attention, tangible access, and automatic reinforcement. When discriminant validity is poor, assessment results may suggest one function when the behavior is actually maintained by another, leading to function-based interventions that target the wrong maintaining variable.
Outcome validity evaluates whether assessment results lead to effective treatment. An assessment has high outcome validity when the interventions designed based on its findings successfully reduce problem behavior. Outcome validity represents the ultimate test of assessment usefulness because it directly connects assessment accuracy to clinical outcomes. An assessment that produces clear functional results but does not lead to effective intervention has limited clinical utility.
Sensitivity and specificity are complementary measures that evaluate different aspects of assessment accuracy. Sensitivity refers to the ability of an assessment to correctly identify a function when it is present, such as correctly detecting escape-maintained behavior when behavior is truly maintained by escape. Specificity refers to the ability to correctly identify when a function is absent, such as correctly ruling out attention maintenance when behavior is not maintained by attention. High sensitivity ensures that true functions are detected, while high specificity ensures that false functions are not erroneously identified.
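The definitions above can be made concrete with a small sketch. The following example (all counts hypothetical, invented purely for illustration) computes sensitivity and specificity from a 2×2 comparison of an assessment's conclusions against a criterion method:

```python
# Sensitivity and specificity from a 2x2 confusion table, comparing an
# assessment's functional conclusions against a criterion (e.g., a full
# functional analysis). All numbers below are hypothetical.

def sensitivity(true_pos, false_neg):
    """Proportion of truly present functions the assessment detected."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    """Proportion of truly absent functions the assessment ruled out."""
    return true_neg / (true_neg + false_pos)

# Hypothetical example: an indirect assessment evaluated for escape
# maintenance across 20 cases.
tp, fn = 8, 2   # escape truly present: detected in 8 of 10 cases
tn, fp = 7, 3   # escape truly absent: correctly ruled out in 7 of 10 cases

print(sensitivity(tp, fn))  # 0.8
print(specificity(tn, fp))  # 0.7
```

In words: this hypothetical tool detects escape maintenance 80% of the time it is truly operating, but also "finds" escape in 30% of cases where it is absent.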
The distinction between synthesized and isolated reinforcement contingencies during functional analyses has important implications for assessment validity. Synthesized contingencies combine multiple reinforcement variables within a single test condition, potentially providing a more ecologically valid representation of the natural environment. Isolated contingencies test each potential reinforcer separately, providing cleaner experimental control. Each approach has strengths and weaknesses in terms of the validity dimensions discussed above.
The concept of validity has a long history in psychological measurement, originating in educational and psychological testing and gradually expanding to encompass all forms of assessment. Traditional psychometric validity is typically discussed in terms of content validity, criterion-related validity, and construct validity, though more contemporary frameworks view validity as a unitary concept encompassing all evidence that supports or undermines the interpretation of assessment results.
Behavior analysts have historically approached assessment through a functional lens that prioritizes the identification of controlling variables over the measurement of underlying traits or constructs. This functional orientation means that some traditional validity concepts require translation before they can be meaningfully applied to behavioral assessment. For example, construct validity, which evaluates whether an assessment measures the theoretical construct it purports to measure, takes on a different character in behavior analysis, where the construct of interest is the functional relationship between behavior and its controlling variables rather than a hypothetical internal trait.
Functional analysis methodology emerged as a powerful assessment tool precisely because it provides experimental control over the variables hypothesized to maintain problem behavior. By manipulating antecedent and consequence variables across test conditions while measuring behavior, functional analysis produces data that support causal inferences about behavioral function. This experimental rigor gives functional analysis strong internal validity, but questions remain about external validity, or the degree to which assessment conditions represent the contingencies operating in the individual's natural environment.
The proliferation of functional assessment methods over the past several decades has created a diverse toolkit for behavior analysts, ranging from brief indirect assessments to extended multi-session functional analyses. Each method differs in its psychometric properties, and practitioners must understand these differences to select the assessment approach most appropriate for each clinical situation.
Indirect assessment methods such as interviews and rating scales are efficient and accessible but generally show lower validity than direct methods. Their primary limitation is reliance on informant report, which may be inaccurate due to observer bias, limited observation opportunities, or difficulty distinguishing between different behavioral functions. Despite these limitations, indirect methods play an important role in generating initial hypotheses about behavioral function that can be confirmed through more rigorous methods.
Descriptive assessment through direct observation addresses some limitations of indirect methods by collecting data on actual behavioral events in natural settings. However, descriptive methods are correlational rather than experimental, meaning they identify associations between behavior and environmental events without establishing causal relationships. This distinction is critical because correlation does not imply causation, and the environmental events that precede or follow behavior may not be the variables that maintain it.
The ongoing development of modified and abbreviated functional analysis procedures has been driven by the desire to make experimental analysis more feasible and efficient. While these modifications often maintain acceptable validity, practitioners should be aware that each modification involves tradeoffs between efficiency and the thoroughness of the assessment. Understanding these tradeoffs requires familiarity with the validity concepts that this course addresses.
Understanding validity concepts transforms how behavior analysts approach assessment selection, interpretation, and the connection between assessment and treatment. Practitioners who grasp these concepts are better equipped to select appropriate assessment methods, interpret results with appropriate confidence, and design interventions that are well-matched to actual behavioral function.
Assessment selection should be informed by consideration of which validity dimensions are most important for the clinical situation at hand. When the primary clinical question is whether behavior is maintained by one function versus another, discriminant validity is paramount, and practitioners should select methods with demonstrated ability to differentiate between functions. When the primary concern is ensuring that the assessment leads to an effective intervention, outcome validity takes precedence, and practitioners should select methods with established links to treatment success.
Interpreting assessment results requires understanding the limitations of the method used. When indirect assessments suggest a particular function, practitioners should recognize that the probability of error is relatively high and should seek confirmation through more rigorous methods before designing function-based interventions. When functional analysis produces clear differentiated responding across conditions, practitioners can have greater confidence in the results but should still consider whether the assessment conditions adequately represented natural contingencies.
The evaluation of sensitivity and specificity has direct implications for clinical decision-making. An assessment with high sensitivity but low specificity will correctly identify true functions but will also produce false positives, suggesting functions that are not actually operating. An assessment with high specificity but low sensitivity will rarely produce false positives but may miss true functions. Understanding these tradeoffs helps practitioners calibrate their confidence in assessment results and determine when additional assessment is warranted.
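The false-positive tradeoff described above also depends on how common a function is among referred cases. A brief sketch (all figures hypothetical) shows how Bayes' rule converts sensitivity, specificity, and a base rate into the probability that a positive result is actually correct:

```python
# Positive predictive value (PPV): P(function truly present | assessment
# says present), computed via Bayes' rule. All inputs are hypothetical
# illustration values, not published estimates for any real instrument.

def ppv(sens, spec, base_rate):
    """Probability that a positive assessment result is a true positive."""
    true_pos = sens * base_rate
    false_pos = (1 - spec) * (1 - base_rate)
    return true_pos / (true_pos + false_pos)

# Suppose a screening tool has 90% sensitivity but only 60% specificity,
# and the function in question maintains behavior in 20% of referred cases.
print(round(ppv(0.90, 0.60, 0.20), 2))  # 0.36
```

Even with 90% sensitivity, roughly two of every three positive results in this scenario would be false positives, which is why converging evidence matters before committing to a function-based intervention.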
The choice between synthesized and isolated contingencies in functional analysis represents a significant clinical decision with validity implications. Synthesized contingencies, in which multiple reinforcers are available within a single test condition, may better represent the complex contingency arrangements operating in natural environments. However, they may also reduce discriminant validity by making it difficult to isolate which specific variable is maintaining behavior. Isolated contingencies provide cleaner experimental control and may produce clearer discriminant results, but they may miss behavior that is multiply maintained or that only occurs under complex contingency arrangements.
The assessment of multiply maintained behavior presents particular validity challenges. When behavior serves more than one function, assessments that evaluate each function in isolation may fail to capture the full picture. Practitioners should consider the possibility of multiple functions when assessment results are unclear or when function-based interventions targeting a single function produce only partial behavior reduction.
Following up initial assessment with treatment-based validation provides the ultimate test of assessment validity. When an intervention designed based on assessment results successfully reduces problem behavior, this provides strong evidence that the assessment correctly identified the maintaining variables. When interventions fail, practitioners should consider the possibility that the assessment was invalid and should re-evaluate rather than simply intensifying the current approach.
The documentation of assessment validity considerations strengthens clinical records and supports informed decision-making. When practitioners note the validity characteristics of their chosen assessment method, the confidence they place in their results, and the rationale for any decisions to seek additional assessment data, they create a record of thoughtful clinical reasoning that supports both current treatment and future clinical decisions.
The ABA Clubhouse has 60+ on-demand CEUs including ethics, supervision, and clinical topics like this one. Plus a new live CEU every Wednesday.
Validity in functional assessment is directly connected to several ethical obligations outlined in the Ethics Code for Behavior Analysts (2022). Practitioners who use assessment methods with poor validity or who fail to consider the validity of their assessment results may inadvertently violate ethical standards related to assessment quality, treatment effectiveness, and client welfare.
Code 3.01 (Behavior-Analytic Assessment) requires behavior analysts to conduct assessments that are appropriate to the situation and that produce results sufficient to inform clinical decision-making. Using assessment methods with known validity limitations without acknowledging those limitations or supplementing with additional assessment represents a failure to meet this standard. Practitioners should select assessment methods based on their psychometric properties as well as their practical feasibility.
Code 2.01 (Providing Effective Treatment) connects directly to outcome validity. When assessments produce invalid results, the interventions based on those results are likely to be ineffective. By attending to assessment validity, practitioners increase the probability that their treatment recommendations will produce meaningful behavior change, fulfilling their obligation to provide effective services.
Code 2.14 (Selecting, Designing, and Implementing Behavior-Change Interventions) requires that interventions be based on assessment. This provision presumes that the assessment produces valid results; if it does not, the subsequent intervention cannot be said to be truly assessment-based. Practitioners who design function-based interventions from assessments with questionable validity are building treatment on an unstable foundation.
Code 1.05 (Practicing Within Scope of Competence) requires behavior analysts to maintain competence in their professional activities. Understanding the validity properties of the assessment methods one uses is a fundamental competency for behavior analysts. Practitioners who use functional assessment methods without understanding their strengths and limitations may be practicing beyond their competence in a meaningful sense.
Code 2.15 (Minimizing Risk of Behavior-Change Interventions) is served by valid assessment because inaccurate functional assessments can lead to interventions that are not only ineffective but potentially harmful. An intervention that applies escape extinction when behavior is actually maintained by automatic reinforcement, for example, may increase distress without addressing the actual maintaining variable. Valid assessment reduces this risk by ensuring that interventions target the correct function.
The ethical obligation to consume and evaluate research applies to the assessment literature specifically. Practitioners should stay current with research on the validity of functional assessment methods, including studies that identify limitations of commonly used procedures. This ongoing engagement with the literature supports informed assessment selection and interpretation.
Code 3.03 (Accepting Clients) is relevant because practitioners should consider whether they have the assessment capabilities needed to serve a prospective client effectively. If a client's needs require assessment methods that the practitioner is not competent to implement, ethical practice requires either developing the needed competencies, seeking consultation, or referring the client to a practitioner with appropriate expertise.
Transparency with clients and caregivers about assessment validity is an ethical obligation. When discussing assessment results and the treatment recommendations that follow, practitioners should communicate appropriate levels of confidence in their findings rather than presenting assessment results as definitively accurate. This transparency supports truly informed consent to treatment and sets appropriate expectations for treatment outcomes.
Decision-making about functional assessment should be guided by a systematic evaluation of which methods provide the best balance of validity, feasibility, and clinical utility for each individual case. No single assessment method is optimal for all situations, and practitioners must develop the judgment to select the most appropriate approach based on the specific clinical demands they face.
The assessment selection process should begin with consideration of the clinical question. If the primary question is whether problem behavior is functionally related to specific environmental variables, a functional analysis provides the strongest test. If the question is more exploratory, such as identifying which of several potential functions is most likely, a combination of indirect and descriptive methods may provide sufficient information to guide initial treatment while conserving clinical resources.
When interpreting functional analysis results, practitioners should evaluate both the pattern of responding across conditions and the magnitude of the differences. Clear differentiation between test and control conditions with large effect sizes provides strong evidence of functional control. Ambiguous results with overlapping data paths or small differences between conditions should be interpreted cautiously, as they may reflect weak functional relationships, multiple controlling variables, or assessment conditions that did not adequately capture the natural contingencies.
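Visual inspection remains the primary method for judging differentiation, but a simple quantitative check can supplement it. The sketch below (session data invented for illustration; this is not a published structured-criteria algorithm) compares mean response rates and computes a nonoverlap-style index between a test condition and the control condition:

```python
# Illustrative supplement to visual inspection of FA graphs: compare mean
# response rates across conditions and compute the share of test-condition
# sessions that exceed the highest control-condition session (a simple
# nonoverlap index). All session values below are hypothetical.

def mean(xs):
    return sum(xs) / len(xs)

def nonoverlap(test, control):
    """Proportion of test sessions above the maximum control session."""
    ceiling = max(control)
    return sum(1 for x in test if x > ceiling) / len(test)

# Hypothetical responses per minute across five sessions per condition.
escape_test = [2.4, 3.1, 2.8, 3.5, 2.9]
control = [0.2, 0.5, 0.1, 0.4, 0.3]

print(round(mean(escape_test) - mean(control), 2))  # large mean difference
print(nonoverlap(escape_test, control))             # 1.0, complete nonoverlap
```

Complete nonoverlap with a large mean difference, as in this fabricated example, is the pattern that supports a confident functional conclusion; overlapping paths or small differences call for the cautious interpretation described above.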
The evaluation of synthesized versus isolated functional analysis conditions should consider the specific behavior and the complexity of the natural environment. For behavior that appears to be maintained by a single variable in a relatively simple environment, isolated conditions may be sufficient and may provide clearer results. For behavior that occurs in complex environments with multiple potential reinforcers operating simultaneously, synthesized conditions may provide a more ecologically valid test.
Sensitivity considerations should inform decisions about when additional assessment is needed. If the assessment method used has known limitations in sensitivity for certain behavioral functions, and the clinical picture is consistent with those functions, additional assessment may be warranted even if initial results were negative. For example, if indirect assessment strongly suggests automatic reinforcement but the alone condition of the functional analysis does not show elevated responding, the practitioner should consider whether that condition adequately captured the relevant sensory consequences before concluding that automatic reinforcement is not operating.
Specificity concerns should prompt caution when assessment results suggest a particular function. If the method used has known false positive rates for certain functions, the practitioner should seek converging evidence from additional assessment methods before committing to a function-based intervention. Converging evidence from multiple methods provides stronger support for functional conclusions than results from any single method alone.
Treatment validation represents the final and most important validity check. After designing and implementing an intervention based on assessment results, practitioners should monitor behavior closely to determine whether the intervention produces the expected effects. Successful behavior reduction provides strong evidence that the assessment accurately identified the maintaining variables. Failure to reduce behavior should prompt reassessment rather than simply increasing the intensity of the current intervention.
Documenting the assessment decision-making process, including the rationale for method selection, the validity considerations that influenced interpretation, and the plans for treatment validation, creates a record of thoughtful clinical practice that serves both the current case and the practitioner's ongoing professional development.
Understanding validity in functional assessment elevates your practice from procedural competence to clinical sophistication. Knowing how to conduct a functional analysis is necessary but insufficient; understanding what the results mean, how confident you should be in them, and when additional assessment is needed is what distinguishes expert clinical practice.
Develop the habit of evaluating the validity implications of every assessment you conduct. Before beginning an assessment, consider which validity dimensions are most important for the clinical question at hand. After completing the assessment, evaluate your results in terms of discriminant validity, sensitivity, and specificity before drawing functional conclusions. After implementing treatment, use outcome data as a validity check on your assessment.
Be transparent with your team and with families about the confidence you place in your assessment results. When results are clear and well-supported, communicate that confidence. When results are ambiguous or when the assessment method used has known limitations, acknowledge that uncertainty and explain your plan for gathering additional information. This transparency builds trust and sets appropriate expectations.
Stay current with the research literature on functional assessment validity. New studies regularly contribute to our understanding of which methods work best under which conditions, and this knowledge directly affects your clinical decision-making. Prioritize professional development in assessment methodology as a core competency rather than a peripheral topic.
Consider the tradeoffs between synthesized and isolated contingencies thoughtfully for each case. Neither approach is universally superior; the best choice depends on the complexity of the natural environment, the specificity of the clinical question, and the resources available for extended assessment. Developing fluency with both approaches expands your clinical toolkit.
Ready to go deeper? The full course covers this topic in detail, with structured learning objectives and CEU credit.
Validity in Functional Assessment — Jeffrey Tiger · 3 BACB Ethics CEUs · $30
Take This Course →

We extended this guide with research from our library — dig into the peer-reviewed studies behind the topic, in plain-English summaries written for BCBAs.
You earn CEUs from a dozen different places. Upload any certificate — from here, your employer, conferences, wherever — and always know exactly where you stand. Learning, Ethics, Supervision, all handled.
No credit card required. Cancel anytime.
All behavior-analytic intervention is individualized. The information on this page is for educational purposes and does not constitute clinical advice. Treatment decisions should be informed by the best available published research, individualized assessment, and obtained with the informed consent of the client or their legal guardian. Behavior analysts are responsible for practicing within the boundaries of their competence and adhering to the BACB Ethics Code for Behavior Analysts.