These answers draw in part from “Administering Valid Assessment Tools to Improve Clinical Prescriptions in Applied Behavior Analysis” by Quatiba Davis, M.Ed., BCBA, LABA, LBA, IBA (BehaviorLive), and extend it with peer-reviewed research from our library of 27,900+ ABA research articles. Clinical framing, BACB ethics code references, and cross-links below are synthesized by Behaviorist Book Club.
Validity refers to whether an assessment tool accurately measures what it claims to measure in the population for which it is being used. In ABA, this means examining whether a structured skills assessment like the VB-MAPP or ABLLS-R produces scores that accurately reflect a client's actual behavioral repertoire under the conditions specified in the tool's administration guidelines. A tool can be reliable, producing consistent scores across administrations, without being valid if those consistent scores do not actually represent the construct of interest. Behavior analysts should review the technical manual of any assessment tool they use and consider whether the normative sample, validation studies, and scope of the tool match the specific client they are assessing. Validity is not a binary property; it exists on a continuum and is always population-specific.
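The reliable-but-not-valid distinction can be made concrete with a small numeric sketch. Everything below is hypothetical and purely illustrative (no real tool, client, or normed scores): three administrations agree closely with one another, which looks like good reliability, yet every score sits well below the client's true repertoire, so the tool is not measuring what it claims to.

```python
import statistics

# Hypothetical data, for illustration only: three administrations of the
# same assessment to one client whose "true" repertoire score (established
# by exhaustive direct observation) is assumed to be 80.
true_score = 80
administrations = [68, 70, 69]  # consistent with each other, but systematically low

# Reliability: do repeated administrations agree with one another?
spread = statistics.stdev(administrations)  # small spread -> reliable

# Validity: do the scores reflect the construct of interest?
bias = statistics.mean(administrations) - true_score  # large bias -> not valid

print(f"test-retest spread: {spread:.2f}")        # 1.00 -> highly consistent
print(f"bias vs. true repertoire: {bias:.2f}")    # -11.00 -> consistently wrong
```

The point of the sketch: consistency (low spread) and accuracy (low bias) are separate properties, and only direct comparison against the construct of interest, not agreement between administrations, can reveal the second.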
Start by clarifying the clinical question: are you trying to identify verbal behavior targets, adaptive living skills, social skills, or academic readiness? Different tools were developed for different purposes and different populations. Once you have identified the domain of interest, review the tools available for that domain and compare their normative samples, age ranges, psychometric properties, and ecological validity. Consider whether the tool can be administered in the natural environment or is limited to analog settings. Consider whether it requires specific training to administer. Finally, consider what your client's primary communication modality is — a tool that requires vocal responses will not generate valid data for a client who primarily communicates using AAC. When in doubt, use multiple tools from different paradigms and look for convergent findings.
A skill deficit means the behavior is not in the client's repertoire — they have never reliably emitted it and cannot do so even under optimal conditions. A performance deficit means the behavior exists in the repertoire but is not occurring under current conditions, typically because motivating operations are absent, discriminative stimuli are not present, or competing contingencies are stronger. This distinction matters enormously for assessment because the intervention is completely different: skill deficits require direct instruction and shaping, while performance deficits require changes to the antecedent and consequent conditions maintaining or suppressing the behavior. A valid assessment must be able to distinguish between these two, which typically requires testing under varied motivating conditions and in multiple environments, not just a single structured assessment session.
Most standardized skills assessments used in ABA were developed and normed on predominantly white, English-speaking, middle-class American samples. When administered to clients from different cultural or linguistic backgrounds, these tools may produce scores that reflect cultural unfamiliarity, language differences, or item content that doesn't align with the client's cultural context — not genuine skill deficits. This is a validity problem. The BACB Ethics Code 2.04 addresses the need for culturally responsive services, and this extends to assessment. Practitioners serving culturally or linguistically diverse clients should seek assessments with cross-cultural validity data, use interpreters trained in behavioral assessment when needed, supplement standardized tools with direct observation in naturalistic environments, and interpret scores in the context of the client's cultural background and lived experience.
Generally, no — not without significant caution. Pediatric assessments like the VB-MAPP were designed and normed for young children with early language delays. Applying them to adult learners may produce ceiling effects in some domains, floor effects in others, and item content that is not age-appropriate or ecologically valid for adult life contexts. For adult clients, tools like the AFLS (Assessment of Functional Living Skills), the Vineland Adaptive Behavior Scales, or domain-specific assessments developed for adult populations are more appropriate starting points. If you must use a pediatric tool with an adult client due to the severity of their disabilities, document your rationale clearly and supplement with ecologically valid observational data and caregiver report to contextualize the scores.
Insurance authorization for ABA services almost universally requires assessment data to justify the level of care recommended. This creates a direct link between assessment validity and billing integrity. If assessment data are inaccurate — due to invalid tools, non-standardized administration, or inappropriate population application — any authorization based on those data is built on a flawed foundation. Under BACB Ethics Code 6.01, behavior analysts are required to be truthful and non-deceptive in communications about their services, which includes the assessment data used to justify those services. Practitioners should ensure that the tools they use have been administered as specified, that results are interpreted within their stated limits, and that the clinical prescription derived from assessment data is defensible on clinical grounds — not just administratively convenient.
Assessment documentation should go beyond simply recording scores. A well-documented clinical file includes the tools administered, the rationale for selecting each tool, the administration conditions, any deviations from standardized procedures and the reasons for those deviations, the scores obtained, the limitations of those scores, and the specific targets identified from the assessment and the reasoning connecting assessment data to target selection. This creates a record of clinical reasoning that demonstrates compliance with Ethics Code 2.01 and 2.09, supports defensible billing, and allows other providers to understand the basis for the treatment plan if they take over the case. It also provides a baseline for measuring progress over time.
A ceiling effect occurs when a client scores at or near the maximum of a subscale, meaning the tool cannot differentiate between their performance and a hypothetical perfect performance in that domain. The assessment has no sensitivity at the upper range of that skill area. A floor effect is the opposite — the client scores at or near zero, meaning the tool cannot detect any skill they do have because all items are too advanced. Both effects produce data that are not clinically actionable. When you encounter ceiling or floor effects, the appropriate response is to identify a more sensitive or differently calibrated tool for that domain, supplement with direct observation, or develop individualized criterion-referenced probes that can detect the gradations of performance relevant to that client's programming.
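As a rough sketch of how ceiling and floor effects can be screened for in score data, the snippet below flags subscale scores that fall near either extreme of their range. The subscale names, raw scores, maximums, and the 5% margin are all illustrative assumptions, not values drawn from any published tool's manual; a real margin would depend on the tool's item structure.

```python
# Hypothetical subscale results: (name, raw score, maximum possible score).
# Names and numbers are illustrative assumptions, not from any real tool.
subscales = [
    ("manding", 48, 50),              # near the maximum -> likely ceiling effect
    ("intraverbals", 1, 50),          # near zero -> likely floor effect
    ("listener responding", 27, 50),  # mid-range -> interpretable score
]

def flag_sensitivity_problems(score, maximum, margin=0.05):
    """Flag scores the tool cannot differentiate at either extreme of its range."""
    if score >= maximum * (1 - margin):
        return "ceiling: supplement with a more sensitive or advanced probe"
    if score <= maximum * margin:
        return "floor: supplement with a more basic probe or direct observation"
    return "in range: score is clinically interpretable"

for name, score, maximum in subscales:
    print(f"{name}: {flag_sensitivity_problems(score, maximum)}")
```

A flag of this kind is only a prompt for clinical judgment: it tells you where the tool has lost sensitivity, not what the client can actually do in that domain.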
The answer depends on the pace of the client's skill acquisition and the purpose of the assessment. For clients making rapid progress, re-administration every 6 months may be appropriate to identify new targets and measure mastery. For clients with slower rates of acquisition or for maintenance-focused programming, annual re-administration may be sufficient. Insurance payers often specify re-assessment intervals in their authorization criteria, but clinical judgment should drive the schedule rather than administrative timelines alone. Between formal re-administrations, practitioners should use continuous data collection to monitor skill acquisition and mastery, which provides ongoing, fine-grained information about the client's repertoire without the ceiling and floor limitations of structured tools.
Training requirements vary by tool. Some assessments, like the Vineland Adaptive Behavior Scales, specify that they must be administered by licensed professionals with specific graduate-level training. Others, like the VB-MAPP or ABLLS-R, do not have formal certification requirements but include detailed administration manuals that practitioners should study thoroughly before use. At minimum, anyone administering a structured skills assessment should have read the complete administration manual, practiced administering the tool under supervision, and received feedback on their administration technique and scoring. Ethics Code 1.05 requires practicing within scope of competence, and administering an assessment you have not been trained to use properly violates that standard — regardless of your credential level.
The ABA Clubhouse has 60+ on-demand CEUs including ethics, supervision, and clinical topics like this one. Plus a new live CEU every Wednesday.
Ready to go deeper? This course covers this topic with structured learning objectives and CEU credit.
Administering Valid Assessment Tools to Improve Clinical Prescriptions in Applied Behavior Analysis — Quatiba Davis · 1 BACB Supervision CEU · $30
We extended these answers with research from our library — dig into the peer-reviewed studies behind the topic, in plain-English summaries written for BCBAs.
All behavior-analytic intervention is individualized. The information on this page is for educational purposes and does not constitute clinical advice. Treatment decisions should be informed by the best available published research and individualized assessment, and made with the informed consent of the client or their legal guardian. Behavior analysts are responsible for practicing within the boundaries of their competence and adhering to the BACB Ethics Code for Behavior Analysts.