By Matt Harrington, BCBA · Behaviorist Book Club · Research-backed answers for behavior analysts
Expert disagreement in visual analysis typically stems from differences in the weight analysts assign to various data characteristics. Some analysts prioritize level changes, others focus more on trend, and still others are more influenced by variability or overlap. Additionally, analysts may differ in their threshold for what constitutes a clinically meaningful change. Training gaps also contribute, as many practitioners learned visual analysis through informal exposure rather than systematic discrimination training. These inconsistencies are addressable through structured training that develops reliable discrimination of each component skill.
The basic components include level, which is the average performance within a phase; trend, which is the direction and rate of change over time; variability, which is the degree of scatter in the data around the level and trend; immediacy of effect, which is how quickly behavior changes when conditions change; overlap between phases, which is the extent to which data in one phase falls within the range of another; and consistency of patterns across similar phases within the design. Each component provides different information about the nature and strength of the intervention effect.
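To make these components concrete, here is a minimal sketch that summarizes one phase's level, trend, and variability from a list of session values. The data are hypothetical, and the choice of an ordinary least-squares slope as the trend estimate is an illustrative assumption, not a prescribed method.

```python
# Minimal sketch: summarizing one phase's level, trend, and variability.
# Session values are hypothetical, not from any real client.
from statistics import mean, median, stdev

phase = [12, 14, 11, 15, 13, 16, 14]  # hypothetical responses per session

level_mean = mean(phase)      # level as central tendency
level_median = median(phase)  # median resists outliers

# Trend as an ordinary least-squares slope over session number.
xs = range(len(phase))
x_bar, y_bar = mean(xs), mean(phase)
slope = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, phase)) / \
        sum((x - x_bar) ** 2 for x in xs)

variability = stdev(phase)  # scatter around the level

print(f"level (mean/median): {level_mean:.1f}/{level_median:.1f}")
print(f"trend (slope per session): {slope:+.2f}")
print(f"variability (SD): {variability:.2f}")
```

Immediacy, overlap, and consistency are between-phase judgments, so they require at least two phases to evaluate; sketches for those appear later in this guide.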
Level is estimated by determining the central tendency of the data points within a phase. The most common approach is to visually identify where the majority of data points cluster along the y-axis. For data with minimal variability, this is straightforward. For more variable data, drawing a horizontal line through the approximate center of the data distribution can help. Some analysts use the mean or median of the data points as a more precise estimate. The key is to characterize the typical performance within the phase before comparing levels across phases to determine whether a meaningful change occurred.
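The hypothetical sketch below illustrates the mean-versus-median choice: a single outlier session pulls the baseline mean away from typical performance while the median stays put.

```python
# Minimal sketch: comparing level across phases. Values are hypothetical.
from statistics import mean, median

baseline = [2, 3, 2, 11, 3, 2]       # note the single outlier session (11)
intervention = [7, 8, 9, 8, 7, 9]

# With an outlier present, the median characterizes typical baseline
# performance better than the mean.
print(f"baseline mean: {mean(baseline):.1f}, median: {median(baseline):.1f}")
print(f"intervention mean: {mean(intervention):.1f}, median: {median(intervention):.1f}")
print(f"level change (medians): {median(intervention) - median(baseline):+.1f}")
```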
Excessive variability occurs when data points are widely scattered with no consistent pattern, making it difficult to determine the level or trend within a phase. When variability is high, changes between phases may be difficult to distinguish from normal fluctuation. As a general guideline, if the range of data within a phase encompasses most or all of the range in the adjacent phase, the overlap makes it challenging to attribute differences to the intervention. In clinical practice, high variability often signals measurement issues, inconsistent implementation, or the influence of uncontrolled environmental variables that should be investigated.
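One way to quantify the overlap described above is the percentage of non-overlapping data (PND), a commonly reported overlap index. The sketch below assumes the goal is to increase behavior and uses hypothetical values; note that PND shares visual analysis's sensitivity to a single extreme baseline point, so it should inform rather than replace inspection of the graph.

```python
# Minimal sketch of the percentage of non-overlapping data (PND).
# Assumes the goal is to increase behavior; values are hypothetical.
def pnd_increase(baseline, intervention):
    """Share of intervention points above the highest baseline point."""
    ceiling = max(baseline)
    above = sum(1 for y in intervention if y > ceiling)
    return 100.0 * above / len(intervention)

baseline = [4, 6, 5, 7, 5]
intervention = [8, 9, 7, 10, 9, 11]

print(f"PND: {pnd_increase(baseline, intervention):.0f}%")  # 83% here
```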
Visual analysis relies on human judgment to evaluate graphed data patterns, while statistical analysis uses mathematical procedures to determine whether observed differences exceed what would be expected by chance. Visual analysis is inherently conservative, identifying only effects large enough to be visually apparent, while statistical methods can detect smaller effects that may not be clinically meaningful. Visual analysis allows for ongoing, dynamic evaluation as data accumulates, while many statistical methods are applied at study completion. The two approaches are complementary rather than mutually exclusive, and some practitioners now use statistical supplements to inform their visual analysis.
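As one example of such a supplement, the sketch below computes non-overlap of all pairs (NAP; Parker & Vannest, 2009), a published nonoverlap index. The data are hypothetical and the index is shown for illustration alongside, not instead of, visual judgment.

```python
# Minimal sketch of non-overlap of all pairs (NAP; Parker & Vannest, 2009).
# Assumes higher values indicate improvement; data are hypothetical.
def nap(baseline, intervention):
    """Proportion of baseline-intervention pairs showing improvement."""
    pairs = [(a, b) for a in baseline for b in intervention]
    score = sum(1.0 if b > a else 0.5 if b == a else 0.0 for a, b in pairs)
    return score / len(pairs)

baseline = [4, 6, 5, 7, 5]
intervention = [8, 9, 7, 10, 9, 11]
print(f"NAP: {nap(baseline, intervention):.2f}")  # 0.5 = chance, 1.0 = no overlap
```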
Immediacy of effect refers to how quickly behavior changes when a condition is introduced or removed. An immediate change, one that occurs within the first few data points of the new condition, provides stronger evidence that the condition change caused the behavior change. A gradual change could be attributable to other factors such as maturation, practice effects, or coincidental environmental changes. In clinical settings, immediacy is often less clear-cut than in controlled research, as implementation of new interventions may require time for technicians to develop fluency, but the general principle that faster changes provide stronger evidence of functional relationships still applies.
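A rough numeric check of immediacy compares the last few points of one phase with the first few points of the next. A three-point window is one common convention, but the sketch below, with its hypothetical data, is illustrative rather than a standard.

```python
# Minimal sketch of an immediacy check: compare the end of one phase with
# the start of the next. Window size and data are illustrative assumptions.
from statistics import mean

def immediacy(prev_phase, next_phase, window=3):
    """Mean change from the end of one phase to the start of the next."""
    return mean(next_phase[:window]) - mean(prev_phase[-window:])

baseline = [5, 4, 6, 5, 5]
intervention = [9, 10, 9, 11, 10]
print(f"immediate change: {immediacy(baseline, intervention):+.1f}")
```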
In alternating treatments designs, visual analysis focuses on the separation between data paths representing different conditions. The analyst examines whether the data paths diverge, with the intervention condition producing consistently different levels of behavior than the comparison condition. Overlap between conditions weakens the evidence for a treatment effect. The analyst should also examine whether the separation is consistent across the data series and whether any trends within conditions might account for the observed differences. These designs present unique visual analysis challenges because the rapid alternation between conditions can introduce sequence effects that must be considered.
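The hypothetical sketch below shows one simple way to examine separation in an alternating treatments design: pairing the conditions by session block, then checking both the average difference and how consistently the treatment path stays above the comparison path.

```python
# Minimal sketch for an alternating treatments design: how often does the
# treatment path stay separated from the comparison path? Data are
# hypothetical and paired by session block.
treatment = [9, 10, 8, 11, 10, 9]
comparison = [5, 6, 7, 5, 6, 6]

diffs = [t - c for t, c in zip(treatment, comparison)]
consistent = sum(d > 0 for d in diffs)

print(f"mean separation: {sum(diffs) / len(diffs):.1f}")
print(f"treatment higher in {consistent}/{len(diffs)} alternations")
```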
Technology tools can supplement but should not replace human visual analysis. Automated tools can calculate effect sizes, generate trend lines, and provide statistical tests that inform the visual analysis process. However, the contextual judgment required to interpret data within clinical settings, including knowledge of implementation challenges, environmental changes, and client-specific factors, remains a distinctly human contribution. The most effective approach combines the consistency of algorithmic analysis with the contextual sensitivity of skilled human judgment. Practitioners should view technology as one input to their analysis rather than a substitute for developing their own visual analysis competence.
Continuous skill development in visual analysis can be achieved through several strategies. Regularly practicing with published data sets where expert consensus exists provides calibration opportunities. Engaging in peer review exercises where colleagues independently analyze the same data and discuss their conclusions helps identify personal biases. Staying current with the visual analysis methodology literature ensures awareness of new developments. Seeking out advanced training workshops or tutorials that focus on challenging data patterns builds proficiency with the ambiguous cases most commonly encountered in applied settings. Incorporating structured visual analysis protocols into daily clinical practice reinforces systematic rather than impressionistic evaluation habits.
Common errors include anchoring on the first or last few data points rather than evaluating the entire phase, being overly influenced by dramatic outlier data points, failing to account for baseline trend when evaluating intervention effects, overweighting level changes while underweighting variability, allowing expectations or wishes about treatment effectiveness to bias interpretation, and making global judgments based on overall graph appearance rather than systematically analyzing each component. Awareness of these common errors enables practitioners to implement self-checks that improve the accuracy and reliability of their analyses.
The ABA Clubhouse has 60+ on-demand CEUs including ethics, supervision, and clinical topics like this one. Plus a new live CEU every Wednesday.
Ready to go deeper? The course below covers this topic with structured learning objectives and CEU credit.
Mastering the Basics of Visual Analysis — CEUniverse · 2 BACB Ethics CEUs · $0
All behavior-analytic intervention is individualized. The information on this page is for educational purposes and does not constitute clinical advice. Treatment decisions should be informed by the best available published research and individualized assessment, and made with the informed consent of the client or their legal guardian. Behavior analysts are responsible for practicing within the boundaries of their competence and adhering to the BACB Ethics Code for Behavior Analysts.