This cluster shows that different analysts often reach different conclusions from the same ABA graphs. It recommends that BCBAs back up visual inspection with simple quantitative checks, such as Cohen's kappa, to confirm that reviewers agree on what the data show. Graphs with steep trends or lots of ups and downs draw the most disagreement, so cleaner graphing also helps. Following these tips lets you trust your visual check and be more confident you are making the right decision for your client.
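As a minimal sketch of the kind of math check mentioned above, here is Cohen's kappa computed by hand for two analysts' verdicts on a set of graphs. The labels and data are illustrative, not from the source; kappa corrects raw agreement for the agreement expected by chance.

```python
def cohens_kappa(rater_a, rater_b):
    """Kappa = (p_o - p_e) / (1 - p_e): observed vs. chance agreement."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    labels = sorted(set(rater_a) | set(rater_b))
    # Observed agreement: fraction of items both raters labeled the same
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of each rater's marginal rates, summed
    p_e = sum((rater_a.count(l) / n) * (rater_b.count(l) / n) for l in labels)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical verdicts from two analysts on six graphs
a = ["effect", "effect", "no effect", "effect", "no effect", "effect"]
b = ["effect", "no effect", "no effect", "effect", "no effect", "effect"]
print(round(cohens_kappa(a, b), 2))  # 0.67: raw agreement is 83%, kappa is lower
```

Notice that the two analysts agree on 5 of 6 graphs (83%), but kappa comes out lower because some of that agreement would happen by chance alone.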
Common questions from BCBAs and RBTs
Graphs with high variability, unusual axis scaling, or unclear phase-change lines make agreement harder. Research shows that even trained analysts disagree frequently, especially on graphs from functional analyses. Adding a numeric reliability check helps reduce this problem.
Interobserver agreement (IOA) measures how often two data collectors record the same thing at the same time. High IOA means your data are reliable. Low IOA means the behavior is being defined or recorded differently, which can make your intervention decisions unreliable.
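One common way to quantify this is interval-by-interval IOA: the percentage of observation intervals in which both data collectors recorded the same thing. The sketch below uses made-up interval data for two observers.

```python
def interval_ioa(obs1, obs2):
    """Percent of intervals where both observers recorded the same value."""
    assert len(obs1) == len(obs2)
    agreements = sum(a == b for a, b in zip(obs1, obs2))
    return 100 * agreements / len(obs1)

# Hypothetical data: each entry is one interval, 1 = behavior occurred
observer_1 = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
observer_2 = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1]
print(f"{interval_ioa(observer_1, observer_2):.0f}% IOA")  # 90% IOA
```

Here the observers disagree on a single interval out of ten, giving 90% IOA; many teams treat 80 to 90% as a minimum acceptable level, though the exact threshold is a judgment call.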
Masked visual analysis means the analyst reviews a graph without knowing which phase is baseline and which is treatment. This removes expectation bias. Research shows it produces similar reliability to standard analysis and can catch cases where analysts are being influenced by what they hope to see.
A moving average smooths out day-to-day variability by averaging a few data points at a time. This helps you spot longer trends and cyclical patterns that are hidden by normal noise in daily data.
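A trailing moving average can be sketched in a few lines. The window size and daily counts below are illustrative; a wider window smooths more but responds to real changes more slowly.

```python
def moving_average(data, window=3):
    """Trailing moving average: mean of the last `window` data points."""
    out = []
    for i in range(window - 1, len(data)):
        out.append(sum(data[i - window + 1 : i + 1]) / window)
    return out

# Hypothetical daily frequency counts with a lot of day-to-day bounce
daily = [4, 9, 3, 8, 2, 10, 4]
print([round(v, 1) for v in moving_average(daily)])  # [5.3, 6.7, 4.3, 6.7, 5.3]
```

The raw data swing between 2 and 10, while the smoothed series stays near 5 to 7, making any underlying trend much easier to judge by eye.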
Use F1 or precision-recall when the behavior is rare or when occurrence and non-occurrence rates are highly imbalanced. Simple percentage agreement can be artificially high in those cases because the many intervals where both observers record non-occurrence mask disagreement on the actual occurrences.
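The contrast can be shown with a small sketch, assuming made-up data: 100 intervals, a rare behavior, and one observer's record treated as the reference for computing F1 over scored occurrences.

```python
def percent_agreement(a, b):
    """Simple percentage agreement across all intervals."""
    return 100 * sum(x == y for x, y in zip(a, b)) / len(a)

def f1_on_occurrences(reference, observer):
    """F1 over occurrence intervals (1s), treating one record as reference."""
    tp = sum(r == 1 and o == 1 for r, o in zip(reference, observer))
    fp = sum(r == 0 and o == 1 for r, o in zip(reference, observer))
    fn = sum(r == 1 and o == 0 for r, o in zip(reference, observer))
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical: observer A scores 4 occurrences, observer B scores 2,
# and they overlap on only 1 of them
a = [1, 1, 1, 1] + [0] * 96
b = [1, 0, 0, 0, 0, 1] + [0] * 94
print(percent_agreement(a, b))            # 96.0: looks excellent
print(round(f1_on_occurrences(a, b), 2))  # 0.33: poor agreement on occurrences
```

Percentage agreement looks excellent because 95 of the 100 intervals are agreed-upon non-occurrences, while F1 reveals that the observers agreed on only one of the behavior's actual occurrences.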