Factors Impacting Data Reliability: Rate and Total Behaviors
High-rate behavior and long data sheets both undermine new-staff reliability; reduce one or the other.
01 Research in Context
What this study did
Workman et al. (2025) watched newly trained staff record behavior, varying two things: how many behaviors the data sheet asked them to track at once (observer load) and how fast the target behavior occurred (response rate).
The goal was to pinpoint when rookie observers start producing shaky data. No clients were treated; this was a pure measurement study.
What they found
More items on the data sheet lowered reliability, and very fast responding lowered it too. The team did not report specific thresholds, but the pattern was clear: keep the sheet short and the rate manageable, or expect errors.
How this fits with other research
LeBlanc et al. (2003) showed that rich reinforcement drives both response rate and accuracy up. Workman et al. show the flip side: that same high rate later hurts the person holding the clicker. One variable, two opposite headaches.
K et al. (1994, 1995, 1996) tracked how responding drifts within a session when schedules are dense. Their lab work warns that rate jumps around; Workman warns that such jumps are hard for new staff to count.
Together the papers make a simple chain: dense schedules raise rate, raised rate invites observer error.
Why it matters
When you write a program with five target behaviors or a vocal behavior that can top 200 responses per minute, split the job: give the new RBT a single target, add a second observer, or use a timer to break the session into short scored chunks. Clean data early, clean data always.
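Reliability in this literature is typically quantified as interobserver agreement (IOA). As a minimal sketch (not from the Workman et al. paper itself), the standard total-count IOA formula is the smaller observer's count divided by the larger, times 100; breaking a session into timed intervals and averaging the per-interval scores, as the chunking advice above suggests, gives a stricter check:

```python
def total_count_ioa(count_a: int, count_b: int) -> float:
    """Total-count IOA: smaller count divided by larger count, as a percentage."""
    if count_a == count_b:
        return 100.0  # includes the case where both observers record zero
    return min(count_a, count_b) / max(count_a, count_b) * 100


def mean_count_per_interval_ioa(a_intervals: list[int], b_intervals: list[int]) -> float:
    """Score each short timed interval separately, then average.

    Stricter than one session-wide count: over- and under-counts in
    different intervals can no longer cancel each other out.
    """
    scores = [total_count_ioa(a, b) for a, b in zip(a_intervals, b_intervals)]
    return sum(scores) / len(scores)


# Example: a new tech tallies 180 responses while a trained observer tallies 200.
print(round(total_count_ioa(180, 200), 1))  # 90.0
```

The interval version rewards exactly the practice recommended above: even if two observers' session totals happen to match, interval-by-interval disagreement will pull the averaged score down and flag the problem early.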
Trim your data sheet to three or fewer targets before the new tech’s first solo session.
02 At a glance
03 Original abstract
ABSTRACT
Direct observation and measurement of behavior are the cornerstones of effective research and practice in applied behavior analysis (ABA). Despite the importance of data collection, little research is available to guide behavior analysts on issues, such as accuracy, as it relates to data collection integrity. In this paper we examined the extent to which observer reliability is influenced by observer load (i.e., the number of behaviors being simultaneously recorded) and response rate. Results show that both load and response rate may impact the reliability of data collected by newly trained direct care staff. Implications for the design of data collection systems will be discussed, as well as additional considerations and future directions needed on the topic of data collection integrity.
Behavioral Interventions, 2025 · doi:10.1002/bin.70059