Assessment & Research

The effect of some environmental factors on interobserver agreement.

Fradenburg et al. (1995) · Research in Developmental Disabilities
★ The Verdict

Peer presence and clear sightlines boost observer agreement—check your session setup before data collection.

✓ Read this if you're a BCBA who runs direct observations in clinics, homes, or classrooms.
✗ Skip if you're a researcher who only uses automated recording or wearable sensors.

01 Research in Context

01

What this study did

Fradenburg et al. (1995) watched how well two observers agreed while scoring the same behavior.

They changed small things in the room: sometimes a peer sat beside the client, sometimes the view was blocked, sometimes the room was noisy.

Then they checked which setups gave the highest agreement scores.
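Agreement scores like these are typically computed interval by interval: both observers mark each observation interval as "behavior occurred" or "not," and IOA is the percentage of intervals where their marks match. A minimal sketch (hypothetical data, not from the paper):

```python
# Interval-by-interval interobserver agreement (IOA), illustrative only.
# Each list holds one entry per observation interval:
# 1 = behavior scored as occurring, 0 = not occurring.

def interval_ioa(obs_a, obs_b):
    """Percent of intervals in which the two observers agree."""
    if len(obs_a) != len(obs_b):
        raise ValueError("records must cover the same intervals")
    agreements = sum(a == b for a, b in zip(obs_a, obs_b))
    return 100 * agreements / len(obs_a)

observer_1 = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
observer_2 = [1, 0, 1, 0, 0, 0, 1, 1, 1, 1]
print(interval_ioa(observer_1, observer_2))  # 80.0
```

The study's manipulation was of the room, not the formula: the same calculation produced higher scores when sightlines and audio were clear.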

02

What they found

Agreement jumped when a peer was present and when observers could clearly see and hear the client.

Poor sightlines or loud background noise dragged scores down.

03

How this fits with other research

Jones et al. (1977) had already warned that raw percent agreement hides errors; the 1995 data show that room layout is one hidden source of those errors.
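The Jones et al. warning is easy to demonstrate with hypothetical data: two observers can record the same total count of a behavior while never agreeing on when it occurred, so a raw total-count comparison looks perfect while interval-level agreement is zero.

```python
# Illustrative contrast (hypothetical data): total-count IOA can
# mask interval-level disagreement between two observers.

def total_count_ioa(obs_a, obs_b):
    """Smaller total count divided by larger, as a percentage."""
    small, large = sorted([sum(obs_a), sum(obs_b)])
    return 100 * small / large

def interval_ioa(obs_a, obs_b):
    """Percent of intervals in which the two observers agree."""
    return 100 * sum(a == b for a, b in zip(obs_a, obs_b)) / len(obs_a)

obs_a = [1, 1, 0, 0, 1, 0, 0, 1]  # 4 occurrences
obs_b = [0, 0, 1, 1, 0, 1, 1, 0]  # also 4, but never in the same interval
print(total_count_ioa(obs_a, obs_b))  # 100.0 (looks perfect)
print(interval_ioa(obs_a, obs_b))     # 0.0 (complete disagreement)
```

An obstructed view or noisy room produces exactly this kind of quiet divergence: both observers are scoring something, just not the same events.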

Hausman et al. (2022) later re-examined big IOA data sets and found that once observers are well-trained, adding more sessions does not raise agreement. Their finding extends the 1995 message: fix the environment first, then stop worrying about session count.

Wolfe et al. (2023) moved from physical space to graph space. They showed that steep trends and high variability on graphs now become the “environment” that lowers visual agreement, updating the 1995 lesson for the digital age.

04

Why it matters

Before you collect even one trial, scan the room. Move chairs so both observers see the client’s face, silence the TV, and, when possible, let a peer sit in. These zero-cost tweaks give you cleaner data and fewer reliability headaches later.

→ Action — try this Monday

Re-arrange seats so the second observer has an unobstructed view and add an empty chair for a peer if the client enjoys company.

02 At a glance

Intervention: not applicable
Design: other
Finding: positive

03 Original abstract

Because there is no truth criterion to measure the accuracy of behavioral recording, behavior analysts rely on interobserver-agreement scores to increase the believability of their data. This study investigated the effects of the presence and absence of a subject's peers on within-session interobserver-agreement scores. Ten variables that were components of both of the main conditions and thought to potentially affect agreement scores also were studied. The results show that interobserver agreement was significantly better in the presence of certain stimuli (i.e., when the subject's peers were present than when they were not). Of the 10 additional variables analyzed, one (Can't See-Can't Hear) correlated significantly with the differences in interobserver-agreement scores. These results suggest that experimenters need to be aware of the variations in their observers' behavior and the factors affecting it. The importance of acceptable levels of interobserver agreement for data used in making treatment decisions also is discussed.

Research in Developmental Disabilities, 1995 · doi:10.1016/0891-4222(95)00028-3