Assessment & Research

Calibration of observational measurement of rate of responding.

Mudford et al. (2011) · Journal of Applied Behavior Analysis
★ The Verdict

Train observers against a scripted video and they can score rates to within 0.4 responses per minute of the true value; half of them land within 0.1.

✓ Read this if: BCBAs who measure response rate in classrooms, clinics, or home programs.
✗ Skip if: Practitioners who only use duration or latency data.

01Research in Context

01

What this study did

The team made ten 10-minute videos, each scripted with a known response rate ranging from 0 to 8 responses per minute. Ten observers (five novice, five experienced) scored the clips on laptop computers, pressing a key for each response. The team then checked how close each observer's recorded rate came to the scripted reference value.

The goal was to see whether observers can be trained to score response rates to within a fraction of a response per minute of the true value.
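The accuracy check at the heart of the study can be sketched as a simple rate-error computation. This is a minimal illustration, not the study's actual software; the function names and example numbers are made up.

```python
# Compare an observer's count against a scripted reference rate.

def rate_per_minute(count: int, duration_s: float) -> float:
    """Convert a raw response count into responses per minute."""
    return count / (duration_s / 60.0)

def calibration_error(observed_count: int, true_rate: float,
                      duration_s: float = 600.0) -> float:
    """Signed error (resp/min) between an observer's score and the
    scripted reference rate for one 10-minute video sample."""
    return rate_per_minute(observed_count, duration_s) - true_rate

# Example: an observer counts 42 responses on a 10-minute clip
# scripted at 4.0 resp/min. 42 / 10 min = 4.2, so the error is +0.2.
err = calibration_error(42, true_rate=4.0)
```

Because the reference rate is fixed in the script, this error is an absolute measure of accuracy, not just agreement between two fallible observers.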

02

What they found

All ten observers landed within 0.4 responses per minute of the scripted rate. Half of them were even tighter, within 0.1. That level of error is small enough for most rate-based decisions you make in clinic or classroom.

Continuous recording on computers proved highly accurate and precise across the full range of scripted rates.
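The abstract notes that observer records were compared with the predetermined reference values using linear regression: a perfectly calibrated observer yields a slope near 1 and an intercept near 0. A sketch of that comparison, using made-up numbers for one hypothetical observer:

```python
# Ordinary least-squares fit of observed rates against scripted rates.

def least_squares(x, y):
    """Return (slope, intercept) of the OLS line through (x, y)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return slope, my - slope * mx

true_rates     = [0.0, 1.0, 2.0, 4.0, 8.0]   # scripted resp/min
observed_rates = [0.0, 1.1, 1.9, 4.0, 8.2]   # hypothetical observer scores

slope, intercept = least_squares(true_rates, observed_rates)
# slope close to 1 and intercept close to 0 indicate good calibration
```

A slope above 1 would suggest the observer over-counts at higher rates; a nonzero intercept would suggest a constant bias at all rates.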

03

How this fits with other research

Stolz (1977) looked at every JABA paper from 1968-1975 and saw that fewer than 25 percent checked reliability in each condition. Mudford et al. (2011) move past that warning by giving you a scripted yardstick instead of just two people agreeing.

Kazdin (1977) showed that observer drift and expectancies can inflate simple agreement. Calibration against a fixed video removes those biases because the right answer is set in stone.

Locurto et al. (1980) had already shown that systematic sampling beats casual observation. The new study adds a digital tool that reaches the same goal with tighter error bounds.

04

Why it matters

If you run rate-based interventions like differential reinforcement of high rates, you need clean counts. Calibrate new staff with a two-minute reference clip before they collect real data. You will catch drift early and keep your graphs trustworthy.

→ Action — try this Monday

Pick one target behavior, film a two-minute clip with a known rate, and have each observer score it until error is under 0.4 resp/min.
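The Monday exercise can be reduced to a pass/fail check: score the reference clip, compute the rate error against the known value, and repeat until the observer is inside the criterion. This is a minimal sketch with illustrative function names; the ±0.4 criterion comes from the study's accuracy band.

```python
# Pass/fail calibration check for a short reference clip.

CRITERION = 0.4  # resp/min, matching the study's accuracy band

def passes_calibration(observed_count: int, true_rate: float,
                       duration_min: float = 2.0) -> bool:
    """True if the observer's rate is within +/- 0.4 resp/min of the
    clip's scripted rate."""
    observed_rate = observed_count / duration_min
    return abs(observed_rate - true_rate) <= CRITERION

# A 2-minute clip scripted at 5.0 resp/min (10 responses total):
passes_calibration(10, true_rate=5.0)   # exact match, passes
passes_calibration(12, true_rate=5.0)   # 6.0 resp/min, error 1.0, fails
```

Re-run the check with each new staff member before live data collection, and periodically afterward to catch observer drift.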

02At a glance

Intervention
not applicable
Design
methodology paper
Sample size
10
Finding
positive

03Original abstract

The quality of measurement systems used in almost all natural sciences other than behavior analysis is usually evaluated through calibration study rather than relying on interobserver agreement. We demonstrated some of the basic features of calibration using observer-measured rates of free-operant responding from 10 scripted 10-min calibration samples on video. Five novice and 5 experienced observers recorded (on laptop computers) response samples with a priori determined response rates ranging from 0 to 8 responses per minute. Observer records were then compared with these predetermined reference values using linear regression and related graphical depiction. Results indicated that all of the observers recorded rates that were accurate to within ±0.4 responses per minute and 5 were accurate to within ±0.1 responses per minute, indicating that continuous recording of responding on computers can be highly accurate and precise. Additional research is recommended to investigate conditions that affect the quality of direct observational measurement of behavior.

Journal of Applied Behavior Analysis, 2011 · doi:10.1901/jaba.2011.44-571