Assessing cognitive aspects of anxiety. Stability over time and agreement between several methods.
Cognitive anxiety scores shift from session to session and agree only modestly across tools; double-check before you act.
01 Research in Context
What this study did
Carr et al. (1985) tracked three ways to measure anxious thoughts in four adults with agoraphobia. They assessed the same people with the same measures repeatedly across sessions.
At each visit the team used an in vivo cognitive assessment, an imaginal cognitive assessment, and a thought-listing procedure. They wanted to see whether the scores stayed stable over time and whether the three tools agreed with each other.
What they found
All three scores bounced up and down. One week a client looked calm on paper, the next week the same client scored high on worry.
The tools only matched modestly. A high score on the questionnaire did not always mean a high score on the interview.
How this fits with other research
Kalyva (2010) saw the same low agreement when she compared parent, teacher, and child ratings of social skills in children with Asperger syndrome. Different raters, different story.
Freeth et al. (2019) found the same mismatch in children with ASD: the CBCL missed internalizing problems that the ABC caught. Pick the tool, pick the answer.
Geckeler et al. (2000) add a twist: heart rate, not self-report, predicted the return of claustrophobic fear. Objective measures can beat subjective ones when scores are shaky.
Why it matters
Before you trust any single anxiety score, measure at least twice with at least two tools. If the numbers do not line up, keep measuring and consider bodily cues like heart rate. Stable data beat a one-shot guess.
Run the same brief worry scale two sessions in a row; if scores shift, add a second method before you write the treatment plan.
02 At a glance
03 Original abstract
Four agoraphobics were assessed repeatedly with three different cognitive measures (in vivo cognitive assessment, imaginal cognitive assessment, and thought-listing procedure) to evaluate the stability and congruence of the measures. Results showed all three measures to have an unstable course across assessment sessions. In addition, several subjects evinced marked cognitive improvement across assessments, suggesting that these measures may be "reactive" in some cases. Finally, the congruence or one-to-one correspondence between two of the cognitive measures, administered in the same situation, was only modest.
Behavior Modification, 1985 · doi:10.1177/01454455850091005