Ecological Momentary Assessment: A Systematic Review of Validity Research
Phone EMA can be spot-on or way off, so always validate it against an objective measure before you trust the numbers.
01 Research in Context
What this study did
Stinson et al. (2022) hunted for every paper that checked if phone-based EMA tells the truth.
They pooled studies that compared momentary self-ratings to real measures like heart-rate watches or blood tests.
The team asked: how close are the phone scores to hard biology?
What they found
Match rates ran from 1.8% to a perfect 100%.
No single variable explained why some studies hit near-perfect agreement and others missed by miles.
Bottom line: a phone ping can be gold or garbage—you have to test it yourself.
How this fits with other research
Coffey et al. (2005) saw the same split in sleep: actigraphy caught severe problems that parent forms missed.
Garrison et al. (2025) gave hope—autistic teens with ID can give solid anxiety self-reports when verbal skills are strong.
Schiltz et al. (2017) backs that up, showing stable MASC scores over time for higher-functioning youth.
Taken together, the story is: self-report works only when you have validated it first.
Why it matters
Before you build a treatment plan on EMA data, run a quick validity check. Pair one day of phone prompts with an objective probe—step counter, saliva sample, or observer count. If the numbers line up, keep the phone. If they don’t, tweak the question or drop the tool. Your graphs—and your clients—deserve real data.
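The quick validity check described above can be sketched in code. This is a minimal illustration, not from the study itself: the interval length, the 250-step threshold, and all data values below are made-up assumptions, and interval-by-interval agreement is just one common way to score correspondence.

```python
# Hypothetical one-day validity check: compare hourly EMA self-reports of
# "was I active this hour?" (yes/no) against a step counter's record for
# the same hours. All values are invented for illustration.

def agreement_rate(ema_reports, objective_probe):
    """Interval-by-interval agreement: the share of intervals where the
    EMA answer matches the objective measure."""
    if len(ema_reports) != len(objective_probe):
        raise ValueError("Each EMA prompt needs a matching objective interval")
    matches = sum(e == o for e, o in zip(ema_reports, objective_probe))
    return matches / len(ema_reports)

# Eight hourly prompts: True = participant reported being active that hour.
ema = [True, True, False, False, True, False, True, True]
# Step counter, thresholded: True = more than 250 steps logged that hour.
steps = [True, False, False, False, True, False, True, True]

rate = agreement_rate(ema, steps)
print(f"Agreement: {rate:.0%}")  # 7 of 8 intervals match
```

If the rate comes back high, keep the phone prompt; if not, that is the signal to tweak the question or drop the tool, exactly as the paragraph above suggests.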
Try it: add one objective probe—like a step counter or 30-second observer count—next to your EMA prompt for one day and compare.
02 At a glance
03 Original abstract
Ecological momentary assessment (EMA) is a self-report method that involves intensive longitudinal assessment of behavior and environmental conditions during everyday activities. EMA has been used extensively in health and clinical psychology to investigate a variety of health behaviors, including substance use, eating, medication adherence, sleep, and physical activity. However, it has not been widely implemented in behavior analytic research. This is likely an example of the empirically based skepticism with which behavioral scientists view self-report measures. We reviewed studies comparing electronic, mobile EMA (mEMA) to more objective measures of health behavior to explore the validity of mEMA as a measurement tool, and to identify procedures and factors that may promote the accuracy of mEMA. We identified 32 studies that compared mEMA to more objective measures of health behavior or environmental events (e.g., biochemical measures or automated devices such as accelerometers). Results showed that the correspondence rates varied considerably across individuals, behavior, and studies (agreement rates ranged from 1.8%–100%), and no unifying variables could be identified across the studies that found high correspondence. The findings suggest that mEMA can be an accurate measurement tool, but further research should be conducted to identify procedures and variables that promote accurate responding.
Perspectives on Behavior Science, 2022 · doi:10.1007/s40614-022-00339-w