How Much Eye-Tracking Data Do You Need? Gaze Preference for Facial Features as a Stable Trait in Autism Spectrum Disorder
It takes 8–11 short face clips before eye-tracking gaze scores settle down in viewers with autism.
01 Research in Context
What this study did
Brittany (2022) used eye-tracking to measure where individuals with autism (ages 12 to 30) looked while watching faces in short movie clips.
The team wanted to know how many clips were needed before the gaze scores stopped jumping around.
They compared the autism group to age- and IQ-matched typically developing peers, adding clips to the analysis until the gaze measure held steady.
What they found
The autistic group spent less time looking at the center of the face (gaze centered on the nose), though eye- and mouth-looking times did not differ from peers.
It took 8 to 11 short clips before an individual's gaze score stabilized.
Fewer clips gave unreliable estimates that could change on the next test.
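The idea of a score "settling down" can be sketched as a cumulative estimate that stops moving as clips are added. The numbers below are toy values for one hypothetical viewer, not data from the study:

```python
import numpy as np

# Toy per-clip face dwell-time proportions for one viewer (illustrative
# values only; real numbers would come from an eye-tracker).
dwell = np.array([0.42, 0.55, 0.38, 0.50, 0.47, 0.49,
                  0.44, 0.48, 0.46, 0.47, 0.48])

# Cumulative gaze-score estimate after 1, 2, ..., 11 clips.
cumulative = np.cumsum(dwell) / np.arange(1, len(dwell) + 1)

# Change between successive estimates: once this stays small,
# adding more clips no longer moves the score.
deltas = np.abs(np.diff(cumulative))

for n, (est, d) in enumerate(zip(cumulative[1:], deltas), start=2):
    print(f"after {n:2d} clips: estimate = {est:.3f} (moved by {d:.3f})")
```

With only one or two clips the estimate swings widely; by the later clips the change per added clip is tiny, which is the pattern the study reports at 8 to 11 clips.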
How this fits with other research
Chita-Tegmark (2016) meta-analyzed 38 earlier eye-tracking studies and found the same reduced attention to faces, so the new count lines up with prior data.
Davidovitch et al. (2018) linked gaze scores to real-life social skills, suggesting the measure is meaningful as well as stable.
Kong et al. (2025) added that uncorrected vision problems make viewers look even less at faces, so you may need more clips if the person you are testing has an uncorrected vision problem.
Why it matters
If you test social attention with eye-tracking, run at least 8 to 11 short face clips before you trust the score.
This stops you from labeling a participant "low gaze" when the number might bounce back up on clip nine.
Add a quick vision check first, and you will get cleaner data for your social-skills plan.
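One simple way to see why few clips are risky is to correlate, across participants, the score from only the first k clips with the full 11-clip score. This part-whole correlation on simulated data is a rough stand-in for the paper's stability analysis, not its actual method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: 20 hypothetical viewers x 11 clips. Each viewer has a
# stable underlying gaze trait plus clip-to-clip noise.
trait = rng.uniform(0.3, 0.6, size=(20, 1))
scores = np.clip(trait + rng.normal(0.0, 0.08, size=(20, 11)), 0.0, 1.0)

# Reference score: the average over all 11 clips.
full_estimate = scores.mean(axis=1)

# How well does an estimate from only k clips track the full score?
reliability = {}
for k in (1, 3, 5, 8, 11):
    subset_estimate = scores[:, :k].mean(axis=1)
    reliability[k] = np.corrcoef(subset_estimate, full_estimate)[0, 1]
    print(f"{k:2d} clips: r = {reliability[k]:.2f}")
```

Because the k clips are part of the full set, these correlations are inflated relative to a true split-half analysis; the point is only that estimates from very few clips track the final score poorly, while 8-clip estimates track it closely.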
Practical takeaway: present 8 to 11 short face clips in your eye-tracker and average dwell times across all of them to get a stable baseline.
03 Original abstract
Eye tracking provides insights into social processing deficits in autism spectrum disorder (ASD), especially in conjunction with dynamic, naturalistic free-viewing stimuli. However, the question remains whether gaze characteristics, such as preference for specific facial features, can be considered a stable individual trait, particularly in those with ASD. If so, how much data are needed for consistent estimations? To address these questions, we assessed the stability and robustness of gaze preference for facial features as incremental amounts of movie data were introduced for analysis. We trained an artificial neural network to create an object-based segmentation of naturalistic movie clips (14 s each, 7410 frames total). Thirty-three high-functioning individuals with ASD and 36 age- and IQ-equated typically developing individuals (age range: 12-30 years) viewed 22 Hollywood movie clips, each depicting a social interaction. As we evaluated combinations of one, three, five, eight, and 11 movie clips, gaze dwell times on core facial features became increasingly stable at within-subject, within-group, and between-group levels. Using a number of movie clips deemed sufficient by our analysis, we found that individuals with ASD displayed significantly less face-centered gaze (centralized on the nose; p < 0.001) but did not significantly differ from typically developing participants in eye or mouth looking times. Our findings validate gaze preference for specific facial features as a stable individual trait and highlight the possibility of misinterpretation with insufficient data. Additionally, we propose the use of a machine learning approach to stimuli segmentation to quickly and flexibly prepare dynamic stimuli for analysis. LAY SUMMARY: Using a data-driven approach to segmenting movie stimuli, we examined varying amounts of data to assess the stability of social gaze in individuals with autism spectrum disorder (ASD). 
We found a reduction in social fixations in participants with ASD, driven by decreased attention to the center of the face. Our findings further support the validity of gaze preference for face features as a stable individual trait when sufficient data are used.
Journal of Autism and Developmental Disorders, 2022 · doi:10.1167/13.10.5