Applied Cliplets-based half-dynamic videos as intervention learning materials to attract the attention of adolescents with autism spectrum disorder to improve their perceptions and judgments of the facial expressions and emotions of others.
Half-dynamic videos that isolate facial expressions can boost emotion-recognition accuracy in middle- and high-schoolers with ASD.
01 Research in Context
What this study did
The study's authors (2016) made short half-dynamic videos called Cliplets.
Each clip freezes everything except the actor’s moving eyes, brows, and mouth.
Six teens with autism watched the clips and then named the emotion shown.
A multiple-baseline design tracked how many faces they read correctly.
What they found
All six students got better at spotting happy, mad, sad, and scared faces.
Their new skill held steady after the videos stopped, so the gains were maintained rather than prompted by the intervention itself.
How this fits with other research
Petry et al. (2007) first used short videos to teach social play to younger kids with autism.
The present study keeps the video-model idea but aims it at teen emotion reading instead of child play.
Rice et al. (2015) ran a larger trial with FaceSay software and also lifted emotion scores in elementary students.
Together, the two positive results suggest that computer-presented faces can help across age groups.
Han et al. (2015) looks like a clash: low-functioning teens with autism failed to read morphed faces.
The difference likely lies in participant severity and stimulus style.
Han et al. used blurry, slow-blend morphs; the present study used crisp, half-moving clips that keep motion cues small and clear.
High clarity plus frozen background may let more teens succeed.
Why it matters
You can copy the Cliplets trick with free phone apps that freeze most of the frame.
Film staff or peers showing one clear emotion, lock the background, and show the short loop before social groups.
It is a low-cost way to warm up emotion reading for middle- and high-schoolers on your caseload.
Shoot a 3-second clip of a staff member smiling, freeze everything except the mouth in a free app, and show it twice before asking the student to label the feeling.
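The freeze-everything-but-one-region trick is also easy to reproduce in code. The sketch below is a minimal illustration, not the study's actual software: the hypothetical `half_dynamic` function composites each video frame over a frozen copy of the first frame, keeping motion only inside a boolean mask (say, a box around the mouth).

```python
import numpy as np

def half_dynamic(frames, mask):
    """Freeze everything outside `mask`; keep motion only inside it.

    frames: list of HxWx3 uint8 arrays (the video frames)
    mask:   HxW boolean array, True where motion should be preserved
    """
    frozen = frames[0]  # static background taken from the first frame
    out = []
    for f in frames:
        # Broadcast the 2-D mask across the color channels, then pick
        # moving pixels from the current frame and the rest from frame 0.
        composite = np.where(mask[..., None], f, frozen)
        out.append(composite.astype(np.uint8))
    return out
```

A real pipeline would read and write frames with a video library (e.g. OpenCV), but the compositing step is exactly this one `np.where` per frame.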
02 At a glance
03 Original abstract
<h4>Background</h4>Autism spectrum disorders (ASD) are characterized by a reduced ability to understand the emotional expressions on other people's faces. Increasing evidence indicates that children with ASD might not recognize or understand crucial nonverbal behaviors, which likely causes them to ignore nonverbal gestures and social cues, such as facial expressions, that usually aid social interaction.<h4>Objective</h4>In this study, we used software technology to create half-static, half-dynamic video materials to teach adolescents with ASD to become aware of six basic facial expressions observed in real situations.<h4>Methods</h4>The intervention system presents a dynamic video of a specific facial element within a static surrounding frame, directing the attention of the six adolescents with ASD to the relevant dynamic facial expressions while irrelevant ones stay still.<h4>Results</h4>Using a multiple-baseline design across participants, we found that the intervention learning system provided a simple yet effective way to direct the attention of adolescents with ASD to nonverbal facial cues; the intervention helped them better understand and judge others' facial emotions.<h4>Conclusion</h4>We conclude that limiting the information to structured, specific close-up visual social cues helped the participants improve their judgments of the emotional meaning of others' facial expressions.
SpringerPlus, 2016 · doi:10.1186/s40064-016-2884-z