Interaction With Social Robots: Improving Gaze Toward Face but Not Necessarily Joint Attention in Children With Autism Spectrum Disorder.
Social robots pull preschoolers’ eyes to faces but do not create true joint attention.
Research in Context
What this study did
The team brought preschoolers with autism and typically developing peers into a lab. Each child watched videos in which an agent, either a small humanoid robot or a human adult, initiated joint attention.
While the child watched, the agent turned toward one of two toys. Eye tracking recorded where the child looked: at the face, at the toy, or away.
What they found
Both groups looked at the robot’s face more than the human’s face. But they did not follow the robot’s gaze to the toy any better; in fact, the robot drew a smaller share of fixation time to the toy.
Joint attention, shifting gaze between the face and the target, stayed weak in the children with autism with the robot as with the human. The robot grabbed eyes yet failed to share focus.
How this fits with other research
Kumazaki et al. (2018) saw the same boost: kids with autism looked longer at a simple robot than at a person. Together the two studies show robots win face gaze but still miss true joint attention.
Warren et al. (2015) found better imitation with their adaptive robot. Their robot coached action, not shared looking, so the new result does not clash; it just shifts the goalposts from imitation to joint attention.
Caruana et al. (2018) tracked autistic adults and also saw joint-attention gaps that practice could shrink. The preschool data now hint these gaps start early and are not fixed by a flashy robot partner.
Why it matters
If you want to teach joint attention, a robot alone is not enough. Use the robot to pull eyes toward faces, then quickly add human-led trials that reward shifting gaze between face and object. Pair the tech hook with real-person follow-up so shared looking becomes social, not just mechanical.
Start a session with a brief robot greeting to capture face gaze, then switch to a human partner and prompt eye-to-object shifts.
Original abstract
It is widely recognized that robot-based interventions for autism spectrum disorders (ASD) hold promise, but the question remains as to whether social humanoid robots could facilitate joint attention performance in children with ASD. In this study, responsive joint attention was measured under two conditions in which different agents, a human and a robot, initiated joint attention via video. The participants were 15 children with ASD (mean age: 4.96 ± 1.10 years) and 15 typically developing (TD) children (mean age: 4.53 ± 0.90 years). In addition to analyses of fixation time and gaze transitions, a longest common subsequence approach (LCS) was employed to compare participants' eye movements to a predefined logical reference sequence. The fixation of TD toward agent's face was earlier and longer than children with ASD. Moreover, TD showed a greater number of gaze transitions between agent's face and target, and higher LCS scores than children with ASD. Both groups showed more interests in the robot's face, but the robot induced a lower proportion of fixation time on the target. Meanwhile participants showed similar gaze transitions and LCS results in both conditions, suggesting that they could follow the logic of the joint attention task induced by the robot as well as human. We have discussed the implications for the effects and applications of social humanoid robots in joint attention interventions.
Frontiers in Psychology, 2019 · doi:10.3389/fpsyg.2019.01503