Visual Scanning: Recently Published Documents


TOTAL DOCUMENTS: 345 (five years: 67)
H-INDEX: 40 (five years: 4)

2021 ◽ Vol 12 ◽ Author(s): Przemysław Tomalski, David López Pérez, Alicja Radkowska, Anna Malinowska-Korczak

In the 1st year of life, infants gradually gain the ability to control their eye movements and explore visual scenes, supporting their learning and emerging cognitive skills. These gains include domain-general skills, such as rapid orienting and attention disengagement, as well as domain-specific ones, such as increased sensitivity to social stimuli. However, it remains unknown whether these developmental changes in what infants fixate in naturalistic scenes, and for how long, lead to the emergence of more complex, repeated sequences of fixations, especially when viewing human figures and faces, and whether such changes are related to improvements in domain-general attentional skills. Here we longitudinally tested developmental changes in the complexity of fixation sequences at 5.5 and 11 months of age using Recurrence Quantification Analysis. We measured changes in how fixations recur in the same location and changes in the patterns (repeated sequences) of fixations in social and non-social scenes that were either static or dynamic. We found more complex patterns (i.e., repeated and longer sequences) of fixations in social than in non-social scenes, both static and dynamic. There was also an age-related increase in the length of repeated fixation sequences, but only for social static scenes, and this increase was independent of individual differences in orienting and attention disengagement. Our results can be interpreted as evidence for the fine-tuning of infants' visual scanning skills: before the end of the 1st year of life, infants selectively produce longer and more complex sequences of fixations on faces and bodies.
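The core RQA idea is simple: two fixations "recur" when they land near the same location, and repeated sequences of fixations show up as diagonal lines in the resulting recurrence matrix. Below is a minimal sketch of that computation in Python; the fixation coordinates, 64-pixel recurrence radius, and minimum line length are illustrative choices, not the parameters used in the study.

```python
import numpy as np

def recurrence_matrix(fixations, radius=64.0):
    """Binary recurrence matrix: fixations i and j recur if they
    land within `radius` pixels of each other."""
    d = np.linalg.norm(fixations[:, None, :] - fixations[None, :, :], axis=-1)
    return (d <= radius).astype(int)

def rqa_measures(R, min_len=2):
    """Recurrence rate and determinism from the upper triangle of R.
    Determinism = share of recurrent points lying on diagonal lines
    of length >= min_len, i.e. repeated *sequences* of fixations."""
    n = R.shape[0]
    iu = np.triu_indices(n, k=1)
    rec = R[iu].sum()
    rr = rec / len(iu[0])
    det_points = 0
    for k in range(1, n):                     # scan each upper diagonal
        run = 0
        for v in np.append(np.diag(R, k), 0): # sentinel 0 ends the last run
            if v:
                run += 1
            else:
                if run >= min_len:
                    det_points += run
                run = 0
    det = det_points / rec if rec else 0.0
    return rr, det

# Toy scanpath: the last two fixations revisit the first two in order,
# producing a diagonal line (a repeated fixation sequence).
fix = np.array([[100, 100], [300, 120], [500, 400], [105, 98], [302, 125]], float)
print(rqa_measures(recurrence_matrix(fix)))
```

The determinism measure here roughly corresponds to the quantity of interest in the abstract: the share of refixations that form repeated sequences, reported to increase with age for social static scenes.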


Safety ◽ 2021 ◽ Vol 7 (4) ◽ pp. 70 ◽ Author(s): Olivier Lefrançois, Nadine Matton, Mickaël Causse

Poor cockpit monitoring has been identified as an important contributor to aviation accidents. Improving pilots' monitoring strategies could therefore help enhance flight safety. Across two sessions, we analyzed the flight performance and eye movements of professional airline pilots in a full-flight simulator. In a pre-training session, 20 pilots performed a manual approach scenario as pilot flying (PF) and were classified into three groups according to their flight performance: unstabilized, standard, and most accurate. The unstabilized pilots either under- or over-focused on various instruments, and they used fewer visual scanning patterns than pilots who managed to stabilize their approach. The most accurate pilots showed higher perceptual efficiency, with shorter fixation times and more fixations on the important primary flight instruments. Approximately 10 months later, 14 pilots returned for a post-training session, received a short training program, and performed a manual approach similar to the one in the pre-training session. Seven of them, the experimental group, received individual feedback on their own pre-training performance and visual behavior, together with data from the most accurate pilots, including an eye-tracking video showing the efficient visual scanning strategies of one of the most accurate pilots. The other seven, the control group, received general guidelines on cockpit monitoring. During the post-training session, the experimental group achieved better flight performance than the control group, and its visual scanning strategies became more similar to those of the most accurate pilots. In summary, our results suggest that cockpit monitoring underlies manual flight performance and that it can be improved with a training program based mainly on exposure to eye movement examples from highly accurate pilots.
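Eye-tracking analyses like these typically reduce raw fixations to dwell times and fixation counts per instrument, with each instrument defined as an area of interest (AOI) on the panel. The sketch below shows one way to compute such statistics; the instrument names, screen coordinates, and data layout are invented for illustration and do not come from the study.

```python
from collections import defaultdict

# Illustrative AOIs as (x_min, y_min, x_max, y_max) screen rectangles;
# names and coordinates are made up for this sketch.
AOIS = {
    "attitude_indicator": (400, 200, 600, 400),
    "airspeed":           (250, 200, 390, 400),
    "altimeter":          (610, 200, 750, 400),
}

def dwell_stats(fixations):
    """Total dwell time (ms) and fixation count per AOI.
    `fixations` is a list of (x, y, duration_ms) tuples."""
    dwell = defaultdict(float)
    count = defaultdict(int)
    for x, y, dur in fixations:
        for name, (x0, y0, x1, y1) in AOIS.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                dwell[name] += dur
                count[name] += 1
                break
    return dict(dwell), dict(count)

fixes = [(500, 300, 220), (300, 250, 180), (505, 310, 150), (700, 260, 90)]
print(dwell_stats(fixes))
```

Shorter average dwell per AOI combined with more fixations on primary instruments, as reported for the most accurate pilots, would fall straight out of these two statistics.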


2021 ◽ Vol 19 (9) ◽ pp. 133-140 ◽ Author(s): Ji-Yong Chung, Hyeok-Min Lee, Seung-Jae Noh, Eung-Hyuk Lee

2021 ◽ Author(s): Akshay Anil Dixit, Divya Sinha, Hemalatha Ramachandran

With the advancement of computer technology and accessible internet, playing video games has become immensely popular across all age groups. A growing body of research describes the cognitive benefits of video games; at the same time, video games are stereotyped as an activity for the lazy and unproductive. Against this backdrop, our study aims to understand the effect of video games on executive control (visual scanning and visual perception), aggression, and gaming motivation. Twenty non-gamers were selected and divided into two groups: Action Video Game Players (AVGP) and Non-Action Video Game Players (NAVGP). We used two computerized tests, the Gabor Orientation Identification Test and a Visual Scanning Test (to assess visual perception and visual scanning, respectively), and two questionnaires (to assess aggression and gaming motivation). We found improvements in both visual perception and visual scanning following video game training in AVGPs. Interestingly, aggression did not increase with increased video game exposure. We also found no significant changes in gaming motivation after the training, except for self-gratification motives. Cognitive improvements are not tied to action video games alone; non-action video games also show promise for enhancing cognition. With well-timed and well-controlled video game training, aggression as a potential consequence of video game exposure can also be kept in check. We propose targeted video game training as an approach to enhance cognition in non-gamers.
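For readers unfamiliar with the stimulus, a Gabor patch is a sinusoidal grating windowed by a Gaussian envelope, and an orientation-identification task varies only its tilt. A minimal NumPy sketch follows; the size, wavelength, and tilt values are arbitrary choices, not the parameters of the test used here.

```python
import numpy as np

def gabor_patch(size=128, wavelength=16.0, theta=0.0, sigma=20.0, phase=0.0):
    """Gabor patch: a sinusoidal grating of given wavelength and
    orientation `theta` (radians), windowed by a Gaussian of width sigma."""
    half = size // 2
    y, x = np.mgrid[-half:half, -half:half]
    xr = x * np.cos(theta) + y * np.sin(theta)       # rotate coordinates
    grating = np.cos(2 * np.pi * xr / wavelength + phase)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return grating * envelope                        # values in [-1, 1]

# Two stimuli differing only in orientation, as in an
# orientation-identification trial (tilted left vs. right of vertical).
left  = gabor_patch(theta=np.deg2rad(-10))
right = gabor_patch(theta=np.deg2rad(+10))
print(left.shape, right.shape)
```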


2021 ◽ Author(s): Maya Varma, Peter Washington, Brianna Chrisman, Aaron Kline, Emilie Leblanc, ...

Autism spectrum disorder (ASD) is a widespread neurodevelopmental condition with a range of potential causes and symptoms. Children with ASD exhibit behavioral and social impairments, giving rise to the possibility of using computational techniques to evaluate a child's social phenotype from home videos. Here, we use a mobile health application to collect over 11 hours of video footage depicting 95 children engaged in gameplay in a natural home environment. We use automated dataset annotations to analyze two social indicators that have previously been shown to differ between children with ASD and their neurotypical (NT) peers: (1) gaze fixation patterns and (2) visual scanning methods. We compare the gaze fixation and visual scanning methods used by children during a 90-second gameplay video to identify statistically significant differences between the two cohorts; we then train an LSTM neural network to determine whether gaze indicators can be predictive of ASD. Our work identifies one statistically significant region of fixation and one significant gaze transition pattern that differ between the two cohorts during gameplay. In addition, our deep learning model demonstrates mild predictive power in identifying ASD based on coarse annotations of gaze fixations. Ultimately, our results demonstrate the utility of game-based mobile health platforms in quantifying visual patterns and providing insights into ASD. We also show the importance of automated labeling techniques in generating large-scale datasets while preserving the privacy of participants. Our approaches can generalize to other healthcare needs.
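As a rough illustration of the modeling step, the sketch below builds an LSTM classifier over sequences of coarse categorical gaze labels in PyTorch. The label set, sequence length, embedding size, and hidden size are all assumptions made for the example; the authors' actual architecture and features may differ.

```python
import torch
import torch.nn as nn

# Coarse gaze-fixation labels per time step; this label set is
# illustrative (where the child might be looking during gameplay).
NUM_LABELS = 4          # e.g. screen, partner's face, elsewhere, off-camera
SEQ_LEN = 90            # one label per second of a 90-second video

class GazeLSTM(nn.Module):
    """Embeds a sequence of categorical gaze labels and classifies
    it (ASD vs. NT) from the final LSTM hidden state."""
    def __init__(self, num_labels=NUM_LABELS, embed=8, hidden=32):
        super().__init__()
        self.embed = nn.Embedding(num_labels, embed)
        self.lstm = nn.LSTM(embed, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)

    def forward(self, labels):                 # labels: (batch, seq_len)
        x = self.embed(labels)
        _, (h, _) = self.lstm(x)
        return self.head(h[-1])                # logits: (batch, 2)

model = GazeLSTM()
batch = torch.randint(0, NUM_LABELS, (16, SEQ_LEN))   # dummy sequences
print(model(batch).shape)                              # torch.Size([16, 2])
```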


Author(s): Muzahid Islam, Sudhakar Deeti, J. Frances Kamhi, Ken Cheng

Insects possess small brains but exhibit sophisticated behaviour, notably the ability to learn to navigate within complex environments. To understand how they learn to navigate in a cluttered environment, we focused on learning and visual scanning behaviour in the Australian nocturnal bull ant Myrmecia midas, an exceptional visual navigator. We tested how individual ants learn to detour via a gap and how they cope with substantial spatial changes over trips. Homing M. midas ants encountered a barrier on their foraging route and had to find a 50-cm gap between symmetrical large black screens placed 1 m from the centre of the release platform in the direction of the nest, in both familiar (on-route) and semi-familiar (off-route) environments. Foragers were tested for up to three learning trips under the changed conditions in both environments. On the familiar route, individual foragers learned the gap quickly compared with foragers tested in the semi-familiar environment. When the route was less familiar and the panorama was changed, foragers were less successful at finding the gap and performed more scans on their way home. Scene familiarity thus played a significant role in visual scanning behaviour. In both on-route and off-route environments, panoramic changes significantly affected learning, initial orientation, and scanning behaviour. Nevertheless, over a few trips, success at finding the gap increased, visual scans were reduced, paths became straighter, and individuals took less time to reach the goal.
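Path straightness is commonly quantified as a straightness index: the beeline distance from start to goal divided by the length of the path actually travelled, so a value of 1.0 means a perfectly direct path. A short sketch with made-up coordinates follows; this standard index may differ from the exact measure used in the study.

```python
import numpy as np

def straightness(path):
    """Straightness index: beeline distance from start to goal divided
    by total path length. 1.0 means a perfectly straight path."""
    path = np.asarray(path, float)
    steps = np.linalg.norm(np.diff(path, axis=0), axis=1)  # step lengths
    total = steps.sum()
    beeline = np.linalg.norm(path[-1] - path[0])
    return beeline / total if total else 0.0

meandering = [(0, 0), (0.3, 0.5), (0.1, 1.0), (0.6, 1.4), (0.5, 2.0)]
direct     = [(0, 0), (0.25, 0.5), (0.5, 1.0), (0.75, 1.5), (1.0, 2.0)]
print(straightness(meandering), straightness(direct))   # < 1.0, then 1.0
```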

