gaze fixation
Recently Published Documents

Total documents: 74 (five years: 27)
H-index: 13 (five years: 3)

2021
Author(s): Benjamin Mathieu, Antonin Abillama, Malvina Martinez, Laurence Mouchnino, Jean Blouin

Previous studies have shown that the sensory modality used to identify the position of proprioceptive targets hidden from sight, but frequently viewed, influences the type of body representation employed for reaching them with the finger. The question then arises as to whether this observation also applies to proprioceptive targets that are hidden from sight and rarely, if ever, viewed. We used an established technique for pinpointing the type of body representation used for the spatial encoding of targets, which consists of assessing the effect of peripheral gaze fixation on pointing accuracy. More precisely, an exteroceptive, visually dependent body representation is thought to be used if gaze deviation induces a deviation of the pointing movement. Three light-emitting diodes (LEDs) were positioned at the participants' eye level at -25 deg, 0 deg and +25 deg with respect to the cyclopean eye. Without moving the head, the participant fixated the lit LED before the experimenter indicated one of three target positions on the head: the topmost point of the head (vertex) and two other points located at the front and back of the head. These targets were cued either verbally or by touch. The goal of the subjects (n=27) was to reach the target with their index finger. We analysed the accuracy of the movements directed to the topmost point of the head, which is a well-defined yet out-of-view anatomical point. Based on the brain's capacity to create visual representations of body areas that remain out of view, we hypothesized that the position of the vertex is encoded using an exteroceptive body representation, whether verbally or tactilely cued. Results revealed that the pointing errors were biased in the direction opposite to gaze fixation for both verbally and tactilely cued targets, suggesting the use of a vision-dependent, exteroceptive body representation. The enhancement of visual body representations by sensorimotor processes was suggested by the greater pointing accuracy when the vertex was identified by tactile stimulation than by verbal instruction. Moreover, we found in a control condition that participants were more accurate in indicating the position of their own vertex than the vertex of other people. This result supports the idea that sensorimotor experiences increase the spatial resolution of the exteroceptive body representation. Together, our results suggest that the positions of rarely viewed body parts are spatially encoded by an exteroceptive body representation and that non-visual sensorimotor processes are involved in the construction of this representation.


2021
Vol 11 (8), pp. 698
Author(s): Ángel Michael Ortiz-Zúñiga, Olga Simó-Servat, Alba Rojano-Toimil, Julia Vázquez-de Sebastian, Carmina Castellano-Tejedor, ...

Current guidelines recommend annual screening for cognitive impairment in patients > 65 years with type 2 diabetes (T2D). The most widely used tool is the Mini-Mental State Examination (MMSE). Retinal microperimetry is useful for detecting cognitive impairment in these patients, but there is no information regarding its usefulness as a monitoring tool. We aimed to explore the role of retinal microperimetry in the annual follow-up of cognitive function in patients with T2D older than 65 years. Materials and Methods: Prospective observational study comprising patients > 65 years with T2D attended at our center between March and October 2019. A complete neuropsychological evaluation assessed the baseline cognitive status (mild cognitive impairment, MCI, or normal cognition, NC). Retinal microperimetry (sensitivity, gaze fixation) and the MMSE were performed at baseline and after 12 months. Results: Fifty-nine patients with MCI and 22 with NC were identified. A significant decline in the MMSE score was observed after 12 months in the MCI group (25.74 ± 0.9 vs. 24.71 ± 1.4; p = 0.001). While no significant changes in retinal sensitivity were seen, all gaze-fixation parameters worsened at 12 months and correlated significantly with the decrease in MMSE scores. Conclusion: Retinal microperimetry is useful for monitoring cognitive decline in patients > 65 years with T2D. Gaze fixation appears to be a more sensitive parameter than retinal sensitivity for 12-month follow-up.
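As a rough illustration of the kind of 12-month follow-up analysis described above, the sketch below pairs baseline and 12-month MMSE scores within the MCI group and correlates the change in a gaze-fixation parameter with the change in MMSE. The file name, column names, and choice of tests are assumptions for illustration only, not the authors' analysis code.

```python
# Illustrative sketch (hypothetical data layout): paired comparison of MMSE at
# baseline vs. 12 months, and correlation of fixation change with MMSE change.
import pandas as pd
from scipy import stats

df = pd.read_csv("followup.csv")        # hypothetical file, one row per patient
mci = df[df["group"] == "MCI"]

# Paired comparison of MMSE at baseline vs. 12 months within the MCI group
t, p = stats.ttest_rel(mci["mmse_baseline"], mci["mmse_12m"])
print(f"MMSE baseline vs. 12 months: t = {t:.2f}, p = {p:.4f}")

# Correlation between the change in a gaze-fixation parameter and the MMSE change
delta_fix = mci["fixation_12m"] - mci["fixation_baseline"]
delta_mmse = mci["mmse_12m"] - mci["mmse_baseline"]
r, p_r = stats.pearsonr(delta_fix, delta_mmse)
print(f"Fixation change vs. MMSE change: r = {r:.2f}, p = {p_r:.4f}")
```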


Author(s): Giselle Valério Teixeira da Silva, Marina Carvalho de Moraes Barros, Juliana do Carmo Azevedo Soares, Lucas Pereira Carlini, Tatiany Marcondes Heiderich, ...

Objective: The study aimed to analyze the gaze fixation of pediatricians during the decision process regarding the presence or absence of pain in pictures of newborn infants. Study Design: Experimental study involving 38 pediatricians (92% female, 34.6 ± 9.0 years, 22 neonatologists) who evaluated 20 pictures (two pictures of each newborn: one at rest and one during a painful procedure), presented in random order for each participant. The Tobii TX300 equipment tracked eye movements in four areas of interest (AOIs) of each picture: mouth, eyes, forehead, and nasolabial furrow. Pediatricians evaluated the intensity of pain with a verbal analogue score from 0 to 10 (0 = no pain; 10 = maximum pain). The number of pictures in which pediatricians fixed their gaze, the number of gaze fixations, and the total and average time of gaze fixations were compared among the AOIs by analysis of variance (ANOVA). The visual-tracking parameters of the picture evaluations were also compared by ANOVA according to the pediatricians' perception of pain: moderate/severe (score = 6–10), mild (score = 3–5), or absent (score = 0–2). The association between the total time of gaze fixations in the AOIs and pain perception was assessed by logistic regression. Results: In the 20 newborn pictures, the mean number of gaze fixations was greater in the mouth, eyes, and forehead than in the nasolabial furrow. Also, the average total time of gaze fixations was greater in the mouth and forehead than in the nasolabial furrow. Controlling for the time of gaze fixation in the AOIs, each additional second of gaze fixation in the mouth (odds ratio [OR]: 1.26; 95% confidence interval [CI]: 1.08–1.46) and forehead (OR: 1.16; 95% CI: 1.02–1.33) was associated with an increase in the odds of perceiving moderate/severe pain in the neonatal facial picture. Conclusion: When challenged to say whether pain is present in pictures of newborn infants' faces, pediatricians fix their gaze preferentially on the mouth. Longer gaze fixation on the mouth and forehead is associated with an increased perception that moderate/severe pain is present.
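The association reported above was estimated with a logistic regression relating fixation time per AOI to the odds of a moderate/severe pain rating. Below is a minimal sketch of that kind of model, assuming a hypothetical table with one row per picture evaluation and columns for the pain score and the total fixation time (in seconds) in each AOI; it is illustrative only, not the authors' analysis script.

```python
# Illustrative sketch: odds of a moderate/severe pain rating as a function of
# total gaze-fixation time per AOI. Column and file names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("gaze_pain_ratings.csv")            # one row per picture evaluation
df["severe"] = (df["pain_score"] >= 6).astype(int)    # score 6-10 = moderate/severe

X = sm.add_constant(df[["fix_mouth_s", "fix_forehead_s", "fix_eyes_s", "fix_furrow_s"]])
model = sm.Logit(df["severe"], X).fit()

# Exponentiated coefficients are odds ratios per additional second of fixation
odds_ratios = np.exp(model.params)
conf_int = np.exp(model.conf_int())
print(pd.concat([odds_ratios.rename("OR"), conf_int], axis=1))
```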


2021
Author(s): Maya Varma, Peter Washington, Brianna Chrisman, Aaron Kline, Emilie Leblanc, ...

Autism spectrum disorder (ASD) is a widespread neurodevelopmental condition with a range of potential causes and symptoms. Children with ASD exhibit behavioral and social impairments, giving rise to the possibility of utilizing computational techniques to evaluate a child's social phenotype from home videos. Here, we use a mobile health application to collect over 11 hours of video footage depicting 95 children engaged in gameplay in a natural home environment. We utilize automated dataset annotations to analyze two social indicators that have previously been shown to differ between children with ASD and their neurotypical (NT) peers: (1) gaze fixation patterns and (2) visual scanning methods. We compare the gaze fixation and visual scanning methods utilized by children during a 90-second gameplay video to identify statistically significant differences between the two cohorts; we then train an LSTM neural network to determine whether gaze indicators could be predictive of ASD. Our work identifies one statistically significant region of fixation and one significant gaze transition pattern that differ between the two cohorts during gameplay. In addition, our deep learning model demonstrates mild predictive power in identifying ASD based on coarse annotations of gaze fixations. Ultimately, our results demonstrate the utility of game-based mobile health platforms in quantifying visual patterns and providing insights into ASD. We also show the importance of automated labeling techniques in generating large-scale datasets while preserving the privacy of participants. Our approaches can generalize to other healthcare needs.
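A minimal sketch of what an LSTM over coarse gaze-fixation annotations might look like is given below (PyTorch). The frame-level region labels, the number of regions, and all hyperparameters are assumptions for illustration and do not reproduce the authors' model.

```python
# Illustrative sketch: binary ASD/NT classifier over a sequence of coarse
# fixation-region labels (one label per frame). Sizes are arbitrary assumptions.
import torch
import torch.nn as nn

NUM_REGIONS = 4      # assumed number of coarse fixation regions
EMBED_DIM = 8
HIDDEN_DIM = 32

class GazeLSTM(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(NUM_REGIONS, EMBED_DIM)
        self.lstm = nn.LSTM(EMBED_DIM, HIDDEN_DIM, batch_first=True)
        self.head = nn.Linear(HIDDEN_DIM, 1)          # ASD vs. NT logit

    def forward(self, regions):                        # regions: (batch, seq_len) int64
        x = self.embed(regions)                        # (batch, seq_len, EMBED_DIM)
        _, (h_n, _) = self.lstm(x)                     # h_n: (1, batch, HIDDEN_DIM)
        return self.head(h_n[-1]).squeeze(-1)          # (batch,) logits

# Toy usage: a batch of 2 gaze sequences, 90 frames each (e.g., 1 Hz labels)
model = GazeLSTM()
seqs = torch.randint(0, NUM_REGIONS, (2, 90))
logits = model(seqs)
loss = nn.BCEWithLogitsLoss()(logits, torch.tensor([1.0, 0.0]))
loss.backward()
```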


Author(s): P. N. Skonnikov, D. V. Trofimov

Abstract. Some diseases, for instance glaucoma, cause visual field defects, and various methods are used for their timely diagnosis. One of the state-of-the-art diagnostic methods is automated static perimetry, which consists in determining light sensitivity in different parts of the visual field using stationary stimuli of variable luminance. When scanning the visual field in this way, an important factor is verifying that the gaze remains fixed on the fixation point. The greatest accuracy in determining the gaze-fixation position is achieved by visually tracking the pupil with a video camera. In this paper, four groups of visual-tracking algorithms are considered: segmentation-based methods, correlation methods, methods based on optical flow, and methods based on the weighted average. An experimental comparison of these methods was carried out using a database of video recordings obtained with the automated static perimetry apparatus, on which ground-truth pupil tracks were annotated. The comparison was conducted according to two criteria: center location error and tracking length. It is shown that only the weighted-average method achieves an acceptable tracking length.
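A minimal sketch of the weighted-average idea (a darkness-weighted centroid of the image) and of the center-location-error criterion is given below. The thresholding and weighting scheme are assumptions for illustration, not the specific algorithm evaluated in the paper.

```python
# Illustrative sketch: estimate the pupil center as the darkness-weighted
# centroid of a grayscale frame, and measure the center location error.
import numpy as np

def pupil_center_weighted_average(frame: np.ndarray, threshold: int = 60):
    """Return (x, y) of the darkness-weighted centroid, or None if no dark pixels."""
    weights = np.clip(threshold - frame.astype(np.float64), 0, None)  # darker pixels weigh more
    total = weights.sum()
    if total == 0:
        return None                                   # pupil not found in this frame
    ys, xs = np.indices(frame.shape)
    return (np.sum(xs * weights) / total, np.sum(ys * weights) / total)

def center_location_error(estimated, ground_truth):
    """Euclidean distance between the estimated and annotated pupil centers."""
    return float(np.hypot(estimated[0] - ground_truth[0],
                          estimated[1] - ground_truth[1]))
```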


2021
Vol 40 (1), pp. 45-53
Author(s): Yong Deok Yun, Hyung Seok Oh, Rohae Myung

2021
Vol 58 (1), pp. 28-33
Author(s): Risako Inagaki, Hiroko Suzuki, Takashi Haseoka, Shinji Arai, Yuri Takagi, ...

GeroPsych
2020, pp. 1-9
Author(s): Mengjiao Fan, Thomson W. L. Wong

Abstract. This study investigated whether errorless psychomotor training with psychological manipulation could modify visuomotor behaviors in an everyday reaching task for older adults, and whether its benefits would transfer. A group of 36 older adults (mean age = 71.06, SD = 5.29) was trained on a reaching task (lifting a handled mug to a target) using errorless, errorful, or normal psychomotor training. Results indicated that errorless psychomotor training decreased the reaching distance away from the target and the jerkiness of acceleration during both the reaching task and the transfer test. Errorless psychomotor training also reduced the duration of gaze fixation as well as horizontal and vertical eye activity. Our findings indicate that errorless psychomotor training can improve movement accuracy and reduce movement variability during reaching in older adults.


2020
Vol 68 (4), pp. 411-431
Author(s): Daniel Alcaraz Carrión, Cristóbal Pagán Cánovas, Javier Valenzuela

Abstract. This chapter explores the embodied, enacted, and embedded nature of co-speech gestures in the meaning-making process of time conceptualization. We review three contextualized communicative exchanges extracted from American television interviews. First, we offer a step-by-step description of the form of the different gesture realizations performed by the speakers, as well as a brief description of the gaze-fixation patterns. We then offer a functional analysis that interprets the gesturing patterns in terms of their communicative goals in their respective communicative contexts, as well as the complex interplay between verbal and non-verbal communication. The resulting interaction between speech, gesture, and other bodily movements gives rise to a dynamic system that allows for the construction of highly complex meanings: temporal co-speech gestures play a crucial role in the simulation of virtual anchors for complex mental networks that integrate conceptual and perceptual information.


2020
Vol 2 (1), pp. 138-212
Author(s): Benedict C. O. F. Fehringer

Abstract. The goal of the present study was to investigate the potential of gaze-fixation patterns to reflect cognitive processing steps during test performance. Gaze movements, however, can reflect both top-down and bottom-up processes. Top-down processes are the cognitive processing steps that are necessary to solve a given test item. In contrast, bottom-up processes may be provoked by varying visual features that are not related to the item solution. To disentangle top-down and bottom-up processes in the context of spatial thinking, a new test (the R-Cube-Vis Test) was developed, in long and short versions, and validated explicitly for use with eye tracking in three studies. The R-Cube-Vis Test measures visualization and conforms to the linear logistic test model, with six difficulty levels; all items within a level demand the same transformation steps. The R-Cube-Vis Test was then used to investigate different gaze-fixation-based indicators for identifying top-down and bottom-up processes. Some of the indicators were also able to predict the correctness of the answer to a single item. Gaze-related measures have a high potential to reveal cognitive processing steps while an item of a given difficulty level is being solved, provided that top-down and bottom-up processes can be separated.
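By way of illustration, the sketch below computes a few simple per-AOI fixation indicators (fixation count, total dwell time, dwell share) from a list of detected fixations. The fixation representation and AOI mapping are placeholders; these are not the specific indicators developed in the study.

```python
# Illustrative sketch: aggregate simple gaze-fixation indicators per area of
# interest (AOI) from detected fixations. Data structures are hypothetical.
from collections import defaultdict

def fixation_indicators(fixations, aoi_of):
    """fixations: iterable of (x, y, duration_ms); aoi_of: maps (x, y) to an AOI label or None."""
    count = defaultdict(int)
    dwell = defaultdict(float)
    for x, y, dur in fixations:
        aoi = aoi_of(x, y)
        if aoi is not None:
            count[aoi] += 1
            dwell[aoi] += dur
    total = sum(dwell.values()) or 1.0
    # Relative dwell time per AOI is one simple indicator of where processing effort goes
    return {aoi: {"fixations": count[aoi],
                  "dwell_ms": dwell[aoi],
                  "dwell_share": dwell[aoi] / total} for aoi in dwell}
```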

