Integration of allocentric and egocentric visual information in a convolutional / multilayer perceptron network model of goal-directed gaze shifts

2021
Author(s): Parisa Abedi Khoozani, Vishal Bharmauria, Adrian Schuetz, Richard P. Wildes, John Douglas Crawford

Allocentric (landmark-centered) and egocentric (eye-centered) visual codes are fundamental for spatial cognition, navigation, and goal-directed movement. Neuroimaging and neurophysiology suggest these codes are segregated initially, but then reintegrated in frontal cortex for movement control. We created and validated a theoretical framework for this process using physiologically constrained inputs and outputs. To implement a general framework, we integrated a Convolutional Neural Network (CNN) of the visual system with a Multilayer Perceptron (MLP) model of the sensorimotor transformation. The network was trained on a task where a landmark shifted relative to the saccade target. These visual parameters were input to the CNN, the CNN output and initial gaze position were input to the MLP, and a decoder transformed the MLP output into saccade vectors. Decoded saccade output replicated idealized training sets with various allocentric weightings, and actual monkey data where the landmark shift had a partial influence (R² = 0.8). Furthermore, MLP output units accurately simulated prefrontal response field shifts recorded from monkeys during the same paradigm. In summary, our model replicated both the general properties of the visuomotor transformations for gaze and the specific experimental results obtained during allocentric-egocentric integration, suggesting it can provide a general framework for understanding these and other complex visuomotor behaviors.
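
As a concrete (if highly simplified) illustration of this CNN-to-MLP architecture, the following PyTorch sketch wires a small convolutional visual stage into a perceptron that also receives initial gaze position and is read out by a linear saccade decoder. Layer sizes, the retinal-image encoding, and the decoder are illustrative assumptions, not the configuration used in the study.

```python
# Minimal sketch of a CNN -> MLP pipeline for allocentric/egocentric
# integration. All layer sizes and encodings are illustrative assumptions.
import torch
import torch.nn as nn

class VisualCNN(nn.Module):
    """Toy visual front end: encodes a retinal image containing the
    saccade target and the (possibly shifted) landmark."""
    def __init__(self, out_features=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.readout = nn.Linear(16 * 16 * 16, out_features)

    def forward(self, img):                     # img: (batch, 1, 64, 64)
        x = self.features(img).flatten(1)
        return self.readout(x)

class SensorimotorMLP(nn.Module):
    """Toy sensorimotor stage: combines the CNN code with initial gaze
    position and outputs a population code that a linear decoder turns
    into a saccade vector."""
    def __init__(self, visual_dim=64, gaze_dim=2, hidden=128, out_units=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(visual_dim + gaze_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, out_units),
        )
        self.decoder = nn.Linear(out_units, 2)   # decode to (dx, dy) saccade

    def forward(self, visual_code, gaze):
        units = self.net(torch.cat([visual_code, gaze], dim=1))
        return self.decoder(units), units

# Forward pass on random data, just to show the plumbing.
cnn, mlp = VisualCNN(), SensorimotorMLP()
img = torch.randn(8, 1, 64, 64)                 # retinal images
gaze = torch.randn(8, 2)                        # initial gaze positions
saccade, hidden_units = mlp(cnn(img), gaze)
print(saccade.shape)                            # torch.Size([8, 2])
```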

1997
Vol 50 (4), pp. 726-741
Author(s): Luc Proteau, Guillaume Masson

It is well known that dynamic visual information influences movement control, whereas the role played by background visual information is still largely unknown. Evidence coming mainly from eye movement and manual tracking studies indicates that background visual information modifies motion perception and might influence movement control. The goal of the present study was to test this hypothesis. Subjects had to apply pressure on a strain gauge to displace, in a single action, a cursor shown on a video display and to immobilize it on a target shown on the same display. In some instances, the visual background against which the cursor moved was unexpectedly perturbed in a direction opposite to (Experiment 1) or in the same direction as (Experiment 2) the cursor controlled by the subject. The results of both experiments indicated that the introduction of a visual perturbation significantly affected aiming accuracy. These results suggest that background visual information is used to evaluate the velocity of the aiming cursor, and that this perceived velocity is fed back to the control system, which uses it for on-line corrections.
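
The interpretation above, that perceived cursor velocity (taken relative to the background) feeds an on-line correction loop, can be illustrated with a toy closed-loop simulation. All gains, time steps, and perturbation magnitudes below are arbitrary assumptions chosen only to show how a background shift biases the aiming endpoint.

```python
# Toy closed-loop aiming simulation: perceived cursor velocity is taken
# relative to the background, so a background perturbation biases the
# on-line correction. All gains and magnitudes are arbitrary assumptions.

def simulate_trial(background_velocity=0.0, target=10.0,
                   n_steps=600, dt=0.01, gain_pos=4.0, gain_vel=1.5):
    pos, vel = 0.0, 0.0
    for _ in range(n_steps):
        perceived_vel = vel - background_velocity   # motion relative to background
        # proportional control on position error, damping on perceived velocity
        force = gain_pos * (target - pos) - gain_vel * perceived_vel
        vel += force * dt
        pos += vel * dt
    return pos

print("stable background:              %.2f" % simulate_trial(0.0))
print("background moves with cursor:   %.2f" % simulate_trial(+2.0))
print("background moves against cursor:%.2f" % simulate_trial(-2.0))
```

In this toy controller the steady-state endpoint is shifted by (gain_vel / gain_pos) times the background velocity, which is the qualitative endpoint bias a velocity-feedback account would predict.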


2021
Vol 11 (1)
Author(s): Bing Liu, Qingbo Zhao, Yueqiang Jin, Jiayu Shen, Chaoyang Li

In this paper, six types of air pollutant concentrations are taken as the research object, and the data monitored by the micro air quality detector are calibrated against the national control point measurements. We use correlation analysis to find the main factors affecting air quality and then build a stepwise regression model for the six types of pollutants based on 8 months of data. Taking the stepwise regression fitted values and the data monitored by the miniature air quality detector as input variables and combining them with a multilayer perceptron neural network, we obtain the SRA-MLP model for correcting the pollutant data. We compared the stepwise regression model, the standard multilayer perceptron neural network, and the SRA-MLP model on three indicators. Whether measured by root mean square error, mean absolute error, or mean relative error, the SRA-MLP model is the best model. Using the SRA-MLP model to correct the data increases the accuracy of the self-built point data by 42.5% to 86.5%. The SRA-MLP model shows excellent prediction performance on both the training set and the test set, indicating that it has good generalization ability. This model plays a positive role in the scientific deployment and promotion of miniature air quality detectors. It can be applied not only to air quality monitoring but also to the monitoring of other environmental indicators.
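
A minimal scikit-learn sketch of the SRA-MLP idea is shown below: forward stepwise (sequential) feature selection plus a linear fit produces fitted values, which are concatenated with the raw detector readings and fed to an MLP, scored by RMSE and MAE. The synthetic data, selection criterion, and network sizes are placeholders, not the settings used in the paper.

```python
# Sketch of an SRA-MLP-style correction: stepwise-regression fitted values
# plus raw micro-detector readings feed an MLP that predicts the reference
# (national control point) concentration. Data and settings are illustrative.
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, mean_absolute_error

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 6))                              # 6 micro-detector channels
y = X @ rng.normal(size=6) + 0.3 * rng.normal(size=1000)    # reference values

# Stage 1: forward stepwise selection + linear fit (stands in for stepwise regression).
selector = SequentialFeatureSelector(LinearRegression(), n_features_to_select=3,
                                     direction="forward").fit(X, y)
lin = LinearRegression().fit(selector.transform(X), y)
stepwise_fit = lin.predict(selector.transform(X)).reshape(-1, 1)

# Stage 2: MLP takes the stepwise fitted values plus the raw readings.
X_sra = np.hstack([stepwise_fit, X])
X_tr, X_te, y_tr, y_te = train_test_split(X_sra, y, test_size=0.25, random_state=0)
mlp = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000,
                   random_state=0).fit(X_tr, y_tr)

pred = mlp.predict(X_te)
print("RMSE:", mean_squared_error(y_te, pred) ** 0.5)
print("MAE: ", mean_absolute_error(y_te, pred))
```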


2018
Vol 120 (5), pp. 2311-2324
Author(s): Andrey R. Nikolaev, Radha Nila Meghanathan, Cees van Leeuwen

In free viewing, the eyes return to previously visited locations rather frequently, even though the attentional and memory-related processes controlling eye movements show a strong antirefixation bias. To overcome this bias, a special refixation-triggering mechanism may have to be recruited. We probed the neural evidence for such a mechanism by combining eye tracking with EEG recording. A distinctive signal associated with refixation planning was observed in the EEG during the presaccadic interval: the presaccadic potential was reduced in amplitude before a refixation compared with normal fixations. The result offers direct evidence for a special refixation mechanism that operates in the saccade-planning stage of eye movement control. Once the eyes have landed on the revisited location, acquisition of visual information proceeds indistinguishably from ordinary fixations. NEW & NOTEWORTHY A substantial proportion of eye fixations in human natural viewing behavior are revisits of recently visited locations, i.e., refixations. Our recently developed methods enabled us to study refixations in a free-viewing visual search task, using combined eye movement and EEG recording. We identified in the EEG a distinctive refixation-related signal, signifying a control mechanism specific to refixations as opposed to ordinary eye fixations.
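
The core comparison, presaccadic EEG amplitude before refixations versus before ordinary fixations, can be sketched as below. The window length, single synthetic channel, and Welch t-test are illustrative assumptions; the actual pipeline involves saccade-locked epoching, artifact handling, and the authors' own statistics.

```python
# Sketch: compare mean presaccadic EEG amplitude for saccades that lead to
# refixations vs. ordinary fixations. Window, channel, and data are synthetic
# placeholders for epochs time-locked to saccade onset.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
fs = 500                                   # sampling rate (Hz), assumed
win = int(0.1 * fs)                        # 100-ms presaccadic window

# Synthetic single-channel epochs (trials x samples) ending at saccade onset.
epochs_refix = rng.normal(1.5, 1.0, size=(80, win))    # before refixations
epochs_fix   = rng.normal(2.0, 1.0, size=(120, win))   # before ordinary fixations

# Mean amplitude in the presaccadic window, per trial.
amp_refix = epochs_refix.mean(axis=1)
amp_fix = epochs_fix.mean(axis=1)

t, p = stats.ttest_ind(amp_refix, amp_fix, equal_var=False)
print(f"refixation mean {amp_refix.mean():.2f} uV, "
      f"fixation mean {amp_fix.mean():.2f} uV, t={t:.2f}, p={p:.3g}")
```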


Author(s): Julio Fernández-Ceniceros, Andrés Sanz-García, Fernando Antoñanzas-Torres, F. Javier Martínez-de-Pisón-Ascacibar

2020
Vol 117 (20), pp. 11178-11183
Author(s): Natalya Shelchkova, Martina Poletti

It is known that attention shifts prior to a saccade to start processing the saccade target before it lands in the foveola, the high-resolution region of the retina. Yet, once the target is foveated, microsaccades, tiny saccades maintaining the fixated object within the fovea, continue to occur. What is the link between these eye movements and attention? There is growing evidence that these eye movements are associated with covert shifts of attention in the visual periphery, when the attended stimuli are presented far from the center of gaze. Yet, microsaccades are primarily used to explore complex foveal stimuli and to optimize fine spatial vision in the foveola, suggesting that the influences of microsaccades on attention may predominantly impact vision at this scale. To address this question, we tracked gaze position with high precision and briefly presented high-acuity stimuli at predefined foveal locations right before microsaccade execution. Our results show that visual discrimination changes prior to microsaccade onset. An enhancement occurs at the microsaccade target location. This modulation is highly selective and is coupled with a drastic impairment at the opposite foveal location, just a few arcminutes away. This effect is strongest when stimuli are presented closer to the eye movement onset time. These findings reveal that the link between attention and microsaccades is deeper than previously thought, exerting its strongest effects within the foveola. As a result, during fixation, foveal vision is constantly being reshaped both in space and in time with the occurrence of microsaccades.
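
Gaze-contingent delivery of this kind requires detecting microsaccades from the eye trace. A common approach (an assumption here, not necessarily the authors' method) is an Engbert-Kliegl-style velocity threshold, sketched below on a synthetic fixation trace.

```python
# Sketch of a simple velocity-threshold microsaccade detector; thresholds,
# smoothing, and the synthetic trace are illustrative assumptions.
import numpy as np

def detect_microsaccades(x, y, fs=1000, lambda_=6.0, min_samples=6):
    """Return (start, end) sample indices where eye velocity exceeds a
    median-based threshold on both axes combined (elliptic criterion)."""
    vx = np.gradient(x) * fs                   # deg/s
    vy = np.gradient(y) * fs
    # robust (median-based) velocity SD per axis
    sx = np.sqrt(np.median(vx**2) - np.median(vx)**2)
    sy = np.sqrt(np.median(vy**2) - np.median(vy)**2)
    crit = (vx / (lambda_ * sx))**2 + (vy / (lambda_ * sy))**2 > 1
    events, start = [], None
    for i, above in enumerate(crit):
        if above and start is None:
            start = i
        elif not above and start is not None:
            if i - start >= min_samples:
                events.append((start, i))
            start = None
    return events

# Synthetic 1-s fixation trace with one ~0.3 deg microsaccade-like jump.
fs = 1000
x = np.random.default_rng(2).normal(0, 0.01, fs).cumsum() * 0.01
y = np.random.default_rng(3).normal(0, 0.01, fs).cumsum() * 0.01
x[500:520] += np.linspace(0.015, 0.3, 20)      # 0.3 deg ramp over 20 ms
x[520:] += 0.3                                 # eye stays at the new position
print(detect_microsaccades(x, y, fs=fs))
```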


Water
2020
Vol 12 (5), pp. 1281
Author(s): Je-Chian Chen, Yu-Min Wang

This study modeled shoreline changes using a multilayer perceptron (MLP) neural network with data collected from five beaches in southern Taiwan. The data included aerial survey maps from the Forestry Bureau for the years 1982, 2002, and 2006, which served as predictors, while unmanned aerial vehicle (UAV) survey data from 2019 served as the response. The MLP was configured with five different activation functions with the aim of evaluating their significance: the Identity, Tanh, Logistic, Exponential, and Sine functions. The results show that the performance of an MLP model may be affected by the choice of activation function. The Logistic and Tanh activation functions outperformed the others, with Logistic performing best at three beaches and Tanh at the remaining two. These findings suggest that the application of machine learning to shoreline change modeling should be accompanied by an extensive evaluation of different activation functions.
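
A minimal sketch of such an activation-function comparison with scikit-learn's MLPRegressor follows. scikit-learn exposes only the identity, logistic, tanh, and relu activations, so the Exponential and Sine functions from the study are not reproduced, and the regression data are synthetic placeholders for the shoreline records.

```python
# Sketch: compare MLP activation functions on a toy regression task,
# analogous to the shoreline-change comparison above. Data and network
# sizes are synthetic placeholders.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 3))          # e.g., historical shoreline predictors
y = np.sin(2 * X[:, 0]) + 0.5 * X[:, 1] + 0.1 * rng.normal(size=500)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

for act in ["identity", "logistic", "tanh", "relu"]:
    mlp = MLPRegressor(hidden_layer_sizes=(16,), activation=act,
                       solver="lbfgs", max_iter=2000,
                       random_state=0).fit(X_tr, y_tr)
    rmse = mean_squared_error(y_te, mlp.predict(X_te)) ** 0.5
    print(f"{act:>8s}: RMSE = {rmse:.3f}")
```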

