2021
Author(s): Parisa Abedi Khoozani, Vishal Bharmauria, Adrian Schuetz, Richard P. Wildes, John Douglas Crawford

Allocentric (landmark-centered) and egocentric (eye-centered) visual codes are fundamental for spatial cognition, navigation, and goal-directed movement. Neuroimaging and neurophysiology suggest these codes are initially segregated but are then reintegrated in frontal cortex for movement control. We created and validated a theoretical framework for this process using physiologically constrained inputs and outputs. To implement a general framework, we integrated a Convolutional Neural Network (CNN) model of the visual system with a Multilayer Perceptron (MLP) model of the sensorimotor transformation. The network was trained on a task in which a landmark shifted relative to the saccade target. These visual parameters were input to the CNN; the CNN output and initial gaze position were input to the MLP; and a decoder transformed the MLP output into saccade vectors. Decoded saccade output replicated both idealized training sets with various allocentric weightings and actual monkey data in which the landmark shift had a partial influence (R2 = 0.8). Furthermore, MLP output units accurately simulated prefrontal response field shifts recorded from monkeys during the same paradigm. In summary, our model replicated both the general properties of the visuomotor transformations for gaze and specific experimental results obtained during allocentric-egocentric integration, suggesting it can provide a general framework for understanding these and other complex visuomotor behaviors.
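The pipeline the abstract describes (retinal image → CNN features → MLP combined with gaze position → decoded saccade vector) can be sketched in miniature. The sketch below is purely illustrative and is not the authors' implementation: the image size, Gaussian stimulus encoding, single convolutional stage, layer widths, and random (untrained) weights are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(image, kernel):
    """Valid-mode 2D convolution: a stand-in for one CNN feature stage."""
    kh, kw = kernel.shape
    H, W = image.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0.0)

def retinal_image(target, landmark, size=16, sigma=1.5):
    """Hypothetical retinal input: target and (shifted) landmark as Gaussian blobs."""
    ys, xs = np.mgrid[0:size, 0:size]
    img = np.exp(-((xs - target[0])**2 + (ys - target[1])**2) / (2 * sigma**2))
    img += 0.5 * np.exp(-((xs - landmark[0])**2 + (ys - landmark[1])**2) / (2 * sigma**2))
    return img

# Visual stage: encode target + shifted landmark, extract CNN-style features.
kernel = rng.normal(size=(3, 3))
img = retinal_image(target=(5, 8), landmark=(10, 6))
features = relu(conv2d(img, kernel)).ravel()        # 14x14 -> 196 features

# Sensorimotor stage: concatenate features with initial gaze position,
# pass through a one-hidden-layer MLP, then linearly decode a saccade vector.
gaze = np.array([0.2, -0.1])                        # assumed 2D gaze signal
x = np.concatenate([features, gaze])

W1 = rng.normal(scale=0.1, size=(32, x.size))       # untrained example weights
b1 = np.zeros(32)
W2 = rng.normal(scale=0.1, size=(2, 32))

hidden = relu(W1 @ x + b1)
saccade = W2 @ hidden                               # decoded 2D saccade vector
print(saccade.shape)                                # (2,)
```

In the study itself the network is trained so that the decoded saccade reflects a weighted combination of egocentric (target-relative) and allocentric (landmark-relative) information; here the weights are random, so only the data flow, not the learned behavior, is shown.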

