Chemoretina: An alternate approach to retinal prosthesis: Visual stimulation strategy using chemicals

Author(s): Anuradha V. Pai, Jayesh Bellare, Tapan K. Gandhi
Author(s): Ziad O. Abu-Faraj, Dany M. Abou Rjeily, Rayan W. Bou Nasreddine, Majid A. Andari, Habib H. Taok

2021, Vol 8 (1)
Author(s): Eslam Mounier, Bassem Abdullah, Hani Mahdi, Seif Eldawlatly

Abstract
The Lateral Geniculate Nucleus (LGN) is one of the major processing sites along the visual pathway. Despite its crucial role in processing visual information and its utility as a target for recently developed visual prostheses, it remains much less studied than the retina and the visual cortex. In this paper, we introduce a deep learning encoder to predict LGN neuronal firing in response to different visual stimulation patterns. The encoder comprises a deep Convolutional Neural Network (CNN) that incorporates a spatiotemporal representation of the visual stimulus, in addition to LGN neuronal firing history, to predict the response of LGN neurons. Extracellular activity was recorded in vivo using multi-electrode arrays from single units in the LGN of 12 anesthetized rats, with a total neuronal population of 150 units. Neural activity was recorded in response to single-pixel, checkerboard, and geometric-shape visual stimulation patterns. The extracted firing rates and the corresponding stimulation patterns were used to train the model. Model performance was assessed using different testing data sets and different firing rate windows. Overall mean correlation coefficients between the actual and predicted firing rates of 0.57 and 0.7 were achieved for the 10 ms and 50 ms firing rate windows, respectively. The results demonstrate that the model is robust to variability in the spatiotemporal properties of the recorded neurons, outperforming other examined models, including the state-of-the-art Generalized Linear Model (GLM). These results indicate the potential of deep convolutional neural networks as viable models of LGN firing.
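The evaluation described in the abstract, binning spike trains into firing-rate windows (10 ms or 50 ms) and computing the Pearson correlation between actual and predicted rates, can be sketched as below. This is a minimal illustration of the metric only, not the paper's code; the function names, the synthetic spike train, and the noisy "prediction" are all assumptions for demonstration.

```python
import numpy as np

def bin_firing_rates(spike_times_ms, duration_ms, window_ms):
    """Bin spike times (ms) into per-window firing rates in spikes/s."""
    edges = np.arange(0, duration_ms + window_ms, window_ms)
    counts, _ = np.histogram(spike_times_ms, bins=edges)
    return counts / (window_ms / 1000.0)

def pearson_r(actual, predicted):
    """Pearson correlation coefficient between two rate vectors."""
    return float(np.corrcoef(actual, predicted)[0, 1])

# Synthetic example: 120 spikes over a 1 s recording
rng = np.random.default_rng(0)
spikes = np.sort(rng.uniform(0, 1000, size=120))

# Rates at the two window sizes mentioned in the abstract
actual_10 = bin_firing_rates(spikes, 1000, 10)   # 100 bins
actual_50 = bin_firing_rates(spikes, 1000, 50)   # 20 bins

# A stand-in "prediction": the actual rates plus noise
predicted_10 = actual_10 + rng.normal(0, actual_10.std(), actual_10.size)
r = pearson_r(actual_10, predicted_10)
```

Coarser windows smooth out bin-to-bin spiking variability, which is consistent with the abstract's higher correlation at 50 ms (0.7) than at 10 ms (0.57).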


2021, Vol 8 (1)
Author(s): Juyeon Park, Jennifer Paff Ogle

Abstract
We explored how viewing one's anthropometric virtual avatar affects the viewer's self-body perception through a comparative evaluation of self-concepts (self-esteem and self-compassion) within the framework of allocentric lock theory. We recruited 18 female adults, aged 18–21, who identified themselves as having some level of body image concern and who had had no clinical treatment for their body image. Participants were randomly assigned to either the experimental or the control group. The experimental group participated in both the body positivity program and the virtual avatar program, whereas the control group attended the body positivity program only. The results affirmed that the body positivity program served as a psychological buffer prior to the virtual avatar stimulus. After the virtual avatar experience, participants demonstrated self-acceptance by lowering their expectations of how they should look. Findings from exit interviews enriched the quantitative results. This study verified the mechanism by which the egocentric sensory input of virtual avatars alters the processing of stored bodily memory, and it offers practical potential for the study outcomes to be applied in emerging fields where novel applications of virtual 3D technology are sought, such as fashion e-commerce.

