Representational Content of Oscillatory Brain Activity during Object Recognition: Contrasting Cortical and Deep Neural Network Hierarchies

eNeuro, 2021, ENEURO.0362-20.2021
Author(s): Leila Reddy, Radoslaw Martin Cichy, Rufin VanRullen

Sensors, 2019, Vol. 19 (3), p. 529
Author(s): Hui Zeng, Bin Yang, Xiuqing Wang, Jiwei Liu, Dongmei Fu

With the development of low-cost RGB-D (Red Green Blue-Depth) sensors, RGB-D object recognition has attracted increasing attention in recent years. Deep learning has become popular in image analysis and has achieved competitive results. To make full use of the identification information in the RGB and depth images, we propose an RGB-D object recognition method based on a multi-modal deep neural network and DS (Dempster-Shafer) evidence theory. First, the RGB and depth images are preprocessed and two convolutional neural networks are trained, one per modality. Next, we perform multi-modal feature learning using the proposed quadruplet-sample-based objective function to fine-tune the network parameters. Then, two probability classification results are obtained using two sigmoid SVMs (Support Vector Machines) on the learned RGB and depth features. Finally, the two classification results are integrated with a DS evidence-theory-based decision fusion method. Compared with other RGB-D object recognition methods, the proposed method adopts two fusion strategies: multi-modal feature learning and DS decision fusion. Both the discriminative information of each modality and the correlation between the two modalities are exploited. Extensive experiments validate the effectiveness of the proposed method.
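The abstract does not spell out the fusion rule, but when the two classifier outputs are treated as mass functions over singleton class hypotheses, Dempster's rule of combination reduces to a normalized element-wise product. The sketch below illustrates that special case only; it is not the authors' implementation, and the per-class probabilities are hypothetical stand-ins for the RGB and depth SVM outputs.

import numpy as np

def dempster_combine(m_rgb, m_depth):
    """Dempster's rule for two mass functions defined only on singleton classes.
    m_rgb, m_depth: 1-D arrays of per-class masses, each summing to 1."""
    joint = np.outer(m_rgb, m_depth)           # products of masses for every class pair
    agreement = np.diag(joint)                 # both sources assign mass to the same class
    conflict = joint.sum() - agreement.sum()   # mass on contradictory class pairs (K)
    if np.isclose(conflict, 1.0):
        raise ValueError("Total conflict: Dempster's rule is undefined.")
    return agreement / (1.0 - conflict)        # renormalize the non-conflicting mass

# Hypothetical per-class probabilities from the RGB and depth classifiers
p_rgb   = np.array([0.70, 0.20, 0.10])
p_depth = np.array([0.55, 0.35, 0.10])
fused = dempster_combine(p_rgb, p_depth)
print(fused, fused.argmax())  # fused beliefs and the winning class index

Because the product rewards classes that both modalities agree on and the conflict term discounts contradictory evidence, the fused decision can differ from either single-modality decision.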


2018, Vol. 119 (4), pp. 1251-1253
Author(s): Randolph F. Helfrich

Our continuous perception of the world could be the result of discrete sampling, where individual snapshots are seamlessly fused into a coherent stream. It has been argued that endogenous oscillatory brain activity could provide the functional substrate of cortical rhythmic sampling. A new study demonstrates that cortical rhythmic sampling is tightly linked to the oculomotor system, thus providing a novel perspective on the neural network underlying top-down guided visual perception.


2019, Vol. 5 (5), eaav7903
Author(s): Khaled Nasr, Pooja Viswanathan, Andreas Nieder

Humans and animals have a “number sense,” an innate capability to intuitively assess the number of visual items in a set, its numerosity. This capability implies that mechanisms to extract numerosity indwell the brain’s visual system, which is primarily concerned with visual object recognition. Here, we show that network units tuned to abstract numerosity, and therefore reminiscent of real number neurons, spontaneously emerge in a biologically inspired deep neural network that was merely trained on visual object recognition. These numerosity-tuned units underlay the network’s number discrimination performance that showed all the characteristics of human and animal number discriminations as predicted by the Weber-Fechner law. These findings explain the spontaneous emergence of the number sense based on mechanisms inherent to the visual system.
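For context (not part of the abstract itself): a Weber-Fechner signature means that discriminating two numerosities depends on their ratio rather than their absolute difference. In LaTeX:

% Weber's law: the just-noticeable difference grows in proportion to magnitude
\frac{\Delta n}{n} = w \quad \text{(constant Weber fraction } w\text{)}

% Fechner's logarithmic internal scale, so perceived differences depend on the ratio n_2 / n_1
S(n) = k \log n \;\;\Rightarrow\;\; S(n_2) - S(n_1) = k \log\frac{n_2}{n_1}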


2018, Vol. 18 (10), p. 419
Author(s): Alban Flachot, Karl Gegenfurtner

Author(s): Cem Uran, Alina Peter, Andreea Lazar, William Barnes, Johanna Klon-Lipok, ...

Feedforward deep neural networks for object recognition are a promising model of visual processing and can accurately predict firing-rate responses along the ventral stream. Yet, these networks have limitations as models of various aspects of cortical processing related to recurrent connectivity, including neuronal synchronization and the integration of sensory inputs with spatio-temporal context. We trained self-supervised, generative neural networks to predict small regions of natural images based on the spatial context (i.e., inpainting). Using these network predictions, we determined the spatial predictability of visual inputs into (macaque) V1 receptive fields (RFs), and distinguished low- from high-level predictability. Spatial predictability strongly modulated V1 activity, with distinct effects on firing rates and synchronization in the gamma (30-80 Hz) and beta (18-30 Hz) bands. Furthermore, firing rates, but not synchronization, were accurately predicted by a deep neural network for object recognition. Neural networks trained specifically to predict V1 gamma-band synchronization developed large, grating-like RFs in the deepest layer. These findings suggest complementary roles for firing rates and synchronization in self-supervised learning of natural-image statistics.
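As a rough illustration of the inpainting idea (a sketch under stated assumptions, not the authors' models): a small convolutional network is trained to fill in a masked central patch from its spatial surround, and the spatial predictability of that patch is then scored as the negative prediction error. The architecture, crop size, and patch size below are hypothetical.

import torch
import torch.nn as nn
import torch.nn.functional as F

PATCH = 16  # size of the masked central region (stand-in for a receptive field)

class Inpainter(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 5, padding=2), nn.ReLU(),
            nn.Conv2d(32, 32, 5, padding=2), nn.ReLU(),
            nn.Conv2d(32, 1, 5, padding=2),
        )

    def forward(self, x):
        return self.net(x)

def mask_center(img):
    """Zero out the central PATCH x PATCH region of each image."""
    masked = img.clone()
    h, w = img.shape[-2:]
    r0, c0 = (h - PATCH) // 2, (w - PATCH) // 2
    masked[..., r0:r0 + PATCH, c0:c0 + PATCH] = 0.0
    return masked, (r0, c0)

def predictability(model, img):
    """Negative reconstruction error of the masked patch: higher = more predictable from its context."""
    masked, (r0, c0) = mask_center(img)
    with torch.no_grad():
        pred = model(masked)
    target = img[..., r0:r0 + PATCH, c0:c0 + PATCH]
    guess = pred[..., r0:r0 + PATCH, c0:c0 + PATCH]
    return -F.mse_loss(guess, target).item()

# Toy self-supervised training on random crops (stand-ins for natural-image crops).
model = Inpainter()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):
    batch = torch.rand(8, 1, 64, 64)
    masked, (r0, c0) = mask_center(batch)
    pred = model(masked)
    loss = F.mse_loss(pred[..., r0:r0 + PATCH, c0:c0 + PATCH],
                      batch[..., r0:r0 + PATCH, c0:c0 + PATCH])
    opt.zero_grad(); loss.backward(); opt.step()

print(predictability(model, torch.rand(1, 1, 64, 64)))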

