Deep Neural Networks Point to Mid-level Complexity of Rodent Object Vision

Author(s):  
Kasper Vinken ◽  
Hans Op de Beeck

Abstract: In the last two decades, rodents have been on the rise as a dominant model for visual neuroscience. This is particularly true for earlier levels of information processing, but high-profile papers have suggested that higher levels of processing, such as invariant object recognition, also occur in rodents. Here we provide a quantitative and comprehensive assessment of this claim by comparing a wide range of rodent behavioral and neural data with convolutional deep neural networks. These networks have been shown to capture the richness of information processing in primates through a succession of convolutional and fully connected layers. We find that rodent object vision can be captured using only low- to mid-level convolutional layers, without any convincing evidence for the need of higher layers known to simulate complex object recognition in primates. Our approach also reveals surprising insights into earlier assumptions, for example that the best-performing animals would be the ones using the most complex representations, which we show is likely incorrect. Our findings suggest a road ahead for further studies aiming to quantify and establish the richness of representations underlying information processing in animal models at large.

2021 ◽  
Vol 17 (3) ◽  
pp. e1008714
Author(s):  
Kasper Vinken ◽  
Hans Op de Beeck

In the last two decades, rodents have been on the rise as a dominant model for visual neuroscience. This is particularly true for earlier levels of information processing, but a number of studies have suggested that higher levels of processing, such as invariant object recognition, also occur in rodents. Here we provide a quantitative and comprehensive assessment of this claim by comparing a wide range of rodent behavioral and neural data with convolutional deep neural networks. These networks have been shown to capture hallmark properties of information processing in primates through a succession of convolutional and fully connected layers. We find that performance on rodent object vision tasks can be captured using only low- to mid-level convolutional layers, without any convincing evidence for the need of higher layers known to simulate complex object recognition in primates. Our approach also reveals surprising insights into earlier assumptions, for example that the best-performing animals would be the ones using the most abstract representations, which we show is likely incorrect. Our findings suggest a road ahead for further studies aiming to quantify and establish the richness of representations underlying information processing in animal models at large.
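The layer-by-layer comparison described here can be sketched in a few lines. The following is a minimal illustration, not the authors' pipeline: it assumes a torchvision AlexNet, random stand-in stimuli and neural responses, and a cross-validated ridge-regression fit from each convolutional layer's activations to the responses; the best-predicting layer indicates the level of complexity needed to account for the data.

```python
# A minimal sketch (not the authors' pipeline): ask how well activations from
# each convolutional layer of a pretrained CNN predict a set of neural responses.
# Stimuli and responses below are random stand-ins for real rodent data.
import numpy as np
import torch
import torchvision.models as models
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

n_images, n_neurons = 100, 50
images = torch.rand(n_images, 3, 224, 224)            # stand-in stimulus set
responses = np.random.rand(n_images, n_neurons)       # stand-in neural responses

cnn = models.alexnet(weights="IMAGENET1K_V1").eval()  # downloads ImageNet weights

# Record the output of every convolutional layer with forward hooks.
activations = {}
def make_hook(name):
    def hook(_module, _inputs, output):
        activations[name] = output.flatten(1).detach().numpy()
    return hook

for idx, layer in enumerate(cnn.features):
    if isinstance(layer, torch.nn.Conv2d):
        layer.register_forward_hook(make_hook(f"conv_{idx}"))

with torch.no_grad():
    cnn(images)

# Cross-validated ridge regression from each layer's features to the responses:
# the layer with the highest score gives the best account of the (stand-in) data.
for name, feats in activations.items():
    score = cross_val_score(Ridge(alpha=1.0), feats, responses, cv=5).mean()
    print(f"{name}: mean cross-validated R^2 = {score:.3f}")
```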


2021 ◽  
Vol 17 (4) ◽  
pp. 1-11
Author(s):  
Wentao Chen ◽  
Hailong Qiu ◽  
Jian Zhuang ◽  
Chutong Zhang ◽  
Yu Hu ◽  
...  

Deep neural networks have demonstrated great potential in recent years, exceeding the performance of human experts in a wide range of applications. Due to their large size, however, compression techniques such as weight quantization and pruning are usually applied before they can be deployed on edge devices. It is generally believed that quantization leads to performance degradation, and plenty of existing works have explored quantization strategies that aim for minimal accuracy loss. In this paper, we argue that quantization, which essentially imposes regularization on weight representations, can sometimes help to improve accuracy. We conduct comprehensive experiments on three widely used applications: a fully connected network for biomedical image segmentation, a convolutional neural network for image classification on ImageNet, and a recurrent neural network for automatic speech recognition. Experimental results show that quantization improves accuracy by 1%, 1.95%, and 4.23% on the three applications, respectively, with 3.5x-6.4x memory reduction.
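To make the operation concrete, here is a minimal sketch of uniform k-bit weight quantization applied to a stand-in weight matrix. It is illustrative only and does not reproduce the quantization strategies evaluated in the paper; the point is simply that quantization restricts weights to a small set of values, which is where the regularization view comes from.

```python
# A minimal sketch of uniform k-bit weight quantization (illustrative only;
# it does not reproduce the quantization strategies evaluated in the paper).
import numpy as np

def quantize_uniform(weights: np.ndarray, num_bits: int = 8) -> np.ndarray:
    """Snap each weight onto one of 2**num_bits evenly spaced levels, which
    restricts the set of representable weights (the regularization effect)."""
    w_min, w_max = weights.min(), weights.max()
    scale = (w_max - w_min) / (2 ** num_bits - 1)
    return np.round((weights - w_min) / scale) * scale + w_min

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.1, size=(256, 128))   # stand-in for a trained FC weight matrix
for bits in (8, 4, 2):
    w_q = quantize_uniform(w, bits)
    print(f"{bits}-bit: {np.unique(w_q).size} distinct values, "
          f"mean |w - w_q| = {np.abs(w - w_q).mean():.5f}")
```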


2020 ◽  
Author(s):  
Alexander J.E. Kell ◽  
Sophie L. Bokor ◽  
You-Nah Jeon ◽  
Tahereh Toosi ◽  
Elias B. Issa

The marmoset—a small monkey with a flat cortex—offers powerful techniques for studying neural circuits in a primate. However, it remains unclear whether brain functions typically studied in larger primates can be studied in the marmoset. Here, we asked whether the 300-gram marmosets’ perceptual and cognitive repertoire approaches human levels or is instead closer to rodents’. Using high-level visual object recognition as a testbed, we found that on the same task marmosets substantially outperformed rats and generalized far more robustly across images, all while performing ∼1000 trials/day. We then compared marmosets against the high standard of human behavior. Across the same 400 images, marmosets’ image-by-image recognition behavior was strikingly human-like—essentially as human-like as macaques’. These results demonstrate that marmosets have been substantially underestimated and that high-level abilities have been conserved across simian primates. Consequently, marmosets are a potent small model organism for visual neuroscience, and perhaps beyond.
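For illustration only (not the study's analysis code), an image-by-image comparison of this kind can be sketched by correlating the per-image accuracies of two observers, say marmoset and human, across the same image set; the data below are synthetic stand-ins.

```python
# A synthetic sketch of an image-by-image behavioral comparison: correlate the
# per-image accuracies of two observers across the same image set.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
n_images = 400                                   # image count matching the study

# Stand-in per-image accuracies: both observers share some image-difficulty
# structure plus independent noise (real values would come from behavior).
difficulty = rng.uniform(0.3, 1.0, n_images)
observer_a = np.clip(difficulty + rng.normal(0, 0.05, n_images), 0, 1)
observer_b = np.clip(difficulty + rng.normal(0, 0.05, n_images), 0, 1)

r, p = pearsonr(observer_a, observer_b)
print(f"image-by-image consistency: r = {r:.2f} (p = {p:.1e})")
```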


2005 ◽  
Vol 15 (01n02) ◽  
pp. 129-135 ◽  
Author(s):  
MITSUO YOSHIDA ◽  
YASUAKI KUROE ◽  
TAKEHIRO MORI

Recently, models of neural networks that can deal directly with complex numbers, complex-valued neural networks, have been proposed, and several studies have examined their information-processing abilities. Furthermore, models of neural networks that can deal with quaternions, an extension of complex numbers, have also been proposed; however, these have all been multilayer quaternion neural networks. This paper proposes models of fully connected recurrent quaternion neural networks, namely Hopfield-type quaternion neural networks. Because quaternion multiplication is non-commutative, several different models can be considered. We investigate the dynamics of the proposed models from the viewpoint of the existence of an energy function and derive conditions for its existence.
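To make the non-commutativity point concrete, here is a small illustrative sketch (not from the paper) of the Hamilton product and one asynchronous update step of a Hopfield-type network with quaternion-valued weights and states. The component-wise sign nonlinearity and left-multiplication convention are just one possible choice, and the energy-function conditions derived in the paper are not reproduced.

```python
# A hypothetical sketch of quaternion arithmetic and one Hopfield-type update
# step with quaternion-valued states and weights (illustrative only).
import numpy as np

def qmul(p, q):
    """Hamilton product of quaternions p = (w, x, y, z) and q."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return np.array([
        pw*qw - px*qx - py*qy - pz*qz,
        pw*qx + px*qw + py*qz - pz*qy,
        pw*qy - px*qz + py*qw + pz*qx,
        pw*qz + px*qy - py*qx + pz*qw,
    ])

# Non-commutativity: i * j = k, but j * i = -k.
i, j = np.array([0., 1, 0, 0]), np.array([0., 0, 1, 0])
print(qmul(i, j), qmul(j, i))   # [0 0 0 1] vs. [0 0 0 -1]

def sign_activation(q):
    """Component-wise sign nonlinearity; zeros are mapped to +1."""
    return np.sign(q) + (q == 0)

def update_neuron(states, weights, k):
    """Asynchronously update neuron k: s_k <- f(sum_j W[k, j] * s_j)."""
    total = np.zeros(4)
    for jdx, s in enumerate(states):
        total += qmul(weights[k, jdx], s)     # left-multiplication convention
    return sign_activation(total)

# Tiny random network: 3 neurons, quaternion weights, +/-1 component states.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 3, 4))
states = np.sign(rng.normal(size=(3, 4)))
states[0] = update_neuron(states, W, 0)
print(states[0])
```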


Electronics ◽  
2019 ◽  
Vol 8 (1) ◽  
pp. 78 ◽  
Author(s):  
Zidi Qin ◽  
Di Zhu ◽  
Xingwei Zhu ◽  
Xuan Chen ◽  
Yinghuan Shi ◽  
...  

As a key ingredient of deep neural networks (DNNs), fully-connected (FC) layers are widely used in various artificial intelligence applications. However, FC layers contain many parameters, so their efficient processing is restricted by memory bandwidth. In this paper, we propose a compression approach combining block-circulant matrix-based weight representation and power-of-two quantization. Applying block-circulant matrices in FC layers reduces the storage complexity from O(k²) to O(k). By quantizing the weights into integer powers of two, the multiplications in inference can be replaced by shift and add operations. The memory usage of models for MNIST, CIFAR-10 and ImageNet can be compressed by 171×, 2731× and 128× with minimal accuracy loss, respectively. A configurable parallel hardware architecture is then proposed for processing the compressed FC layers efficiently. Without multipliers, a block matrix-vector multiplication module (B-MV) is used as the computing kernel. The architecture is flexible enough to support FC layers with various compression ratios at a small footprint. At the same time, memory accesses can be significantly reduced by the configurable architecture. Measurement results show that the accelerator has a processing power of 409.6 GOPS and achieves 5.3 TOPS/W energy efficiency at 800 MHz.
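As an illustration of the two ingredients (not the paper's accelerator or its exact quantization scheme): a k-by-k circulant block is fully described by its first row, its product with a vector can be computed via FFTs, and power-of-two weights turn multiplications into shifts. A minimal sketch of a single circulant block follows.

```python
# A hypothetical sketch of block-circulant weight storage plus power-of-two
# quantization (illustrative only; it does not reproduce the paper's design).
import numpy as np

def circulant_matvec(first_row, x):
    """Multiply a circulant matrix (defined by its first row) by x via FFT:
    storage drops from O(k^2) to O(k) and the product costs O(k log k)."""
    first_col = np.roll(first_row[::-1], 1)   # first column of the circulant
    return np.real(np.fft.ifft(np.fft.fft(first_col) * np.fft.fft(x)))

def quantize_pow2(w):
    """Round each weight to the nearest signed power of two, so multiplying
    by it reduces to a bit shift plus a sign flip."""
    exponent = np.round(np.log2(np.abs(w) + 1e-12))
    return np.sign(w) * (2.0 ** exponent)

rng = np.random.default_rng(0)
k = 8
first_row = rng.normal(size=k)     # only k values stored per k-by-k block
x = rng.normal(size=k)

# Reference dense result vs. the FFT-based circulant product.
C = np.stack([np.roll(first_row, s) for s in range(k)])
print(np.allclose(C @ x, circulant_matvec(first_row, x)))   # True

# Power-of-two quantization of the stored row.
print(quantize_pow2(first_row))
```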


2018 ◽  
Vol 275 ◽  
pp. 1132-1139 ◽  
Author(s):  
Xiaoheng Jiang ◽  
Yanwei Pang ◽  
Xuelong Li ◽  
Jing Pan ◽  
Yinghong Xie
