The relative coding strength of color and form in the human ventral visual pathway and convolutional neural networks

2021 ◽  
Vol 21 (9) ◽  
pp. 1845
Author(s):  
JohnMark Taylor ◽  
Yaoda Xu

2021 ◽  
Vol 15 ◽  
Author(s):  
Leonard Elia van Dyck ◽  
Roland Kwitt ◽  
Sebastian Jochen Denzler ◽  
Walter Roland Gruber

Deep convolutional neural networks (DCNNs) and the ventral visual pathway share many architectural and functional similarities in visual challenges such as object recognition. Recent work has shown that the two hierarchical cascades can be compared in terms of both behavior and underlying activation. However, these approaches ignore key differences in the spatial priorities of information processing. In this proof-of-concept study, we compare human observers (N = 45) and three feedforward DCNNs through eye tracking and saliency maps. The results reveal fundamentally different resolutions in the two visualization methods, which must be taken into account for a meaningful comparison. Moreover, we provide evidence that a DCNN with biologically plausible receptive field sizes, vNet, shows higher agreement with human viewing behavior than a standard ResNet architecture. We find that image-specific factors such as category, animacy, arousal, and valence are directly linked to the agreement of spatial object recognition priorities between humans and DCNNs, whereas measures such as difficulty and general image properties are not. With this approach, we aim to open new perspectives at the intersection of biological and computer vision research.
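As a rough illustration of this kind of comparison, the sketch below (assuming PyTorch and a pretrained torchvision ResNet rather than the paper's vNet or its exact saliency method) computes a simple gradient-based saliency map and correlates it with a human fixation density map; the function names and the random stand-in data are purely illustrative.

# Hypothetical sketch: correlating a DCNN gradient saliency map with a human
# fixation map. Assumes PyTorch and torchvision (>= 0.13 for the weights API);
# this is not the paper's vNet pipeline.
import torch
from torchvision import models

def gradient_saliency(model, image):
    """Absolute-gradient saliency map (H, W) for the model's top class."""
    model.eval()
    image = image.detach().clone().requires_grad_(True)   # (1, 3, H, W)
    logits = model(image)
    logits[0, logits.argmax()].backward()                 # gradient of top logit
    return image.grad.abs().max(dim=1)[0].squeeze(0)      # max over color channels

def fixation_agreement(saliency, fixation_map):
    """Pearson-style correlation between a saliency map and a same-sized
    human fixation density map."""
    s = (saliency - saliency.mean()) / (saliency.std() + 1e-8)
    f = (fixation_map - fixation_map.mean()) / (fixation_map.std() + 1e-8)
    return (s * f).mean().item()

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
image = torch.rand(1, 3, 224, 224)       # stand-in for a preprocessed photo
fixations = torch.rand(224, 224)         # stand-in for an eye-tracking heatmap

sal = gradient_saliency(model, image)
print("saliency/fixation correlation:", fixation_agreement(sal, fixations))

In practice the model saliency map would first be blurred or downsampled to a resolution comparable to the fixation heatmap, which is exactly the resolution mismatch the study highlights.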


2019 ◽  
Vol 39 (33) ◽  
pp. 6513-6525 ◽  
Author(s):  
Stefania Bracci ◽  
J. Brendan Ritchie ◽  
Ioannis Kalfas ◽  
Hans P. Op de Beeck

2021 ◽  
Vol 11 (5) ◽  
pp. 1364-1371
Author(s):  
Ching Wai Yong ◽  
Kareen Teo ◽  
Belinda Pingguan Murphy ◽  
Yan Chai Hum ◽  
Khin Wee Lai

In recent decades, convolutional neural networks (CNNs) have delivered promising results in vision-related tasks across different domains. Previous studies have introduced deeper network architectures to further improve the performance of object classification, localization, and segmentation. However, this complicates mapping the network's layers onto the processing stages of the ventral visual pathway. Although CORnet models are not precisely biomimetic, they approximate the anatomy of the ventral visual pathway more closely than other deep neural networks. The uniqueness of this architecture inspires us to extend it into a core object segmentation network, CORSegnet-Z, which uses CORnet-Z building blocks as its encoding elements. We train and evaluate the proposed model on two large datasets. The proposed model shows significant improvements in segmentation metrics when delineating cartilage tissue from knee magnetic resonance (MR) images and segmenting lesion boundaries in dermoscopic images.
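To make the encoder idea concrete, here is a minimal PyTorch sketch of a segmentation network whose encoder stacks CORnet-Z-style blocks (convolution, ReLU, max-pooling). The channel counts, the number of stages, and the simple upsampling decoder are assumptions for illustration, not the actual CORSegnet-Z.

# Minimal sketch: CORnet-Z-style blocks reused as the encoder of a
# segmentation network. Layer sizes and the decoder are illustrative only.
import torch
import torch.nn as nn

class CORBlockZ(nn.Module):
    """One CORnet-Z-style stage: conv, ReLU, 2x2 max-pooling."""
    def __init__(self, in_ch, out_ch, kernel_size=3):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size, padding=kernel_size // 2)
        self.relu = nn.ReLU(inplace=True)
        self.pool = nn.MaxPool2d(2)

    def forward(self, x):
        return self.pool(self.relu(self.conv(x)))

class TinyCORSegNet(nn.Module):
    """Encoder of stacked CORnet-Z-style blocks, simple upsampling decoder."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.encoder = nn.Sequential(
            CORBlockZ(3, 64),     # V1-like stage
            CORBlockZ(64, 128),   # V2-like stage
            CORBlockZ(128, 256),  # V4-like stage
            CORBlockZ(256, 512),  # IT-like stage
        )
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=16, mode="bilinear", align_corners=False),
            nn.Conv2d(512, num_classes, kernel_size=1),   # per-pixel class scores
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

net = TinyCORSegNet(num_classes=2)
mask_logits = net(torch.rand(1, 3, 256, 256))    # -> (1, 2, 256, 256)

A real segmentation decoder would typically add skip connections and learned upsampling; the point here is only that each encoder stage keeps the compact conv-ReLU-pool structure of CORnet-Z.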


2020 ◽  
Vol 2020 (10) ◽  
pp. 28-1-28-7 ◽  
Author(s):  
Kazuki Endo ◽  
Masayuki Tanaka ◽  
Masatoshi Okutomi

Classification of degraded images is important in practice because images are often degraded by compression, noise, blurring, etc. Nevertheless, most research in image classification focuses only on clean images without any degradation. Some papers have proposed deep convolutional neural networks that combine an image restoration network with a classification network to classify degraded images. This paper proposes an alternative approach in which a degraded image and an additional degradation parameter are used for classification. The proposed classification network has two inputs: the degraded image and the degradation parameter. A degradation-parameter estimation network is also incorporated for cases where the degradation parameters of the degraded images are unknown. Experimental results show that the proposed method outperforms a straightforward approach in which the classification network is trained on degraded images only.
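A minimal PyTorch sketch of this two-input idea follows; the layer sizes, the way the scalar degradation parameter is concatenated to the image features, and the small estimation network are assumptions for illustration rather than the paper's exact architecture.

# Illustrative sketch: a classifier that takes a degraded image plus a scalar
# degradation parameter (e.g., a noise level), with a small estimator for
# cases where the parameter is unknown. Not the paper's exact architecture.
import torch
import torch.nn as nn

class DegradationAwareClassifier(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(            # image branch
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64 + 1, num_classes)  # +1 for the parameter

    def forward(self, image, degradation_param):
        feat = self.features(image).flatten(1)              # (B, 64)
        x = torch.cat([feat, degradation_param], dim=1)     # append the parameter
        return self.classifier(x)

class DegradationEstimator(nn.Module):
    """Estimates the degradation parameter when it is unknown at test time."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, 1),
        )

    def forward(self, image):
        return self.net(image)                              # (B, 1)

clf, est = DegradationAwareClassifier(), DegradationEstimator()
imgs = torch.rand(4, 3, 64, 64)          # batch of degraded images
params = est(imgs)                       # estimated parameter, shape (4, 1)
logits = clf(imgs, params)               # (4, 10)

When the true degradation parameter is known (for example, the noise level used to corrupt the image), it can be passed directly in place of the estimator's output.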

