Vascular Segmentation in TOF MRA Images of the Brain Using a Deep Convolutional Neural Network

Author(s): Renzo Phellan, Alan Peixinho, Alexandre Falcão, Nils D. Forkert

2021, Vol. 11 (3), pp. 836-845
Author(s): Xiangsheng Zhang, Feng Pan, Leyuan Zhou

Diagnosis of brain diseases based on magnetic resonance imaging (MRI) is mainstream practice. During treatment, medical personnel observe and analyze changes in the size, position, and shape of brain tissues in MRI images to judge whether the tissue is diseased and to formulate a corresponding treatment plan. However, conclusions drawn from visual inspection are influenced by the subjective experience of the experts and are therefore not fully objective, so it has become necessary to limit the interference of subjective factors in diagnosis. This paper proposes an intelligent diagnosis model based on an improved deep convolutional neural network (IDCNN) that introduces an integrated support vector machine (SVM). During image segmentation, problems such as poorly chosen layer settings or an excessive number of parameters lower a DCNN's segmentation accuracy. This study therefore makes targeted adjustments to the network structure: first, the numbers of convolution and down-sampling layers are adjusted, the activation function is changed, and the parameters are optimized to improve the network's non-linear expressive ability; second, the original Softmax classifier is replaced with the integrated SVM classifier to improve classification ability. Simulation results show that, compared with the unimproved model and other classic classifiers, IDCNN improves segmentation results and supports the intelligent diagnosis of brain tissue.
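
No implementation accompanies the abstract, but the described modification (a reshaped DCNN used as a feature extractor, with the Softmax layer replaced by an integrated SVM classifier) can be sketched roughly as follows. The layer counts, the ReLU activation, and the reading of "integrated SVM" as a bagged ensemble of SVMs are illustrative assumptions rather than the authors' published configuration.

```python
# Hypothetical sketch of an IDCNN-style pipeline: a small CNN feature extractor
# with an ensemble of SVMs standing in for the Softmax classifier.
# Layer counts, activation, and the bagged-SVM ensemble are illustrative
# assumptions, not the authors' published configuration.
import torch
import torch.nn as nn
from sklearn.ensemble import BaggingClassifier
from sklearn.svm import SVC


class IDCNNFeatures(nn.Module):
    """CNN backbone with adjusted convolution/down-sampling layers and ReLU."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # down-sampling layer 1
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # down-sampling layer 2
            nn.AdaptiveAvgPool2d(1),
        )

    def forward(self, x):
        return self.features(x).flatten(1)        # (N, 32) feature vectors


def extract_features(model, images):
    """Run MRI patches through the CNN and return NumPy feature vectors."""
    model.eval()
    with torch.no_grad():
        return model(images).cpu().numpy()


# Ensemble SVM replacing the Softmax layer: several RBF-kernel SVMs fitted on
# bootstrap samples of the CNN features, predictions combined by voting.
# (scikit-learn >= 1.2 uses `estimator=`; older releases use `base_estimator=`.)
svm_ensemble = BaggingClassifier(estimator=SVC(kernel="rbf", C=1.0),
                                 n_estimators=5)

# Usage, assuming `train_x`/`test_x` are MRI patch tensors with labels `train_y`:
# cnn = IDCNNFeatures()
# svm_ensemble.fit(extract_features(cnn, train_x), train_y)
# preds = svm_ensemble.predict(extract_features(cnn, test_x))
```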


2020
Author(s): Shan Xu, Yiyuan Zhang, Zonglei Zhen, Jia Liu

Can we recognize faces with zero experience of faces? This question is critical because it examines the role of experience in the formation of domain-specific modules in the brain. Investigations with humans and non-human animals cannot easily dissociate the effect of visual experience from that of hardwired domain-specificity. The present study therefore built a model of selective deprivation of face experience with a representative deep convolutional neural network, AlexNet, by removing all images containing faces from its training stimuli. The deprived model did not show significant deficits in face categorization and discrimination, and face-selective modules still emerged automatically; however, the deprivation reduced the domain-specificity of the face module. In sum, the study provides indisputable evidence on the role of nature versus nurture in developing domain-specific modules: domain-specificity may evolve from non-specific experience without genetic predisposition, and is further fine-tuned by domain-specific experience.
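
A minimal sketch of this deprivation setup, assuming a torchvision AlexNet trained from scratch, might look as follows; the dataset path and the `contains_face` predicate are hypothetical stand-ins for the study's actual stimulus filtering.

```python
# Hypothetical sketch of the face-deprivation model: AlexNet trained from
# scratch after discarding every training image flagged as containing a face.
# The dataset path and the `contains_face` predicate are placeholders; the
# original study's stimulus filtering is not reproduced here.
import torch
from torch.utils.data import DataLoader, Subset
from torchvision import datasets, models, transforms


def contains_face(path: str) -> bool:
    """Placeholder: plug in a real face detector to flag face images."""
    return False


transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
full_set = datasets.ImageFolder("train_images/", transform=transform)

# Keep only the samples judged face-free, building the deprived training diet.
face_free_idx = [i for i, (path, _) in enumerate(full_set.samples)
                 if not contains_face(path)]
deprived_set = Subset(full_set, face_free_idx)
loader = DataLoader(deprived_set, batch_size=64, shuffle=True)

# Randomly initialised AlexNet, so any face selectivity that later emerges
# cannot come from pretrained weights.
model = models.alexnet(weights=None, num_classes=len(full_set.classes))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
criterion = torch.nn.CrossEntropyLoss()

model.train()
for images, labels in loader:          # one illustrative training epoch
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```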


2020, Vol. 2020 (4), pp. 4-14
Author(s): Vladimir Budak, Ekaterina Ilyina

The article proposes a classification of lenses with different symmetrical beam angles and offers the resulting scale as a spotlight palette. A collection of spotlight images was created and classified according to the proposed scale. An analysis of 788 existing lenses and reflectors with different LEDs and COBs was carried out, and the dependence of the axial luminous intensity on the beam angle was obtained. Transfer training of a new deep convolutional neural network (CNN) based on the pre-trained GoogleNet was performed using this collection. Grad-CAM analysis showed that the trained network correctly identifies the relevant features of the objects. The approach classifies arbitrary spotlights with an accuracy of about 80 %. Thus, a lighting designer can use this CNN-based model to determine the class of a spotlight and the corresponding type of lens with its technical parameters.
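
The transfer-training step can be sketched as below with a pretrained GoogLeNet whose final fully connected layer is replaced for the beam-angle classes; the class count, dataset path, frozen-backbone choice, and optimizer settings are assumptions for illustration, not the article's exact training recipe.

```python
# Rough sketch of the transfer-training step: a pretrained GoogLeNet whose
# final fully connected layer is replaced for the spotlight beam-angle classes.
# NUM_BEAM_CLASSES, the dataset path, and the optimizer settings are assumed.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

NUM_BEAM_CLASSES = 8                 # assumed size of the beam-angle scale

model = models.googlenet(weights="IMAGENET1K_V1")     # pretrained backbone
for param in model.parameters():
    param.requires_grad = False                        # freeze the features
model.fc = nn.Linear(model.fc.in_features, NUM_BEAM_CLASSES)

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],   # ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])
spot_images = datasets.ImageFolder("spotlight_images/", transform=transform)
loader = DataLoader(spot_images, batch_size=32, shuffle=True)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:        # one illustrative fine-tuning epoch
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```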


Author(s): André Pereira, Alexandre Pyrrho, Daniel Vanzan, Leonardo Mazza, José Gabriel Gomes
