MLAF-CapsNet: Multi-lane atrous feature fusion capsule network with contrast limited adaptive histogram equalization for brain tumor classification from MRI images

2021 ◽  
pp. 1-18
Author(s):  
Kwabena Adu ◽  
Yongbin Yu ◽  
Jingye Cai ◽  
Patrick Kwabena Mensah ◽  
Kwabena Owusu-Agyemang

Convolutional neural networks (CNNs) have recently displayed remarkable performance in automatic classification and medical image diagnosis. However, CNNs fail to recognize images that have been rotated or differently oriented, which limits their performance. This paper presents a new capsule network (CapsNet) based framework, the multi-lane atrous feature fusion capsule network (MLAF-CapsNet), for brain tumor type classification. The MLAF-CapsNet combines atrous convolution with contrast limited adaptive histogram equalization (CLAHE): the atrous convolutions enlarge the receptive fields while maintaining spatial representation, and CLAHE serves as a base layer that enhances the input images using an improved adaptive histogram equalization (AHE). The proposed method is evaluated on a whole-brain tumor dataset and a segmented tumor dataset, and its performance on the two is explored and compared. On the original images of the two datasets, the MLAF-CapsNet achieves better accuracies (93.40% and 96.60%) and precisions (94.21% and 96.55%) than the traditional CapsNet (78.93% and 97.30%). With augmentation of the two datasets, the proposed method achieves the best accuracies (98.48% and 98.82%) and precisions (98.88% and 98.58%) compared to the traditional CapsNet. Our results indicate that the proposed method can improve brain tumor classification and support radiologists in medical diagnostics.
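The key property of atrous (dilated) convolution claimed above — a larger receptive field without extra parameters or loss of spatial resolution — can be illustrated with a minimal numpy sketch. This is not the authors' implementation; the kernel, rate, and image are illustrative.

```python
import numpy as np

def atrous_conv2d(image, kernel, rate):
    """Valid 2D convolution with a dilation (atrous) rate.

    Inserting rate-1 zeros between kernel taps enlarges the
    effective receptive field without adding parameters."""
    kh, kw = kernel.shape
    # effective kernel extent after dilation
    eh = (kh - 1) * rate + 1
    ew = (kw - 1) * rate + 1
    H, W = image.shape
    out = np.zeros((H - eh + 1, W - ew + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # strided slicing samples the dilated taps
            patch = image[i:i + eh:rate, j:j + ew:rate]
            out[i, j] = np.sum(patch * kernel)
    return out

img = np.arange(36, dtype=float).reshape(6, 6)
k = np.ones((3, 3))
print(atrous_conv2d(img, k, rate=1).shape)  # (4, 4) — plain 3x3 kernel
print(atrous_conv2d(img, k, rate=2).shape)  # (2, 2) — same 9 weights, 5x5 field
```

With rate=2 the nine kernel weights cover a 5x5 window, which is what lets the network "increase receptive fields and maintain spatial representation" as the abstract states.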

2021 ◽  
Vol 38 (4) ◽  
pp. 1171-1179
Author(s):  
Swaraja Kuraparthi ◽  
Madhavi K. Reddy ◽  
C.N. Sujatha ◽  
Himabindu Valiveti ◽  
Chaitanya Duggineni ◽  
...  

Manual tumor diagnosis from magnetic resonance images (MRIs) is a time-consuming procedure prone to human error, which may lead to false detection and classification of the tumor type. Therefore, to automate this complex medical process and ease the task of doctors, a deep learning framework is proposed for brain tumor classification. Publicly available datasets, Kaggle and BraTS, are used for the analysis of brain images. The proposed model is implemented on three pre-trained deep convolutional neural network (DCNN) architectures: AlexNet, VGG16, and ResNet50. Using transfer learning, features are extracted from the pre-trained DCNN architectures, and the extracted features are classified with a support vector machine (SVM) classifier. Data augmentation methods are applied to the MRI images to prevent the network from overfitting. The proposed methodology achieves overall accuracies of 98.28% and 97.87% without data augmentation, and 99.0% and 98.86% with data augmentation, on the Kaggle and BraTS datasets, respectively. The area under the curve (AUC) of the receiver operating characteristic (ROC) is 0.9978 and 0.9850 for the same datasets. The results show that ResNet50 performs best in the classification of brain tumors compared with the other two networks.
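The abstract does not say which data augmentation methods were applied; a common, label-preserving choice for MRI slices is flips and 90-degree rotations, sketched below with numpy as an illustrative assumption rather than the authors' recipe.

```python
import numpy as np

def augment(image):
    """Return simple label-preserving variants of one MRI slice:
    the original, two mirror flips, and three 90-degree rotations."""
    return [
        image,
        np.fliplr(image),    # horizontal flip
        np.flipud(image),    # vertical flip
        np.rot90(image, 1),  # rotate 90 degrees
        np.rot90(image, 2),  # rotate 180 degrees
        np.rot90(image, 3),  # rotate 270 degrees
    ]

slice_ = np.arange(9).reshape(3, 3)
variants = augment(slice_)
print(len(variants))  # 6 training samples from one image
```

Multiplying each training image into several geometric variants is what lets a large pre-trained backbone such as ResNet50 fine-tune on a small medical dataset without overfitting.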


Micromachines ◽  
2021 ◽  
Vol 13 (1) ◽  
pp. 15
Author(s):  
Guanghua Xiao ◽  
Huibin Wang ◽  
Jie Shen ◽  
Zhe Chen ◽  
Zhen Zhang ◽  
...  

Automatic brain tumor classification is a practicable means of accelerating clinical diagnosis. Recently, deep convolutional neural network (CNN) training on MRI datasets has succeeded in computer-aided diagnostic (CAD) systems. Further improving the classification performance of CNNs, however, remains difficult because the discriminative details among brain tumor classes are subtle. We note that existing methods rely heavily on data-driven convolutional models while overlooking what makes one class different from the others. Our study proposes to guide the network to find the exact differences among similar tumor classes. We first present a "dual suppression encoding" block tailored to brain tumor MRIs, which splits the network into two paths that refine global orderless information and local spatial representations, respectively. The aim is to obtain more valuable clues for the correct class by reducing the impact of negative global features and extending the attention to salient local parts. We then introduce a "factorized bilinear encoding" layer for feature fusion, which generates compact and discriminative representations. Finally, the synergy between these two components forms a pipeline that learns end-to-end. Extensive experiments on three datasets show superior classification performance in both qualitative and quantitative evaluation.
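Why a "factorized" bilinear layer yields compact representations can be seen in a small numpy sketch of low-rank bilinear pooling. The projections U and V, the dimensions, and the two input vectors are hypothetical; the paper's exact formulation may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

def factorized_bilinear(x, y, U, V):
    """Low-rank stand-in for bilinear pooling: instead of the full
    outer product x y^T (d*d values), project both inputs to rank k
    and fuse them with an element-wise (Hadamard) product."""
    return (U @ x) * (V @ y)

d, k = 512, 64                      # feature dim, factorization rank
x = rng.standard_normal(d)          # e.g., global-path features
y = rng.standard_normal(d)          # e.g., local-path features
U = rng.standard_normal((k, d))
V = rng.standard_normal((k, d))

z = factorized_bilinear(x, y, U, V)
print(z.shape)  # (64,) vs. 512*512 = 262144 for full bilinear pooling
```

The fused descriptor is k-dimensional rather than d^2-dimensional, which is the sense in which such an encoding is "compact" while still capturing pairwise interactions between the two paths.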


Diagnostics ◽  
2020 ◽  
Vol 10 (8) ◽  
pp. 565 ◽  
Author(s):  
Muhammad Attique Khan ◽  
Imran Ashraf ◽  
Majed Alhaisoni ◽  
Robertas Damaševičius ◽  
Rafal Scherer ◽  
...  

Manual identification of brain tumors is an error-prone and tedious process for radiologists; therefore, it is crucial to adopt an automated system. Binary classification, such as malignant versus benign, is relatively trivial, whereas multimodal brain tumor classification (T1, T2, T1CE, and FLAIR) is a challenging task for radiologists. Here, we present an automated multimodal classification method using deep learning for brain tumor type classification. The proposed method consists of five core steps. In the first step, linear contrast stretching is performed using edge-based histogram equalization and the discrete cosine transform (DCT). In the second step, deep learning feature extraction is performed: using transfer learning, two pre-trained convolutional neural network (CNN) models, VGG16 and VGG19, extract the features. In the third step, a correntropy-based joint learning approach is applied together with an extreme learning machine (ELM) to select the best features. In the fourth step, partial least squares (PLS)-based robust covariant features are fused into one matrix. Finally, the combined matrix is fed to the ELM for classification. The proposed method was validated on the BraTS datasets, achieving accuracies of 97.8%, 96.9%, and 92.5% on BraTS2015, BraTS2017, and BraTS2018, respectively.
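The first step's core operation, linear contrast stretching, is a simple intensity remapping; the numpy sketch below shows the basic linear map only, not the edge-based histogram equalization or DCT refinements the paper adds on top.

```python
import numpy as np

def contrast_stretch(image, lo=0.0, hi=255.0):
    """Linearly map the image's intensity range onto [lo, hi]."""
    imin, imax = image.min(), image.max()
    if imax == imin:                       # flat image: nothing to stretch
        return np.full(image.shape, lo)
    return (image - imin) * (hi - lo) / (imax - imin) + lo

scan = np.array([[50., 60.],
                 [70., 80.]])              # narrow intensity range
print(contrast_stretch(scan))              # now spans the full 0..255 range
```

Spreading a narrow intensity range over the full dynamic range makes low-contrast tumor boundaries more visible to the downstream VGG feature extractors.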


Author(s):  
V. Deepika ◽  
T. Rajasenbagam

A brain tumor is an uncontrolled growth of abnormal brain tissue that can interfere with normal brain function. Although various methods have been developed for brain tumor classification, tumor detection and multiclass classification remain challenging due to the complex characteristics of the brain tumor. Brain tumor detection and classification are among the most challenging and time-consuming tasks in medical image processing. Magnetic resonance imaging (MRI) is a visual imaging technique that provides information about the soft tissues of the human body and helps identify brain tumors. Proper diagnosis can, to some extent, protect a patient's health. This paper presents a review of various detection and classification methods for brain tumors using image processing techniques.


Author(s):  
Sulharmi Irawan ◽  
Yasir Hasan ◽  
Kennedi Tampubolon

A glass reflection image displays unclear or suboptimal visuals, such as overlapping objects that blend into one another, so the objects in the image, which carry information and should be usable for further research in image processing or computer graphics, cannot be examined. Such overlapping images can be improved by separating out and displaying one of the image objects; a method that can be used for this is contrast limited adaptive histogram equalization (CLAHE). CLAHE can improve the color and appearance of objects that are unclear in the image. In cases such as glass reflection images, the contrast values can be increased to separate or accentuate one of the objects contained in the image using the CLAHE method.
Keywords: Digital Image, Glass Reflection, Contrast, CLAHE, YIQ.
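The mechanism that distinguishes CLAHE from plain histogram equalization is the clip limit, which caps how much any intensity bin can amplify contrast. The numpy sketch below applies this clipped equalization to a single tile; full CLAHE additionally processes the image tile by tile and blends the per-tile mappings by interpolation, which is omitted here. The clip limit and tile values are illustrative.

```python
import numpy as np

def clipped_hist_equalize(tile, clip_limit=4, n_bins=256):
    """Histogram equalization with a clip limit (the core of CLAHE,
    on one tile): histogram counts above clip_limit are trimmed and
    redistributed uniformly, capping contrast amplification before
    the equalization mapping is built from the CDF."""
    hist, _ = np.histogram(tile, bins=n_bins, range=(0, n_bins))
    excess = np.clip(hist - clip_limit, 0, None).sum()
    hist = np.minimum(hist, clip_limit) + excess / n_bins
    cdf = hist.cumsum()
    mapping = np.round((n_bins - 1) * cdf / cdf[-1]).astype(int)
    return mapping[tile.astype(int)]

tile = np.array([[10, 10, 10, 200],
                 [10, 10, 10, 200],
                 [10, 10, 10, 200],
                 [10, 10, 10, 200]], dtype=np.uint8)
out = clipped_hist_equalize(tile)
print(out.min(), out.max())  # amplification capped by the clip limit
```

Without the clip limit, the dominant dark region would be stretched aggressively and would amplify noise; clipping keeps the enhancement local and bounded, which is why CLAHE is preferred for low-contrast regions like glass reflections.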

