A Deep Learning-Based Feature Extraction Model for Brain Tumor Classification

Author(s):  
Astha Jain ◽  
Manish Pandey ◽  
Santosh Sahu


Complexity ◽
2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Nan Zhao

In the treatment of children with autism spectrum disorder (ASD) through music perception, the perception effect and the progression of the disorder are mainly reflected in fluctuations of the electroencephalogram (EEG), which is clinically informative about brain activity. EEG interpretation, however, suffers from inaccuracy, and deep learning has great advantages in signal feature extraction and classification. Building on the theory of the Deep Belief Network (DBN), this paper proposes a method that combines an optimized Restricted Boltzmann Machine (RBM) feature extraction model with a softmax classification algorithm. Brain-wave tracking analysis is performed on children with autism who have received different music perception treatments, improving classification accuracy so that the condition can be judged accurately. Through continuous adjustment and optimization of the weight matrix in the model, a stable recognition model is obtained. Simulation results show that this optimization algorithm effectively improves the recognition performance of the DBN, reaching an accuracy of 94% under the test conditions, and classifies better than other traditional methods.
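The RBM-plus-softmax pipeline described above can be sketched in a few dozen lines. The following is a minimal illustration, not the authors' implementation: one-step contrastive divergence (CD-1) trains the RBM without labels, and a softmax readout is then fit on the hidden representation. Layer sizes and the toy two-class "EEG feature" data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    def __init__(self, n_visible, n_hidden, lr=0.1):
        self.W = rng.normal(0, 0.01, (n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)   # visible bias
        self.b_h = np.zeros(n_hidden)    # hidden bias
        self.lr = lr

    def hidden_probs(self, v):
        return sigmoid(v @ self.W + self.b_h)

    def cd1_step(self, v0):
        # Positive phase: sample hidden units given the data.
        h0 = self.hidden_probs(v0)
        h0_sample = (rng.random(h0.shape) < h0).astype(float)
        # Negative phase: one Gibbs step gives the reconstruction.
        v1 = sigmoid(h0_sample @ self.W.T + self.b_v)
        h1 = self.hidden_probs(v1)
        # Contrastive-divergence updates.
        self.W += self.lr * (v0.T @ h0 - v1.T @ h1) / len(v0)
        self.b_v += self.lr * (v0 - v1).mean(axis=0)
        self.b_h += self.lr * (h0 - h1).mean(axis=0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Toy two-class data standing in for preprocessed EEG features.
X = np.vstack([rng.random((50, 16)) * 0.3, rng.random((50, 16)) * 0.3 + 0.7])
y = np.array([0] * 50 + [1] * 50)

rbm = RBM(n_visible=16, n_hidden=8)
for _ in range(200):
    rbm.cd1_step(X)                      # unsupervised pre-training

# Softmax classifier on the RBM's hidden representation.
H = rbm.hidden_probs(X)
W_out = np.zeros((8, 2))
for _ in range(500):
    p = softmax(H @ W_out)
    W_out -= 0.5 * H.T @ (p - np.eye(2)[y]) / len(y)

acc = np.mean(softmax(H @ W_out).argmax(axis=1) == y)
print(f"training accuracy: {acc:.2f}")
```

A full DBN stacks several such RBMs, each trained on the hidden activities of the previous one, before the supervised softmax layer is attached.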


Author(s):  
Adem Assfaw Mekonnen ◽  
Hussien Worku Seid ◽  
Sudhir Kumar Mohapatra ◽  
Srinivas Prasad

Timely diagnosis of brain tumors plays a great role in the pretreatment of patients and can save sufferers' lives. Manual classification of brain tumors is a difficult task for radiologists because of the intensity-variation pixel information produced by the magnetic resonance machine, and it is very tedious for a large number of images. Deep learning algorithms have become popular for overcoming the problems of traditional machine learning algorithms by automatically extracting features from the input space and accurately detecting brain tumors. One of the most important features of deep learning is transfer learning, a knowledge-transfer strategy that makes small datasets usable. Transfer learning is explored here by freezing layers and fine-tuning a pre-trained model into the proposed convolutional neural network model. The proposed model is trained on a dataset of 4000 real magnetic resonance images. Its mean accuracy for brain tumor classification is found to be 98% with a mini-batch size of 32 and a learning rate of 0.001.
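The freeze-and-fine-tune idea can be shown without any deep learning framework. In this hedged sketch the "pre-trained" layer is simply a random matrix standing in for weights learned on a large source dataset; only the unfrozen classification head receives gradient updates, which is the mechanism (not the paper's actual network) being illustrated.

```python
import numpy as np

rng = np.random.default_rng(1)

# "Pre-trained" feature extractor: W1 is frozen during fine-tuning.
W1 = rng.normal(0, 1.0, (10, 6))          # frozen layer (no updates)
W2 = np.zeros((6, 2))                     # trainable classification head

def features(x):
    return np.tanh(x @ W1)                # frozen forward pass

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Toy target-domain data (small, as transfer learning assumes).
X = np.vstack([rng.normal(-1, 0.5, (40, 10)), rng.normal(1, 0.5, (40, 10))])
y = np.array([0] * 40 + [1] * 40)

W1_before = W1.copy()
for _ in range(300):
    F = features(X)
    p = softmax(F @ W2)
    W2 -= 0.5 * F.T @ (p - np.eye(2)[y]) / len(y)   # update head only
    # W1 is frozen: no gradient step is applied to it.

acc = np.mean(softmax(features(X) @ W2).argmax(axis=1) == y)
print(f"fine-tuned head accuracy: {acc:.2f}")
```

In a real framework the same effect is obtained by marking the pre-trained layers as non-trainable before compiling the model; unfreezing a few top layers with a small learning rate is the usual next refinement.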


Diagnostics ◽  
2020 ◽  
Vol 10 (8) ◽  
pp. 565 ◽  
Author(s):  
Muhammad Attique Khan ◽  
Imran Ashraf ◽  
Majed Alhaisoni ◽  
Robertas Damaševičius ◽  
Rafal Scherer ◽  
...  

Manual identification of brain tumors is an error-prone and tedious process for radiologists; therefore, it is crucial to adopt an automated system. Binary classification, such as malignant versus benign, is relatively trivial, whereas multimodal brain tumor classification (T1, T2, T1CE, and FLAIR) is a challenging task for radiologists. Here, we present an automated multimodal classification method using deep learning for brain tumor type classification. The proposed method consists of five core steps. In the first step, linear contrast stretching is applied using edge-based histogram equalization and the discrete cosine transform (DCT). In the second step, deep learning feature extraction is performed: using transfer learning, two pre-trained convolutional neural network (CNN) models, VGG16 and VGG19, extract features. In the third step, a correntropy-based joint learning approach is combined with the extreme learning machine (ELM) to select the best features. In the fourth step, partial least squares (PLS)-based robust covariant features are fused into one matrix. Finally, the combined matrix is fed to the ELM for classification. The proposed method was validated on the BraTS datasets, achieving accuracies of 97.8%, 96.9%, and 92.5% on BraTS2015, BraTS2017, and BraTS2018, respectively.
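The ELM used as the final classifier above is notable for having no iterative training: input-to-hidden weights are random and fixed, and the output weights are solved in closed form by a least-squares pseudo-inverse. A minimal sketch follows; the feature dimensions and three-class toy data are stand-ins for the fused VGG16/VGG19 feature matrix, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(2)

def elm_fit(X, Y, n_hidden=64):
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights, never trained
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta = np.linalg.pinv(H) @ Y                  # output weights in closed form
    return W, b, beta

def elm_predict(X, W, b, beta):
    return (np.tanh(X @ W + b) @ beta).argmax(axis=1)

# Toy 3-class "fused feature" matrix.
X = np.vstack([rng.normal(m, 0.4, (40, 20)) for m in (-1.0, 0.0, 1.0)])
y = np.repeat([0, 1, 2], 40)
Y = np.eye(3)[y]                                  # one-hot targets

W, b, beta = elm_fit(X, Y)
acc = np.mean(elm_predict(X, W, b, beta) == y)
print(f"ELM training accuracy: {acc:.2f}")
```

The absence of backpropagation is what makes the ELM attractive as a fast back-end classifier once deep features have already been extracted.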


2020 ◽  
Vol 39 (4) ◽  
pp. 5699-5711
Author(s):  
Shirong Long ◽  
Xuekong Zhao

The smart teaching mode overcomes shortcomings of traditional online and offline teaching, but it still falls short in the real-time extraction of features describing teachers and students. In view of this, this study uses particle swarm image recognition and deep learning technology to process intelligent-classroom video teaching images, extracting classroom task features in real time and sending them to the teacher. To overcome the premature convergence of the standard particle swarm optimization (PSO) algorithm, an improved multi-swarm strategy is proposed: the algorithm is combined with useful attributes of other algorithms to increase particle diversity, enhance the particles' global search ability, and achieve effective feature extraction. The research indicates that the proposed method has practical effect and can serve as a theoretical reference for subsequent related research.
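For reference, the standard PSO baseline whose premature convergence the paper targets looks as follows; this sketch minimizes the sphere function and does not reproduce the proposed multi-swarm improvement. The inertia and acceleration coefficients are common textbook choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

def sphere(x):
    return np.sum(x ** 2, axis=-1)        # toy objective, minimum 0 at the origin

n_particles, dim = 30, 5
pos = rng.uniform(-5, 5, (n_particles, dim))
vel = np.zeros_like(pos)
pbest = pos.copy()                        # per-particle best positions
pbest_val = sphere(pbest)
gbest = pbest[pbest_val.argmin()].copy()  # global best position

w, c1, c2 = 0.7, 1.5, 1.5                 # inertia and acceleration coefficients
for _ in range(200):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    vals = sphere(pos)
    improved = vals < pbest_val
    pbest[improved] = pos[improved]
    pbest_val[improved] = vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print(f"best value found: {sphere(gbest):.6f}")
```

Premature convergence arises when all particles collapse onto an early `gbest`; multi-swarm variants keep several sub-swarms with separate bests, exchanging information only occasionally to preserve diversity.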


2020 ◽  
Author(s):  
Anusha Ampavathi ◽  
Vijaya Saradhi T

Big data and its approaches are broadly helpful to the healthcare and biomedical sectors for disease prediction. Even for trivial symptoms, it is difficult to meet a doctor at the hospital at any time; big data therefore provides essential information about diseases on the basis of a patient's symptoms. For many medical organizations, disease prediction is important for making the best feasible healthcare decisions. Conversely, the conventional medical care model offers structured input, which demands more accurate and consistent prediction. This paper develops multi-disease prediction using an improved deep learning concept. Datasets for diabetes, hepatitis, lung cancer, liver tumor, heart disease, Parkinson's disease, and Alzheimer's disease are gathered from the benchmark UCI repository for the experiments. The proposed model involves three phases: (a) data normalization, (b) weighted normalized feature extraction, and (c) prediction. First, each dataset is normalized to bring the attributes into a common range. Next, weighted feature extraction is performed, in which a weight function is multiplied with each attribute value to enlarge the scale deviation. The weight function is optimized using a combination of two meta-heuristic algorithms, termed the Jaya Algorithm-based Multi-Verse Optimization (JA-MVO) algorithm. The optimally extracted features are then fed to hybrid deep learning algorithms, the Deep Belief Network (DBN) and Recurrent Neural Network (RNN). As a modification to the hybrid deep learning architecture, the weights of both the DBN and the RNN are optimized using the same hybrid optimization algorithm. A comparative evaluation of the proposed prediction against existing models certifies its effectiveness across various performance measures.
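The normalize-then-weight idea can be sketched with a plain Jaya optimizer tuning the attribute weights. The paper hybridizes Jaya with Multi-Verse Optimization; only the basic Jaya update (move each candidate toward the best solution and away from the worst) is shown here, and the fitness function, a simple weighted class-separation score, is an illustrative assumption rather than the authors'.

```python
import numpy as np

rng = np.random.default_rng(4)

def min_max_normalize(X):
    lo, hi = X.min(axis=0), X.max(axis=0)
    return (X - lo) / (hi - lo + 1e-12)

def fitness(w, X, y):
    # Negative distance between weighted class means (Jaya minimizes).
    Xw = X * w
    return -np.linalg.norm(Xw[y == 0].mean(axis=0) - Xw[y == 1].mean(axis=0))

# Toy two-class data with 8 attributes.
X = np.vstack([rng.normal(0.2, 0.1, (30, 8)), rng.normal(0.8, 0.1, (30, 8))])
y = np.array([0] * 30 + [1] * 30)
Xn = min_max_normalize(X)

# Jaya: x' = x + r1 * (best - |x|) - r2 * (worst - |x|), greedy acceptance.
pop = rng.uniform(0, 1, (20, 8))          # candidate weight vectors
for _ in range(100):
    scores = np.array([fitness(w, Xn, y) for w in pop])
    best, worst = pop[scores.argmin()], pop[scores.argmax()]
    r1, r2 = rng.random((2, *pop.shape))
    cand = np.clip(pop + r1 * (best - np.abs(pop)) - r2 * (worst - np.abs(pop)), 0, 1)
    cand_scores = np.array([fitness(w, Xn, y) for w in cand])
    keep = cand_scores < scores           # keep a candidate only if it improves
    pop[keep] = cand[keep]

best_w = pop[np.array([fitness(w, Xn, y) for w in pop]).argmin()]
print("optimized feature weights:", np.round(best_w, 2))
```

Jaya is attractive as a building block here because, unlike PSO, it has no algorithm-specific control parameters to tune beyond population size and iteration count.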


2021 ◽  
Vol 13 (8) ◽  
pp. 1602
Author(s):  
Qiaoqiao Sun ◽  
Xuefeng Liu ◽  
Salah Bourennane

Deep learning models have strong feature-learning abilities and have been successfully applied to hyperspectral images (HSIs). However, training most deep learning models requires labeled samples, and collecting labeled samples is labor-intensive for HSIs. In addition, only single-level features from a single layer are usually considered, which may lose important information. Using multiple networks to obtain multi-level features is one solution, but at the cost of longer training time and higher computational complexity. To solve these problems, this paper proposes a novel unsupervised multi-level feature extraction framework based on a three-dimensional convolutional autoencoder (3D-CAE). The designed 3D-CAE is stacked entirely from 3D convolutional and 3D deconvolutional layers, which allows the spectral-spatial information of targets to be mined simultaneously. Moreover, the 3D-CAE can be trained in an unsupervised way without labeled samples, and the multi-level features are obtained directly from encoder layers with different scales and resolutions, which is more efficient than using multiple networks. The effectiveness of the proposed multi-level features is verified on two hyperspectral datasets. The results demonstrate that the proposed method holds great promise for unsupervised feature learning and can further improve hyperspectral classification compared with single-level features.
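The core idea, train on reconstruction alone and then read features from several encoder depths of a single network, survives drastic simplification. The following is a greatly simplified dense analogue (the paper uses 3D convolutions on hyperspectral cubes): an autoencoder is trained without labels, and features from both encoder layers are concatenated into one multi-level representation. All sizes and the low-rank toy data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

d_in, d1, d2 = 12, 8, 4                    # input dim and two encoder widths
W1 = rng.normal(0, 0.3, (d_in, d1))
W2 = rng.normal(0, 0.3, (d1, d2))
W3 = rng.normal(0, 0.3, (d2, d1))          # decoder mirrors the encoder
W4 = rng.normal(0, 0.3, (d1, d_in))

def forward(X):
    h1 = np.tanh(X @ W1)                   # level-1 (fine) features
    h2 = np.tanh(h1 @ W2)                  # level-2 (coarse) features
    out = np.tanh(h2 @ W3) @ W4            # reconstruction
    return h1, h2, out

# Toy low-rank data so the 4-unit bottleneck can reconstruct it.
X = rng.normal(0, 1, (200, 4)) @ rng.normal(0, 0.5, (4, d_in))

_, _, out0 = forward(X)
mse_before = np.mean((out0 - X) ** 2)

lr = 0.01
for _ in range(500):
    h1, h2, out = forward(X)
    err = out - X                          # unsupervised objective: reconstruction only
    d = np.tanh(h2 @ W3)
    g4 = d.T @ err
    g_h3 = (err @ W4.T) * (1 - d ** 2)
    g3 = h2.T @ g_h3
    g_h2 = (g_h3 @ W3.T) * (1 - h2 ** 2)
    g2 = h1.T @ g_h2
    g_h1 = (g_h2 @ W2.T) * (1 - h1 ** 2)
    g1 = X.T @ g_h1
    for W, g in ((W1, g1), (W2, g2), (W3, g3), (W4, g4)):
        W -= lr * g / len(X)               # plain gradient descent

h1, h2, out = forward(X)
mse_after = np.mean((out - X) ** 2)
multi_level = np.hstack([h1, h2])          # fused multi-level feature vector
print("multi-level feature shape:", multi_level.shape)
```

In the 3D-CAE the same readout happens across encoder stages of different spatial scale and spectral resolution, so one trained network replaces several single-level feature extractors.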

