Convolutional Neural Network Integrated With Fuzzy Rules for Decision Making in Brain Tumor Diagnosis

Author(s):  
Pham Van Hai ◽  
Samson Eloanyi Amaechi

Conventional methods used in brain tumor detection, diagnosis, and classification, such as magnetic resonance imaging and computed tomography scanning, leave gaps in their results. This paper presents a proposed model that combines convolutional neural networks with fuzzy rules for the detection and classification of medical images, distinguishing healthy brain cells from tumorous brain cells. The model contributes to the automatic classification and detection of medical images for conditions such as brain tumors, heart diseases, breast cancers, HIV, and flu. The experimental results show an overall accuracy of 97.6%, indicating that the proposed method outperforms other current methods in the literature, such as classification of tumors in human brain MRI using wavelets and a support vector machine (94.7%) and deep convolutional neural networks with transfer learning for automated brain image classification (95.0%), for use in detection, diagnosis, and classification decision support with medical imaging.
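
The abstract gives no implementation details; the sketch below is only a minimal illustration of the general idea, assuming a small PyTorch CNN whose softmax score for the tumor class is post-processed by hand-written triangular fuzzy rules. The network shape, class labels, and thresholds are placeholders, not the authors' configuration.

```python
# Minimal sketch (not the authors' implementation): a small CNN scores an MRI
# slice, and simple fuzzy rules over the tumor probability map the score to a
# diagnostic decision. All shapes, labels, and thresholds are assumptions.
import torch
import torch.nn as nn

class BrainCNN(nn.Module):
    def __init__(self, n_classes=2):            # healthy vs. tumor (assumed)
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 56 * 56, n_classes)  # for 224x224 input

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

def triangular(x, a, b, c):
    """Triangular fuzzy membership function on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_decision(tumor_prob):
    """Toy rule base: 'low', 'medium', 'high' tumor evidence."""
    low = triangular(tumor_prob, -0.01, 0.0, 0.5)
    med = triangular(tumor_prob, 0.25, 0.5, 0.75)
    high = triangular(tumor_prob, 0.5, 1.0, 1.01)
    return max(("healthy", low), ("re-examine", med), ("tumor", high),
               key=lambda t: t[1])[0]

model = BrainCNN().eval()
with torch.no_grad():
    probs = torch.softmax(model(torch.randn(1, 1, 224, 224)), dim=1)
print(fuzzy_decision(probs[0, 1].item()))
```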


2020 ◽  
Vol 3 (1) ◽  
pp. 445-454
Author(s):  
Celal Buğra Kaya ◽  
Alperen Yılmaz ◽  
Gizem Nur Uzun ◽  
Zeynep Hilal Kilimci

Pattern classification is concerned with the automatic discovery of regularities in a dataset through the use of various learning techniques, which enables the assignment of objects to a set of categories or classes. This study evaluates deep learning methodologies for the classification of stock patterns. To classify patterns obtained from stock charts, convolutional neural networks (CNNs), recurrent neural networks (RNNs), and long short-term memory networks (LSTMs) are employed. To demonstrate the efficiency of the proposed models in categorizing patterns, a hand-crafted image dataset is constructed from stock charts of the Istanbul Stock Exchange and the NASDAQ Stock Exchange. Experimental results show that convolutional neural networks exhibit superior classification success in recognizing patterns compared to the other deep learning methodologies.
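
The abstract does not describe the network itself; as a rough orientation, a chart-image classifier of this kind could look like the minimal PyTorch sketch below. The image size, channel counts, and number of pattern classes are assumptions, not the study's settings.

```python
# Minimal sketch (assumed architecture, not the study's exact network): a small
# CNN that classifies rendered stock-chart images into pattern classes such as
# "head-and-shoulders" or "double-bottom".
import torch
import torch.nn as nn

class ChartPatternCNN(nn.Module):
    def __init__(self, n_patterns=5):                  # number of classes is assumed
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, n_patterns),
        )

    def forward(self, x):                               # x: (batch, 3, H, W) chart image
        return self.net(x)

logits = ChartPatternCNN()(torch.randn(8, 3, 128, 128))
print(logits.shape)                                     # torch.Size([8, 5])
```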


2019 ◽  
Vol 8 (4) ◽  
pp. 160 ◽  
Author(s):  
Bingxin Liu ◽  
Ying Li ◽  
Guannan Li ◽  
Anling Liu

Spectral characteristics play an important role in the classification of oil films, but the presence of too many bands can lead to information redundancy and reduced classification accuracy. In this study, a classification model that combines spectral-indices-based band selection (SIs) and one-dimensional convolutional neural networks was proposed to realize automatic oil film classification using hyperspectral remote sensing images. For comparison, minimum Redundancy Maximum Relevance (mRMR) was also tested for reducing the number of bands. A support vector machine (SVM), a random forest (RF), and Hu's convolutional neural network (CNN) were trained and tested. The results show that the accuracy of the one-dimensional convolutional neural network (1D CNN) models surpassed that of the other machine learning algorithms such as SVM and RF. The SIs + 1D CNN model could produce a higher-accuracy oil film distribution map in less time than the other models.
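
As a rough illustration of the SIs + 1D CNN idea, the sketch below selects a handful of bands per pixel and feeds the resulting spectral vector to a small 1-D CNN. The band indices, band count, class count, and layer widths are placeholders, not the paper's actual spectral indices or architecture.

```python
# Minimal sketch (assumed shapes, not the paper's exact SIs or network): a few
# spectral bands are kept per pixel and fed to a 1-D CNN that labels the
# pixel's oil-film class.
import torch
import torch.nn as nn

SELECTED_BANDS = [10, 25, 47, 83, 112]          # placeholder indices from a band-selection step

class OilFilm1DCNN(nn.Module):
    def __init__(self, n_bands=len(SELECTED_BANDS), n_classes=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(32, n_classes),
        )

    def forward(self, x):                        # x: (batch, 1, n_bands) reflectance vectors
        return self.net(x)

cube = torch.rand(64, 224)                       # 64 pixels, 224 hyperspectral bands (assumed)
pixels = cube[:, SELECTED_BANDS].unsqueeze(1)    # keep only the selected bands
print(OilFilm1DCNN()(pixels).shape)              # torch.Size([64, 4])
```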


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Yanfei Li ◽  
Xianying Feng ◽  
Yandong Liu ◽  
Xingchang Han

This work investigated apple quality identification and classification from real images containing complicated interference (backgrounds similar to the surface of the apples). It proposes a novel model based on convolutional neural networks (CNNs) aimed at accurate and fast grading of apple quality. Specific, complex, and useful image characteristics for detection and classification were captured by the proposed model. Compared with existing methods, the proposed model could better learn high-order features of two adjacent layers that were not in the same channel but were highly related. The proposed model was trained and validated, with best training and validation accuracies of 99% and 98.98% at the 2,590th and 3,000th steps, respectively. The overall accuracy of the proposed model, tested on an independent dataset of 300 apples, was 95.33%. The results show that the training accuracy, overall test accuracy, and training time of the proposed model were better than those of the Google Inception v3 model and a traditional image-processing method based on merged histogram of oriented gradients (HOG) and gray-level co-occurrence matrix (GLCM) features with a support vector machine (SVM) classifier. The proposed model has great potential in apple quality detection and classification.
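
For context on the hand-crafted baseline mentioned in the comparison (HOG + GLCM features with an SVM), the sketch below shows what such a pipeline typically looks like using scikit-image and scikit-learn (scikit-image ≥ 0.19 is assumed for the graycomatrix spelling). The feature parameters, image sizes, and labels are assumptions, not the authors' settings.

```python
# Minimal sketch of a hand-crafted baseline of the kind the abstract compares
# against (HOG + GLCM texture features + SVM); parameters are assumptions.
import numpy as np
from skimage.feature import hog, graycomatrix, graycoprops
from sklearn.svm import SVC

def handcrafted_features(gray_img):
    """gray_img: 2-D uint8 array of an apple image (assumed already cropped)."""
    hog_vec = hog(gray_img, orientations=9, pixels_per_cell=(16, 16),
                  cells_per_block=(2, 2))
    glcm = graycomatrix(gray_img, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    texture = [graycoprops(glcm, p)[0, 0]
               for p in ("contrast", "homogeneity", "energy", "correlation")]
    return np.concatenate([hog_vec, texture])

# Toy data: random arrays standing in for graded apple photos.
rng = np.random.default_rng(0)
X = np.stack([handcrafted_features(rng.integers(0, 256, (64, 64), dtype=np.uint8))
              for _ in range(20)])
y = rng.integers(0, 2, 20)                       # two quality grades (assumed)
clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict(X[:3]))
```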


Author(s):  
R. Niessner ◽  
H. Schilling ◽  
B. Jutzi

In recent years, there has been significant improvement in the detection, identification, and classification of objects and images using Convolutional Neural Networks. To study their potential, this paper investigates three approaches for training classifiers based on Convolutional Neural Networks. These approaches allow Convolutional Neural Networks to be trained on datasets containing only a few hundred training samples and still achieve successful classification. Two of the approaches are based on the concept of transfer learning. In the first approach, features created by a pretrained Convolutional Neural Network are used for classification with a support vector machine. In the second approach, a pretrained Convolutional Neural Network is fine-tuned on a different dataset. The third approach covers the design and training of flat Convolutional Neural Networks from scratch. The evaluation of the proposed approaches is based on a dataset provided by the IEEE Geoscience and Remote Sensing Society (GRSS), which contains RGB and LiDAR data of an urban area. This work shows that these Convolutional Neural Networks lead to classification results with high accuracy on both RGB and LiDAR data. Features derived from RGB data and transferred to LiDAR data by transfer learning lead to better classification results than features from RGB data alone. Using a neural network that contains fewer layers than common neural networks leads to the best classification results. In this framework, it is furthermore shown that the practical application of LiDAR images provides a better data basis for the classification of vehicles than the use of RGB images.
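
The first approach (a pretrained CNN as a fixed feature extractor feeding an SVM) can be sketched as below. ResNet-18 from torchvision (≥ 0.13 for the weights enum) and the random tensors are stand-ins, not the authors' network or the GRSS benchmark data.

```python
# Minimal sketch of the first transfer-learning approach: freeze a pretrained
# CNN, use its penultimate-layer activations as features, and train an SVM.
import torch
import torch.nn as nn
from torchvision import models
from sklearn.svm import SVC

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = nn.Identity()                      # drop the ImageNet classifier head
backbone.eval()

def extract_features(images):                    # images: (N, 3, 224, 224) tensors
    with torch.no_grad():
        return backbone(images).numpy()          # (N, 512) feature vectors

X_train = extract_features(torch.randn(16, 3, 224, 224))   # placeholder image patches
y_train = torch.randint(0, 3, (16,)).numpy()                # placeholder class labels
svm = SVC(kernel="linear").fit(X_train, y_train)
print(svm.predict(extract_features(torch.randn(4, 3, 224, 224))))
```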


Author(s):  
Fatih Demir ◽  
Abdulkadir Sengur ◽  
Varun Bajaj

Treatment of lung diseases, which are the third most common cause of death in the world, is of great importance in the medical field. Many studies using lung sounds recorded with a stethoscope have been conducted in the literature in order to diagnose lung diseases with artificial intelligence-compatible devices and to assist experts in their diagnoses. In this paper, the ICBHI 2017 database, which includes different sampling frequencies, noise, and background sounds, was used for the classification of lung sounds. The lung sound signals were first converted to spectrogram images using a time–frequency method; the short-time Fourier transform (STFT) was used as the time–frequency transformation. Two deep learning based approaches were used for lung sound classification. In the first approach, a pre-trained deep convolutional neural network (CNN) model was used for feature extraction and a support vector machine (SVM) classifier was used to classify the lung sounds. In the second approach, the pre-trained deep CNN model was fine-tuned (transfer learning) on the spectrogram images for lung sound classification. The accuracies of the proposed methods were evaluated using ten-fold cross-validation. The accuracies for the first and second proposed methods were 65.5% and 63.09%, respectively. These accuracies were then compared with some of the existing results, and the obtained scores were better than the other results.
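
The spectrogram step can be sketched with SciPy as below; the sample rate, STFT window settings, and the synthetic signal are assumptions, not the paper's parameters or the ICBHI recordings.

```python
# Minimal sketch of the STFT spectrogram step (assumed parameters): a lung-sound
# recording is turned into a log-magnitude time-frequency image that a
# pretrained CNN could then consume for feature extraction or fine-tuning.
import numpy as np
from scipy.signal import stft

fs = 4000                                        # sample rate in Hz (assumed)
t = np.arange(0, 5, 1 / fs)
signal = np.sin(2 * np.pi * 200 * t) + 0.1 * np.random.randn(t.size)  # toy "lung sound"

freqs, times, Zxx = stft(signal, fs=fs, nperseg=256, noverlap=128)
spectrogram_db = 20 * np.log10(np.abs(Zxx) + 1e-10)   # log-magnitude spectrogram
print(spectrogram_db.shape)                      # (frequency bins, time frames)
# The spectrogram would then be resized to the CNN's input size (e.g. 224x224),
# replicated across three channels, and fed to the pretrained network.
```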


2020 ◽  
Vol 224 (1) ◽  
pp. 191-198
Author(s):  
Xinliang Liu ◽  
Tao Ren ◽  
Hongfeng Chen ◽  
Yufeng Chen

In this paper, convolutional neural networks (CNNs) were used to distinguish between tectonic and non-tectonic seismicity. The proposed CNN consisted of seven convolutional layers with small kernels and one fully connected layer, and relied only on the acoustic waveform without manual feature extraction. For a single station, the accuracy of the model was 0.90, and the event accuracy reached 0.93. The proposed model was tested using data from January 2019 to August 2019 in China; the event accuracy reached 0.92, showing that the proposed model could distinguish between tectonic and non-tectonic seismicity.
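
Following the stated layout (seven convolutional layers with small kernels plus one fully connected layer), a minimal PyTorch sketch is given below; the channel counts, kernel size, pooling, and waveform window length are assumptions rather than the paper's exact configuration.

```python
# Minimal sketch of a seven-conv-layer, one-FC-layer 1-D CNN over a raw
# waveform window; channel counts and window length are assumptions.
import torch
import torch.nn as nn

class SeismicCNN(nn.Module):
    def __init__(self, n_classes=2):             # tectonic vs. non-tectonic
        super().__init__()
        layers, ch_in = [], 1
        for ch_out in (8, 8, 16, 16, 32, 32, 64):           # seven conv layers
            layers += [nn.Conv1d(ch_in, ch_out, kernel_size=3, padding=1),
                       nn.ReLU(), nn.MaxPool1d(2)]
            ch_in = ch_out
        self.conv = nn.Sequential(*layers)
        self.fc = nn.Linear(64, n_classes)                   # single fully connected layer

    def forward(self, x):                         # x: (batch, 1, samples)
        x = self.conv(x).mean(dim=-1)             # global average over time
        return self.fc(x)

waveforms = torch.randn(4, 1, 4096)               # 4 waveform windows (length assumed)
print(SeismicCNN()(waveforms).shape)              # torch.Size([4, 2])
```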

