Classification of Brain Tumors from MRI Images Using a Convolutional Neural Network

2020
Vol 10 (6)
pp. 1999
Author(s):
Milica M. Badža
Marko Č. Barjaktarović

The classification of brain tumors is performed by biopsy, which is not usually conducted before definitive brain surgery. Advances in technology and machine learning can help radiologists in tumor diagnostics without invasive measures. A machine-learning algorithm that has achieved substantial results in image segmentation and classification is the convolutional neural network (CNN). We present a new CNN architecture for the classification of three brain tumor types. The developed network is simpler than existing pre-trained networks, and it was tested on T1-weighted contrast-enhanced magnetic resonance images. The performance of the network was evaluated using four approaches: combinations of two 10-fold cross-validation methods and two databases. The generalization capability of the network was tested with one of the 10-fold methods, subject-wise cross-validation, and the improvement was tested by using an augmented image database. The best 10-fold cross-validation result was obtained with record-wise cross-validation on the augmented data set; in that case, the accuracy was 96.56%. With good generalization capability and good execution speed, the newly developed CNN architecture could be used as an effective decision-support tool for radiologists in medical diagnostics.
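As a rough illustration of the two 10-fold schemes contrasted above, the sketch below uses scikit-learn's KFold for record-wise splitting and GroupKFold for subject-wise splitting; the image array, labels, and patient IDs are placeholder data, not the study's database.

```python
# Minimal sketch (not the authors' code): record-wise vs. subject-wise
# 10-fold cross-validation over a hypothetical MRI image stack.
import numpy as np
from sklearn.model_selection import KFold, GroupKFold

X = np.random.rand(300, 64, 64, 1)             # placeholder image stack
y = np.random.randint(0, 3, size=300)          # 3 tumor classes
patients = np.random.randint(0, 50, size=300)  # subject ID per image

# Record-wise: images from the same patient may land in both train and test.
for train_idx, test_idx in KFold(n_splits=10, shuffle=True).split(X):
    pass  # train/evaluate the CNN here

# Subject-wise: all images of a patient stay in the same fold,
# the stricter test of generalization described above.
for train_idx, test_idx in GroupKFold(n_splits=10).split(X, y, groups=patients):
    pass  # train/evaluate the CNN here
```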

Author(s):
Abdul Kholik
Agus Harjoko
Wahyono Wahyono

The volume density of vehicles is a problem that occurs in every city, and its main impact is congestion. Classification of vehicle density levels on particular roads is required because there are at least 7 distinct density-level conditions. Monitoring by the police, the Department of Transportation, and road operators currently relies on video-based surveillance such as CCTV, which is still watched manually by people. Deep learning is an approach to artificial neural network-based machine learning that has been actively developed and researched recently because it has delivered good results in solving various soft-computing problems. This research uses a convolutional neural network architecture and experiments with its supporting parameters to calibrate for maximum accuracy. After the parameter experiments, the classification model was evaluated using K-fold cross-validation, a confusion matrix, and a model test on held-out data. The K-fold cross-validation test gave an average accuracy of 92.83% with K (fold) = 5; in the model test on 100 held-out samples, the model classified 81 of them correctly.
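A minimal sketch of the evaluation pipeline described above (K = 5 cross-validation plus a confusion matrix), with a simple scikit-learn classifier standing in for the paper's CNN and randomly generated features in place of the CCTV data:

```python
# Minimal sketch (assumptions, not the paper's code): 5-fold cross-validation
# and a confusion matrix for a vehicle-density classifier.
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score, cross_val_predict
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix

X = np.random.rand(500, 128)               # placeholder traffic-image features
y = np.random.randint(0, 7, size=500)      # 7 density-level classes

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
clf = LogisticRegression(max_iter=1000)    # stand-in for the CNN

scores = cross_val_score(clf, X, y, cv=cv)
print("mean CV accuracy:", scores.mean())

y_pred = cross_val_predict(clf, X, y, cv=cv)
print(confusion_matrix(y, y_pred))         # rows: true class, cols: predicted
```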


Author(s):  
Dr. Abhay E Wagh

Abstract: Nowadays, with the rapid growth of digital content, automatic classification of images is one of the most challenging tasks in computing. Automated understanding and analysis of images by a machine is difficult compared with human vision. Several studies have been carried out to overcome the problems of existing classification systems, but their output was limited to low-level image primitives, and such approaches lack accurate classification of images. This system applies deep learning to achieve the desired results in this area. Our framework uses a Convolutional Neural Network (CNN), a machine learning algorithm, for automatic classification of images. The system uses the MNIST digit data set as a benchmark for the classification of gray-scale images; training on these gray-scale images requires more computational power. Using the CNN, the result is approximately 98% accuracy, and our model achieves high precision in grouping the images. Keywords: Convolutional Neural Network (CNN), deep learning, MNIST, machine learning.
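For illustration only, a small Keras CNN of the kind that typically reaches about 98% test accuracy on the MNIST digits; the layer sizes are assumptions and not the architecture reported in the paper.

```python
# Minimal sketch: a small CNN trained on MNIST gray-scale digits.
import tensorflow as tf
from tensorflow.keras import layers, models

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None] / 255.0   # add channel dim, scale to [0, 1]
x_test = x_test[..., None] / 255.0

model = models.Sequential([
    layers.Conv2D(32, 3, activation="relu", input_shape=(28, 28, 1)),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=3, validation_data=(x_test, y_test))
```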


Symmetry
2019
Vol 11 (2)
pp. 256
Author(s):
Jiangyong An
Wanyi Li
Maosong Li
Sanrong Cui
Huanran Yue

Drought stress seriously affects crop growth, development, and grain production. Existing machine learning methods have achieved great progress in drought stress detection and diagnosis. However, such methods are based on a hand-crafted feature extraction process, and the accuracy has much room to improve. In this paper, we propose the use of a deep convolutional neural network (DCNN) to identify and classify maize drought stress. Field drought stress experiments were conducted in 2014. The experiment was divided into three treatments: optimum moisture, light drought, and moderate drought stress. Maize images were obtained every two hours throughout the day by digital cameras. In order to compare the accuracy of the DCNN, a comparative experiment was conducted using traditional machine learning on the same dataset. The experimental results demonstrated an impressive performance of the proposed method. For the total dataset, the accuracy of the identification and classification of drought stress was 98.14% and 95.95%, respectively. High accuracy was also achieved on the sub-datasets of the seedling and jointing stages. The identification and classification accuracy levels of the color images were higher than those of the gray images. Furthermore, the comparison experiments on the same dataset demonstrated that the DCNN achieved better performance than the traditional machine learning method (gradient boosting decision tree, GBDT). Overall, our proposed deep learning-based approach is a very promising method for field maize drought identification and classification based on digital images.
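A hedged sketch of the traditional baseline named above: a gradient boosting decision tree trained on hand-crafted features. The color-histogram features and placeholder maize images are assumptions for illustration, not the paper's feature set.

```python
# Minimal sketch of a GBDT baseline on hand-crafted image features.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

def color_histogram(img, bins=16):
    """Concatenate per-channel histograms as a simple hand-crafted feature."""
    return np.concatenate([np.histogram(img[..., c], bins=bins, range=(0, 255))[0]
                           for c in range(img.shape[-1])])

images = np.random.randint(0, 256, size=(300, 64, 64, 3))  # placeholder maize images
labels = np.random.randint(0, 3, size=300)                  # 3 drought treatments

X = np.stack([color_histogram(im) for im in images])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.2, random_state=0)

gbdt = GradientBoostingClassifier()
gbdt.fit(X_tr, y_tr)
print("GBDT test accuracy:", gbdt.score(X_te, y_te))
```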


2019
Vol 23 (1)
pp. 67-77
Author(s):
Yao Yevenyo Ziggah
Hu Youjian
Alfonso Rodrigo Tierra
Prosper Basommi Laari

The popularity of Artificial Neural Network (ANN) methodology has been growing in a wide variety of areas in geodesy and the geospatial sciences. Its ability to perform coordinate transformation between different datums has been well documented in the literature. In applications of ANN methods to coordinate transformation, only the train-test (hold-out cross-validation) approach has usually been used to evaluate performance. Here, the data set is divided into two disjoint subsets: training (model building) and testing (model validation). However, one major drawback of the hold-out cross-validation procedure is inappropriate data partitioning; an improper split of the data can lead to high variance and bias in the results. Moreover, in a sparse-data situation, hold-out cross-validation is not suitable. For these reasons, the K-fold cross-validation approach has been recommended. Consequently, this study, for the first time, explored the potential of the K-fold cross-validation method for assessing the performance of a radial basis function neural network (RBFNN) and the Bursa-Wolf model under a data-insufficient situation in the Ghana geodetic reference network. The statistical analysis of the results revealed that incorrect data partitioning could lead to false reporting of the predictive performance of the transformation model. The findings showed that the RBFNN and the Bursa-Wolf model produced transformation accuracies of 0.229 m and 0.469 m, respectively. It was also realised that maximum horizontal errors of 0.881 m and 2.131 m were given by the RBFNN and the Bursa-Wolf model, respectively. The obtained results are applicable per the cadastral surveying and plan production requirements set by the Ghana Survey and Mapping Division. This study will contribute to the use of the K-fold cross-validation approach in developing countries with sparse-data situations similar to Ghana's, as well as in the geodetic sciences, where ANN users seldom apply statistical resampling techniques.
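A minimal sketch of K-fold assessment of a coordinate transformation model; a least-squares affine transformation stands in for the RBFNN and Bursa-Wolf models, and the point sets are synthetic placeholders rather than the Ghana reference network data.

```python
# Minimal sketch (assumptions throughout): 10-fold assessment of a
# coordinate transformation, reporting horizontal error per test point.
import numpy as np
from sklearn.model_selection import KFold

src = np.random.rand(40, 2) * 1000   # source-datum coordinates (E, N)
dst = src @ np.array([[1.0, 1e-5], [-1e-5, 1.0]]) + np.array([200.0, -150.0])

errors = []
for tr, te in KFold(n_splits=10, shuffle=True, random_state=0).split(src):
    # Solve dst ≈ [src, 1] @ params by least squares on the training fold.
    A = np.hstack([src[tr], np.ones((len(tr), 1))])
    params, *_ = np.linalg.lstsq(A, dst[tr], rcond=None)
    pred = np.hstack([src[te], np.ones((len(te), 1))]) @ params
    errors.append(np.linalg.norm(pred - dst[te], axis=1))  # horizontal error

errors = np.concatenate(errors)
print("mean horizontal error:", errors.mean(), "max:", errors.max())
```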


2019
Vol 2019
pp. 1-9
Author(s):
Yinjie Xie
Wenxin Dai
Zhenxin Hu
Yijing Liu
Chuan Li
...

Among the many improved convolutional neural network (CNN) architectures used for optical image classification, only a few have been applied to synthetic aperture radar (SAR) automatic target recognition (ATR). One main reason is that directly transferring these advanced architectures from optical to SAR images easily leads to overfitting, because SAR data sets are limited and carry fewer features than optical images. Thus, based on the characteristics of the SAR image, we propose a novel deep convolutional neural network architecture named umbrella. Its framework consists of two alternating CNN-layer blocks. One block is a fusion of six 3-layer paths, which is used to extract diverse-level features from different convolution layers. The other block is composed of convolution and pooling layers, which are mainly used to reduce dimensions and extract hierarchical feature information. The combination of the two blocks can extract rich features at different spatial scales while alleviating overfitting. The performance of the umbrella model was validated on the Moving and Stationary Target Acquisition and Recognition (MSTAR) benchmark data set. The architecture achieved higher than 99% accuracy for the classification of 10-class targets and higher than 96% accuracy for the classification of 8 variants of the T72 tank, even when targets are located at diverse positions. The accuracy of our umbrella is superior to the current networks applied to the classification of MSTAR. The results show that the umbrella architecture possesses very robust generalization capability and strong potential for SAR ATR.
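An illustrative Keras sketch of a multi-path fusion block of the kind described above (six 3-layer convolutional paths merged by concatenation, followed by a dimension-reduction block); the filter counts, kernel sizes, and input size are assumptions, not the published umbrella settings.

```python
# Minimal sketch of a six-path fusion block followed by a reduction block.
import tensorflow as tf
from tensorflow.keras import layers

def fusion_block(x, n_paths=6, filters=16):
    """Six parallel 3-layer convolutional paths fused by concatenation."""
    paths = []
    for _ in range(n_paths):
        p = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
        p = layers.Conv2D(filters, 3, padding="same", activation="relu")(p)
        p = layers.Conv2D(filters, 3, padding="same", activation="relu")(p)
        paths.append(p)
    return layers.Concatenate()(paths)

inputs = tf.keras.Input(shape=(128, 128, 1))         # placeholder SAR chip size
x = fusion_block(inputs)                             # diverse-level feature block
x = layers.Conv2D(32, 3, activation="relu")(x)       # dimension-reduction block
x = layers.MaxPooling2D()(x)
x = layers.GlobalAveragePooling2D()(x)
outputs = layers.Dense(10, activation="softmax")(x)  # 10 MSTAR target classes
model = tf.keras.Model(inputs, outputs)
model.summary()
```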


Sensors
2021
Vol 21 (7)
pp. 2411
Author(s):
Davor Kolar
Dragutin Lisjak
Michał Pająk
Mihael Gudlin

Intelligent fault diagnosis can be related to applications of machine learning theories to machine fault diagnosis. Although there is a large number of successful examples, there is a gap in the optimization of the hyper-parameters of the machine learning model, which ultimately has a major impact on the performance of the model. Machine learning experts are usually required to configure the set of hyper-parameter values manually. This work presents a convolutional neural network based, data-driven intelligent fault diagnosis technique for rotary machinery that uses a model with an optimized network structure and hyper-parameters. The proposed technique feeds the raw three-axis accelerometer signal as high-definition 1-D data into deep learning layers with optimized hyper-parameters; the input consists of a wide 12,800 × 1 × 3 vibration signal matrix. The model learning phase includes Bayesian optimization of the hyper-parameters of the convolutional neural network. Finally, using a Convolutional Neural Network (CNN) model with optimized hyper-parameters, classification into one of 8 different machine states at 2 rotational speeds can be performed. This study accomplished effective classification of different rotary machinery states at different rotational speeds using an optimized convolutional neural network on raw three-axis accelerometer signal input. An overall classification accuracy of 99.94% on the evaluation set was obtained with the 19-layer CNN model. Additionally, more data were collected on the same machine with altered bearings to test the model for overfitting. A classification accuracy of 100% on the second evaluation set was achieved, proving the potential of the proposed technique.
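A hedged sketch of Bayesian hyper-parameter optimization for a 1-D CNN on raw 12,800 × 3 three-axis vibration input, here using the keras_tuner library; the search space, layer counts, and placeholder signals are assumptions rather than the paper's exact setup.

```python
# Minimal sketch: Bayesian search over 1-D CNN hyper-parameters.
import numpy as np
import tensorflow as tf
import keras_tuner as kt

def build_model(hp):
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(12800, 3)),                  # raw 3-axis vibration window
        tf.keras.layers.Conv1D(hp.Int("filters", 16, 64, step=16),
                               hp.Choice("kernel", [16, 32, 64]),
                               strides=8, activation="relu"),
        tf.keras.layers.MaxPooling1D(4),
        tf.keras.layers.Conv1D(64, 3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling1D(),
        tf.keras.layers.Dense(8, activation="softmax"),    # 8 machine states
    ])
    lr = hp.Float("lr", 1e-4, 1e-2, sampling="log")
    model.compile(optimizer=tf.keras.optimizers.Adam(lr),
                  loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    return model

x = np.random.randn(64, 12800, 3).astype("float32")        # placeholder signals
y = np.random.randint(0, 8, size=64)

tuner = kt.BayesianOptimization(build_model, objective="val_accuracy", max_trials=5)
tuner.search(x, y, epochs=2, validation_split=0.2)
best_model = tuner.get_best_models(1)[0]
```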


2021
Vol 8 (2)
pp. 311
Author(s):  
Mohammad Farid Naufal

Weather is an important factor considered in various kinds of decision making. Manual weather classification by humans is time consuming and inconsistent. Computer vision is a branch of science that computers use to recognize or classify images. This can help the development of self-autonomous machines so that they are not dependent on an internet connection and can perform their own calculations in real time. There are several popular image classification algorithms, namely K-Nearest Neighbors (KNN), Support Vector Machine (SVM), and Convolutional Neural Network (CNN). KNN and SVM are machine learning classification algorithms, while CNN is a deep neural network classification algorithm. This study aims to compare the performance of these three algorithms so that the performance gap between them is known. The test architecture uses 5-fold cross-validation, and several parameters are used to configure the KNN, SVM, and CNN algorithms. From the tests conducted, CNN has the best performance, with an accuracy of 0.942, precision of 0.943, recall of 0.942, and F1 score of 0.942.
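A minimal sketch of the 5-fold comparison between KNN and SVM described above, using scikit-learn; the feature matrix, class count, and parameter choices are placeholders (the CNN branch would be trained separately on the raw weather images).

```python
# Minimal sketch: 5-fold cross-validated comparison of KNN and SVM.
import numpy as np
from sklearn.model_selection import cross_val_score, StratifiedKFold
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

X = np.random.rand(600, 256)            # placeholder weather-image features
y = np.random.randint(0, 4, size=600)   # placeholder weather classes

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for name, clf in [("KNN", KNeighborsClassifier(n_neighbors=5)),
                  ("SVM", SVC(kernel="rbf", C=1.0))]:
    scores = cross_val_score(clf, X, y, cv=cv)
    print(name, "mean accuracy:", scores.mean())
```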


Author(s):  
Denis Sato
Adroaldo José Zanella
Ernane Xavier Costa

Vehicle-animal collisions represent a serious problem in roadway infrastructure. To avoid these collisions, different mitigation systems have been applied in various regions of the world. In this article, a system for detecting animals on highways is presented using computer vision and machine learning algorithms. The models were trained to classify two groups of animals: capybaras and donkeys. Two variants of the convolutional neural network called Yolo (You Only Look Once) were used: Yolov4 and Yolov4-tiny (a lighter version of the network). The training was carried out using pre-trained models. Detection tests were performed on 147 images. The accuracies obtained were 84.87% and 79.87% for Yolov4 and Yolov4-tiny, respectively. The proposed system has the potential to improve road safety by reducing or preventing accidents with animals.
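An inference-only sketch of running a trained Yolov4-tiny detector with OpenCV's dnn module; the config, weights, and image file names are hypothetical.

```python
# Minimal sketch: Yolov4-tiny inference on a highway frame with OpenCV.
import cv2

# Paths to the Darknet config/weights produced by training are assumptions.
net = cv2.dnn.readNetFromDarknet("yolov4-tiny-animals.cfg",
                                 "yolov4-tiny-animals.weights")
model = cv2.dnn_DetectionModel(net)
model.setInputParams(size=(416, 416), scale=1 / 255.0, swapRB=True)

image = cv2.imread("highway_frame.jpg")   # hypothetical test image
class_ids, scores, boxes = model.detect(image, confThreshold=0.5, nmsThreshold=0.4)

labels = ["capybara", "donkey"]           # the two trained classes
for cid, score, box in zip(class_ids, scores, boxes):
    print(labels[int(cid)], float(score), box)   # box = (x, y, w, h)
```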


Diagnostics
2021
Vol 11 (9)
pp. 1554
Author(s):
Philippe Germain
Armine Vardazaryan
Nicolas Padoy
Aissam Labani
Catherine Roy
...

The automatic classification of various types of cardiomyopathies is desirable but has never been performed using a convolutional neural network (CNN). The purpose of this study was to evaluate currently available CNN models for classifying cine magnetic resonance (cine-MR) images of cardiomyopathies. Method: Diastolic and systolic frames of 1200 cine-MR sequences of three categories of subjects (395 normal, 411 hypertrophic cardiomyopathy, and 394 dilated cardiomyopathy) were selected, preprocessed, and labeled. Pretrained, fine-tuned deep learning models (VGG) were used for image classification (sixfold cross-validation and double split testing with hold-out data). The heat activation map algorithm (Grad-CAM) was applied to reveal the salient pixel areas leading to the classification. Results: The diastolic–systolic dual-input concatenated VGG model achieved a cross-validation accuracy of 0.982 ± 0.009. Summed confusion matrices showed that, for the 1200 inputs, the VGG model made 22 errors. The classification of a 227-input validation group, carried out by an experienced radiologist and cardiologist, led to a similar number of discrepancies. The image preparation process led to a 5% accuracy improvement compared with nonprepared images. Grad-CAM heat activation maps showed that most misclassifications occurred when an extracardiac location caught the attention of the network. Conclusions: CNNs are very well suited and are 98% accurate for the classification of cardiomyopathies, regardless of the imaging plane, when both diastolic and systolic frames are incorporated. Misclassification is in the same range as inter-observer discrepancies among experienced human readers.
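An illustrative Keras sketch of a diastolic–systolic dual-input VGG classifier: one VGG16 backbone applied to both frames, with the pooled features concatenated before a 3-class softmax. Whether the published model shares backbone weights, the input size, and the head layers are assumptions.

```python
# Minimal sketch of a dual-input concatenated VGG classifier.
import tensorflow as tf
from tensorflow.keras import layers

# Shared VGG16 backbone (ImageNet weights); frames are assumed resized to
# 224 x 224 and replicated to 3 channels.
base = tf.keras.applications.VGG16(include_top=False, weights="imagenet",
                                   input_shape=(224, 224, 3))

inp_dia = tf.keras.Input(shape=(224, 224, 3), name="diastolic")
inp_sys = tf.keras.Input(shape=(224, 224, 3), name="systolic")

feat_dia = layers.GlobalAveragePooling2D()(base(inp_dia))
feat_sys = layers.GlobalAveragePooling2D()(base(inp_sys))

x = layers.Concatenate()([feat_dia, feat_sys])   # dual-input concatenation
x = layers.Dense(256, activation="relu")(x)
out = layers.Dense(3, activation="softmax")(x)   # normal / hypertrophic / dilated

model = tf.keras.Model([inp_dia, inp_sys], out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```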

