Diagnosis of Brain Tumor Using Nano Segmentation and Advanced-CNN Classification

2020 ◽  
Author(s):  
DEEPA P V ◽  
Joseph Jawhar S ◽  
Mary Geisa

Abstract. Background: In recent years, nanotechnology has gained popularity because it improves detection accuracy and clinical effectiveness when combined with Computer-Aided Diagnosis (CAD). Nano-scale imaging technology enables highly precise detection and classification of brain tumors as benign or malignant, which helps to improve the quality of life of brain tumor patients. Results: In this work, we propose a novel semantic nano-segmentation method for detecting brain tumors even in the nano-scale range. The proposed semantic nano-segmentation, based on an Advanced Convolutional Neural Network (A-CNN) built on ResNet-50, helps radiologists find brain cancer at early stages, when nodules are still very small. The nano-image is taken as input and the tumor region is segmented using semantic nano-segmentation, yielding average Dice and SSIM values of 0.2133 and 0.9704, respectively. The proposed semantic nano-segmentation achieves an accuracy of 93.2% for benign and 92.7% for malignant tumor images. The A-CNN automatic classification achieves an average accuracy of 99.57% for benign and 95.7% for malignant images. Conclusion: This novel nano-scale method effectively detects the tumor area in nanometers (nm) and thus evaluates the disease accurately. The closeness of the proposed method's ROC curve to the true-positive axis indicates higher performance than other methods. A comparative analysis of ResNet-50 with training/testing splits of 90%-10%, 80%-20% and 70%-30% confirms the effectiveness of the proposed work.
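The Dice and SSIM values quoted above are standard segmentation-quality metrics. As a hedged illustration (not the authors' code), the sketch below shows how they could be computed for a predicted tumor mask against a reference mask, assuming NumPy and scikit-image; the random masks are placeholders for real segmentation output.

```python
# Minimal sketch: Dice overlap and SSIM between a predicted mask and a reference.
# The masks below are random placeholders, not data from the study.
import numpy as np
from skimage.metrics import structural_similarity as ssim

def dice_coefficient(pred, truth, eps=1e-7):
    """Dice = 2*|A ∩ B| / (|A| + |B|) for two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)

pred_mask = np.random.rand(256, 256) > 0.5   # hypothetical A-CNN segmentation
true_mask = np.random.rand(256, 256) > 0.5   # hypothetical ground-truth mask

print("Dice:", dice_coefficient(pred_mask, true_mask))
print("SSIM:", ssim(pred_mask.astype(float), true_mask.astype(float), data_range=1.0))
```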

2019 ◽  
Vol 11 (12) ◽  
pp. 1461 ◽  
Author(s):  
Husam A. H. Al-Najjar ◽  
Bahareh Kalantar ◽  
Biswajeet Pradhan ◽  
Vahideh Saeidi ◽  
Alfian Abdul Halin ◽  
...  

In recent years, remote sensing researchers have investigated the use of different modalities (or combinations of modalities) for classification tasks. Such modalities can be extracted via a diverse range of sensors and images. Currently, few (if any) studies have attempted to increase land cover classification accuracy using unmanned aerial vehicle (UAV)–digital surface model (DSM) fused datasets. Therefore, this study looks at improving the accuracy of such datasets by exploiting convolutional neural networks (CNNs). In this work, we focus on the fusion of DSM and UAV images for land use/land cover mapping via classification into seven classes: bare land, buildings, dense vegetation/trees, grassland, paved roads, shadows, and water bodies. Specifically, we investigated the effectiveness of two datasets to determine whether the fused DSM yields notably better land cover classification. The datasets were: (i) orthomosaic image data only (Red, Green and Blue channels), and (ii) a fusion of the orthomosaic image and DSM data, where the final classification was performed using a CNN. As a classification method, the CNN is promising because of its hierarchical learning structure, regularization and weight sharing with respect to training data, generalization, optimization and parameter reduction, automatic feature extraction, and robust discrimination ability. The experimental results show that a CNN trained on the fused dataset obtains better results, with a Kappa index of ~0.98, an average accuracy of 0.97 and a final overall accuracy of 0.98. Comparing the CNN with DSM against the CNN without DSM revealed improvements of 1.2%, 1.8% and 1.5% in overall accuracy, average accuracy and Kappa index, respectively. Accordingly, adding the heights of features such as buildings and trees improved the differentiation between vegetation classes, particularly where plants were dense.
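The fusion step described above amounts to stacking the DSM as an extra channel alongside the RGB orthomosaic before classification. The sketch below, assuming PyTorch, illustrates that idea with a deliberately small CNN and seven output classes; the layer sizes are illustrative and not the network reported in the study.

```python
# Minimal sketch: fuse RGB orthomosaic and DSM into a 4-channel input for a CNN
# with seven land-cover classes. Layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

class FusedLandCoverCNN(nn.Module):
    def __init__(self, num_classes: int = 7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, rgb: torch.Tensor, dsm: torch.Tensor) -> torch.Tensor:
        # fuse: (B, 3, H, W) + (B, 1, H, W) -> (B, 4, H, W)
        x = torch.cat([rgb, dsm], dim=1)
        x = self.features(x).flatten(1)
        return self.classifier(x)

logits = FusedLandCoverCNN()(torch.rand(2, 3, 64, 64), torch.rand(2, 1, 64, 64))
print(logits.shape)  # torch.Size([2, 7])
```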


Author(s):  
Halima El Hamdaoui ◽  
Anass Benfares ◽  
Saïd Boujraf ◽  
Nour El Houda Chaoui ◽  
Badreddine Alami ◽  
...  

In this article, we propose an intelligent clinical decision support system for the detection and classification of brain tumors from magnetic resonance imaging (MRI). To overcome the lack of labeled training data needed to train convolutional neural networks (CNNs), we used deep transfer learning and stacking concepts. We chose seven CNN architectures already pre-trained on the ImageNet dataset and fine-tuned them on MRI scans of brain tumors collected from the Brain Tumor Segmentation (BraTS) 2019 database. To improve the accuracy of the global model, we output only the prediction that obtained the maximum score among the predictions of the seven pre-trained CNNs. We used a 10-fold cross-validation approach to assess the performance of our main two-class model: low-grade glioma (LGG) versus high-grade glioma (HGG) brain tumors. A comparison of our results with those published in the literature shows that our proposed model is more efficient, with an average test accuracy of 98.67%, an average test F1 score of 98.62%, an average test precision of 98.06% and an average test sensitivity of 98.33%.
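The "maximum score" ensembling described above can be read as: each fine-tuned backbone outputs class probabilities, and the final prediction comes from whichever model is most confident. The sketch below, assuming PyTorch/torchvision, illustrates this with two ResNet stand-ins for the seven pre-trained CNNs and a two-class (LGG/HGG) head; it is an interpretation of the description, not the authors' code.

```python
# Minimal sketch: max-score ensembling over several pre-trained CNN backbones.
# weights=None keeps the sketch offline; the paper fine-tunes ImageNet weights.
import torch
import torch.nn.functional as F
from torchvision import models

def build_head(backbone, num_classes=2):
    backbone.fc = torch.nn.Linear(backbone.fc.in_features, num_classes)
    return backbone

ensemble = [build_head(models.resnet18(weights=None)),
            build_head(models.resnet50(weights=None))]  # stand-ins for the seven CNNs

@torch.no_grad()
def predict_max_score(x: torch.Tensor) -> torch.Tensor:
    probs = torch.stack([F.softmax(m(x), dim=1) for m in ensemble])  # (n_models, B, 2)
    best_model = probs.max(dim=2).values.argmax(dim=0)               # most confident model per sample
    return probs[best_model, torch.arange(x.size(0))].argmax(dim=1)  # its predicted class

print(predict_max_score(torch.rand(4, 3, 224, 224)))
```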


Computation ◽  
2021 ◽  
Vol 9 (3) ◽  
pp. 35
Author(s):  
Hind R. Mohammed ◽  
Zahir M. Hussain

Accurate, fast, and automatic detection and classification of animal images is challenging, but it is much needed for many real-life applications. This paper presents a hybrid model of Mamdani Type-2 fuzzy rules and convolutional neural networks (CNNs) applied to identify and distinguish various animals using datasets consisting of about 27,307 images. The proposed system uses fuzzy rules to detect the image and then applies the CNN model to predict the object's category. The CNN model was trained and tested on more than 21,846 pictures of animals. The experimental results show that the proposed method offers high speed and efficiency, which could be a prominent aspect in designing image-processing systems based on Type-2 fuzzy rule characterization for identifying fixed and moving images. The proposed fuzzy method obtained an accuracy of 98% for identifying and recognizing moving objects, with a mean square error of 0.1183464, lower than in other studies. It also achieved a very high rate of correctly predicting malicious objects, with a recall of 0.98121 and a precision of 1. The test accuracy was evaluated using the F1 score, which reached a high value of 0.99052.
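As a quick sanity check (not from the paper), the reported F1 score is consistent with the quoted precision and recall:

```python
# F1 is the harmonic mean of precision and recall; plugging in the values
# reported above reproduces the stated F1 score.
precision, recall = 1.0, 0.98121
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 5))  # 0.99052, matching the value reported above
```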


2017 ◽  
Vol 2 ◽  
pp. 24-33 ◽  
Author(s):  
Musbah Zaid Enweiji ◽  
Taras Lehinevych ◽  
Аndrey Glybovets

Cross-language classification is an important task in multilingual learning, where documents in different languages often share the same set of categories. The main goal is to reduce the labeling cost of training a classification model for each individual language. A novel approach using convolutional neural networks for cross-language document classification is proposed in this article. It learns representations of knowledge shared across languages. Moreover, the method also works for a new language that was not used in training. The results of an empirical study on a large dataset of 21 languages demonstrate the robustness and competitiveness of the presented approach.
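As a hedged sketch of the general idea (the article does not specify the architecture here), a CNN text classifier whose embedding and convolutional filters are shared across all languages could look like the following in PyTorch; the vocabulary, dimensions and pooling choices are assumptions for illustration only.

```python
# Minimal sketch: one CNN text classifier with parameters shared across languages,
# so it can be applied to documents in a language not seen during training.
# All sizes below are illustrative assumptions.
import torch
import torch.nn as nn

class SharedTextCNN(nn.Module):
    def __init__(self, vocab_size=50_000, emb_dim=128, num_classes=10):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)           # shared across languages
        self.conv = nn.Conv1d(emb_dim, 100, kernel_size=5, padding=2)
        self.fc = nn.Linear(100, num_classes)

    def forward(self, token_ids):                                 # (batch, seq_len)
        x = self.embed(token_ids).transpose(1, 2)                 # (batch, emb, seq)
        x = torch.relu(self.conv(x)).max(dim=2).values            # global max pooling
        return self.fc(x)

logits = SharedTextCNN()(torch.randint(0, 50_000, (4, 200)))
print(logits.shape)  # torch.Size([4, 10])
```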


Author(s):  
Y. A. Lumban-Gaol ◽  
K. A. Ohori ◽  
R. Y. Peters

Abstract. Satellite-Derived Bathymetry (SDB) has been used in many applications related to coastal management. SDB can efficiently fill data gaps left by traditional echo-sounding measurements. However, it still requires large amounts of training data, which are not available in many areas. Furthermore, accuracy problems arise because a linear model cannot capture the non-linear relationship between reflectance and depth caused by bottom variations and noise. Convolutional Neural Networks (CNNs) offer the ability to capture the connection between neighbouring pixels as well as the non-linear relationship. These characteristics make CNNs compelling for shallow-water depth extraction. We investigate the accuracy of different architectures using different window sizes and band combinations. We use Sentinel-2 Level 2A images to provide reflectance values, and Lidar and Multi Beam Echo Sounder (MBES) datasets as depth references to train and test the model. A set of Sentinel-2 and in-situ depth subimage pairs is extracted to perform CNN training. The model is compared to the linear transform and applied to two other study areas. The resulting accuracy ranges from 1.3 m to 1.94 m, and the coefficient of determination reaches 0.94. The SDB model generated using a window size of 9x9 agrees well with the reference depths, especially in areas deeper than 15 m. Adding the two short-wave infrared bands to the four visible bands in training improves the overall accuracy of SDB. Applying the pre-trained model to other study areas yields similar results, depending on the water conditions.
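The patch-based setup described above (9x9 reflectance windows regressed against reference depths) could be sketched as follows, assuming PyTorch; the band count reflects the four visible plus two short-wave infrared bands mentioned above, but the layer sizes are assumptions, not the architectures compared in the paper.

```python
# Minimal sketch: regress water depth from a 9x9 window of Sentinel-2 reflectance.
# Band count and layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

class SDBPatchCNN(nn.Module):
    def __init__(self, bands: int = 6):  # four visible + two SWIR bands, as above
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(bands, 32, kernel_size=3), nn.ReLU(),   # 9x9 -> 7x7
            nn.Conv2d(32, 64, kernel_size=3), nn.ReLU(),      # 7x7 -> 5x5
            nn.Flatten(),
            nn.Linear(64 * 5 * 5, 64), nn.ReLU(),
            nn.Linear(64, 1),                                  # predicted depth (m)
        )

    def forward(self, patch: torch.Tensor) -> torch.Tensor:
        return self.net(patch).squeeze(1)

depths = SDBPatchCNN()(torch.rand(8, 6, 9, 9))  # batch of 9x9 subimage windows
print(depths.shape)  # torch.Size([8])
```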


Geophysics ◽  
2021 ◽  
pp. 1-45
Author(s):  
Runhai Feng ◽  
Dario Grana ◽  
Niels Balling

Segmentation of faults based on seismic images is an important step in reservoir characterization. With the recent development of deep-learning methods and the availability of massive computing power, automatic interpretation of seismic faults has become possible. The likelihood of occurrence of a fault can be quantified using a sigmoid function. Our goal is to quantify the fault-model uncertainty that is generally not captured by deep-learning tools. We propose to use the dropout approach, a regularization technique that prevents overfitting and co-adaptation in hidden units, to approximate Bayesian inference and estimate a principled uncertainty over functions. In particular, the variance of the learned model is decomposed into aleatoric and epistemic parts. The proposed method is applied to a real dataset from the Netherlands F3 block with two different dropout ratios in the convolutional neural networks. The aleatoric uncertainty is irreducible, since it relates to the stochastic dependency within the input observations. As the number of Monte Carlo realizations increases, the epistemic uncertainty asymptotically converges and the model standard deviation decreases, because the variability of the model parameters is better simulated or explained with a larger sample size. This analysis quantifies the confidence with which fault predictions of low uncertainty can be used, and it also suggests where more training data are needed to reduce the uncertainty in low-confidence regions.
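The Monte Carlo dropout procedure described above keeps dropout active at prediction time and aggregates repeated forward passes. A minimal sketch in PyTorch, with a toy stand-in network (`fault_net` is a placeholder, not the authors' architecture), is:

```python
# Minimal sketch: Monte Carlo dropout for fault-probability maps. Dropout stays
# active at inference; the mean over realizations gives the fault probability
# and the standard deviation approximates the epistemic uncertainty.
import torch
import torch.nn as nn

fault_net = nn.Sequential(            # stand-in for the fault-segmentation CNN
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Dropout2d(p=0.5),              # the dropout ratio controls the approximation
    nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),
)

@torch.no_grad()
def mc_dropout_predict(x: torch.Tensor, n_realizations: int = 50):
    fault_net.train()                 # keep dropout active during inference
    samples = torch.stack([fault_net(x) for _ in range(n_realizations)])
    return samples.mean(0), samples.std(0)   # fault probability, epistemic std

mean_prob, epistemic_std = mc_dropout_predict(torch.rand(1, 1, 128, 128))
print(mean_prob.shape, epistemic_std.shape)
```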

