BRAIN TUMOR DETECTION USING BACKPROPAGATION NEURAL NETWORKS

2018 ◽  
Vol 3 (3) ◽  
pp. 83-88
Author(s):  
Iklas Sanubary

A study of brain tumor detection has been carried out using backpropagation neural networks with Gray Level Co-Occurrence Matrix (GLCM) feature extraction. Brain CT-scan images, consisting of 12 normal and 13 abnormal (tumor) images, were analyzed. The preprocessing stage begins with cropping each image to 256 × 256 pixels, converting the color images to grayscale, and equalizing the histogram to improve image quality. The GLCM is used to calculate statistical texture features, namely contrast, correlation, energy, and homogeneity, for each direction. The backpropagation neural network uses a [12 2 1] architecture. The correlation coefficient between the target and the output is 0.999 for the training data and 0.959 for the testing data, with an accuracy of 70%. The results of this research indicate that backpropagation neural networks can be used for the detection of brain tumors.
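As a concrete illustration of this kind of pipeline, the minimal sketch below computes GLCM texture features with scikit-image and trains a small backpropagation network with a two-neuron hidden layer, mirroring the [12 2 1] architecture. This is not the author's code: the exact combination of properties and directions that yields the 12 network inputs is an assumption, and `images` is a hypothetical list of preprocessed arrays.

```python
# Sketch: GLCM texture features feeding a backpropagation network.
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # 'greycomatrix' in older scikit-image releases
from sklearn.neural_network import MLPClassifier

ANGLES = [0, np.pi / 4, np.pi / 2, 3 * np.pi / 4]          # 0, 45, 90, 135 degrees
PROPS = ["contrast", "correlation", "energy", "homogeneity"]

def glcm_features(gray_img: np.ndarray) -> np.ndarray:
    """Return one GLCM property value per (property, direction) pair."""
    glcm = graycomatrix(gray_img, distances=[1], angles=ANGLES,
                        levels=256, symmetric=True, normed=True)
    return np.concatenate([graycoprops(glcm, p).ravel() for p in PROPS])

# images: hypothetical list of preprocessed 256x256 uint8 arrays,
# labels: 0 = normal, 1 = tumor.
# X = np.stack([glcm_features(img) for img in images])
# A two-neuron hidden layer mirrors the paper's [12 2 1] architecture.
clf = MLPClassifier(hidden_layer_sizes=(2,), solver="adam", max_iter=5000)
# clf.fit(X_train, y_train); accuracy = clf.score(X_test, y_test)
```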

Author(s):  
M.B. Bramarambika ◽  
M Sesha Shayee

A brain tumor is a mass that grows unevenly in the brain and directly affects human life. The mass arises spontaneously from the tissues surrounding the brain or from the skull. Brain tumors are of two types, benign and malignant. Malignant brain tumors contain cancer cells, grow quickly, and spread to other regions of the brain and spine. Accurate and prompt diagnosis of brain tumors is essential for implementing an effective treatment of this disease. Brain images produced by the Magnetic Resonance Imaging (MRI) technique are a rich source of data for brain tumor diagnosis and treatment in the medical field, owing to the large number of features they provide compared with other imaging modalities. The performance of existing methods is inadequate considering the medical significance of the classification problem. Earlier methods relied on manually delineated tumor regions prior to classification, which prevented them from being fully automated. Automatic algorithms developed using CNNs and their variants have not achieved a substantial improvement in performance. To overcome this issue, an automatic brain tumor detection system, the "Enhanced Convolutional Neural Network (CNN) Algorithm for MRI Images", is proposed to detect brain tumors and classify the glioma region into low-grade and high-grade glioma.
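For illustration only, the sketch below shows a generic two-class CNN classifier for glioma grading in PyTorch. It is an assumption, not the paper's "Enhanced CNN"; the layer sizes and the single-channel 128 × 128 input resolution are hypothetical choices.

```python
# Sketch: a small CNN that classifies an MRI slice as low-grade vs. high-grade glioma.
import torch
import torch.nn as nn

class GliomaCNN(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, num_classes)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# Example: a batch of four single-channel 128x128 MRI slices.
logits = GliomaCNN()(torch.randn(4, 1, 128, 128))  # shape: (4, 2)
```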


2020 ◽  
Vol 2020 (1) ◽  
pp. 129-133
Author(s):  
Luigi Celona ◽  
Raimondo Schettini

Convolutional neural networks (CNNs) are used in an increasingly systematic way in a great variety of computer vision applications, including image quality assessment. However, their application to evaluating the perceived quality of images is strongly limited by the lack of adequate and consistent training data. A CNN-based framework for evaluating the image quality of consumer photographs is made up of several building blocks that can be implemented in different ways. In this article, we schematically illustrate how these building blocks have been implemented and combined so far to create feasible solutions that exploit the strengths of CNNs while mitigating their intrinsic limitations. Some experimental results are reported to show the effectiveness of CNN-based solutions on real-world image quality datasets.
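As one possible concretization of such building blocks, the sketch below pairs a generic backbone with a small regression head that outputs a single quality score per image. It is an illustrative assumption, not the specific framework surveyed in the article; the choice of ResNet-18 and the head sizes are hypothetical.

```python
# Sketch: CNN backbone + regression head predicting one quality score per image.
import torch
import torch.nn as nn
from torchvision import models

class QualityRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights=None)   # older torchvision: pretrained=False
        backbone.fc = nn.Identity()                # keep the 512-d feature vector
        self.backbone = backbone
        self.head = nn.Sequential(nn.Linear(512, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.backbone(x)).squeeze(1)

scores = QualityRegressor()(torch.randn(2, 3, 224, 224))  # one score per image
```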


Author(s):  
Mimin Hendriani ◽  
Rais ◽  
Lilies Handayani

Backpropagation is a supervised training method that adjusts the network weights based on the error in the produced output. Backpropagation neural networks are trained in three stages, namely feedforward of the input training patterns, backpropagation of the associated error, and adjustment of the weights. The weights are updated as long as the training results have not converged. The goal error (MSE) of 0.0070579 was reached at epoch 99,994 out of the specified 100,000 iterations. Based on the regression plot, the training data yielded a correlation coefficient of up to 0.55321; the larger the resulting R value, the better the accuracy of the face identification carried out in this study.
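The numpy sketch below illustrates the three training stages described above (feedforward, error backpropagation, weight adjustment) with an MSE goal and an epoch limit. The network size, data, and learning rate are illustrative assumptions rather than the study's face-identification setup; only the goal MSE and iteration limit are taken from the abstract.

```python
# Sketch: the three backpropagation stages with an MSE goal and epoch limit.
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((20, 8))                         # hypothetical feature vectors
y = rng.integers(0, 2, (20, 1)).astype(float)   # hypothetical binary targets

W1, W2 = rng.normal(0, 0.5, (8, 4)), rng.normal(0, 0.5, (4, 1))
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr, goal_mse, max_epochs = 0.5, 0.0070579, 100_000

for epoch in range(1, max_epochs + 1):
    # 1) feedforward of the input training patterns
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)
    err = y - out
    mse = float(np.mean(err ** 2))
    if mse <= goal_mse:                          # stop once the goal error is reached
        break
    # 2) backpropagation of the associated error
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # 3) adjustment of the weights
    W2 += lr * h.T @ d_out
    W1 += lr * X.T @ d_h

print(f"stopped at epoch {epoch} with MSE {mse:.7f}")
```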


Author(s):  
D. Clermont ◽  
M. Dorozynski ◽  
D. Wittich ◽  
F. Rottensteiner

Abstract. This paper proposes several methods for training a Convolutional Neural Network (CNN) for learning the similarity between images of silk fabrics based on multiple semantic properties of the fabrics. In the context of the EU H2020 project SILKNOW (http://silknow.eu/), two variants of training were developed, one based on a Siamese CNN and one based on a triplet architecture. We propose different definitions of similarity and different loss functions for both training strategies, some of them also allowing the use of incomplete information about the training data. We assess the quality of the trained model by using the learned image features in a k-NN classification. We achieve overall accuracies of 93–95% and average F1-scores of 87–92%.
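The sketch below illustrates the triplet idea in PyTorch: a shared embedding CNN processes anchor, positive, and negative images, a triplet margin loss shapes the embedding space, and the learned features can then be classified with k-NN. The embedding network, margin, and image size are illustrative assumptions, not the SILKNOW implementation.

```python
# Sketch: triplet training of a shared embedding CNN for image similarity.
import torch
import torch.nn as nn

embed = nn.Sequential(                           # shared embedding CNN
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(32, 64),
)
triplet_loss = nn.TripletMarginLoss(margin=1.0)

anchor, positive, negative = (torch.randn(8, 3, 128, 128) for _ in range(3))
loss = triplet_loss(embed(anchor), embed(positive), embed(negative))
loss.backward()                                  # gradients flow through the shared CNN

# At test time the embeddings would feed a k-NN classifier,
# e.g. sklearn.neighbors.KNeighborsClassifier, as in the paper's evaluation.
```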


Sensors ◽  
2022 ◽  
Vol 22 (2) ◽  
pp. 562
Author(s):  
Marcin Kociołek ◽  
Michał Kozłowski ◽  
Antonio Cardone

The perceived texture directionality is an important yet not fully explored image characteristic. In many applications, texture directionality detection is of fundamental importance. Several approaches have been proposed, such as the fast Fourier-based method. We recently proposed a method based on the interpolated grey-level co-occurrence matrix (iGLCM), which is robust to image blur and noise but slower than the Fourier-based method. Here we test the applicability of convolutional neural networks (CNNs) to texture directionality detection. To obtain the large amount of training data required, we built a training dataset consisting of synthetic textures with known directionality and varying perturbation levels. Subsequently, we defined and tested shallow and deep CNN architectures. We present the test results focusing on the CNN architectures and their robustness with respect to image perturbations. We identify the best performing CNN architecture and compare it with the iGLCM, the Fourier, and the local gradient orientation methods. We find that the accuracy of the CNN is lower than, yet comparable to, that of the iGLCM, and that it outperforms the other two methods. As expected, the CNN method shows the highest computing speed. Finally, we demonstrate the best performing CNN on real-life images. Visual analysis suggests that the learned patterns generalize to real-life image data. Hence, CNNs represent a promising approach for texture directionality detection, warranting further investigation.
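The sketch below shows one plausible way to generate such synthetic training textures: an oriented sinusoidal grating with known directionality, perturbed by Gaussian noise and blur. The generator and its parameters are assumptions for illustration, not the exact procedure used to build the paper's dataset.

```python
# Sketch: synthetic oriented textures with known directionality labels.
import numpy as np
from scipy.ndimage import gaussian_filter

def synthetic_texture(angle_deg: float, size: int = 64, freq: float = 0.2,
                      noise: float = 0.2, blur: float = 1.0) -> np.ndarray:
    """Grating oriented at angle_deg, perturbed by Gaussian noise and blur."""
    theta = np.deg2rad(angle_deg)
    yy, xx = np.mgrid[0:size, 0:size]
    grating = np.sin(2 * np.pi * freq * (xx * np.cos(theta) + yy * np.sin(theta)))
    noisy = grating + noise * np.random.randn(size, size)
    return gaussian_filter(noisy, sigma=blur)

# The known orientation serves as the label; a CNN can regress or classify it.
angles = np.random.uniform(0, 180, size=1000)
images = np.stack([synthetic_texture(a) for a in angles])
```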


Author(s):  
Kailash D. Kharat ◽  
Pradyumna P. Kulkarni

Classification of MRI (Magnetic Resonance Imaging) brain tumor images is a difficult task due to the variance and complexity of tumors. This paper presents two neural network techniques for the classification of magnetic resonance human brain images. The proposed approach consists of three stages, namely feature extraction, dimensionality reduction, and classification. In the first stage, features are obtained from the MRI images using the discrete wavelet transform (DWT). In the second stage, the MRI features are reduced to the more essential ones using principal component analysis (PCA). In the classification stage, two classifiers based on supervised machine learning are developed: the first is based on a feed-forward artificial neural network (FF-ANN) and the second on a back-propagation neural network. The classifiers are used to classify subjects as normal or abnormal MRI brain images. Artificial neural networks (ANNs) have been developed for a wide range of applications such as function approximation, feature extraction, optimization, and classification. In particular, they have been applied to image enhancement, segmentation, registration, feature extraction, and object recognition and classification. Among these, object recognition and image classification are the most important, as they are critical steps for high-level processing such as brain tumor classification. Multi-Layer Perceptron (MLP), Radial Basis Function (RBF), Hopfield, cellular, and pulse-coupled neural networks have been used for image segmentation. These networks can be categorized into feed-forward (associative) and feedback (auto-associative) networks.
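The sketch below outlines this three-stage pipeline (DWT feature extraction, PCA dimensionality reduction, backpropagation-trained feed-forward classification) using PyWavelets and scikit-learn. The wavelet, decomposition level, component count, and layer sizes are illustrative assumptions, and `images`/`labels` are hypothetical placeholders rather than the paper's data.

```python
# Sketch: DWT features -> PCA -> feed-forward (backpropagation-trained) classifier.
import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

def dwt_features(gray_img: np.ndarray, wavelet: str = "haar", level: int = 2) -> np.ndarray:
    """Use the low-frequency approximation coefficients as the feature vector."""
    approx = pywt.wavedec2(gray_img, wavelet, level=level)[0]
    return approx.ravel()

# images: hypothetical list of preprocessed grayscale MRI slices,
# labels: 0 = normal, 1 = abnormal.
# X = np.stack([dwt_features(img) for img in images])
model = make_pipeline(
    PCA(n_components=20),                                     # keep the most essential features
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000),   # backpropagation-trained FF-ANN
)
# model.fit(X_train, labels_train); predictions = model.predict(X_test)
```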

