Brain Tumor Classification & Segmentation by Using Advanced DNN, CNN & ResNet-50 Neural Networks

In the medical domain, brain image classification is an extremely challenging field. Medical images play a vital role in precise diagnosis and in surgical planning. Intelligent algorithms make it feasible to detect lesions in medical images quickly, and feature extraction from these images is especially necessary. Several studies have combined multiple algorithms in the medical imaging domain. For feature extraction, a vast amount of data is analyzed to produce processing results that help physicians deliver more precise diagnoses. Image processing is now widely used in medical science to advance early detection and treatment. In this context, this paper takes tumor and healthy images as the research object and first applies image processing and data augmentation to prepare the dataset for the neural networks. Deep neural networks (DNNs) have, to date, shown outstanding results in classification and segmentation tasks. Building on this, we adopt the pre-trained ResNet-50 model for image analysis. The paper evaluates three neural networks: a DNN, a CNN, and ResNet-50. The split dataset is assigned individually to each network. Once an image is accurately classified as containing a tumor, Otsu segmentation is employed to extract the tumor region alone. The experimental results show that ResNet-50 achieves the highest accuracy (0.996), a precision of 1.00, the best F1 score (1.0), and the minimum test loss (0.0269) for brain tumor classification. Extensive experiments demonstrate the efficiency and accuracy of the proposed tumor detection and segmentation. Our approach is comprehensive and requires only minimal pre- and post-processing, which allows its adoption in a variety of medical image classification and segmentation tasks.
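The abstract names Otsu segmentation for extracting the tumor region once an image is classified. The paper's own implementation is not given; as a minimal illustrative sketch, Otsu's method picks the gray level that maximizes between-class variance of the histogram, and thresholding at that level isolates the bright region (the toy image and shapes below are assumptions for illustration):

```python
import numpy as np

def otsu_threshold(image: np.ndarray) -> int:
    """Return the gray level that maximizes between-class variance."""
    hist = np.bincount(image.ravel(), minlength=256).astype(np.float64)
    prob = hist / hist.sum()
    omega = np.cumsum(prob)                 # P(class 0) up to threshold t
    mu = np.cumsum(prob * np.arange(256))   # cumulative mean
    mu_total = mu[-1]
    # Between-class variance for every candidate threshold t
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_total * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b = np.nan_to_num(sigma_b)        # undefined where one class is empty
    return int(np.argmax(sigma_b))

# Toy "scan": dark background with a bright blob; the threshold separates them
img = np.full((64, 64), 30, dtype=np.uint8)
img[20:40, 20:40] = 200
t = otsu_threshold(img)
mask = img > t          # the "tumor" region alone
```

In the pipeline the abstract describes, a mask like this would be applied only to images the classifier has already labeled as tumorous.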

Medical image classification plays a vital role in identifying and diagnosing diseases, which is very helpful to doctors. Conventional approaches classify based on shape, color, and/or texture, but most small problematic areas are not visible in medical images, which makes classification less efficient and weakens the ability to identify disease. Advanced deep learning algorithms provide an efficient way to build end-to-end models that compute final classification labels directly from the raw pixels of medical images. Conventional algorithms are not sufficient for high-resolution images because of small dataset sizes, while advanced deep learning models suffer from very high computational costs and limitations in the number of channels and layers. To overcome these limitations, we propose a new algorithm, the Normalized Coding Network with Multi-scale Perceptron (NCNMP), which combines high-level features with traditional features. The architecture of the proposed model includes three stages: training, retrieval, and fusion. We evaluated the proposed algorithm on the NIH2626 medical image dataset and obtained an overall image classification accuracy of 91.35%, which is higher than existing methods.


Today, the use of medical images often complicates the diagnosis process and treatment planning. The major challenges in image processing and fusion include data mismatching, data storage issues, and security constraints. Although several techniques are used for image processing, they lack security parameters. Our objective is to provide an efficient method for image fusion together with security paradigms. To provide security, encryption standards are used. The improved framework gives better performance and quality than existing methods in terms of security, database information, and fusion factor.
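The abstract pairs image fusion with encryption standards but names neither the fusion rule nor the cipher. As a hedged sketch of the encrypt-after-fuse idea only, the snippet below uses pixel averaging as the simplest fusion rule and a seeded XOR keystream purely as a stand-in for a real cipher such as AES (all names and shapes here are illustrative assumptions, not the paper's method):

```python
import numpy as np

def fuse_average(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Pixel-wise averaging: the simplest image-fusion rule."""
    return ((a.astype(np.uint16) + b) // 2).astype(np.uint8)

def xor_cipher(img: np.ndarray, key: int) -> np.ndarray:
    """Seeded XOR keystream; a toy stand-in for a real cipher (e.g., AES)."""
    ks = np.random.default_rng(key).integers(0, 256, img.shape, dtype=np.uint8)
    return img ^ ks

a = np.full((16, 16), 100, dtype=np.uint8)   # modality 1 (toy)
b = np.full((16, 16), 200, dtype=np.uint8)   # modality 2 (toy)
fused = fuse_average(a, b)                    # every pixel becomes 150
enc = xor_cipher(fused, key=42)               # ciphertext for storage/transfer
dec = xor_cipher(enc, key=42)                 # XOR with the same stream inverts
```

A production system would replace `xor_cipher` with an authenticated encryption standard; the round-trip structure stays the same.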


The current generation is witnessing a radical change in technology with the rise of artificial intelligence. The application of artificial intelligence across different domains indicates the widespread involvement of this technology in the years to come. One such application is medical image classification, such as brain tumor classification. Medical image classification draws on techniques from the image processing domain to process a set of MRI images and extract prominent features that ease classification. The classifier model learns from the MRI image data to predict the occurrence of tumor cells. The objective of this paper is to survey the various machine learning approaches applied to medical image classification, since preparing the MRI dataset in a standard form is key to developing a classifier model. The paper analyzes different types of preprocessing, image segmentation, and feature extraction methodologies, and points out astute observations on each of the techniques in these image processing methodologies. Predicting tumor cells is challenging because of their unpredictable shape, so adopting an appropriate methodology to improve accuracy and efficiency is important: it aids in constructing a classifier model that can accelerate prediction and classification for brain tumor MRI imagery.
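The survey stresses preparing MRI data in a standard form before classification. A typical minimal pipeline is intensity normalization plus resizing to a fixed input shape; the sketch below is one plausible version of those two steps (nearest-neighbour resize chosen for brevity; the dataset shape and target size are made-up examples, not from the paper):

```python
import numpy as np

def normalize(img: np.ndarray) -> np.ndarray:
    """Min-max intensity normalization to [0, 1], a common MRI preprocessing step."""
    lo, hi = img.min(), img.max()
    if hi <= lo:
        return np.zeros_like(img, dtype=np.float64)
    return (img - lo) / (hi - lo)

def resize_nearest(img: np.ndarray, shape) -> np.ndarray:
    """Nearest-neighbour resize to a standard network input size."""
    h, w = img.shape
    rows = np.arange(shape[0]) * h // shape[0]   # source row for each output row
    cols = np.arange(shape[1]) * w // shape[1]   # source col for each output col
    return img[rows[:, None], cols]

# Toy slice with a non-square, MRI-like shape, standardized to 128x128
scan = np.random.default_rng(0).integers(0, 4096, (91, 109)).astype(np.float64)
std = resize_nearest(normalize(scan), (128, 128))
```

Standardizing shape and intensity range this way is what lets one classifier consume scans from heterogeneous acquisition protocols.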


Author(s): V. Deepika, T. Rajasenbagam

A brain tumor is an uncontrolled growth of abnormal brain tissue that can interfere with normal brain function. Although various methods have been developed for brain tumor classification, tumor detection and multiclass classification remain challenging because of the complex characteristics of brain tumors. Brain tumor detection and classification are among the most challenging and time-consuming tasks in medical image processing. MRI (magnetic resonance imaging) is a visual imaging technique that provides information about the soft tissues of the human body and helps identify brain tumors. Proper diagnosis can, to some extent, protect a patient's health. This paper presents a review of various detection and classification methods for brain tumors using image processing techniques.


Author(s): Y.A. Hamad, K.V. Simonov, A.S. Kents

The paper considers general approaches to image processing, analysis of visual data, and computer vision. The main methods for detecting features and edges associated with these approaches are presented. A brief description is also given of modern edge detection and classification algorithms suitable for isolating and characterizing the type of lung pathology in medical images.
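The abstract surveys edge detection methods without reproducing any one of them. As a generic reference point (not the paper's algorithm), the classic Sobel operator estimates the gradient with two 3x3 kernels and takes the magnitude; a minimal zero-padded NumPy version:

```python
import numpy as np

def sobel_edges(img: np.ndarray) -> np.ndarray:
    """Gradient magnitude via 3x3 Sobel kernels (zero-padded borders)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float64)
    ky = kx.T                                  # vertical-gradient kernel
    p = np.pad(img.astype(np.float64), 1)
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(3):                         # unrolled 3x3 cross-correlation
        for j in range(3):
            patch = p[i:i + h, j:j + w]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    return np.hypot(gx, gy)                    # gradient magnitude

# Vertical step edge -> strong response along the step, zero in flat regions
img = np.zeros((8, 8))
img[:, 4:] = 1.0
edges = sobel_edges(img)
```

Pathology isolation pipelines typically follow an edge map like this with thresholding and contour grouping.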


2021, Vol 11 (15), pp. 6721
Author(s): Jinyeong Wang, Sanghwan Lee

As automated surface inspection raises manufacturing productivity in smart factories, the demand for machine vision is rising. Recently, convolutional neural networks (CNNs) have demonstrated outstanding performance and solved many problems in the field of computer vision, and many machine vision systems accordingly adopt CNNs for surface defect inspection. In this study, we developed an effective data augmentation method for grayscale images in CNN-based machine vision with mono cameras. Our method applies to grayscale industrial images, and we demonstrated outstanding performance in image classification and object detection tasks. The main contributions of this study are as follows: (1) We propose a data augmentation method that can be applied when training CNNs on industrial images taken with mono cameras. (2) We demonstrate that image classification and object detection performance improves when training on industrial image data augmented by the proposed method. With the proposed method, many machine vision problems involving mono cameras can be solved effectively using CNNs.
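The abstract does not spell out the augmentation operations it proposes. For orientation only, a baseline grayscale augmentation pass commonly combines geometric flips/rotations with brightness jitter; the helper below is a generic sketch of that baseline, not the paper's method (all parameter ranges are assumptions):

```python
import numpy as np

def augment_gray(img: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Random flip, 90-degree rotation, and brightness jitter for a mono image."""
    out = img.copy()
    if rng.random() < 0.5:
        out = np.fliplr(out)                      # horizontal mirror
    out = np.rot90(out, k=int(rng.integers(0, 4)))  # 0/90/180/270 rotation
    gain = rng.uniform(0.8, 1.2)                  # assumed brightness-jitter range
    out = np.clip(out.astype(np.float32) * gain, 0, 255).astype(np.uint8)
    return out

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(32, 32), dtype=np.uint8)   # toy mono frame
batch = np.stack([augment_gray(img, rng) for _ in range(8)])
```

Applying such transforms on the fly during training multiplies the effective dataset size without touching the stored images.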


Diagnostics, 2021, Vol 11 (8), pp. 1384
Author(s): Yin Dai, Yifan Gao, Fayu Liu

Over the past decade, convolutional neural networks (CNNs) have shown very competitive performance in medical image analysis tasks, such as disease classification, tumor segmentation, and lesion detection. CNNs have great advantages in extracting local features of images; however, because of the locality of the convolution operation, they cannot model long-range relationships well. Recently, transformers have been applied to computer vision and achieved remarkable success on large-scale datasets. Compared with natural images, multi-modal medical images have explicit and important long-range dependencies, and effective multi-modal fusion strategies can greatly improve the performance of deep models. This prompts us to study transformer-based structures and apply them to multi-modal medical images. Existing transformer-based network architectures require large-scale datasets to achieve good performance, but medical imaging datasets are relatively small, which makes it difficult to apply pure transformers to medical image analysis. Therefore, we propose TransMed for multi-modal medical image classification. TransMed combines the advantages of CNNs and transformers to efficiently extract low-level features of images and establish long-range dependencies between modalities. We evaluated our model on two datasets, parotid gland tumor classification and knee injury classification. Combining our contributions, we achieve improvements of 10.1% and 1.9% in average accuracy, respectively, outperforming other state-of-the-art CNN-based models. The results of the proposed method are promising and have tremendous potential to be applied to a large number of medical image analysis tasks. To the best of our knowledge, this is the first work to apply transformers to multi-modal medical image classification.
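TransMed's architecture is not reproduced in the abstract. The long-range mechanism it builds on, however, is standard scaled dot-product self-attention: concatenate patch tokens from both modalities and let every token attend to every other in a single step. The sketch below shows only that generic mechanism (token counts, dimensions, and the "T1/T2" naming are illustrative assumptions):

```python
import numpy as np

def attention(q: np.ndarray, k: np.ndarray, v: np.ndarray) -> np.ndarray:
    """Scaled dot-product attention: every token attends to every other token."""
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))  # stable softmax
    w = w / w.sum(axis=-1, keepdims=True)
    return w @ v

rng = np.random.default_rng(1)
t1 = rng.normal(size=(4, 16))    # 4 patch tokens from modality 1 (e.g., T1)
t2 = rng.normal(size=(4, 16))    # 4 patch tokens from modality 2 (e.g., T2)
tokens = np.vstack([t1, t2])     # concatenation lets attention cross modalities
fused = attention(tokens, tokens, tokens)
```

Because the score matrix spans the concatenated sequence, a token from one modality can weight tokens from the other directly, which is the long-range cross-modal dependency a pure CNN cannot express in one layer.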


Mathematics, 2021, Vol 9 (6), pp. 624
Author(s): Stefan Rohrmanstorfer, Mikhail Komarov, Felix Mödritscher

With the ever-increasing amount of image data, it has become necessary to automatically search for and process the information in these images. As fashion is captured in images, the fashion sector provides the perfect foundation to be supported by the integration of a service or application built on an image classification model. In this article, the state of the art for image classification is analyzed and discussed. Based on the elaborated knowledge, four different approaches are implemented to successfully extract features from fashion data. For this purpose, a human-worn fashion dataset of 2567 images was created and significantly enlarged by the performed image operations. The results show that convolutional neural networks are the undisputed standard for classifying images, and that TensorFlow is the best library to build them. Moreover, through the introduction of dropout layers, data augmentation, and transfer learning, model overfitting was successfully prevented, and it was possible to incrementally improve the validation accuracy on the created dataset from an initial 69% to a final validation accuracy of 84%. More distinct apparel such as trousers, shoes, and hats was classified better than other upper-body clothes.
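The abstract credits dropout layers (alongside augmentation and transfer learning) with preventing overfitting. The mechanism itself is small enough to sketch: inverted dropout zeroes each activation with probability p during training and rescales the survivors so the expected output is unchanged, which is why the layer can be a no-op at inference time. A minimal NumPy version of that mechanism only (framework layers such as TensorFlow's handle this internally):

```python
import numpy as np

def dropout(x: np.ndarray, p: float, rng: np.random.Generator,
            training: bool = True) -> np.ndarray:
    """Inverted dropout: zero units with prob p, rescale so E[output] == x."""
    if not training or p == 0.0:
        return x                       # identity at inference time
    keep = rng.random(x.shape) >= p    # Bernoulli keep-mask
    return x * keep / (1.0 - p)        # rescale survivors by 1/(1-p)

rng = np.random.default_rng(0)
acts = np.ones(1000)
out = dropout(acts, p=0.5, rng=rng)
# Roughly half the units are zeroed; the survivors are scaled to 2.0,
# so the mean activation stays close to the original 1.0
```

Randomly silencing units this way prevents co-adaptation between neurons, which is the overfitting control the article relies on.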


2018, Vol 7 (2.7), pp. 614
Author(s): M Manoj Krishna, M Neelima, M Harshali, M Venu Gopala Rao

Image classification is a classical problem in image processing, computer vision, and machine learning. In this paper we study image classification using deep learning, adopting the AlexNet convolutional neural network architecture. Four test images were selected from the ImageNet database for classification. We cropped the images to various regions and conducted experiments on each. The results show the effectiveness of deep-learning-based image classification using AlexNet.
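The abstract says the test images were cropped to various regions before classification, but does not specify the regions. A hypothetical center-crop helper parameterized by area fraction is one plausible way to run such an experiment (the fractions and the 224x224 AlexNet-style input size below are illustrative assumptions):

```python
import numpy as np

def crop_fraction(img: np.ndarray, frac: float) -> np.ndarray:
    """Center-crop each spatial dimension to the given fraction of its size."""
    h, w = img.shape[:2]
    ch = max(1, int(h * frac))
    cw = max(1, int(w * frac))
    top, left = (h - ch) // 2, (w - cw) // 2
    return img[top:top + ch, left:left + cw]

# Toy RGB image at the 224x224 input size commonly used with AlexNet
img = np.zeros((224, 224, 3), dtype=np.uint8)
crops = {f: crop_fraction(img, f) for f in (1.0, 0.75, 0.5)}
```

Each crop would then be resized back to the network's input size and classified, letting one compare predictions across progressively smaller central regions.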

