Brain Tumour Classification Using Noble Deep Learning Approach with Parametric Optimization through Metaheuristics Approaches

Computers ◽  
2022 ◽  
Vol 11 (1) ◽  
pp. 10
Author(s):  
Dillip Ranjan Nayak ◽  
Neelamadhab Padhy ◽  
Pradeep Kumar Mallick ◽  
Dilip Kumar Bagal ◽  
Sachin Kumar

Deep learning has surged in popularity in recent years, notably in the domains of medical image processing, medical image analysis, and bioinformatics. In this study, we offer a fully automatic brain tumour segmentation approach based on deep neural networks (DNNs). We describe a novel CNN architecture that differs from those usually used in computer vision. The classification of tumour cells is very difficult because of their heterogeneous nature. From the perspective of visual learning and brain tumour recognition, the convolutional neural network (CNN) is the most widely used machine learning algorithm. This paper presents a CNN model together with parametric optimization approaches for analysing brain tumour magnetic resonance images. In the simulations, the model achieves exactly 100% accuracy across all nine runs of Taguchi's L9 design of experiment. In this work, the authors tune the parameters of the CNN, applied to a dataset of brain MRIs to detect any portion of a tumour, through three new advanced optimization techniques, i.e., SFOA, FBIA and MGA. The comparative analysis of these three algorithms will interest readers who wish to apply such techniques to a variety of technical and medical challenges.
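As a rough illustration of the kind of experiment described above, the sketch below runs a small CNN over a Taguchi L9(3^3) orthogonal array of hyperparameters. The network, the factors (learning rate, kernel size, filter count), and their levels are assumptions for illustration; the paper's actual architecture and the SFOA, FBIA and MGA metaheuristics are not reproduced here.

```python
# Hypothetical sketch: a small CNN for MRI slice classification evaluated over a
# Taguchi L9 orthogonal array of hyperparameters (3 factors, 3 levels each).
# Factor names and levels are illustrative assumptions, not the paper's settings.
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, n_filters: int, kernel_size: int, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, n_filters, kernel_size, padding=kernel_size // 2),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(n_filters, n_filters * 2, kernel_size, padding=kernel_size // 2),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(n_filters * 2, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# Taguchi L9(3^3): nine runs covering three 3-level factors.
L9 = [(0, 0, 0), (0, 1, 1), (0, 2, 2),
      (1, 0, 1), (1, 1, 2), (1, 2, 0),
      (2, 0, 2), (2, 1, 0), (2, 2, 1)]
lrs     = [1e-2, 1e-3, 1e-4]        # assumed learning-rate levels
kernels = [3, 5, 7]                 # assumed kernel-size levels
filters = [8, 16, 32]               # assumed filter-count levels

x = torch.randn(16, 1, 128, 128)    # stand-in for an MRI mini-batch
y = torch.randint(0, 2, (16,))      # stand-in tumour / no-tumour labels

for run, (i, j, k) in enumerate(L9, start=1):
    model = SmallCNN(filters[k], kernels[j])
    opt = torch.optim.Adam(model.parameters(), lr=lrs[i])
    loss = nn.CrossEntropyLoss()(model(x), y)
    loss.backward()
    opt.step()
    print(f"run {run}: lr={lrs[i]}, kernel={kernels[j]}, "
          f"filters={filters[k]}, loss={loss.item():.3f}")
```

In a real experiment each run would train to convergence on the MRI training set and report validation accuracy, which the Taguchi analysis (or a metaheuristic) would then use to pick the best factor levels.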

Author(s):  
Dr. K. Naveen Kumar

Abstract: Recently, a machine learning (ML) area called deep learning emerged in the computer-vision field and became very popular in many fields. It started from an event in late 2012, when a deep-learning approach based on a convolutional neural network (CNN) won an overwhelming victory in the best-known worldwide computer-vision competition, ImageNet Classification. Since then, researchers in many fields, including medical image analysis, have started actively participating in the explosively growing field of deep learning. In this paper, deep learning techniques and their applications to medical image analysis are surveyed. The survey covers 1) standard ML techniques in the computer-vision field, 2) what changed in ML with the introduction of deep learning, 3) ML models in deep learning, and 4) applications of deep learning to medical image analysis. The comparison of ML before and after deep learning reveals that ML with feature input (feature-based ML) was dominant before the introduction of deep learning, and that the major and essential difference introduced by deep learning is learning from image data directly, without object segmentation or feature extraction; this is the source of the power of deep learning, although the depth of the model is also an important attribute. The survey also reveals that the class of ML with image input has a long history that predates the term "deep learning": even before the term existed, this class of ML was applied to various problems in medical image analysis, including classification between lesions and non-lesions, classification between lesion types, segmentation of lesions or organs, and detection of lesions. ML with image input, including deep learning, is a very powerful, versatile technology with higher performance, which can bring the current state-of-the-art performance level of medical image analysis to the next level, and it is expected to be the mainstream technology in medical image analysis in the next few decades. Keywords: Deep learning, Convolutional neural network, Massive-training artificial neural network, Computer-aided diagnosis, Medical image analysis, Classification
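The distinction the survey draws between feature-based ML and ML with image input can be illustrated with a minimal, hypothetical sketch; the synthetic data and the hand-crafted feature choices below are illustrative assumptions, not code from the survey.

```python
# Illustrative contrast (not from the survey itself): a feature-based classifier
# vs. a model that learns from raw image input, using synthetic data.
import numpy as np
import torch
import torch.nn as nn
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
images = rng.random((100, 1, 64, 64), dtype=np.float32)   # stand-in medical images
labels = rng.integers(0, 2, size=100)                     # stand-in lesion labels

# 1) ML with feature input: hand-crafted descriptors feed a generic classifier.
features = np.stack([images.mean(axis=(1, 2, 3)),          # mean intensity
                     images.std(axis=(1, 2, 3))], axis=1)  # contrast proxy
feature_model = LogisticRegression().fit(features, labels)

# 2) ML with image input: a CNN consumes the pixels directly; no explicit
#    segmentation or feature-extraction step precedes the model.
cnn = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(8, 2),
)
logits = cnn(torch.from_numpy(images))   # features are learned inside the network
print(feature_model.score(features, labels), logits.shape)
```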


2022 ◽  
Vol 12 (1) ◽  
Author(s):  
Shota Ito ◽  
Yuichi Mine ◽  
Yuki Yoshimi ◽  
Saori Takeda ◽  
Akari Tanaka ◽  
...  

Abstract Temporomandibular disorders are typically accompanied by a number of clinical manifestations that involve pain and dysfunction of the masticatory muscles and temporomandibular joint. The most important subgroup of articular abnormalities in patients with temporomandibular disorders includes patients with different forms of articular disc displacement and deformation. Here, we propose a fully automated articular disc detection and segmentation system to support the diagnosis of temporomandibular disorder on magnetic resonance imaging. This system uses deep learning-based semantic segmentation approaches. The study included a total of 217 magnetic resonance images from 10 patients with anterior displacement of the articular disc and 10 healthy control subjects with normal articular discs. These images were used to evaluate three deep learning-based semantic segmentation approaches: our proposed convolutional neural network encoder-decoder named 3DiscNet (Detection for Displaced articular DISC using convolutional neural NETwork), U-Net, and SegNet-Basic. Of the three algorithms, 3DiscNet and SegNet-Basic showed comparably good metrics (Dice coefficient, sensitivity, and positive predictive value). This study provides a proof-of-concept for a fully automated deep learning-based segmentation methodology for articular discs on magnetic resonance images and obtained promising initial results, indicating that the method could potentially be used in clinical practice for the assessment of temporomandibular disorders.
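For reference, the evaluation metrics named in the abstract (Dice coefficient, sensitivity, positive predictive value) can be computed from binary masks as in the following minimal sketch; the masks here are synthetic stand-ins, not data from the study.

```python
# Minimal sketch of the segmentation metrics reported in the study (Dice
# coefficient, sensitivity, positive predictive value) on binary masks.
import numpy as np

def segmentation_metrics(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8):
    """pred, truth: boolean masks of the articular disc region."""
    tp = np.logical_and(pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    dice        = 2 * tp / (2 * tp + fp + fn + eps)
    sensitivity = tp / (tp + fn + eps)   # recall over the true disc pixels
    ppv         = tp / (tp + fp + eps)   # precision of the predicted disc
    return dice, sensitivity, ppv

rng = np.random.default_rng(1)
truth = rng.random((256, 256)) > 0.9          # stand-in ground-truth mask
pred  = np.roll(truth, shift=1, axis=0)       # stand-in network output
print(segmentation_metrics(pred, truth))
```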


2020 ◽  
Author(s):  
Albahli Saleh ◽  
Ali Alkhalifah

BACKGROUND To diagnose cardiothoracic diseases, a chest x-ray (CXR) is examined by a radiologist. As more people are affected, doctors are becoming scarce, especially in developing countries. However, with the advent of image processing tools, the task of diagnosing these cardiothoracic diseases has seen great progress. Many researchers have investigated how the problems associated with medical images can be mitigated by using neural networks. OBJECTIVE Previous works used state-of-the-art techniques and achieved effective results on one or two cardiothoracic diseases but could lead to misclassification. In our work, we adopted generative adversarial networks (GANs) to synthesize chest radiographs (CXRs) that augment the training set across multiple cardiothoracic diseases, so that chest diseases in different classes can be diagnosed efficiently, as shown in Figure 1. In this regard, our major contributions are: classifying various cardiothoracic diseases to detect a specific chest disease based on CXR; using GANs to overcome the shortage of small training datasets; addressing the problem of imbalanced data; and implementing an optimal deep neural network architecture with different hyper-parameters to obtain the best accuracy. METHODS For this research, we do not build a model from scratch because of computational constraints, as doing so requires very high-end computers. Instead, we use a convolutional neural network (CNN), a class of deep neural networks, and propose a generative adversarial network (GAN)-based model to generate synthetic data for training, as the amount of available data is limited. We use pre-trained models, i.e., models that were trained on a large benchmark dataset to solve a problem similar to the one we want to solve. For example, the ResNet-152 model we used was originally trained on the ImageNet dataset. RESULTS After successful training and validation of the models we developed, ResNet-152 with image augmentation proved to be the best model for the automatic detection of cardiothoracic disease. However, one of the main problems associated with radiographic deep learning projects and research is the scarcity of sufficiently large datasets, which are a key component of all deep learning models because they require a lot of data for training. This is why some of our models used image augmentation to increase the number of images without duplication. As more data are collected in the field of chest radiology, the models could be retrained to improve their accuracy, as deep learning models improve with more data. CONCLUSIONS This research employs the advantages of computer vision and medical image analysis to develop an automated model that has the clinical potential for early detection of the disease. Using deep learning models, the research aims to evaluate the effectiveness and accuracy of different convolutional neural network models in the automatic diagnosis of cardiothoracic diseases from x-ray images compared to diagnosis by experts in the medical community.
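A hedged sketch of the transfer-learning step described in the METHODS section is given below: an ImageNet-pretrained ResNet-152 (loaded via torchvision, version 0.13 or later) with its final layer replaced for a set of cardiothoracic disease classes. The class count, input pipeline, and single training step are illustrative assumptions, and the GAN-based augmentation is not reproduced here.

```python
# Hedged sketch: fine-tune an ImageNet-pretrained ResNet-152 for multi-class
# cardiothoracic disease classification. Class count and data are assumptions.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 5  # illustrative number of chest-disease classes

# Downloads ImageNet weights on first use (torchvision >= 0.13 weights API).
model = models.resnet152(weights=models.ResNet152_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)  # new classification head

# Freeze the pretrained backbone and train only the new head first.
for name, param in model.named_parameters():
    param.requires_grad = name.startswith("fc")

x = torch.randn(4, 3, 224, 224)              # stand-in batch of CXR images
y = torch.randint(0, NUM_CLASSES, (4,))      # stand-in disease labels
optimizer = torch.optim.Adam(
    filter(lambda p: p.requires_grad, model.parameters()), lr=1e-4)

loss = nn.CrossEntropyLoss()(model(x), y)    # one illustrative training step
loss.backward()
optimizer.step()
print(loss.item())
```

In practice, the frozen-backbone phase would be followed by unfreezing and fine-tuning at a lower learning rate, with GAN-generated CXRs mixed into the training set as the abstract describes.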


2020 ◽  
Vol 64 (2) ◽  
pp. 20508-1-20508-12 ◽  
Author(s):  
Getao Du ◽  
Xu Cao ◽  
Jimin Liang ◽  
Xueli Chen ◽  
Yonghua Zhan

Abstract Medical image analysis extracts information from images obtained by medical imaging systems in order to solve clinical problems; the purpose is to extract effective information and improve the level of clinical diagnosis. In recent years, automatic segmentation based on deep learning (DL) methods has been widely used, where a neural network automatically learns image features, in sharp contrast with the traditional method of manually designed features. U-net is one of the most important semantic segmentation frameworks based on a convolutional neural network (CNN). It is widely used in the medical image analysis domain for lesion segmentation, anatomical segmentation, and classification. The advantage of this network framework is that it can not only accurately segment the desired target and effectively process and objectively evaluate medical images, but also help to improve the accuracy of diagnosis from medical images. Therefore, this article presents a literature review of medical image segmentation based on U-net, focusing on the successful segmentation experience of U-net for different lesion regions in six medical imaging systems. Along with the latest advances in DL, this article also introduces methods that combine the original U-net architecture with other deep learning techniques, as well as methods for improving the U-net network.
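To make the U-net idea concrete, the following minimal sketch shows a two-level encoder-decoder with a single skip connection in PyTorch; the original U-net uses more levels, more channels, and unpadded convolutions, so this is an illustration rather than the reviewed architecture.

```python
# A minimal U-Net-style encoder-decoder sketch (two resolution levels only),
# illustrating the skip connection that characterizes the architecture.
import torch
import torch.nn as nn

def double_conv(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self, in_ch=1, out_ch=1):
        super().__init__()
        self.enc1 = double_conv(in_ch, 16)
        self.enc2 = double_conv(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.up   = nn.ConvTranspose2d(32, 16, kernel_size=2, stride=2)
        self.dec1 = double_conv(32, 16)        # 16 upsampled + 16 skip channels
        self.head = nn.Conv2d(16, out_ch, 1)   # per-pixel segmentation logits

    def forward(self, x):
        e1 = self.enc1(x)                      # contracting path
        e2 = self.enc2(self.pool(e1))          # bottleneck features
        d1 = self.up(e2)                       # expanding path
        d1 = self.dec1(torch.cat([d1, e1], dim=1))   # skip connection
        return self.head(d1)

mask_logits = TinyUNet()(torch.randn(1, 1, 128, 128))
print(mask_logits.shape)                       # torch.Size([1, 1, 128, 128])
```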

