Transfer Learning for Classification of 2D Brain MRI Images and Tumor Segmentation

2020 ◽  
Vol 8 (6) ◽  
pp. 2016-2019

The focus of this paper is to classify brain MRI images as tumorous or non-tumorous and then locate the tumor. Among medical imaging applications, segmentation of brain tumors is an important and arduous task: the acquired data are disrupted by artifacts and very short acquisition times, so classifying images and finding the exact location of the tumor is one of the most important jobs. The paper uses deep learning, specifically a convolutional neural network, to demonstrate its potential for the image classification task. Because learning from the available dataset alone would be limited, a transfer learning approach [4] is used; it is a developing AI strategy that achieves the best outcomes on several image classification assignments because pre-trained models have already gained good knowledge about features by training on a large number of images. Since medical image datasets are hard to collect, transfer learning with AlexNet [1] is used. After successful classification, the aim is to find the exact location of the tumor, which is achieved using basic image processing inspired by the well-known Mask R-CNN technique [9].

2020 ◽  
Vol 10 (10) ◽  
pp. 3359 ◽  
Author(s):  
Ibrahem Kandel ◽  
Mauro Castelli

Accurate classification of medical images is of great importance for correct disease diagnosis. The automation of medical image classification is of great necessity because it can provide a second opinion, or even a better classification, in case of a shortage of experienced medical staff. Convolutional neural networks (CNN) were introduced to improve the image classification domain by eliminating the need to manually select which features to use to classify images. Training a CNN from scratch requires very large annotated datasets, which are scarce in the medical field. Transfer learning of CNN weights from another large non-medical dataset can help overcome the problem of medical image scarcity. Transfer learning consists of fine-tuning CNN layers to suit the new dataset. The main questions when using transfer learning are how deeply to fine-tune the network and what difference this makes in generalization. In this paper, all of the experiments were done on two histopathology datasets using three state-of-the-art architectures to systematically study the effect of block-wise fine-tuning of CNN. Results show that fine-tuning the entire network is not always the best option, especially for shallow networks; fine-tuning only the top blocks can save both time and computational power and produce more robust classifiers.
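The block-wise fine-tuning question the abstract studies, i.e. how many of the top blocks to unfreeze, can be expressed as a small helper; the toy three-block CNN below is a hypothetical stand-in for the paper's architectures, used only to show the mechanics.

```python
import torch.nn as nn

def finetune_top_blocks(blocks, n_top):
    """Freeze all but the last n_top blocks (block-wise fine-tuning)."""
    for i, block in enumerate(blocks):
        trainable = i >= len(blocks) - n_top
        for p in block.parameters():
            p.requires_grad = trainable

# Toy stand-in for a CNN organized into convolutional blocks.
blocks = nn.ModuleList([
    nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU()),
    nn.Sequential(nn.Conv2d(8, 16, 3), nn.ReLU()),
    nn.Sequential(nn.Conv2d(16, 32, 3), nn.ReLU()),
])
finetune_top_blocks(blocks, n_top=1)  # fine-tune only the deepest block
```

Sweeping `n_top` from one block up to the full depth is the systematic experiment the abstract describes: shallower sweeps train fewer parameters and finish faster.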


Author(s):  
Sweety Maniar ◽  
Jagdish S. Shah

Medical image classification and retrieval systems have found extensive use in classifying images according to imaging modality, body part, and disease. One of the major challenges in medical classification is the large image size, which leads to a large number of extracted features and burdens both the classification algorithm and computational resources. In this paper, a novel approach for automatic classification of fundus images is proposed. The method uses image and data pre-processing techniques to improve the performance of machine learning classifiers. Several predominant image mining algorithms are employed, such as Classification and Regression Trees (CART), neural networks, Naive Bayes (NB), decision trees (DT), and K-Nearest Neighbor (KNN). The performance of MCBIR systems using texture and shape features is evaluated. The possible outcomes of a two-class prediction can be represented as true positive (TP), true negative (TN), false positive (FP), and false negative (FN).
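The TP/TN/FP/FN tally the abstract ends with is the basis of standard evaluation metrics; a minimal pure-Python sketch (not the paper's code):

```python
def confusion_counts(y_true, y_pred, positive=1):
    """Tally the four outcomes of a two-class prediction."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    tn = sum(t != positive and p != positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    return tp, tn, fp, fn

y_true = [1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 1, 0]
tp, tn, fp, fn = confusion_counts(y_true, y_pred)
accuracy = (tp + tn) / (tp + tn + fp + fn)
```

From these four counts, accuracy, sensitivity (TP/(TP+FN)), and specificity (TN/(TN+FP)) follow directly.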


2019 ◽  
Vol 6 (1) ◽  
Author(s):  
Samir S. Yadav ◽  
Shivajirao M. Jadhav

Medical image classification plays an essential role in clinical treatment and teaching tasks. However, traditional methods have reached their ceiling on performance. Moreover, using them requires much time and effort to be spent on extracting and selecting classification features. The deep neural network is an emerging machine learning method that has proven its potential for different classification tasks. Notably, the convolutional neural network dominates with the best results on varying image classification tasks. However, medical image datasets are hard to collect because labeling them requires a great deal of professional expertise. Therefore, this paper researches how to apply a convolutional neural network (CNN) based algorithm to a chest X-ray dataset to classify pneumonia. Three techniques are evaluated through experiments: a linear support vector machine classifier with local rotation- and orientation-free features, transfer learning on two convolutional neural network models (Visual Geometry Group, i.e., VGG16, and InceptionV3), and a capsule network trained from scratch. Data augmentation is a data preprocessing method applied to all three methods. The results of the experiments show that data augmentation is generally an effective way for all three algorithms to improve performance. Also, transfer learning is a more useful classification method on a small dataset compared to a support vector machine with oriented FAST and rotated BRIEF (ORB) features or a capsule network. In transfer learning, retraining specific features on the new target dataset is essential to improve performance, and the second important factor is a proper network complexity that matches the scale of the dataset.


2021 ◽  
Vol 11 (3) ◽  
pp. 352
Author(s):  
Isselmou Abd El Kader ◽  
Guizhi Xu ◽  
Zhang Shuai ◽  
Sani Saminu ◽  
Imran Javaid ◽  
...  

The classification of brain tumors is a difficult task in the field of medical image analysis. Improving algorithms and machine learning technology helps radiologists diagnose tumors without surgical intervention. In recent years, deep learning techniques have made excellent progress in the field of medical image processing and analysis. However, there are many difficulties in classifying brain tumors using magnetic resonance imaging: first, the complexity of the brain's structure and the intertwining of its tissues; and second, the difficulty of classifying tumors due to the high-density nature of the brain. We propose a differential deep convolutional neural network model (differential deep-CNN) to classify different types of brain tumor, including abnormal and normal magnetic resonance (MR) images. Using differential operators in the differential deep-CNN architecture, we derive additional differential feature maps from the original CNN feature maps. This derivation improves the performance of the proposed approach, as reflected in the evaluation parameters used. The advantage of the differential deep-CNN model is its analysis of the directional pixel patterns of images using contrast calculations and its high ability to classify a large database of images with high accuracy and without technical problems. Therefore, the proposed approach gives an excellent overall performance. To train and test the model, we used a dataset consisting of 25,000 brain magnetic resonance imaging (MRI) images, including abnormal and normal images. The experimental results showed that the proposed model achieved an accuracy of 99.25%. This study demonstrates that the proposed differential deep-CNN model can be used to facilitate the automatic classification of brain tumors.
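The core idea of deriving differential feature maps from the original CNN feature maps can be sketched with fixed derivative (Sobel) kernels applied depthwise; the paper's exact differential operators may differ, so treat this as an illustrative assumption.

```python
import torch
import torch.nn.functional as F

def differential_maps(feats):
    """Append horizontal/vertical derivative maps to CNN feature maps.

    Sketch of the differential deep-CNN idea: each channel is convolved
    with fixed Sobel kernels (a common directional-contrast operator),
    and the derivative maps are concatenated onto the originals.
    """
    sobel_x = torch.tensor([[-1., 0., 1.],
                            [-2., 0., 2.],
                            [-1., 0., 1.]])
    sobel_y = sobel_x.t()
    c = feats.shape[1]
    kx = sobel_x.view(1, 1, 3, 3).repeat(c, 1, 1, 1)
    ky = sobel_y.view(1, 1, 3, 3).repeat(c, 1, 1, 1)
    dx = F.conv2d(feats, kx, padding=1, groups=c)  # depthwise x-derivative
    dy = F.conv2d(feats, ky, padding=1, groups=c)  # depthwise y-derivative
    return torch.cat([feats, dx, dy], dim=1)

feats = torch.randn(1, 8, 16, 16)
out = differential_maps(feats)
```

The concatenated maps triple the channel count, giving subsequent layers explicit directional-contrast information in addition to the learned features.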


Diagnostics ◽  
2021 ◽  
Vol 11 (8) ◽  
pp. 1384
Author(s):  
Yin Dai ◽  
Yifan Gao ◽  
Fayu Liu

Over the past decade, convolutional neural networks (CNN) have shown very competitive performance in medical image analysis tasks such as disease classification, tumor segmentation, and lesion detection. CNNs have great advantages in extracting local features of images; however, due to the locality of the convolution operation, they cannot handle long-range relationships well. Recently, transformers have been applied to computer vision and achieved remarkable success on large-scale datasets. Compared with natural images, multi-modal medical images have explicit and important long-range dependencies, and effective multi-modal fusion strategies can greatly improve the performance of deep models. This prompts us to study transformer-based structures and apply them to multi-modal medical images. Existing transformer-based network architectures require large-scale datasets to achieve better performance. However, medical imaging datasets are relatively small, which makes it difficult to apply pure transformers to medical image analysis. Therefore, we propose TransMed for multi-modal medical image classification. TransMed combines the advantages of CNN and transformer to efficiently extract low-level features of images and establish long-range dependencies between modalities. We evaluated our model on two datasets, parotid gland tumor classification and knee injury classification. Combining our contributions, we achieve improvements of 10.1% and 1.9% in average accuracy, respectively, outperforming other state-of-the-art CNN-based models. The results of the proposed method are promising and have tremendous potential to be applied to a large number of medical image analysis tasks. To the best of our knowledge, this is the first work to apply transformers to multi-modal medical image classification.
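The hybrid design the abstract describes, a CNN stem for low-level local features feeding a transformer encoder for long-range dependencies, can be sketched as a tiny model; the layer sizes here are hypothetical and not TransMed's actual architecture.

```python
import torch
import torch.nn as nn

class TinyHybrid(nn.Module):
    """Minimal CNN-stem + transformer-encoder classifier (illustrative only)."""

    def __init__(self, in_ch=3, dim=32, n_classes=2):
        super().__init__()
        # CNN stem: extracts local low-level features and downsamples 4x.
        self.stem = nn.Sequential(
            nn.Conv2d(in_ch, dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(dim, dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4,
                                           batch_first=True)
        # Self-attention models long-range relations between patch tokens.
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, n_classes)

    def forward(self, x):
        f = self.stem(x)                       # (B, dim, H/4, W/4)
        tokens = f.flatten(2).transpose(1, 2)  # (B, N, dim) patch tokens
        z = self.encoder(tokens)               # global self-attention
        return self.head(z.mean(dim=1))        # mean-pool tokens, classify

model = TinyHybrid()
logits = model(torch.randn(2, 3, 32, 32))
```

For multi-modal inputs, tokens from each modality would be concatenated before the encoder so attention can fuse information across modalities; here a single modality keeps the sketch compact.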


2021 ◽  
Author(s):  
Wenjie Cao ◽  
Cheng Zhang ◽  
Zhenzhen Xiong ◽  
Ting Wang ◽  
Junchao Chen ◽  
...  
