Texture-based medical image classification of computed tomography images using MRCSF

Author(s): R.S. Sabeenian, V. Palanisamy


2020, Vol 10 (10), pp. 3359
Author(s): Ibrahem Kandel, Mauro Castelli

Accurate classification of medical images is of great importance for correct disease diagnosis. Automating medical image classification is highly valuable because it can provide a second opinion, or even a better classification, when experienced medical staff are in short supply. Convolutional neural networks (CNNs) improved the image classification domain by eliminating the need to manually select which features to use to classify images. Training a CNN from scratch, however, requires very large annotated datasets, which are scarce in the medical field. Transfer learning of CNN weights from another large, non-medical dataset can help overcome this scarcity. Transfer learning consists of fine-tuning CNN layers to suit the new dataset. The main questions when using transfer learning are how deeply to fine-tune the network and what difference this makes to generalization. In this paper, all experiments were performed on two histopathology datasets using three state-of-the-art architectures to systematically study the effect of block-wise fine-tuning of CNNs. Results show that fine-tuning the entire network is not always the best option, especially for shallow networks; fine-tuning only the top blocks can instead save both time and computational power while producing more robust classifiers.
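
As an illustration of the block-wise fine-tuning discussed above, the sketch below freezes the lower residual blocks of an ImageNet-pretrained ResNet-50 and trains only the top blocks plus a new classification head. It is a minimal PyTorch/torchvision sketch, not the authors' code; the two-class head and the choice of which blocks to unfreeze are assumptions made for illustration.

```python
import torch
import torch.nn as nn
from torchvision import models

# Assumed two-class histopathology task (e.g. benign vs. malignant patches).
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, 2)  # replace the ImageNet head

# Block-wise fine-tuning: freeze the lower blocks, train only the top ones.
# 'layer3', 'layer4', and 'fc' are the top blocks of torchvision's ResNet-50.
for name, param in model.named_parameters():
    param.requires_grad = name.startswith(("layer3", "layer4", "fc"))

# Only the unfrozen parameters are handed to the optimizer.
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)
```

Unfreezing fewer or more blocks moves along the trade-off the abstract describes: fewer trainable blocks mean less training time and compute, at the possible cost of accuracy on datasets that differ strongly from ImageNet.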


2018, Vol 14 (2)
Author(s): Sunil Sharma, Saumil Maheshwari, Anupam Shukla

Deep convolutional neural networks (CNNs) have demonstrated their capabilities in modern-day medical image classification and analysis. The vital edge of deep CNNs over other techniques is that they can be trained without expert knowledge. Timely detection is very beneficial for the early treatment of disease. In this paper, a deep CNN architecture is proposed to classify fundus eye images as diabetic retinopathy or non-diabetic retinopathy. The Kaggle 2015 diabetic retinopathy competition dataset and the Messidor dataset are used in this study. The proposed deep CNN algorithm produces significant results, with a 93% area under the curve (AUC) for the Kaggle dataset and a 91% AUC for the Messidor dataset. The sensitivity and specificity for the Kaggle dataset are 90.22% and 85.13%, respectively; the corresponding values for the Messidor dataset are 91.07% and 80.23%. These results outperform many existing studies. The proposed architecture is a promising tool for diabetic retinopathy image classification.
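
The abstract above reports its results as AUC, sensitivity, and specificity; the snippet below shows how these three metrics are commonly computed from a classifier's predicted probabilities using scikit-learn. The arrays are placeholder values for illustration, not data from the paper.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

# y_true: ground-truth labels (1 = diabetic retinopathy, 0 = non-DR);
# y_prob: the network's predicted probability for the DR class.
y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])
y_prob = np.array([0.1, 0.4, 0.8, 0.7, 0.9, 0.3, 0.6, 0.2])

auc = roc_auc_score(y_true, y_prob)                    # area under the ROC curve
y_pred = (y_prob >= 0.5).astype(int)                   # threshold probabilities at 0.5
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)                           # true positive rate
specificity = tn / (tn + fp)                           # true negative rate
print(f"AUC={auc:.2f}  sensitivity={sensitivity:.2%}  specificity={specificity:.2%}")
```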


Diagnostics, 2021, Vol 11 (5), pp. 893
Author(s): Yazan Qiblawey, Anas Tahir, Muhammad E. H. Chowdhury, Amith Khandakar, Serkan Kiranyaz, ...

Detecting COVID-19 at an early stage is essential to reduce the mortality risk of patients. In this study, a cascaded system is proposed to segment the lung and to detect, localize, and quantify COVID-19 infections from computed tomography (CT) images. An extensive set of experiments was performed using Encoder–Decoder Convolutional Neural Networks (ED-CNNs), namely U-Net and the Feature Pyramid Network (FPN), with different backbone (encoder) structures based on variants of DenseNet and ResNet. The experiments on lung region segmentation showed a Dice Similarity Coefficient (DSC) of 97.19% and an Intersection over Union (IoU) of 95.10% using the U-Net model with the DenseNet-161 encoder. Furthermore, the proposed system achieved strong performance for COVID-19 infection segmentation, with a DSC of 94.13% and an IoU of 91.85% using the FPN with the DenseNet-201 encoder. The proposed system can reliably localize infections of various shapes and sizes, especially small infection regions, which are rarely considered in recent studies. Moreover, the proposed system achieved high COVID-19 detection performance, with 99.64% sensitivity and 98.72% specificity. Finally, the system was able to discriminate between different severity levels of COVID-19 infection over a dataset of 1110 subjects, with sensitivity values of 98.3%, 71.2%, 77.8%, and 100% for mild, moderate, severe, and critical cases, respectively.
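
One way to assemble the encoder–decoder pairings mentioned above (U-Net with a DenseNet-161 encoder, FPN with a DenseNet-201 encoder) is the third-party segmentation_models_pytorch library, combined with a straightforward Dice/IoU computation on binary masks. This is a hedged sketch of the general setup rather than the authors' pipeline; the single-channel CT input, 256×256 slice size, and 0.5 threshold are assumptions.

```python
import torch
import segmentation_models_pytorch as smp  # third-party encoder-decoder library

# Pairings analogous to those reported in the abstract.
lung_model = smp.Unet(encoder_name="densenet161", encoder_weights="imagenet",
                      in_channels=1, classes=1)      # lung segmentation
infection_model = smp.FPN(encoder_name="densenet201", encoder_weights="imagenet",
                          in_channels=1, classes=1)  # infection segmentation

def dice_and_iou(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-7):
    """Dice Similarity Coefficient and Intersection over Union for binary masks."""
    pred, target = pred.float(), target.float()
    inter = (pred * target).sum()
    total = pred.sum() + target.sum()
    dice = (2 * inter + eps) / (total + eps)
    iou = (inter + eps) / (total - inter + eps)
    return dice.item(), iou.item()

# Placeholder demonstration on a random slice and a random ground-truth mask.
lung_model.eval()
with torch.no_grad():
    pred = (torch.sigmoid(lung_model(torch.randn(1, 1, 256, 256))) > 0.5).long()
dsc, iou = dice_and_iou(pred, torch.randint(0, 2, pred.shape))
```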


Diagnostics, 2021, Vol 11 (8), pp. 1384
Author(s): Yin Dai, Yifan Gao, Fayu Liu

Over the past decade, convolutional neural networks (CNNs) have shown very competitive performance in medical image analysis tasks such as disease classification, tumor segmentation, and lesion detection. CNNs have great advantages in extracting local features of images; however, due to the locality of the convolution operation, they cannot handle long-range relationships well. Recently, transformers have been applied to computer vision and have achieved remarkable success on large-scale datasets. Compared with natural images, multi-modal medical images have explicit and important long-range dependencies, and effective multi-modal fusion strategies can greatly improve the performance of deep models. This prompts us to study transformer-based structures and apply them to multi-modal medical images. Existing transformer-based network architectures require large-scale datasets to achieve good performance, but medical imaging datasets are relatively small, which makes it difficult to apply pure transformers to medical image analysis. Therefore, we propose TransMed for multi-modal medical image classification. TransMed combines the advantages of CNNs and transformers to efficiently extract low-level features of images and establish long-range dependencies between modalities. We evaluated our model on two datasets: parotid gland tumor classification and knee injury classification. Combining our contributions, we achieve improvements of 10.1% and 1.9% in average accuracy, respectively, outperforming other state-of-the-art CNN-based models. The results of the proposed method are promising, and it has tremendous potential to be applied to a large number of medical image analysis tasks. To the best of our knowledge, this is the first work to apply transformers to multi-modal medical image classification.
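
The sketch below illustrates the general CNN-plus-transformer pattern the abstract describes: a shared CNN backbone extracts low-level feature maps from each modality, the resulting patch tokens are concatenated, and a transformer encoder models long-range, cross-modal dependencies before classification. It is an illustrative PyTorch approximation under assumed hyperparameters (ResNet-18 backbone, four encoder layers, mean-pooled tokens), not the published TransMed implementation.

```python
import torch
import torch.nn as nn
from torchvision import models

class CnnTransformerClassifier(nn.Module):
    """Illustrative CNN-then-transformer classifier for multi-modal images."""

    def __init__(self, num_classes: int, d_model: int = 256):
        super().__init__()
        backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
        self.cnn = nn.Sequential(*list(backbone.children())[:-2])  # drop pool + fc
        self.proj = nn.Conv2d(512, d_model, kernel_size=1)         # 512 = ResNet-18 channels
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=4)
        self.head = nn.Linear(d_model, num_classes)

    def forward(self, modalities):                 # list of tensors, each (B, 3, H, W)
        tokens = []
        for x in modalities:
            f = self.proj(self.cnn(x))             # (B, d_model, H/32, W/32)
            tokens.append(f.flatten(2).transpose(1, 2))  # (B, N, d_model) patch tokens
        seq = torch.cat(tokens, dim=1)             # concatenate tokens across modalities
        seq = self.transformer(seq)                # attention spans all modalities
        return self.head(seq.mean(dim=1))          # mean-pool tokens, then classify

# Example: two image modalities of a hypothetical two-class problem.
model = CnnTransformerClassifier(num_classes=2)
logits = model([torch.randn(1, 3, 224, 224), torch.randn(1, 3, 224, 224)])
```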

