Semi-supervised learning for medical image classification using imbalanced training data

Author(s):  
Tri Huynh ◽  
Aiden Nibali ◽  
Zhen He
2021 ◽  
pp. 469-479
Author(s):  
Yangwen Hu ◽  
Zhehao Zhong ◽  
Ruixuan Wang ◽  
Hongmei Liu ◽  
Zhijun Tan ◽  
...  

2017 ◽  
Vol 8 (1) ◽  
pp. 18-30 ◽  
Author(s):  
Monali Y. Khachane

Computer-Aided Detection/Diagnosis (CAD) through artificial intelligence is an emerging area in medical image processing and health care that aims to make expert systems more and more intelligent. The aim of this paper is to analyze the performance of different feature extraction techniques for the medical image classification problem. Efforts are made to classify brain MRI and knee MRI images. Gray Level Co-occurrence Matrix (GLCM) based texture features, DWT and DCT transform features, and invariant moments are used to classify the data. Experimental results show that the proposed system produces good results even though the training data is smaller than the testing data. A Support Vector Machine classifier with a linear kernel produced the highest accuracy of 100% when used with the texture features.
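As a concrete illustration of the pipeline described above, the following sketch extracts GLCM texture features with scikit-image and classifies them with a linear-kernel SVM from scikit-learn. It is a minimal reconstruction rather than the paper's exact implementation; the synthetic stand-in data, the GLCM distances/angles, and the chosen texture properties are assumptions for illustration only.

```python
# Minimal sketch: GLCM texture features + linear-kernel SVM.
# Synthetic stand-in data is used here; replace with real brain/knee MRI slices.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

def glcm_features(image_u8):
    """Compute a small GLCM texture descriptor for one grayscale uint8 image."""
    glcm = graycomatrix(image_u8, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ("contrast", "homogeneity", "energy", "correlation")
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

rng = np.random.default_rng(0)
images = rng.integers(0, 256, size=(40, 64, 64), dtype=np.uint8)  # 40 fake scans
labels = np.repeat([0, 1], 20)                                    # two classes

X = np.array([glcm_features(im) for im in images])
X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.5, stratify=labels, random_state=0)

clf = SVC(kernel="linear")            # linear-kernel SVM, as reported in the paper
clf.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```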


Sensors ◽  
2021 ◽  
Vol 21 (3) ◽  
pp. 764
Author(s):  
Zhiwen Huang ◽  
Quan Zhou ◽  
Xingxing Zhu ◽  
Xuming Zhang

In many medical image classification tasks, there is insufficient image data for deep convolutional neural networks (CNNs) to overcome the over-fitting problem. Lightweight CNNs are easy to train, but they usually have relatively poor classification performance. To improve the classification ability of lightweight CNN models, we propose a novel batch similarity-based triplet loss to guide the CNNs in learning their weights. The proposed loss utilizes the similarity among multiple samples in the input batch to evaluate the distribution of the training data. Reducing the proposed loss increases the similarity among images of the same category and reduces the similarity among images of different categories. Moreover, it can be easily integrated into regular CNNs. To evaluate the performance of the proposed loss, experiments were conducted on chest X-ray images and skin rash images, comparing it with several other losses on popular lightweight CNN models such as EfficientNet, MobileNet, ShuffleNet and PeleeNet. The results demonstrate the applicability and effectiveness of our method in terms of classification accuracy, sensitivity and specificity.
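The abstract does not give the exact loss formulation, so the sketch below shows one plausible PyTorch implementation of a batch-wise similarity loss in the same spirit: it builds a pairwise cosine-similarity matrix over the batch, then applies a hinge so that, for each anchor, negatives end up at least a margin less similar than positives. The cosine-similarity choice, the margin value, and the combination with cross-entropy are assumptions, not the authors' exact method.

```python
import torch
import torch.nn.functional as F

def batch_similarity_triplet_loss(embeddings, labels, margin=0.3):
    """Batch-wise similarity loss (a sketch, not the paper's exact formulation).

    Pulls same-class embeddings together and pushes different-class embeddings
    apart using every (anchor, positive, negative) combination in the batch.
    """
    emb = F.normalize(embeddings, dim=1)            # unit norm -> dot product = cosine similarity
    sim = emb @ emb.t()                             # (B, B) pairwise similarities
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    self_mask = torch.eye(len(labels), dtype=torch.bool, device=labels.device)
    pos_mask = same & ~self_mask                    # same class, excluding the sample itself
    neg_mask = ~same                                # different class

    # For each anchor, compare every positive similarity with every negative similarity
    # and apply a hinge: negatives must be at least `margin` less similar than positives.
    pos_sim = sim.unsqueeze(2)                      # (B, B, 1): anchor-positive
    neg_sim = sim.unsqueeze(1)                      # (B, 1, B): anchor-negative
    valid = pos_mask.unsqueeze(2) & neg_mask.unsqueeze(1)
    hinge = F.relu(neg_sim - pos_sim + margin)[valid]
    return hinge.mean() if hinge.numel() > 0 else embeddings.new_zeros(())

# Assumed usage: add the term to the usual classification loss with a weighting factor, e.g.
#   loss = F.cross_entropy(logits, labels) + 0.5 * batch_similarity_triplet_loss(features, labels)
```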


Diagnostics ◽  
2021 ◽  
Vol 11 (8) ◽  
pp. 1384
Author(s):  
Yin Dai ◽  
Yifan Gao ◽  
Fayu Liu

Over the past decade, convolutional neural networks (CNNs) have shown very competitive performance in medical image analysis tasks such as disease classification, tumor segmentation, and lesion detection. CNNs have great advantages in extracting local features of images. However, due to the locality of the convolution operation, they cannot handle long-range relationships well. Recently, transformers have been applied to computer vision and have achieved remarkable success on large-scale datasets. Compared with natural images, multi-modal medical images have explicit and important long-range dependencies, and effective multi-modal fusion strategies can greatly improve the performance of deep models. This prompts us to study transformer-based structures and apply them to multi-modal medical images. Existing transformer-based network architectures require large-scale datasets to achieve good performance. However, medical imaging datasets are relatively small, which makes it difficult to apply pure transformers to medical image analysis. Therefore, we propose TransMed for multi-modal medical image classification. TransMed combines the advantages of CNNs and transformers to efficiently extract low-level image features and establish long-range dependencies between modalities. We evaluated our model on two datasets, parotid gland tumor classification and knee injury classification. Combining our contributions, we achieve improvements of 10.1% and 1.9% in average accuracy, respectively, outperforming other state-of-the-art CNN-based models. The results of the proposed method are promising and have great potential to be applied to a large number of medical image analysis tasks. To the best of our knowledge, this is the first work to apply transformers to multi-modal medical image classification.
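The abstract does not fully specify TransMed's architecture, so the sketch below is a generic CNN-plus-transformer hybrid in PyTorch that follows the described idea: a shared ResNet backbone extracts low-level feature maps from each modality, the resulting tokens (tagged with a learnable modality embedding) are concatenated, and a transformer encoder models long-range, cross-modal dependencies before a [CLS] token is classified. The backbone choice, token dimension, layer counts, and shared-backbone design are assumptions, not the authors' exact network.

```python
# Generic CNN + transformer hybrid for multi-modal classification (a sketch, not TransMed itself).
import torch
import torch.nn as nn
from torchvision.models import resnet18

class HybridCNNTransformer(nn.Module):
    def __init__(self, num_modalities=2, num_classes=3, dim=256, depth=4, heads=8):
        super().__init__()
        backbone = resnet18(weights=None)
        self.cnn = nn.Sequential(*list(backbone.children())[:-2])   # -> (B, 512, H/32, W/32)
        self.proj = nn.Conv2d(512, dim, kernel_size=1)              # project features to token dim
        self.modality_embed = nn.Parameter(torch.zeros(num_modalities, 1, dim))
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=depth)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, images):
        """images: list with one (B, 3, H, W) tensor per modality (e.g. two MRI sequences)."""
        tokens = []
        for m, x in enumerate(images):
            feat = self.proj(self.cnn(x))                 # (B, dim, H', W')
            tok = feat.flatten(2).transpose(1, 2)         # (B, H'*W', dim)
            tokens.append(tok + self.modality_embed[m])   # tag tokens with their modality
        b = tokens[0].shape[0]
        seq = torch.cat([self.cls_token.expand(b, -1, -1)] + tokens, dim=1)
        out = self.transformer(seq)                       # cross-modal self-attention
        return self.head(out[:, 0])                       # classify from the [CLS] token

# Usage: logits = HybridCNNTransformer()([modality_a, modality_b]) with two (B, 3, 224, 224) tensors.
```

In this sketch both modalities share one backbone; modality-specific backbones or finer patch-level fusion are natural variations closer to what the paper describes.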

