APPLICATION OF ENTROPIES FOR AUTOMATED DIAGNOSIS OF ABNORMALITIES IN ULTRASOUND IMAGES: A REVIEW

2017 ◽  
Vol 17 (07) ◽  
pp. 1740012 ◽  
Author(s):  
YUKI HAGIWARA ◽  
VIDYA K SUDARSHAN ◽  
SOOK SAM LEONG ◽  
ANUSHYA VIJAYNANTHAN ◽  
KWAN HOONG NG

Automating the diagnosis process in medical imaging using computer-aided techniques is a leading topic of research. Among the many computer-aided methods, nonlinear entropies are widely applied in the development of automated algorithms to diagnose abnormalities present in medical images. Using entropy features in the development of Computer-Aided Diagnosis (CAD) systems may enhance their accuracy. Entropy features capture the nonlinearity of images and thereby the complexity present in them. Various types of entropies have been employed in medical image analysis for the automated diagnosis of abnormalities. This paper focuses on the diverse types of entropies employed in the development of CAD systems for diagnosing abnormalities in medical images. Beyond diagnosis, these entropies can also be used to grade images by the severity of the abnormalities and for other biomedical applications.
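To illustrate the kind of feature such CAD systems rely on, the simplest example is Shannon entropy computed from a grayscale intensity histogram; a uniform region scores zero, while complex or noisy texture scores high. A minimal NumPy sketch (illustrative only, not code from the reviewed paper):

```python
import numpy as np

def shannon_entropy(image: np.ndarray, bins: int = 256) -> float:
    """Shannon entropy (in bits) of a grayscale image's intensity histogram."""
    hist, _ = np.histogram(image, bins=bins, range=(0, bins))
    p = hist / hist.sum()   # normalize counts to a probability distribution
    p = p[p > 0]            # drop empty bins (0 * log 0 is taken as 0)
    return float(-np.sum(p * np.log2(p)))

# A flat image has zero entropy; random noise approaches log2(bins) = 8 bits.
flat = np.full((64, 64), 128, dtype=np.uint8)
noisy = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
```

The review covers many more elaborate entropy variants; this sketch only shows the common underlying idea of treating the image as a probability distribution.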

2019 ◽  
Vol 2019 ◽  
pp. 1-14 ◽  
Author(s):  
Guangyuan Zheng ◽  
Guanghui Han ◽  
Nouman Q. Soomro ◽  
Linjuan Ma ◽  
Fuquan Zhang ◽  
...  

Purpose. Computer-aided diagnosis (CAD) can help improve diagnostic accuracy; however, the main problem currently facing CAD is obtaining sufficient labeled samples. To solve this problem, in this study, we adopt a generative adversarial network (GAN) approach and design a semisupervised learning algorithm, named G2C-CAD. Methods. From the National Cancer Institute (NCI) Lung Image Database Consortium (LIDC) dataset, we extracted four types of pulmonary nodule sign images closely related to lung cancer: noncentral calcification, lobulation, spiculation, and nonsolid/ground-glass opacity (GGO) texture, obtaining a total of 3,196 samples. In addition, we randomly selected 2,000 non-lesion image blocks as negative samples. We split the data 90% for training and 10% for testing. We designed a DCGAN generative adversarial framework and trained it on the small sample set. We also trained our CNN-based fuzzy Co-forest on the labeled small sample set to obtain a preliminary classifier. Then, using the simulated unlabeled samples generated by the trained DCGAN, we conducted iterative semisupervised learning, which continually improved the classification performance of the fuzzy Co-forest until the termination condition was reached. Finally, we tested the fuzzy Co-forest and compared its performance with that of a C4.5 random decision forest and the G2C-CAD system without the fuzzy scheme, using ROC curves and confusion matrices for evaluation. Results. Four different types of lung cancer-related signs were used in the classification experiment: noncentral calcification, lobulation, spiculation, and nonsolid/GGO texture, along with negative image samples. For these five classes, the G2C-CAD system obtained AUCs of 0.946, 0.912, 0.908, 0.887, and 0.939, respectively. The average accuracy of G2C-CAD exceeded that of the C4.5 random decision forest by 14%.
G2C-CAD also obtained promising test results on the LISS signs dataset; its AUCs for GGO, lobulation, spiculation, pleural indentation, and negative image samples were 0.972, 0.964, 0.941, 0.967, and 0.953, respectively. Conclusion. The experimental results show that G2C-CAD is an appropriate method for addressing the problem of insufficient labeled samples in medical image analysis. Moreover, our system can be used to establish a training sample library for CAD classification diagnosis, which is important for future medical image analysis.
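The iterative semisupervised scheme described above can be sketched independently of the DCGAN and Co-forest specifics: fit a classifier on the small labeled set, pseudo-label only the confidently classified unlabeled samples, absorb them, and refit. Below is a toy self-labeling loop in which a nearest-centroid classifier stands in for the fuzzy Co-forest and synthetic Gaussian draws stand in for DCGAN-generated samples (all names and thresholds are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n):
    """Toy two-class feature vectors standing in for real/simulated nodule patches."""
    y = rng.integers(0, 2, n)
    X = rng.normal(loc=2.0 * y[:, None] - 1.0, scale=1.0, size=(n, 8))
    return X, y

def fit_centroids(X, y):
    return np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

def predict(centroids, X):
    # Distance from every sample to each class centroid; nearest wins.
    d = np.linalg.norm(X[:, None, :] - centroids[None], axis=2)
    return d.argmin(axis=1), d

X_lab, y_lab = make_data(40)   # small labeled set
X_unl, _ = make_data(400)      # "unlabeled" pool (in the paper: DCGAN output)
centroids = fit_centroids(X_lab, y_lab)

# Iterative semisupervised learning: pseudo-label only confident samples
# (large margin between the two centroid distances), then refit.
for _ in range(3):
    pred, d = predict(centroids, X_unl)
    margin = np.abs(d[:, 0] - d[:, 1])
    keep = margin > 2.0        # illustrative confidence threshold
    if not keep.any():
        break                  # termination condition: nothing confident left
    X_lab = np.vstack([X_lab, X_unl[keep]])
    y_lab = np.concatenate([y_lab, pred[keep]])
    X_unl = X_unl[~keep]
    centroids = fit_centroids(X_lab, y_lab)
```

The key design point mirrored from the abstract is that only high-confidence pseudo-labels are absorbed each round, so the classifier improves without being swamped by its own early mistakes.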


2011 ◽  
Vol 26 (5) ◽  
pp. 1485-1489 ◽  
Author(s):  
Keisuke Kubota ◽  
Junko Kuroda ◽  
Masashi Yoshida ◽  
Keiichiro Ohta ◽  
Masaki Kitajima

2018 ◽  
Vol 7 (4.10) ◽  
pp. 685
Author(s):  
Nageswari P ◽  
Rajan S ◽  
Manivel K

Medical ultrasound imaging plays an important role in the diagnosis of various complicated disorders. However, ultrasound images are intrinsically degraded by speckle noise, which severely affects their visual quality and essential details. Hence, denoising is an unavoidable step in medical image processing. In this paper, a new despeckling technique is presented for denoising medical ultrasound images by employing a fuzzy technique on the coefficient of variation together with a fractional order integration filter. The proposed technique has two steps. In the first step, the noisy image pixels are classified into three regions using a fuzzy technique on the coefficient of variation, and the technique then adaptively applies appropriate filters to the grouped pixels to reduce noise in the ultrasound image. In the second step, to obtain an effectively denoised image, the fractional order integration filter is applied to the result of step 1. The performance of the proposed technique is tested on various medical images in terms of the peak signal-to-noise ratio and speckle suppression index quality measures. Experimental results reveal that the proposed despeckling technique can efficiently reduce speckle noise while protecting edges and preserving other important structural details of an image. It is suggested that the proposed technique be employed as a preprocessing tool for medical image analysis and diagnosis.
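The first step's pixel grouping rests on the local coefficient of variation (standard deviation divided by mean over a small window), which is low in homogeneous speckle and high near edges and strong features. A minimal sketch of that statistic, with a multiplicative gamma speckle model and purely illustrative thresholds (the paper's fuzzy membership functions and filters are not reproduced here):

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def local_cv(image: np.ndarray, k: int = 3) -> np.ndarray:
    """Local coefficient of variation (std/mean) over k-by-k windows."""
    win = sliding_window_view(image.astype(float), (k, k))
    mean = win.mean(axis=(-2, -1))
    std = win.std(axis=(-2, -1))
    return std / np.maximum(mean, 1e-12)   # guard against division by zero

# Simulate multiplicative speckle on a flat region (gamma noise, mean 1):
rng = np.random.default_rng(1)
img = np.full((32, 32), 100.0)
img *= rng.gamma(shape=16.0, scale=1.0 / 16.0, size=img.shape)

cv = local_cv(img)
# Group pixels into three regions by thresholding the local CV
# (homogeneous / transitional / edge); thresholds are illustrative.
regions = np.digitize(cv, bins=[0.15, 0.5])
```

In the actual technique, fuzzy membership over this statistic (rather than hard thresholds) decides which denoising filter each pixel group receives.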


Diagnostics ◽  
2021 ◽  
Vol 11 (8) ◽  
pp. 1384
Author(s):  
Yin Dai ◽  
Yifan Gao ◽  
Fayu Liu

Over the past decade, convolutional neural networks (CNNs) have shown very competitive performance in medical image analysis tasks such as disease classification, tumor segmentation, and lesion detection. CNNs have great advantages in extracting local features of images; however, due to the locality of the convolution operation, they cannot handle long-range relationships well. Recently, transformers have been applied to computer vision and have achieved remarkable success on large-scale datasets. Compared with natural images, multi-modal medical images have explicit and important long-range dependencies, and effective multi-modal fusion strategies can greatly improve the performance of deep models. This prompted us to study transformer-based structures and apply them to multi-modal medical images. Existing transformer-based network architectures require large-scale datasets to achieve good performance, but medical imaging datasets are relatively small, which makes it difficult to apply pure transformers to medical image analysis. Therefore, we propose TransMed for multi-modal medical image classification. TransMed combines the advantages of CNNs and transformers to efficiently extract low-level features of images and establish long-range dependencies between modalities. We evaluated our model on two datasets: parotid gland tumor classification and knee injury classification. Combining our contributions, we achieve improvements of 10.1% and 1.9% in average accuracy, respectively, outperforming other state-of-the-art CNN-based models. The results of the proposed method are promising and have tremendous potential to be applied to a large number of medical image analysis tasks. To the best of our knowledge, this is the first work to apply transformers to multi-modal medical image classification.
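The cross-modal fusion idea behind such hybrid designs can be shown in miniature: CNN-extracted patch features from each modality become tokens, the token sequences are concatenated, and self-attention lets every token attend to tokens from the other modality. A single-head scaled dot-product attention sketch in NumPy (the token sources, dimensions, and weights here are hypothetical stand-ins, not TransMed's architecture):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))   # stabilized softmax
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(tokens, Wq, Wk, Wv):
    """Single-head scaled dot-product attention over a token sequence."""
    Q, K, V = tokens @ Wq, tokens @ Wk, tokens @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    return softmax(scores) @ V

rng = np.random.default_rng(0)
d = 16
# Hypothetical CNN features: one token per image patch, per modality.
tokens_mod_a = rng.normal(size=(8, d))   # patches from modality A
tokens_mod_b = rng.normal(size=(8, d))   # patches from modality B
# Concatenating the sequences lets attention mix information across
# modalities, i.e., model the long-range dependencies CNNs alone miss.
tokens = np.vstack([tokens_mod_a, tokens_mod_b])
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
out = self_attention(tokens, Wq, Wk, Wv)
```

Each output token is a weighted mixture over all sixteen input tokens, regardless of which modality they came from; this is the mechanism, not the trained model.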


2020 ◽  
Author(s):  
Yang Liu ◽  
Lu Meng ◽  
Jianping Zhong

Abstract Background: For deep learning, the size of the dataset greatly affects the final training effect. However, in the field of computer-aided diagnosis, medical image datasets are often limited and even scarce. Methods: We aim to synthesize medical images and enlarge the size of the medical image dataset. In the present study, we synthesized liver CT images with tumors based on a mask attention generative adversarial network (MAGAN). We masked the pixels of the liver tumor in the image as the attention map, and both the original image and the attention map were fed into the generator network to obtain the synthesized images. Then the original images, the attention map, and the synthesized images were all fed into the discriminator network to determine whether the synthesized images were real or fake. Finally, the generator network can be used to synthesize liver CT images with tumors. Results: The experiments showed that our method outperformed the other state-of-the-art methods and achieved a mean peak signal-to-noise ratio (PSNR) of 64.72 dB. Conclusions: These results indicate that our method can synthesize liver CT images with tumors and build a large medical image dataset, which may facilitate the progress of medical image analysis and computer-aided diagnosis.
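The PSNR metric used to evaluate the synthesized images has a standard definition in terms of the mean squared error against a reference image. A minimal sketch (the 64.72 dB figure above is the paper's result, not the output of this toy example):

```python
import numpy as np

def psnr(original: np.ndarray, synthesized: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between two same-sized images."""
    mse = np.mean((original.astype(float) - synthesized.astype(float)) ** 2)
    if mse == 0:
        return float("inf")   # identical images
    return float(10.0 * np.log10(peak ** 2 / mse))

ref = np.full((64, 64), 100, dtype=np.uint8)
approx = ref.copy()
approx[0, 0] += 8   # a single-pixel error of 8 grey levels
```

Higher PSNR means the synthesized image deviates less from its reference; note that a mean PSNR only summarizes pixel fidelity, which is why GAN papers usually pair it with a discriminator-based realism criterion.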

