Identifying concepts from medical images via transfer learning and image retrieval

2019 ◽  
Vol 16 (4) ◽  
pp. 1978-1991 ◽  
Author(s):  
Xuwen Wang ◽  
Yu Zhang ◽  
Zhen Guo ◽  
Jiao Li
2020 ◽  
Vol 7 (4) ◽  
pp. 79-86
Author(s):  
Nagadevi Darapureddy ◽  
Nagaprakash Karatapu ◽  
Tirumala Krishna Battula

This paper examines a hybrid pattern, the Local Derivative Vector Pattern, and compares it with other patterns for content-based medical image retrieval. In recent years, pattern-based texture analysis has gained significant popularity for a variety of tasks such as image recognition, image and texture classification, and object detection. Different patterns for texture analysis exist in the literature. This paper forms a hybrid pattern and compares it, in terms of precision, recall, and F1-score, with patterns such as the Local Binary Pattern (LBP), Local Derivative Pattern (LDP), Completed Local Binary Pattern (CLBP), Local Tetra Pattern (LTrP), Local Vector Pattern (LVP), and Local Anisotropic Pattern (LAP), all applied to medical images for image retrieval. The proposed method is evaluated on different modalities of medical images. The proposed hybrid pattern shows favorable performance compared with the state of the art, and the approach can be further extended by combining other patterns to form new hybrid patterns.
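For readers who want to see the shape of such a comparison, the sketch below (Python, assuming NumPy and scikit-image) shows how one of the compared descriptors, the uniform Local Binary Pattern, can be turned into a retrieval pipeline scored with precision, recall, and F1-score. It does not implement the paper's hybrid Local Derivative Vector Pattern; the neighborhood size, chi-square distance, and top-k cutoff are illustrative assumptions.

```python
# Minimal LBP-based retrieval sketch (baseline only, not the hybrid pattern).
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(image, P=8, R=1):
    """Uniform LBP codes pooled into a normalized histogram descriptor."""
    codes = local_binary_pattern(image, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

def retrieve(query_hist, db_hists, k=10):
    """Rank database images by chi-square distance to the query descriptor."""
    eps = 1e-10
    dists = 0.5 * np.sum((db_hists - query_hist) ** 2 / (db_hists + query_hist + eps), axis=1)
    return np.argsort(dists)[:k]

def precision_recall_f1(retrieved_labels, query_label, n_relevant):
    """Retrieval metrics used in the comparison: precision, recall, F1-score."""
    hits = np.sum(retrieved_labels == query_label)
    precision = hits / len(retrieved_labels)
    recall = hits / n_relevant
    f1 = 0.0 if precision + recall == 0 else 2 * precision * recall / (precision + recall)
    return precision, recall, f1
```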


2019 ◽  
Vol 8 (4) ◽  
pp. 462 ◽  
Author(s):  
Muhammad Owais ◽  
Muhammad Arsalan ◽  
Jiho Choi ◽  
Kang Ryoung Park

Medical-image-based diagnosis is a tedious task, and small lesions in various medical images can be overlooked by medical experts because of the limited attention span of the human visual system, which can adversely affect medical treatment. This problem can be mitigated by exploring similar cases in the existing medical database through an efficient content-based medical image retrieval (CBMIR) system. In the past few years, heterogeneous medical imaging databases have grown rapidly with the advent of different types of medical imaging modalities. A medical doctor now typically consults several imaging modalities together, such as computed tomography (CT), magnetic resonance imaging (MRI), X-ray, and ultrasound, of various organs for the diagnosis and treatment of a specific disease. Accurate classification and retrieval of multimodal medical imaging data is the key challenge for a CBMIR system. Most previous attempts use handcrafted features for medical image classification and retrieval, which perform poorly on massive multimodal databases. Although a few previous studies use deep features for classification, the number of classes they handle is very small. To solve this problem, we propose a classification-based retrieval system for multimodal medical images from various types of imaging modalities, built on an enhanced residual network (ResNet). Experimental results with 12 databases comprising 50 classes demonstrate that the accuracy and F1-score of our method are 81.51% and 82.42%, respectively, which are higher than those of the previous CBMIR method (accuracy of 69.71% and F1-score of 69.63%).
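The sketch below (PyTorch) illustrates the general classify-then-retrieve idea described above with a standard ResNet-50 backbone; it is not the authors' enhanced ResNet, and the 50-class head, pooled-feature extraction, and cosine ranking are illustrative assumptions.

```python
# Classification-driven retrieval sketch with a plain ResNet-50 backbone.
import torch
import torch.nn as nn
from torchvision import models

num_classes = 50  # assumed number of modality/organ classes in the database

backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
backbone.fc = nn.Linear(backbone.fc.in_features, num_classes)  # fine-tuned head

def classify_then_retrieve(query, db_features, db_class_ids, k=5):
    """Predict the query's class, then rank only same-class images by cosine similarity."""
    backbone.eval()
    with torch.no_grad():
        logits = backbone(query.unsqueeze(0))                    # [1, num_classes]
        pred = logits.argmax(dim=1).item()
        # Pooled 2048-d features from the layer before the classification head.
        feat_extractor = nn.Sequential(*list(backbone.children())[:-1])
        feat = feat_extractor(query.unsqueeze(0)).flatten(1)     # [1, 2048]
    candidates = (db_class_ids == pred).nonzero(as_tuple=True)[0]
    sims = nn.functional.cosine_similarity(feat, db_features[candidates])
    top = sims.argsort(descending=True)[:k]
    return pred, candidates[top]
```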


Author(s):  
Nikul Devis ◽  
Nirmal Joshy Pattara ◽  
Sherin Shoni ◽  
Shimin Mathew ◽  
Veena A. Kumar

2020 ◽  
Vol 8 (5) ◽  
pp. 4835-4841

Early detection of cancer is most important for the long-term survival of the patient. Nowadays, computer-aided diagnosis (CADx) systems are widely used for the early, automatic identification of breast cancer. CADx uses significant features to identify and categorize cancer. CADx systems based on convolutional neural networks (CNNs) are becoming popular because they extract relevant features automatically. CNNs can be trained from scratch on medical images to accommodate various input sizes and tumor structures, but because of the limited number of medical images available for training, we used a transfer learning approach. We developed a CNN-based deep learning framework that uses transfer learning to discriminate breast tumors as benign or malignant. We used digital mammographic images containing both views from the CBIS-DDSM database. We achieved 100% training accuracy and validation accuracy greater than 90%, with minimal training and validation loss. We also compared the results with transfer learning using the pretrained networks AlexNet and GoogLeNet on the same dataset.
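As a rough illustration of the transfer learning setup described above, the sketch below (PyTorch) adapts a pretrained AlexNet to the two-class benign/malignant task; the frozen backbone, optimizer, learning rate, and preprocessing are assumptions rather than the authors' exact configuration.

```python
# Transfer-learning sketch: pretrained AlexNet fine-tuned for benign vs. malignant.
import torch
import torch.nn as nn
from torchvision import models, transforms

# Pretrained AlexNet with its final layer replaced by a 2-class head.
model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
for p in model.features.parameters():
    p.requires_grad = False                      # reuse the ImageNet feature extractor
model.classifier[6] = nn.Linear(model.classifier[6].in_features, 2)

# Mammograms resized, replicated to 3 channels, and normalized for the pretrained network.
preprocess = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.classifier[6].parameters(), lr=1e-4)

def train_step(images, labels):
    """One fine-tuning step on a batch of CBIS-DDSM images (benign=0, malignant=1)."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```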

