Context-Aware Medical Image Retrieval for Improved Dementia Diagnosis

Author(s): Melih Soydemir, Devrim Unay

Progress in medical imaging technology, together with the increasing demand to confirm diagnostic decisions with objective, repeatable, and reliable measures for improved healthcare, has multiplied the number of digital medical images that must be processed, stored, managed, and searched. Comparing multiple patients, their pathologies, and their disease progression using image search systems can contribute substantially to improved diagnosis and to the education of medical students and residents. Supplementing image content with contextual knowledge leads to more reliable, robust, and accurate search results. To this end, the authors present an image search system that permits search by a multitude of image features (content) as well as demographics, patient medical history, clinical data, and ontologies (context). Moreover, they validate the system's added value in dementia diagnosis through evaluations on publicly available image databases.
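The abstract describes combining image content features with contextual metadata, but gives no implementation details. The following is a minimal sketch of one plausible scoring scheme, assuming a weighted combination of cosine similarity over image feature vectors and an attribute-overlap score over contextual fields; the field names, the weight `alpha`, and all function names are illustrative assumptions, not the authors' method:

```python
import math

def cosine_similarity(a, b):
    # Content score: cosine similarity between two image feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def context_score(query_ctx, record_ctx):
    # Context score: fraction of shared contextual attributes that match
    # (e.g. demographics or clinical data fields; keys are assumptions).
    keys = set(query_ctx) & set(record_ctx)
    if not keys:
        return 0.0
    return sum(query_ctx[k] == record_ctx[k] for k in keys) / len(keys)

def combined_score(query, record, alpha=0.7):
    # alpha weights content against context; 0.7 is an assumed value.
    return (alpha * cosine_similarity(query["features"], record["features"])
            + (1 - alpha) * context_score(query["context"], record["context"]))

def search(query, database, top_k=5):
    # Rank database records by the combined content + context score.
    ranked = sorted(database, key=lambda r: combined_score(query, r), reverse=True)
    return ranked[:top_k]
```

A record whose image features and contextual attributes both match the query ranks above one that matches on content alone, which is the intended effect of adding context to the search.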

Agriculture, 2020, Vol 10 (10), pp. 439
Author(s): Helin Yin, Yeong Hyeon Gu, Chang-Jin Park, Jong-Han Park, Seong Joon Yoo

The use of conventional classification techniques to recognize diseases and pests can lead to an incorrect judgment on whether crops are diseased. Additionally, hot pepper diseases such as "anthracnose" and "bacterial spot" can be confused with each other, leading to incorrect disease recognition. To address these issues, multi-recognition methods, such as Google Cloud Vision, suggest multiple disease candidates and let the user make the final decision. Similarity-based image search techniques, along with multi-recognition, can also be used for this purpose. Conventional similarity-based image searches have relied on content-based image retrieval techniques, using descriptors to extract features such as image color and edges. In this study, we use eight pre-trained deep learning models (VGG16, VGG19, ResNet-50, etc.) to extract deep features from images. We conducted experiments on 28,011 images covering 34 types of hot pepper diseases and pests. Images similar to a query were retrieved by applying the k-nearest neighbor method to the deep features. For top-1 to top-5 results, the deep features from the ResNet-50 model achieved recognition accuracies of approximately 88.38–93.88% for diseases and approximately 95.38–98.42% for pests. Deep features extracted from the VGG16 and VGG19 models recorded the second and third highest performances, respectively. For top-10 results, deep features from the ResNet-50 model achieved accuracies of 85.6% for diseases and 93.62% for pests. Compared with a simple convolutional neural network (CNN) classification model, the proposed method recorded 8.62% higher accuracy for diseases and 14.86% higher accuracy for pests.
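The retrieval pipeline in the abstract above — deep features extracted by a pre-trained CNN, then k-nearest-neighbor search and top-n evaluation — can be sketched as follows. This is a minimal sketch, not the authors' code: the feature-extraction step (e.g., taking penultimate-layer activations from a pre-trained ResNet-50) is assumed to have run offline, the distance metric, dimensions, label strings, and function names are illustrative assumptions:

```python
import numpy as np

def knn_search(query_feat, db_feats, k=5):
    # Euclidean k-nearest-neighbor search over deep feature vectors.
    # db_feats: (N, D) array of database features, e.g. activations from a
    # pre-trained CNN such as ResNet-50 (assumed precomputed offline);
    # query_feat: (D,) feature vector of the query image.
    dists = np.linalg.norm(db_feats - query_feat, axis=1)
    idx = np.argsort(dists)[:k]          # indices of the k closest images
    return idx, dists[idx]

def top_n_accuracy(neighbor_labels, true_label, n):
    # A query counts as correct under a top-n criterion if the true class
    # appears among the labels of its n nearest retrieved images.
    return true_label in neighbor_labels[:n]
```

Averaging `top_n_accuracy` over all queries for n = 1..5 and n = 10 yields the kind of top-1 to top-5 and top-10 figures reported in the abstract.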

