Evaluation of visual information indexing and retrieval

Author(s): Georges Quénot, Philippe Joly, Jenny Benois-Pineau
2016, Vol 10 (03), pp. 299-322

Author(s): Hongfei Cao, Yu Li, Carla M. Allen, Michael A. Phinney, Chi-Ren Shyu

Research has shown that the visual information in multimedia is critical in highly skilled applications such as biomedicine and the life sciences, and that a certain visual reasoning process is essential for meaningful, timely search. During these reasoning processes, relevant image characteristics are learned and verified against accumulated experience. However, such processes are highly dynamic and elusive to quantify computationally, and therefore challenging to analyze, let alone to make the resulting knowledge sharable across users. In this paper we study real-time human visual reasoning processes with the aid of gaze-tracking devices. Temporal and spatial representations are proposed for gaze modeling, and a visual reasoning retrieval system utilizing in-memory computing on Big Data ecosystems is designed for real-time search of similar reasoning models. Simulated data derived from human subject experiments show that the system achieves reasonably high accuracy and provides predictive estimates of hardware requirements versus data size for exhaustive searches. Comparisons between various visual action classifiers reveal the challenges in modeling visual actions. The proposed system provides a theoretical framework and computing platform for advances in visual semantic computing, as well as potential applications in medicine, social science, and the arts.
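To make the abstract's notion of "temporal and spatial representations" of gaze concrete, here is a minimal illustrative sketch, not the authors' actual method: a scanpath is modeled as a list of `(x, y, duration)` fixations in the unit square, the spatial representation is a duration-weighted grid histogram, the temporal representation is the ordered fixation sequence compared by dynamic time warping, and a weighted sum gives a similarity score for retrieving comparable reasoning sessions. All function names and the grid/weighting choices are hypothetical.

```python
from math import hypot

def spatial_histogram(fixations, grid=4):
    """Duration-weighted histogram over a grid x grid partition of the unit square.
    fixations: list of (x, y, duration) with x, y in [0, 1]."""
    hist = [0.0] * (grid * grid)
    total = sum(d for _, _, d in fixations) or 1.0
    for x, y, d in fixations:
        col = min(int(x * grid), grid - 1)
        row = min(int(y * grid), grid - 1)
        hist[row * grid + col] += d / total
    return hist

def spatial_distance(a, b, grid=4):
    """L1 distance between the spatial histograms of two scanpaths."""
    ha, hb = spatial_histogram(a, grid), spatial_histogram(b, grid)
    return sum(abs(p - q) for p, q in zip(ha, hb))

def temporal_distance(a, b):
    """Dynamic-time-warping distance between ordered fixation point sequences."""
    INF = float("inf")
    n, m = len(a), len(b)
    dtw = [[INF] * (m + 1) for _ in range(n + 1)]
    dtw[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = hypot(a[i - 1][0] - b[j - 1][0], a[i - 1][1] - b[j - 1][1])
            dtw[i][j] = cost + min(dtw[i - 1][j], dtw[i][j - 1], dtw[i - 1][j - 1])
    return dtw[n][m]

def scanpath_distance(a, b, alpha=0.5):
    """Combined spatial/temporal dissimilarity; lower means more similar reasoning paths."""
    return alpha * spatial_distance(a, b) + (1 - alpha) * temporal_distance(a, b)
```

In a retrieval setting such as the one the abstract describes, each stored reasoning session would be scored against the query scanpath with `scanpath_distance` and the nearest sessions returned; an in-memory cluster would simply shard this exhaustive comparison across nodes.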
