Real-time camera motion classification for content-based indexing and retrieval using templates

Author(s): Sangkeun Lee ◽ Hayes
2006 ◽ Vol 03 (01) ◽ pp. 61-67

Author(s): Byoung-Ju Yun ◽ Joong-Hoon Cho ◽ Jae-Woo Jeong

Moving object tracking plays an important role in applications such as object-based video conferencing and video surveillance. Computational complexity is a critical concern in real-time object tracking. We assume that the background scene is captured before an object appears in the image and that the camera moves only after the object has been detected. The proposed method segments the object using the difference image when there is no camera motion. Once the camera moves, it tracks the object using a backward block matching algorithm (BMA) with a human figure model (HFM). For real-time tracking, we use the region of interest (ROI), defined as the tightest rectangle enclosing the object. Simulation results show that the proposed method efficiently detects and tracks the moving object with a moving camera as well as with a fixed camera.
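The difference-image segmentation and tightest-rectangle ROI described above can be sketched as follows. This is a minimal illustration, assuming grayscale frames stored as NumPy arrays; the function names and the threshold value are ours, not the paper's:

```python
import numpy as np

def segment_by_difference(background, frame, threshold=25):
    """Segment a moving object as the set of pixels whose absolute
    difference from the stored background exceeds a threshold."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return diff > threshold  # boolean object mask

def object_roi(mask):
    """Return the tightest bounding rectangle (ROI) of the mask as
    (top, left, bottom, right), or None if no object pixels exist."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None
    return (int(ys.min()), int(xs.min()), int(ys.max()), int(xs.max()))

# Toy example: an 8x8 dark background, with a bright 3x3 "object"
# appearing in the current frame.
bg = np.zeros((8, 8), dtype=np.uint8)
fr = bg.copy()
fr[2:5, 3:6] = 200
mask = segment_by_difference(bg, fr)
print(object_roi(mask))  # -> (2, 3, 4, 5)
```

Restricting the subsequent block-matching search to this ROI, rather than the full frame, is what keeps the per-frame cost low enough for real-time use.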


2020 ◽ Author(s): Kichun Jo ◽ Chansoo Kim ◽ Sungjin Cho ◽ Myoungho Sunwoo

2016 ◽ Vol 10 (03) ◽ pp. 299-322
Author(s): Hongfei Cao ◽ Yu Li ◽ Carla M. Allen ◽ Michael A. Phinney ◽ Chi-Ren Shyu

Research has shown that visual information in multimedia is critical in highly skilled domains such as biomedicine and the life sciences, and that a certain visual reasoning process is essential for meaningful, timely search. Relevant image characteristics are learned and verified against accumulated experience during the reasoning process. However, such processes are highly dynamic and elusive to quantify computationally, and therefore challenging to analyze, let alone to make the resulting knowledge sharable across users. In this paper we study real-time human visual reasoning processes with the aid of gaze-tracking devices. We propose temporal and spatial representations for gaze modeling, and design a visual reasoning retrieval system that uses in-memory computing within Big Data ecosystems for real-time search of similar reasoning models. Simulated data derived from human-subject experiments show that the system achieves reasonably high accuracy and provides predictive estimates of hardware requirements versus data size for exhaustive searches. Comparisons between various visual action classifiers show the challenges of modeling visual actions. The proposed system provides a theoretical framework and computing platform for advances in visual semantic computing, with potential applications in medicine, social science, and the arts.
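The abstract does not specify the gaze representations or the similarity measure used for retrieval. As an illustrative sketch only, a spatial gaze signature (a normalized histogram of fixation points over a grid) and nearest-model retrieval might look like this; the grid partition, the Euclidean distance, and all names here are our assumptions:

```python
import math

def gaze_signature(fixations, grid=4, extent=100.0):
    """Spatial representation: normalized histogram of (x, y) fixation
    points over a grid x grid partition of the viewing area."""
    hist = [0] * (grid * grid)
    for x, y in fixations:
        col = min(int(x / extent * grid), grid - 1)
        row = min(int(y / extent * grid), grid - 1)
        hist[row * grid + col] += 1
    total = sum(hist) or 1
    return [h / total for h in hist]

def signature_distance(a, b):
    """Euclidean distance between two normalized signatures."""
    return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))

def retrieve(query_sig, models):
    """Return the stored reasoning model whose signature is closest."""
    return min(models, key=lambda m: signature_distance(query_sig, m["sig"]))

# Two toy "reasoning models": one reader fixating top-left, one bottom-right.
models = [
    {"name": "reader_A", "sig": gaze_signature([(10, 10), (15, 12), (20, 18)])},
    {"name": "reader_B", "sig": gaze_signature([(80, 85), (90, 90), (75, 95)])},
]
q = gaze_signature([(12, 11), (18, 15)])
print(retrieve(q, models)["name"])  # -> reader_A
```

In a real system of the kind described, this linear scan would be replaced by a distributed in-memory search over many stored models, and temporal ordering of fixations would be modeled as well.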

