Elastic collisions visualization using OpenCV object motion tracking

Author(s):  
M. Hajba ◽  
E. Cirikovic ◽  
M. Pecimotika

2016 ◽  
Author(s):  
Rafael Radkowski ◽  
David Wehr ◽  
Elizabeth Gregory ◽  
Stephen D. Holland

Author(s):  
Mingshao Zhang ◽  
Zhou Zhang ◽  
El-Sayed Aziz ◽  
Sven K. Esche ◽  
Constantin Chassapis

The Microsoft Kinect is part of a wave of new sensing technologies. Its RGB-D camera provides high-quality, synchronized video of both color and depth data. Compared to traditional 3-D tracking techniques that compute depth from the images of two separate RGB cameras, the Kinect produces more robust and reliable results in object recognition and motion tracking. Its low cost also opens it up to many application areas that traditional, more expensive 3-D scanners cannot reach. In order to use the Kinect as a range sensor, algorithms must be designed that first recognize objects of interest and then track their motions. Although a large number of algorithms for both 2-D and 3-D object detection have been published, reliable and efficient algorithms for 3-D object motion tracking are rare, especially ones that use the Kinect as a range sensor. In this paper, algorithms for object recognition and tracking that can make use of both RGB and depth data in different scenarios are introduced. Subsequently, efficient methods for scene segmentation, including background and noise filtering, are discussed. Building on these two kinds of methods, a prototype system is presented that works efficiently and stably in various applications related to educational laboratories.
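
As an illustration of the depth-based segmentation and noise-filtering step described above, the following is a minimal sketch in Python with OpenCV. It assumes Kinect depth frames arrive as 16-bit NumPy arrays in millimeters (e.g., via a driver such as libfreenect or OpenNI); the function names, thresholds, and the simple background-difference model are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch: depth-based foreground segmentation with speckle-noise removal,
# followed by blob centroid extraction for motion tracking.
import cv2
import numpy as np

def segment_foreground(depth_frame, background_depth, min_diff_mm=50):
    """Return a binary mask of pixels noticeably closer than the stored background."""
    # Zero-valued depth pixels are invalid Kinect readings; ignore them.
    valid = (depth_frame > 0) & (background_depth > 0)
    diff = background_depth.astype(np.int32) - depth_frame.astype(np.int32)
    mask = np.where(valid & (diff > min_diff_mm), 255, 0).astype(np.uint8)
    # Morphological opening removes the isolated speckle noise typical of depth data.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

def track_objects(mask, min_area=500):
    """Extract centroids of sufficiently large foreground blobs (OpenCV 4.x API)."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    centroids = []
    for c in contours:
        if cv2.contourArea(c) >= min_area:
            m = cv2.moments(c)
            centroids.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return centroids
```

Feeding the centroids from successive frames into a simple nearest-neighbor or Kalman association step would yield per-object trajectories; that association logic is omitted here for brevity.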


2015 ◽  
Vol 72 (5) ◽  
pp. 1225-1254 ◽  
Author(s):  
Chunmei Shi ◽  
Lingling Zhao ◽  
Junjie Wang ◽  
Chiping Zhang ◽  
Xiaohong Su ◽  
...  

Sensors ◽  
2021 ◽  
Vol 21 (12) ◽  
pp. 4089
Author(s):  
Jingyeom Kim ◽  
Joohyung Lee ◽  
Taeyeon Kim

This paper presents a novel adaptive object movement and motion tracking (AdaMM) framework in a hierarchical edge computing system for reducing the GPU memory footprint of deep learning (DL)-based video surveillance services. DL-based object movement and motion tracking requires significant resources, namely (1) GPU processing power for the inference phase and (2) GPU memory for model loading. Even when no object is present in the video, GPU memory must remain allocated as long as the DL model is loaded. Moreover, video surveillance often targets events that occur only rarely (e.g., abnormal object behaviors), so this standby GPU memory is easily wasted. To alleviate this problem, the proposed AdaMM framework categorizes the tasks of the object movement and motion tracking procedure in increasing order of required processing and memory resources as task (1) frame difference calculation, task (2) object detection, and task (3) object motion and movement tracking. The framework adaptively releases the unnecessary standby object motion and movement tracking model to save GPU memory by using the lighter tasks, frame difference calculation and object detection, in a hierarchical manner. Consequently, object movement and motion tracking is triggered only if an object is detected within the specified threshold time; otherwise, the GPU memory for the task (3) model can be released. Likewise, object detection is performed only if the frame difference over time exceeds the specified threshold. We implemented the proposed AdaMM framework on commercial edge devices as a three-tier system: the 1st edge node handles tasks (1) and (2), the 2nd edge node handles task (3), and the cloud sends push alarms. A measurement-based experiment reveals that the proposed framework achieves a maximum GPU memory reduction of 76.8% compared to the baseline system, while incurring a 2680 ms delay to load the object movement and motion tracking model.
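
The hierarchical trigger logic described above can be sketched on a single node as follows. Cheap frame differencing gates object detection, which in turn gates the heavyweight tracking model, and the model is released after an idle period with no detections. The callables `run_detector`, `load_tracker`, and `release_tracker`, as well as the threshold values, are hypothetical placeholders standing in for the framework's actual components.

```python
# Hedged sketch of adaptive, hierarchical triggering for GPU memory savings.
import time
import cv2
import numpy as np

DIFF_THRESHOLD = 25.0        # mean absolute pixel difference that triggers detection (illustrative)
IDLE_RELEASE_SECONDS = 30.0  # release tracker GPU memory after this idle period (illustrative)

def frame_difference(prev_gray, curr_gray):
    """Task (1): lightweight change measure between consecutive frames."""
    return float(np.mean(cv2.absdiff(prev_gray, curr_gray)))

def surveillance_loop(capture, run_detector, load_tracker, release_tracker):
    tracker = None
    last_detection_time = 0.0
    prev_gray = None
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev_gray is not None and frame_difference(prev_gray, gray) > DIFF_THRESHOLD:
            # Task (2): run detection only when the scene has actually changed.
            detections = run_detector(frame)
            if detections:
                last_detection_time = time.time()
                if tracker is None:
                    tracker = load_tracker()       # allocate GPU memory on demand
                tracker.update(frame, detections)  # task (3): heavyweight tracking
        # Release standby GPU memory if nothing has been detected for a while.
        if tracker is not None and time.time() - last_detection_time > IDLE_RELEASE_SECONDS:
            release_tracker(tracker)
            tracker = None
        prev_gray = gray
```

In the paper's three-tier deployment, tasks (1) and (2) would run on the 1st edge node and task (3) on the 2nd edge node; the single-loop sketch above only illustrates the gating and release policy, not the distributed placement.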

