Object Detection based on Combination of Visible and Thermal Videos using A Joint Sample Consensus Background Model

2013 ◽ Vol 8 (4) ◽ Author(s): Guang Han, Xi Cai, Jinkuan Wang

2017 ◽ Vol 2017 ◽ pp. 1-9 ◽ Author(s): Yizhong Yang, Qiang Zhang, Pengfei Wang, Xionglou Hu, Nengju Wu

Moving object detection in video streams is the first step of many computer vision applications. Background modeling and subtraction is the most common technique for detecting moving objects, yet detecting them correctly remains a challenge. Some methods initialize the background model at each pixel from the first N frames; however, they cannot perform well in dynamic background scenes because the background model contains only temporal features. Herein, a novel pixelwise and nonparametric moving object detection method is proposed that incorporates both spatial and temporal features, allowing dynamic background to be detected accurately. Additionally, several new mechanisms are proposed to maintain and update the background model. Experimental results on image sequences from public datasets show that the proposed method is more robust and effective in dynamic background scenes than existing methods.
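For context, the following is a minimal sketch of a pixelwise, nonparametric (sample-consensus, ViBe-style) background model that combines temporal samples with spatial diffusion during the update. It is not the authors' published implementation; all function names and parameter values (sample count, matching radius, subsampling factor) are illustrative assumptions.

```python
import numpy as np

def init_model(first_frames, n_samples=20):
    """Build a per-pixel sample set from the first N grayscale frames (temporal features)."""
    # first_frames: array of shape (N, H, W)
    idx = np.random.randint(0, first_frames.shape[0], size=n_samples)
    return first_frames[idx].astype(np.float32)          # shape (n_samples, H, W)

def detect(frame, model, radius=20, min_matches=2):
    """Label a pixel as background if enough stored samples lie within `radius` of it."""
    diff = np.abs(model - frame.astype(np.float32))       # (n_samples, H, W)
    matches = (diff < radius).sum(axis=0)                  # per-pixel match count
    return (matches < min_matches).astype(np.uint8)        # 1 = foreground

def update(frame, model, fg_mask, subsample=16):
    """Conservative update: at randomly chosen background pixels, replace one sample
    and also propagate the value into a random 8-neighbour's sample set
    (the spatial feature that helps absorb dynamic background)."""
    h, w = frame.shape
    bg = (fg_mask == 0) & (np.random.randint(0, subsample, size=(h, w)) == 0)
    ys, xs = np.nonzero(bg)
    slot = np.random.randint(0, model.shape[0], size=ys.size)
    model[slot, ys, xs] = frame[ys, xs]
    ny = np.clip(ys + np.random.randint(-1, 2, size=ys.size), 0, h - 1)
    nx = np.clip(xs + np.random.randint(-1, 2, size=xs.size), 0, w - 1)
    model[slot, ny, nx] = frame[ys, xs]
    return model
```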


2017 ◽ Vol 14 (8) ◽ pp. 3672-3678 ◽ Author(s): Mansoor Ahmed Khuhro, Dongjun Huang, Shaonian Huang, Papias Niyigena, Ammar Oad

2017 ◽ Vol 12 (1) ◽ pp. 86-94 ◽ Author(s): Omar Elharrouss, Abdelghafour Abbad, Driss Moujahid, Hamid Tairi

Author(s): Raviraj Pandian, Ramya A.

Real-time moving object detection, classification, and tracking capabilities are presented for a system that operates on both color and gray-scale video imagery from a stationary camera. It can handle object detection in indoor and outdoor environments and under changing illumination conditions. Object detection in video is usually performed by object detectors or background subtraction techniques. The proposed method determines the threshold automatically and dynamically, depending on the intensities of the pixels in the current frame, and updates the background model with a learning rate that depends on the per-pixel differences from the background model of the previous frame. The graph-cut-segmentation-based region merging approach achieves accurate segmentation and optical flow computation and can work in the presence of large camera motion. The algorithm uses the shape of the detected objects and temporal tracking results to categorize objects into pre-defined classes such as human, human group, and vehicle.
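As an illustration of the thresholding and update step described above, here is a minimal sketch of background subtraction with an automatically chosen threshold and a difference-dependent learning rate. It is not the authors' implementation; the mean-plus-two-standard-deviations threshold rule and the exponential rate scaling are assumptions made for the example.

```python
import numpy as np

def subtract_background(frame, background, learning_rate_base=0.01):
    """One step of background subtraction with a dynamic threshold and
    a per-pixel, difference-dependent learning rate (illustrative only)."""
    frame = frame.astype(np.float32)
    diff = np.abs(frame - background)

    # Threshold chosen dynamically from the intensity statistics of the
    # current difference image (assumption: mean + 2*std rule).
    threshold = diff.mean() + 2.0 * diff.std()
    fg_mask = (diff > threshold).astype(np.uint8)

    # Learning rate shrinks as the per-pixel difference grows, so pixels that
    # deviate strongly from the model are absorbed into the background slowly.
    alpha = learning_rate_base * np.exp(-diff / (threshold + 1e-6))
    background = (1.0 - alpha) * background + alpha * frame
    return fg_mask, background
```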

