background compensation
Recently Published Documents


TOTAL DOCUMENTS: 39 (five years: 8)
H-INDEX: 6 (five years: 1)

Author(s): Alvaro Fernandez Bocco, Fredy Solis, Benjamin T. Reyes, Damian A. Morero, Mario R. Hueda
IEEE Access, 2021, pp. 1-1
Author(s): Agustin C. Galetto, Benjamin T. Reyes, Damian A. Morero, Mario R. Hueda
Symmetry, 2020, Vol 12 (12), pp. 1965
Author(s): Juncai Zhu, Zhizhong Wang, Songwei Wang, Shuli Chen

Detecting moving objects in a video sequence is an important problem in many vision-based applications. In particular, detecting moving objects when the camera itself is moving is a difficult problem. In this study, we propose a symmetric method for detecting moving objects in the presence of a dynamic background. First, a background compensation method is used to detect the proposed region of motion. Next, in order to accurately locate the moving objects, we propose a convolutional neural network-based method called YOLOv3-SOD for detecting all objects in the image, which is lightweight and specifically designed for small objects. Finally, the moving objects are determined by fusing the results obtained by motion detection and object detection, and missed detections are recalled according to the temporal and spatial information in adjacent frames. Because no dataset is currently available specifically for moving object detection and recognition, we have released the MDR105 dataset, which comprises three classes and 105 videos. Our experiments demonstrate that the proposed algorithm can accurately detect moving objects in various scenarios with good overall performance.
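The background-compensation-then-differencing step described above can be sketched as follows. This is an illustrative numpy-only sketch, not the authors' implementation: the function names `estimate_affine` and `motion_mask` are hypothetical, a single global affine model stands in for whatever camera-motion model the paper uses, and the feature matches between consecutive frames are assumed to be given (in practice they would come from a feature detector and matcher).

```python
import numpy as np

def estimate_affine(src, dst):
    """Least-squares 2D affine transform mapping src points to dst points.

    src, dst: (N, 2) arrays of matched feature coordinates (N >= 3).
    Returns a 2x3 matrix A such that dst ~ A @ [x, y, 1].
    """
    n = src.shape[0]
    # Each correspondence contributes two rows to the design matrix:
    # [x y 1 0 0 0] -> dst_x  and  [0 0 0 x y 1] -> dst_y.
    X = np.zeros((2 * n, 6))
    X[0::2, 0:2] = src
    X[0::2, 2] = 1.0
    X[1::2, 3:5] = src
    X[1::2, 5] = 1.0
    y = dst.reshape(-1)
    params, *_ = np.linalg.lstsq(X, y, rcond=None)
    return params.reshape(2, 3)

def motion_mask(prev, curr, affine, thresh=20):
    """Warp prev into curr's frame (inverse mapping, nearest neighbour)
    and threshold the absolute difference to get candidate motion regions."""
    A = np.vstack([affine, [0.0, 0.0, 1.0]])
    Ainv = np.linalg.inv(A)
    h, w = curr.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    sx, sy, _ = Ainv @ pts
    sx = np.clip(np.round(sx).astype(int), 0, w - 1)
    sy = np.clip(np.round(sy).astype(int), 0, h - 1)
    warped = prev[sy, sx].reshape(h, w)
    return np.abs(curr.astype(int) - warped.astype(int)) > thresh
```

Pixels left in the mask after compensation are the "proposed regions of motion"; in the paper these are then intersected with the YOLOv3-SOD detections rather than used directly.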


2020, Vol 58 (10), pp. 7010-7021
Author(s): Yunming Wang, Taoyang Wang, Guo Zhang, Qian Cheng, Jia-qi Wu

Sensors, 2019, Vol 19 (12), pp. 2668
Author(s): Gianni Allebosch, David Van Hamme, Peter Veelaert, Wilfried Philips

In this paper, we describe a robust method for compensating the panning and tilting motion of a camera, applied to foreground–background segmentation. First, the necessary internal camera parameters are determined through feature-point extraction and tracking. From these parameters, two motion models for points in the image plane are established: the first assumes a fixed tilt angle, whereas the second allows simultaneous pan and tilt. At runtime, these models are used to compensate for the motion of the camera in the background model. We show that these models provide a robust compensation mechanism and improve the foreground masks of an otherwise state-of-the-art unsupervised foreground–background segmentation method. The resulting algorithm obtains F1 scores above 80% on every daytime video in our test set when as few as eight feature matches are used to determine the background compensation, whereas standard approaches need significantly more feature matches to produce similar results.
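The kind of motion model described above can be illustrated with the standard pure-rotation homography H = K R K⁻¹, which maps image points before a pan/tilt rotation to points after it. This is a minimal sketch under the assumption of a pinhole camera with known focal length and principal point; the function names are hypothetical, and composing a pan and a tilt rotation only approximates the paper's two specific models (fixed tilt vs. simultaneous pan and tilt).

```python
import numpy as np

def pan_tilt_homography(f, cx, cy, pan, tilt):
    """Homography H = K @ R_tilt @ R_pan @ inv(K) for a camera that rotates
    about its optical centre, with focal length f (pixels) and principal
    point (cx, cy). Pan is rotation about the vertical axis, tilt about
    the horizontal axis; angles are in radians."""
    K = np.array([[f, 0, cx], [0, f, cy], [0, 0, 1]], float)
    cp, sp = np.cos(pan), np.sin(pan)
    ct, st = np.cos(tilt), np.sin(tilt)
    R_pan = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    R_tilt = np.array([[1, 0, 0], [0, ct, -st], [0, st, ct]])
    return K @ R_tilt @ R_pan @ np.linalg.inv(K)

def warp_points(H, pts):
    """Apply homography H to (N, 2) pixel coordinates, dividing out the
    homogeneous scale."""
    p = np.column_stack([pts, np.ones(len(pts))]) @ H.T
    return p[:, :2] / p[:, 2:3]
```

With such a model, a background pixel's expected new location can be predicted from the estimated pan/tilt angles alone, which is why only a handful of feature matches are needed: the angles (and, if unknown, the focal length) are the only free parameters. For a pure pan, the principal point moves horizontally by f·tan(pan), matching the geometry of the model.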

