Analyzing Dynamic Scenes Containing Multiple Moving Objects

Author(s):  
J. K. Aggarwal ◽  
W. N. Martin
2020 ◽  
Vol 39 (4) ◽  
pp. 5037-5044
Author(s):  
Peng-Cheng Wei ◽  
Fangcheng He ◽  
Jing Li

Moving object detection in image sequences has become an active topic in computer vision research, with a wide range of practical applications in military and civilian settings. This paper studies the fast detection of moving objects against complex backgrounds and proposes separate fast detection methods for static and dynamic scenes. First, building on image preprocessing, gamma correction is applied to address the difficulty of extracting features of moving targets under low illumination at night. Second, for fast detection of moving objects in static scenes, a detection method combining background difference with edge-based frame difference is designed. Finally, for fast detection of moving objects in dynamic scenes, a feature-matching detection method based on the SIFT algorithm is designed. Simulation experiments show that the proposed methods achieve good detection performance.
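As a rough illustration of the pipeline described above, the sketch below (Python with OpenCV, not the authors' code) shows gamma correction for low-light frames, a combined background/frame-difference mask for static scenes, and SIFT matching for dynamic scenes; the gamma value, thresholds, and ratio-test constant are illustrative assumptions.

```python
import cv2
import numpy as np

def gamma_correct(frame, gamma=1.8):
    """Brighten a low-illumination frame with a gamma look-up table."""
    inv = 1.0 / gamma
    table = np.array([((i / 255.0) ** inv) * 255 for i in range(256)], dtype=np.uint8)
    return cv2.LUT(frame, table)

def static_scene_mask(cur_gray, prev_gray, background_gray, thresh=25):
    """Combine background difference and frame difference into one foreground mask."""
    bg_diff = cv2.absdiff(cur_gray, background_gray)
    fr_diff = cv2.absdiff(cur_gray, prev_gray)
    _, bg_mask = cv2.threshold(bg_diff, thresh, 255, cv2.THRESH_BINARY)
    _, fr_mask = cv2.threshold(fr_diff, thresh, 255, cv2.THRESH_BINARY)
    mask = cv2.bitwise_or(bg_mask, fr_mask)
    # Small opening removes isolated noise pixels from the mask.
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))

def sift_matches(img1_gray, img2_gray, ratio=0.75):
    """Match SIFT keypoints between two frames using Lowe's ratio test."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1_gray, None)
    kp2, des2 = sift.detectAndCompute(img2_gray, None)
    knn = cv2.BFMatcher().knnMatch(des1, des2, k=2)
    return [m for m, n in knn if m.distance < ratio * n.distance]
```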


Author(s):  
Mourad Moussa ◽  
Maha Hmila ◽  
Ali Douik

Background subtraction methods are widely used for moving object detection in video in many computer vision applications, such as traffic monitoring, human motion capture, and video surveillance. The two most challenging aspects of such approaches are how to build the background model correctly and efficiently, and how to avoid false detections caused by confusing (1) moving background pixels with moving objects and (2) shadow pixels with moving objects. In this paper we present a new image segmentation method based on background subtraction. We propose an effective scheme, based on statistical learning, for adaptively modelling and updating the background in dynamic scenes. We also introduce a method to detect sudden illumination changes and to segment moving objects while such changes occur. Because the raw color levels provided by an RGB sensor are not always the best choice, we further propose a recursive algorithm that selects a highly discriminative color space. Experimental results show significant improvements in moving object detection in dynamic scenes, such as waving tree leaves and sudden illumination changes, at a much lower computational cost than the Gaussian mixture model.
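The sketch below (Python/NumPy, illustrative only) shows one common way to realize two of the ingredients named in the abstract: a per-pixel statistical background model with selective updating, and a crude global check for sudden illumination changes. The learning rate, thresholds, and the 70% trigger are assumptions, and the paper's recursive color-space selection step is not reproduced here.

```python
import numpy as np

class RunningGaussianBackground:
    """Per-pixel running mean/variance background model: a minimal sketch of an
    adaptive, statistically learned background, not the authors' exact scheme."""

    def __init__(self, first_frame, alpha=0.02, k=2.5):
        self.mean = first_frame.astype(np.float32)
        self.var = np.full(self.mean.shape, 50.0, dtype=np.float32)
        self.alpha = alpha  # learning rate for the background update
        self.k = k          # foreground threshold in standard deviations

    def apply(self, frame):
        f = frame.astype(np.float32)
        fg = np.abs(f - self.mean) > self.k * np.sqrt(self.var)

        # Heuristic sudden-illumination handling: if most pixels look like
        # foreground, assume a global change and re-learn the background fast.
        if fg.mean() > 0.7:
            self.mean = 0.5 * self.mean + 0.5 * f
            return np.zeros(frame.shape[:2], dtype=np.uint8)

        # Selective update: adapt only pixels classified as background.
        bg = ~fg
        d = f[bg] - self.mean[bg]
        self.mean[bg] += self.alpha * d
        self.var[bg] += self.alpha * (d * d - self.var[bg])

        mask = fg.any(axis=-1) if fg.ndim == 3 else fg
        return mask.astype(np.uint8) * 255
```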


2021 ◽  
Vol 11 (2) ◽  
pp. 645
Author(s):  
Xujie Kang ◽  
Jing Li ◽  
Xiangtao Fan ◽  
Hongdeng Jian ◽  
Chen Xu

Visual simultaneous localization and mapping (SLAM) is challenging in dynamic environments because moving objects can impair camera pose tracking and mapping. This paper introduces a method for robust, dense, object-level SLAM in dynamic environments that takes a live stream of RGB-D frames as input, detects moving objects, and segments the scene into individual objects while simultaneously tracking and reconstructing their 3D structures. The approach provides a new method of dynamic object detection that integrates prior knowledge from a pre-built object model database, object-oriented 3D tracking relative to the camera pose, and the association of per-frame instance segmentation results with the object database to identify dynamic objects in the current frame. By using the static 3D model for frame-to-model alignment, together with dynamic object culling, camera motion estimation reduces the overall drift. Based on the estimated camera poses and the instance segmentation results, an object-level semantic map of the world is constructed. Experiments on the TUM RGB-D dataset comparing the proposed method with related state-of-the-art approaches demonstrate that it achieves comparable performance in static scenes and improved accuracy and robustness in dynamic scenes.
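To make the dynamic object culling step concrete, the hedged sketch below (Python/NumPy, not the authors' implementation) removes depth measurements on instances flagged as dynamic before frame-to-model alignment. The object database, instance segmentation masks, and per-object motion estimates are assumed to come from upstream components, and the motion threshold is an illustrative value.

```python
import numpy as np

def flag_dynamic_objects(instance_motion, motion_thresh=0.05):
    """Mark an instance as dynamic when its estimated 3D motion between frames,
    after compensating for camera motion, exceeds a threshold (in metres)."""
    return {obj_id for obj_id, motion in instance_motion.items()
            if motion > motion_thresh}

def cull_dynamic_depth(depth, instance_masks, dynamic_ids):
    """Invalidate depth pixels belonging to dynamic instances so that
    frame-to-model alignment only uses static structure."""
    culled = depth.copy()
    for obj_id, mask in instance_masks.items():
        if obj_id in dynamic_ids:
            culled[mask] = 0.0  # zero depth is treated as invalid by the aligner
    return culled

# Usage sketch: instance_masks maps object ids to boolean (H, W) masks from the
# segmentation network; instance_motion maps object ids to per-frame 3D motion.
# depth_static = cull_dynamic_depth(depth, instance_masks,
#                                   flag_dynamic_objects(instance_motion))
```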


Author(s):  
Zoubaida Mejri ◽  
Lilia Sidhom ◽  
Afef Abdelkrim

In this paper, a Nonlinear Unknown Input Observer (NLUIO) based approach is proposed for three-dimensional (3-D) structure from motion identification. Unlike previous studies, which require prior knowledge of either the motion parameters or the scene geometry, the proposed approach assumes that the object motion is imperfectly known and treats it as an unknown input to the perspective dynamical system. The 3-D structure of the moving objects can then be reconstructed using only two-dimensional (2-D) images from a monocular vision system. The proposed scheme is illustrated with a numerical example in the presence of measurement noise, for both static and dynamic scenes. These results clearly demonstrate the advantages of the proposed NLUIO.
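For context, a hedged sketch of the perspective dynamical system that such observers are commonly built on (a standard parameterization in the perspective-observer literature; the exact model and the NLUIO design and gains in the paper may differ). For a point $P = (X, Y, Z)^\top$ seen by a calibrated monocular camera, with known camera-induced motion $(A(t), b(t))$ and the object's own, imperfectly known motion entering as the unknown input $d(t)$:

$$\dot P = A(t)\,P + b(t) + d(t), \qquad y = \begin{pmatrix} X/Z \\ Y/Z \end{pmatrix}.$$

With the state $x = (x_1, x_2, x_3) = (X/Z,\; Y/Z,\; 1/Z)$, the measured image coordinates are $(x_1, x_2)$, and writing $A = (a_{ij})$ and $b + d = (c_1, c_2, c_3)^\top$, the unmeasured inverse depth obeys

$$\dot x_3 = -x_3\left(a_{31}x_1 + a_{32}x_2 + a_{33} + c_3\,x_3\right),$$

so the observer's task is to reconstruct $x_3$ (and hence the 3-D structure) from $(x_1, x_2)$ while rejecting the unknown input $d(t)$.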

