Decision Fusion of Shape and Motion Information Based on Bayesian Framework for Moving Object Classification in Image Sequences

Author(s): Heungkyu Lee, JungHo Kim, June Kim

Sensors, 2021, Vol 21 (11), pp. 3722
Author(s): Byeongkeun Kang, Yeejin Lee

Motion in videos refers to the pattern of apparent movement of objects, surfaces, and edges over image sequences caused by the relative motion between a camera and a scene. Motion, like scene appearance, is an essential cue for estimating a driver's visual attention allocation in computer vision. However, while attention prediction models based on scene appearance have been studied extensively, the role that motion can play in estimating a driver's attention has not been thoroughly studied in the literature. Therefore, in this work, we investigate the usefulness of motion information for estimating a driver's visual attention. To analyze its effectiveness, we develop a deep neural network framework that predicts attention locations and attention levels from optical flow maps, which represent the movement of content in videos. We validate the proposed motion-based prediction model by comparing it against current state-of-the-art models that use RGB frames. Experimental results on a real-world dataset confirm our hypothesis that motion contributes to prediction accuracy, and that there is a margin for further accuracy improvement through motion features.
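The abstract does not specify how the optical flow maps are computed. As a toy illustration of the underlying idea of apparent motion between consecutive frames, the sketch below estimates the displacement of a single patch by exhaustive SSD block matching; the function name `block_flow` and the search scheme are our own simplification, not the paper's dense optical flow estimator.

```python
import numpy as np

def block_flow(prev, curr, y, x, patch=8, search=4):
    """Estimate the (dy, dx) displacement of one patch between two frames
    by exhaustive sum-of-squared-differences search (a toy stand-in for a
    dense optical flow method)."""
    ref = prev[y:y + patch, x:x + patch]
    best, best_d = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = curr[y + dy:y + dy + patch, x + dx:x + dx + patch]
            ssd = ((ref - cand) ** 2).sum()
            if ssd < best:
                best, best_d = ssd, (dy, dx)
    return best_d

# Synthetic check: shift a random frame by (2, 1) and recover the shift.
rng = np.random.default_rng(0)
prev = rng.random((64, 64))
curr = np.roll(prev, shift=(2, 1), axis=(0, 1))
flow = block_flow(prev, curr, y=20, x=20)
```

A real system would run such an estimate densely (or use a learned flow network) and stack the resulting flow maps as input channels to the attention model.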


2020, Vol 10 (19), pp. 6945
Author(s): Kin-Choong Yow, Insu Kim

Object localization is an important task in the visual surveillance of scenes, with applications in locating personnel and/or equipment in large open spaces such as a farm or a mine. Traditionally, object localization is performed using stereo vision: two fixed cameras for a moving object, or a single moving camera for a stationary object. This research addresses the problem of determining the location of a moving object using only a single moving camera, without any prior information on the object's type or size. Our technique uses a single camera mounted on a quadrotor drone, which flies in a specific pattern relative to the object in order to remove the depth ambiguity associated with their relative motion. In our previous work, we showed that three images suffice to recover the location of an object moving parallel to the direction of motion of the camera. In this research, we find that four images suffice to recover the location of an object moving linearly in an arbitrary direction. We evaluated our algorithm on over 70 image sequences of objects moving in various directions, and the results showed a much smaller depth error rate (typically less than 8.0%) than other state-of-the-art algorithms.
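Why four images suffice for arbitrary linear motion can be illustrated with a simple counting argument: an object on a linear trajectory has six unknowns (initial position x0 and velocity v), and each bearing observation constrains the object to a ray, contributing two independent equations, so four views give eight constraints. The sketch below (our own hypothetical formulation with invented names like `localize_linear`, not the authors' algorithm) solves the constraint d_i x (x0 + t_i v - c_i) = 0 by linear least squares, assuming known camera centers c_i and unit bearings d_i:

```python
import numpy as np

def skew(d):
    """Cross-product matrix so that skew(d) @ w == np.cross(d, w)."""
    return np.array([[0, -d[2], d[1]],
                     [d[2], 0, -d[0]],
                     [-d[1], d[0], 0]])

def localize_linear(cams, bearings, times):
    """Recover (x0, v) of a linearly moving object from bearing observations.
    Each observation enforces d_i x (x0 + t_i*v - c_i) = 0, linear in (x0, v)."""
    A, b = [], []
    for c, d, t in zip(cams, bearings, times):
        S = skew(d / np.linalg.norm(d))
        A.append(np.hstack([S, t * S]))  # 3 rows per view (rank 2)
        b.append(S @ c)
    sol, *_ = np.linalg.lstsq(np.vstack(A), np.concatenate(b), rcond=None)
    return sol[:3], sol[3:]

# Synthetic check: four camera positions, an object with known x0 and v.
cams = [np.array(c, float) for c in
        [(0, 0, 0), (1, 0, 0.5), (2, 1, 0), (3, 0.5, 1)]]
times = [0.0, 1.0, 2.0, 3.0]
x0_true, v_true = np.array([5.0, 5.0, 10.0]), np.array([0.5, -0.2, 0.3])
bearings = [x0_true + t * v_true - c for c, t in zip(cams, times)]
x0_est, v_est = localize_linear(cams, bearings, times)
```

With noise-free synthetic bearings the system is consistent and the least-squares solution recovers the ground truth exactly (up to floating-point error); real systems must additionally handle degenerate camera/object motion patterns, which is what the drone's flight pattern is designed to avoid.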


2020, Vol 10 (21), pp. 7941
Author(s): Dongyue Yang, Chen Chang, Guohua Wu, Bin Luo, Longfei Yin

Ghost imaging reconstructs an image from the second-order correlation of repeatedly measured light fields. When the observed object is moving, the consecutive sampling procedure leads to motion blur in the reconstructed images. To overcome this defect, we propose a novel ghost imaging method that obtains the motion information of a moving object from a small number of measurements, over which the object can be regarded as approximately static. Our method exploits compressive sensing for superior image reconstruction, combined with low-order moments of the images to extract the motion information directly, which saves both time and computation. With gradual motion estimation and compensation during the imaging process, experimental results show that the proposed method effectively overcomes motion blur, while also reducing the number of measurements required for each motion estimation and improving the reconstructed image quality.
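The "low-order moments" idea can be illustrated with first-order image moments: a pure translation of the object shifts the intensity centroid by exactly the same amount, so comparing centroids of two reconstructions yields the displacement without any dense registration. The sketch below is a minimal simplification with hypothetical names, not the authors' compressive-sensing pipeline:

```python
import numpy as np

def centroid(img):
    """First-order moment (intensity centroid) of a non-negative image."""
    ys, xs = np.indices(img.shape)
    m00 = img.sum()
    return np.array([(ys * img).sum() / m00, (xs * img).sum() / m00])

def estimate_translation(img_a, img_b):
    """Translation estimate as the centroid shift between two reconstructions."""
    return centroid(img_b) - centroid(img_a)

# Synthetic check: a Gaussian blob translated by (3, 5) pixels.
ys, xs = np.indices((64, 64))
blob = np.exp(-((ys - 30) ** 2 + (xs - 30) ** 2) / (2 * 4.0 ** 2))
shifted = np.roll(blob, shift=(3, 5), axis=(0, 1))
motion = estimate_translation(blob, shifted)
```

Because the moment computation is a single weighted sum per axis, the motion estimate costs far less than reconstructing and registering full images, which is the computational advantage the abstract refers to.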


2006, Vol 51 (3-4), pp. 559-578
Author(s): C. Bruni, D. Iacoviello, G. Koch, M. Lucchetti

Share Document