Future-Frame Prediction for Fast-Moving Objects with Motion Blur

Sensors ◽  
2020 ◽  
Vol 20 (16) ◽  
pp. 4394
Author(s):  
Dohae Lee ◽  
Young Jin Oh ◽  
In-Kwon Lee

We propose a deep neural network model that recognizes the position and velocity of a fast-moving object in a video sequence and predicts the object’s future motion. When filming a fast-moving subject using a regular camera rather than a super-high-speed camera, there is often severe motion blur, making it difficult to recognize the exact location and speed of the object in the video. Additionally, because the fast-moving object usually moves rapidly out of the camera’s field of view, the number of captured frames used as input for future-motion prediction should be minimized. Our model takes a short video sequence of two frames containing a fast-moving object as input, uses the motion blur as additional information to recognize the position and velocity of the object, and predicts the video frame containing the object’s future motion. Experiments show that our model performs significantly better than existing future-frame prediction models in determining the future position and velocity of an object in two physical scenarios where a fast-moving two-dimensional object appears.
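A minimal sketch of this kind of prediction setup, assuming PyTorch: the two captured frames (with the blur left in the pixels) are stacked along the channel axis and a small encoder–decoder regresses the future frame. The class name, layer sizes, and image resolution are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class FuturePredictor(nn.Module):
    """Toy encoder-decoder: two stacked RGB frames in, one predicted future frame out.
    Illustrative only -- not the network described in the paper."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(6, 32, 3, stride=2, padding=1), nn.ReLU(),   # 2 frames x 3 channels
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, frame_t0, frame_t1):
        x = torch.cat([frame_t0, frame_t1], dim=1)   # motion blur stays in the pixel data
        return self.decoder(self.encoder(x))

# Usage: predict the frame that follows two captured frames.
model = FuturePredictor()
f0, f1 = torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)
predicted_next = model(f0, f1)                        # shape (1, 3, 64, 64)
```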

2014 ◽  
Vol 556-562 ◽  
pp. 3549-3552
Author(s):  
Lian Fen Huang ◽  
Qing Yue Chen ◽  
Jin Feng Lin ◽  
He Zhi Lin

The key to background subtraction, which is widely used in moving-object detection, is setting up and updating the background model. This paper presents a block-based background subtraction method built on ViBe that exploits the spatial correlation and temporal continuity of the video sequence. First, the background model of the video sequence is set up. Then, the background model is updated through block processing. Finally, the difference between the current frame and the background model is used to extract moving objects.
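A rough sketch of the block-wise idea, assuming NumPy and grayscale frames: pixels are classified against a ViBe-style sample model, and the model is then refreshed one block at a time only where the block looks like background. The parameter values and the 50% block threshold are illustrative choices, not the paper's exact algorithm.

```python
import numpy as np

N_SAMPLES, RADIUS, MIN_MATCHES, BLOCK = 20, 20, 2, 16   # illustrative parameters

def init_model(first_frame):
    """Background model = N noisy copies of the first grayscale frame (ViBe-style)."""
    noise = np.random.randint(-10, 11, (N_SAMPLES,) + first_frame.shape)
    return np.clip(first_frame[None].astype(int) + noise, 0, 255)

def segment_and_update(model, frame):
    """Classify pixels against the sample model, then update the model block by block."""
    matches = (np.abs(model - frame[None].astype(int)) < RADIUS).sum(axis=0)
    foreground = matches < MIN_MATCHES
    h, w = frame.shape
    for y in range(0, h, BLOCK):
        for x in range(0, w, BLOCK):
            blk = foreground[y:y+BLOCK, x:x+BLOCK]
            if blk.mean() < 0.5:                      # block looks like background
                k = np.random.randint(N_SAMPLES)      # conservative random substitution
                model[k, y:y+BLOCK, x:x+BLOCK] = frame[y:y+BLOCK, x:x+BLOCK]
    return foreground.astype(np.uint8) * 255          # binary moving-object mask
```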


2015 ◽  
Vol 27 (4) ◽  
pp. 430-443 ◽  
Author(s):  
Jun Chen ◽  
Qingyi Gu ◽  
Tadayoshi Aoyama ◽  
Takeshi Takaki ◽  
...  

Figure: Blink-spot projection method

We present a blink-spot projection method for observing moving three-dimensional (3D) scenes. The proposed method can reduce the synchronization errors of sequential structured light illumination, which are caused by multiple light patterns projected with different timings when fast-moving objects are observed. In our method, a series of spot array patterns, whose spot sizes change at different timings corresponding to their identification (ID) numbers, is projected onto the scene to be measured by a high-speed projector. Based on simultaneous and robust frame-to-frame tracking of the projected spots using their ID numbers, the 3D shape of the measured scene can be obtained without misalignments, even when there are fast movements in the camera view. We implemented our method with a high-frame-rate projector-camera system that can process 512 × 512 pixel images in real time at 500 fps to track and recognize 16 × 16 spots in the images. Its effectiveness was demonstrated through several 3D shape measurements with the 3D module mounted on a fast-moving six-degrees-of-freedom manipulator.
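A toy sketch of ID-based frame-to-frame spot tracking, in Python: because each projected spot carries an identification number, correspondences across frames come for free and trajectories never get misaligned. The `(spot_id, x, y)` data layout is an assumption for illustration; the real 500 fps spot detection and 3D reconstruction stages are omitted.

```python
from collections import defaultdict

def track_spots(frames):
    """Accumulate a per-ID trajectory from per-frame spot detections.

    `frames` is a list of detections, one list per camera frame, each detection
    being (spot_id, x, y). Because correspondence comes from the ID encoded by
    the blinking pattern, no nearest-neighbour matching is needed.
    """
    trajectories = defaultdict(list)                  # spot_id -> [(x, y), ...]
    for detections in frames:
        for spot_id, x, y in detections:
            trajectories[spot_id].append((x, y))
    return trajectories

# Usage with two toy frames of a 2x2 spot array:
frames = [
    [(0, 10.0, 10.0), (1, 40.0, 10.0), (2, 10.0, 40.0), (3, 40.0, 40.0)],
    [(0, 12.5, 10.4), (1, 42.4, 10.6), (2, 12.6, 40.3), (3, 42.5, 40.5)],
]
print(track_spots(frames)[1])                         # trajectory of spot ID 1
```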


2016 ◽  
Vol 9 (18) ◽  
pp. 63
Author(s):  
Jongsu Hwang

The naval combat management system conducts a comprehensive analysis of the obtained information and integrates it with on-board sensors, weapons, and other equipment. Furthermore, it automatically performs processes such as engagement planning, weapon assignment, and target detection using pre-deployed database information. It is therefore important to manage the moving objects reported by many different kinds of sensors. In this paper, we introduce a moving-object management method for combat systems using the In Moving Object DBMS, which is based on main-memory database technology that provides high-speed transaction processing performance.
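A toy illustration of the main-memory idea, in Python: tracks from different sensors live in an in-process hash table, giving constant-time updates plus cheap dead-reckoning and range queries. The class and method names are hypothetical and are not part of the combat system or its DBMS.

```python
import time

class InMemoryTrackStore:
    """Toy main-memory store for sensor tracks: O(1) upsert, simple spatial query.
    Illustrative of the in-memory approach only, not the actual moving-object DBMS."""
    def __init__(self):
        self.tracks = {}                               # track_id -> (x, y, vx, vy, t)

    def upsert(self, track_id, x, y, vx, vy):
        self.tracks[track_id] = (x, y, vx, vy, time.time())

    def predict(self, track_id, dt):
        x, y, vx, vy, _ = self.tracks[track_id]
        return x + vx * dt, y + vy * dt                # dead-reckoned position after dt seconds

    def within(self, cx, cy, radius):
        return [tid for tid, (x, y, *_) in self.tracks.items()
                if (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2]
```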


2010 ◽  
Vol 7 (4) ◽  
pp. 931-945 ◽  
Author(s):  
Ivana Nizetic ◽  
Kresimir Fertalj

Whereas research on moving objects spans a variety of application areas, models and methods for movement prediction are often tailored to a specific type of moving object. Moreover, in most cases, prediction models take only historical locations into consideration, while characteristics specific to a certain type of moving object are ignored. In this paper, we present a conceptual model for movement prediction that is independent of the application area and of the data model of moving objects, and that takes various object characteristics into account. Related work is critically evaluated, addressing advantages, possible problems, and room for improvement. A generic model is proposed, based on the idea of covering the missing pieces in related work and making the model as general as possible. The prediction process is illustrated on three case studies, predicting the future location of vehicles, people, and wild animals, in order to show their differences and how the process can be applied to all of them.


2021 ◽  
Vol 11 (6) ◽  
pp. 2805
Author(s):  
Jie Gao ◽  
Yiping Cao ◽  
Jin Chen ◽  
Xiuzhang Huang

When the measured object moves fast online, the captured deformed pattern may exhibit motion blur, and some phase information is lost. The frame rate therefore has to be increased by adjusting the camera's image acquisition mode to adapt to a fast-moving object, but this sacrifices the resolution of the captured deformed pattern. A super-resolution image reconstruction method based on maximum a posteriori (MAP) estimation is therefore adopted to obtain high-resolution deformed patterns, and the reconstructed high-resolution patterns also suppress noise well. Finally, all the reconstructed high-resolution equivalent phase-shifting deformed patterns are used for online three-dimensional (3D) reconstruction. Experimental results prove the effectiveness of the proposed method, which has good application prospects in high-precision, fast online 3D measurement.
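A simplified sketch of the MAP reconstruction step, assuming NumPy/SciPy and a single low-resolution frame: gradient descent on a data-fidelity term (blur-then-decimate forward model) plus a Laplacian smoothness prior. The blur kernel, regularization weight, and step size are illustrative assumptions, not the paper's multi-frame formulation.

```python
import numpy as np
from scipy.ndimage import convolve, zoom

def map_super_resolution(low_res, scale=2, lam=0.05, step=0.2, iters=100):
    """Gradient-descent MAP estimate of a high-res image from one low-res frame.

    Minimizes ||down(blur(x)) - y||^2 + lam * ||laplacian(x)||^2.
    A simplified single-frame sketch of the MAP idea, not the paper's method.
    """
    blur_k = np.full((3, 3), 1.0 / 9.0)                       # crude sensor blur model
    lap_k = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], float)

    def down(img):
        return img[::scale, ::scale]

    def up(img):                                              # adjoint of decimation: zero insertion
        out = np.zeros((img.shape[0] * scale, img.shape[1] * scale))
        out[::scale, ::scale] = img
        return out

    x = zoom(low_res.astype(float), scale, order=1)           # bilinear initial guess
    for _ in range(iters):
        residual = down(convolve(x, blur_k)) - low_res        # data-fidelity residual
        grad = convolve(up(residual), blur_k) + lam * convolve(convolve(x, lap_k), lap_k)
        x -= step * grad
    return np.clip(x, 0, 255)
```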


2015 ◽  
Vol 15 (7) ◽  
pp. 23-34
Author(s):  
Atanas Nikolov ◽  
Dimo Dimov

Abstract The current research concerns the problem of video stabilization “in a point”, which aims to stabilize all video frames with respect to one chosen reference frame so as to produce a new video, as if taken by a static camera. The importance of this task lies in providing a static background in the video sequence, which enables correct measurements in the frames when studying dynamic objects in the video. For this purpose we propose an efficient combined approach, called “3×3OF9×9”. It fuses our previous development for fast and rigid 2D video stabilization [2] with the well-known optical flow approach, applied piecewise via Otsu segmentation to eliminate the influence of moving objects in the video. The obtained results are compared with those produced by the commercial Warp Stabilizer of Adobe After Effects CS6.
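A loose sketch of the combined idea, assuming OpenCV: the dense optical flow magnitude is Otsu-thresholded to mask out moving objects, and the remaining static pixels supply the features for estimating a rigid warp back to the reference frame. Parameter values and function choices are assumptions; this is not the authors' 3×3OF9×9 pipeline.

```python
import cv2
import numpy as np

def stabilizing_transform(ref_gray, cur_gray):
    """Estimate a rigid/similarity warp from the current frame back to the reference,
    masking out moving objects with dense optical flow + Otsu thresholding."""
    flow = cv2.calcOpticalFlowFarneback(ref_gray, cur_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=2)
    mag_u8 = cv2.normalize(mag, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    _, moving = cv2.threshold(mag_u8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    static_mask = cv2.bitwise_not(moving)                 # keep only (near-)static pixels

    pts_ref = cv2.goodFeaturesToTrack(ref_gray, 500, 0.01, 8, mask=static_mask)
    pts_cur, status, _ = cv2.calcOpticalFlowPyrLK(ref_gray, cur_gray, pts_ref, None)
    good = status.ravel() == 1
    warp, _ = cv2.estimateAffinePartial2D(pts_cur[good], pts_ref[good])
    return warp                                           # apply with cv2.warpAffine(cur, warp, size)
```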


2021 ◽  
Author(s):  
Wided Oueslati ◽  
Sonia Tahri ◽  
Hela Limam ◽  
Jalel Akaichi

Abstract Nowadays, huge amounts of tracking data related to moving objects are being generated and collected in suitable repositories thanks to GPS devices, RFID sensors, satellites, and wireless communication technologies. Tracked moving objects can be pedestrians, cars, vessels, planes, animals, or natural disasters. The latter generate trajectory data that contain a great deal of knowledge. For this reason, these trajectory data sets need an urgent and effective analysis process and constitute a rich source for inferring mobility patterns. Predicting the future position of a given moving object is one of the important tasks in the knowledge discovery process. In fact, being able to predict a moving object’s future position related to natural phenomena would allow decision makers to take strategic decisions in order to help humanity and prevent or mitigate the propagation of natural catastrophes. The aim of this paper is to propose a new approach to predict the future position of a moving object, especially a moving region, based on mobility patterns. To achieve this aim, we experiment with our approach on a real case study involving hurricanes as moving regions. The proposed approach is composed of three phases. The first phase generates object mobility patterns. In the second phase, spatio-temporal mobility rules are extracted from the generated patterns. In the third and last phase, the hurricane’s future position is predicted using the extracted rules.
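A toy sketch of the three phases, in Python: trajectories are discretized into grid cells (a crude stand-in for mobility patterns), cell-to-cell transitions are kept as mobility rules, and the most frequent rule predicts the next position. The cell size, the toy tracks, and the function names are illustrative assumptions, not the paper's method.

```python
from collections import Counter, defaultdict

def cell(lat, lon, size=1.0):
    """Discretize a position into a grid cell (a crude stand-in for pattern regions)."""
    return (round(lat / size), round(lon / size))

def mine_rules(trajectories, size=1.0):
    """Phases 1-2 sketch: count cell-to-cell transitions and keep them as mobility rules."""
    transitions = defaultdict(Counter)
    for traj in trajectories:                         # traj = [(lat, lon), ...] ordered in time
        cells = [cell(lat, lon, size) for lat, lon in traj]
        for a, b in zip(cells, cells[1:]):
            transitions[a][b] += 1
    return transitions

def predict_next(transitions, lat, lon, size=1.0):
    """Phase 3 sketch: the rule with highest support gives the most likely next cell."""
    candidates = transitions.get(cell(lat, lon, size))
    return candidates.most_common(1)[0][0] if candidates else None

# Usage with two toy hurricane tracks (lat, lon in degrees):
tracks = [[(15.0, -40.0), (16.1, -42.2), (17.3, -44.0)],
          [(15.2, -40.3), (16.0, -42.1), (17.5, -44.4)]]
rules = mine_rules(tracks)
print(predict_next(rules, 16.05, -42.15))             # -> most frequent follow-up cell
```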

