Moving Object Velocity Detection Based on Motion Blur on Photos Using Gray Level

Author(s):
Julio Alfian Dwicahya, Nana Ramadijanti, Achmad Basuki


2020, Vol 10 (21), pp. 7941
Author(s):
Dongyue Yang, Chen Chang, Guohua Wu, Bin Luo, Longfei Yin

Ghost imaging reconstructs an image from the second-order correlation of repeatedly measured light fields. When the observed object is moving, the consecutive sampling procedure introduces motion blur into the reconstructed images. To overcome this defect, we propose a novel ghost imaging method that obtains the motion information of a moving object from a small number of measurements, within which the object can be regarded as approximately static. Our method exploits the idea of compressive sensing for superior image reconstruction and combines it with the low-order moments of the images to extract the motion information directly, which saves time and computation. With gradual motion estimation and compensation during the imaging process, the experimental results show that the proposed method effectively overcomes the motion blur while reducing the number of measurements needed for each motion estimation and improving the reconstructed image quality.
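A minimal numerical sketch of the two ingredients named above, assuming random illumination patterns and a single-pixel bucket detector: each short measurement block is reconstructed from the second-order correlation of the patterns and the bucket signal, and the first-order moments (centroids) of two consecutive reconstructions give the displacement used for motion compensation. The compressive-sensing reconstruction of the paper is not reproduced here; all sizes and thresholds are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
H = W = 32

def gi_reconstruct(patterns, bucket):
    # Second-order correlation: <I*P> - <I><P>, evaluated pixel-wise.
    return ((bucket[:, None, None] * patterns).mean(axis=0)
            - bucket.mean() * patterns.mean(axis=0))

def centroid(image, rel_threshold=0.5):
    # First-order moments give the intensity centroid; a simple threshold
    # suppresses reconstruction noise before the moments are taken.
    img = np.where(image > rel_threshold * image.max(), image, 0.0)
    ys, xs = np.indices(img.shape)
    total = img.sum() + 1e-12
    return (xs * img).sum() / total, (ys * img).sum() / total

def measure(obj, n_patterns=2000):
    # Random patterns and the corresponding bucket (total intensity) values.
    patterns = rng.random((n_patterns, H, W))
    bucket = (patterns * obj).sum(axis=(1, 2))
    return patterns, bucket

obj = np.zeros((H, W)); obj[10:14, 8:12] = 1.0   # object during block 1
obj_moved = np.roll(obj, shift=3, axis=1)        # moved 3 pixels in x by block 2

img1 = gi_reconstruct(*measure(obj))
img2 = gi_reconstruct(*measure(obj_moved))
dx = centroid(img2)[0] - centroid(img1)[0]
print(f"estimated x-displacement between blocks: {dx:.2f} pixels")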


2017, Vol 2017, pp. 1-14
Author(s):
Chia-Feng Chang, Jiunn-Lin Wu, Ting-Yu Tsai

One of the most common artifacts in digital photography is motion blur. When an image is captured under dim light with a handheld camera, the photographer’s hand tends to shake, blurring the image. In response to this problem, image deblurring has become an active topic in computational photography and image processing in recent years. From a signal-processing viewpoint, image deblurring reduces to a deconvolution problem if the kernel function of the motion blur is assumed to be shift invariant. However, the kernel function is not always shift invariant in real cases; for example, in-plane rotation of the camera or a moving object blurs different parts of an image according to different kernel functions. An image degraded by multiple blur kernels is called a nonuniform blur image. In this paper, we propose a novel single-image deblurring algorithm for nonuniform motion blur caused by a moving object. First, a uniform defocus map method is proposed to measure the amounts and directions of the motion blur. The detected blurred regions are then used to estimate the point spread functions simultaneously. Finally, a fast deconvolution algorithm restores the nonuniform blur image. We expect the proposed method to achieve satisfactory deblurring of a single nonuniform blur image.
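As a rough illustration of the per-region restoration step, the sketch below builds the line-shaped point spread function of a linear motion blur from an estimated length and direction and restores the region in the frequency domain. A plain Wiener filter is substituted for the paper's fast deconvolution, and the blur parameters, region size, and noise-to-signal ratio are assumed values.

import numpy as np

def motion_psf(length, angle_deg, size=31):
    # Line-shaped PSF of a uniform linear motion blur.
    psf = np.zeros((size, size))
    c, theta = size // 2, np.deg2rad(angle_deg)
    for t in np.linspace(-length / 2, length / 2, 4 * int(length) + 1):
        y, x = int(round(c + t * np.sin(theta))), int(round(c + t * np.cos(theta)))
        if 0 <= y < size and 0 <= x < size:
            psf[y, x] = 1.0
    return psf / psf.sum()

def psf_to_otf(psf, shape):
    # Zero-pad the PSF to the image size and move its centre to the origin.
    pad = [(s // 2 - p // 2, s - p - (s // 2 - p // 2)) for s, p in zip(shape, psf.shape)]
    return np.fft.fft2(np.fft.ifftshift(np.pad(psf, pad)))

def wiener_deblur(blurred, psf, nsr=1e-2):
    # Wiener deconvolution: multiply by conj(H) / (|H|^2 + NSR) in Fourier space.
    H = psf_to_otf(psf, blurred.shape)
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * np.conj(H) / (np.abs(H) ** 2 + nsr)))

# Simulated blurred region with an assumed blur of ~15 px at 30 degrees.
sharp = np.zeros((128, 128)); sharp[40:90, 40:90] = 1.0
psf = motion_psf(length=15, angle_deg=30)
blurred = np.real(np.fft.ifft2(np.fft.fft2(sharp) * psf_to_otf(psf, sharp.shape)))
restored = wiener_deblur(blurred, psf)
print("mean restoration error:", float(np.abs(restored - sharp).mean()))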


2010, Vol 2010, pp. 1-9
Author(s):
Kenta Goto, Katsunari Shibata

To develop a robot that behaves flexibly in the real world, it is essential that the robot learn the necessary functions autonomously, without receiving significant prior information from a human. Among such functions, this paper focuses on learning “prediction”, which has recently attracted attention from the viewpoint of autonomous learning. The authors point out that it is important to acquire through learning not only how to predict future information but also how to purposively extract the prediction target from sensor signals. It is suggested that, through reinforcement learning with a recurrent neural network, both abilities emerge purposively and simultaneously, without testing individually whether each piece of information is predictable. In a task in which an agent receives a reward when it catches a moving object that may become invisible, the agent learned to detect the necessary components of the object velocity before the object disappeared, to relay that information among some hidden neurons, and finally to catch the object at an appropriate position and time, accounting for bounces off a wall after the object became invisible.
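For readers unfamiliar with the setup, the fragment below sketches the kind of recurrent policy the paragraph refers to: an Elman-style layer whose hidden state is the only memory the agent has, so any information it needs after the object disappears, such as the object velocity, must be relayed through the hidden neurons. The layer sizes, nonlinearity, and random inputs are illustrative assumptions and do not reproduce the authors' network or training procedure.

import numpy as np

class RecurrentPolicy:
    # Elman-style recurrent layer followed by a linear readout of action values.
    def __init__(self, n_in, n_hidden, n_actions, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.normal(0.0, 0.1, (n_hidden, n_in))
        self.W_rec = rng.normal(0.0, 0.1, (n_hidden, n_hidden))
        self.W_out = rng.normal(0.0, 0.1, (n_actions, n_hidden))
        self.h = np.zeros(n_hidden)

    def step(self, observation):
        # The hidden state is the agent's only memory: whatever it needs after
        # the object becomes invisible has to be encoded and relayed here.
        self.h = np.tanh(self.W_in @ observation + self.W_rec @ self.h)
        return self.W_out @ self.h

policy = RecurrentPolicy(n_in=64, n_hidden=30, n_actions=3)
for frame in np.random.default_rng(1).random((10, 64)):   # 10 sensor frames
    action_values = policy.step(frame)
print("action values at the final step:", action_values)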


Sensors, 2020, Vol 20 (16), pp. 4394
Author(s):
Dohae Lee, Young Jin Oh, In-Kwon Lee

We propose a deep neural network model that recognizes the position and velocity of a fast-moving object in a video sequence and predicts the object’s future motion. When filming a fast-moving subject using a regular camera rather than a super-high-speed camera, there is often severe motion blur, making it difficult to recognize the exact location and speed of the object in the video. Additionally, because the fast moving object usually moves rapidly out of the camera’s field of view, the number of captured frames used as input for future-motion predictions should be minimized. Our model can capture a short video sequence of two frames with a high-speed moving object as input, use motion blur as additional information to recognize the position and velocity of the object, and predict the video frame containing the future motion of the object. Experiments show that our model has significantly better performance than existing future-frame prediction models in determining the future position and velocity of an object in two physical scenarios where a fast-moving two-dimensional object appears.
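A much simplified stand-in, not the authors' architecture: a small convolutional network that stacks the two blurred frames as input channels and regresses the object's current position and velocity. The layer sizes and the four-value output head are assumptions for illustration; the actual model additionally predicts a future video frame.

import torch
import torch.nn as nn

class TwoFrameMotionNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.head = nn.Linear(32 * 4 * 4, 4)   # (x, y, vx, vy)

    def forward(self, frames):                 # frames: (batch, 2, H, W)
        z = self.features(frames).flatten(1)
        return self.head(z)

model = TwoFrameMotionNet()
two_frames = torch.rand(1, 2, 128, 128)        # a pair of consecutive blurred frames
print(model(two_frames))                       # tensor of shape (1, 4)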


2021, Vol 11 (6), pp. 2805
Author(s):
Jie Gao, Yiping Cao, Jin Chen, Xiuzhang Huang

When the measured object moves quickly online, the captured deformed pattern may exhibit motion blur, and some phase information is lost. The frame rate therefore has to be increased by adjusting the camera’s image-acquisition mode to suit the fast-moving object, but this sacrifices the resolution of the captured deformed pattern. A super-resolution image reconstruction method based on maximum a posteriori (MAP) estimation is therefore adopted to obtain high-resolution deformed patterns; the reconstructed high-resolution patterns also suppress noise well. Finally, all the reconstructed high-resolution equivalent phase-shifting deformed patterns are used for online three-dimensional (3D) reconstruction. Experimental results demonstrate the effectiveness of the proposed method, which has good application prospects for high-precision, fast online 3D measurement.
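The sketch below illustrates the MAP idea in its simplest form, not the paper's exact model: the high-resolution pattern is taken as the minimizer of a data term, which block-averages it down to the captured low-resolution deformed pattern, plus a smoothness prior, solved by plain gradient descent. The scale factor, prior weight, step size, and synthetic fringe pattern are all assumptions.

import numpy as np

def downsample(x, r):                        # D: r-by-r block average
    H, W = x.shape
    return x.reshape(H // r, r, W // r, r).mean(axis=(1, 3))

def upsample_adjoint(y, r):                  # D^T: spread each value over its block
    return np.kron(y, np.ones((r, r))) / (r * r)

def laplacian(x):                            # gradient of the smoothness prior
    return (4 * x - np.roll(x, 1, 0) - np.roll(x, -1, 0)
                  - np.roll(x, 1, 1) - np.roll(x, -1, 1))

def map_super_resolution(y, r=2, lam=0.05, lr=1.0, iters=200):
    # Minimize ||D x - y||^2 + lam * ||grad x||^2 by gradient descent.
    x = np.kron(y, np.ones((r, r)))          # initial guess: replicated pixels
    for _ in range(iters):
        grad = upsample_adjoint(downsample(x, r) - y, r) + lam * laplacian(x)
        x -= lr * grad
    return x

# Hypothetical low-resolution deformed (fringe) pattern:
u = np.arange(64)
y = 0.5 + 0.5 * np.cos(2 * np.pi * u[None, :] / 8 + 0.1 * u[:, None])
x_hr = map_super_resolution(y, r=2)
print(y.shape, "->", x_hr.shape)             # (64, 64) -> (128, 128)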


2018, Vol 32 (2), pp. 243-248
Author(s):
Tantra Nath Jha

Motion blur results when the camera shutter remains open for an extended period while relative motion occurs between the camera and the object. An approach for velocity detection based on motion-blurred images has been implemented using the Radon transformation. The motion blur parameters are first estimated from the acquired images with the Radon transformation and are then used to detect the speed of the moving object in the scene. The work establishes a link between the motion blur information of a 2D image, the camera manufacturer’s data sheet, and the camera calibration.
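A compact sketch of the two stages described above, assuming scikit-image's radon is available and using assumed numbers rather than any specific camera's data sheet: the blur direction is estimated by taking the Radon transform of the log power spectrum of the blurred image and picking the projection angle with the largest variance, and the blur length in pixels is then converted to object speed with the pixel pitch, focal length, object distance, and exposure time from the data sheet (a pinhole-camera relation). The blur-length estimation itself is omitted, and the helper names are hypothetical.

import numpy as np
from skimage.transform import radon

def blur_angle(blurred, angles=np.arange(0.0, 180.0)):
    # The sinc ripples in the spectrum of a linear motion blur line up with the
    # motion direction, so the Radon projection variance peaks at that angle
    # (up to the angle convention of the Radon implementation).
    spectrum = np.log1p(np.abs(np.fft.fftshift(np.fft.fft2(blurred))))
    sinogram = radon(spectrum - spectrum.mean(), theta=angles, circle=False)
    return angles[np.argmax(sinogram.var(axis=0))]    # degrees

def object_speed(blur_length_px, pixel_pitch_m, focal_length_m,
                 object_distance_m, exposure_time_s):
    # Pinhole model: one pixel spans pixel_pitch * Z / f metres in the object
    # plane, so the displacement during the exposure divided by the exposure
    # time gives the speed.
    displacement = blur_length_px * pixel_pitch_m * object_distance_m / focal_length_m
    return displacement / exposure_time_s

# Hypothetical values (not from any specific camera data sheet):
speed = object_speed(blur_length_px=25, pixel_pitch_m=4.8e-6,
                     focal_length_m=0.016, object_distance_m=10.0,
                     exposure_time_s=1 / 125)
print(f"estimated object speed: {speed:.2f} m/s")     # about 9.4 m/s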

