motion blur
Recently Published Documents


TOTAL DOCUMENTS: 791 (five years: 205)
H-INDEX: 31 (five years: 4)

Author(s): Xiaoqian Huang, Mohamad Halwani, Rajkumar Muthusamy, Abdulla Ayyad, Dewald Swart, ...

Robotic vision plays a key role in perceiving the environment in grasping applications. However, conventional frame-based robotic vision, which suffers from motion blur and a low sampling rate, may not meet the automation needs of evolving industrial requirements. This paper proposes, for the first time, an event-based robotic grasping framework for multiple known and unknown objects in a cluttered scene. Exploiting the event camera's microsecond-level sampling rate and freedom from motion blur, model-based and model-free approaches are developed for grasping known and unknown objects, respectively. The model-based approach uses an event-based multi-view method to localize the objects in the scene, then applies point cloud processing to cluster and register them. The model-free approach, on the other hand, combines event-based object segmentation, visual servoing, and grasp planning to localize, align to, and grasp the target object. Using a UR10 robot with an eye-in-hand neuromorphic camera and a Barrett hand gripper, the proposed approaches are experimentally validated with objects of different sizes. The framework also demonstrates robustness and a significant advantage over grasping with a traditional frame-based camera in low-light conditions.
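As a rough illustration of the event-camera preprocessing such a pipeline relies on, the sketch below accumulates a raw event stream into a 2D frame over a short time window. The `(t, x, y, polarity)` layout, the window length, and the function name are assumptions for illustration; the paper's actual event representation may differ.

```python
import numpy as np

def accumulate_events(events, width, height, window=10_000):
    """Accumulate an event stream into a signed 2D frame.

    `events` is an array of (t_us, x, y, polarity) rows; only events
    within the last `window` microseconds are kept, exploiting the
    sensor's microsecond timestamps. Layout and window are assumed.
    """
    frame = np.zeros((height, width))
    t_end = events[:, 0].max()
    recent = events[events[:, 0] >= t_end - window]
    for t, x, y, p in recent:
        frame[int(y), int(x)] += 1 if p > 0 else -1
    return frame
```

A frame like this can then feed conventional segmentation or localization stages.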


2022, Vol 355, pp. 03005
Author(s): Yunhong Wang, Dan Liu

Blind image deblurring, which improves the sharpness of an image as a prerequisite step, is a long-standing challenging problem. Iterative methods are widely used for image deblurring, but care must be taken to ensure fast convergence and accurate solutions. To address these problems, we propose a gradient-wise step size search strategy for iterative methods that improves robustness and accelerates the deblurring process. We further modify the conjugate gradient method with the proposed strategy to solve the blind image deblurring problem. The gradient-wise strategy updates the step size for each pixel individually, instead of scaling it by a fixed factor, and the modified conjugate gradient method thereby improves both convergence and computation speed. Experimental results show that our method effectively estimates the sharp image for both motion-blurred and defocused images. The results on synthetic datasets and natural images are better than those achieved by other state-of-the-art blind image deblurring methods.
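A minimal sketch of the gradient-wise step-size idea, assuming a known blur kernel and plain gradient descent rather than the paper's blind, conjugate-gradient formulation; the per-pixel normalization `1 / (1 + |grad|)` is a hypothetical choice, not the paper's rule.

```python
import numpy as np
from numpy.fft import fft2, ifft2

def blur(img, psf_fft):
    # Circular convolution with the kernel spectrum.
    return np.real(ifft2(fft2(img) * psf_fft))

def deblur_gradientwise(blurred, psf, iters=50):
    """Least-squares deconvolution with a per-pixel step size.

    Minimizes 0.5 * ||A x - b||^2 by gradient descent, where A is the
    (known, illustrative) blur operator; each pixel gets its own step
    instead of a single fixed factor.
    """
    psf_fft = fft2(psf, s=blurred.shape)
    x = blurred.copy()
    for _ in range(iters):
        residual = blur(x, psf_fft) - blurred          # A x - b
        grad = blur(residual, np.conj(psf_fft))        # A^T (A x - b)
        # Gradient-wise step: damped where the local gradient is large
        # (a hypothetical normalization, bounded by 1 for stability).
        step = 1.0 / (1.0 + np.abs(grad))
        x -= step * grad
    return x
```

Because each step is at most 1 and the normalized blur operator has spectral norm at most 1, the data-fit residual decreases monotonically.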


Author(s): Anchal Kumawat, Sucheta Panda

Often in practice, an acquired image is degraded during acquisition by factors such as noise, motion blur, camera mis-focus, and atmospheric turbulence, rendering it unsuitable for further analysis or processing. To improve the quality of such degraded images, a double hybrid restoration filter is proposed: the same input image is processed along two paths and the outputs are fused into a unified result using image fusion. The first path applies deconvolution with the Wiener filter (DWF) twice and decomposes the output using the discrete wavelet transform (DWT). The second path is processed simultaneously, applying deconvolution with the Lucy–Richardson filter (DLR) twice, followed by the same decomposition. The proposed filter outperforms the DWF and DLR filters on both blurry and noisy images. It is compared with standard deconvolution algorithms and state-of-the-art restoration filters using seven image quality assessment parameters. Simulation results confirm the success of the proposed algorithm, with impressive visual and quantitative results.
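The Wiener deconvolution step at the core of the DWF path can be sketched in the frequency domain as follows; the noise-to-signal ratio `nsr` is an assumed constant, not a value from the paper.

```python
import numpy as np
from numpy.fft import fft2, ifft2

def wiener_deconvolve(blurred, psf, nsr=0.01):
    """Frequency-domain Wiener deconvolution.

    X = conj(H) / (|H|^2 + NSR) * B, where H is the PSF spectrum, B the
    blurred-image spectrum, and NSR an assumed noise-to-signal ratio.
    The regularizer keeps near-zero frequencies of H from exploding.
    """
    H = fft2(psf, s=blurred.shape)
    B = fft2(blurred)
    X = np.conj(H) / (np.abs(H) ** 2 + nsr) * B
    return np.real(ifft2(X))
```

The DWF path in the paper would apply a step like this twice before the DWT decomposition.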


Author(s): Sinh Huynh, Rajesh Krishna Balan, JeongGil Ko

Gaze tracking is a key building block in many mobile applications, including entertainment, personal productivity, accessibility, medical diagnosis, and visual attention monitoring. In this paper, we present iMon, an appearance-based gaze tracking system that is designed for use on mobile phones and is significantly more accurate than prior state-of-the-art solutions. iMon achieves this by considering the gaze estimation pipeline comprehensively and overcoming three different sources of error. First, instead of assuming that the user's gaze is fixed to a single 2D coordinate, we construct each gaze label as a probabilistic 2D heatmap, absorbing the errors caused by microsaccadic eye motions that make the exact gaze point uncertain. Second, we design an image enhancement model to refine visual details and remove motion blur effects from the input eye images. Finally, we apply a calibration scheme to correct for differences between the perceived and actual gaze points caused by individual differences in Kappa angle. With these improvements, iMon achieves a person-independent per-frame tracking error of 1.49 cm on smartphones and 1.94 cm on tablets when tested on the GazeCapture dataset, and 2.01 cm on the TabletGaze dataset, outperforming previous state-of-the-art solutions by approximately 22% to 28%. By averaging multiple per-frame estimates belonging to the same fixation point and applying personal calibration, the tracking error is further reduced to 1.11 cm (smartphones) and 1.59 cm (tablets). Finally, we built an implementation on an iPhone 12 Pro and show that iMon can run at up to 60 frames per second, making gaze-based control of applications possible.
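A minimal sketch of the heatmap gaze label, assuming an isotropic Gaussian around the nominal gaze point; `sigma` is an illustrative spread, not a value from the paper.

```python
import numpy as np

def gaze_heatmap(x, y, width, height, sigma=3.0):
    """Encode a gaze point as a 2D Gaussian probability heatmap.

    Instead of a single (x, y) coordinate, the label spreads probability
    mass around the point, reflecting the uncertainty microsaccades
    introduce. sigma (pixels) is an assumed spread.
    """
    xs = np.arange(width)
    ys = np.arange(height)
    gx = np.exp(-((xs - x) ** 2) / (2 * sigma ** 2))
    gy = np.exp(-((ys - y) ** 2) / (2 * sigma ** 2))
    hm = np.outer(gy, gx)
    return hm / hm.sum()   # normalize to a probability distribution
```

A network trained against such labels predicts a distribution over gaze locations rather than a single point.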


2021, Vol 2021, pp. 1-12
Author(s): Yafeng Feng, Xianguo Liu

Video event detection and annotation are important parts of video analysis and the basis of video content retrieval. Basketball is one of the most popular sports, and detecting and labeling events in basketball videos helps viewers quickly locate events of interest and meets retrieval needs. This paper studies the application of anisotropic diffusion to video image smoothing, denoising, and enhancement, and analyzes an improved form of anisotropic diffusion suitable for video image enhancement. For coherent speckle noise removal, we propose a video image denoising method that combines anisotropic diffusion with the stationary wavelet transform. We further propose an anisotropic diffusion method based on visual characteristics that preserves image detail while smoothing, improving the visual effect of diffusion.
The paper then applies anisotropic diffusion to video image segmentation. We introduce the classic watershed segmentation algorithm and use forward-backward diffusion to preprocess video images and reduce oversegmentation; we also introduce the active contour model and its improved GVF Snake, and show how anisotropic diffusion can be used to improve the GVF Snake into a new GGVF Snake model. For basketball segmentation in close-up shots, we propose an improved Hough transform based on a variable-direction filter that effectively extracts the center and radius of the basketball and is robust to partial occlusion and motion blur. For wide-angle shots, the commonly used change-area-based object segmentation is very sensitive to noise and requires that the object not move too fast. To correct the segmentation deviation caused by video noise and fast basketball motion, we apply corrections based on the peak characteristics of the edge gradient, improve the internal and external energy terms of the traditional active contour model, and establish criteria for the regional optimal solution and segmentation validity. For basketball tracking, we propose an improved block matching method: to overcome the effect of the ball's own rotation, we establish a matching criterion independent of region location, and we adapt the diamond motion search path to the ball's motion correlation and center-offset characteristics, reducing the number of searches and increasing tracking speed.
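The classic Perona-Malik scheme that such diffusion variants build on can be sketched as follows; `kappa` and `gamma` are illustrative parameters, and periodic borders (via `np.roll`) are used for brevity.

```python
import numpy as np

def anisotropic_diffusion(img, iters=10, kappa=0.1, gamma=0.2):
    """Perona-Malik anisotropic diffusion (illustrative parameters).

    Smooths homogeneous regions while preserving edges: the conduction
    coefficient g = exp(-(|grad|/kappa)^2) shuts diffusion down where
    the local gradient (a likely edge) is large.
    """
    g = lambda d: np.exp(-(d / kappa) ** 2)
    u = img.astype(float).copy()
    for _ in range(iters):
        # Finite differences to the four neighbors (periodic borders).
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        u += gamma * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```

With `gamma <= 0.25` the explicit update is stable for the four-neighbor stencil.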


2021, Vol 14 (1), pp. 87
Author(s): Yeping Peng, Zhen Tang, Genping Zhao, Guangzhong Cao, Chao Wu

Unmanned aerial vehicle (UAV)-based imaging is an attractive technology for monitoring wind turbine blades (WTBs). In such applications, image motion blur is a challenging problem, so motion deblurring is of great significance for monitoring running WTBs. A practical obstacle, however, is the lack of sufficient WTB images, in particular pairs of sharp and blurred images captured under the same conditions, for network model training. To overcome this challenge of image pair acquisition, a training sample synthesis method is proposed. Sharp images of static WTBs were first captured, and video sequences were then recorded of WTBs running at different speeds. Blurred images were identified in the video sequences and matched to the sharp images using image differencing. To expand the dataset, rotational motion blur was simulated on different WTBs, and synthetic image pairs were produced by fusing sharp images with the simulated blurs, yielding a total of 4000 image pairs. For motion deblurring, a hybrid deblurring network integrating DeblurGAN and DeblurGANv2 was deployed. The results show that the combination of DeblurGANv2 and Inception-ResNet-v2 produces better deblurred images, in terms of both signal-to-noise ratio (80.138) and structural similarity (0.950), than the comparable DeblurGAN and MobileNet-DeblurGANv2 networks.
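One way to simulate rotational motion blur like that described above is to average copies of a sharp image rotated by small angles about the hub; this nearest-neighbor sketch is an assumed simulation, not the paper's exact procedure, and the angle range and step count are illustrative.

```python
import numpy as np

def rotational_blur(img, center, max_angle_deg=5.0, steps=15):
    """Simulate rotational motion blur by averaging rotated copies.

    Each copy samples the image along coordinates rotated about
    `center` (inverse warping, nearest-neighbor). Averaging over the
    angle sweep mimics the exposure integral of a rotating blade.
    """
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    cy, cx = center
    acc = np.zeros_like(img, dtype=float)
    for ang in np.linspace(0.0, np.radians(max_angle_deg), steps):
        c, s = np.cos(ang), np.sin(ang)
        # Rotate sampling coordinates about the center.
        rx = cx + c * (xs - cx) - s * (ys - cy)
        ry = cy + s * (xs - cx) + c * (ys - cy)
        rxi = np.clip(np.round(rx).astype(int), 0, w - 1)
        ryi = np.clip(np.round(ry).astype(int), 0, h - 1)
        acc += img[ryi, rxi]
    return acc / steps
```

Fusing such simulated blurs with the corresponding sharp frames yields synthetic training pairs.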


Sensors, 2021, Vol 21 (24), pp. 8481
Author(s): Khizer Mehmood, Ahmad Ali, Abdul Jalil, Baber Khan, Khalid Mehmood Cheema, ...

Visual object tracking (VOT) is a vital part of many computer vision applications, including surveillance, unmanned aerial vehicles (UAVs), and medical diagnostics. In recent years, substantial progress has been made on challenges of VOT such as scale change, occlusion, motion blur, and illumination variation. This paper proposes a tracking algorithm in the spatiotemporal context (STC) framework. To overcome the limitations of STC under scale variation, a max-pooling-based scale scheme is incorporated that maximizes over the posterior probability. To prevent the target model from drifting, an efficient occlusion-handling mechanism is proposed: occlusion is detected from an average peak-to-correlation energy (APCE) measure of the response map between consecutive frames, and on successful detection a fractional-gain Kalman filter handles the occlusion. The model additionally uses the APCE criterion to adapt the target model under motion blur and other factors. Extensive evaluation shows that the proposed algorithm achieves significant results against various tracking methods.
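The APCE measure used for occlusion detection has a standard closed form, sketched below; a sharp, unoccluded response map scores high, while occlusion flattens the map and lowers the score, which is what triggers the handling step.

```python
import numpy as np

def apce(response):
    """Average peak-to-correlation energy of a tracker response map.

    APCE = |F_max - F_min|^2 / mean((F - F_min)^2).
    A single sharp peak yields a high APCE; a flat or multi-peaked map
    (occlusion, heavy blur) drives it down.
    """
    f_max, f_min = response.max(), response.min()
    return (f_max - f_min) ** 2 / np.mean((response - f_min) ** 2)
```

A tracker would compare APCE across consecutive frames and freeze model updates when it drops sharply.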


2021, Vol 55, pp. 44-53
Author(s): Misak Shoyan, Robert Hakobyan, Mekhak Shoyan

In this paper, we present deep learning-based blind image deblurring methods for estimating and removing non-uniform motion blur from a single blurry image. We propose two fully convolutional neural networks (CNNs) to solve the problem. The networks are trained end-to-end to reconstruct the latent sharp image directly from the given blurry image, without estimating the blur kernel or making assumptions about its uniformity or the noise. We demonstrate the performance of the proposed models and show that our approaches can effectively remove complex non-uniform motion blur from a single blurry image.


2021
Author(s): Ben J Hardcastle, Karin Bierig, Francisco JH Heras, Daniel A Schwyn, Kit D Longden, ...

Gaze stabilization reflexes reduce motion blur and simplify the processing of visual information by keeping the eyes level. These reflexes typically depend on estimates of the rotational motion of the body, head, and eyes, acquired by visual or mechanosensory systems. During rapid movements, there can be insufficient time for sensory feedback systems to estimate rotational motion, and additional mechanisms are required. The solutions to this common problem likely reflect an animal's behavioral repertoire. Here, we examine gaze stabilization in three families of dipteran flies, each with distinctly different flight behaviors. Through frequency response analysis based on tethered-flight experiments, we demonstrate that fast roll oscillations of the body lead to a stable gaze in hoverflies, whereas the reflex breaks down at the same speeds in blowflies and horseflies. Surprisingly, the high-speed gaze stabilization of hoverflies does not require sensory input from the halteres, their low-latency balance organs. Instead, we show how the behavior is explained by a hybrid control system that combines a sensory-driven, active stabilization component mediated by neck muscles, and a passive component which exploits physical properties of the animal's anatomy: the mass and inertia of the head. This solution requires hoverflies to have specializations of the head-neck joint that can be employed during flight. Our comparative study highlights how species-specific control strategies have evolved to support different visually-guided flight behaviors.

