A fast motion deblurring based on the motion blur region search for a mobile phone

Author(s):  
Nam-Joon Kim ◽  
Sungjoo Suh ◽  
Changkyu Choi ◽  
Dusik Park ◽  
Changyeong Kim
2014 ◽  
Vol 2014 ◽  
pp. 1-7 ◽  
Author(s):  
Eunsung Lee ◽  
Eunjung Chae ◽  
Hejin Cheong ◽  
Joonki Paik

This paper presents an image deblurring algorithm that removes motion blur by analyzing motion trajectories and local statistics derived from inertial sensors. The proposed method estimates the point spread function (PSF) of the motion blur by accumulating reweighted projections of the trajectory. The motion-blurred image is then adaptively restored using the estimated PSF and a spatially varying activity map, which reduces both restoration artifacts and noise amplification. Experimental results demonstrate that the proposed method outperforms existing PSF-estimation-based motion deconvolution methods in terms of both objective and subjective performance measures. The proposed algorithm can be employed in various imaging devices because of its efficient implementation without an iterative computational structure.
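The core PSF-estimation step described above can be illustrated with a minimal sketch: given a camera-motion trajectory already projected onto the image plane (e.g., integrated from gyroscope readings), each trajectory sample deposits its weight into a kernel grid, and the accumulated hits are normalized into a blur kernel. The function name, grid size, and weighting scheme here are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def psf_from_trajectory(traj_xy, weights, size=15):
    """Accumulate a blur kernel from a projected motion trajectory.

    traj_xy : (N, 2) array of image-plane displacements (pixels) over the exposure
    weights : (N,) per-sample weights (e.g., exposure-time fractions)
    """
    kernel = np.zeros((size, size))
    c = size // 2  # kernel center
    for (dx, dy), w in zip(traj_xy, weights):
        x, y = int(round(c + dx)), int(round(c + dy))
        if 0 <= x < size and 0 <= y < size:
            kernel[y, x] += w  # reweighted projection of the trajectory sample
    s = kernel.sum()
    return kernel / s if s > 0 else kernel
```

A horizontal camera shake, for instance, produces a kernel whose mass lies along the central row, which is exactly the streak shape seen in horizontally motion-blurred photographs.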


2020 ◽  
Vol 10 (6) ◽  
pp. 2151
Author(s):  
Wenbin Wang ◽  
Chao Liu ◽  
Bo Xu ◽  
Long Li ◽  
Wei Chen ◽  
...  

Visual object trackers based on correlation filters have recently demonstrated substantial robustness to challenging conditions involving variations in illumination and motion blur. Nonetheless, such models depend strongly on the spatial layout and are highly sensitive to deformation, scale change, and occlusion. As presented in this paper, colour attributes are combined because their complementary characteristics handle variations in shape well. In addition, a novel approach for robust scale estimation is proposed to mitigate the problems caused by fast motion and scale variations. Moreover, feedback from high-confidence tracking results is utilized to prevent model corruption. Evaluation results demonstrate that our tracker performs outstandingly in terms of both precision and accuracy, with enhancements of approximately 25% and 49%, respectively, on authoritative benchmarks compared to other popular correlation-filter-based trackers. Finally, the proposed tracker demonstrates strong robustness, enabling online object tracking under various scenarios at a real-time frame rate of approximately 65 frames per second (FPS).
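The high-confidence feedback idea above is commonly realized in correlation-filter trackers by gating model updates on a response-quality score such as the peak-to-sidelobe ratio (PSR). The sketch below assumes PSR as the confidence measure; the paper's exact criterion and threshold are not specified here, so the function names and the threshold value are illustrative.

```python
import numpy as np

def psr(response, exclude=5):
    """Peak-to-sidelobe ratio of a correlation response map.
    High values indicate a sharp, confident peak."""
    peak = response.max()
    py, px = np.unravel_index(response.argmax(), response.shape)
    mask = np.ones_like(response, dtype=bool)
    # Exclude a window around the peak; the rest is the sidelobe region.
    mask[max(0, py - exclude):py + exclude + 1,
         max(0, px - exclude):px + exclude + 1] = False
    side = response[mask]
    return (peak - side.mean()) / (side.std() + 1e-8)

def should_update(response, threshold=8.0):
    """Only update the filter model on high-confidence responses,
    preventing corruption from occluded or ambiguous frames."""
    return psr(response) >= threshold
```

Skipping updates on low-PSR frames keeps a corrupted target appearance (e.g., during occlusion) from being learned into the filter.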


2021 ◽  
Vol 55 ◽  
pp. 44-53
Author(s):  
Misak Shoyan ◽  
Robert Hakobyan ◽  
Mekhak Shoyan

In this paper, we present deep learning-based blind image deblurring methods for estimating and removing non-uniform motion blur from a single blurry image. We propose two fully convolutional neural networks (CNNs) for solving the problem. The networks are trained end-to-end to reconstruct the latent sharp image directly from the given blurry image, without estimating the blur kernel or making any assumptions about its uniformity or the noise. We demonstrate the performance of the proposed models and show that our approaches can effectively estimate and remove complex non-uniform motion blur from a single blurry image.
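The "fully convolutional, end-to-end" design above means the network is a stack of convolution layers mapping a blurry image directly to a sharp one, with no kernel-estimation stage. The toy pipeline below is a schematic stand-in, not the paper's architecture: the layer count, kernel shapes, and weights (untrained here) are all assumptions for illustration.

```python
import numpy as np

def conv2d(x, k):
    """'Same'-padded 2-D convolution of a single-channel image with kernel k."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

def tiny_fcn(blurry, kernels):
    """A toy fully convolutional pipeline: conv -> ReLU -> conv.
    Input and output keep the same spatial size, so the latent sharp
    image is predicted directly, pixel for pixel."""
    h = np.maximum(conv2d(blurry, kernels[0]), 0.0)  # conv + ReLU
    return conv2d(h, kernels[1])                     # linear output conv
```

Because every layer is convolutional with same padding, the same network applies to any image resolution, which is what makes end-to-end training on full blurry/sharp pairs possible.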


Sensors ◽  
2016 ◽  
Vol 16 (9) ◽  
pp. 1443 ◽  
Author(s):  
Lingyun Xu ◽  
Haibo Luo ◽  
Bin Hui ◽  
Zheng Chang

Sensors ◽  
2021 ◽  
Vol 21 (12) ◽  
pp. 4030
Author(s):  
Wenhua Guo ◽  
Jiabao Gao ◽  
Yanbin Tian ◽  
Fan Yu ◽  
Zuren Feng

Object tracking is one of the most challenging problems in the field of computer vision. In difficult tracking scenarios such as illumination variation, occlusion, motion blur and fast motion, existing algorithms can suffer degraded performance. To make better use of the various features of the image, we propose an object tracking method based on the self-adaptive feature selection (SAFS) algorithm, which selects the most distinguishable feature sub-template to guide the tracking task. The similarity of each feature sub-template is calculated from the histogram of its features. The distinguishability of each feature sub-template is then measured from the resulting similarity matrix based on the maximum a posteriori (MAP) criterion. Through this process, the selection of the feature sub-template is transformed into a classification task between feature vectors, with a modified Jeffreys' entropy adopted as the discriminant metric; this also completes the update of the sub-template. Experiments on eight video sequences from the Visual Tracker Benchmark dataset evaluate the comprehensive performance of SAFS and compare it with five baselines. Experimental results demonstrate that SAFS can overcome the difficulties caused by scene changes and achieve robust object tracking.
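The histogram-plus-Jeffreys'-entropy selection step can be sketched as follows: each feature sub-template is summarized by a normalized histogram, and the symmetric Jeffreys divergence between target and background histograms scores how distinguishable that feature is. The function names, bin count, and the exact target-vs-background framing are illustrative assumptions; the paper's full MAP formulation is more involved.

```python
import numpy as np

def feature_histogram(patch, bins=16):
    """Normalized intensity histogram of a feature patch (values in [0, 1])."""
    h, _ = np.histogram(patch, bins=bins, range=(0.0, 1.0))
    h = h.astype(float)
    return h / (h.sum() + 1e-12)

def jeffreys(p, q, eps=1e-12):
    """Symmetric Jeffreys divergence between two histograms:
    J(p, q) = sum_i (p_i - q_i) * log(p_i / q_i) >= 0."""
    p, q = p + eps, q + eps
    return float(np.sum((p - q) * np.log(p / q)))

def most_discriminative(target_hists, background_hists):
    """Pick the feature sub-template whose target and background
    histograms are farthest apart, i.e., the most distinguishable one."""
    scores = [jeffreys(t, b) for t, b in zip(target_hists, background_hists)]
    return int(np.argmax(scores))
```

A feature whose target histogram matches the background (divergence near zero) is useless for tracking, so the selector naturally discards it in favor of features with a large divergence.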


2020 ◽  
Vol 2020 ◽  
pp. 1-15
Author(s):  
Jiangfan Feng ◽  
Shuang Qi

Motion deblurring and image enhancement have been active research areas for years. Although CNN-based models have advanced the state of the art in motion deblurring and image enhancement, they fail to produce multitask results when challenged with images captured under difficult illumination conditions. The key idea of this paper is a novel multitask learning algorithm for image motion deblurring and color enhancement, which enables us to enhance the color of an image while eliminating motion blur. To achieve this, we explore, for the first time, synchronized processing of the two tasks within the framework of generative adversarial networks (GANs). We add an L1 loss to the generator loss to drive the model to match the target image at the pixel level. To make the generated image closer to the target image at the visual level, we also integrate a perceptual style loss into the generator loss. Extensive experiments yield an effective configuration scheme. The best model, trained for about one week, achieves state-of-the-art performance in both deblurring and enhancement. In addition, its image processing speed is approximately 1.75 times faster than that of the best competitor.
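The generator objective described above combines three terms: an adversarial loss, a pixel-level L1 term, and a feature-level perceptual term. The sketch below shows the general recipe; the weighting coefficients and the `feat` callable (standing in for a pretrained feature extractor such as a VGG layer) are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def generator_loss(fake, target, feat, d_fake, l1_w=100.0, perc_w=10.0):
    """Schematic combined GAN generator loss.

    fake, target : generated and ground-truth images (arrays)
    feat         : stand-in feature extractor for the perceptual term
    d_fake       : discriminator score(s) on the generated image, in (0, 1]
    """
    adv = -np.mean(np.log(d_fake + 1e-12))            # non-saturating GAN term
    l1 = np.mean(np.abs(fake - target))               # pixel-level match
    perc = np.mean((feat(fake) - feat(target)) ** 2)  # visual-level match
    return adv + l1_w * l1 + perc_w * perc
```

The L1 term keeps per-pixel colors faithful while the perceptual term penalizes differences in feature space, which is what pushes the output toward the target "at the visual level" rather than only pixel by pixel.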


Drones ◽  
2021 ◽  
Vol 5 (4) ◽  
pp. 121
Author(s):  
Buğra ŞİMŞEK ◽  
Hasan Şakir BİLGE

Localization and mapping technologies are of great importance for all varieties of Unmanned Aerial Vehicles (UAVs) to perform their operations. In the near future, the use of micro/nano-size UAVs is planned to increase. Such vehicles are sometimes expendable platforms, and reuse may not be possible. Compact, body-mounted, low-cost cameras are preferred in these UAVs due to weight, cost and size limitations. Visual simultaneous localization and mapping (vSLAM) methods are used to provide situational awareness for micro/nano-size UAVs. Fast rotational movements that occur during flight with gimbal-free, mounted cameras cause motion blur. Above a certain level of motion blur, tracking losses occur, preventing vSLAM algorithms from operating effectively. In this study, a novel vSLAM framework is proposed that prevents tracking losses in micro/nano-UAVs caused by motion blur. In the proposed framework, the blur level of the frames obtained from the platform camera is determined, and frames whose focus measure score falls below a threshold are restored by specific motion-deblurring methods. The major causes of tracking losses are analyzed through experimental studies, and vSLAM algorithms are made robust by the proposed framework. It has been observed that the framework can prevent tracking losses at 5, 10 and 20 fps processing speeds. vSLAM algorithms continue normal operation at processing speeds at which standard vSLAM algorithms previously failed, which can be considered a strength of this study.
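The blur-gating step above, scoring each frame with a focus measure and sending low-scoring frames to the deblurring stage, can be sketched with the variance-of-Laplacian focus measure, a standard choice for blur detection. Whether the paper uses this particular measure is an assumption; the threshold value is likewise illustrative and would be tuned per camera.

```python
import numpy as np

def laplacian_variance(gray):
    """Variance of the Laplacian: a standard focus-measure score.
    Low values indicate a blurry (low-detail) frame.
    Uses a wrap-around 4-neighbor Laplacian for simplicity."""
    lap = (-4 * gray
           + np.roll(gray, 1, axis=0) + np.roll(gray, -1, axis=0)
           + np.roll(gray, 1, axis=1) + np.roll(gray, -1, axis=1))
    return float(lap.var())

def needs_deblurring(frame, threshold=0.01):
    """Gate in the pipeline: frames whose focus score falls below the
    threshold are routed to the motion-deblurring stage before vSLAM
    feature tracking."""
    return laplacian_variance(frame) < threshold
```

Because the score is cheap to compute per frame, the gate adds little latency, and only the genuinely blurred frames pay the cost of restoration.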


2020 ◽  
Vol 34 (07) ◽  
pp. 11882-11889 ◽  
Author(s):  
Kuldeep Purohit ◽  
A. N. Rajagopalan

In this paper, we address the problem of dynamic scene deblurring in the presence of motion blur. Restoration of images affected by severe blur necessitates a network design with a large receptive field, which existing networks attempt to achieve through simple increments in the number of generic convolution layers, the kernel size, or the scales at which the image is processed. However, these techniques ignore the non-uniform nature of blur, and they come at the expense of increased model size and inference time. We present a new architecture composed of region-adaptive dense deformable modules that implicitly discover the spatially varying shifts responsible for non-uniform blur in the input image and learn to modulate the filters. This capability is complemented by a self-attentive module which captures non-local spatial relationships among the intermediate features and enhances the spatially varying processing capability. We incorporate these modules into a densely connected encoder-decoder design which utilizes pre-trained DenseNet filters to further improve performance. Our network facilitates interpretable modeling of the spatially varying deblurring process while dispensing with multi-scale processing and large filters entirely. Extensive comparisons with prior art on benchmark dynamic scene deblurring datasets clearly demonstrate the superiority of the proposed networks via significant improvements in accuracy and speed, enabling almost real-time deblurring.
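The self-attentive module described above aggregates information across all spatial positions, weighted by feature similarity, which is how non-local relationships are captured without enlarging convolution kernels. The sketch below shows the generic non-local attention computation on flattened spatial features; it is a simplified stand-in, not the paper's module, and omits the learned query/key/value projections a real implementation would include.

```python
import numpy as np

def self_attention(feats):
    """Non-local self-attention over flattened spatial positions.

    feats : (n, c) array -- n spatial positions, c channels.
    Every position aggregates features from all others, weighted by
    softmax-normalized pairwise similarity."""
    n, c = feats.shape
    logits = feats @ feats.T / np.sqrt(c)        # pairwise similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    w = np.exp(logits)
    w /= w.sum(axis=1, keepdims=True)            # softmax attention weights
    return w @ feats                             # non-local aggregation
```

Each output position is a convex combination of all input positions, so context from arbitrarily distant regions can influence the local deblurring decision, something a fixed-size convolution cannot do.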

