Fully Convolutional Neural Networks
Recently Published Documents


TOTAL DOCUMENTS: 180 (FIVE YEARS: 94)
H-INDEX: 22 (FIVE YEARS: 10)

2021 · Vol 55 · pp. 44-53
Author(s): Misak Shoyan, Robert Hakobyan, Mekhak Shoyan

In this paper, we present deep learning-based blind image deblurring methods for estimating and removing non-uniform motion blur from a single blurry image. We propose two fully convolutional neural networks (FCNs) for solving the problem. The networks are trained end-to-end to reconstruct the latent sharp image directly from the given blurry image, without estimating the blur kernel or making any assumptions about its uniformity or the noise. We demonstrate the performance of the proposed models and show that our approaches can effectively estimate and remove complex non-uniform motion blur from a single blurry image.
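
As a rough illustration of the end-to-end, image-to-image setup described above, the sketch below shows a small fully convolutional encoder-decoder that maps a blurry image directly to a sharp estimate. The layer sizes, residual formulation, and L1 training loss are assumptions for illustration; the abstract does not specify the authors' networks.

```python
# Illustrative sketch only: architecture and loss are assumptions,
# not the networks proposed in the paper.
import torch
import torch.nn as nn

class DeblurFCN(nn.Module):
    """Fully convolutional encoder-decoder mapping a blurry image to a sharp one."""
    def __init__(self, channels=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, channels, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels * 2, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(channels * 2, channels, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(channels, 3, 4, stride=2, padding=1),
        )

    def forward(self, blurry):
        # Predict a residual and add it back, so the network only has to
        # learn the correction rather than the full image content.
        return torch.clamp(blurry + self.decoder(self.encoder(blurry)), 0.0, 1.0)

model = DeblurFCN()
blurry = torch.rand(1, 3, 256, 256)       # dummy blurry input
sharp_pred = model(blurry)                 # latent sharp image estimate
loss = nn.functional.l1_loss(sharp_pred, torch.rand(1, 3, 256, 256))  # placeholder target
```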


2021 · Vol 13 (23) · pp. 4941
Author(s): Rukhshanda Hussain, Yash Karbhari, Muhammad Fazal Ijaz, Marcin Woźniak, Pawan Kumar Singh, ...

Recently, deep learning-based methods, especially those utilizing fully convolutional neural networks, have shown extraordinary performance in salient object detection. Despite this success, cleanly detecting the boundaries of salient objects remains a challenging task. Most contemporary methods rely on dedicated edge detection modules to avoid noisy boundaries. In this work, we propose extracting finer semantic features from multiple encoding layers and attentively re-using them to generate the final segmentation result. The proposed Revise-Net model is divided into three parts: (a) the prediction module, (b) a residual enhancement module (REM), and (c) reverse attention modules. First, we generate a coarse saliency map through the prediction module, which is then refined in the enhancement module. Finally, multiple reverse attention modules at varying scales are cascaded between the two networks to guide the prediction module, employing the intermediate segmentation maps generated at each downsampling level of the REM. Our method efficiently classifies boundary pixels using a combination of binary cross-entropy, similarity index, and intersection-over-union losses at the pixel, patch, and map levels, thereby effectively segmenting the salient objects in an image. The proposed Revise-Net model outperforms several state-of-the-art frameworks by a significant margin on three publicly available datasets, DUTS-TE, ECSSD, and HKU-IS, on both regional and boundary estimation measures.
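
The pixel-, patch-, and map-level supervision mentioned above can be sketched as a hybrid loss. The formulation below (binary cross-entropy plus a pooled SSIM approximation plus a soft IoU term, equally weighted) is an assumption for illustration, not the exact Revise-Net loss.

```python
# Hedged sketch of a pixel/patch/map-level hybrid loss; weights and the
# SSIM approximation are assumptions, not the paper's formulation.
import torch
import torch.nn.functional as F

def ssim_loss(pred, target, window=11, c1=0.01 ** 2, c2=0.03 ** 2):
    """Patch-level loss based on a mean-pooled SSIM approximation."""
    pad = window // 2
    mu_p = F.avg_pool2d(pred, window, stride=1, padding=pad)
    mu_t = F.avg_pool2d(target, window, stride=1, padding=pad)
    var_p = F.avg_pool2d(pred * pred, window, stride=1, padding=pad) - mu_p ** 2
    var_t = F.avg_pool2d(target * target, window, stride=1, padding=pad) - mu_t ** 2
    cov = F.avg_pool2d(pred * target, window, stride=1, padding=pad) - mu_p * mu_t
    ssim = ((2 * mu_p * mu_t + c1) * (2 * cov + c2)) / ((mu_p ** 2 + mu_t ** 2 + c1) * (var_p + var_t + c2))
    return 1.0 - ssim.mean()

def iou_loss(pred, target, eps=1e-6):
    """Map-level loss: 1 - intersection over union of the soft masks."""
    inter = (pred * target).sum(dim=(1, 2, 3))
    union = (pred + target - pred * target).sum(dim=(1, 2, 3))
    return (1.0 - (inter + eps) / (union + eps)).mean()

def hybrid_loss(pred, target):
    """Pixel (BCE) + patch (SSIM) + map (IoU) supervision, equally weighted."""
    return F.binary_cross_entropy(pred, target) + ssim_loss(pred, target) + iou_loss(pred, target)
```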


2021 · Vol 11 (17) · pp. 7838
Author(s): Cheng-Wei Lei, Li Zhang, Tsung-Ming Tai, Chen-Chieh Tsai, Wen-Jyi Hwang, ...

This study aims to develop a novel automated computer vision algorithm for quality inspection of surfaces with complex patterns. The proposed algorithm is based on both an autoencoder (AE) and a fully convolutional neural network (FCN). The AE is adopted for the self-generation of templates from test targets for defect detection. Because the templates are produced from the test targets themselves, position-alignment issues in the matching operations between templates and test targets are alleviated. The FCN is employed to segment a template into a number of coherent regions. Because the AE's capacity to regenerate each coherent region of the template may differ, segmenting the template with the FCN allows each region to be inspected independently, yielding more accurate detection results. Experimental results reveal that the proposed algorithm has the advantages of simple training data collection, high accuracy for defect detection, and high flexibility for online inspection. The proposed algorithm is therefore an effective option for automated inspection in smart factories, where the demand for reliable, high-quality production is growing.
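
A minimal sketch of how the self-generated template and the FCN segmentation could be combined for region-wise inspection is given below. The `autoencoder` and `segmenter` interfaces and the per-region thresholds are hypothetical; the concrete models and decision rule are not given in the abstract.

```python
# Minimal sketch under assumed interfaces: `autoencoder` regenerates a
# defect-free template from the test image, `segmenter` is an FCN that
# assigns each pixel to one of len(thresholds) coherent regions.
import torch

def inspect(test_image, autoencoder, segmenter, thresholds):
    """Flag the regions of a test target whose reconstruction error is too high."""
    with torch.no_grad():
        template = autoencoder(test_image)             # self-generated, aligned template
        regions = segmenter(template).argmax(dim=1)    # (N, H, W) region labels from the FCN
    error = (test_image - template).abs().mean(dim=1)  # per-pixel reconstruction error
    defective = []
    for region_id, threshold in enumerate(thresholds):
        mask = regions == region_id
        if mask.any() and error[mask].mean() > threshold:  # region-specific tolerance
            defective.append(region_id)
    return defective
```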


Mathematics · 2021 · Vol 9 (14) · pp. 1630
Author(s): Francisco García Riesgo, Sergio Luis Suárez Gómez, Enrique Díez Alonso, Carlos González-Gutiérrez, Jesús Daniel Santos

Reconstruction algorithms usually rely on correlation information from solar Shack–Hartmann wavefront sensors. However, modern applications of artificial neural networks as adaptive optics reconstruction algorithms allow the full sensor image to be used as the input to the system that estimates the correction, avoiding the approximations and loss of information involved in obtaining numerical values of those correlations. Although this approach has been studied for night-time adaptive optics, the solar scenario is more complex because of the resolution of the solar images that may be taken. Fully convolutional neural networks were the technique chosen in this research to address the problem. In this work, wavefront phase recovery for adaptive optics correction is addressed, comparing networks that take sensor images as inputs with networks that take correlation images as inputs. The results show improved performance for phase recovery with the image-to-phase approach; for recovering the turbulence of high-altitude layers, up to 93% similarity is reached.
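
As a sketch of the image-to-phase approach, the snippet below defines a small fully convolutional network that maps a raw Shack-Hartmann sensor image directly to a wavefront phase map. The layer configuration and the single-channel phase output are assumptions, not the authors' architecture.

```python
# Illustrative image-to-phase sketch; sizes and output format are assumptions.
import torch
import torch.nn as nn

class ImageToPhaseFCN(nn.Module):
    """Maps the full sensor image to a phase map of the same spatial size."""
    def __init__(self, width=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(width, width, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, 1, 3, padding=1),   # recovered phase map
        )

    def forward(self, sensor_image):
        return self.net(sensor_image)

model = ImageToPhaseFCN()
sensor_image = torch.rand(1, 1, 128, 128)   # dummy full-frame sensor image
phase_map = model(sensor_image)             # same spatial size as the input
```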

