Underwater Image Restoration Based on a Parallel Convolutional Neural Network

2019 · Vol 11 (13) · pp. 1591
Author(s): Keyan Wang, Yan Hu, Jun Chen, Xianyun Wu, Xi Zhao, et al.

Restoring degraded underwater images is a challenging ill-posed problem. Existing prior-based approaches have limited performance in many situations because of their reliance on handcrafted features. In this paper, we propose an effective convolutional neural network (CNN) for underwater image restoration. The proposed network consists of two parallel branches: a transmission estimation network (T-network) and a global ambient light estimation network (A-network); in particular, the T-network employs cross-layer connections and multi-scale estimation to prevent halo artifacts and to preserve edge features. The estimates produced by these two branches are leveraged to restore the clear image according to the underwater optical imaging model. Moreover, we develop a new underwater image synthesizing method for building the training datasets, which can simulate images captured in various underwater environments. Experimental results on synthetic and real images demonstrate that, compared with several state-of-the-art methods, our approach produces restored images with more natural colors and better visibility.
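As an illustration of the restoration step described above, the following is a minimal NumPy sketch of how the two branch outputs could be combined, assuming the commonly used simplified imaging model I = J·t + A·(1 − t); the function and argument names are illustrative, not taken from the paper.

```python
import numpy as np

def restore_underwater(image, transmission, ambient, t_min=0.1):
    """Invert the simplified underwater imaging model I = J*t + A*(1 - t).

    image        : HxWx3 degraded image in [0, 1]
    transmission : HxW (or HxWx3) transmission map, e.g. from a T-network
    ambient      : length-3 global ambient light, e.g. from an A-network
    t_min        : lower bound on t to avoid amplifying noise in dark regions
    """
    t = np.clip(transmission, t_min, 1.0)
    if t.ndim == 2:                        # broadcast a single-channel map over RGB
        t = t[..., np.newaxis]
    A = np.asarray(ambient).reshape(1, 1, 3)
    restored = (image - A) / t + A         # J = (I - A) / t + A
    return np.clip(restored, 0.0, 1.0)
```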

Mathematics · 2021 · Vol 9 (6) · pp. 595
Author(s): Huajun Song, Rui Wang

To address the two problems of color deviation and poor visibility in underwater images, this paper proposes an underwater image enhancement method based on dual-model multi-scale fusion and global stretching (MFGS), which does not rely on the underwater optical imaging model. The proposed method consists of three stages. In the first stage, white balancing is selected to correct the color deviation because, compared with other color correction algorithms, it can effectively eliminate the undesirable color cast caused by medium attenuation. Then, to address the poor performance of the saliency weight map in traditional fusion processing, an updated saliency weight coefficient strategy combining contrast and spatial cues is proposed to achieve high-quality fusion. Finally, since analysis of the results of the above steps shows that brightness and clarity still need improvement, global stretching of all channels in the red, green, blue (RGB) model is applied to enhance color contrast, and selective stretching of the L channel in the CIE-Lab model (Commission Internationale de l'Eclairage) is applied to achieve a better de-hazing effect. Quantitative and qualitative assessments on the underwater image enhancement benchmark dataset (UIEBD) show that the enhanced images of the proposed approach achieve significant improvements in color and visibility.
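As a rough illustration of the third stage, the sketch below applies percentile-based global stretching to the RGB channels and a selective stretch of the L channel in CIE-Lab using NumPy and scikit-image; the percentile thresholds and function names are assumptions for illustration, not details from the paper.

```python
import numpy as np
from skimage import color

def global_stretch(channel, low_pct=1, high_pct=99):
    """Linearly stretch one channel between its low/high percentiles."""
    lo, hi = np.percentile(channel, (low_pct, high_pct))
    return np.clip((channel - lo) / (hi - lo + 1e-6), 0.0, 1.0)

def stretch_rgb_and_lab(fused_rgb):
    """Stage-3 sketch: full-channel RGB stretch, then selective L-channel stretch."""
    # Global stretching of all three RGB channels to boost color contrast.
    rgb = np.stack([global_stretch(fused_rgb[..., c]) for c in range(3)], axis=-1)
    # Selective stretching of the L (lightness) channel in CIE-Lab for de-hazing.
    lab = color.rgb2lab(rgb)
    lab[..., 0] = global_stretch(lab[..., 0]) * 100.0   # L is defined on [0, 100]
    return np.clip(color.lab2rgb(lab), 0.0, 1.0)
```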


Sensors · 2021 · Vol 21 (15) · pp. 5137
Author(s): Elham Eslami, Hae-Bum Yun

Automated pavement distress recognition is a key step in smart infrastructure assessment. Advances in deep learning and computer vision have improved the automated recognition of pavement distresses in road surface images. This task remains challenging due to the high variation of defects in shape and size, demanding better incorporation of contextual information into deep networks. In this paper, we show that an attention-based multi-scale convolutional neural network (A+MCNN) improves the automated classification of common distress and non-distress objects in pavement images by (i) encoding contextual information through multi-scale input tiles and (ii) employing a mid-fusion approach with an attention module for heterogeneous image contexts from different input scales. A+MCNN is trained and tested with four distress classes (crack, crack seal, patch, pothole), five non-distress classes (joint, marker, manhole cover, curbing, shoulder), and two pavement classes (asphalt, concrete). A+MCNN is compared with four deep classifiers that are widely used in transportation applications and a generic CNN classifier (as the control model). The results show that A+MCNN consistently outperforms the baselines by 1∼26% on average in terms of the F-score. A comprehensive discussion is also presented regarding how these classifiers perform differently on different road objects, which has rarely been addressed in the existing literature.
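One plausible reading of the attention-based mid-fusion step is sketched below in PyTorch: per-scale encoders produce feature vectors that an attention head weights before a shared classifier. The layer sizes and module names are hypothetical and only indicate the general structure, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class AttentionMidFusion(nn.Module):
    """Sketch of mid-fusion with an attention module over multi-scale features."""

    def __init__(self, feat_dim=256, num_scales=3, num_classes=11):
        super().__init__()
        # One lightweight encoder per input scale (tile size).
        self.encoders = nn.ModuleList(
            [nn.Sequential(nn.Conv2d(3, feat_dim, 3, padding=1),
                           nn.ReLU(inplace=True),
                           nn.AdaptiveAvgPool2d(1),
                           nn.Flatten()) for _ in range(num_scales)]
        )
        self.attention = nn.Linear(feat_dim, 1)     # one score per scale
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, tiles):                       # tiles: list of per-scale tensors
        feats = torch.stack([enc(t) for enc, t in zip(self.encoders, tiles)], dim=1)
        weights = torch.softmax(self.attention(feats), dim=1)   # (B, S, 1)
        fused = (weights * feats).sum(dim=1)        # attention-weighted fusion
        return self.classifier(fused)
```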


2021 · Vol 13 (3) · pp. 335
Author(s): Yuhao Qing, Wenyi Liu

In recent years, image classification on hyperspectral imagery using deep learning algorithms has attained good results. Spurred by those results, and to further improve classification accuracy, we propose a multi-scale residual convolutional neural network model fused with an efficient channel attention network (MRA-NET) that is appropriate for hyperspectral image classification. The suggested technique comprises a multi-stage architecture, where initially the spectral information of the hyperspectral image is reduced into a two-dimensional tensor using a principal component analysis (PCA) scheme. Then, the constructed low-dimensional image is input to our proposed MRA-NET deep network, which exploits the advantages of its core components, i.e., the multi-scale residual structure and attention mechanisms. We evaluate the performance of the proposed MRA-NET on three publicly available hyperspectral datasets and demonstrate that, overall, the classification accuracy of our method is 99.82%, 99.81%, and 99.37%, respectively, which is higher than the corresponding accuracy of current networks such as the 3D convolutional neural network (CNN), the three-dimensional residual convolution structure (RES-3D-CNN), and the space–spectrum joint deep network (SSRN).
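The PCA-based spectral reduction of the first stage can be sketched as follows with scikit-learn; the number of retained components is an assumption for illustration, not a value reported in the paper.

```python
import numpy as np
from sklearn.decomposition import PCA

def reduce_spectral_bands(cube, n_components=3):
    """Compress the spectral dimension of an H x W x B hyperspectral cube with PCA."""
    h, w, b = cube.shape
    flat = cube.reshape(-1, b)                  # treat each pixel as a B-dim sample
    reduced = PCA(n_components=n_components).fit_transform(flat)
    return reduced.reshape(h, w, n_components)  # low-dimensional image fed to the CNN
```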

