rain removal
Recently Published Documents

TOTAL DOCUMENTS: 115 (five years: 68)
H-INDEX: 13 (five years: 6)

2021 ◽  
Vol 65 (1) ◽  
Author(s):  
Hong Wang ◽  
Yichen Wu ◽  
Minghan Li ◽  
Qian Zhao ◽  
Deyu Meng
Keyword(s):  

2021 ◽  
Author(s):  
Yanyan Wei ◽  
Zhao Zhang ◽  
Mingliang Xu ◽  
Richang Hong ◽  
Jicong Fan ◽  
...  

Synchronous Rain streaks and Raindrops Removal (SR3) is a challenging task, since rain streaks and raindrops are two widely divergent real-scene phenomena with different optical properties and mathematical distributions. As a result, most existing deep learning-based Single Image Deraining (SID) methods focus on only one of the two. To address this issue, we propose a new, robust and hybrid SID model, termed Robust Attention Deraining Network (RadNet), with strong robustness and generalization ability. The robustness of RadNet has two implications: (1) it can restore different degradations, including raindrops, rain streaks, or both; and (2) it can adapt to different data strategies, including single-type, superimposed-type, and blended-type. Specifically, we first design a lightweight robust attention module (RAM) with a universal attention mechanism for coarse rain removal, and then present a new deep refining module (DRM) with multi-scale blocks for precise rain removal. The whole process is unified in a single network to ensure sufficient robustness and strong generalization ability. We measure the performance of several SID methods on the SR3 task under a variety of data strategies, and extensive experiments demonstrate that RadNet outperforms other state-of-the-art SID methods.
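The coarse-to-fine structure described in this abstract can be illustrated with a minimal PyTorch sketch. The module names RAM and DRM follow the abstract, but their internals (channel widths, kernel sizes, the subtractive attention step) are assumptions for illustration, not the authors' published architecture.

```python
# Minimal sketch of a coarse-to-fine deraining pipeline: an attention module
# for coarse rain removal followed by a multi-scale refinement module.
# Layer choices are assumptions, not the RadNet paper's exact design.
import torch
import torch.nn as nn


class RAM(nn.Module):
    """Lightweight attention module for coarse rain removal (assumed structure)."""
    def __init__(self, channels=32):
        super().__init__()
        self.feat = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True))
        # Spatial attention map highlighting rain-affected regions.
        self.attn = nn.Sequential(
            nn.Conv2d(channels, 1, 3, padding=1), nn.Sigmoid())
        self.out = nn.Conv2d(channels, 3, 3, padding=1)

    def forward(self, x):
        f = self.feat(x)
        return x - self.out(f * self.attn(f))   # subtract estimated rain layer


class DRM(nn.Module):
    """Multi-scale refinement module for precise rain removal (assumed structure)."""
    def __init__(self, channels=32):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv2d(3, channels, k, padding=k // 2) for k in (3, 5, 7)])
        self.fuse = nn.Conv2d(3 * channels, 3, 3, padding=1)

    def forward(self, coarse):
        feats = torch.cat([torch.relu(b(coarse)) for b in self.branches], dim=1)
        return coarse + self.fuse(feats)          # residual refinement


class RadNetSketch(nn.Module):
    def __init__(self):
        super().__init__()
        self.ram, self.drm = RAM(), DRM()

    def forward(self, rainy):
        return self.drm(self.ram(rainy))


if __name__ == "__main__":
    x = torch.randn(1, 3, 128, 128)               # dummy rainy image
    print(RadNetSketch()(x).shape)                # torch.Size([1, 3, 128, 128])
```

In this sketch, RAM estimates an attention-weighted rain layer and subtracts it to produce a coarse result, and DRM refines that result with parallel multi-scale convolutions, mirroring the two-step pipeline the abstract describes.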


2021 ◽  
Vol 2021 ◽  
pp. 1-8
Author(s):  
Kangying Wang ◽  
Minghui Wang

Rain occludes and blurs background and target objects, degrading both the visual quality of an image and subsequent image analysis. To address the incomplete rain removal of current deraining algorithms and improve the accuracy of downstream computer vision tasks, this paper proposes a multistage framework based on progressive restoration, combining a recurrent neural network with feature complementarity to remove rain streaks from single images. First, an encoder-decoder subnetwork is adopted to learn multiscale information and extract richer rain features. Second, the original-resolution image restored by the decoder is used to preserve fine image details. Finally, a recurrent neural network uses the effective information of the previous stage to guide rain removal in the next stage. Experimental results show that the multistage feature complementarity network performs well on both synthetic and real-world rainy datasets: compared with popular single-image deraining methods, it removes rain more completely, preserves more background details, and achieves better visual quality.
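A minimal sketch of the multistage, recurrent idea described above follows, assuming a small encoder-decoder per stage and a single convolutional hidden state carried between stages. The layer sizes and the weight sharing across stages are illustrative assumptions, not this paper's exact design.

```python
# Minimal sketch of multistage progressive deraining with a recurrent state:
# each stage runs an encoder-decoder and passes a hidden feature map forward.
# Sizes are assumptions; spatial dimensions are assumed even for simplicity.
import torch
import torch.nn as nn


class StageNet(nn.Module):
    """One stage: encoder-decoder restoration plus a simple convolutional recurrence."""
    def __init__(self, channels=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3 + channels, channels, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True))
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(channels, channels, 4, stride=2, padding=1), nn.ReLU(inplace=True))
        self.to_image = nn.Conv2d(channels, 3, 3, padding=1)
        # Recurrent update: mix decoded features with the previous hidden state.
        self.recurrent = nn.Conv2d(2 * channels, channels, 3, padding=1)

    def forward(self, x, hidden):
        feat = self.decoder(self.encoder(torch.cat([x, hidden], dim=1)))
        new_hidden = torch.tanh(self.recurrent(torch.cat([feat, hidden], dim=1)))
        return x + self.to_image(feat), new_hidden     # residual restoration


class MultiStageDerain(nn.Module):
    def __init__(self, stages=3, channels=32):
        super().__init__()
        self.stages, self.channels = stages, channels
        self.stage = StageNet(channels)                # weights shared across stages

    def forward(self, rainy):
        n, _, h, w = rainy.shape
        hidden = torch.zeros(n, self.channels, h, w, device=rainy.device)
        out = rainy
        for _ in range(self.stages):
            out, hidden = self.stage(out, hidden)
        return out


if __name__ == "__main__":
    x = torch.randn(1, 3, 64, 64)
    print(MultiStageDerain()(x).shape)                 # torch.Size([1, 3, 64, 64])
```

The hidden state plays the role of the "effective information of the previous stage" mentioned in the abstract: each stage conditions on what earlier stages have already removed.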


Author(s):  
Bingcai Wei ◽  
Liye Zhang ◽  
Kangtao Wang ◽  
Qun Kong ◽  
Zhuang Wang

Extracting traffic information from images plays an increasingly significant role in the Internet of Vehicles (IoV). However, due to the high-speed movement and bumps of the vehicle, images are blurred during acquisition. In addition, on rainy days, rain attached to the lens occludes the target and distorts the image. These problems are major obstacles to extracting key information from traffic images: they impair the vehicle control system's real-time judgment of road conditions, can cause decision-making errors, and may even lead to traffic accidents. In this paper, we propose a motion-blur restoration and rain removal algorithm for IoV based on a generative adversarial network and transfer learning. Dynamic scene deblurring and image de-raining are both challenging classical research directions in low-level vision. For both tasks, first, instead of using ReLU in a conventional residual block, we design a residual block containing three 256-channel convolutional layers with the Leaky-ReLU activation function. Second, we use generative adversarial networks with our residual blocks for both the image deblurring and the image de-raining task. Third, experimental results on the synthetic blur dataset GOPRO and the real blur dataset RealBlur confirm the effectiveness of our model for image deblurring. Finally, treating image de-raining as a transfer learning task, we fine-tune the pre-trained model with less training data and show good results on several image rain removal datasets.
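The residual block described above, three 256-channel convolutional layers with Leaky-ReLU instead of ReLU plus a skip connection, can be sketched directly in PyTorch. The kernel size and negative slope are not stated in the abstract and are assumptions here.

```python
# Minimal sketch of a residual block with three 256-channel convolutions and
# Leaky-ReLU activations, as described in the abstract. Kernel size (3x3) and
# negative slope (0.2) are assumptions, not values given by the paper.
import torch
import torch.nn as nn


class ResBlock256(nn.Module):
    def __init__(self, channels=256, negative_slope=0.2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.LeakyReLU(negative_slope, inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.LeakyReLU(negative_slope, inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)    # residual (skip) connection


if __name__ == "__main__":
    x = torch.randn(1, 256, 32, 32)
    print(ResBlock256()(x).shape)  # torch.Size([1, 256, 32, 32])
```

Blocks of this kind would presumably be stacked inside the GAN's generator and reused when the pre-trained deblurring model is fine-tuned for de-raining, per the transfer learning setup the abstract describes.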


2021 ◽  
Author(s):  
Cong Wang ◽  
Honghe Zhu ◽  
Wanshu Fan ◽  
Xiao-Ming Wu ◽  
Junyang Chen
Keyword(s):  

2021 ◽  
Author(s):  
Zhipeng Su ◽  
Yixiong Zhang ◽  
Xiao-Ping Zhang ◽  
Feng Qi
Keyword(s):  

2021 ◽  
pp. 179-186
Author(s):  
Junhua Shao ◽  
Qiang Li
Keyword(s):  

2021 ◽  
Vol 2035 (1) ◽  
pp. 012041
Author(s):  
Xiaomiao Pan ◽  
Yueting Yang ◽  
Chuansheng Yang ◽  
Chao Wang ◽  
Anhui Tan
