Direction-aware Feature-level Frequency Decomposition for Single Image Deraining

Author(s):  
Sen Deng ◽  
Yidan Feng ◽  
Mingqiang Wei ◽  
Haoran Xie ◽  
Yiping Chen ◽  
...  

We present a novel direction-aware feature-level frequency decomposition network for single image deraining. Compared with existing solutions, the proposed network has three compelling characteristics. First, unlike previous algorithms, we perform frequency decomposition at the feature level instead of the image level, allowing both the low-frequency maps containing structures and the high-frequency maps containing details to be continuously refined during training. Second, we establish communication channels between the low-frequency and high-frequency maps: structures captured from the high-frequency maps are added back to the low-frequency maps while, simultaneously, details extracted from the low-frequency maps are sent back to the high-frequency maps, thereby removing rain streaks while preserving the more delicate features of the input image. Third, unlike existing algorithms that use convolutional filters consistent in all directions, we propose a direction-aware filter that captures the orientation of rain streaks in order to purge them from the input images more effectively and thoroughly. We extensively evaluate the proposed approach on three representative datasets, and the experimental results corroborate that our approach consistently outperforms state-of-the-art deraining algorithms.
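As a rough illustration of the feature-level decomposition and the cross-branch communication described above, here is a minimal PyTorch sketch. The pooling-based low/high split, the 1x1 "communication" convolutions, and the module name FeatureFrequencySplit are assumptions made for illustration, not the authors' architecture, and the direction-aware filter is omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureFrequencySplit(nn.Module):
    """Hypothetical sketch: split a feature map into low/high-frequency parts
    and let the two branches exchange information, loosely following the idea
    in the abstract (not the authors' exact network)."""

    def __init__(self, channels):
        super().__init__()
        # 1x1 convolutions act as "communication channels" between the branches.
        self.high_to_low = nn.Conv2d(channels, channels, kernel_size=1)
        self.low_to_high = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, feat):
        # Low-frequency map: blurred (downsample + upsample) version of the features.
        low = F.interpolate(F.avg_pool2d(feat, 2), size=feat.shape[-2:],
                            mode="bilinear", align_corners=False)
        # High-frequency map: residual detail.
        high = feat - low
        # Exchange: structure from the high branch goes back to the low branch,
        # detail from the low branch goes back to the high branch.
        low_refined = low + self.high_to_low(high)
        high_refined = high + self.low_to_high(low)
        return low_refined, high_refined

x = torch.randn(1, 32, 64, 64)
low, high = FeatureFrequencySplit(32)(x)
print(low.shape, high.shape)  # torch.Size([1, 32, 64, 64]) twice
```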

Atmosphere ◽  
2021 ◽  
Vol 12 (10) ◽  
pp. 1266
Author(s):  
Jing Qin ◽  
Liang Chen ◽  
Jian Xu ◽  
Wenqi Ren

In this paper, we propose a novel method to remove haze from a single hazy input image based on sparse representation. In our method, the sparse representation is used as a contextual regularization tool, which reduces the block artifacts and halos produced when the dark channel prior is used alone without soft matting, since the transmission is not always constant within a local patch. A novel way of using the dictionary is proposed to smooth the image and generate a sharp dehazed result. Experimental results demonstrate that the proposed method performs favorably against state-of-the-art dehazing methods and produces high-quality dehazed results with vivid colors.
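For context, here is a minimal numpy/scipy sketch of the dark-channel-prior transmission estimate whose patch-wise block artifacts the contextual regularization is meant to reduce. The function names, the patch size, and the omega value are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(image, patch=15):
    """Dark channel prior: per-pixel minimum over color channels and a local patch.
    `image` is an HxWx3 float array in [0, 1]."""
    min_rgb = image.min(axis=2)
    return minimum_filter(min_rgb, size=patch)

def estimate_transmission(hazy, airlight, omega=0.95, patch=15):
    """Coarse transmission map t(x) = 1 - omega * dark_channel(I / A).
    The block artifacts of this patch-wise estimate are what contextual
    regularization (e.g. a sparse-representation prior) aims to suppress."""
    normalized = hazy / np.maximum(airlight, 1e-6)
    return 1.0 - omega * dark_channel(normalized, patch)

hazy = np.random.rand(120, 160, 3)
airlight = np.array([0.9, 0.9, 0.9])
t = estimate_transmission(hazy, airlight)
print(t.shape, float(t.min()), float(t.max()))
```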


2011 ◽  
Vol 204-210 ◽  
pp. 1419-1422 ◽  
Author(s):  
Yong Yang

Image fusion combines several different source images into a new image. Recent studies show that, among the variety of image fusion algorithms, wavelet-based methods are particularly effective. In a wavelet-based method, the key technique is the fusion scheme, which determines the final fused result. This paper presents a novel fusion scheme that integrates the wavelet-decomposed coefficients in a separate way for each band when fusing images. The scheme is motivated by the different physical meanings of the coefficients in the low-frequency and high-frequency bands. The fused results were compared with several existing fusion methods and evaluated by three performance measures. The experimental results demonstrate that the proposed method achieves better performance than conventional image fusion methods.
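To make band-dependent fusion rules concrete, here is a hedged Python sketch using PyWavelets. The specific rules (averaging the low-frequency coefficients, max-magnitude selection for the detail coefficients) are common baselines assumed for illustration, not the scheme proposed in the paper.

```python
import numpy as np
import pywt

def wavelet_fuse(img_a, img_b, wavelet="db2", level=2):
    """Toy fusion in the spirit of the abstract: treat low- and high-frequency
    wavelet coefficients with different rules."""
    ca = pywt.wavedec2(img_a, wavelet, level=level)
    cb = pywt.wavedec2(img_b, wavelet, level=level)
    fused = [(ca[0] + cb[0]) / 2.0]            # low-frequency band: averaging
    for da, db in zip(ca[1:], cb[1:]):         # detail bands: max-magnitude selection
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(da, db)))
    return pywt.waverec2(fused, wavelet)

a = np.random.rand(128, 128)
b = np.random.rand(128, 128)
print(wavelet_fuse(a, b).shape)  # (128, 128)
```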


Author(s):  
Madhu Vankadari ◽  
Swagat Kumar ◽  
Anima Majumder ◽  
Kaushik Das

This paper presents a new GAN-based deep learning framework for estimating absolute scale-aware depth and ego-motion from monocular images using a completely unsupervised mode of learning. The proposed architecture uses two separate generators to learn the distribution of depth and pose data for a given input image sequence. The depth and pose data thus generated are then evaluated by a patch-based discriminator using the reconstructed image and its corresponding actual image. The patch-based GAN (or PatchGAN) is shown to detect high-frequency local structural defects in the reconstructed image, thereby improving the accuracy of the overall depth and pose estimation. Unlike conventional GANs, the proposed architecture uses a conditioned version of the input and output of the generator for training the whole network. The resulting framework is shown to outperform all existing deep networks in this field, beating the current state-of-the-art method by 8.7% in absolute error and 5.2% in the RMSE metric. To the best of our knowledge, this is the first deep-network-based model to estimate both depth and pose simultaneously using a conditional patch-based GAN paradigm. The efficacy of the proposed approach is demonstrated through rigorous ablation studies and an exhaustive performance comparison on the popular KITTI outdoor driving dataset.
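For readers unfamiliar with patch-based discriminators, below is a minimal PyTorch sketch of a PatchGAN that scores a conditioned image pair per local patch rather than producing a single real/fake score, which is what lets it penalize high-frequency local structural defects. The layer widths, input-channel count, and class name are illustrative assumptions rather than the paper's exact network.

```python
import torch
import torch.nn as nn

class PatchDiscriminator(nn.Module):
    """Minimal PatchGAN sketch: outputs a grid of logits, one per local patch."""

    def __init__(self, in_channels=6):  # conditioned: reconstructed + actual image stacked
        super().__init__()

        def block(cin, cout, norm=True):
            layers = [nn.Conv2d(cin, cout, 4, stride=2, padding=1)]
            if norm:
                layers.append(nn.InstanceNorm2d(cout))
            layers.append(nn.LeakyReLU(0.2, inplace=True))
            return layers

        self.net = nn.Sequential(
            *block(in_channels, 64, norm=False),
            *block(64, 128),
            *block(128, 256),
            nn.Conv2d(256, 1, 4, padding=1),  # one logit per receptive-field patch
        )

    def forward(self, reconstructed, actual):
        return self.net(torch.cat([reconstructed, actual], dim=1))

d = PatchDiscriminator()
scores = d(torch.randn(1, 3, 128, 128), torch.randn(1, 3, 128, 128))
print(scores.shape)  # torch.Size([1, 1, 15, 15])
```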


Author(s):  
Yash Sharma ◽  
Gavin Weiguang Ding ◽  
Marcus A. Brubaker

Carefully crafted, often imperceptible, adversarial perturbations have been shown to cause state-of-the-art models to yield extremely inaccurate outputs, rendering them unsuitable for safety-critical application domains. In addition, recent work has shown that constraining the attack space to a low frequency regime is particularly effective. Yet, it remains unclear whether this is due to generally constraining the attack search space or specifically removing high frequency components from consideration. By systematically controlling the frequency components of the perturbation, evaluating against the top-placing defense submissions in the NeurIPS 2017 competition, we empirically show that performance improvements in both the white-box and black-box transfer settings are yielded only when low frequency components are preserved. In fact, the defended models based on adversarial training are roughly as vulnerable to low frequency perturbations as undefended models, suggesting that the purported robustness of state-of-the-art ImageNet defenses is reliant upon adversarial perturbations being high frequency in nature. We do find that under the L-inf-norm constraint of 16/255, the competition distortion bound, low frequency perturbations are indeed perceptible. This questions the use of the L-inf-norm, in particular, as a distortion metric, and, in turn, suggests that explicitly considering the frequency space is promising for learning robust models which better align with human perception.
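The frequency-constrained perturbations can be pictured with a short numpy/scipy sketch that projects a perturbation onto its low-frequency DCT components. The keep_fraction cut-off and the per-channel 2-D DCT are assumptions made for illustration, not the exact construction used in the paper.

```python
import numpy as np
from scipy.fft import dctn, idctn

def low_frequency_project(perturbation, keep_fraction=0.25):
    """Keep only the low-frequency components of a perturbation by zeroing
    DCT coefficients outside the top-left (low-frequency) corner.
    `perturbation` is HxWxC; `keep_fraction` controls the cut-off."""
    out = np.empty_like(perturbation)
    h, w, c = perturbation.shape
    kh, kw = int(h * keep_fraction), int(w * keep_fraction)
    for ch in range(c):
        coeffs = dctn(perturbation[:, :, ch], norm="ortho")
        mask = np.zeros_like(coeffs)
        mask[:kh, :kw] = 1.0                      # low-frequency corner only
        out[:, :, ch] = idctn(coeffs * mask, norm="ortho")
    return out

delta = np.random.uniform(-16 / 255, 16 / 255, size=(224, 224, 3))
low_delta = np.clip(low_frequency_project(delta), -16 / 255, 16 / 255)  # back into the L-inf ball
print(low_delta.shape)
```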


2013 ◽  
Vol 373-375 ◽  
pp. 530-535 ◽  
Author(s):  
Chuan Zhu Liao ◽  
Yu Shu Liu ◽  
Ming Yan Jiang

In order to obtain an image with every object in focus, an image fusion process is required to fuse images captured under different focal settings. In this paper, a new multifocus image fusion algorithm based on the Laplacian pyramid and Gabor filters is proposed. The source images are decomposed with the Laplacian pyramid, and directional edge features and detail information are then extracted with Gabor filters. Different fusion rules are applied to the low-frequency and high-frequency coefficients. The experimental results show that the algorithm is simple and effective.
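A hedged Python/OpenCV sketch of the two building blocks, a Laplacian pyramid and a small Gabor filter bank, followed by a toy band-wise fusion rule. The filter parameters and the max-energy selection rule are illustrative assumptions, not the paper's, and pyramid reconstruction is omitted.

```python
import cv2
import numpy as np

def laplacian_pyramid(img, levels=3):
    """Band-pass (detail) levels plus the final low-pass residual."""
    pyr, cur = [], img.astype(np.float32)
    for _ in range(levels):
        down = cv2.pyrDown(cur)
        up = cv2.pyrUp(down, dstsize=(cur.shape[1], cur.shape[0]))
        pyr.append(cur - up)          # high-frequency band
        cur = down
    pyr.append(cur)                   # low-frequency residual
    return pyr

def gabor_energy(band, orientations=4):
    """Directional edge/detail activity measured with a small Gabor filter bank."""
    energy = np.zeros_like(band)
    for i in range(orientations):
        # ksize, sigma, theta, lambd, gamma
        kernel = cv2.getGaborKernel((9, 9), 2.0, i * np.pi / orientations, 4.0, 0.5)
        energy += np.abs(cv2.filter2D(band, cv2.CV_32F, kernel))
    return energy

# Toy rule: pick high-frequency coefficients with larger Gabor energy,
# average the low-frequency residuals (illustrative, not the paper's rule).
a = np.random.rand(128, 128).astype(np.float32)
b = np.random.rand(128, 128).astype(np.float32)
pa, pb = laplacian_pyramid(a), laplacian_pyramid(b)
fused = [np.where(gabor_energy(la) >= gabor_energy(lb), la, lb)
         for la, lb in zip(pa[:-1], pb[:-1])]
fused.append((pa[-1] + pb[-1]) / 2)
print([f.shape for f in fused])
```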


2018 ◽  
Vol 32 (34n36) ◽  
pp. 1840086 ◽  
Author(s):  
Ruxi Xiang ◽  
Feng Wu

In this paper, we propose a novel and effective method for removing haze from a single image. The method first computes the dark channel of the estimated radiance image by decomposing the dark channel of the hazy input image, then estimates the transmission map of the input image, and finally restores the scene radiance image using the classical atmospheric scattering model. Experimental results show that the proposed method outperforms He et al.'s method in terms of haze removal.
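The final restoration step is the inversion of the classical atmospheric scattering model I = J*t + A*(1 - t). Below is a minimal numpy sketch; the lower bound t_min is a common numerical-stability assumption, not necessarily the paper's choice.

```python
import numpy as np

def recover_radiance(hazy, transmission, airlight, t_min=0.1):
    """Invert I = J*t + A*(1 - t):  J = (I - A) / max(t, t_min) + A."""
    t = np.clip(transmission, t_min, 1.0)[..., np.newaxis]
    return np.clip((hazy - airlight) / t + airlight, 0.0, 1.0)

hazy = np.random.rand(120, 160, 3)
transmission = np.random.rand(120, 160)
airlight = np.array([0.85, 0.85, 0.9])
radiance = recover_radiance(hazy, transmission, airlight)
print(radiance.shape)
```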


Author(s):  
Hongyuan Zhu ◽  
Xi Peng ◽  
Joey Tianyi Zhou ◽  
Songfan Yang ◽  
Vijay Chanderasekh ◽  
...  

Single image rain-streak removal is an extremely challenging problem due to the presence of non-uniform rain densities in images. Previous works solve this problem using various hand-designed priors or by explicitly mapping synthetic rain to paired clean images in a supervised way. In practice, however, the pre-defined priors are easily violated and paired training data are hard to collect. To overcome these limitations, we propose RainRemoval-GAN (RRGAN), the first end-to-end adversarial model that generates realistic rain-free images using only unpaired supervision. Our approach alleviates the paired-training constraint by introducing a physical model that explicitly learns the recovered image and the corresponding rain streaks from a differentiable programming perspective. The proposed network consists of a novel multiscale attention memory generator and a novel multiscale deeply supervised discriminator. The multiscale attention memory generator uses a memory with an attention mechanism to capture the latent rain-streak context at different stages to recover the clean images. The deeply supervised multiscale discriminator imposes constraints on the recovered output in terms of local details and global appearance with respect to the clean image set. Together with the learned rain streaks, a reconstruction constraint is employed to ensure appearance consistency with the input image. Experimental results on public benchmarks demonstrate our promising performance compared with nine state-of-the-art methods in terms of PSNR, SSIM, visual quality and running time.
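The reconstruction constraint can be illustrated with a short PyTorch sketch assuming the common additive rain model O = B + R and an L1 penalty; the loss choice and function names are assumptions for illustration, not necessarily the exact formulation in the paper.

```python
import torch
import torch.nn.functional as F

def reconstruction_loss(rainy, recovered, rain_streaks):
    """Recovered clean image plus learned rain-streak layer should re-compose
    the rainy input under the assumed additive rain model O = B + R."""
    return F.l1_loss(recovered + rain_streaks, rainy)

rainy = torch.rand(1, 3, 64, 64)
recovered = torch.rand(1, 3, 64, 64, requires_grad=True)
rain_streaks = torch.rand(1, 3, 64, 64, requires_grad=True)
loss = reconstruction_loss(rainy, recovered, rain_streaks)
loss.backward()
print(float(loss))
```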


2013 ◽  
Vol 748 ◽  
pp. 600-604
Author(s):  
Yi Luo ◽  
Gui Ling Yao ◽  
Wei Fan Wang

In order to effectively ease the trade-off between fusion quality and algorithm complexity, this paper puts forward a fusion rule for the rapid extraction of multi-scale fusion coefficients. After multi-scale decomposition of the source images, the fusion coefficients at each scale are extracted with a neighborhood-window-based strategy: for the low-frequency coefficients, an improved neighborhood entropy is used as the matching measure (that is, the degree of similarity between the input images), while for the high-frequency coefficients, a cross-scale neighborhood gradient is used as the matching measure, and the corresponding fusion-coefficient formulas are given. Because the wavelet transform suffers from shift sensitivity, the dual-tree complex wavelet transform is adopted for the multi-scale decomposition of the images.
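As a rough illustration of matching-measure-weighted coefficient fusion, here is a numpy sketch that uses local window energy as a stand-in for the improved neighborhood entropy and cross-scale neighborhood gradient measures described above; both the measure and the weighting formula are assumptions for illustration only.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def neighborhood_activity(coeff, window=3):
    """Local activity over a sliding window (a simple stand-in for the
    neighborhood-entropy / neighborhood-gradient matching measures)."""
    return uniform_filter(coeff ** 2, size=window)

def weighted_fuse(ca, cb, window=3, eps=1e-8):
    """Weight each input's coefficients by its relative local activity."""
    ea, eb = neighborhood_activity(ca, window), neighborhood_activity(cb, window)
    wa = ea / (ea + eb + eps)
    return wa * ca + (1.0 - wa) * cb

a, b = np.random.rand(64, 64), np.random.rand(64, 64)
print(weighted_fuse(a, b).shape)  # (64, 64)
```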


2013 ◽  
Vol 423-426 ◽  
pp. 2026-2034 ◽  
Author(s):  
Hong Xing Gao ◽  
Mao Ru Chi ◽  
Min Hao Zhu ◽  
Ping Bo Wu

Three accurate dynamic models of the air spring were established using aerodynamics, fluid mechanics, structural mechanics, engineering thermodynamics, etc. Based on the newly established bellow-orifice-reservoir, bellow-pipe-reservoir and bellow-orifice-pipe-reservoir models, the dynamic characteristics of the air spring were calculated under different excitation amplitudes and frequencies. Comparison with experiments shows that the simulated dynamic characteristics of the three models coincide very well with the experimental results. The bellow-orifice-pipe-reservoir connection type is recommended as the secondary suspension for low frequency excitations, and the bellow-orifice-reservoir connection type is considered effective for high frequency excitations, whereas the bellow-pipe-reservoir connection type is not recommended for use as the secondary suspension because of its negative stiffness.


2017 ◽  
Vol 31 (19-21) ◽  
pp. 1740038
Author(s):  
Ruxi Xiang ◽  
Xifang Zhu ◽  
Feng Wu

In this paper, a novel two-step method named Haze Removal based on Two Steps (HRTS) is proposed, which noticeably improves image qualities such as color and visibility degraded by haze. The method consists of two steps: a preprocessing step that decomposes the input image to reduce the influence of the ambient light, and a haze-removal step that restores the radiance. We first reduce the effect of the ambient light by decomposing the hazy image, estimate the transmission map from the result of this decomposition, and then refine it with a modified guided filter. Finally, the monochrome atmospheric scattering model is used to restore the radiance image. Experimental results show that the proposed method effectively removes haze and noticeably improves the color and visibility of images of realistic scenes compared with other existing haze removal methods.
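The refinement step can be illustrated with a plain guided filter applied to the coarse transmission map, using the grayscale hazy image as guidance. The paper uses a modified variant, so the following numpy/scipy box-filter implementation is only an assumed baseline sketch.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=20, eps=1e-3):
    """Plain guided filter: refine `src` (coarse transmission) with `guide`
    (grayscale hazy image) as the guidance image."""
    size = 2 * radius + 1
    mean_g = uniform_filter(guide, size)
    mean_s = uniform_filter(src, size)
    corr_gs = uniform_filter(guide * src, size)
    corr_gg = uniform_filter(guide * guide, size)
    var_g = corr_gg - mean_g * mean_g
    cov_gs = corr_gs - mean_g * mean_s
    a = cov_gs / (var_g + eps)          # local linear coefficients
    b = mean_s - a * mean_g
    return uniform_filter(a, size) * guide + uniform_filter(b, size)

gray_hazy = np.random.rand(120, 160)
coarse_t = np.random.rand(120, 160)
refined_t = np.clip(guided_filter(gray_hazy, coarse_t), 0.0, 1.0)
print(refined_t.shape)
```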

