Single Image Defogging Method Based on Image Patch Decomposition and Multi-Exposure Image Fusion

2021 ◽  
Vol 15 ◽  
Author(s):  
Qiuzhuo Liu ◽  
Yaqin Luo ◽  
Ke Li ◽  
Wenfeng Li ◽  
Yi Chai ◽  
...  

Bad weather conditions (such as fog and haze) seriously degrade the visual quality of images. Physical model-based methods use scene depth information to improve image visibility for further image restoration. However, the unstable acquisition of scene depth information seriously affects the defogging performance of physical model-based methods. Additionally, most image enhancement-based methods focus on the global adjustment of image contrast and saturation and neglect local detail in the restored image. Therefore, this paper proposes a single image defogging method based on image patch decomposition and multi-exposure fusion. First, a single foggy image is processed by gamma correction to obtain a set of underexposed images. Then the saturation of the obtained underexposed and original images is enhanced. Next, each image in the multi-exposure image set (the set of underexposed images plus the original image) is decomposed into base and detail layers by a guided filter. The base layers are first decomposed into image patches, and then the fusion weight maps of the image patches are constructed. For the detail layers, exposure features are first extracted from the luminance components of the images, and then the extracted exposure features are evaluated by constructing Gaussian functions. Finally, the base and detail layers are combined to obtain the defogged image. The proposed method is compared with the state-of-the-art methods, and the comparative experimental results confirm its effectiveness and its superiority over those methods.
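As a rough illustration of the first steps above, the following NumPy sketch generates the underexposed set by gamma correction and splits an image into base and detail layers. It is a minimal sketch under stated assumptions: a simple box filter stands in for the paper's guided filter, and the gamma values are illustrative, not the authors' settings.

```python
import numpy as np

def gamma_set(img, gammas=(1.5, 2.0, 2.5)):
    # Gamma > 1 darkens a [0, 1] image, simulating underexposure,
    # which suppresses the bright atmospheric veil.
    return [np.clip(img, 0.0, 1.0) ** g for g in gammas]

def base_detail(img, radius=7):
    # Base/detail split. A box (mean) filter stands in for the
    # paper's guided filter; the residual forms the detail layer.
    k = 2 * radius + 1
    pad = np.pad(img, radius, mode="edge")
    # Integral image for an O(1)-per-pixel windowed sum.
    c = np.pad(np.cumsum(np.cumsum(pad, axis=0), axis=1), ((1, 0), (1, 0)))
    base = (c[k:, k:] - c[:-k, k:] - c[k:, :-k] + c[:-k, :-k]) / (k * k)
    detail = img - base
    return base, detail
```

The base layers would then be fused patch-wise, while the detail layers are weighted by the Gaussian-evaluated exposure features before recombination.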

2018 ◽  
Vol 7 (02) ◽  
pp. 23578-23584
Author(s):  
Miss. Anjana Navale ◽  
Prof. Namdev Sawant ◽  
Prof. Umaji Bagal

Single image haze removal has been a challenging problem due to its ill-posed nature. In this paper, we have used a simple but powerful color attenuation prior for haze removal from a single input hazy image. By creating a linear model for modeling the scene depth of the hazy image under this novel prior and learning the parameters of the model with a supervised learning method, the depth information can be well recovered. With the depth map of the hazy image, we can easily estimate the transmission and restore the scene radiance via the atmospheric scattering model, and thus effectively remove the haze from a single image. Experimental results show that the proposed approach outperforms state-of-the-art haze removal algorithms in terms of both efficiency and the dehazing effect.
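The depth-based restoration described above can be sketched as follows, assuming the linear color attenuation prior model and the standard atmospheric scattering equation. The theta coefficients and beta below are placeholders for illustration, not the learned parameters from the paper.

```python
import numpy as np

def depth_from_cap(value, saturation, theta=(0.12, 0.96, -0.78)):
    # Color attenuation prior: scene depth is modeled as a linear
    # function of brightness (value) and saturation. These theta
    # values are illustrative, not the supervised-learned ones.
    t0, t1, t2 = theta
    return t0 + t1 * value + t2 * saturation

def restore(I, A, depth, beta=1.0, t_min=0.1):
    # Atmospheric scattering model: I = J*t + A*(1 - t), with
    # transmission t = exp(-beta * depth). Clamping t avoids
    # amplifying noise in very hazy regions.
    t = np.clip(np.exp(-beta * depth), t_min, 1.0)
    return (I - A) / t[..., None] + A  # recovered scene radiance J
```

A usage pattern would be to compute `depth_from_cap` from the HSV value/saturation channels of the hazy image, then call `restore` with an atmospheric light estimated from the brightest haze-opaque pixels.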


2015 ◽  
Vol 2015 ◽  
pp. 1-14 ◽  
Author(s):  
Wei Wang ◽  
Wenhui Li ◽  
Qingji Guan ◽  
Miao Qi

Removing the haze effects on images or videos is a challenging and meaningful task for image processing and computer vision applications. In this paper, we propose a multiscale fusion method to remove the haze from a single image. Based on the existing dark channel prior and optics theory, two atmospheric veils with different scales are first derived from the hazy image. Then, a novel and adaptive local similarity-based wavelet fusion method is proposed for preserving the significant scene depth property and avoiding blocky artifacts. Finally, the clear haze-free image is restored by solving the atmospheric scattering model. Experimental results demonstrate that the proposed method can yield comparative or even better results than several state-of-the-art methods by subjective and objective evaluations.
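For reference, a minimal dark channel computation, the starting point for deriving the atmospheric veils, is sketched below. The patch sizes and the veil formula are illustrative; the paper derives two veils at different scales and fuses them with a wavelet scheme not reproduced here.

```python
import numpy as np

def dark_channel(img, patch=15):
    # Minimum over RGB channels, then a sliding minimum over a
    # patch x patch window (naive loops keep the sketch simple).
    H, W, _ = img.shape
    mins = img.min(axis=2)
    r = patch // 2
    pad = np.pad(mins, r, mode="edge")
    dark = np.empty((H, W))
    for i in range(H):
        for j in range(W):
            dark[i, j] = pad[i:i + patch, j:j + patch].min()
    return dark

def veils(img, A, scales=(3, 15), omega=0.95):
    # Two atmospheric veils at different patch scales, echoing the
    # multiscale fusion idea above (omega keeps a trace of haze so
    # distant objects still look natural).
    return [omega * dark_channel(img, s) / A for s in scales]
```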


2021 ◽  
Vol 2021 ◽  
pp. 1-16
Author(s):  
Wenjun Du ◽  
Bo Sun ◽  
Jiating Kuai ◽  
Jiemin Xie ◽  
Jie Yu ◽  
...  

Travel time is one of the most critical parameters in proactive traffic management and the deployment of advanced traveler information systems. This paper proposes a hybrid model named LSTM-CNN for predicting highway travel time by integrating long short-term memory (LSTM) and convolutional neural networks (CNNs) with an attention mechanism and a residual network. The highway is divided into multiple segments by considering traffic diversion and the relative locations of automatic number plate recognition (ANPR) detectors. There are four steps in this hybrid approach. First, the average travel time of each segment in each interval is calculated from ANPR data and fed into the LSTM in the form of a multidimensional array. Second, the attention mechanism is adopted to combine the hidden layers of the LSTM with dynamic temporal weights. Third, the residual network is introduced to increase the network depth and overcome the vanishing gradient problem; it consists of three pairs of one-dimensional convolutional layers (Conv1D) and batch normalization (BatchNorm) with the rectified linear unit (ReLU) as the activation function. Finally, a series of Conv1D layers is connected to further extract features and reduce dimensionality. The proposed LSTM-CNN approach is tested on three months of ANPR data from a real-world 39.25 km highway in Zhejiang, China, with four pairs of ANPR detectors covering the uplink and downlink directions. The experimental results indicate that LSTM-CNN learns spatial, temporal, and depth information better than state-of-the-art traffic forecasting models and can therefore predict travel time more accurately. Moreover, LSTM-CNN outperforms the state-of-the-art methods in nonrecurrent prediction, multistep-ahead prediction, and long-term prediction.
LSTM-CNN is a promising model with scalability and portability for highway traffic prediction and can be further extended to improve the performance of the advanced traffic management system (ATMS) and advanced traffic information system (ATIS).
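The attention step (step 2) can be illustrated with a minimal NumPy sketch: dynamic temporal weights are computed over the per-interval hidden states and used to form a context vector. The dot-product scoring used here is one common choice, not necessarily the paper's exact formulation.

```python
import numpy as np

def temporal_attention(hidden, query):
    # hidden: (T, d) array of per-interval LSTM hidden states.
    # query:  (d,) vector, e.g. the last hidden state.
    scores = hidden @ query              # similarity score per step
    w = np.exp(scores - scores.max())    # numerically stable softmax
    w /= w.sum()                         # dynamic temporal weights
    context = w @ hidden                 # attention-weighted summary
    return context, w
```

In the full model, the context vector would then pass through the residual Conv1D/BatchNorm/ReLU stack described in step 3.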


2016 ◽  
Vol 2016 ◽  
pp. 1-10 ◽  
Author(s):  
Bo Jiang ◽  
Wanxu Zhang ◽  
Jian Zhao ◽  
Yi Ru ◽  
Min Liu ◽  
...  

Combining two different types of image dehazing strategies, based on image enhancement and on the atmospheric physical model, respectively, a novel method for gray-scale image dehazing is proposed in this paper. From the image-enhancement-based strategy, the characteristics of simplicity, effectiveness, and freedom from color distortion are preserved, and the common guided image filter is modified to suit the image enhancement task. Through wavelet decomposition, the high-frequency boundaries of the original image are preserved in advance. Moreover, the dehazing process can be guided directly by a scene depth proportion image estimated from the original gray-scale image. Our method has the advantages of brightness consistency and no distortion over state-of-the-art methods based on the atmospheric physical model. In particular, it overcomes the essential shortcoming of those methods, namely that they mainly work on color images. Meanwhile, a scene depth proportion image is obtained as a byproduct of the dehazing.
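Since the method builds on the guided image filter, a plain (unmodified) gray-scale guided filter is sketched below for reference. The naive windowed mean keeps the sketch self-contained; the paper's modification for image enhancement is not reproduced here.

```python
import numpy as np

def mean_filter(x, r):
    # Naive windowed mean (fine for a sketch; a box filter or
    # integral image would be used in practice).
    H, W = x.shape
    pad = np.pad(x, r, mode="edge")
    k = 2 * r + 1
    out = np.empty((H, W))
    for i in range(H):
        for j in range(W):
            out[i, j] = pad[i:i + k, j:j + k].mean()
    return out

def guided_filter(guide, src, r=4, eps=1e-3):
    # Standard guided filter: the output is locally linear in the
    # guide, so edges of the guide (e.g. the estimated scene depth
    # proportion image) are transferred to the filtered result.
    mg, ms = mean_filter(guide, r), mean_filter(src, r)
    var = mean_filter(guide * guide, r) - mg * mg
    cov = mean_filter(guide * src, r) - mg * ms
    a = cov / (var + eps)                # per-window slope
    b = ms - a * mg                      # per-window offset
    return mean_filter(a, r) * guide + mean_filter(b, r)
```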


2013 ◽  
Vol 409-410 ◽  
pp. 1653-1656 ◽  
Author(s):  
Yu Fan ◽  
Xue Feng Wu

Computational photography and image processing techniques are used to automatically restore the clarity of images taken in foggy scenes. The technology combines digital image processing with the physical model of atmospheric scattering. An algorithm is designed to restore the clarity of the foggy scene under an albedo-image assumption, and the algorithm's resolution is then analyzed. The algorithm is implemented in image processing software, which improves its efficiency and provides an interface. Comparing the foggy and defogged images shows that the visibility of the image is improved and the restored image is clearer.


2016 ◽  
Author(s):  
Tong Liu ◽  
Wei Song ◽  
Chao Du ◽  
Hanshi Wang ◽  
Lizhen Liu ◽  
...  

2021 ◽  
Vol 2021 ◽  
pp. 1-13
Author(s):  
Zhou Fang ◽  
Qilin Wu ◽  
Darong Huang ◽  
Dashuai Guan

Dark channel prior (DCP) has been widely used in single image defogging because of its simple implementation and satisfactory performance. This paper addresses the shortcomings of the DCP-based defogging algorithm and proposes an optimized method using an adaptive fusion mechanism. The proposed method makes full use of the smoothing and “squeezing” characteristics of the logistic function to obtain more reasonable dark channels, avoiding the need to further refine the transmission map. In addition, maximum filtering is applied to the dark channels to improve their accuracy around object boundaries and the overall brightness of the defogged images. Meanwhile, the location and brightness information of the foggy image are weighted to obtain a more accurate atmospheric light. Quantitative and qualitative comparisons show that the proposed method outperforms state-of-the-art image defogging algorithms.
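The logistic "squeeze" applied to the dark channel can be sketched as below; `k` and `x0` are illustrative parameters, not the paper's tuned values.

```python
import numpy as np

def logistic_dark_channel(dark, k=12.0, x0=0.5):
    # Logistic squeeze: values near 0 and 1 are flattened while the
    # mid-range is stretched, yielding a smoother dark channel
    # without a separate transmission-refinement step.
    return 1.0 / (1.0 + np.exp(-k * (dark - x0)))
```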


Author(s):  
Hongyuan Zhu ◽  
Xi Peng ◽  
Joey Tianyi Zhou ◽  
Songfan Yang ◽  
Vijay Chanderasekh ◽  
...  

Single image rain-streak removal is an extremely challenging problem due to the presence of non-uniform rain densities in images. Previous works solve this problem using various hand-designed priors or by explicitly mapping synthetic rain to paired clean images in a supervised way. In practice, however, the pre-defined priors are easily violated and paired training data are hard to collect. To overcome these limitations, in this work, we propose RainRemoval-GAN (RRGAN), the first end-to-end adversarial model that generates realistic rain-free images using only unpaired supervision. Our approach alleviates the paired-training constraint by introducing a physical model which explicitly learns the recovered image and the corresponding rain streaks from a differentiable programming perspective. The proposed network consists of a novel multiscale attention memory generator and a novel multiscale deeply supervised discriminator. The multiscale attention memory generator uses a memory with an attention mechanism to capture the latent rain-streak context at different stages to recover the clean images. The deeply supervised multiscale discriminator imposes constraints on the recovered output, in terms of local details and global appearance, relative to the clean image set. Together with the learned rain streaks, a reconstruction constraint is employed to ensure the appearance is consistent with the input image. Experimental results on public benchmarks demonstrate our promising performance compared with nine state-of-the-art methods in terms of PSNR, SSIM, visual quality, and running time.
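The reconstruction constraint mentioned above rests on the common additive rain model, in which the rainy input decomposes into a clean background plus rain streaks. A minimal sketch follows; the L1 penalty is an illustrative choice, not necessarily the paper's exact loss.

```python
import numpy as np

def reconstruction_loss(rainy, recovered, streaks):
    # Additive rain model: rainy = clean background + rain streaks.
    # The constraint penalizes recovered components whose sum
    # drifts from the observed rainy image.
    return np.abs(rainy - (recovered + streaks)).mean()
```

During unpaired adversarial training, this term anchors the generator's output to the input image while the discriminator pushes the recovered image toward the clean-image distribution.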


Atmosphere ◽  
2021 ◽  
Vol 12 (6) ◽  
pp. 772
Author(s):  
Alexandra Duminil ◽  
Jean-Philippe Tarel ◽  
Roland Brémond

From an analysis of the priors used in state-of-the-art algorithms for single image defogging, a new prior is proposed to obtain better atmospheric veil removal. Our hypothesis is based on a physical model, considering that the fog appears denser near the horizon than close to the camera. This leads to stronger restoration where the fog is deeper, for a more natural rendering. For this purpose, the Naka–Rushton function is used to modulate the atmospheric veil according to empirical observations on synthetic foggy images. The parameters of this function are set from features of the input image. The method also prevents over-restoration and thus preserves the sky from artifacts and noise. The algorithm generalizes to different kinds of fog, airborne particles, and illumination conditions. The proposed method is extended to nighttime and underwater images by computing the atmospheric veil on each color channel. Qualitative and quantitative evaluations show the benefit of the proposed algorithm. The quantitative evaluation shows the efficiency of the algorithm on four databases with different types of fog, which demonstrates the broad generalization achieved by the proposed algorithm, in contrast with most currently available deep learning techniques.


2020 ◽  
Vol 34 (07) ◽  
pp. 11165-11172 ◽  
Author(s):  
Xin Jin ◽  
Cuiling Lan ◽  
Wenjun Zeng ◽  
Zhibo Chen

Object re-identification (re-id) aims to identify a specific object across time or camera views, with person re-id and vehicle re-id as the most widely studied applications. Re-id is challenging because of variations in viewpoints, (human) poses, and occlusions. Multiple shots of the same object can cover diverse viewpoints/poses and thus provide more comprehensive information. In this paper, we propose exploiting the multi-shots of the same identity to guide the feature learning of each individual image. Specifically, we design an Uncertainty-aware Multi-shot Teacher-Student (UMTS) Network. It consists of a teacher network (T-net) that learns comprehensive features from multiple images of the same object, and a student network (S-net) that takes a single image as input. In particular, we take into account the data-dependent heteroscedastic uncertainty for effectively transferring knowledge from the T-net to the S-net. To the best of our knowledge, we are the first to make use of multi-shots of an object in a teacher-student learning manner to effectively boost single-image-based re-id. We validate the effectiveness of our approach on the popular vehicle re-id and person re-id datasets. At inference time, the S-net alone significantly outperforms the baselines and achieves state-of-the-art performance.
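A common way to weight a distillation loss by heteroscedastic uncertainty is sketched below, offered only as an illustration of the kind of T-net-to-S-net transfer described above, not the paper's exact loss.

```python
import numpy as np

def uncertainty_distill_loss(f_teacher, f_student, log_var):
    # f_teacher, f_student: (N, d) feature batches; log_var: (N,)
    # predicted log-variance per sample. Samples where the teacher
    # is uncertain (large log_var) receive a smaller imitation
    # weight; the +log_var term keeps the network from inflating
    # uncertainty everywhere to escape the penalty.
    sq = ((f_teacher - f_student) ** 2).sum(axis=1)
    return (np.exp(-log_var) * sq + log_var).mean()
```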

