Multi-exposure fusion for welding region based on multi-scale transform and hybrid weight

2018 ◽  
Vol 101 (1-4) ◽  
pp. 105-117
Author(s):  
Haiyong Chen ◽  
Yafei Ren ◽  
Junqi Cao ◽  
Weipeng Liu ◽  
Kun Liu
Optik ◽  
2020 ◽  
Vol 223 ◽  
pp. 165494 ◽  
Author(s):  
Yadong Xu ◽  
Beibei Sun

Author(s):  
Fei Kou ◽  
Zhengguo Li ◽  
Changyun Wen ◽  
Weihai Chen

Sensors ◽  
2021 ◽  
Vol 22 (1) ◽  
pp. 24
Author(s):  
Yan-Tsung Peng ◽  
He-Hao Liao ◽  
Ching-Fu Chen

In contrast to conventional digital images, high-dynamic-range (HDR) images have a broader intensity range between the darkest and brightest regions and thus capture more details of a scene. Such images are produced by fusing images of the same scene taken with different exposure values (EVs). Most existing multi-scale exposure fusion (MEF) algorithms assume that the input images are multi-exposed with small EV intervals. However, with the emergence of spatially multiplexed exposure technology, which captures a pair of short- and long-exposure images simultaneously, it has become essential to handle two-exposure image fusion. To bring out more well-exposed content, we generate a more useful intermediate virtual image for fusion using the proposed Optimized Adaptive Gamma Correction (OAGC), which yields better contrast, saturation, and well-exposedness. Fusing the input images with the enhanced virtual image works well even when both inputs are underexposed or overexposed, a case that other state-of-the-art fusion methods cannot handle. Experimental results show that our method performs favorably against state-of-the-art image fusion methods in generating high-quality fusion results.
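The abstract does not spell out the OAGC formulation, so the snippet below is only a minimal Python sketch of the general idea: derive a virtual intermediate exposure from the short/long pair with an adaptive gamma chosen from mean brightness. The function name, the averaging step, and the gamma heuristic are illustrative assumptions, not the authors' optimized correction.

```python
import numpy as np

def adaptive_gamma_virtual(short_img, long_img, eps=1e-6):
    """Illustrative sketch only: synthesize an intermediate 'virtual'
    exposure from a short/long exposure pair via a simple adaptive
    gamma correction. Inputs are float arrays scaled to [0, 1]; the
    gamma heuristic below is hypothetical, not the paper's OAGC."""
    # Rough starting luminance: average of the two exposures.
    base = np.clip(0.5 * (short_img + long_img), 0.0, 1.0)
    # Choose gamma so that the mean brightness is mapped toward 0.5:
    # gamma < 1 brightens dark inputs, gamma > 1 darkens bright ones.
    mean_l = float(np.clip(np.mean(base), eps, 1.0 - eps))
    gamma = np.log(0.5) / np.log(mean_l)
    return np.clip(base ** gamma, 0.0, 1.0)
```

The actual OAGC additionally optimizes for contrast, saturation, and well-exposedness; this sketch only re-centers the mean brightness.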


2017 ◽  
Vol 26 (3) ◽  
pp. 1243-1252 ◽  
Author(s):  
Zhengguo Li ◽  
Zhe Wei ◽  
Changyun Wen ◽  
Jinghong Zheng

2020 ◽  
Vol 30 (8) ◽  
pp. 2418-2429 ◽  
Author(s):  
Qiantong Wang ◽  
Weihai Chen ◽  
Xingming Wu ◽  
Zhengguo Li

2021 ◽  
Vol 13 (2) ◽  
pp. 204
Author(s):  
Ting Nie ◽  
Liang Huang ◽  
Hongxing Liu ◽  
Xiansheng Li ◽  
Yuchen Zhao ◽  
...  

Existing multi-exposure fusion (MEF) algorithms for gray images under low illumination cannot preserve details in dark and highlighted regions well, and the fused images are noisy. To address these problems, an MEF method is proposed. First, latent low-rank representation (LatLRR) is applied to the low-dynamic-range images to generate low-rank parts and saliency parts, which reduces noise after fusion. The two components are then fused separately in a Laplacian multi-scale space. Two different weight maps are constructed according to the features of gray images under low illumination, and an energy equation is designed to obtain the optimal ratio between the weight factors. An improved guided filter with an adaptive regularization factor is proposed to refine the weight maps, maintaining spatial consistency and avoiding artifacts. Finally, a high-dynamic-range image is obtained by the inverse transform of the low-rank and saliency parts. Experimental results show that the proposed method outperforms state-of-the-art multi-exposure fusion methods for gray images under low-illumination imaging in both subjective and objective evaluations.
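As a rough illustration of the multi-scale fusion stage described above, the sketch below performs standard Laplacian-pyramid blending of grayscale exposures under precomputed weight maps; the LatLRR decomposition, the low-illumination weight design, and the adaptive guided-filter refinement from the paper are not reproduced, and all names are hypothetical.

```python
import cv2
import numpy as np

def laplacian_pyramid_fuse(images, weights, levels=4):
    """Sketch of generic Laplacian-pyramid exposure fusion: blend the
    Laplacian pyramids of grayscale inputs (float32 in [0, 1]) using
    Gaussian pyramids of per-pixel weight maps."""
    # Normalize the weight maps so they sum to 1 at every pixel.
    w = np.stack(weights, axis=0).astype(np.float32) + 1e-12
    w /= w.sum(axis=0, keepdims=True)

    fused_pyr = None
    for img, wm in zip(images, w):
        # Gaussian pyramid of the weight map.
        gp = [wm]
        for _ in range(levels):
            gp.append(cv2.pyrDown(gp[-1]))
        # Gaussian then Laplacian pyramid of the image.
        gpi = [img.astype(np.float32)]
        for _ in range(levels):
            gpi.append(cv2.pyrDown(gpi[-1]))
        lp = [gpi[i] - cv2.pyrUp(gpi[i + 1], dstsize=gpi[i].shape[1::-1])
              for i in range(levels)] + [gpi[-1]]
        # Weighted accumulation, level by level.
        contrib = [l * g for l, g in zip(lp, gp)]
        fused_pyr = contrib if fused_pyr is None else [
            f + c for f, c in zip(fused_pyr, contrib)]

    # Collapse the fused pyramid back into a single image.
    out = fused_pyr[-1]
    for lvl in range(levels - 1, -1, -1):
        out = cv2.pyrUp(out, dstsize=fused_pyr[lvl].shape[1::-1]) + fused_pyr[lvl]
    return np.clip(out, 0.0, 1.0)
```

In the paper this blending would be applied separately to the low-rank and saliency components before the final inverse transform.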


Author(s):  
Yanxiang Hu ◽  
Bo Zhang

A bio-inspired two-scale image complementarity evaluation method is proposed. This multi-scale method provides a promising alternative for assessing the performance of image fusion algorithms, and it can also be used to compare and analyze the multi-scale differences between raw images. Two metrics are presented to assess the complementarity of fusion images in the non-subsampled contourlet transform (NSCT) domain: visual saliency differences (VSDs) at the coarse scales and detail similarities (DSs) at the fine scales. Visual attention mechanism (VAM)-based saliency maps are combined with the NSCT low-pass subbands to compute the VSDs, while linear-correlation- and contrast-consistency-based DSs are compared in the NSCT band-pass subbands. Five mainstream multi-scale transform (MST)-based fusion algorithms were compared using 30 groups of raw images covering four types of fusion imagery. The effects of the NSCT filters and decomposition levels on the evaluation results are discussed in detail. Furthermore, a group of color multi-exposure fusion images is also used as an example to evaluate the complementarity of raw images. Experimental results demonstrate the effectiveness of the proposed method, especially for MST-based image fusion algorithms.
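NSCT implementations are not commonly available in mainstream Python libraries, so the sketch below substitutes a simple Gaussian/residual two-scale split and a local-contrast saliency proxy to convey the coarse-scale VSD and fine-scale DS idea; it is an assumption-laden illustration, not the paper's VAM- and NSCT-based metrics.

```python
import cv2
import numpy as np

def two_scale_complementarity(img_a, img_b):
    """Rough two-scale comparison of two grayscale images: a Gaussian
    low-pass stands in for the NSCT low-pass subband and the residual
    for a band-pass subband; local contrast replaces VAM saliency."""
    a, b = img_a.astype(np.float32), img_b.astype(np.float32)
    low_a = cv2.GaussianBlur(a, (0, 0), 5)
    low_b = cv2.GaussianBlur(b, (0, 0), 5)
    det_a, det_b = a - low_a, b - low_b

    # Coarse scale: visual saliency difference (VSD) proxy, using
    # local contrast of the low-pass bands as a crude saliency map.
    sal_a = np.abs(low_a - cv2.GaussianBlur(low_a, (0, 0), 15))
    sal_b = np.abs(low_b - cv2.GaussianBlur(low_b, (0, 0), 15))
    vsd = float(np.mean(np.abs(sal_a - sal_b)))

    # Fine scale: detail similarity (DS) as the linear correlation
    # between the band-pass residuals.
    ds = float(np.corrcoef(det_a.ravel(), det_b.ravel())[0, 1])
    return vsd, ds
```

A large VSD with a small DS would indicate strongly complementary inputs; the paper evaluates the analogous quantities in NSCT subbands.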


2021 ◽  
Vol 2074 (1) ◽  
pp. 012024
Author(s):  
Jie Liu ◽  
Yuanyuan Peng

With the continuous development of science and technology, people place ever higher demands on image quality. This paper integrates artificial intelligence techniques and proposes a low-illuminance panoramic image enhancement algorithm based on simulated multi-exposure fusion. First, the image information content is used as a metric to estimate the optimal exposure rate, and a brightness mapping function is applied to enhance the V component, yielding an overexposed image. The low-illuminance image and the overexposed image serve as inputs, a medium-exposure image is synthesized by exposure interpolation, and the low-illuminance, medium-exposure, and overexposed images are merged with a multi-scale fusion strategy to obtain the fused image. The fused image is then refined by a multi-scale detail enhancement algorithm to produce the final enhanced result. Experiments show that the algorithm can effectively improve image quality.
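A minimal sketch of such a simulated multi-exposure pipeline is given below, assuming image entropy as the information-content metric, a gamma-style brightness mapping for the V channel, linear interpolation for the medium-exposure frame, and OpenCV's Mertens exposure fusion as a stand-in for the paper's multi-scale fusion and detail enhancement steps.

```python
import cv2
import numpy as np

def entropy(img):
    """Shannon entropy of an 8-bit image, used here as the
    'information content' metric for picking an exposure ratio."""
    hist = np.bincount(img.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def simulated_multi_exposure_enhance(bgr):
    """Sketch: search for an exposure ratio that maximizes entropy of
    the brightened V channel, synthesize a mid-exposure frame by
    interpolation, then fuse the three frames with Mertens fusion."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    v = hsv[..., 2] / 255.0

    # 1. Estimate the 'optimal exposure rate' k by entropy search,
    #    using a simple gamma-style brightness mapping v**(1/k).
    best_k, best_h = 1.0, -1.0
    for k in np.linspace(1.0, 6.0, 21):
        cand = np.clip(v ** (1.0 / k), 0, 1)
        h = entropy((cand * 255).astype(np.uint8))
        if h > best_h:
            best_k, best_h = k, h

    over = np.clip(v ** (1.0 / best_k), 0, 1)   # simulated over-exposed frame
    mid = 0.5 * v + 0.5 * over                  # interpolated mid-exposure frame

    def to_bgr(v_new):
        out = hsv.copy()
        out[..., 2] = v_new * 255.0
        return cv2.cvtColor(out.astype(np.uint8), cv2.COLOR_HSV2BGR)

    # 2. Fuse low-illuminance, mid, and over-exposed frames.
    frames = [to_bgr(v), to_bgr(mid), to_bgr(over)]
    fusion = cv2.createMergeMertens().process(frames)
    return np.clip(fusion * 255, 0, 255).astype(np.uint8)
```

The paper additionally applies a multi-scale detail enhancement pass after fusion, which is omitted here.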

