Exposure fusion based on steerable pyramid for displaying high dynamic range scenes

2009 ◽  
Vol 48 (11) ◽  
pp. 117003 ◽  
Author(s):  
Jinhua Wang


2018 ◽
Vol 11 (4) ◽  
pp. 2041-2049 ◽  
Author(s):  
Soumyabrata Dev ◽  
Florian M. Savoy ◽  
Yee Hui Lee ◽  
Stefan Winkler

Abstract. Sky–cloud images obtained from ground-based sky cameras are usually captured using a fisheye lens with a wide field of view. However, the sky exhibits a large dynamic range in terms of luminance, more than a conventional camera can capture. It is thus difficult to capture the details of an entire scene with a regular camera in a single shot. In most cases, the circumsolar region is overexposed, and the regions near the horizon are underexposed. This renders cloud segmentation for such images difficult. In this paper, we propose HDRCloudSeg – an effective method for cloud segmentation using high-dynamic-range (HDR) imaging based on multi-exposure fusion. We describe the HDR image generation process and release a new database to the community for benchmarking. Our proposed approach is the first using HDR radiance maps for cloud segmentation and achieves very good results.
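The HDR radiance map generation described above can be sketched in a few lines. This is a minimal illustration, not the paper's pipeline: it assumes a linear camera response and known exposure times, and uses a simple hat-shaped weight so each scene region is recovered from the exposures that capture it best; the function name is my own.

```python
import numpy as np

def hdr_radiance_map(images, exposure_times):
    """Merge a stack of differently exposed images (values in [0, 1])
    into a radiance map, assuming a linear camera response.
    Mid-tone pixels get the highest weight (hat function), so
    over- and under-exposed pixels contribute little."""
    images = [np.asarray(im, dtype=np.float64) for im in images]
    num = np.zeros_like(images[0])
    den = np.zeros_like(images[0])
    for im, t in zip(images, exposure_times):
        w = 1.0 - np.abs(2.0 * im - 1.0)  # hat weight: 0 at extremes, 1 at mid-gray
        num += w * (im / t)               # radiance estimate from this exposure
        den += w
    return num / np.maximum(den, 1e-8)

# Synthetic example: one scene radiance observed at two exposure times
radiance = np.array([[0.2, 0.8]])
stack = [np.clip(radiance * t, 0, 1) for t in (0.5, 1.0)]
hdr = hdr_radiance_map(stack, [0.5, 1.0])
```

With a linear response and no clipping, the weighted estimates from both exposures agree, so the merged map recovers the original radiance.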


2017 ◽  
Author(s):  
Soumyabrata Dev ◽  
Florian M. Savoy ◽  
Yee Hui Lee ◽  
Stefan Winkler

Abstract. Sky/cloud images obtained from ground-based sky-cameras are usually captured using a fish-eye lens with a wide field of view. However, the sky exhibits a large dynamic range in terms of luminance, more than a conventional camera can capture. It is thus difficult to capture the details of an entire scene with a regular camera in a single shot. In most cases, the circumsolar region is over-exposed, and the regions near the horizon are under-exposed. This renders cloud segmentation for such images difficult. In this paper, we propose HDRSeg – an effective method for cloud segmentation using High-Dynamic-Range (HDR) imaging based on multi-exposure fusion. We describe the HDR generation process and release a new database to the community for benchmarking. Our proposed approach is the first using HDR images for cloud segmentation and achieves very good results.


2018 ◽  
Vol 8 (9) ◽  
pp. 1543 ◽
Author(s):  
Hua Shao ◽  
Gangyi Jiang ◽  
Mei Yu ◽  
Yang Song ◽  
Hao Jiang ◽  
...  

Due to sharp changes in local brightness in high dynamic range scenes, fused images obtained by traditional multi-exposure fusion methods usually have an unnatural appearance caused by halo artifacts. In this paper, we propose a halo-free multi-exposure fusion method based on sparse representation of gradient features for high dynamic range imaging. First, we analyze the cause of halo artifacts: since the range of local brightness changes in high dynamic range scenes may be far wider than the dynamic range of an ordinary camera, the multi-exposure source images contain some invalid, large-amplitude gradients, which produce halo artifacts in the fused image. Subsequently, by analyzing the significance of the local sparse coefficients in a luminance gradient map, we construct a local gradient sparse descriptor to extract local details of the source images. This descriptor then serves as the activity level measurement in the fusion method, extracting image features and suppressing halo artifacts when the source images contain sharp local changes in brightness. Experimental results show that the proposed method achieves state-of-the-art performance in both subjective and objective evaluation, particularly in effectively eliminating halo artifacts.
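The idea of driving fusion weights from a gradient-based activity measure can be sketched as follows. This is a rough, hedged illustration only: a plain gradient magnitude with optional clipping stands in for the paper's local gradient sparse descriptor (the clipping crudely mimics rejecting the invalid large-amplitude gradients that cause halos), and all names are illustrative.

```python
import numpy as np

def gradient_activity(img, clip=None):
    """Per-pixel activity measure: local gradient magnitude.
    `clip` optionally caps large gradients, a crude stand-in for
    suppressing invalid large-amplitude gradients."""
    gy, gx = np.gradient(np.asarray(img, dtype=np.float64))
    mag = np.hypot(gx, gy)
    if clip is not None:
        mag = np.minimum(mag, clip)
    return mag

def fuse_by_activity(images, clip=None):
    """Weighted fusion: each pixel is drawn mostly from the exposure
    whose local detail (gradient activity) is strongest there."""
    acts = np.stack([gradient_activity(im, clip) for im in images])
    weights = acts + 1e-8                       # avoid division by zero
    weights /= weights.sum(axis=0, keepdims=True)
    stack = np.stack([np.asarray(im, np.float64) for im in images])
    return (weights * stack).sum(axis=0)

# Demo: detail only in the first exposure; the second is flat mid-gray
im1 = np.tile([0., 1., 0., 1.], (2, 1))
im2 = np.full((2, 4), 0.5)
fused = fuse_by_activity([im1, im2])
```

Where the second image is flat (zero gradient), the fused result follows the detailed exposure at its edges; a sparse-coding descriptor, as in the paper, would replace the raw gradient magnitude as the activity measure.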


2014 ◽  
Vol 2014 ◽  
pp. 1-8 ◽  
Author(s):  
Harbinder Singh ◽  
Vinay Kumar ◽  
Sunil Bhooshan

In this paper, we propose a novel detail-enhancing exposure fusion approach using a nonlinear translation-variant filter (NTF). Given Standard Dynamic Range (SDR) images captured under different exposure settings, the fine details are first extracted using a guided filter. Next, the base layers (i.e., the images obtained from the NTF) across all input images are fused using a multiresolution pyramid; exposure, contrast, and saturation measures are used to generate a mask that guides the fusion of the base layers. Finally, the fused base layer is combined with the extracted fine details to obtain the detail-enhanced fused image. The goal is to preserve details in both very dark and extremely bright regions without an intermediate High Dynamic Range Image (HDRI) representation or tone-mapping step. Moreover, we demonstrate that the proposed method is also suitable for multifocus image fusion without introducing artifacts.
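A highly simplified sketch of this base/detail pipeline follows. It is a hedged approximation, not the authors' method: a box filter stands in for the guided filter and NTF, a single-scale well-exposedness weight replaces the multiresolution pyramid and its exposure/contrast/saturation mask, and all function names are my own.

```python
import numpy as np

def box_blur(img, r=2):
    """Edge-padded box filter: a simple stand-in for the guided/NTF smoothing."""
    k = 2 * r + 1
    pad = np.pad(img, r, mode='edge')
    out = np.zeros(img.shape, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            out += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def detail_enhanced_fusion(images, r=2, boost=1.0):
    """Split each exposure into base + detail, fuse the bases with
    well-exposedness weights, and add back the strongest detail per pixel."""
    imgs = [np.asarray(im, np.float64) for im in images]
    bases = [box_blur(im, r) for im in imgs]
    details = [im - b for im, b in zip(imgs, bases)]
    # Well-exposedness: Gaussian weight centered on mid-gray
    ws = [np.exp(-((im - 0.5) ** 2) / (2 * 0.2 ** 2)) + 1e-8 for im in imgs]
    wsum = sum(ws)
    fused_base = sum(w / wsum * b for w, b in zip(ws, bases))
    det = np.stack(details)
    idx = np.abs(det).argmax(axis=0)          # pick the strongest detail layer
    fused_detail = np.take_along_axis(det, idx[None], axis=0)[0]
    return np.clip(fused_base + boost * fused_detail, 0, 1)

# Demo: two exposures of the same ramp scene
under = np.linspace(0, 0.4, 16).reshape(4, 4)
over = np.clip(under * 2.0, 0, 1)
fused = detail_enhanced_fusion([under, over])
```

The `boost` parameter mirrors the detail-enhancing aspect of the approach: values above 1 amplify the re-injected detail layer.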

