Optimized Perceptual Tone Mapping for Contrast Enhancement of Images

2017 ◽  
Vol 27 (6) ◽  
pp. 1161-1170 ◽  
Author(s):  
Cheolkon Jung ◽  
Tingting Sun

2021 ◽  
Vol 40 (2) ◽  
pp. 1-15
Author(s):  
Minqi Wang ◽  
Emily A. Cooper

Dichoptic tone mapping methods aim to leverage stereoscopic displays to increase visual detail and contrast in images and videos. These methods, which have been called both binocular tone mapping and dichoptic contrast enhancement, selectively emphasize contrast differently in the two eyes' views. The visual system integrates these contrast differences into a unified percept, which is theorized to contain more contrast overall than either eye's view on its own. As stereoscopic displays become increasingly common for augmented and virtual reality (AR/VR), dichoptic tone mapping is an appealing technique for imaging pipelines. We sought to examine whether a standard photographic technique, exposure bracketing, could be modified to enhance contrast similarly to dichoptic tone mapping. While assessing the efficacy of this technique with user studies, we also re-evaluated existing dichoptic tone mapping methods. Across several user studies, however, we did not find evidence that either dichoptic tone mapping or dichoptic exposures consistently increased subjective image preferences. Nor did we observe improvements in subjective or objective measures of detail visibility. We did, however, find evidence that dichoptic methods enhanced subjective 3D impressions. Here, we present these results and evaluate the potential contributions and current limitations of dichoptic methods for applications in stereoscopic displays.
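The core idea of dichoptic exposures, applying exposure bracketing across the two eyes, can be sketched in a few lines. This is a minimal illustration, not the authors' method: the function name, the `stops` parameter, and the simple gain model are assumptions for demonstration.

```python
import numpy as np

def dichoptic_exposures(image, stops=1.0):
    """Sketch of exposure bracketing for dichoptic viewing: render the
    same image at a brighter and a darker exposure, one per eye.
    `stops` is a hypothetical exposure offset in photographic stops
    (a factor of 2 in linear intensity per stop)."""
    gain = 2.0 ** stops
    over = np.clip(image * gain, 0.0, 1.0)   # brighter eye's view
    under = np.clip(image / gain, 0.0, 1.0)  # darker eye's view
    return over, under

# Usage: a mid-gray image split into +/-1 stop views.
img = np.full((4, 4), 0.4)
left, right = dichoptic_exposures(img, stops=1.0)
```

In a real pipeline the two renderings would be presented to the left and right eyes of a stereoscopic display, with the visual system integrating them into a single percept.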


2011 ◽  
Vol 27 (2) ◽  
pp. 87 ◽  
Author(s):  
Nicolas Hautière ◽  
Jean-Philippe Tarel ◽  
Didier Aubert ◽  
Éric Dumont

The contrast of outdoor images acquired under adverse weather conditions, especially fog, is degraded by the scattering of daylight by atmospheric particles. Consequently, different methods have been designed to restore the contrast of these images. However, there is no established methodology for assessing or ranking the performance of these methods. Unlike in image quality assessment or image restoration, there is no easy way to obtain a reference image, which makes the problem far from straightforward. In this paper, an approach is proposed that consists in computing the ratio of the gradients of visible edges in the image before and after contrast restoration. In this way, an indicator of visibility enhancement is provided, based on the concept of visibility level commonly used in lighting engineering. Finally, the methodology is applied to contrast enhancement assessment and to the comparison of tone-mapping operators.
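The gradient-ratio idea described above can be sketched as follows. This is a simplified stand-in, not the authors' exact indicator: the finite-difference gradient, the edge threshold, and the geometric-mean aggregation are illustrative assumptions.

```python
import numpy as np

def gradient_magnitude(img):
    """Finite-difference gradient magnitude (a simplified stand-in for
    the gradients computed on visible edges)."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def visibility_gain(before, after, edge_thresh=0.05, eps=1e-6):
    """Hedged sketch of the paper's idea: compare gradients on visible
    edges before vs. after restoration. Here we take the geometric mean
    of per-pixel gradient ratios over pixels that are edges in the
    restored image; the threshold and details are illustrative."""
    g0 = gradient_magnitude(before)
    g1 = gradient_magnitude(after)
    edges = g1 > edge_thresh          # edges visible after restoration
    if not edges.any():
        return 1.0
    ratios = (g1[edges] + eps) / (g0[edges] + eps)
    return float(np.exp(np.log(ratios).mean()))

# Usage: a low-contrast ramp vs. a contrast-stretched version of it.
low = np.tile(np.linspace(0.4, 0.6, 8), (8, 1))
high = np.tile(np.linspace(0.0, 1.0, 8), (8, 1))
gain = visibility_gain(low, high)
```

A gain above 1 indicates that edge gradients grew after restoration; an unchanged image scores exactly 1.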


2020 ◽  
Vol 34 (07) ◽  
pp. 11287-11295
Author(s):  
Soo Ye Kim ◽  
Jihyong Oh ◽  
Munchurl Kim

Joint learning of super-resolution (SR) and inverse tone-mapping (ITM) has been explored recently to convert legacy low-resolution (LR) standard dynamic range (SDR) videos into high-resolution (HR) high dynamic range (HDR) videos, driven by the growing need for UHD HDR TV/broadcasting applications. However, previous CNN-based methods directly reconstruct the HR HDR frames from LR SDR frames and are trained with only a simple L2 loss. In this paper, we take a divide-and-conquer approach in designing a novel GAN-based joint SR-ITM network, called JSI-GAN, which is composed of three task-specific subnets: an image reconstruction subnet, a detail restoration (DR) subnet, and a local contrast enhancement (LCE) subnet. We delicately design these subnets so that each is trained for its intended purpose: the DR subnet learns a pair of pixel-wise 1D separable filters for detail restoration, and the LCE subnet learns a pixel-wise 2D local filter for contrast enhancement. Moreover, to train JSI-GAN effectively, we propose a novel detail GAN loss alongside the conventional GAN loss, which helps enhance both local details and contrast to reconstruct high-quality HR HDR results. When all subnets are jointly trained, JSI-GAN obtains higher-quality HR HDR results with at least a 0.41 dB gain in PSNR over those generated by previous methods. The official TensorFlow code is available at https://github.com/JihyongOh/JSI-GAN.
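The pixel-wise 1D separable filtering used by the DR subnet can be illustrated with a small NumPy sketch. This is not the JSI-GAN implementation: the function name is hypothetical, the filter tensors stand in for network predictions, and the naive loops are for clarity rather than speed.

```python
import numpy as np

def apply_separable_filters(img, fh, fv):
    """Sketch of pixel-wise 1D separable dynamic filtering, in the
    spirit of the DR subnet: each pixel (i, j) gets its own length-k
    horizontal filter fh[i, j] and vertical filter fv[i, j], applied
    in sequence. fh and fv have shape (H, W, k)."""
    h, w = img.shape
    k = fh.shape[-1]
    r = k // 2
    pad = np.pad(img, r, mode='edge')
    # Horizontal pass: per-pixel dot product with a length-k row patch.
    tmp = np.empty((h, w), dtype=float)
    for i in range(h):
        for j in range(w):
            tmp[i, j] = pad[i + r, j:j + k] @ fh[i, j]
    # Vertical pass on the horizontally filtered result.
    pad2 = np.pad(tmp, r, mode='edge')
    out = np.empty((h, w), dtype=float)
    for i in range(h):
        for j in range(w):
            out[i, j] = pad2[i:i + k, j + r] @ fv[i, j]
    return out

# Usage: identity (delta) filters leave the image unchanged; a trained
# network would instead predict filters that restore lost detail.
rng = np.random.default_rng(0)
img = rng.random((5, 6))
k, r = 3, 1
fh = np.zeros((5, 6, k)); fh[:, :, r] = 1.0
fv = np.zeros((5, 6, k)); fv[:, :, r] = 1.0
out = apply_separable_filters(img, fh, fv)
```

Separable filters keep the per-pixel parameter count at 2k instead of k² for a full 2D kernel, which is why the DR subnet predicts a pair of 1D filters per pixel.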

