White Balancing: Recently Published Documents

Total documents: 47 (last five years: 11)
H-index: 8 (last five years: 1)

2021, Vol. 7 (10), pp. 207
Author(s): Teruaki Akazawa, Yuma Kinoshita, Sayaka Shiota, Hitoshi Kiya

This paper presents a three-color balance adjustment for color constancy correction. White balancing is a typical adjustment for color constancy in an image, but lighting effects remain on colors other than white. Cheng et al. proposed multi-color balancing, which improves on white balancing by mapping multiple target colors to their corresponding ground-truth colors. However, three problems have not been discussed: how many target colors to use, how to select them, and the error minimization involved, which increases computational complexity. In this paper, we first discuss the number of target colors for multi-color balancing. From our observation, when three or more target colors are used, the best performance of multi-color balancing is almost the same regardless of their number, and it is superior to that of white balancing. Moreover, if exactly three target colors are used, multi-color balancing can be performed without any error minimization. Accordingly, we propose three-color balancing. In addition, we discuss which combinations of three target colors achieve color constancy correction. In an experiment, the proposed method not only outperforms white balancing but also achieves almost the same performance as Cheng's method with 24 target colors.
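As an illustration of the exact-solution property described in the abstract (this is a minimal sketch, not the authors' implementation; the function and variable names are assumptions), a three-color balance can be written as a single 3 x 3 linear map:

```python
import numpy as np

def three_color_balance(image, src_colors, dst_colors):
    """Map three measured colors onto their ground-truth colors.

    image:      H x W x 3 linear RGB array (float)
    src_colors: 3 x 3 array, rows are the three target colors measured in the image
    dst_colors: 3 x 3 array, rows are the corresponding ground-truth colors
    """
    src = np.asarray(src_colors, dtype=float)
    dst = np.asarray(dst_colors, dtype=float)
    # With exactly three target colors, the 3 x 3 mapping matrix M is fixed by
    # M @ src.T = dst.T, so no least-squares error minimization is required.
    M = dst.T @ np.linalg.inv(src.T)
    balanced = image.reshape(-1, 3) @ M.T
    return balanced.reshape(image.shape)
```

White balancing corresponds to the special case of a single white target color and a diagonal gain matrix.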


2020, Vol. 29 (1), pp. 79-95
Author(s): Izabella Antoniuk, Artur Krupa, Radosław Roszczyk

The acquisition of accurately coloured, well-balanced images in an optical microscope can be a challenge even for experienced microscope operators. This article presents a fully automatic white-balancing mechanism that adequately corrects microscopic colour images. The results of the algorithm were confirmed experimentally on a set of two hundred microscopic images containing scans of three microscopic specimens commonly used in pathomorphology. The results were also compared with white-balance algorithms commonly used in digital photography. For microscopic images with hematoxylin-phloxine-saffron staining and for immunohistochemically stained images, the algorithm applied in this work is more effective than the classical algorithms used in colour photography.
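For context, one of the classical photography baselines such comparisons typically include is the white-patch (max-RGB) assumption; the sketch below is illustrative only and is not the article's automatic mechanism:

```python
import numpy as np

def white_patch_balance(image, percentile=99):
    """White-patch (max-RGB) white balance, a classical photography baseline.

    image: H x W x 3 float array in [0, 1].
    Assumes the brightest values in each channel correspond to a white surface;
    a high percentile is used instead of the strict maximum for robustness.
    """
    whites = np.percentile(image.reshape(-1, 3), percentile, axis=0)
    gains = 1.0 / np.maximum(whites, 1e-8)
    return np.clip(image * gains, 0.0, 1.0)
```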


Optik, 2020, Vol. 209, pp. 164260
Author(s): Mehwish Iqbal, Syed Sohaib Ali, Muhammad Mohsin Riaz, Abdul Ghafoor, Attiq Ahmad

Sensors, 2020, Vol. 20 (5), pp. 1390
Author(s): Minseok Oh, Sergey Velichko, Scott Johnson, Michael Guidash, Hung-Chih Chang, ...

We present and discuss parameters of a high dynamic range (HDR) image sensor with LED flicker mitigation (LFM) operating over the automotive temperature range. The total SNR (SNR including dark fixed-pattern noise) of the sensor is degraded by floating diffusion (FD) dark current (DC) and dark signal non-uniformity (DSNU). We present results of FD DC and DSNU reduction that provide the required SNR versus signal level at temperatures up to 120 °C. Additionally, we discuss temperature dependencies of quantum efficiency (QE), sensitivity, color effects, and other pixel parameters for backside-illuminated image sensors. Comparing a +120 °C junction temperature with room temperature, we measured a sensitivity increase of a few relative percent in the visible range and a 1.46x increase in the 940 nm band. The measured change of sensitivity in the visible bands (blue, green, and red) had some impact on captured image color accuracy, creating a slight color tint at high temperature. The tint is, however, hard to detect visually and may be removed by auto white balancing and temperature-adjusted color correction matrices.
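The correction mentioned in the last sentence could, for example, combine per-channel white-balance gains with a color correction matrix interpolated between two calibration temperatures. The sketch below is an illustration under assumptions (function names, the two calibration points, and the linear interpolation are all assumed), not the sensor vendor's pipeline:

```python
import numpy as np

def correct_color(raw_rgb, wb_gains, ccm_cold, ccm_hot, temp_c,
                  t_cold=25.0, t_hot=120.0):
    """Apply white-balance gains followed by a temperature-adjusted 3x3 CCM.

    raw_rgb:  H x W x 3 demosaicked linear sensor RGB (float)
    wb_gains: per-channel gains from auto white balance, e.g. (gR, gG, gB)
    ccm_cold, ccm_hot: 3x3 color correction matrices calibrated at t_cold / t_hot
    temp_c:   current junction temperature in degrees Celsius
    """
    # White balance: scale each channel so neutral surfaces become neutral.
    balanced = raw_rgb * np.asarray(wb_gains, dtype=float)

    # Linearly interpolate the CCM between the two calibration temperatures.
    w = np.clip((temp_c - t_cold) / (t_hot - t_cold), 0.0, 1.0)
    ccm = (1.0 - w) * np.asarray(ccm_cold, dtype=float) + w * np.asarray(ccm_hot, dtype=float)

    corrected = balanced.reshape(-1, 3) @ ccm.T
    return np.clip(corrected.reshape(raw_rgb.shape), 0.0, 1.0)
```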


2019, Vol. 2019 (1), pp. 360-368
Author(s): Mekides Assefa Abebe, Jon Yngve Hardeberg

Various whiteboard image degradations greatly reduce the legibility of pen-stroke content as well as the overall quality of the images. Consequently, researchers have addressed the problem with different image enhancement techniques. Most state-of-the-art approaches apply common image processing techniques such as background-foreground segmentation, text extraction, contrast and color enhancement, and white balancing. However, such conventional enhancement methods cannot recover severely degraded pen-stroke content and produce artifacts in the presence of complex pen-stroke illustrations. To overcome these problems, the authors propose a deep learning-based solution. They contribute a new whiteboard image data set and adopt two deep convolutional neural network architectures for whiteboard image quality enhancement. Their evaluations of the trained models demonstrate superior performance over the conventional methods.
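As a point of reference for the conventional background-foreground style of enhancement the abstract contrasts with, a simple background-normalization baseline might look as follows (illustrative only; this is not the authors' deep learning method, and the kernel size is an assumption):

```python
import numpy as np
from scipy.ndimage import median_filter

def whiten_whiteboard(image, bg_kernel=31):
    """Conventional whiteboard cleanup by background normalization.

    image: H x W x 3 float array in [0, 1].
    A large median filter estimates the (mostly white) board background;
    dividing by it flattens shading and pushes the background toward white,
    leaving darker pen strokes as foreground.
    """
    background = np.stack(
        [median_filter(image[..., c], size=bg_kernel) for c in range(3)], axis=-1
    )
    normalized = image / np.maximum(background, 1e-3)
    return np.clip(normalized, 0.0, 1.0)
```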


2019, Vol. 2019 (1), pp. 339-343
Author(s): Taesu Kim, Eunjin Kim, Hyeon-Jeong Suk

This study proposes an illuminant estimation method that reproduces the original illuminant of a scene using a mobile display as a target. The original lighting environment of an auto-white-balanced (AWB) photograph is recovered through reverse calibration, using the white point of a display visible in the photograph. This reproduces the photograph as it was before AWB processing, from which the illuminant information can be obtained using a Gray World computation. The study consists of two sessions. In Session 1, we measured the display's white points under varying illuminants to show that display colors change little under different lighting conditions. In Session 2, we generated the estimates and assessed the performance of display-based illuminant estimation by comparing the results with optically measured values in real situations. Overall, the proposed method is a satisfactory way to estimate the less chromatic illuminants below 6300 K that we experience as indoor lighting in daily life.
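The Gray World computation referenced here, taken on its own, can be sketched as below; this is only the standard estimator, not the authors' reverse-calibration pipeline:

```python
import numpy as np

def gray_world_illuminant(image):
    """Gray World illuminant estimate.

    image: H x W x 3 linear RGB float array.
    Assumes the average scene reflectance is achromatic, so the per-channel
    means are proportional to the illuminant color. Returns the estimated
    illuminant as a unit-norm RGB vector.
    """
    means = image.reshape(-1, 3).mean(axis=0)
    return means / np.linalg.norm(means)
```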


The method proposed in this paper is efficient and requires neither external hardware nor information about the conditions and structure of the scene being enhanced, which, having been captured underwater, is degraded by scattering and absorption caused by underwater particles and light attenuation. Two images, obtained after white balancing and color compensation of the degraded input image, are combined, and the weight maps of the two images help enhance the edges and color contrast of the final output image. Because blending with weight maps introduces artifacts in the low-frequency component of the recombined image, a multiscale fusion technique has also been adopted in this paper. A comparative analysis shows that the proposed method enhances underwater images and videos such that the exposure of dark regions is improved, global contrast is better, and edges are sharper.
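A minimal sketch of the kind of multiscale fusion described, here written as Laplacian pyramid blending of pre-processed inputs with Gaussian-smoothed weight maps (the function names and pyramid depth are assumptions, not the paper's code):

```python
import cv2
import numpy as np

def gaussian_pyramid(img, levels):
    pyr = [img]
    for _ in range(levels - 1):
        pyr.append(cv2.pyrDown(pyr[-1]))
    return pyr

def laplacian_pyramid(img, levels):
    gp = gaussian_pyramid(img, levels)
    lp = []
    for i in range(levels - 1):
        up = cv2.pyrUp(gp[i + 1], dstsize=(gp[i].shape[1], gp[i].shape[0]))
        lp.append(gp[i] - up)
    lp.append(gp[-1])
    return lp

def multiscale_fusion(inputs, weights, levels=5):
    """Fuse N float inputs (H x W x 3, [0, 1]) with per-pixel weight maps (H x W)."""
    # Normalize the weight maps so they sum to one at every pixel.
    w_sum = np.sum(weights, axis=0) + 1e-8
    weights = [w / w_sum for w in weights]

    fused_pyr = None
    for img, w in zip(inputs, weights):
        lp = laplacian_pyramid(img.astype(np.float32), levels)
        gp_w = gaussian_pyramid(w.astype(np.float32), levels)
        # Weight each detail level by the smoothed weight map at the same scale.
        terms = [l * g[..., None] for l, g in zip(lp, gp_w)]
        fused_pyr = terms if fused_pyr is None else [f + t for f, t in zip(fused_pyr, terms)]

    # Collapse the fused pyramid from coarse to fine.
    out = fused_pyr[-1]
    for lvl in range(levels - 2, -1, -1):
        out = cv2.pyrUp(out, dstsize=(fused_pyr[lvl].shape[1], fused_pyr[lvl].shape[0]))
        out = out + fused_pyr[lvl]
    return np.clip(out, 0.0, 1.0)
```

Blending at multiple scales keeps sharp transitions in the weight maps from producing halos in the low-frequency content of the fused result.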


Capturing underwater images can be considered both a form of art and a way of collecting data about the underwater environment, which plays a major role in marine biology research, marine zoology, and ecology studies. It also plays a significant role in scientific missions such as analyzing marine species, taking population censuses, and monitoring the underwater biological environment. Underwater imaging also offers many other attractions such as aquatic plants and animals, different types and species of fish, shipwrecks, coral reefs, and beautiful landscapes. However, the captured images lack contrast and are hazy due to absorption and scattering, so image quality deteriorates. Degraded underwater images can be enhanced using different methods. This research work proposes a white balancing technique and an image blending process to improve the degraded images. The image blending is performed using feature matching, with features matched using the HOG and SIFT methods. The proposed technique is implemented in MATLAB, and the results are analyzed in terms of PSNR and MSE.
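The evaluation metrics mentioned (MSE and PSNR) can be computed as follows; this sketch is in Python rather than the paper's MATLAB implementation:

```python
import numpy as np

def mse(reference, test):
    """Mean squared error between two images of the same shape (float, [0, 1])."""
    diff = np.asarray(reference, dtype=float) - np.asarray(test, dtype=float)
    return float(np.mean(diff ** 2))

def psnr(reference, test, peak=1.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    error = mse(reference, test)
    if error == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / error)
```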

