Colour Constancy for Image of Non-Uniformly Lit Scenes

Sensors ◽  
2019 ◽  
Vol 19 (10) ◽  
pp. 2242
Author(s):  
Md Akmol Hussain ◽  
Akbar Sheikh-Akbari ◽  
Iosif Mporas

Digital camera sensors are designed to record all incident light from a captured scene, but they are unable to distinguish between the colour of the light source and the true colour of objects. The resulting captured image exhibits a colour cast toward the colour of the light source. This paper presents a colour constancy algorithm for images of scenes lit by non-uniform light sources. The proposed algorithm uses a histogram-based algorithm to determine the number of colour regions. It then applies the K-means++ algorithm to the input image, dividing the image into segments. The proposed algorithm computes the Normalized Average Absolute Difference (NAAD) for each segment and uses it as a measure to determine whether the segment has sufficient colour variation. Initial colour constancy adjustment factors are then calculated for each segment with sufficient colour variation. The Colour Constancy Adjustment Weighting Factors (CCAWF) for each pixel of the image are determined by fusing the CCAWFs of the segments, weighted by the normalized Euclidean distance of the pixel from the centre of each segment. Results show that the proposed method outperforms statistical techniques, and its images exhibit significantly higher subjective quality than those of the learning-based methods. In addition, the execution time of the proposed algorithm is comparable to that of statistical techniques and much lower than those of state-of-the-art learning-based methods.
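The NAAD test and the distance-weighted fusion described above can be sketched as follows. Both formulas are assumptions for illustration: the abstract gives neither the exact NAAD definition nor the weighting function, so mean-absolute-deviation NAAD and inverse-distance weights are stand-ins.

```python
import numpy as np

def naad(segment_pixels):
    """Normalized Average Absolute Difference of a segment (hypothetical
    formulation: mean absolute deviation from the segment's mean colour,
    normalized by the segment's overall mean intensity)."""
    mean = segment_pixels.mean(axis=0)
    return np.mean(np.abs(segment_pixels - mean)) / (segment_pixels.mean() + 1e-12)

def fuse_gains(pixel_xy, centres, segment_gains):
    """Blend per-segment white-balance gains for one pixel.  Assumption:
    closer segments get higher weight, via inverse normalized Euclidean
    distance to each segment centre (the paper's exact weighting may differ)."""
    d = np.linalg.norm(centres - pixel_xy, axis=1)
    w = 1.0 / (d + 1e-9)          # avoid division by zero at a centre
    w /= w.sum()                  # normalize weights to sum to 1
    return (w[:, None] * segment_gains).sum(axis=0)
```

Under this weighting, a pixel at a segment centre takes that segment's gains almost exactly, and gains vary smoothly between segments, which is the behaviour the fusion step is meant to achieve.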


2020 ◽  
Vol 2020 (11) ◽  
pp. 234-1-234-6
Author(s):  
Nicolai Behmann ◽  
Holger Blume

LED flicker artefacts, caused by unsynchronized irradiation from a pulse-width-modulated LED light source captured by a digital camera sensor with discrete exposure times, place new requirements on both visual and machine vision systems. While the latter need to capture relevant information from the light source in only a limited number of frames (e.g. a flickering traffic light), human vision is sensitive to illumination modulation in viewing applications, e.g. digital mirror replacement systems. In order to quantify flicker in viewing applications with KPIs related to human vision, we present a novel approach and the results of a psychophysics study on the effect of LED flicker artefacts. Diverse real-world driving sequences were captured with both mirror replacement cameras and a front viewing camera, and potential flicker light sources were masked manually. Synthetic flicker with adjustable parameters is then overlaid on these areas, and the flickering sequences are presented to test persons in a driving environment. Feedback from the testers on flicker perception for different viewing areas, sizes, and frequencies is collected and evaluated.
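The beat between an unsynchronized PWM light source and discrete exposure windows can be sketched numerically. This is a minimal illustrative model, not the study's overlay: frame rate, exposure, PWM frequency, and duty cycle are all assumed values.

```python
import numpy as np

def pwm_capture_gain(t_start, exposure, pwm_freq, duty):
    """Fraction of an exposure window during which a PWM-driven LED is on,
    integrated numerically over the window.  A sketch of why per-frame
    brightness varies when capture and PWM are unsynchronized."""
    t = np.linspace(t_start, t_start + exposure, 2000)
    on = (t * pwm_freq) % 1.0 < duty   # square-wave LED drive
    return on.mean()

# Unsynchronized 60 fps capture of a 90 Hz, 50 % duty-cycle LED with a
# short (1/240 s) exposure: the captured brightness changes frame to frame,
# which is the flicker artefact the study overlays synthetically.
frame_gains = [pwm_capture_gain(i / 60.0, 1 / 240.0, pwm_freq=90.0, duty=0.5)
               for i in range(12)]
```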



Perception ◽  
1997 ◽  
Vol 26 (1_suppl) ◽  
pp. 190-190 ◽  
Author(s):  
H Irtel

Most theories of colour constancy assume a flat coloured surface and a single homogeneous light source. Natural situations, however, are 3-dimensional (3-D), are hardly ever restricted to a single light source, and object illumination is never homogeneous. Here, two special cases of secondary light sources with sharp boundaries were simulated on a computer screen: a house-like 3-D object with colour patches in sunlight and shadow, and a Mondrian-type pattern with a coloured transparency covering some of the colour patches. Subjects made ‘paper’ matches between colour patches in light and shadow and between patches under the transparency and without the transparency. Matching did not depend on whether the simulated lighting condition was natural (yellow light, blue shadow) or artificial (green light, magenta shadow). Patches under a coloured transparency produced lightness constancy, but subjects could not discount chromaticity shifts induced by the transparency. The number of context patches (2 vs 6) made no difference, and it made no difference whether the transparency covered the Mondrian completely or only partially. These results indicate that subjects were not able to use local contrast cues at sharp illumination boundaries to discount the illuminant.



2019 ◽  
pp. 101-107
Author(s):  
Sergei A. Stakharny

This article reviews a new light source, organic LEDs, which have prospects for application in general and special lighting systems. The article describes the physical principles of operation of organic LEDs, their advantages, and their principal differences from conventional inorganic LEDs and other light sources. The article is also devoted to contemporary achievements and development prospects in this field, in the spheres of both general and museum lighting, as well as other spheres where the properties of organic LEDs as high-quality light sources may be extremely useful.



2020 ◽  
Vol 34 (03) ◽  
pp. 2594-2601
Author(s):  
Arjun Akula ◽  
Shuai Wang ◽  
Song-Chun Zhu

We present CoCoX (short for Conceptual and Counterfactual Explanations), a model for explaining decisions made by a deep convolutional neural network (CNN). In Cognitive Psychology, the factors (or semantic-level features) that humans zoom in on when they imagine an alternative to a model prediction are often referred to as fault-lines. Motivated by this, our CoCoX model explains decisions made by a CNN using fault-lines. Specifically, given an input image I for which a CNN classification model M predicts class c_pred, our fault-line based explanation identifies the minimal semantic-level features (e.g., stripes on zebra, pointed ears of dog), referred to as explainable concepts, that need to be added to or deleted from I in order to alter the classification category of I by M to another specified class c_alt. We argue that, due to the conceptual and counterfactual nature of fault-lines, our CoCoX explanations are practical and more natural for both expert and non-expert users to understand the internal workings of complex deep learning models. Extensive quantitative and qualitative experiments verify our hypotheses, showing that CoCoX significantly outperforms the state-of-the-art explainable AI models. Our implementation is available at https://github.com/arjunakula/CoCoX
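The fault-line idea of a minimal concept set that flips a prediction can be caricatured as a greedy selection over concept scores. This is purely illustrative: the concept names and scores below are invented, and CoCoX itself derives fault-lines from CNN feature maps rather than from a precomputed score table.

```python
def minimal_fault_line(concept_deltas, margin):
    """Toy greedy search for a small set of explainable concepts whose
    combined evidence flips class c_pred to c_alt.  'margin' is the score
    gap the alternative class must overcome; both it and the per-concept
    deltas are hypothetical quantities for this sketch."""
    chosen, gained = [], 0.0
    # Add the most discriminative concepts first until the decision flips.
    for name, delta in sorted(concept_deltas.items(), key=lambda kv: -kv[1]):
        if gained >= margin:
            break
        chosen.append(name)
        gained += delta
    return chosen if gained >= margin else None
```

A greedy pass is not guaranteed to find the true minimal set; it only conveys the "fewest concepts that alter the decision" intuition behind the explanation.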



2021 ◽  
Vol 11 (9) ◽  
pp. 4035
Author(s):  
Jinsheon Kim ◽  
Jeungmo Kang ◽  
Woojin Jang

Light-emitting diode (LED) seaport luminaires should be designed in consideration of glare, average illuminance, and overall uniformity. Although it is possible to implement light distribution through auxiliary devices such as reflectors, doing so increases the weight and size of the luminaire, which reduces feasibility. Considering the special environment of seaport luminaires, which are installed at a height of 30 m or more, it is necessary to reduce the weight of the device, facilitate replacement, and secure a light source with a long life. In this paper, an optimized lens design was investigated to provide the uniform light distribution required in the seaport lighting application. Four types of lens were designed and fabricated to verify the uniform light distribution requirement for the seaport lighting application. Using numerical analysis, we optimized the lens that provides the required minimum overall uniformity for the seaport lighting application. A theoretical analysis of the heatsink structure and shape was conducted to reduce the heat from high-power LED light sources of up to 250 W. As a result of these analyses of the heat dissipation characteristics of the high-power LED light source used in the LED seaport luminaire, the heatsink with hexagonal-shaped fins shows the best heat dissipation effect. Finally, a prototype LED seaport luminaire with an optimized lens and heatsink was fabricated and tested in a real seaport environment. The light distribution characteristics of this prototype LED seaport luminaire were compared with those of a commercial high-pressure sodium luminaire and a metal halide luminaire.
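The overall-uniformity metric the lens is optimized for is conventionally defined as the minimum illuminance over the average illuminance on a measurement grid. A minimal sketch (the paper's exact grid and pass criterion are not reproduced here):

```python
def overall_uniformity(illuminances):
    """Overall uniformity U0 = E_min / E_avg over measured grid points,
    the standard metric area-lighting requirements are stated in.
    'illuminances' is a flat list of lux readings."""
    e_avg = sum(illuminances) / len(illuminances)
    return min(illuminances) / e_avg
```

A perfectly even distribution gives U0 = 1.0; a dark spot drags U0 down, which is why lens shaping matters more than raw flux for this application.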



2021 ◽  
Vol 11 (12) ◽  
pp. 5383
Author(s):  
Huachen Gao ◽  
Xiaoyu Liu ◽  
Meixia Qu ◽  
Shijie Huang

In recent studies, self-supervised learning methods have been explored for monocular depth estimation. They minimize the reconstruction loss of images, instead of depth information, as a supervised signal. However, existing methods usually assume that corresponding points in different views should have the same color, which leads to unreliable unsupervised signals and ultimately damages the reconstruction loss during training. Meanwhile, in low-texture regions, such methods are unable to predict the disparity values of pixels correctly because of the small number of extracted features. To solve the above issues, we propose a network, PDANet, that integrates perceptual consistency and data augmentation consistency, which are more reliable unsupervised signals, into a regular unsupervised depth estimation model. Specifically, we apply a reliable data augmentation mechanism to minimize the loss between the disparity maps generated from the original image and the augmented image, respectively, which enhances the robustness of the prediction to color fluctuations. At the same time, we aggregate the features of different layers extracted by a pre-trained VGG16 network to explore the higher-level perceptual differences between the input image and the generated one. Ablation studies demonstrate the effectiveness of each component, and PDANet shows high-quality depth estimation results on the KITTI benchmark, improving on the state-of-the-art method from 0.114 to 0.084, measured by absolute relative error for depth estimation.
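The absolute relative error that the quoted 0.114 → 0.084 improvement refers to is a standard KITTI depth metric, sketched below (the validity mask convention is the usual one; evaluation crops and depth caps used in practice are omitted):

```python
import numpy as np

def abs_rel(pred, gt):
    """Absolute relative error for depth estimation:
    mean(|d_pred - d_gt| / d_gt) over pixels with valid ground truth
    (ground-truth depth > 0)."""
    pred, gt = np.asarray(pred, float), np.asarray(gt, float)
    valid = gt > 0
    return float(np.mean(np.abs(pred[valid] - gt[valid]) / gt[valid]))
```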



2010 ◽  
Vol 2010 ◽  
pp. 1-9 ◽  
Author(s):  
Andrew Chalmers ◽  
Snjezana Soltic

This paper is concerned with designing light source spectra for optimum luminous efficacy and colour rendering. We demonstrate that it is possible to design light sources that provide both good colour rendering and high luminous efficacy by combining the outputs of a number of narrowband spectral constituents. The achievable results also depend on the number and wavelengths of the different spectral bands utilized in the mixture. Practical realization of these concepts has been demonstrated in this pilot study, which combines a number of simulations with tests using real LEDs (light-emitting diodes). Such sources are capable of providing highly efficient lighting systems with good energy conservation potential. Further research is underway to investigate the practicalities of our proposals in relation to large-scale light source production.
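Mixing narrowband constituents and scoring the mixture for luminous efficacy can be sketched as below. This is a rough stand-in, not the paper's method: the bands are modelled as Gaussians, the CIE V(λ) curve is approximated by a Gaussian at 555 nm (real calculations use the tabulated CIE data), and the band weights and centre wavelengths are illustrative only.

```python
import numpy as np

wl = np.arange(380.0, 781.0, 1.0)   # visible wavelength grid, nm

def band(centre, width=12.0):
    """Narrowband (LED-like) spectral constituent modelled as a Gaussian SPD."""
    return np.exp(-0.5 * ((wl - centre) / width) ** 2)

# Crude analytic stand-in for the CIE photopic sensitivity curve V(lambda).
v_lambda = np.exp(-0.5 * ((wl - 555.0) / 45.0) ** 2)

def efficacy(spd):
    """Luminous efficacy of radiation, K = 683 * sum(S*V) / sum(S), lm/W
    (sums approximate the integrals on the uniform 1 nm grid)."""
    return 683.0 * (spd * v_lambda).sum() / spd.sum()

# Four-band mixture: trading band placement against efficacy is the
# design space the paper explores (weights/centres here are assumptions).
mix = (0.25 * band(450) + 0.35 * band(530)
       + 0.20 * band(590) + 0.20 * band(630))
```

Bands near 555 nm maximize efficacy but give poor colour rendering on their own; spreading bands across the visible range recovers rendering at some efficacy cost, which is the trade-off the paper quantifies.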



Author(s):  
Wenxuan Jia ◽  
Yuen-Shan Leung ◽  
Huachao Mao ◽  
Han Xu ◽  
Chi Zhou ◽  
...  

Microscale surface structures are commonly found on the macroscale bodies of natural creatures, where they serve unique functions. However, it is difficult to fabricate such multi-scale geometry with conventional stereolithography processes, which rely on either a laser or a digital micromirror device (DMD). More specifically, the DMD-based mask projection method displays the image of a cross-section of the part on the resin to fabricate an entire layer efficiently; however, its display resolution is limited by the building area. In comparison, the laser-based vector scanning method builds smooth features using a focused laser beam with the desired beam-width resolution; however, it has lower throughput due to its sequential nature. In this paper, we studied a hybrid-light-source stereolithography process that integrates both optical light sources to facilitate the fabrication of macro-objects with microscale surface structures (called micro-textures in this paper). The hardware system uses a novel calibration approach that ensures pixel-level dimensional accuracy across the two light sources. The software system enables designing the distribution and density of specific microscale textures on a macro-object by generating projection images and laser toolpaths for the two integrated light sources. Several test cases were fabricated to demonstrate the capability of the developed process. A large fabrication area (76.8 mm × 80.0 mm) with 50 μm micro-features can be achieved with high throughput.
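Why DMD resolution is "limited by the building area" follows from simple arithmetic: a fixed mirror count is stretched across the projected width. The 1920-mirror row below is an assumption for illustration; the paper does not state its DMD format.

```python
def projection_pixel_pitch_um(build_width_mm, dmd_pixels):
    """Lateral pixel pitch of a DMD mask projection: the build width is
    divided across the (fixed) mirror count, so enlarging the build area
    directly coarsens the projected pixel."""
    return build_width_mm * 1000.0 / dmd_pixels   # micrometres per pixel

# Spreading a hypothetical 1920-mirror row across the 76.8 mm build width
# gives 40 um pixels -- on the same scale as the 50 um micro-features,
# which is why smooth micro-textures fall to the focused laser instead.
pitch = projection_pixel_pitch_um(76.8, 1920)
```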



2018 ◽  
Vol 2018 ◽  
pp. 1-13 ◽  
Author(s):  
Liyun Zhuang ◽  
Yepeng Guan

A novel image enhancement approach called entropy-based adaptive subhistogram equalization (EASHE) is put forward in this paper. The proposed algorithm divides the histogram of the input image into four segments based on the entropy value of the histogram, and the dynamic range of each subhistogram is adjusted. A novel algorithm to adjust the probability density function of the gray levels is proposed, which can adaptively control the degree of image enhancement. Furthermore, the final contrast-enhanced image is obtained by equalizing each subhistogram independently. The proposed algorithm is compared with some state-of-the-art HE-based algorithms. The quantitative results for a public image database named CVG-UGR-Database are statistically analyzed. The quantitative and visual assessments show that the proposed algorithm outperforms most of the existing contrast-enhancement algorithms. The proposed method effectively enhances image contrast while preserving mean brightness and details.
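The entropy-based partitioning step can be sketched as splitting the gray-level axis so each segment carries roughly equal histogram entropy. This is a simplified stand-in: EASHE's exact boundary rule, dynamic-range adjustment, and PDF modification are not reproduced here.

```python
import numpy as np

def entropy_split(hist, n_segments=4):
    """Split a 256-bin gray-level histogram into n_segments spans of
    approximately equal entropy; returns the segment boundary indices
    [0, b1, b2, b3, 256] for n_segments=4."""
    p = hist / hist.sum()
    # Per-bin Shannon entropy contribution, with 0*log(0) treated as 0.
    h = -np.where(p > 0, p * np.log2(np.where(p > 0, p, 1.0)), 0.0)
    cum = np.cumsum(h)
    bounds = [int(np.searchsorted(cum, cum[-1] * k / n_segments))
              for k in range(1, n_segments)]
    return [0] + bounds + [len(hist)]
```

Each resulting subhistogram would then be equalized independently, so heavily populated intensity ranges cannot dominate the global mapping, which is how this family of methods preserves mean brightness.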


