scene illumination
Recently Published Documents

TOTAL DOCUMENTS: 37 (five years: 10)
H-INDEX: 6 (five years: 1)

2021 ◽  
Vol 2021 (1) ◽  
pp. 68-72
Author(s):  
Ghalia Hemrit ◽  
Joseph Meehan

The aim of colour constancy is to discount the effect of the scene illumination from the image colours and restore the colours of the objects as captured under a 'white' illuminant. For most colour constancy methods, the first step is to estimate the colour of the scene illuminant, and it is generally assumed that the illumination is uniform across the scene. However, real-world scenes often contain multiple illuminants, such as sunlight and spotlights together in one scene. In this paper we present a simple yet very effective framework that uses a deep CNN-based method to estimate and use multiple illuminants for colour constancy. Our approach works well in both the multi- and single-illuminant cases. The CNN produces a region-wise illuminant estimate map of the scene, which is smoothed and divided out from the image to perform colour constancy. The proposed method outperforms other recent and state-of-the-art methods and produces promising visual results.
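The correction step described above (smooth a per-region illuminant map, then divide it out of the image) can be sketched as follows. This is a minimal illustration, not the paper's implementation; the smoothing method (Gaussian), `sigma`, and the brightness normalisation are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def apply_illuminant_map(image, illum_map, sigma=8.0):
    """Correct an image given a per-pixel (region-wise) illuminant map.

    image:     H x W x 3 float array in [0, 1]
    illum_map: H x W x 3 per-pixel RGB illuminant estimates
    The map is smoothed channel-wise, normalised so overall brightness
    is preserved, and divided out (von Kries style).
    """
    smoothed = np.stack(
        [gaussian_filter(illum_map[..., c], sigma) for c in range(3)], axis=-1
    )
    # Scale each illuminant vector to unit norm times sqrt(3), so a
    # neutral illuminant (1, 1, 1) leaves the image unchanged.
    norm = np.linalg.norm(smoothed, axis=-1, keepdims=True) + 1e-8
    smoothed = smoothed / norm * np.sqrt(3)
    corrected = image / np.clip(smoothed, 1e-4, None)
    return np.clip(corrected, 0.0, 1.0)
```

With a spatially uniform map this reduces to ordinary single-illuminant correction; with a spatially varying map, each region is corrected by its own estimate.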


Author(s):  
Keun Ha Choi ◽  
SooHyun Kim

In this paper, we propose a novel method, illumination-invariant vegetation detection (IVD), to improve many aspects of agriculture for vision-based autonomous machines and robots. The proposed method derives new color feature functions by simultaneously modeling the spectral properties of the color camera and the scene illumination. An experiment was performed on an image dataset acquired under natural illumination, covering various intensities, weather conditions, shadows and reflections. The results show that the proposed method (IVD) yields the highest performance, with the lowest error and standard deviation, and is superior to six typical methods. Our method has multiple strengths, including computational simplicity and uniformly high-accuracy image segmentation.
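The abstract does not give the paper's feature functions, but a typical baseline of the kind IVD is compared against can illustrate the idea of an illumination-robust colour feature: an excess-green index computed on chromaticity coordinates, which is insensitive to illumination intensity (though not to illuminant colour). The function name and threshold below are illustrative assumptions.

```python
import numpy as np

def vegetation_mask(image, threshold=0.05):
    """Hypothetical baseline: segment vegetation with a normalized
    excess-green index. Dividing by the channel sum removes overall
    intensity, so the feature is robust to brightness changes.

    image: H x W x 3 float RGB in [0, 1]; returns a boolean H x W mask.
    """
    s = image.sum(axis=-1) + 1e-8
    r, g, b = (image[..., c] / s for c in range(3))  # chromaticity coords
    exg = 2.0 * g - r - b  # excess-green on chromaticities
    return exg > threshold
```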


2020 ◽  
Author(s):  
Nikola Banić ◽  
Karlo Koščević ◽  
Marko Subašić ◽  
Sven Lončarić

Computational color constancy is used in almost all digital cameras to reduce the influence of scene illumination on object colors. Many of the most accurate published illumination estimation methods use deep learning, which relies on large amounts of images with known ground-truth illuminations. Since the appropriate publicly available training datasets are relatively small, data augmentation is often used, for example by simulating the appearance of a given image under another illumination. Still, there are practically no reports on the desired properties of such simulated images or on the limits of their usability. In this paper, several experiments for determining some of these properties are proposed and conducted by comparing the behavior of the simplest illumination estimation methods on images of the same scenes obtained under real illuminations and on images obtained through data augmentation. The experimental results are presented and discussed.
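The kind of augmentation the abstract refers to, simulating an image under another illuminant, is commonly done with the diagonal (von Kries) model: divide out the known ground-truth illuminant and multiply by the target one, channel-wise. A minimal sketch, assuming linear RGB images:

```python
import numpy as np

def relight(image, illum_src, illum_tgt):
    """Simulate `image` under a different illuminant via the diagonal
    (von Kries) model: per-channel gain = target / source illuminant.

    image:     H x W x 3 linear RGB array in [0, 1]
    illum_src: length-3 RGB of the (known) ground-truth illuminant
    illum_tgt: length-3 RGB of the illuminant to simulate
    """
    illum_src = np.asarray(illum_src, dtype=float)
    illum_tgt = np.asarray(illum_tgt, dtype=float)
    gain = illum_tgt / np.clip(illum_src, 1e-8, None)
    return np.clip(image * gain, 0.0, 1.0)
```

The experiments described in the abstract compare estimators on real reilluminated scenes against images produced by exactly this kind of simulation.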


The White Patch algorithm, one of the basic pixel-based illumination estimation algorithms, computes the global illumination RGB value of an image under the specific assumption that the maximum reflectance in the scene is achromatic. This assumption about the scene illumination is restrictive, and many images fail to satisfy it. In this paper, we propose an improved White Patch illumination estimation method. First, image patches are extracted with a sliding window; the White Patch algorithm is then used to estimate the illumination color of each patch; and finally kernel density estimation is applied to the per-patch estimates to obtain the overall illumination color of the image. The experimental results show that the improved White Patch illumination estimation method proposed in this paper performs better on the illumination estimation of natural illumination scene images.
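The three steps above (sliding-window patches, per-patch max-RGB White Patch estimates, kernel density estimation over the estimates) can be sketched as follows. The patch size, stride, per-channel KDE, and grid resolution are illustrative assumptions, not the paper's parameters.

```python
import numpy as np
from scipy.stats import gaussian_kde

def white_patch_kde(image, patch=32, stride=16):
    """Patch-wise White Patch with a KDE vote.

    image: H x W x 3 float array; returns a length-3 RGB illuminant.
    Each sliding window contributes its per-channel maximum; the final
    estimate per channel is the mode of a Gaussian KDE over those values.
    """
    h, w, _ = image.shape
    estimates = []
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            win = image[y:y + patch, x:x + patch]
            estimates.append(win.reshape(-1, 3).max(axis=0))
    estimates = np.array(estimates)  # N x 3 per-patch max-RGB estimates

    illum = np.empty(3)
    for c in range(3):
        vals = estimates[:, c]
        if np.ptp(vals) < 1e-6:       # KDE needs spread; fall back to mean
            illum[c] = vals.mean()
            continue
        kde = gaussian_kde(vals)
        grid = np.linspace(vals.min(), vals.max(), 256)
        illum[c] = grid[np.argmax(kde(grid))]
    return illum
```

Taking the KDE mode rather than the global maximum makes the estimate robust to a few patches containing specular highlights or clipped pixels, which is presumably why it helps on natural scenes.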


Illumination estimation algorithms belong to the field of color constancy and aim to restore the colors of an image by estimating the RGB values of the scene illumination. The performance of a general-purpose algorithm varies greatly across scenarios. If the scene type can be predicted, a scene-specific optimal algorithm can be expected to estimate the illumination better than a general one. In this paper, a novel algorithm based on outdoor scene classification is proposed: first, a support vector machine (SVM) classifier is used to identify the scene type; then the optimal algorithm for that scenario is selected; and finally the RGB values of the scene illumination are calculated.
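The classify-then-select pipeline can be sketched as below. The feature (a coarse colour histogram), the two stand-in estimators, and the scene-to-method mapping are all illustrative assumptions; the abstract does not specify which features or per-scene algorithms the paper uses.

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical per-scene illuminant estimators (names illustrative).
def grey_world(image):
    """Grey World: the mean RGB is assumed to reflect the illuminant."""
    return image.reshape(-1, 3).mean(axis=0)

def white_patch(image):
    """White Patch: the max RGB is assumed to reflect the illuminant."""
    return image.reshape(-1, 3).max(axis=0)

SCENE_METHODS = {0: grey_world, 1: white_patch}  # e.g. 0=vegetation, 1=sky

def scene_feature(image):
    """A coarse 4x4x4 colour histogram as the SVM input feature."""
    hist, _ = np.histogramdd(
        image.reshape(-1, 3), bins=(4, 4, 4), range=[(0, 1)] * 3
    )
    return (hist / hist.sum()).ravel()

def estimate_illuminant(image, clf):
    """Classify the scene type, then run that scene's estimator."""
    scene = int(clf.predict([scene_feature(image)])[0])
    return SCENE_METHODS[scene](image)
```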


2020 ◽  
Author(s):  
Juliano de Paula Gonçalves ◽  
Francisco de Assis de Carvalho Pinto ◽  
Daniel Marçal de Queiroz ◽  
Flora Maria de Melo Villar ◽  
Jayme G.A. Barbedo ◽  
...  

Measures of the percent severity of visible symptoms or injuries caused by diseases or insect pests on plant organs are essential in plant health research. Current color-thresholding digital imaging methods are generally more accurate and reliable than visual estimates. However, these methods perform poorly when the scene illumination and background are not uniform, conditions that can be overcome by convolutional neural networks (CNNs) for semantic segmentation. In this study, we trained five CNN models for pixel-level prediction in images of individual leaves exhibiting necrotic lesions and/or yellowing caused by the insect pest coffee leaf miner (CLM) and two fungal diseases: soybean rust (SBR) and wheat tan spot (WTS). Training was performed on 80% of the images, annotated for three classes: leaf background (B), healthy leaf (H) and injured leaf (I). Precision, recall and Intersection over Union (IoU) metrics on the test image set were highest for the B class, followed by H and I, irrespective of the model. When the pixel-level predictions were used to estimate percent severity, Feature Pyramid Network (FPN), Unet and DeepLabv3+ (Xception) performed best: concordance coefficients were greater than 0.95, 0.96 and 0.98 for the CLM, SBR and WTS datasets, respectively. The other three models tended to misclassify healthy pixels as injured, leading to overestimation of percent severity. The accuracy of the CNN models' predictions was comparable to that obtained using a standard commercial software package, which requires manual adjustments that slow the process.
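Going from a three-class pixel prediction (B, H, I) to percent severity is a direct ratio: injured pixels over total leaf (healthy plus injured) pixels, excluding background. A minimal sketch, with the label encoding 0=B, 1=H, 2=I assumed for illustration:

```python
import numpy as np

def percent_severity(pred):
    """Percent severity from a per-pixel class map.

    pred: integer array with labels 0 = background (B),
          1 = healthy leaf (H), 2 = injured leaf (I).
    Returns 100 * injured / (healthy + injured); background is excluded
    so the measure is independent of how the leaf fills the frame.
    """
    healthy = np.count_nonzero(pred == 1)
    injured = np.count_nonzero(pred == 2)
    leaf = healthy + injured
    return 100.0 * injured / leaf if leaf else 0.0
```

This also makes the failure mode in the abstract concrete: a model that misclassifies healthy pixels as injured inflates the numerator and deflates nothing, so severity is systematically overestimated.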


2020 ◽  
pp. 1-1
Author(s):  
Shahnewaz Ali ◽  
Yaqub Jonmohamadi ◽  
Yu Takeda ◽  
Jonathan Roberts ◽  
Ross Crawford ◽  
...  

2019 ◽  
Author(s):  
Jeffrey Nivitanont ◽  
Sean Crowell ◽  
Chris O'Dell ◽  
Eric Burgh ◽  
Gregory McGarragh ◽  
...  
