Color and Imaging Conference
Latest Publications

TOTAL DOCUMENTS: 307 (five years: 175)
H-INDEX: 4 (five years: 1)

Published by: Society for Imaging Science & Technology
ISSN: 2166-9635

2021, Vol 2021 (29), pp. 19-24
Author(s): Yi-Tun Lin, Graham D. Finlayson

In Spectral Reconstruction (SR), we recover hyperspectral images from their RGB counterparts. Most recent approaches are based on Deep Neural Networks (DNNs), where millions of parameters are trained mainly to extract and utilize contextual features in large image patches as part of the SR process. On the other hand, the leading Sparse Coding method ‘A+’, which is among the strongest point-based baselines against the DNNs, divides the RGB space into neighborhoods, within which a simple linear regression (comprising roughly 10² parameters) suffices for SR. In this paper, we explore how the performance of Sparse Coding can be further advanced. We point out that in the original A+, the sparse dictionary used for neighborhood separation is optimized for the spectral data but applied in the projected RGB space. In turn, we demonstrate that if the local linear mapping is trained for each spectral neighborhood instead of each RGB neighborhood (and, theoretically, if we could recover each spectrum based on where it is located in the spectral space), the Sparse Coding algorithm can actually perform much better than the leading DNN method. In effect, our result defines one potential (and very appealing) upper bound on the performance of point-based SR.
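As a rough illustration of the point-based idea behind A+ (a hypothetical sketch on synthetic data, not the authors' code): RGB values are assigned to dictionary-anchor neighborhoods, and a small per-neighborhood linear regression maps RGB to a spectrum.

```python
import numpy as np

# Hypothetical sketch of A+-style spectral reconstruction: RGB space is
# split into neighborhoods around dictionary anchors, and a small linear
# map from RGB to spectrum is fit per neighborhood. All data is synthetic.

rng = np.random.default_rng(0)

# Synthetic "ground truth": 31-band spectra generated from RGB by two
# different linear maps, emulating two local regimes split at red = 0.5.
M1 = rng.normal(size=(31, 3))
M2 = rng.normal(size=(31, 3))
rgb = rng.uniform(size=(200, 3))
spectra = np.where(rgb[:, :1] < 0.5, rgb @ M1.T, rgb @ M2.T)

# Dictionary anchors define the neighborhoods (here chosen so that the
# nearest-anchor boundary coincides with the regime split, for illustration).
anchors = np.array([[0.25, 0.5, 0.5], [0.75, 0.5, 0.5]])

def nearest_anchor(x):
    return int(np.argmin(((anchors - x) ** 2).sum(axis=1)))

labels = np.array([nearest_anchor(x) for x in rgb])

# Fit one linear regression (3 -> 31, on the order of 10^2 parameters)
# per neighborhood.
maps = {}
for k in range(len(anchors)):
    idx = labels == k
    maps[k], *_ = np.linalg.lstsq(rgb[idx], spectra[idx], rcond=None)

def reconstruct(x):
    return x @ maps[nearest_anchor(x)]

err = np.abs(np.array([reconstruct(x) for x in rgb]) - spectra).max()
```

Because the synthetic data is exactly piecewise linear and the neighborhoods align with the two regimes, the per-neighborhood regressions recover the spectra almost exactly.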


2021, Vol 2021 (29), pp. 160-165
Author(s): Mark D. Fairchild

A digital color appearance test chart, akin to a ColorChecker® Chart for human perception, was developed and evaluated both perceptually and computationally. The chart allows an observer to adjust the appearance of a limited number of color patches, enabling a quick evaluation of perceived brightness, colorfulness, lightness, saturation, and hue on a display. The resulting data can then be used to compare observed results with the predictions of various color appearance models. Analyses in this paper highlight some known shortcomings of CIELAB, CIECAM02, and CAM16. Differences between CIECAM02 and CAM16 are also highlighted. This paper does not provide new psychophysical data for model testing; it simply describes a technique to generate such data and a computational comparison of models.
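For context on the CIELAB side of such comparisons: CIELAB lightness L* is the CIE-defined cube-root compression of relative luminance, whereas appearance models such as CIECAM02 and CAM16 replace it with viewing-condition-dependent predictions. A minimal sketch of the standard L* formula:

```python
# CIELAB lightness L* from relative luminance Y (CIE definition), with the
# linear segment for very dark stimuli. Yn is the white-point luminance.

def cielab_lightness(Y, Yn=100.0):
    t = Y / Yn
    if t > (6.0 / 29.0) ** 3:
        f = t ** (1.0 / 3.0)
    else:
        f = t * (29.0 / 6.0) ** 2 / 3.0 + 4.0 / 29.0
    return 116.0 * f - 16.0

L_white = cielab_lightness(100.0)   # diffuse white -> L* = 100
L_mid = cielab_lightness(18.4)      # ~mid-grey reflectance -> L* near 50
```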


2021, Vol 2021 (29), pp. 258-263
Author(s): Marius Pedersen, Seyed Ali Amirshahi

Over the years, a large number of objective image quality metrics have been proposed. While some show a high correlation with the subjective scores provided in different datasets, there is still room for improvement. Several studies have pointed to the quality evaluation of images affected by geometrical distortions as a challenge for current image quality metrics. In this work, we introduce the Colourlab Image Database: Geometric Distortions (CID:GD), with 49 different reference images made specifically for evaluating image quality metrics. CID:GD is one of the first datasets to include three different types of geometrical distortion: seam carving, lens distortion, and image rotation. Thirty-five state-of-the-art image quality metrics are tested on this dataset, showing that apart from a handful of these objective metrics, most do not perform well. The dataset is available at www.colourlab.no/cid.
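Metric performance against subjective scores is typically summarized by rank-order correlation (SROCC). A minimal pure-Python sketch on made-up data (ties are not handled; the paper's exact evaluation protocol may differ):

```python
# Spearman rank-order correlation (SROCC) between metric outputs and mean
# opinion scores. Assumes no tied values; real evaluations use tie-aware
# implementations. All scores below are hypothetical.

def ranks(xs):
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for rank, i in enumerate(order):
        r[i] = float(rank)
    return r

def srocc(a, b):
    ra, rb = ranks(a), ranks(b)
    n = len(a)
    ma, mb = sum(ra) / n, sum(rb) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(ra, rb))
    sa = sum((x - ma) ** 2 for x in ra) ** 0.5
    sb = sum((y - mb) ** 2 for y in rb) ** 0.5
    return cov / (sa * sb)

mos = [4.2, 3.1, 2.5, 1.8, 4.8]          # hypothetical subjective scores
metric = [0.91, 0.74, 0.60, 0.55, 0.95]  # hypothetical metric outputs
score = srocc(mos, metric)               # same rank order -> 1.0
```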


2021, Vol 2021 (29), pp. 323-327
Author(s): Ali Alsam, Hans Jakob Rivertz

A fast, spatially adaptive filter for smoothing colour images while preserving edges is proposed. To preserve edges, we use a constraint that prevents gradients from increasing during diffusion. This constraint is shown to be very effective in preserving details and flexible in cases where more smoothing is desired. In addition, a filter of exponentially increasing diameter is used to allow the averaging of non-adjacent pixels, including those separated by strong edges.
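A hypothetical 1-D sketch of the idea (not the paper's algorithm): each averaging step is accepted per sample only if it does not increase the local gradient magnitude, so diffusion proceeds in flat regions but cannot create overshoots.

```python
# 1-D diffusion with a "gradients must not increase" acceptance test.
# Illustrative only; the paper works on colour images with an adaptive,
# exponentially growing filter diameter.

def constrained_smooth(signal, iterations=10):
    s = list(signal)
    for _ in range(iterations):
        new = s[:]
        for i in range(1, len(s) - 1):
            candidate = (s[i - 1] + s[i] + s[i + 1]) / 3.0
            old_grad = max(abs(s[i] - s[i - 1]), abs(s[i + 1] - s[i]))
            new_grad = max(abs(candidate - s[i - 1]), abs(s[i + 1] - candidate))
            if new_grad <= old_grad:  # constraint: never increase gradients
                new[i] = candidate
        s = new
    return s

step = [0.0] * 5 + [1.0] * 5   # a hard edge with added noise
noise = [0, 0.1, -0.1, 0.05, 0, 0, -0.05, 0.1, -0.1, 0]
noisy = [x + d for x, d in zip(step, noise)]
smoothed = constrained_smooth(noisy)
```

Each accepted update is a convex combination of neighbors, so the output stays within the range of the input.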


2021, Vol 2021 (29), pp. 193-196
Author(s): Anku, Susan P. Farnand

White balance is one of the key processes in a camera pipeline. Accuracy can be challenging when a scene is illuminated by multiple colored light sources. We designed and built a studio with controllable multi-LED light sources that produced a range of correlated color temperatures (CCTs) with high color fidelity; these were used to illuminate test scenes. A two-alternative forced-choice (2AFC) experiment was performed to evaluate white balance appearance preference for images containing a model in the foreground and target objects in an indoor background scene. The foreground and background were lit by different combinations of cool to warm sources, and observers were asked to pick the image that was most aesthetically appealing to them. The results show that when the background is warm, skin tones dominated observers' decisions, and when the background is cool, preference shifts to scenes with the same foreground and background CCT. The familiarity of the objects in the background scene did not have a significant effect.
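For context, the simplest global white-balance baseline is gray-world gain estimation, which assumes a single illuminant; mixed-illuminant scenes like those studied here violate exactly this single-gain assumption. A minimal sketch (illustrative, not the paper's pipeline):

```python
# Gray-world white balance: per-channel gains are chosen so that each
# channel's mean matches the overall mean, neutralizing a single global
# colour cast. Pixel values below are made up.

def gray_world_gains(pixels):
    n = len(pixels)
    means = [sum(p[c] for p in pixels) / n for c in range(3)]
    gray = sum(means) / 3.0
    return [gray / m for m in means]

def apply_gains(pixels, gains):
    return [[p[c] * gains[c] for c in range(3)] for p in pixels]

# A warm cast: red lifted, blue suppressed.
warm = [[0.80, 0.50, 0.30], [0.82, 0.50, 0.28], [0.78, 0.52, 0.31]]
balanced = apply_gains(warm, gray_world_gains(warm))
```

After correction the three channel means are equal by construction, which is the gray-world criterion.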


2021, Vol 2021 (29), pp. 83-88
Author(s): Sahar Azimian, Farah Torkamani Azar, Seyed Ali Amirshahi

Many studies have long focused on introducing new image enhancement techniques. While these techniques perform well and are able to increase the quality of images, little attention has been paid to how and when over-enhancement occurs. This could be linked to the fact that current image quality metrics are not able to accurately evaluate the quality of enhanced images. In this study we introduce the Subjective Enhanced Image Dataset (SEID), in which 15 observers were asked to enhance the quality of 30 reference images shown to them once at low contrast and once at high contrast. Observers were instructed to enhance the quality of the images to the point where any further enhancement would result in a drop in image quality. Results show that observers agree on when over-enhancement occurs, and that this point is nearly the same whether the high-contrast or the low-contrast version of an image is enhanced.
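A toy illustration (not from the paper) of one way over-enhancement manifests: pushing contrast too far clips pixels to the display range and destroys detail, and the clipped fraction is a crude proxy for having gone too far.

```python
# Linear contrast stretch about mid-grey with clamping to [0, 1].
# The returned clipped fraction is a rough over-enhancement indicator.
# Pixel values and gains below are hypothetical.

def stretch(pixels, gain):
    out = [min(1.0, max(0.0, 0.5 + gain * (p - 0.5))) for p in pixels]
    clipped = sum(1 for p in out if p in (0.0, 1.0)) / len(out)
    return out, clipped

img = [0.2, 0.35, 0.5, 0.65, 0.8]
mild, clip_mild = stretch(img, 1.2)      # no pixel reaches the limits
strong, clip_strong = stretch(img, 4.0)  # 4 of 5 pixels clip
```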


2021, Vol 2021 (29), pp. 136-140
Author(s): Dorukalp Durmus

The quality of building electric lighting systems can be assessed using color rendition metrics. However, such metrics are limited in quantifying tunable solid-state light sources, since tunable lighting systems can generate a vast number of different white light spectra, providing flexibility in terms of color quality and energy efficiency. Previous research suggests that color rendition is multi-dimensional in nature and cannot be reduced to a single number. Color shifts under a test light source relative to a reference illuminant, changes in color gamut, and color discrimination are important dimensions of the quality of electric light sources that are not captured by a single-number metric. To address the challenges in characterizing the color rendition of modern solid-state light sources, the development of a multi-dimensional color rendition space is proposed. The proposed continuous measure can quantify, with caveats, the change in the color rendition ability of tunable solid-state lighting devices. Future work, the discretization of the continuous color rendition space, will address the shortcomings of a continuous three-dimensional space.
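The color-shift dimension mentioned above reduces to per-sample distances in a roughly uniform color space. A minimal sketch on hypothetical Lab-like coordinates (real rendition indices use many samples and a specific color space):

```python
# Per-sample colour shift between a reference illuminant and a test source,
# measured as Euclidean distance in a Lab-like space. Coordinates are made up.

def delta_e(lab1, lab2):
    return sum((a - b) ** 2 for a, b in zip(lab1, lab2)) ** 0.5

reference = [[50, 10, 20], [60, -15, 5], [70, 0, -30]]  # samples under reference
test      = [[51, 12, 18], [58, -14, 7], [70, 3, -28]]  # same samples under test

shifts = [delta_e(r, t) for r, t in zip(reference, test)]
mean_shift = sum(shifts) / len(shifts)  # the kind of average a single-number
                                        # metric collapses the shifts into
```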


2021, Vol 2021 (29), pp. 317-322
Author(s): Gregory High, Peter Nussbaum, Phil Green

Images reproduced on different output devices are limited in the range of colours they can reproduce. It is accepted that reproductions made with different print processes, and on different substrates, will not match, although the overall reproduction appearance can be optimized using an output rendering. However, the question remains: how different are they visually? This paper reports on a pilot study that tests whether visual difference can be reduced to a one-dimensional scale using magnitude estimation. Owing to recent Covid restrictions, the experiment was moved from the lab to an online delivery. We compare the two methods of delivery: in-person under controlled viewing conditions, and online via a web-based interface where viewing conditions are unknown.
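Magnitude estimation data is commonly reduced to a single scale by normalizing each observer's estimates by their geometric mean before averaging, which removes each observer's arbitrary choice of number range. A sketch of this standard analysis on made-up numbers (the paper's exact analysis may differ):

```python
import math

# Reduce per-observer magnitude estimates to one scale: divide each
# observer's estimates by their geometric mean, then average per stimulus.

def geometric_mean(xs):
    return math.exp(sum(math.log(x) for x in xs) / len(xs))

def scale_values(observers):
    # observers: one list of magnitude estimates per observer,
    # all over the same stimuli in the same order
    normalized = []
    for est in observers:
        g = geometric_mean(est)
        normalized.append([e / g for e in est])
    n_stim = len(observers[0])
    return [sum(o[i] for o in normalized) / len(normalized)
            for i in range(n_stim)]

# Two hypothetical observers using different number ranges but agreeing
# on the ratios between three stimuli.
obs = [[10.0, 20.0, 40.0], [5.0, 10.0, 20.0]]
scale = scale_values(obs)
```

Because both observers report the same ratios, the normalization maps them onto an identical scale.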


2021, Vol 2021 (29), pp. 1-6
Author(s): Yuteng Zhu, Graham D. Finlayson

Previously, the color accuracy of a given digital camera was improved by carefully designing the spectral transmittance of a color filter to be placed in front of the camera. Specifically, the filter is designed so that the spectral sensitivities of the camera after filtering are approximately linearly related to the color matching functions (or tristimulus values) of the human visual system. To avoid filters that absorb too much light, the optimization could incorporate a minimum per-wavelength transmittance constraint. In this paper, we change the optimization so that the overall filter transmittance is bounded, i.e. we solve for the filter that (for a uniform white light) transmits, say, 50% of the light. Experiments demonstrate that these filters continue to solve the color correction problem (they make cameras much more colorimetric). Significantly, the filters optimized under the average-transmittance constraint deliver a further 10% improvement in color accuracy compared to the prior art of bounding the minimum transmittance.
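One way to sketch this kind of filter design is alternating least squares over the per-wavelength transmittance f and a 3x3 correction matrix, with the average-transmittance bound imposed here by a simple rescaling. This is a simplification of the paper's constrained optimization, and all matrices below are small synthetic stand-ins, not real sensitivities or color matching functions.

```python
import numpy as np

# Alternating least squares: find transmittance f so that the filtered
# camera sensitivities diag(f) @ S are, after a 3x3 correction M, close to
# the colour matching functions X. Synthetic data throughout.

rng = np.random.default_rng(1)
n = 8                                  # number of wavelength samples
S = rng.uniform(0.1, 1.0, (n, 3))      # camera sensitivities (synthetic)
X = rng.uniform(0.1, 1.0, (n, 3))      # colour matching functions (synthetic)

f = np.ones(n)
for _ in range(50):
    # Best 3x3 correction for the current filter.
    M, *_ = np.linalg.lstsq(f[:, None] * S, X, rcond=None)
    # Best per-wavelength transmittance for the current correction:
    # minimize || f_i * (S @ M)_i - X_i ||^2 row by row.
    SM = S @ M
    f = (SM * X).sum(axis=1) / (SM * SM).sum(axis=1)
    f = np.clip(f, 0.0, None)          # transmittance cannot be negative

# Impose the average-transmittance bound (mean transmittance = 0.5);
# the fit residual is unchanged because M can absorb a global scale.
f = 0.5 * f / f.mean()
```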


2021, Vol 2021 (29), pp. 381-386
Author(s): Xu Qiang, Muhammad Safdar, Ming Ronnier Luo

Two uniform colour spaces (UCSs) based on colour appearance models, CAM16-UCS and ZCAM-QMh, were tested using the HDR, WCG, and COMBVD datasets. For comparison, two widely used UCSs, CIELAB and ICTCP, were also tested. The STRESS metric and the correlation coefficient between predicted colour differences and visual differences, together with local and global uniformity measures based on chromatic discrimination ellipses, were applied to assess the models' performance. The two UCSs give similar performance. A luminance parametric factor kL and a power factor γ were introduced to optimize the colour-difference models. Factors kL and γ of 0.75 and 0.5 gave a marked improvement in predicting the HDR dataset, and a kL of 0.3 gave a significant improvement on the WCG dataset. On the COMBVD dataset, optimization provided very limited improvement.
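The STRESS index used in such tests has a standard closed form: an optimal scaling factor F removes the arbitrary scale of the predicted differences, and lower values indicate better agreement. A minimal sketch on hypothetical difference data:

```python
# STRESS (STandardized REsidual Sum of Squares) between predicted colour
# differences dE and visual differences dV. Lower is better; a perfectly
# proportional prediction scores 0. Example data is made up.

def stress(de, dv):
    F = sum(e * e for e in de) / sum(e * v for e, v in zip(de, dv))
    num = sum((e - F * v) ** 2 for e, v in zip(de, dv))
    den = sum((F * v) ** 2 for v in dv)
    return 100.0 * (num / den) ** 0.5

perfect = stress([2.0, 4.0, 6.0], [1.0, 2.0, 3.0])    # proportional -> 0
imperfect = stress([2.0, 4.0, 6.0], [1.0, 3.0, 2.0])  # disagreement -> > 0
```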

