London Imaging Meeting

Published By Society For Imaging Science & Technology

ISSN 2694-118X

2021 ◽  
Vol 2021 (1) ◽  
pp. 68-72
Author(s):  
Ghalia Hemrit ◽  
Joseph Meehan

The aim of colour constancy is to discount the effect of the scene illumination from the image colours and restore the colours of the objects as captured under a ‘white’ illuminant. For most colour constancy methods, the first step is to estimate the scene illuminant colour, and it is generally assumed that the illumination is uniform across the scene. However, real-world scenes often contain multiple illuminants, such as sunlight and spot lights together in one scene. In this paper we present a simple yet very effective framework that uses a deep CNN-based method to estimate and use multiple illuminants for colour constancy. Our approach works well in both the multi- and single-illuminant cases. The output of the CNN is a region-wise illuminant estimate map of the scene, which is smoothed and divided out from the image to perform colour constancy. The proposed method outperforms other recent and state-of-the-art methods and produces promising visual results.
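The correction step described in this abstract (smooth the region-wise illuminant estimate map, then divide it out of the image) can be sketched in a few lines. The box-filter smoothing and the renormalisation below are illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np

def smooth_map(illum_map, k=2):
    """Box-filter smoothing of the region-wise illuminant estimate map
    (a stand-in for whatever smoothing the paper actually applies)."""
    h, w, c = illum_map.shape
    out = np.empty_like(illum_map)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - k), min(h, y + k + 1)
            x0, x1 = max(0, x - k), min(w, x + k + 1)
            out[y, x] = illum_map[y0:y1, x0:x1].reshape(-1, c).mean(axis=0)
    return out

def apply_illuminant_map(image, illum_map, eps=1e-6):
    """Divide the per-pixel illuminant estimate out of the image, then
    renormalise -- the correction step described in the abstract."""
    corrected = image / (illum_map + eps)
    return np.clip(corrected / corrected.max(), 0.0, 1.0)
```

A grey surface photographed under a coloured illuminant becomes achromatic again after the division, which is the intended behaviour of the correction.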


2021 ◽  
Vol 2021 (1) ◽  
pp. 78-82
Author(s):  
Pak Hung Chan ◽  
Georgina Souvalioti ◽  
Anthony Huggett ◽  
Graham Kirsch ◽  
Valentina Donzella

Video compression in automated vehicles and advanced driving assistance systems is of utmost importance: the sensor suite needed to support robust situational awareness generates a vast amount of video data per second, all of which must be transmitted and processed. The objective of this paper is to demonstrate that video compression can be optimised based on the perception system that will utilise the data. We have considered the deployment of deep neural networks to implement object (i.e. vehicle) detection based on compressed video camera data extracted from the KITTI MoSeg dataset. Preliminary results indicate that re-training the neural network with M-JPEG compressed videos can improve the detection performance on both compressed and uncompressed transmitted data, improving recall and precision by up to 4% with respect to re-training with uncompressed data.
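As a toy illustration of the kind of degradation M-JPEG introduces (per-frame, block-wise DCT followed by coarse quantisation of the coefficients), the sketch below round-trips one 8x8 luma block; the single scalar quantiser is an assumption standing in for a real JPEG quantisation table:

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix, as used in JPEG."""
    k = np.arange(n)
    m = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    m[0] /= np.sqrt(2)
    return m * np.sqrt(2 / n)

def jpeg_like(block, q=40):
    """Lossy round-trip of one 8x8 block: DCT, coarse quantisation,
    inverse DCT. A coarser q means stronger compression."""
    d = dct_matrix()
    coeffs = d @ (block - 128.0) @ d.T
    coeffs = np.round(coeffs / q) * q
    return np.clip(d.T @ coeffs @ d + 128.0, 0, 255)
```

Re-training a detector on frames degraded this way is, in spirit, what the abstract reports: the network adapts to the compression artefacts it will see at inference time.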


2021 ◽  
Vol 2021 (1) ◽  
pp. 5-10
Author(s):  
Chahine Nicolas ◽  
Belkarfa Salim

In this paper, we propose a novel and standardized approach to the problem of camera-quality assessment on portrait scenes. Our goal is to evaluate the capacity of smartphone front cameras to preserve texture details on faces. We introduce a new portrait setup and an automated texture measurement. The setup includes two custom-built lifelike mannequin heads, shot in a controlled lab environment. The automated texture measurement consists of a region-of-interest (ROI) detection and a deep neural network. To this aim, we create a realistic mannequin database, which contains images from different cameras, shot under several lighting conditions. The ground truth is based on a novel pairwise comparison methodology where the scores are expressed in terms of Just-Noticeable Differences (JND). In terms of methodology, we propose a multi-scale CNN architecture with random-crop augmentation, to overcome overfitting and to extract low-level features. We validate our approach by comparing its performance with several baselines inspired by the Image Quality Assessment (IQA) literature.
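The multi-scale, random-crop input pipeline described above can be sketched as follows; the crop size, the scale factors, and block-average downsampling are assumptions for illustration, not the paper's exact choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_crop(img, size):
    """Random square crop -- the augmentation used against overfitting."""
    h, w = img.shape[:2]
    y = rng.integers(0, h - size + 1)
    x = rng.integers(0, w - size + 1)
    return img[y:y + size, x:x + size]

def multi_scale_crops(img, size=32, scales=(1, 2, 4)):
    """Crops of the same face image at several scales, all resampled to a
    common input size (block averaging stands in for proper resizing)."""
    crops = []
    for s in scales:
        c = random_crop(img, size * s)
        crops.append(c.reshape(size, s, size, s, -1).mean(axis=(1, 3)))
    return crops
```

Each branch of a multi-scale CNN would then receive one of these same-sized crops, so the network sees texture at several spatial frequencies.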


2021 ◽  
Vol 2021 (1) ◽  
pp. 43-48
Author(s):  
Mekides Assefa Abebe

Exposure problems, due to standard camera sensor limitations, often lead to image quality degradations such as loss of detail and changes in color appearance. These degradations further hinder the performance of imaging and computer vision applications. Therefore, the reconstruction and enhancement of under- and over-exposed images is essential for various applications. Accordingly, an increasing number of conventional and deep learning reconstruction approaches have been introduced in recent years. Most conventional methods follow the color imaging pipeline, which strongly emphasizes the accuracy of the reconstructed color and content. Deep learning (DL) approaches have conversely shown a stronger capability for recovering lost details. However, the design of most DL architectures and objective functions does not take color fidelity into consideration, and hence an analysis of existing DL methods with respect to color and content fidelity is pertinent. Accordingly, this work presents a performance evaluation of recent DL-based over-exposure reconstruction solutions. For the evaluation, various datasets from related research domains were merged, and two generative adversarial network (GAN) based models were additionally adopted for the tone mapping application scenario. Overall, the results show various limitations, mainly for severely over-exposed content, and a promising potential for DL approaches, GANs in particular, to reconstruct details and appearance.


2021 ◽  
Vol 2021 (1) ◽  
pp. 16-20
Author(s):  
Apostolia Tsirikoglou ◽  
Marcus Gladh ◽  
Daniel Sahlin ◽  
Gabriel Eilertsen ◽  
Jonas Unger

This paper presents an evaluation of how data augmentation and inter-class transformations can be used to synthesize training data in low-data scenarios for single-image weather classification. In such scenarios, augmentation is a critical component, but there is a limit to how much improvement can be gained using classical augmentation strategies. Generative adversarial networks (GANs) have been demonstrated to generate impressive results, and have also been successful as a tool for data augmentation, but mostly for images of limited diversity, such as in medical applications. We investigate the possibility of using generative augmentations to balance a small weather classification dataset in which one class has a reduced number of images. We compare intra-class augmentations, by means of classical transformations as well as noise-to-image GANs, to inter-class augmentations, where images from another class are transformed to the underrepresented class. The results show that it is possible to take advantage of GANs for inter-class augmentations to balance a small dataset for weather classification. This opens up future work on GAN-based augmentations in scenarios where data is both diverse and scarce.


2021 ◽  
Vol 2021 (1) ◽  
pp. 63-67
Author(s):  
Simone Bianco ◽  
Marco Buzzelli

In this article we show the paradigm shift that has occurred in color constancy algorithms: from a pre-processing step in image understanding, to the exploitation of image understanding and computer vision results and techniques. Since color constancy is an ill-posed problem, we give an overview of the assumptions on which classical color constancy algorithms are based in order to solve it. Then, we chronologically review the color constancy algorithms that exploit results and techniques borrowed from the image understanding research field, in order to rely on assumptions that can be met in a larger number of images.


2021 ◽  
Vol 2021 (1) ◽  
pp. 21-26
Author(s):  
Abderrezzaq Sendjasni ◽  
Mohamed-Chaker Larabi ◽  
Faouzi Alaya Cheikh

360-degree image quality assessment (IQA) faces the major challenge of a lack of ground-truth databases. This problem is accentuated for deep learning based approaches, whose performance is only as good as the available data. In this context, only two databases are available to train and validate deep learning-based IQA models. To compensate for this lack, a data-augmentation technique is investigated in this paper. We use visual scan-paths to increase the number of learning examples obtained from existing training data. Multiple scan-paths are predicted to account for the diversity of human observers. These scan-paths are then used to select viewports from the spherical representation. The results of the data-augmentation training scheme show an improvement over not using it. We also try to answer the question of whether the MOS obtained for the 360-degree image can be used as the quality anchor for the whole set of extracted viewports, in comparison to 2D blind quality metrics. The comparison showed the superiority of using the MOS when adopting patch-based learning.
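The augmentation idea above (extract viewports along predicted scan-paths, anchor each one with the whole image's MOS) can be sketched as below. A square pixel crop on the equirectangular image stands in for proper spherical viewport rendering, and the fixation format is an assumption:

```python
import numpy as np

def viewport_centers(scanpath, width, height):
    """Map (longitude, latitude) fixations in degrees to pixel centres
    on the equirectangular image."""
    pts = []
    for lon, lat in scanpath:
        x = int((lon + 180.0) / 360.0 * (width - 1))
        y = int((90.0 - lat) / 180.0 * (height - 1))
        pts.append((x, y))
    return pts

def extract_viewports(img, scanpath, mos, size=64):
    """Crop a patch around each fixation; every patch inherits the whole
    image's MOS as its quality label (the anchoring strategy the paper
    compares against 2D blind metrics)."""
    h, w = img.shape[:2]
    out = []
    for x, y in viewport_centers(scanpath, w, h):
        x0 = int(np.clip(x - size // 2, 0, w - size))
        y0 = int(np.clip(y - size // 2, 0, h - size))
        out.append((img[y0:y0 + size, x0:x0 + size], mos))
    return out
```

Predicting several scan-paths per image multiplies the number of labelled patches, which is the augmentation effect the abstract reports.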


2021 ◽  
Vol 2021 (1) ◽  
pp. 88-92
Author(s):  
Oliver van Zwanenberg ◽  
Sophie Triantaphillidou ◽  
Robin B. Jenkin ◽  
Alexandra Psarrou

The edge-based Spatial Frequency Response (e-SFR) is an established measure of camera system quality performance, traditionally measured under laboratory conditions. With the increasing use of Deep Neural Networks (DNNs) in autonomous vision systems, the input signal quality becomes crucial for optimal operation. This paper proposes a method to estimate the system e-SFR from pictorial natural scene derived SFRs (NS-SFRs), as previously presented, laying the foundation for adapting the traditional method to a real-time measure. In this study, the NS-SFR input parameter variations are first investigated to establish suitable ranges that give a stable estimate. Using the NS-SFR framework with the established parameter ranges, the system e-SFR, as per ISO 12233, is estimated. Initial validation of results is obtained by implementing the measuring framework with images from a linear and a non-linear camera system. For the linear system, results closely approximate the ISO 12233 e-SFR measurement. Non-linear system measurements exhibit the scene-dependent characteristics expected from edge-based methods. The requirements to implement this method in real-time for autonomous systems are then discussed.
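At the core of any e-SFR computation, whether from a lab chart or a natural-scene edge, is the same pipeline: differentiate the edge-spread function (ESF) to obtain the line-spread function, window it, and take the normalised magnitude of its Fourier transform. A minimal sketch, omitting the edge-angle estimation and 4x oversampled binning that ISO 12233 specifies:

```python
import numpy as np

def esf_to_sfr(esf):
    """Convert an edge-spread function to a spatial frequency response:
    differentiate to the line-spread function, apply a Hann window to
    reduce spectral leakage, and normalise the FFT magnitude to 1 at DC."""
    lsf = np.diff(esf)
    lsf = lsf * np.hanning(len(lsf))
    sfr = np.abs(np.fft.rfft(lsf))
    return sfr / sfr[0]
```

An ideal step edge yields a flat SFR of 1 at all frequencies; any blur in the edge profile pulls the high-frequency response down, which is what the measure quantifies.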


2021 ◽  
Vol 2021 (1) ◽  
pp. 1-4
Author(s):  
Seyed Ali Amirshahi

Quality assessment of images plays an important role in different applications in image processing and computer vision. While subjective quality assessment of images is the most accurate approach, due to its practical limitations objective quality metrics have been the go-to approach. Until recently, most such metrics took advantage of different handcrafted features. Similar to other applications in image processing and computer vision, though at a slower pace, different machine learning techniques, more specifically Convolutional Neural Networks (CNNs), have been introduced in different tasks related to image quality assessment. In this short paper, which is a supplement to a focal talk given under the same title at the London Imaging Meeting (LIM) 2021, we aim to provide a short timeline of how CNNs have been used in the field of image quality assessment so far, how the field could take advantage of CNNs to evaluate image quality, and what we expect will happen in the near future.


2021 ◽  
Vol 2021 (1) ◽  
pp. 49-53
Author(s):  
Mirko Agarla ◽  
Luigi Celona

Blind assessment of video quality is a widely covered topic in computer vision. In this work, we analyse how much the effectiveness of some current No-Reference VQA (NR-VQA) methods varies with respect to specific types of scenes. To this end, we automatically annotated the videos from two video quality datasets of user-generated videos whose content is unknown, and then estimated the correlation for the different categories of scenes. The results of the analysis highlight that the prediction errors are not equally distributed among the different categories of scenes, and indirectly suggest what next-generation NR-VQA methods should take into account and model.
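The per-category analysis described above can be sketched as follows: group predicted scores and MOS by scene category and compute a rank correlation within each group. The tie-unaware rank computation and the grouping scheme below are simplifications for illustration:

```python
import numpy as np

def spearman(a, b):
    """Spearman rank correlation via Pearson correlation of ranks.
    Note: this simple ranking does not average tied values."""
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    ra -= ra.mean()
    rb -= rb.mean()
    return float((ra * rb).sum() / np.sqrt((ra ** 2).sum() * (rb ** 2).sum()))

def per_category_correlation(pred, mos, categories):
    """SROCC between predicted quality and MOS, computed separately for
    each scene category -- the per-category breakdown in the abstract."""
    out = {}
    for c in set(categories):
        idx = [i for i, ci in enumerate(categories) if ci == c]
        out[c] = spearman(np.asarray(pred)[idx], np.asarray(mos)[idx])
    return out
```

Uneven correlations across the resulting dictionary are exactly the kind of scene-dependent behaviour the paper reports.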

