illumination change
Recently Published Documents


TOTAL DOCUMENTS

108
(FIVE YEARS 17)

H-INDEX

13
(FIVE YEARS 2)

Author(s):  
Yucheng Wang ◽  
Xi Chen ◽  
Zhongjie Mao ◽  
Jia Yan

Previous research has shown that tracking algorithms that cannot capture long-range information tend to lose the object when it deforms, the illumination changes, or the background is disturbed by similar objects. To remedy this, this article proposes an object-tracking method that introduces a Global Context attention module into the Multi-Domain Network (MDNet) tracker. Through the Global Context attention module, the method learns a robust feature representation of the object, so the object can be better distinguished from the background in the presence of these interference factors. Extensive experiments on the OTB2013, OTB2015, and UAV20L datasets show that the proposed method improves significantly on MDNet and is competitive with more mainstream tracking algorithms. In particular, the proposed method achieves better results when the video sequence contains object deformation, illumination change, and background interference from similar objects.
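The context-fusion idea can be made concrete with a minimal NumPy sketch of a Global Context attention block: a single spatial-attention head pools the feature map into one context vector, a small bottleneck transforms it, and the result is broadcast-added back to every position. The plain-array interface and weight shapes below are illustrative assumptions, not MDNet's actual implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def global_context_block(x, w_k, w1, w2):
    """x: (C, H, W) feature map; w_k: (1, C); w1: (C_mid, C); w2: (C, C_mid)."""
    C, H, W = x.shape
    flat = x.reshape(C, H * W)
    attn = softmax(w_k @ flat, axis=-1)      # (1, H*W) spatial attention weights
    context = flat @ attn.ravel()            # (C,) global context vector
    hidden = np.maximum(w1 @ context, 0.0)   # bottleneck transform + ReLU
    delta = w2 @ hidden                      # (C,) channel-wise correction
    return x + delta[:, None, None]          # same correction broadcast everywhere
```

Because the correction is pooled over all spatial positions, every location receives information from the whole frame, which is the "long-range" cue the abstract says plain trackers lack.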


2021 ◽  
Author(s):  
ANDO Shizutoshi

Deep facial recognition (FR) has reached very high accuracy on various demanding datasets and encourages successful real-world applications, even demonstrating strong tolerance to illumination change, which is commonly viewed as a major danger to FR systems. In the real world, however, the illumination variance produced by the variety of lighting situations cannot be adequately captured by limited facsimiles. To this end, we first propose the physical model-based adversarial relighting attack (ARA), denoted the albedo-quotient-based adversarial relighting attack (AQ-ARA). It generates natural adversarial light under the physical lighting model and the guidance of FR systems, and synthesizes adversarially relighted face images. Moreover, we propose the auto-predictive adversarial relighting attack (AP-ARA), which trains an adversarial relighting network (ARNet) to automatically predict the adversarial light in a one-step manner according to different input faces, enabling efficiency-sensitive applications. More importantly, we propose to transfer the above digital attacks to a physical ARA (Phy-ARA) through a precise relighting device, making the estimated adversarial lighting condition reproducible in the real world. We validate our methods on three state-of-the-art deep FR methods, i.e., FaceNet, ArcFace, and CosFace, on two public datasets. The extensive and insightful results demonstrate that our work can generate realistic adversarial relighted face images that easily fool FR, revealing the threat posed by specific light directions and strengths.
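As a rough illustration of the idea only (not the paper's AQ-ARA algorithm), the sketch below relights a toy Lambertian "face" and grid-searches for the light direction that most displaces a stand-in embedding. The albedo, normals, and linear embedding matrix are all random placeholders standing in for a real face model and FR network:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-ins (the paper uses real FR networks and a physical lighting model):
albedo = rng.uniform(0.2, 1.0, size=(16, 16))        # per-pixel reflectance
normals = rng.normal(size=(16, 16, 3))
normals /= np.linalg.norm(normals, axis=-1, keepdims=True)
embed_W = rng.normal(size=(8, 256))                  # linear "FR embedding"

def relight(light):                                  # Lambertian shading
    shade = np.clip(normals @ light, 0.0, None)
    return albedo * shade

def embed(img):
    return embed_W @ img.ravel()

ref = embed(relight(np.array([0.0, 0.0, 1.0])))      # frontally lit reference

# Grid-search the light direction that pushes the embedding furthest from ref.
best_light, best_gap = None, -1.0
for theta in np.linspace(0, np.pi, 16):
    for phi in np.linspace(0, 2 * np.pi, 16):
        l = np.array([np.sin(theta) * np.cos(phi),
                      np.sin(theta) * np.sin(phi),
                      np.cos(theta)])
        gap = np.linalg.norm(embed(relight(l)) - ref)
        if gap > best_gap:
            best_gap, best_light = gap, l
```

The paper replaces this brute-force search with gradient guidance from the FR system (AQ-ARA) or a one-step predictor network (AP-ARA), but the objective, finding a physically valid light that maximally perturbs the embedding, is the same.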


Author(s):  
Mubariz Zaffar ◽  
Sourav Garg ◽  
Michael Milford ◽  
Julian Kooij ◽  
David Flynn ◽  
...  

Abstract. Visual place recognition (VPR) is the process of recognising a previously visited place using visual information, often under varying appearance conditions and viewpoint changes and with computational constraints. VPR is related to the concepts of localisation, loop closure, and image retrieval, and is a critical component of many autonomous navigation systems ranging from autonomous vehicles to drones and computer vision systems. While the concept of place recognition has been around for many years, VPR research has grown rapidly as a field over the past decade due to improving camera hardware and its potential for deep learning-based techniques, and has become a widely studied topic in both the computer vision and robotics communities. This growth however has led to fragmentation and a lack of standardisation in the field, especially concerning performance evaluation. Moreover, the notion of viewpoint and illumination invariance of VPR techniques has largely been assessed qualitatively and hence ambiguously in the past. In this paper, we address these gaps through a new comprehensive open-source framework for assessing the performance of VPR techniques, dubbed "VPR-Bench". VPR-Bench (open-sourced at: https://github.com/MubarizZaffar/VPR-Bench) introduces two much-needed capabilities for VPR researchers: firstly, it contains a benchmark of 12 fully-integrated datasets and 10 VPR techniques, and secondly, it integrates a comprehensive variation-quantified dataset for quantifying viewpoint and illumination invariance. We apply and analyse popular evaluation metrics for VPR from both the computer vision and robotics communities, and discuss how these different metrics complement and/or replace each other, depending upon the underlying applications and system requirements.
Our analysis reveals that no universal SOTA VPR technique exists, since (a) state-of-the-art (SOTA) performance is achieved by 8 of the 10 techniques on at least one dataset, and (b) the SOTA technique in one community does not necessarily yield SOTA performance in the other, given the differences in datasets and metrics. Furthermore, we identify key open challenges: (c) all 10 techniques suffer greatly in perceptually-aliased and less-structured environments, (d) all techniques suffer from viewpoint variance, where lateral change has less effect than 3D change, and (e) directional illumination change has more adverse effects on matching confidence than uniform illumination change. We also present detailed meta-analyses regarding the roles of varying ground truths, platforms, application requirements and technique parameters. Finally, VPR-Bench provides a unified implementation to deploy these VPR techniques, metrics and datasets, and is extensible through templates.
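For readers implementing their own evaluation, the Recall@N metric common in the robotics VPR literature can be computed as below. The similarity-matrix input format is an assumption of this sketch, not VPR-Bench's actual API:

```python
import numpy as np

def recall_at_n(similarity, ground_truth, n=1):
    """Fraction of queries whose top-n retrieved references contain a correct match.

    similarity:   (Q, R) array of query-to-reference matching scores
    ground_truth: per-query sets of correct reference indices
    """
    hits = 0
    for q, gt in enumerate(ground_truth):
        top_n = np.argsort(similarity[q])[::-1][:n]   # n highest-scoring references
        if gt & set(top_n.tolist()):
            hits += 1
    return hits / len(ground_truth)
```

For example, with `similarity = np.array([[0.9, 0.1], [0.2, 0.8]])` and `ground_truth = [{0}, {0}]`, `recall_at_n(similarity, ground_truth, 1)` returns `0.5`: the first query retrieves its correct reference at rank 1, the second does not.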


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
David H. Foster

Abstract. Small changes in daylight in the environment can produce large changes in reflected light, even over short intervals of time. Do these changes limit the visual recognition of surfaces by their colour? To address this question, information-theoretic methods were used to estimate computationally the maximum number of surfaces in a sample that can be identified as the same after an interval. Scene data were taken from successive hyperspectral radiance images. With no illumination change, the average number of surfaces distinguishable by colour was of the order of 10,000. But with an illumination change, the average number still identifiable declined rapidly with the duration of the change. In one condition, the number after 2 min was around 600, after 10 min around 200, and after an hour around 70. These limits on identification are much lower than with spectral changes in daylight. No recoding of the colour signal is likely to recover surface identity lost in this uncertain environment.
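A much simpler nearest-neighbour version of this identification test can be sketched as follows. It is not the paper's information-theoretic estimator, and the synthetic colours and drift magnitude below stand in for real hyperspectral scene data; a surface counts as still identifiable if its shifted colour remains closest to its own original colour:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
colors_before = rng.uniform(size=(n, 3))        # surface colours at time t0
shift = rng.normal(scale=0.02, size=(n, 3))     # illumination-induced colour drift
colors_after = colors_before + shift

# Distance from every shifted colour to every original colour.
d = np.linalg.norm(colors_after[:, None, :] - colors_before[None, :, :], axis=-1)

# A surface is identified if its own original colour is the nearest neighbour.
identified = (d.argmin(axis=1) == np.arange(n)).sum()
fraction = identified / n
```

Growing the drift scale makes `fraction` fall, mirroring the abstract's decline in identifiable surfaces as the interval between images lengthens.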


Author(s):  
S. May

Abstract. Partition-based clustering techniques are widely used in data mining and for analysing hyperspectral images. Unsupervised clustering depends only on the data, without external knowledge, and creates a complete partition of the image into many classes; sparse labelled samples can then be used to label each cluster, simplifying the supervised step. Each clustering algorithm has its own advantages and drawbacks (initialization, training complexity). In this paper we propose a recursive hierarchical clustering based on standard clustering strategies such as K-Means or Fuzzy C-Means. The recursive hierarchical approach reduces the algorithm's complexity, allowing large numbers of input pixels to be processed and a clustering with a high number of clusters to be produced. Moreover, a classical question in hyperspectral imagery concerns the high dimensionality and the choice of distance. Classical clustering algorithms usually use the Euclidean distance between samples and centroids. We propose to implement the spectral angle distance instead and evaluate its performance: it better fits the pixel spectra and is less sensitive to illumination change or spectral variability within a semantic class. Different scenes are processed with this method in order to demonstrate its potential.
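The spectral angle distance, and a K-Means variant that assigns pixels by that angle, can be sketched as below. The flat (pixels × bands) layout and first-k initialization are simplifications for illustration, and the paper's recursive hierarchical scheme is not reproduced here:

```python
import numpy as np

def spectral_angle(a, b):
    """Angle (radians) between two spectra; invariant to a global scaling
    of either spectrum, e.g. a uniform illumination change."""
    cos = (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def sam_kmeans(pixels, k, iters=10):
    """K-Means over (n_pixels, n_bands) spectra using the spectral angle
    in place of the Euclidean distance."""
    centroids = pixels[:k].astype(float)       # simple deterministic init
    labels = np.zeros(len(pixels), dtype=int)
    for _ in range(iters):
        angles = np.array([[spectral_angle(p, c) for c in centroids]
                           for p in pixels])
        labels = angles.argmin(axis=1)         # assign by smallest angle
        for j in range(k):
            members = pixels[labels == j]
            if len(members):
                centroids[j] = members.mean(axis=0)
    return labels, centroids
```

Because the angle ignores vector length, a pixel and a uniformly brightened copy of it always land in the same cluster, which is exactly the illumination insensitivity the abstract claims for this distance.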


2020 ◽  
Vol 117 (18) ◽  
pp. 9762-9770 ◽  
Author(s):  
Kevin Korner ◽  
Alexa S. Kuenstler ◽  
Ryan C. Hayward ◽  
Basile Audoly ◽  
Kaushik Bhattacharya

Actuation remains a significant challenge in soft robotics. Actuation by light has important advantages: Objects can be actuated from a distance, distinct frequencies can be used to actuate and control distinct modes with minimal interference, and significant power can be transmitted over long distances through corrosion-free, lightweight fiber optic cables. Photochemical processes that directly convert photons to configurational changes are particularly attractive for actuation. Various works have reported light-induced actuation with liquid crystal elastomers combined with azobenzene photochromes. We present a simple modeling framework and a series of examples that study actuation by light. Of particular interest is the generation of cyclic or periodic motion under steady illumination. We show that this emerges as a result of a coupling between light absorption and deformation. As the structure absorbs light and deforms, the conditions of illumination change, and this, in turn, changes the nature of further deformation. This coupling can be exploited in either closed structures or with structural instabilities to generate cyclic motion.
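The absorption-deformation coupling can be made concrete with a deliberately minimal toy model (all symbols and parameter values below are illustrative, not the paper's framework): a flap absorbs light in proportion to its projected area, the absorbed light drives a photostrain, and the strain re-orients the flap, changing what it absorbs. Sustained cyclic motion in the paper additionally requires a closed structure or an instability, which this single-variable sketch omits, so here the feedback simply settles to a self-consistent state:

```python
import numpy as np

alpha, tau, coupling = 1.0, 1.0, 2.0   # light-to-strain gain, relaxation time, feedback strength
theta0, dt, steps = 0.3, 0.01, 5000    # rest angle, Euler step, number of steps

eps = 0.0                              # photostrain
history = []
for _ in range(steps):
    theta = theta0 + coupling * eps              # deformation re-orients the flap
    absorbed = max(np.cos(theta), 0.0)           # projected-area light absorption
    eps += dt * (alpha * absorbed - eps) / tau   # strain relaxes toward light-set value
    history.append(eps)
```

The key structural feature is the loop: `absorbed` depends on `theta`, `theta` depends on `eps`, and `eps` is driven by `absorbed`, which is the coupling the abstract identifies as the source of cyclic motion in richer geometries.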


Visual tracking is one of the most challenging fields within the scope of computer vision, and occlusion, full or partial, remains a major milestone to overcome. This paper deals with occlusion along with illumination change, pose variation, scaling, and unexpected camera motion. The algorithm is interest-point based, using SURF as the detector-descriptor algorithm. A SURF-based Mean-Shift algorithm is combined with a Lucas-Kanade tracker, which solves the problem of generating online templates. Over time, the two trackers rectify each other, avoiding tracking failure. In addition, an Unscented Kalman Filter is used to predict the location of the target when it comes under the influence of any of the challenges mentioned above. This combination makes the algorithm robust and useful for long-term tracking, as shown by the results of experiments conducted on various datasets.
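The prediction step can be sketched with a Kalman filter that coasts on a constant-velocity motion model while measurements are missing. The paper uses an Unscented Kalman Filter, but with the linear motion model assumed here a standard linear filter behaves identically, and all noise settings below are illustrative:

```python
import numpy as np

dt = 1.0
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], float)   # state: [x, y, vx, vy], constant velocity
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], float)    # only position is measured
Q = 0.01 * np.eye(4)                   # process noise
R = 0.5 * np.eye(2)                    # measurement noise

x = np.zeros(4)
P = np.eye(4)

def step(measurement=None):
    """One predict(+update) cycle; pass None while the target is occluded."""
    global x, P
    x = F @ x                          # predict
    P = F @ P @ F.T + Q
    if measurement is not None:        # update only when the target is visible
        y = measurement - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(4) - K @ H) @ P
    return x[:2]

# Track a target moving 1 px/frame in x, then lose it for 5 frames.
for t in range(1, 11):
    step(np.array([float(t), 0.0]))
for _ in range(5):
    predicted = step(None)             # coast on the learned velocity
```

During the occluded frames the filter keeps extrapolating the target's position from its velocity estimate, which is how the tracker can reacquire the object once it reappears.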

