Reference set based appearance model for tracking across non-overlapping cameras

Author(s):  
Xiaojing Chen ◽  
Le An ◽  
Bir Bhanu


2019 ◽  
Vol 2019 (1) ◽  
pp. 320-325 ◽  
Author(s):  
Wenyu Bao ◽  
Minchen Wei

Great efforts have been made to develop color appearance models that predict the color appearance of stimuli under various viewing conditions. CIECAM02, the most widely used color appearance model, and many other color appearance models were developed from corresponding-color datasets, including the LUTCHI data. Although the effect of the adapting light level on color appearance, known as the "Hunt Effect", is well established, most corresponding-color datasets were collected within a limited range of light levels (i.e., below 700 cd/m²), much lower than those found under daylight. A recent study investigating color preference for an artwork under light levels from 20 to 15,000 lx suggested that existing color appearance models may not accurately characterize the color appearance of stimuli under extremely high light levels, based on the assumption that identical preference judgements imply identical color appearance. This article reports a psychophysical study designed to directly collect corresponding colors at two light levels, 100 and 3000 cd/m² (i.e., ≈ 314 and 9420 lx). Human observers completed haploscopic color matching for four color stimuli (red, green, blue, and yellow) under the two light levels at 2700 or 6500 K. Although the results supported the Hunt Effect, CIECAM02 was found to have large errors at the extremely high light level, especially when the CCT was low.
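For context, light level enters CIECAM02 through the luminance-level adaptation factor F_L, which grows with the adapting luminance L_A and drives predictions such as the Hunt Effect. A minimal sketch of that factor, assuming (hypothetically, since the abstract does not state it) that L_A is taken as 20% of the stimulus luminance:

```python
def luminance_adaptation_factor(L_A):
    """CIECAM02 luminance-level adaptation factor F_L."""
    k = 1.0 / (5.0 * L_A + 1.0)
    return (0.2 * k ** 4 * (5.0 * L_A)
            + 0.1 * (1.0 - k ** 4) ** 2 * (5.0 * L_A) ** (1.0 / 3.0))

# Adapting luminances for the two conditions in the study,
# under the assumed L_A = 0.2 * stimulus luminance convention.
for L_A in (20.0, 600.0):
    print(L_A, round(luminance_adaptation_factor(L_A), 3))
```

F_L increases monotonically with L_A, which is why model errors that are negligible at ordinary light levels can grow at the extreme levels tested here.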


2019 ◽  
Vol 2019 (1) ◽  
pp. 243-246
Author(s):  
Muhammad Safdar ◽  
Noémie Pozzera ◽  
Jon Yngve Hardeberg

A perceptual study was conducted to enhance colour image quality, in terms of naturalness and preference, using perceptual scales of saturation and vividness. The saturation scale has been used extensively for this purpose, while vividness has seen little use. We used the perceptual scales of a recently developed colour appearance model based on the Jzazbz uniform colour space. The study had a two-fold aim: (i) to test the performance of the recently developed perceptual scales of saturation and vividness against previously used hypothetical models, and (ii) to compare the two scales and choose between saturation and vividness for colour image enhancement in future work. Test images were first transformed into the Jzazbz colour space, and their saturation and vividness were then decreased or increased to obtain six different variants of each image. The categorical judgement method was used to rate the preference and naturalness of the different variants of the test images, and the results are reported.
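The kind of manipulation described, scaling colourfulness in an opponent space while holding lightness and hue fixed, can be sketched generically. The chroma-ratio scaling below is an illustrative assumption, not the paper's actual Jzazbz-based saturation or vividness scales, and the function name is hypothetical:

```python
import math

def scale_saturation(Jz, az, bz, factor):
    """Scale the chroma of one (Jz, az, bz)-like opponent-space colour by
    `factor`, keeping lightness Jz and hue angle fixed (generic sketch)."""
    Cz = math.hypot(az, bz)          # chroma
    hz = math.atan2(bz, az)          # hue angle
    Cz_new = Cz * factor
    return Jz, Cz_new * math.cos(hz), Cz_new * math.sin(hz)

# Boost saturation by 50% for one pixel value.
print(scale_saturation(0.5, 0.1, 0.2, 1.5))
```

Applying such a scaling with factors below and above 1.0 yields the decreased and increased variants judged by the observers.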


Heritage ◽  
2021 ◽  
Vol 4 (1) ◽  
pp. 188-197
Author(s):  
Dorukalp Durmus

Light causes damage when it is absorbed by sensitive artwork, such as oil paintings. However, light is needed to initiate vision and display artwork. The dilemma between visibility and damage, coupled with the inverse relationship between color quality and energy efficiency, poses a challenge for curators, conservators, and lighting designers in identifying optimal light sources. Multi-primary LEDs can provide great flexibility in terms of color quality, damage reduction, and energy efficiency for artwork illumination. However, there are no established metrics that quantify the output variability or highlight the trade-offs between different metrics. Here, various metrics related to museum lighting (damage, the color quality of paintings, illuminance, luminous efficacy of radiation) are analyzed using a voxelated 3-D volume. The continuous data in each dimension of the 3-D volume are converted to discrete data by identifying a significant minimum value (unit voxel). Resulting discretized 3-D volumes display the trade-offs between selected measures. It is possible to quantify the volume of the graph by summing unique voxels, which enables comparison of the performance of different light sources. The proposed representation model can be used for individual pigments or paintings with numerous pigments. The proposed method can be the foundation of a damage appearance model (DAM).
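The voxel-counting idea can be sketched generically: continuous metric triples (e.g., damage, colour quality, luminous efficacy) are binned into unit voxels, and the unique occupied voxels are summed to compare light sources. The function name and unit sizes below are hypothetical:

```python
def count_unique_voxels(points, unit):
    """Bin continuous metric triples into discrete voxels of the given
    unit sizes and count the unique occupied voxels (generic sketch)."""
    occupied = {tuple(int(value // size) for value, size in zip(p, unit))
                for p in points}
    return len(occupied)

# Two nearby operating points share a voxel; a distant one does not.
points = [(0.10, 0.10, 0.10), (0.15, 0.12, 0.11), (1.20, 0.10, 0.10)]
print(count_unique_voxels(points, unit=(0.5, 0.5, 0.5)))  # prints 2
```

Choosing the unit voxel as the smallest perceptually or practically significant step is what turns the continuous 3-D volume into a comparable discrete score.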


Author(s):  
Hongyang Yu ◽  
Guorong Li ◽  
Weigang Zhang ◽  
Hongxun Yao ◽  
Qingming Huang

2014 ◽  
Vol 2014 ◽  
pp. 1-12 ◽  
Author(s):  
Hong-Min Zhu ◽  
Chi-Man Pun

We propose an adaptive and robust superpixel-based hand gesture tracking system, in which hand gestures drawn in free air are recognized from their motion trajectories. First, we employ motion detection of superpixels and unsupervised image segmentation to detect the moving target hand using the first few frames of the input video sequence. The hand appearance model is then constructed from its surrounding superpixels. By incorporating failure recovery and template matching into the tracking process, the target hand is tracked by an adaptive superpixel-based tracking algorithm, in which hand deformation, view-dependent appearance variation, fast motion, and background confusion can be handled well to extract the correct hand motion trajectory. Finally, the hand gesture is recognized from the extracted motion trajectory with a trained SVM classifier. Experimental results show that our proposed system achieves better performance than existing state-of-the-art methods, with recognition accuracies of 99.17% on the easy set and 98.57% on the hard set.
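Before an SVM can classify a trajectory, the variable-length point sequence must become a fixed-length feature vector; resampling plus translation/scale normalisation is a standard recipe for this. The sketch below is a generic stand-in, since the abstract does not specify the paper's exact descriptor, and all names are hypothetical:

```python
def trajectory_features(points, n=16):
    """Resample a 2-D motion trajectory to n points by linear interpolation
    over the point index, then normalise for translation and scale
    (generic sketch of an SVM-ready trajectory descriptor)."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    feats = []
    for i in range(n):
        t = i * (len(points) - 1) / (n - 1)
        j = int(t)
        f = t - j
        j2 = min(j + 1, len(points) - 1)
        feats.append(xs[j] + f * (xs[j2] - xs[j]))
        feats.append(ys[j] + f * (ys[j2] - ys[j]))
    # Centre on the trajectory mean, then scale to the unit box.
    mx = sum(feats[0::2]) / n
    my = sum(feats[1::2]) / n
    centred = [v - (mx if k % 2 == 0 else my) for k, v in enumerate(feats)]
    scale = max(abs(v) for v in centred) or 1.0
    return [v / scale for v in centred]
```

The resulting 2n-dimensional vectors can be fed directly to any off-the-shelf SVM trainer.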


2014 ◽  
Vol 144 ◽  
pp. 581-595 ◽  
Author(s):  
Jia Yan ◽  
Xi Chen ◽  
Dexiang Deng ◽  
Qiuping Zhu

Author(s):  
Tianyang Xu ◽  
Zhenhua Feng ◽  
Xiao-Jun Wu ◽  
Josef Kittler

Discriminative Correlation Filters (DCF) have been shown to achieve impressive performance in visual object tracking. However, existing DCF-based trackers rely heavily on learning regularised appearance models from invariant image feature representations. To further improve the accuracy of DCF and provide a parsimonious model from the attribute perspective, we propose to gauge the relevance of multi-channel features for the purpose of channel selection. This is achieved by assessing the information conveyed by the features of each channel as a group, using an adaptive group elastic net that induces independent sparsity and temporal smoothness on the DCF solution. The robustness and stability of the learned appearance model are significantly enhanced by the proposed method, as the process of channel selection performs implicit spatial regularisation. We use the augmented Lagrangian method to optimise the discriminative filters efficiently. The experimental results obtained on a number of well-known benchmarking datasets demonstrate the effectiveness and stability of the proposed method. A superior performance over the state-of-the-art trackers is achieved using less than 10% of the deep feature channels.
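A group elastic net penalises each feature channel's coefficients jointly with a group ℓ2 term plus a ridge term, so whole channels shrink to zero and are dropped. Its per-group proximal operator has a closed form (group soft-thresholding followed by ridge shrinkage); the sketch below illustrates that operator generically and is not the authors' augmented-Lagrangian solver:

```python
import math

def group_elastic_net_prox(v, lam1, lam2):
    """Proximal operator of lam1*||w||_2 + (lam2/2)*||w||_2^2 applied to
    one channel's coefficient vector v: channels whose group norm falls
    below lam1 are zeroed out entirely (generic sketch)."""
    norm = math.sqrt(sum(x * x for x in v))
    if norm <= lam1:
        return [0.0] * len(v)          # channel deselected
    scale = (norm - lam1) / ((1.0 + lam2) * norm)
    return [scale * x for x in v]

print(group_elastic_net_prox([3.0, 4.0], lam1=1.0, lam2=0.0))
print(group_elastic_net_prox([3.0, 4.0], lam1=6.0, lam2=0.0))  # zeroed
```

Iterating this shrinkage inside the filter update is what lets a tracker keep under 10% of the deep feature channels while preserving accuracy.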

