Person Re-identification by Articulated Appearance Matching

2014, pp. 139-160
Author(s): Dong Seon Cheng, Marco Cristani
1989, Vol 68 (5), pp. 819-822
Author(s): W.M. Johnston, E.C. Kao

Judgments of appearance matching by means of the visual criteria established by the United States Public Health Service (USPHS) and by means of an extended visual rating scale were determined for composite resin veneer restorations and their comparison teeth. Using a colorimeter of 45°/0° geometry and the CIELAB color order system, we measured the color of the restorations and comparison teeth and calculated a color difference for every visual rating. Statistically significant relationships were found between each of the two visual rating systems and the color differences. The average CIELAB color difference of those ratings judged a match by the USPHS criteria was found to be 3.7. However, the overlap in the ranges of color differences for comparisons rated matches and mismatches indicates the importance of other factors in appearance matching, such as translucency and the effects of other surrounding visual stimuli. The extended visual rating scale offers no advantages over the more broadly defined criteria established by the USPHS.
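For concreteness, here is a minimal Python sketch of how a CIELAB color difference such as the 3.7 average above can be computed from two L*a*b* triples, assuming the standard CIE76 ΔE*ab formula (the abstract does not name the exact formula used); the readings below are hypothetical, not measured values from the study.

```python
import math

def delta_e_ab(lab1, lab2):
    """CIE76 color difference (Delta E*ab) between two CIELAB triples.

    Each argument is an (L*, a*, b*) tuple, e.g. as read from a
    45/0-geometry colorimeter.
    """
    dL = lab1[0] - lab2[0]
    da = lab1[1] - lab2[1]
    db = lab1[2] - lab2[2]
    return math.sqrt(dL ** 2 + da ** 2 + db ** 2)

# Hypothetical readings for a veneer restoration and its comparison tooth.
restoration = (72.4, 1.8, 16.5)
tooth = (75.0, 0.9, 14.2)
print(f"Delta E*ab = {delta_e_ab(restoration, tooth):.1f}")
```

On values near the reported average of 3.7, small shifts in any one of L*, a*, or b* can move a pair across the match/mismatch boundary, which is consistent with the overlap in rating ranges the abstract describes.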


2017
Author(s): Thomas S. A. Wallis, Christina M. Funke, Alexander S. Ecker, Leon A. Gatys, Felix A. Wichmann, ...

Our visual environment is full of texture—“stuff” like cloth, bark or gravel as distinct from “things” like dresses, trees or paths—and humans are adept at perceiving subtle variations in material properties. To investigate image features important for texture perception, we psychophysically compare a recent parametric model of texture appearance (CNN model) that uses the features encoded by a deep convolutional neural network (VGG-19) with two other models: the venerable Portilla and Simoncelli model (PS) and an extension of the CNN model in which the power spectrum is additionally matched. Observers discriminated model-generated textures from original natural textures in a spatial three-alternative oddity paradigm under two viewing conditions: when test patches were briefly presented to the near-periphery (“parafoveal”) and when observers were able to make eye movements to all three patches (“inspection”). Under parafoveal viewing, observers were unable to discriminate 10 of 12 original images from CNN model images, and remarkably, the simpler PS model performed slightly better than the CNN model (11 textures). Under foveal inspection, matching CNN features captured appearance substantially better than the PS model (9 compared to 4 textures), and including the power spectrum improved appearance matching for two of the three remaining textures. None of the models we test here could produce indiscriminable images for one of the 12 textures under the inspection condition. While deep CNN (VGG-19) features can often be used to synthesise textures that humans cannot discriminate from natural textures, there is currently no uniformly best model for all textures and viewing conditions.
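As background on the CNN texture model, here is a minimal PyTorch sketch of the Gram-matrix statistics computed from VGG-19 features, in the style of Gatys et al.; the layer indices and random input image are illustrative assumptions, not the authors' exact configuration.

```python
import torch
import torchvision.models as models

# Pretrained VGG-19 feature extractor (weights download on first use).
vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).features.eval()

def gram_matrix(feat):
    """Channel-by-channel feature correlations of one feature map:
    the spatially averaged texture statistics matched by the CNN model."""
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def texture_statistics(img, layer_indices=(0, 5, 10, 19, 28)):
    """Gram matrices at a few VGG-19 layers (indices here pick the first
    conv layer of each block; an illustrative, common choice)."""
    grams, x = [], img
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in layer_indices:
            grams.append(gram_matrix(x))
    return grams

# Example: statistics for a random 256x256 RGB "texture".
with torch.no_grad():
    stats = texture_statistics(torch.rand(1, 3, 256, 256))
print([g.shape for g in stats])
```

Synthesis then amounts to optimizing a noise image until its Gram matrices match those of the target texture; the power-spectrum extension studied above adds a further constraint on the Fourier amplitude spectrum.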


Author(s): Beihua Zhang, Xiongcai Cai, Arcot Sowmya
