Appearance Modeling: Recently Published Documents

Total documents: 126 (last five years: 15)
H-index: 17 (last five years: 2)

2021 · Vol 11 (1) · Author(s): Philip Furley, Florian Klingner, Daniel Memmert

Abstract: The present research attempted to extend prior work showing that thin slices of pre-performance nonverbal behavior (NVB) of professional darts players give observers valid information about subsequent performance tendencies. Specifically, we investigated which nonverbal cues were associated with success and informed thin-slice ratings. Participants (N = 61) were first asked to estimate the performance of a random sample of videos showing the preparatory NVB of professional darts players (N = 47) either performing well (470 clips) or poorly (470 clips). Preparatory NVB was assessed via preparation times and Active Appearance Modeling using Noldus FaceReader. Results showed that observers could distinguish between good and poor performance based on thin slices of preparatory NVB (p = 0.001, d = 0.87). Further analyses showed that facial expressions prior to poor performance showed more arousal (p = 0.011, ηp² = 0.10), sadness (p = 0.040, ηp² = 0.04), and anxiety (p = 0.009, ηp² = 0.09), and that preparation times were shorter prior to poor performance than good performance (p = 0.001, ηp² = 0.36). Lens model analyses showed preparation times (p = 0.001, rho = 0.18), neutral (p = 0.001, rho = 0.13), sad (rho = 0.12), and facial expressions of arousal (p = 0.001, rho = 0.11) to be correlated with observers' performance ratings. Hence, preparation times and facial cues associated with a player's level of arousal, neutrality, and sadness seem to be valid nonverbal cues that observers utilize to infer information about subsequent perceptual-motor performance.
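The lens-model step described above comes down to correlating each preparatory cue (preparation time and the FaceReader facial-expression scores) with the observers' performance ratings. A minimal sketch of that cue-utilization computation is given below; the file name and column names are illustrative assumptions, not the authors' actual data layout.

```python
# Hedged sketch of the cue-utilization side of a lens-model analysis:
# correlate each preparatory cue with the observers' mean performance rating.
# The file "clips.csv" and all column names are illustrative assumptions.
import pandas as pd
from scipy.stats import spearmanr

clips = pd.read_csv("clips.csv")  # one row per thin-slice clip

cues = ["preparation_time", "neutral", "sad", "arousal", "anxiety"]
for cue in cues:
    rho, p = spearmanr(clips[cue], clips["mean_observer_rating"])
    print(f"{cue:>18}: rho = {rho:.2f}, p = {p:.3f}")
```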


2021 · Vol 14 (3) · pp. 1-30 · Author(s): Fabrizio Ivan Apollonio, Riccardo Foschi, Marco Gaiani, Simone Garagnani

Leonardo's original hand drawings are astonishing collections of knowledge and superb records of the artist's way of working, testifying to the technical and cultural peak of the Renaissance. However, because of their delicate and fragile nature, they are difficult to handle and must be preserved. To overcome this problem we developed, in a 10-year-long research program, a complete workflow to produce a system, ISLe (InSightLeonardo), able to replace, investigate, describe, and communicate ancient fine drawings through what Leonardo called “the best sense” (i.e., sight). The resulting visualization app targets a wide audience of museum visitors and, most importantly, art historians, scholars, conservators, and restorers. This article describes a specific feature of the workflow: appearance modeling aimed at accurate Real-Time Rendering (RTR) visualization. The development is based on direct observation of five of Leonardo da Vinci's best-known drawings, spanning his entire activity as a draftsman, and is the result of a careful analysis of the drawing materials Leonardo used. The peculiarities of these materials are digitally reproduced at multiple scales using solutions that favor the accuracy of the perceived reproduction over fidelity to the physical model, and that can be implemented efficiently on a standard GPU-accelerated RTR pipeline. The results are exemplified on five of Leonardo's drawings, and multiple subjective and objective evaluations are presented to assess the potential and critical issues of the application.
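The article itself details GPU shaders tuned to Leonardo's specific drawing media; the fragment below is only a toy CPU-side sketch of the general idea of layering a simple per-medium reflectance term over captured albedo and normal maps. All function names, maps, and parameters are illustrative assumptions, not the authors' implementation.

```python
# Toy sketch (not the authors' shader) of the general idea behind the
# appearance model: shade each texel from captured albedo and normal maps,
# with a per-medium term loosely approximating the drawing material.
# All maps, names, and parameters here are illustrative assumptions.
import numpy as np

def shade(albedo, normals, light_dir, medium_gain=1.0, sheen=0.05):
    """albedo: HxWx3, normals: HxWx3 (unit length), light_dir: 3-vector."""
    l = np.asarray(light_dir, dtype=float)
    l /= np.linalg.norm(l)
    n_dot_l = np.clip(np.einsum("hwc,c->hw", normals, l), 0.0, 1.0)
    diffuse = albedo * n_dot_l[..., None]         # Lambertian base layer
    grazing = (1.0 - n_dot_l)[..., None] ** 2     # cheap view-independent sheen term
    return np.clip(medium_gain * diffuse + sheen * grazing, 0.0, 1.0)

# Example: flat-lit 2x2 patch of a warm paper tone.
albedo = np.full((2, 2, 3), [0.86, 0.79, 0.65])
normals = np.tile([0.0, 0.0, 1.0], (2, 2, 1))
print(shade(albedo, normals, light_dir=[0.3, 0.2, 1.0]))
```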


Author(s): Xiangyuan Lan, Zifei Yang, Wei Zhang, Pong C. Yuen

The development of multi-spectrum image sensing technology has generated great interest in exploiting information from multiple modalities (e.g., RGB and infrared) to solve computer vision problems. In this article, we investigate how to exploit information from the RGB and infrared modalities to address two important issues in visual tracking: robustness and object re-detection. Although various algorithms that exploit multi-modality information in appearance modeling have been developed, they still face challenges, mainly from the following aspects: (1) a lack of robustness to large appearance changes and dynamic backgrounds, (2) failure to re-capture the object after tracking loss, and (3) difficulty in determining the reliability of the different modalities. To address these issues and integrate multiple modalities effectively, we propose a new tracking-by-detection algorithm called the Adaptive Spatial-Temporal Regulated Multi-Modality Correlation Filter. In particular, an adaptive spatial-temporal regularization is imposed on the correlation filter framework, in which the spatial regularization helps suppress the effect of cluttered backgrounds, while the temporal regularization enables the adaptive incorporation of historical appearance cues to deal with appearance changes. In addition, a dynamic modality-weight learning algorithm is integrated into the correlation filter training, which ensures that more reliable modalities gain more importance in target tracking. Experimental results demonstrate the effectiveness of the proposed method.
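A minimal sketch of two of the core ingredients follows: a ridge-regularized correlation filter trained per modality in the Fourier domain, and a reliability-driven weighting that fuses the per-modality response maps. It is not the authors' full optimizer (the adaptive spatial-temporal regularization is omitted), and the peak-to-mean reliability measure and all variable names are illustrative assumptions.

```python
# Hedged sketch: (1) closed-form ridge-regularized correlation filter per
# modality in the Fourier domain, (2) fusion of response maps with weights
# derived from a crude peak-to-mean reliability measure. Illustrative only.
import numpy as np

def train_filter(feat, target, lam=1e-2):
    """Closed-form single-channel correlation filter in the Fourier domain."""
    F = np.fft.fft2(feat)
    Y = np.fft.fft2(target)
    return (np.conj(F) * Y) / (np.conj(F) * F + lam)

def respond(filt, feat):
    """Spatial-domain response map of a filter applied to a feature plane."""
    return np.real(np.fft.ifft2(filt * np.fft.fft2(feat)))

def reliability(resp):
    """Peak-to-mean ratio as a stand-in for modality reliability."""
    return float(resp.max() / (np.abs(resp).mean() + 1e-8))

def fuse(responses, weights):
    """Weighted average of per-modality response maps."""
    w = np.asarray(weights) / np.sum(weights)
    return sum(wi * r for wi, r in zip(w, responses))

# Toy usage with random RGB and infrared feature planes.
rng = np.random.default_rng(0)
target = np.zeros((32, 32)); target[16, 16] = 1.0   # desired response: single peak
rgb, ir = rng.standard_normal((32, 32)), rng.standard_normal((32, 32))
f_rgb, f_ir = train_filter(rgb, target), train_filter(ir, target)
r_rgb, r_ir = respond(f_rgb, rgb), respond(f_ir, ir)
fused = fuse([r_rgb, r_ir], [reliability(r_rgb), reliability(r_ir)])
print("peak location:", np.unravel_index(fused.argmax(), fused.shape))
```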


2020 · Vol 98 · pp. 107059 · Author(s): Jizhou Ma, Shuai Li, Hong Qin, Aimin Hao

2019 · Vol 2019 (1) · Author(s): Wei Liu, Xin Sun, Dong Li

Abstract: A robust object tracking algorithm based on an online discriminative appearance modeling mechanism is proposed in this paper. In contrast with traditional trackers, whose computations cover the whole target region and may easily be polluted by similar background pixels, we divide the target into a number of patches and take the most discriminative one as the tracking basis. Considering both photometric and spatial information, we construct a discriminative target model on this patch. A likelihood map is then obtained by comparing the target model with candidate regions, and the mean shift procedure is employed on it for mode seeking. Finally, we update the target model to adapt to appearance variation. Experimental results on a number of challenging video sequences confirm that the proposed method outperforms related state-of-the-art trackers.
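A minimal sketch of the likelihood-map and mean-shift steps follows; it omits the paper's patch selection and model update, and the intensity-histogram back-projection, window size, and toy frame are illustrative assumptions.

```python
# Hedged sketch of the likelihood-map + mean-shift step described above
# (not the paper's full patch-selection or model-update scheme).
import numpy as np

def likelihood_map(gray, patch, bins=16):
    """Back-project the intensity histogram of the chosen patch onto the frame."""
    hist, edges = np.histogram(patch, bins=bins, range=(0, 256), density=True)
    idx = np.clip(np.digitize(gray, edges) - 1, 0, bins - 1)
    return hist[idx]

def mean_shift(lmap, center, win=15, iters=20):
    """Shift a square window toward the local mode of the likelihood map."""
    cy, cx = center
    h, w = lmap.shape
    for _ in range(iters):
        y0, y1 = max(cy - win, 0), min(cy + win + 1, h)
        x0, x1 = max(cx - win, 0), min(cx + win + 1, w)
        roi = lmap[y0:y1, x0:x1]
        total = roi.sum()
        if total <= 0:
            break
        ys, xs = np.mgrid[y0:y1, x0:x1]
        ny = int(round((ys * roi).sum() / total))
        nx = int(round((xs * roi).sum() / total))
        if (ny, nx) == (cy, cx):
            break
        cy, cx = ny, nx
    return cy, cx

# Toy usage: a bright square target on a dark frame.
frame = np.zeros((120, 160)); frame[40:60, 80:100] = 200.0
patch = frame[40:60, 80:100]               # most discriminative patch (assumed given)
lmap = likelihood_map(frame, patch)
print(mean_shift(lmap, center=(50, 70)))   # window drifts toward the target region
```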

