feature replacement
Recently Published Documents


TOTAL DOCUMENTS

7
(FIVE YEARS 2)

H-INDEX

2
(FIVE YEARS 0)

2020 ◽  
Vol 10 (1) ◽  
Author(s):  
Emilie Leblanc ◽  
Peter Washington ◽  
Maya Varma ◽  
Kaitlyn Dunlap ◽  
Yordan Penev ◽  
...  

Abstract: Autism Spectrum Disorder is a neuropsychiatric condition affecting 53 million children worldwide and for which early diagnosis is critical to the outcome of behavior therapies. Machine learning applied to features manually extracted from readily accessible videos (e.g., from smartphones) has the potential to scale this diagnostic process. However, nearly unavoidable variability in video quality can lead to missing features that degrade algorithm performance. To manage this uncertainty, we evaluated the impact of missing values and feature imputation methods on two previously published autism detection classifiers, trained on standard-of-care instrument scoresheets and tested on ratings of YouTube videos of 140 children. We compare the baseline method of listwise deletion to classic univariate and multivariate techniques. We also introduce a feature replacement method that, based on a score, selects a feature from an expanded dataset to fill in the missing value. The selected replacement feature can be identical for all records (general) or automatically adjusted to the record under consideration (dynamic). Our results show that both general and dynamic feature replacement achieve higher performance than classic univariate and multivariate methods, supporting the hypothesis that algorithmic management can maintain the fidelity of video-based diagnostics in the face of missing values and variable video quality.
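The paper's scoring rule for choosing a replacement feature is not spelled out in the abstract, but the baselines it compares against are standard. Below is a minimal, hypothetical sketch of the three ideas on a toy feature matrix whose NaNs stand in for features lost to poor video quality: listwise deletion, univariate mean imputation, and a crude stand-in for feature replacement that fills a missing value from the most correlated other column (the correlation score here is an assumption, not the paper's method).

```python
import numpy as np

# Toy feature matrix with two missing ratings (NaN).
rng = np.random.default_rng(0)
X = rng.normal(size=(8, 4))
X[0, 1] = np.nan
X[3, 2] = np.nan

# Baseline -- listwise deletion: drop every record with any missing feature.
complete = ~np.isnan(X).any(axis=1)
X_deleted = X[complete]

# Classic univariate imputation: replace each NaN with its column mean.
col_means = np.nanmean(X, axis=0)
X_mean = np.where(np.isnan(X), col_means, X)

# Stand-in for "feature replacement": for each missing value, copy the value
# of the most correlated other column of the same record. Correlations are
# estimated on complete records only; the paper's actual score is not public.
corr = np.corrcoef(X_deleted, rowvar=False)
np.fill_diagonal(corr, 0.0)
X_repl = X.copy()
for i, j in zip(*np.where(np.isnan(X))):
    donor = np.argmax(np.abs(corr[j]))
    X_repl[i, j] = X[i, donor]

print(X_deleted.shape, np.isnan(X_mean).any(), np.isnan(X_repl).any())
```

Deletion shrinks the dataset, while both imputation variants preserve every record, which is the property the paper exploits to keep classifier inputs complete.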


Author(s):  
Yubo Zhang ◽  
Hao Tan ◽  
Mohit Bansal

Vision-and-Language Navigation (VLN) requires an agent to follow natural-language instructions, explore the given environments, and reach the desired target locations. These step-by-step navigational instructions are crucial when the agent is navigating new environments about which it has no prior knowledge. Most recent works studying VLN observe a significant performance drop when agents are tested on unseen environments (i.e., environments not used in training), indicating that the neural agent models are highly biased towards their training environments. Although this issue is considered one of the major challenges in VLN research, it remains under-studied and needs a clearer explanation. In this work, we design novel diagnostic experiments via environment re-splitting and feature replacement to look into possible reasons for this environment bias. We observe that it is neither the language nor the underlying navigational graph, but the low-level visual appearance conveyed by ResNet features, that directly affects the agent model and contributes to this environment bias. Based on this observation, we explore several kinds of semantic representations that contain less low-level visual information, so that an agent trained with these features generalizes better to unseen test environments. Without modifying the baseline agent model or its training method, our explored semantic features significantly decrease the performance gap between seen and unseen environments on multiple datasets (i.e., R2R, R4R, and CVDN) and achieve unseen results competitive with previous state-of-the-art models.
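The feature-replacement diagnosis described above amounts to swapping the agent's visual input while holding everything else fixed. A minimal, hypothetical sketch of the two representations being compared (the function names and dimensions are assumptions, not the paper's code): a dense ConvNet-style feature vector that carries low-level appearance, versus a semantic representation that keeps only which object classes are visible in a view.

```python
import numpy as np

def convnet_like_features(image_seed, dim=2048):
    # Placeholder for a precomputed low-level visual feature vector
    # (e.g., ResNet pooled features); here just seeded random values.
    rng = np.random.default_rng(image_seed)
    return rng.normal(size=dim)

def semantic_histogram(detected_classes, num_classes=40):
    # Normalized histogram of detected object classes for one view:
    # low-level texture and color are discarded, only "what objects
    # are visible" survives, which is the kind of representation the
    # abstract argues transfers better to unseen environments.
    hist = np.zeros(num_classes)
    for c in detected_classes:
        hist[c] += 1
    return hist / max(hist.sum(), 1.0)

low_level = convnet_like_features(42)       # appearance-heavy input
semantic = semantic_histogram([3, 3, 17, 25])  # appearance-free input
print(low_level.shape, semantic.shape, round(semantic.sum(), 6))
```

Feeding `semantic` instead of `low_level` into an otherwise unchanged agent is the shape of the intervention: any change in the seen/unseen gap can then be attributed to the visual representation alone.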


2014 ◽  
Vol 15 (3) ◽  
pp. 223-231 ◽  
Author(s):  
Feng-fei Zhao ◽  
Zheng Qin ◽  
Zhuo Shao ◽  
Jun Fang ◽  
Bo-yan Ren

2011 ◽  
Vol 403-408 ◽  
pp. 2958-2961
Author(s):  
Jih Pin Yeh ◽  
Chen Yu Kao ◽  
Chung Yung Chen ◽  
Hwei Jen Lin

In this study, we propose a facial feature replacement system that uses a triangulation algorithm to perform the replacement within each segmented triangular region associated with control points. Experimental results show that our system produces quite natural composite images. In addition, the system is flexible and imposes no limits on the shape, size, or in-plane rotation of the faces being processed.
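The core step of such triangulation-based replacement is a per-triangle affine warp: each triangle of control points in the source face is mapped onto its counterpart in the target face. A minimal sketch of that step under assumed names (the paper's own pipeline and segmentation are not shown):

```python
import numpy as np

def triangle_affine(src_tri, dst_tri):
    """Return the 2x3 affine matrix M such that M @ [x, y, 1] carries each
    source-triangle vertex onto the matching target-triangle vertex."""
    src = np.hstack([np.asarray(src_tri, float), np.ones((3, 1))])  # 3x3
    dst = np.asarray(dst_tri, float)                                # 3x2
    # Solve src @ M.T = dst; exact for any non-degenerate triangle.
    return np.linalg.solve(src, dst).T

src = [(0, 0), (1, 0), (0, 1)]
dst = [(2, 1), (4, 1), (2, 3)]
M = triangle_affine(src, dst)

# Applying M to the source vertices should reproduce the target vertices.
mapped = (M @ np.hstack([np.array(src, float), np.ones((3, 1))]).T).T
print(M.shape, np.allclose(mapped, dst))
```

Repeating this warp over every triangle of a Delaunay-style mesh, and resampling pixels inside each warped triangle, yields the composite; because each triangle gets its own affine map, the method naturally tolerates differences in face shape, size, and in-plane rotation.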


2009 ◽  
Vol 49 (4) ◽  
pp. 439-450 ◽  
Author(s):  
P.-J. Hsieh ◽  
P.U. Tse