Speech Feature Analysis Using Temporal Linear Embedding

Author(s):  
Lifang Xue ◽  
XiusHuang Yi

2014 ◽
Vol 1014 ◽  
pp. 375-378 ◽  
Author(s):  
Ri Sheng Huang

To effectively improve speech emotion recognition performance, nonlinear dimensionality reduction must be performed on speech feature data lying on a nonlinear manifold embedded in a high-dimensional acoustic space. This paper proposes an improved SLLE algorithm, which enhances the discriminating power of the low-dimensional embedded data and possesses optimal generalization ability. The proposed algorithm performs nonlinear dimensionality reduction on 48-dimensional speech emotion feature data, including prosody features, in order to recognize three emotions: anger, joy, and neutral. Experimental results on a natural emotional speech database demonstrate that the proposed algorithm attains the highest accuracy of 90.97% with only 9 embedded features, an 11.64% improvement over the SLLE algorithm.
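The dimensionality reduction described above can be sketched with standard locally linear embedding; note this is plain unsupervised LLE from scikit-learn, not the paper's improved SLLE (which additionally exploits class labels), and the data here is synthetic — only the 48-to-9 dimension setting follows the abstract.

```python
# Minimal sketch: nonlinear dimensionality reduction of 48-dimensional
# speech-emotion feature vectors down to 9 embedded features, using
# standard (unsupervised) LLE. Synthetic data stands in for real features.
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 48))  # 120 synthetic 48-dim feature vectors

lle = LocallyLinearEmbedding(n_neighbors=10, n_components=9)
Y = lle.fit_transform(X)        # low-dimensional embedding

print(Y.shape)  # (120, 9)
```

A supervised variant such as SLLE would additionally shrink distances between same-class samples before the neighbor search, so that the embedding separates emotion classes more cleanly.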


2011 ◽  
Vol 121-126 ◽  
pp. 720-724
Author(s):  
Liang Liang Wang ◽  
Zhi Yong Li ◽  
Ji Xiang Sun

The locally linear embedding (LLE) algorithm is applied to anomaly detection on the basis of a feature analysis of the hyperspectral data. Then, to address the declining ability of the Euclidean distance to identify suitable neighborhoods, an improved LLE algorithm is developed. The improved algorithm selects neighborhood pixels according to the spectral gradient, making the anomaly detection more robust to changes in illumination and terrain. Experimental results confirm the feasibility of using the LLE algorithm for anomaly detection and its effectiveness in improving detection performance.
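The key modification — choosing LLE neighbors in spectral-gradient space rather than by raw Euclidean distance — can be sketched as follows. The gradient definition here (first difference across adjacent bands) and the synthetic pixel data are assumptions for illustration; the paper's exact formulation may differ.

```python
# Sketch: select each pixel's LLE neighborhood by comparing spectral
# gradients (first differences across bands) instead of raw spectra.
# A constant additive offset on a spectrum (e.g., a brightness shift)
# cancels in the first difference, which is the intuition behind the
# claimed robustness to illumination changes.
import numpy as np

def spectral_gradient(pixels):
    """First difference along the band axis of each pixel spectrum."""
    return np.diff(pixels, axis=1)

def neighbors_by_gradient(pixels, k):
    """Indices of the k nearest neighbors of each pixel, with distances
    computed in spectral-gradient space."""
    g = spectral_gradient(pixels)
    d = np.linalg.norm(g[:, None, :] - g[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)          # a pixel is not its own neighbor
    return np.argsort(d, axis=1)[:, :k]

pixels = np.random.default_rng(1).normal(size=(50, 30))  # 50 pixels, 30 bands
nbrs = neighbors_by_gradient(pixels, k=5)
print(nbrs.shape)  # (50, 5)
```

The neighbor indices returned here would then feed the usual LLE steps (local reconstruction weights, then the global embedding) unchanged.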


2021 ◽  
Vol 89 (9) ◽  
pp. S130
Author(s):  
Danielle DeSouza ◽  
Mengdan Xu ◽  
Celia Fidalgo ◽  
Jessica Robin ◽  
William Simpson

2003 ◽  
Vol 10 (5) ◽  
pp. 137-140 ◽  
Author(s):  
Oh-Wook Kwon ◽  
Kwokleung Chan ◽  
Te-Won Lee

2019 ◽  
Vol 62 (12) ◽  
pp. 4464-4482 ◽  
Author(s):  
Diane L. Kendall ◽  
Megan Oelke Moldestad ◽  
Wesley Allen ◽  
Janaki Torrence ◽  
Stephen E. Nadeau

Purpose The ultimate goal of anomia treatment should be to achieve gains in exemplars trained in the therapy session, as well as generalization to untrained exemplars and contexts. The purpose of this study was to test the efficacy of phonomotor treatment, a treatment focusing on enhancement of phonological sequence knowledge, against semantic feature analysis (SFA), a lexical-semantic therapy that focuses on enhancement of semantic knowledge and is well known and commonly used to treat anomia in aphasia. Method In a between-groups randomized controlled trial, 58 persons with aphasia characterized by anomia and phonological dysfunction were randomized to receive 56–60 hr of intensively delivered treatment over 6 weeks with testing pretreatment, posttreatment, and 3 months posttreatment termination. Results There was no significant between-groups difference on the primary outcome measure (untrained nouns phonologically and semantically unrelated to each treatment) at 3 months posttreatment. Significant within-group immediately posttreatment acquisition effects for confrontation naming and response latency were observed for both groups. Treatment-specific generalization effects for confrontation naming were observed for both groups immediately and 3 months posttreatment; a significant decrease in response latency was observed at both time points for the SFA group only. Finally, significant within-group differences on the Comprehensive Aphasia Test–Disability Questionnaire (Swinburn, Porter, & Howard, 2004) were observed both immediately and 3 months posttreatment for the SFA group, and significant within-group differences on the Functional Outcome Questionnaire (Glueckauf et al., 2003) were found for both treatment groups 3 months posttreatment.
Discussion Our results are consistent with those of prior studies that have shown that SFA treatment and phonomotor treatment generalize to untrained words that share features (semantic or phonological sequence, respectively) with the training set. However, they also show that there is no significant generalization to untrained words that do not share semantic features or phonological sequence features.

