High performance word level sequential and parallel coding methods and architectures for bit plane coding

2008 ◽  
Vol 51 (4) ◽  
pp. 337-351
Author(s):  
ChengYi Xiong ◽  
JinWen Tian ◽  
Jian Liu
Author(s):  
Hong-Yu Chao ◽  
Jia-Shung Wang ◽  
Juin-Long Lin ◽  
Kai-Chao Yang ◽  
Chien-Ming Wu ◽  
...  

2021 ◽  
pp. 1-1
Author(s):  
Fangping Ye ◽  
Navid Mahmoudian Bidgoli ◽  
Elsa Dupraz ◽  
Aline Roumy ◽  
Karine Amis ◽  
...  

Author(s):  
Anton Batliner ◽  
Bernd Möbius

Automatic speech processing (ASP) is understood as covering word recognition, the processing of higher linguistic components (syntax, semantics, and pragmatics), and the processing of computational paralinguistics (CP), which deals with speaker states and traits. This chapter attempts to track the role of prosody in ASP from the word level up to CP. A short history of the field from 1980 to 2020 distinguishes the early years (until 2000)—when the prosodic contribution to the modelling of linguistic phenomena, such as accents, boundaries, syntax, semantics, and dialogue acts, was the focus—from the later years, when the focus shifted to paralinguistics; prosody ceased to be visible. Different types of predictor variables are addressed, among them high-performance power features as well as leverage features, which can also be employed in teaching and therapy.


1996 ◽  
Vol 32 (19) ◽  
pp. 1773 ◽  
Author(s):  
K. Nguyen-Phi ◽  
H. Weinrichter

Author(s):  
Savaş Özkan ◽  
Akın Özkan

Determining the category of a text document from its semantic content is a well-motivated problem that has been studied extensively across applications. Compact representation of the text is a fundamental step toward precise results, and many studies have concentrated on improving it. In particular, techniques that aggregate word-level representations are the mainstream approach to this problem. In this paper, we tackle text representation to achieve high performance on different text classification tasks. Throughout the paper, three critical contributions are presented. First, to encode the word-level representations of each text, we adapt a trainable orderless aggregation algorithm that transforms word vectors into a more discriminative abstract text-level representation. Second, we propose an effective term-weighting scheme that computes the relative importance of words from their context, learned end-to-end for the task at hand. Third, we present a weighted loss function to mitigate the class-imbalance problem between categories. To evaluate performance, we collect two distinct datasets, both available on the web: Turkish parliament records (i.e. written speeches of four major political parties, with 30731/7683 train and test documents) and newspaper articles (i.e. daily articles of columnists, with 16000/3200 train and test documents). The proposed method yields significant improvements over the baseline techniques (i.e. VLAD and Fisher Vector), achieving prediction accuracies of 0.823 for party membership and 0.878 for estimating the category of articles. These results validate that the proposed contributions (i.e. trainable word-encoding model, trainable term-weighting scheme, and weighted loss function) significantly outperform the baselines.
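The two components the abstract names most concretely, an orderless aggregation of word vectors into a text-level vector and a class-weighted loss, can be sketched as follows. This is a minimal NumPy forward-pass illustration, assuming a NetVLAD-style soft-assignment aggregator; the function names, the fixed `alpha` sharpness parameter, and the use of pre-set cluster centers are illustrative assumptions (in the paper the centers and weights would be trained end-to-end), not the authors' actual implementation.

```python
import numpy as np

def soft_vlad(word_vecs, centers, alpha=10.0):
    """Orderless aggregation of word vectors into one text vector
    (a NetVLAD-style sketch; `centers` would be learned jointly).
    word_vecs: (n_words, d), centers: (k, d) -> (k*d,) vector."""
    sims = alpha * word_vecs @ centers.T            # (n, k) similarities
    sims -= sims.max(axis=1, keepdims=True)         # numerical stability
    assign = np.exp(sims)
    assign /= assign.sum(axis=1, keepdims=True)     # soft cluster assignment
    # accumulate residuals to each center, weighted by the soft assignment
    resid = word_vecs[:, None, :] - centers[None, :, :]   # (n, k, d)
    vlad = (assign[:, :, None] * resid).sum(axis=0)       # (k, d)
    vlad /= np.linalg.norm(vlad, axis=1, keepdims=True) + 1e-12  # intra-norm
    flat = vlad.ravel()
    return flat / (np.linalg.norm(flat) + 1e-12)    # global L2-norm

def weighted_ce(probs, label, class_weights):
    """Class-weighted cross-entropy: up-weighting rare classes is one
    standard way to mitigate class imbalance between categories."""
    return -class_weights[label] * np.log(probs[label] + 1e-12)

# Illustrative usage on random data
rng = np.random.default_rng(0)
words = rng.normal(size=(5, 4))      # 5 word vectors of dimension 4
centers = rng.normal(size=(3, 4))    # 3 clusters
text_vec = soft_vlad(words, centers) # shape (12,), unit L2 norm
probs = np.array([0.7, 0.2, 0.1])    # classifier output for one document
loss = weighted_ce(probs, 1, np.array([1.0, 3.0, 1.0]))  # minority class
```

Because the aggregation is a sum over words, it is orderless (permutation-invariant) and produces a fixed-length vector regardless of document length, which is the property that makes such encodings convenient inputs to a standard classifier.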

