phonetic information
Recently Published Documents


TOTAL DOCUMENTS: 99 (five years: 27)

H-INDEX: 15 (five years: 3)

2021 · Vol 45 (3) · pp. 71-81
Author(s): Robert Skoczek, Alexandra Ebel

Orthoepy research is a traditional field at the Department of Speech Science and Phonetics at Martin Luther University Halle-Wittenberg. After publishing several pronunciation dictionaries, the department has now released a pronunciation database. With the establishment of the German pronunciation database (DAD), the wish for a publicly accessible reference source has been fulfilled. It offers normative phonetic information on the general vocabulary as well as forms and rules of phonetic Germanization. The database can be used in a variety of scenarios in German lessons, and its continuous expansion will allow further uses to be introduced in the future.


2021
Author(s): Dong Liu, Caihuan Zhang, Yongxin Zhang, Youzhong Ma

Chinese characters form a logographic writing system, and there are associations between the semantics of Chinese characters and their structure, shape, and phonetic information. In this work, multi-modal Chinese character-level embeddings are extracted, including visual features, pre-trained embeddings, shapes, and phonetic information. These embedding sequences for a Chinese sentence are first fed into separate Bi-LSTM networks to capture context features and are then fused into one vector for sentiment analysis. Experimental results validate that multi-modal character-level embeddings can contribute to Chinese sentence sentiment classification, and their effect on the results is analyzed through an ablation test over the modal features.
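As a concrete illustration of the fusion scheme this abstract describes, the following PyTorch sketch encodes each modality's character-level embedding sequence with its own Bi-LSTM and fuses the pooled outputs for classification. The modality dimensions, mean-pooling, and concatenation fusion are assumptions for illustration, not the authors' exact architecture.

```python
# Minimal PyTorch sketch of per-modality Bi-LSTM encoding followed by fusion.
# Modality names/dimensions, mean-pooling, and concatenation are assumptions.
import torch
import torch.nn as nn

class MultiModalCharSentiment(nn.Module):
    def __init__(self, modal_dims, hidden=128, num_classes=2):
        super().__init__()
        # one Bi-LSTM per modality (e.g. visual, pre-trained, shape, phonetic)
        self.encoders = nn.ModuleList([
            nn.LSTM(d, hidden, batch_first=True, bidirectional=True)
            for d in modal_dims
        ])
        # fuse the pooled per-modality vectors by concatenation, then classify
        self.classifier = nn.Linear(2 * hidden * len(modal_dims), num_classes)

    def forward(self, modal_seqs):
        # modal_seqs: list of tensors, each shaped (batch, n_chars, modal_dim)
        pooled = []
        for seq, enc in zip(modal_seqs, self.encoders):
            out, _ = enc(seq)                # (batch, n_chars, 2 * hidden)
            pooled.append(out.mean(dim=1))   # mean-pool over characters
        fused = torch.cat(pooled, dim=-1)    # single sentence-level vector
        return self.classifier(fused)

# Usage with random stand-in features: 4 modalities, 20-character sentences.
model = MultiModalCharSentiment(modal_dims=[64, 300, 32, 16])
feats = [torch.randn(8, 20, d) for d in (64, 300, 32, 16)]
logits = model(feats)                        # shape: (8, 2)
```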


2020
Author(s): Yen-Ju Lu, Chien-Feng Liao, Xugang Lu, Jeih-weih Hung, Yu Tsao

Complexity · 2020 · Vol 2020 · pp. 1-9
Author(s): Xiaochao Fan, Hongfei Lin, Liang Yang, Yufeng Diao, Chen Shen, ...

Humor refers to the quality of being amusing. With the development of artificial intelligence, humor recognition is attracting increasing research attention. Although previous studies have introduced phonetics and ambiguity as cues, existing recognition methods still lack feature designs suited to neural networks. In this paper, we show that phonetic structure and the ambiguity associated with confusing words need to be learned as their own representations by the neural network. We then propose the Phonetics and Ambiguity Comprehension Gated Attention network (PACGA) to learn phonetic structure and semantic representations for humor recognition. The PACGA model can represent both phonetic information and the semantics of ambiguous words well, which is of great benefit to humor recognition. Experimental results on two public datasets demonstrate the effectiveness of our model.
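To make the gating idea concrete, here is a small PyTorch sketch in which separately encoded phonetic and semantic sequences are combined through a learned sigmoid gate before classification. The encoder type (GRU), the gate formulation, and all dimensions are assumptions for illustration; this is not the published PACGA implementation.

```python
# Sketch of gated fusion of a phonetic view and a semantic view of a sentence.
# Encoders, gate formulation, and dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class GatedPhoneticSemanticClassifier(nn.Module):
    def __init__(self, sem_dim=300, pho_dim=64, hidden=128, num_classes=2):
        super().__init__()
        self.sem_enc = nn.GRU(sem_dim, hidden, batch_first=True, bidirectional=True)
        self.pho_enc = nn.GRU(pho_dim, hidden, batch_first=True, bidirectional=True)
        # the gate decides, per dimension, how much of each view to let through
        self.gate = nn.Linear(4 * hidden, 2 * hidden)
        self.classifier = nn.Linear(2 * hidden, num_classes)

    def forward(self, sem_seq, pho_seq):
        sem, _ = self.sem_enc(sem_seq)       # (batch, len, 2 * hidden)
        pho, _ = self.pho_enc(pho_seq)
        sem_vec = sem.mean(dim=1)            # pool each view to a sentence vector
        pho_vec = pho.mean(dim=1)
        g = torch.sigmoid(self.gate(torch.cat([sem_vec, pho_vec], dim=-1)))
        fused = g * sem_vec + (1 - g) * pho_vec   # gated mixture of the two views
        return self.classifier(fused)

# Usage with random stand-in semantic (300-d) and phonetic (64-d) features.
model = GatedPhoneticSemanticClassifier()
logits = model(torch.randn(4, 15, 300), torch.randn(4, 15, 64))  # shape: (4, 2)
```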


Author(s): Laura Gwilliams, Jean-Remi King, Alec Marantz, David Poeppel

Listeners experience speech as a sequence of discrete words. However, the real input is a continuously varying acoustic signal that blends words and phonemes into one another. Here we recorded two-hour magnetoencephalograms from 21 subjects listening to stories, in order to investigate how the brain concurrently solves three competing demands: (1) processing overlapping acoustic-phonetic information, while (2) keeping track of the relative order of phonemic units and (3) maintaining individuated phonetic information until successful word recognition. We show that the human brain transforms speech input, roughly at the rate of phoneme duration, along a temporally defined representational trajectory. These representations, absent from the acoustic signal, are active earlier when phonemes are predictable than when they are surprising, and are sustained until lexical ambiguity is resolved. The results reveal how phoneme sequences in natural speech are represented and how they interface with stored lexical items. One-sentence summary: the human brain keeps track of the relative order of speech sound sequences by jointly encoding content and elapsed processing time.
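The abstract does not spell out its analysis pipeline, but a common way to ask whether phonetic information is readable from MEG at each moment in time is time-resolved decoding: fit one simple classifier per time point and track its accuracy. The sketch below is purely illustrative, with random stand-in data and a plain logistic-regression decoder; it is not the authors' method.

```python
# Illustrative time-resolved decoding: one classifier per time point, applied to
# random stand-in "MEG" data. Shapes, labels, and the decoder are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_sensors, n_times = 200, 50, 40
meg = rng.standard_normal((n_trials, n_sensors, n_times))   # stand-in MEG epochs
labels = rng.integers(0, 2, n_trials)                        # stand-in phonetic feature

# cross-validated accuracy of a per-time-point decoder, as a function of time
accuracy = np.array([
    cross_val_score(LogisticRegression(max_iter=1000), meg[:, :, t], labels, cv=5).mean()
    for t in range(n_times)
])
print(accuracy.shape)   # (40,) -> decoding accuracy over time
```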


2020 · Vol 20 · pp. 33-52
Author(s): Nick Posegay

MS T-S Ar.5.58 is a translation glossary from the Cairo Geniza that contains a list of Judaeo-Arabic glosses for Hebrew words from the biblical book of Samuel. These Arabic words are fully vocalised with the Tiberian Hebrew pointing system, providing more precise phonetic information about the scribe’s native Arabic dialect than could be expressed with standard Arabic vowel signs. This pointing reveals linguistic features known from modern varieties of vernacular Arabic, including a conditional tendency to raise /a/ to /e/ and a reflex of ǧīm as /g/. The manuscript can be dated between the tenth and twelfth centuries, making it an important source for the history of spoken medieval Arabic and Middle Arabic writing.


2020 · Vol 4 (2) · pp. 135-146
Author(s): Shengyu Zhu

The relationship between characters and words, which can be divided into two levels, the relationship among characters and the relationship among words, is one of the core problems in Chinese philology. In Oracle Bone Inscriptions (OBIs) these relationships are very complicated because of the diverse configurations of the graphs and the uncertainty of their phonetic information, which makes them difficult to study, including the question of whether characters are variants of one another or carry distinct meanings. OBIs also have a particularity of their own, namely a high degree of pictography. When studying the relationship between characters and words in OBIs, we are limited to the actual divination materials and to exploring the design intentions behind the characters. In this regard, relevant theories and methods from cognitive linguistics provide a useful point of reference. In terms of characters, the graphemes in the lao 牢-set, xian 陷-set, zhu 逐-set, kan 坎-set, and che 车-set are visually distinct, but in terms of language they are variants representing a single word.


2020 · Vol 10 (1) · pp. 39
Author(s): Tineke M. Snijders, Titia Benders, Paula Fikkert

Children’s songs are omnipresent and highly attractive stimuli in infants’ input. Previous work suggests that infants process linguistic–phonetic information from simplified sung melodies. The present study investigated whether infants learn words from ecologically valid children’s songs. Testing 40 Dutch-learning 10-month-olds in a familiarization-then-test electroencephalography (EEG) paradigm, this study asked whether infants can segment repeated target words embedded in songs during familiarization and subsequently recognize those words in continuous speech in the test phase. To replicate previous speech work and compare segmentation across modalities, infants participated in both song and speech sessions. Results showed a positive event-related potential (ERP) familiarity effect to the final compared to the first target occurrences during both song and speech familiarization. No evidence was found for word recognition in the test phase following either song or speech. Comparisons across the stimuli of the present and a comparable previous study suggested that acoustic prominence and speech rate may have contributed to the polarity of the ERP familiarity effect and its absence in the test phase. Overall, the present study provides evidence that 10-month-old infants can segment words embedded in songs, and it raises questions about the acoustic and other factors that enable or hinder infant word segmentation from songs and speech.
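As a toy illustration of the comparison the study reports, the following NumPy sketch computes a "familiarity effect" as the difference between the average ERP to final versus first occurrences of the target words. The epoch shapes and the simple mean-over-items measure are assumptions; the actual analysis involved proper EEG preprocessing and statistics.

```python
# Toy NumPy sketch: familiarity effect = average ERP to final target occurrences
# minus average ERP to first occurrences. All shapes and data are stand-ins.
import numpy as np

rng = np.random.default_rng(0)
n_items, n_channels, n_times = 30, 32, 300      # items x EEG channels x samples

first_epochs = rng.standard_normal((n_items, n_channels, n_times))
final_epochs = rng.standard_normal((n_items, n_channels, n_times)) + 0.1  # toy "effect"

# average across items to get one ERP per condition, then take the difference
erp_first = first_epochs.mean(axis=0)           # (n_channels, n_times)
erp_final = final_epochs.mean(axis=0)
familiarity_effect = erp_final - erp_first      # positive values = familiarity effect

print(familiarity_effect.shape)                 # (32, 300)
```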

