short utterances
Recently Published Documents


TOTAL DOCUMENTS: 92 (FIVE YEARS: 31)

H-INDEX: 9 (FIVE YEARS: 3)

2021 ◽  
Author(s):  
Mandana Fasounaki ◽  
Emirhan Burak Yuce ◽  
Serkan Oncul ◽  
Gokhan Ince

2021 ◽  
Vol 12 ◽  
Author(s):  
Peter A. Krause ◽  
Alan H. Kawamoto

In natural conversation, turns are handed off quickly, with the mean downtime commonly ranging from 7 to 423 ms. To achieve this, speakers plan their upcoming speech as their partner’s turn unfolds, holding the audible utterance in abeyance until socially appropriate. The role played by prediction is debated, with some researchers claiming that speakers predict upcoming speech opportunities, and others claiming that speakers wait for detection of turn-final cues. The dynamics of articulatory triggering may speak to this debate. It is often assumed that the prepared utterance is held in a response buffer and then initiated all at once. This assumption is consistent with standard phonetic models in which articulatory actions must follow tightly prescribed patterns of coordination. This assumption has recently been challenged by single-word production experiments in which participants partly positioned their articulators to anticipate upcoming utterances, long before starting the acoustic response. The present study considered whether similar anticipatory postures arise when speakers in conversation await their next opportunity to speak. We analyzed a pre-existing audiovisual database of dyads engaging in unstructured conversation. Video motion tracking was used to determine speakers’ lip areas over time. When utterance-initial syllables began with labial consonants or included rounded vowels, speakers produced distinctly smaller lip areas (compared to other utterances), prior to audible speech. This effect was moderated by the number of words in the upcoming utterance; postures arose up to 3,000 ms before acoustic onset for short utterances of 1–3 words. We discuss the implications for models of conversation and phonetic control.


2021 ◽  
Vol 1 (194) ◽  
pp. 222-225
Author(s):  
Tetiana Rybak ◽  
Inesa Lazarenko ◽  
Olena Svysiuk ◽  
...  

Teaching oral monological speech to students of non-linguistic higher educational institutions has always been a point of professional interest for methodologists, as it is a complicated practical task. Under the conditions of the COVID-19 pandemic, its implementation has become more difficult because immediate interaction between academics and students is impossible. Thus, a prerequisite for raising the effectiveness of instruction is the expedient application of web technologies. WebQuests are defined as an inquiry-oriented lesson format in which most or all of the information that learners work with comes from the web. WebQuests are projects that raise students' motivation and develop their critical and creative thinking, as well as their communicative and social skills. In turn, compilation WebQuests are defined as tasks in which students take information from a number of sources and put it into a common format (such as a cookbook or an exhibition booklet). The resulting texts are creative products that can be presented orally or in written form. Moreover, while doing compilation WebQuests, students read or listen to information, which makes them a universal method and means of developing communicative skills. The methodological prerequisites for developing students' oral monological skills by means of compilation WebQuests include the choice of the methodological approach, the instructional stages, exercises and aids. The current research employs the inductive approach to teaching oral monological speech and, consequently, the three stages it involves: creating short utterances on the topic, expanding them by means of verbal and non-verbal aids, and creating them independently.
The aids that can be used include texts, outlines, substitution tables, structural schemes, role cards, key cards, proverbs and sayings, functional models and cards, audio and video recordings, illustrations (photos, posters and caricatures), functional noises, etc. The article suggests assignments structured according to the stages described above and incorporating the aids mentioned, which can make up the basis of the Process and Resources sections of a compilation WebQuest.


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Cristina Jara ◽  
Cristóbal Moënne-Loccoz ◽  
Marcela Peña

Abstract: Before 6 months of age, infants succeed in learning words associated with objects and actions when the words are presented in isolation or embedded in short utterances. It remains unclear whether this type of learning occurs from fluent audiovisual stimuli, although in natural environments fluent audiovisual contexts are the default. In four experiments, we evaluated whether 8-month-old infants could learn word-action and word-object associations from fluent audiovisual streams when the words conveyed either vowel or consonant harmony, two phonological cues that benefit word learning near 6 and 12 months of age, respectively. We found that infants learned both types of words, but only when the words contained vowel harmony. Because object- and action-words have been conceived as rudimentary representations of nouns and verbs, our results suggest that vowels help shape the initial steps of learning lexical categories in preverbal infants.


IZUMI ◽  
2020 ◽  
Vol 9 (2) ◽  
pp. 186-199
Author(s):  
Iantika Humanjadna Dityandari ◽  
Bayu Aryanto

This study describes the forms of aizuchi in the TV series Inaka Ni Tomarou! and identifies the functions of aizuchi based on the conversational context. The research data are the forms of aizuchi used in a conversational context. This is a qualitative descriptive study. The researchers found six forms of aizuchi: short utterances, interjections, interjections combined with short utterances, repeated short utterances, repetition of the speech partner's utterance, and short utterances combined with repetition of the speech partner's utterance. As for function, aizuchi has seven functions: a continuer signal, an understanding signal, an approval signal, a signal indicating emotion, a confirmation signal, a rejection signal, and a filler signal.


2020 ◽  
Author(s):  
Aleksei Gusev ◽  
Vladimir Volokhov ◽  
Tseren Andzhukaev ◽  
Sergey Novoselov ◽  
Galina Lavrentyeva ◽  
...  

2020 ◽  
Author(s):  
Seung-bin Kim ◽  
Jee-weon Jung ◽  
Hye-jin Shim ◽  
Ju-ho Kim ◽  
Ha-Jin Yu

Author(s):  
O. Mamyrbayev ◽  
A. Akhmediyarov ◽  
A. Kydyrbekov ◽  
N. Mekebayev ◽  
...  

Text-independent speaker recognition from short utterances is a very difficult task due to the large variability and content mismatch between short utterances. To improve recognition of the user by voice, we identify several sets of distinguishing features that carry more voice-related information. The results show that the i-vector DNN system is superior to the GMM i-vector system across various durations. However, the performance of both systems deteriorates significantly as utterance duration decreases. To address this problem, we propose two new nonlinear mapping methods that train DNN models to map i-vectors extracted from short utterances to their corresponding long-utterance i-vectors.
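The mapping idea above can be sketched as a small network trained to regress long-utterance i-vectors from short-utterance ones. The following is a minimal, hypothetical illustration on synthetic data (the dimensions, layer sizes, and training details are assumptions for demonstration, not the authors' configuration):

```python
import numpy as np

rng = np.random.default_rng(0)

DIM = 64      # i-vector dimensionality (real systems often use 400-600)
HIDDEN = 128  # hidden-layer width (assumed)
N = 512       # number of paired short/long i-vectors

# Synthetic stand-in data: "long-utterance" i-vectors, and noisy
# nonlinearly-distorted "short-utterance" versions of the same vectors.
long_iv = rng.standard_normal((N, DIM))
short_iv = np.tanh(long_iv @ rng.standard_normal((DIM, DIM)) * 0.5) \
           + 0.3 * rng.standard_normal((N, DIM))

# One-hidden-layer MLP: short i-vector -> estimate of long i-vector.
W1 = rng.standard_normal((DIM, HIDDEN)) * 0.1
b1 = np.zeros(HIDDEN)
W2 = rng.standard_normal((HIDDEN, DIM)) * 0.1
b2 = np.zeros(DIM)

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return h, h @ W2 + b2

def mse(pred, target):
    return float(np.mean((pred - target) ** 2))

_, pred0 = forward(short_iv)
loss_before = mse(pred0, long_iv)

# Plain gradient descent on the mean-squared mapping error.
lr = 0.5
for _ in range(300):
    h, pred = forward(short_iv)
    grad_out = 2.0 * (pred - long_iv) / pred.size   # dMSE/dpred
    gW2 = h.T @ grad_out
    gb2 = grad_out.sum(axis=0)
    grad_h = (grad_out @ W2.T) * (1.0 - h ** 2)     # tanh derivative
    gW1 = short_iv.T @ grad_h
    gb1 = grad_h.sum(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, pred1 = forward(short_iv)
loss_after = mse(pred1, long_iv)
```

After training, the network's outputs for short-utterance i-vectors sit closer to the long-utterance targets than the raw inputs did, which is the intended compensation effect; in a real system the mapped i-vectors would then feed the usual scoring back end.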

