Phonetic convergence during conversational interaction and speech shadowing

2015 ◽  
Vol 137 (4) ◽  
pp. 2417-2417
Author(s):  
Jennifer Pardo ◽  
Adelya Urmanche ◽  
Sherilyn Wilman ◽  
Jaclyn Wiener ◽  
Hannah Gash ◽  
...  

2018 ◽  
Vol 69 ◽  
pp. 1-11 ◽  
Author(s):  
Jennifer S. Pardo ◽  
Adelya Urmanche ◽  
Sherilyn Wilman ◽  
Jaclyn Wiener ◽  
Nicholas Mason ◽  
...  

Phonetica ◽  
2021 ◽  
Vol 78 (1) ◽  
pp. 95-112
Author(s):  
Telma Dias dos Santos ◽  
Jennifer S. Pardo ◽  
Tim Bressmann

Abstract: Background: Phonetic accommodation is observed when interacting speakers gradually converge (or diverge) on phonetic features over the course of a conversation. The present experiment investigated whether gradual changes in the nasal signal levels of a pre-recorded model speaker would lead to accommodation in the nasalance scores of the interlocutor in a speech-shadowing experiment. Methods: Twenty female speakers in two groups repeated sentences after a pre-recorded model speaker whose nasal signal level was gradually increased or decreased over the course of the experiment. Outcome measures were the mean nasalance scores at the initial baseline, maximum nasal signal level, minimum nasal signal level, and final baseline conditions. The order of presentation of the maximum and minimum nasal signal levels was varied between the two groups. Results: The results showed a significant effect of condition (F(3) = 2.86, p = 0.045). Both groups of participants demonstrated lower nasalance scores in response to increased nasal signal levels in the model (phonetic divergence). The group that was first presented with the maximum nasal signal levels demonstrated lower nasalance scores for the minimum nasal signal level condition (phonetic convergence). Conclusion: Speakers showed a consistent divergent reaction to a more nasal-sounding model speaker, but their response to a less nasal-sounding model may depend on the order of presentation of the manipulations. More research is needed to investigate the effects of increased versus decreased nasality in the speech of an interlocutor.
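The abstract reports only the omnibus test (F(3) = 2.86, p = 0.045). As a hedged illustration of how a condition effect on nasalance scores can be tested, the sketch below runs a one-way repeated-measures ANOVA with statsmodels on synthetic placeholder data; the column names (speaker, condition, nasalance) and the balanced one-score-per-condition layout are assumptions made for the example, not the authors' analysis pipeline.

```python
# Illustrative sketch (not the authors' analysis script): a within-subject
# test of a condition effect on nasalance, assuming one mean score per
# speaker per condition. The data below are synthetic placeholders.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
conditions = ["baseline_initial", "nasal_max", "nasal_min", "baseline_final"]
speakers = [f"S{i:02d}" for i in range(1, 21)]   # 20 speakers, as in the study design
rows = [
    {"speaker": s, "condition": c,
     # synthetic nasalance scores (percent); real values would come from a nasometer
     "nasalance": rng.normal(loc=50.0, scale=5.0)}
    for s in speakers for c in conditions
]
df = pd.DataFrame(rows)

# One-way repeated-measures ANOVA: does mean nasalance differ across conditions?
result = AnovaRM(data=df, depvar="nasalance", subject="speaker",
                 within=["condition"]).fit()
print(result.anova_table)   # reports F, num/den df, and p-value for 'condition'
```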


Author(s):  
Ashley Pozzolo Coote ◽  
Jane Pimentel

Purpose: Development of valid and reliable outcome tools to document social approaches to aphasia therapy and to determine best practice is imperative. The aim of this study is to determine whether the Conversational Interaction Coding Form (CICF; Pimentel & Algeo, 2009) can be applied reliably to the natural conversation of individuals with aphasia in a group setting. Method: Eleven graduate students participated in this study. During a 90-minute training session, participants reviewed and practiced coding with the CICF. Then participants independently completed the CICF using video recordings of individuals with non-fluent and fluent aphasia participating in an aphasia group. Interobserver reliability was computed using matrices representative of the point-to-point agreement or disagreement between each participant's coding and the authors' coding for each measure. Interobserver reliability was defined as 80% or better agreement for each measure. Results: On the whole, the CICF was not applied reliably to the natural conversation of individuals with aphasia in a group setting. Conclusion: In an extensive review of the turns that had high disagreement across participants, the poor reliability was attributed to inadequate rules and definitions and inexperienced coders. Further research is needed to improve the reliability of this potentially useful clinical tool.
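Because the reliability criterion in the study is point-to-point percent agreement with an 80% cutoff, the following sketch shows one way such a comparison could be computed; the turn labels and the helper name point_to_point_agreement are hypothetical and are not part of the CICF materials.

```python
# Illustrative sketch (not the CICF scoring procedure): point-to-point
# percent agreement between one coder's labels and a reference coding,
# with the 80% criterion described in the study. Labels are hypothetical.
from typing import Sequence

def point_to_point_agreement(coder: Sequence[str], reference: Sequence[str]) -> float:
    """Return percent agreement across aligned coding decisions (turns)."""
    if len(coder) != len(reference):
        raise ValueError("Codings must cover the same number of turns.")
    matches = sum(c == r for c, r in zip(coder, reference))
    return 100.0 * matches / len(reference)

# Hypothetical turn-by-turn codes for one CICF measure
reference = ["initiation", "response", "response", "initiation", "response"]
coder     = ["initiation", "response", "initiation", "initiation", "response"]

agreement = point_to_point_agreement(coder, reference)
print(f"Agreement: {agreement:.1f}%")                        # 80.0%
print("Reliable" if agreement >= 80.0 else "Not reliable")   # 80% criterion
```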


2021 ◽  
Vol 11 (8) ◽  
pp. 996
Author(s):  
James P. Trujillo ◽  
Judith Holler

During natural conversation, people must quickly understand the meaning of what the other speaker is saying. This concerns not just the semantic content of an utterance, but also the social action (i.e., what the utterance is doing—requesting information, offering, evaluating, checking mutual understanding, etc.) that the utterance is performing. The multimodal nature of human language raises the question of whether visual signals may contribute to the rapid processing of such social actions. However, while previous research has shown that how we move reveals the intentions underlying instrumental actions, we do not know whether the intentions underlying fine-grained social actions in conversation are also revealed in our bodily movements. Using a corpus of dyadic conversations combined with manual annotation and motion tracking, we analyzed the kinematics of the torso, head, and hands during the asking of questions. Manual annotation categorized these questions into six more fine-grained social action types (i.e., request for information, other-initiated repair, understanding check, stance or sentiment, self-directed, active participation). We demonstrate, for the first time, that the kinematics of the torso, head, and hands differ between some of these social action categories based on a 900 ms time window that captures movements starting slightly prior to or within 600 ms after utterance onset. These results provide novel insights into the extent to which our intentions shape the way that we move, and provide new avenues for understanding how this phenomenon may facilitate the fast communication of meaning in conversational interaction, social action, and conversation.
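To make the kinematic analysis concrete, here is a hedged sketch of how the peak speed of a tracked articulator could be extracted inside a 900 ms window around utterance onset; the window bounds (300 ms before to 600 ms after onset), the 100 Hz sampling rate, and the synthetic hand trajectory are assumptions for illustration, not the authors' pipeline.

```python
# Illustrative sketch (not the authors' motion-tracking pipeline): peak speed
# of a tracked joint inside an assumed 900 ms analysis window around utterance
# onset. The synthetic trajectory stands in for motion-capture data.
import numpy as np

def peak_speed(timestamps_s, positions_xyz, onset_s, pre_s=0.300, post_s=0.600):
    """Peak instantaneous speed (m/s) within [onset - pre_s, onset + post_s]."""
    t = np.asarray(timestamps_s)
    p = np.asarray(positions_xyz)                      # shape (n_samples, 3)
    speeds = np.linalg.norm(np.diff(p, axis=0), axis=1) / np.diff(t)
    mid_t = (t[:-1] + t[1:]) / 2                       # time of each speed sample
    in_window = (mid_t >= onset_s - pre_s) & (mid_t <= onset_s + post_s)
    return float(speeds[in_window].max()) if in_window.any() else 0.0

# Synthetic 100 Hz hand trajectory around a question onset at t = 2.0 s
t = np.arange(0.0, 4.0, 0.01)
hand = np.column_stack([0.05 * np.sin(2 * np.pi * 0.8 * t),   # x: oscillation
                        np.zeros_like(t),                      # y: static
                        0.02 * t])                             # z: slow drift
print(f"Peak hand speed in window: {peak_speed(t, hand, onset_s=2.0):.3f} m/s")
```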


2012 ◽  
Vol 40 (1) ◽  
pp. 190-197 ◽  
Author(s):  
Jennifer S. Pardo ◽  
Rachel Gibbons ◽  
Alexandra Suppes ◽  
Robert M. Krauss

2014 ◽  
Vol 25 (9) ◽  
pp. 3219-3234 ◽  
Author(s):  
Sara Bögels ◽  
Dale J. Barr ◽  
Simon Garrod ◽  
Klaus Kessler

ReCALL ◽  
2009 ◽  
Vol 21 (3) ◽  
pp. 283-301 ◽  
Author(s):  
María Moreno Jaén ◽  
Carmen Pérez Basanta

Abstract: The argument for a pedagogy of input oriented learning for the development of speaking competence (Sharwood-Smith, 1986; Bardovi-Harlig and Salsbury, 2004; Eslami-Rasekh, 2005) has been of increasing interest in Applied Linguistics circles. It has also been argued that multimedia applications, in particular DVDs, provide language learners with multimodal representations that may help them ‘to gain broad access to oral communication both visually and auditory’ (Tschirner, 2001: 305). Thus this paper focuses on an exploratory study of teaching oral interaction through input processing by means of multimodal texts.

The paper is divided into a number of interconnected sections. First, we outline briefly what teaching conversation implies and examine the important role of oral comprehension in the development of conversational interaction. In fact, it has been suggested that effective speaking depends very much on successful understanding (Oprandy, 1994). In this paper we pay special attention to the crucial role of context in understanding oral interactions. Therefore, we outline the theory of context in English Language Teaching (ELT). The discussion draws on approaches to teaching conversation and it also offers a brief reflection about the need for materials which might convey the sociocultural and semiotic elements of oral communication through which meaning is created.

We then discuss the decisions taken to propose a new multimodal approach to teaching conversation from a three-fold perspective: (a) the selection of texts taken from films, and the benefits of using DVDs (digital versatile disc); (b) the development of a multimodal analysis of film clips for the design of activities; and (c) the promotion of a conversation awareness methodology through a bank of DVD clips to achieve an understanding of how native speakers actually go about the process of constructing oral interactions.

In sum, the main thrust of this paper is to pinpoint the advantages of using multimodal materials taken from DVDs, as they provide learners with broad access to oral communication, both visual and auditory, making classroom conditions similar to the target cultural environment (Tschirner, 2001).

