Speakers exhibit a multimodal Lombard effect in noise

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
James Trujillo ◽  
Asli Özyürek ◽  
Judith Holler ◽  
Linda Drijvers

Abstract: In everyday conversation, we are often challenged with communicating in non-ideal settings, such as in noise. Increased speech intensity and larger mouth movements are used to overcome noise in constrained settings (the Lombard effect). How we adapt to noise in face-to-face interaction, the natural environment of human language use, where manual gestures are ubiquitous, is currently unknown. We asked Dutch adults to wear headphones with varying levels of multi-talker babble while attempting to communicate action verbs to one another. Using quantitative motion capture and acoustic analyses, we found that (1) noise is associated with increased speech intensity and enhanced gesture kinematics and mouth movements, and (2) acoustic modulation only occurs when gestures are not present, while kinematic modulation occurs regardless of co-occurring speech. Thus, in face-to-face encounters the Lombard effect is not constrained to speech but is a multimodal phenomenon where the visual channel carries most of the communicative burden.
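A minimal sketch of how the two headline measures might be computed, purely for illustration: RMS speech intensity in decibels and peak hand speed from motion-capture positions. The synthetic signals, sampling rates, and array shapes below are assumptions, not the authors' data or analysis pipeline.

```python
# Illustrative sketch only: RMS speech intensity (dB) and peak hand speed,
# computed from synthetic stand-ins for an audio recording and a motion-capture
# trace. Sampling rates, array shapes, and noise labels are assumptions.
import numpy as np

def speech_intensity_db(samples):
    """RMS intensity of a mono signal in dB relative to full scale."""
    samples = np.asarray(samples, dtype=float)
    rms = np.sqrt(np.mean(samples ** 2))
    return 20 * np.log10(rms + 1e-12)

def peak_hand_speed(xyz, fps=120.0):
    """Peak hand speed (units per second) from an (n_frames, 3) position array."""
    velocities = np.diff(xyz, axis=0) * fps           # frame-to-frame velocity
    return np.linalg.norm(velocities, axis=1).max()   # speed magnitude, then peak

rng = np.random.default_rng(0)
trials = {
    "quiet babble": (0.05 * rng.standard_normal(48_000), rng.random((600, 3))),
    "loud babble":  (0.20 * rng.standard_normal(48_000), 2.0 * rng.random((600, 3))),
}
for label, (audio, mocap) in trials.items():
    print(f"{label}: {speech_intensity_db(audio):.1f} dB RMS, "
          f"peak hand speed {peak_hand_speed(mocap):.2f} units/s")
```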

2020 ◽  
Author(s):  
James Trujillo ◽  
Asli Özyürek ◽  
Judith Holler ◽  
Linda Drijvers

In everyday conversation, we are often challenged with communicating in non-ideal settings, such as in noise. Increased speech intensity and larger mouth movements are used to overcome noise in constrained settings (the Lombard effect). How we adapt to noise in face-to-face interaction, the natural environment of human language use, where manual gestures are ubiquitous, is currently unknown. We asked Dutch adults to wear headphones with varying levels of multi-talker babble while attempting to communicate action verbs to one another. Using quantitative motion capture and acoustic analyses, we found that 1) noise is associated with increased speech intensity and enhanced gesture kinematics, and 2) acoustic modulation of the speech signal only occurs when gestures are not present, while gesture kinematic modulation occurs regardless of co-occurring speech. Thus, in face-to-face encounters the Lombard effect is not constrained to speech but is a multimodal phenomenon where gestures carry most of the communicative burden.


1992 ◽  
Vol 11 (1) ◽  
pp. 35-52 ◽  
Author(s):  
James N. Schubert ◽  
Steven A. Peterson ◽  
Glendon Schubert ◽  
Stephen Wasby

Supreme Court oral argument (OA) is one of many face-to-face settings of political interaction. This article describes a methodology for the systematic observation and measurement of behavior in OA developed in a study of over 300 randomly selected cases from the 1969-1981 terms of the U.S. Supreme Court. Five sources of observation are integrated into the OA database at the speaking turn level of analysis: the actual text of verbal behavior; categorical behavior codes; aspects of language use and speech behavior events; electro-acoustical measurement of voice quality; and content analysis of subject matter. Preliminary data are presented to illustrate the methodology and its application to theoretical concerns of the research project.
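To make the speaking-turn level of analysis concrete, the following is a hypothetical sketch of a record joining the five observation sources; the field names, types, and example values are assumptions rather than the project's actual database schema.

```python
# Hypothetical speaking-turn record joining the five observation sources the
# article describes; field names, types, and example values are assumptions,
# not the project's actual database schema.
from dataclasses import dataclass, field

@dataclass
class SpeakingTurn:
    case_id: str                                          # argued case identifier
    turn_index: int                                       # position of the turn within oral argument
    speaker: str                                          # justice or counsel holding the floor
    text: str                                             # (1) verbatim text of the verbal behavior
    behavior_codes: list = field(default_factory=list)    # (2) categorical behavior codes
    speech_events: list = field(default_factory=list)     # (3) language-use / speech behavior events
    voice_quality: dict = field(default_factory=dict)     # (4) electro-acoustical voice measures
    subject_codes: list = field(default_factory=list)     # (5) content-analysis subject codes

turn = SpeakingTurn(
    case_id="example-case", turn_index=12, speaker="counsel",
    text="May it please the Court ...",
    behavior_codes=["assertion"], speech_events=["interruption"],
    voice_quality={"f0_mean_hz": 118.0, "intensity_db": 62.5},
    subject_codes=["jurisdiction"],
)
print(turn.speaker, turn.voice_quality["intensity_db"])
```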


2020 ◽  
Vol 2 (2) ◽  
pp. 174-198
Author(s):  
Xiaofei Tang

Abstract: Recent research on Task-Based Language Teaching (TBLT) has shown the efficacy of using computer-mediated communication (CMC) to promote second language (L2) learning (Ziegler, 2016). However, few studies have compared the interactional sequences during task-based interaction across different modalities (e.g., oral and written chat). It is thus not clear how different task modalities mediate task-based interaction and L2 learning opportunities. To fill this gap, this study compared CMC written chat and face-to-face (FTF) oral chat for interactional sequences during decision-making tasks. Participants were 20 learners of Chinese (high-elementary to intermediate level) at a U.S. university. Ten participants completed the tasks in CMC, while the other 10 completed the same tasks in FTF. The interaction data were analyzed for the frequency and patterns of interactional strategies. Three types of interactional sequences emerged in both groups: orientating to tasks, suggesting actions, and evaluating suggestions. CMC participants suggested actions more frequently than FTF participants. While both groups predominantly agreed with proposed suggestions, CMC dyads expressed disagreement three times more often than FTF dyads. CMC dyads also used more utterances to manage task progress. Findings are discussed in terms of the interactional organizations and their potential influence on task-based language use in different modalities.
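As a rough illustration of the frequency comparison described above, the sketch below tallies coded sequence types per group; the coded turns are invented examples, not the study's data or coding scheme.

```python
# Invented coded turns, tallied per group for the three sequence types reported
# in the study; these counts are for illustration only, not the study's data.
from collections import Counter

coded_sequences = {
    "CMC": ["suggesting", "evaluating", "suggesting", "orientating", "suggesting"],
    "FTF": ["orientating", "suggesting", "evaluating", "evaluating"],
}

for group, codes in coded_sequences.items():
    counts = Counter(codes)
    total = sum(counts.values())
    summary = ", ".join(f"{seq}: {n} ({n / total:.0%})"
                        for seq, n in counts.most_common())
    print(f"{group}: {summary}")
```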


Pragmatics ◽  
1994 ◽  
Vol 4 (2) ◽  
pp. 139-181 ◽  
Author(s):  
Andrew Goatly

The argument I wish to advance in this paper is that Gricean theory (Grice 1968, 1969, 1975, 1978, 1981) and, in particular, the potentially useful relevance theory which developed from it (Sperber & Wilson 1986), are flawed through their failure to consider cultural and social context; but that attempts to relate linguistic pragmatics to more socially-conscious models of language use, such as register/genre theory (Ure and Ellis 1977; Halliday 1978; Gregory and Carroll 1978; Ghadessy 1988, 1993; Swales 1988; Martin 1985, 1992 etc.) may produce interesting cross-fertilization and be beneficial to both. This essay falls into three sections. The first is a brief introductory critique of Grice's theory as an asocial idealized construct. The second section brings relevance theory and genre/register theory face to face and under the spotlight, hoping to reveal the weaknesses of each and show how, theoretically, they could compensate for and complement each other. In the third section I consider the case of metaphor, arguing that, and demonstrating how, the account of metaphor provided in Relevance: Communication and Cognition can be supplemented in practice by considering the kinds of register/genre in which metaphors find expression.


1999 ◽  
Vol 7 (1) ◽  
pp. 35-63 ◽  
Author(s):  
Marianne Gullberg ◽  
Kenneth Holmqvist

Since listeners usually look at the speaker's face, gestural information has to be absorbed through peripheral visual perception. In the literature, it has been suggested that listeners look at gestures under certain circumstances: 1) when the articulation of the gesture is peripheral; 2) when the speech channel is insufficient for comprehension; and 3) when the speaker him- or herself indicates that the gesture is worthy of attention. The research reported here employs eye-tracking techniques to study the perception of gestures in face-to-face interaction. The improved control over the listener's visual channel allows us to test the validity of the above claims. We present preliminary findings substantiating claims 1 and 3, and relate them to theoretical proposals in the literature and to the issue of how visual and cognitive attention are related.
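A hedged sketch of how fixations on the speaker's face versus the gesture space can be quantified from eye-tracking data, using rectangular areas of interest; the coordinates, AOI boundaries, and fixation samples below are assumptions for illustration.

```python
# Classify hypothetical fixations into rectangular areas of interest (AOIs) for
# the speaker's face and gesture space, then report dwell time per region.
# Coordinates, AOI boundaries, and fixation samples are assumptions.
from dataclasses import dataclass

@dataclass
class AOI:
    name: str
    x_min: float
    y_min: float
    x_max: float
    y_max: float

    def contains(self, x, y):
        return self.x_min <= x <= self.x_max and self.y_min <= y <= self.y_max

aois = [AOI("face", 400, 50, 560, 210), AOI("gesture_space", 300, 250, 660, 500)]

# Hypothetical fixations: (x, y, duration in ms).
fixations = [(470, 120, 310), (500, 380, 140), (480, 100, 620), (100, 100, 90)]

dwell = {aoi.name: 0 for aoi in aois}
dwell["elsewhere"] = 0
for x, y, duration in fixations:
    target = next((a.name for a in aois if a.contains(x, y)), "elsewhere")
    dwell[target] += duration

total = sum(dwell.values())
for region, ms in dwell.items():
    print(f"{region}: {ms} ms ({ms / total:.0%} of fixation time)")
```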


2021 ◽  
pp. 19-42

This chapter examines the development of ideologies about rapport within anthropology over the last ninety years. It traces rapport’s relationship with movements in anthropological thought: from observation to participation, from homogenization to a focus on diversity, from the denial of coevalness to the celebration of coevalness, from informant to co-participant, and from denotational to conational meaning. In doing so, it points to how this development has privileged ideas about positive social relations in fieldwork encounters. The chapter argues that imitations of Bronislaw Malinowski’s ideas have helped construct an anthropological folk term, rapport, which was semiotically configured to include co-presence, situated language use, and warm-fuzzy social relations, while erasing much of what goes on in face-to-face encounters. This erasure, which left out the mediated nature of many such encounters and the contexts in which they are embedded, inadvertently helped produce a focus on denotational meaning in a discipline that was all about conational meaning, that is, context.


1976 ◽  
Vol 42 (3) ◽  
pp. 879-917 ◽  
Author(s):  
Gerald D. Weeks ◽  
Alphonse Chapanis

Forty-eight two-person teams communicated through channels simulating various modes of telecommunication (teletypewriter, telephone, and closed-circuit television) and, as a control, face-to-face conversation. Each team was required to solve one of four problems. Two cooperative problems, a class scheduling and a geographic orientation problem, required the mutual exchange of factual information to reach the unique problem solution. Two conflictive problems, an issue ranking and a budget negotiation problem, were formulated to engender contention between the two team members. Performance was assessed on three classes of dependent measures: time to solution, behavioral measures of activity, and measures of verbal productivity. Additionally, the protocols and outcomes of the conflictive problem-solving sessions were examined to arrive at a measure of the degree of persuasion exhibited by the two communicators. For both kinds of problem solving, there was a sharp dichotomy in performance, on all three classes of dependent measures, between the teletypewriter mode and the other three modes, all of which had a voice channel. Solutions to all problems in the voice modes were much faster but at the same time far more verbose than those in the teletypewriter mode. The addition of a visual channel to a voice mode does not appreciably decrease solution times, nor does it matter whether the visual channel is “live,” that is, face-to-face, or mediated by a closed-circuit television system. For the most part, mode effects were robust and held for all problems. The characteristics of the several modes of communication were largely independent of the kind of task assigned to the teams of subjects.


2015 ◽  
Vol 7 (4) ◽  
pp. 485-498 ◽  
Author(s):  
ELISABETH ZIMA ◽  
GEERT BRÔNE

Abstract: Usage-based theories hold that the sole resource for language users’ linguistic systems is language use (Barlow & Kemmer, 2000; Langacker, 1988; Tomasello, 1999, 2003). Researchers working in the usage-based paradigm, which is often equated with cognitive-functional linguistics (e.g., Ibbotson, 2013; Tomasello, 2003), seem to widely agree that the primary setting for language use is interaction, with spontaneous face-to-face interaction playing a primordial role (e.g., Bybee, 2010; Clark, 1996; Geeraerts & Cuyckens, 2007; Langacker, 2008; Oakley & Hougaard, 2008; Zlatev, 2014). It should, then, follow not only that usage-based models of language are compatible with evidence from communication research but also that they are intrinsically grounded in authentic, multi-party language use in all its diversity and complexity. This should be a logical consequence, as a usage-based understanding of language processing and human sense-making cannot be separated from the study of interaction. However, the overwhelming majority of the literature in Cognitive Linguistics (CL) does not deal with the analysis of dialogic data or with issues of interactional conceptualization. It is our firm belief that this is at odds with the interactional foundation of the usage-based hypothesis. Furthermore, we are convinced that an ‘interactional turn’ is not only essential to the credibility and further development of Cognitive Linguistics as a theory of language and cognition. CL-inspired perspectives on interactional language use may also provide insights that other, non-cognitive approaches to discourse and interaction are bound to overlook. To that end, this special issue brings together four contributions that involve the analysis of interactional discourse phenomena by drawing on tools and methods from the broad field of Cognitive Linguistics.


2017 ◽  
Vol 3 (s1) ◽  
Author(s):  
Elisabeth Zima ◽  
Alexander Bergs

Abstract: The meaning-making process in face-to-face interaction relies on the integration of meaningful information conveyed by speech as well as by tone of voice, facial expressions, hand and head gestures, body postures and movements (McNeill 1992; Kendon 2004). Hence, it is inherently multimodal. Usage-based linguistics attributes a fundamental role to language use in linguistic theorizing by positing that the language system is grounded in and abstracted from (multimodal) language use. However, despite this inherent epistemological link, usage-based linguists have hitherto conceptualized language as a system of interconnected verbal, i.e. monomodal, units, leaving nonverbal usage aspects and the question of their potential entrenchment as part of language largely out of the picture. This is – at least at first sight – surprising because the usage-based model of Construction Grammar (C × G) seems particularly well-equipped to unite the natural interest of linguists in the units that define language systems with the multimodality of language use. Constructions are conceptualized as holistic “conventionalized clusters of features (syntactic, prosodic, pragmatic, semantic, textual, etc.) that recur as further indivisible associations between form and meaning” (Fried 2015: 974). Given its conceptual openness to all levels of usage features, several studies have recently argued for the need to open up the current focus of C × G towards kinesic recurrences (Günthner & Imo 2006; Deppermann 2011; Deppermann & Proske 2015; Andrén 2010; Schoonjans 2014; Schoonjans et al. 2015; Steen & Turner 2013; Zima 2014a; Zima 2014b, in press; Cienki 2012; Cienki 2015; Mittelberg 2014; Müller & Bressem 2014; Bergs 2015; Valenzuela 2015). Departing from the usage-based foundation of C × G, which takes “grammar to be the cognitive organization of one’s experience with language” (Bybee 2006: 219), these studies suggest that the basic units of language, i.e. constructions, may be multimodal in nature. This paper presents some of the current issues for a Multimodal Construction Grammar. The aim is to frame the debate and to briefly summarize some of the discussion’s key issues. The individual papers in the special issue elaborate in more detail on particular points of discussion and/or present empirical case studies.


2015 ◽  
Vol 7 (4) ◽  
pp. 546-562 ◽  
Author(s):  
BERT OBEN ◽  
GEERT BRÔNE

Abstract: Interactive language use inherently involves a process of coordination, which often leads to matching behaviour between interlocutors in different semiotic channels. We study this process of interactive alignment from a multimodal perspective: using data from head-mounted eye-trackers in a corpus of face-to-face conversations, we measure what effect gaze fixations by speakers (on their own gestures, condition 1) and fixations by interlocutors (on the gestures of those speakers, condition 2) have on subsequent gesture production by those interlocutors. The results show a significant effect of interlocutor gaze (condition 2), but not of speaker gaze (condition 1), on the amount of gestural alignment, with an interaction between the conditions.
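The sketch below illustrates, with invented records, how alignment rates might be compared for gestures the interlocutor did or did not fixate; it is a simplification for exposition, not the authors' corpus coding or statistical model.

```python
# Invented gesture records: (speaker fixated own gesture, interlocutor fixated
# the gesture, interlocutor later produced a matching gesture). The comparison
# below is a simplification of the corpus analysis, not the authors' model.
from collections import defaultdict

gesture_events = [
    (False, True, True),
    (True, False, False),
    (False, True, True),
    (False, False, False),
    (True, True, True),
    (False, False, True),
]

counts = defaultdict(lambda: [0, 0])        # condition -> [aligned, total]
for _speaker_gaze, interlocutor_gaze, aligned in gesture_events:
    key = "interlocutor fixated gesture" if interlocutor_gaze else "gesture not fixated"
    counts[key][0] += int(aligned)
    counts[key][1] += 1

for condition, (aligned, total) in counts.items():
    print(f"{condition}: {aligned}/{total} aligned ({aligned / total:.0%})")
```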

