The origin of human multi-modal communication

2014
Vol 369 (1651)
pp. 20130302
Author(s):
Stephen C. Levinson
Judith Holler

One reason for the apparent gulf between animal and human communication systems is that the focus has been on the presence or the absence of language as a complex expressive system built on speech. But language normally occurs embedded within an interactional exchange of multi-modal signals. If this larger perspective takes central focus, then it becomes apparent that human communication has a layered structure, where the layers may be plausibly assigned different phylogenetic and evolutionary origins—especially in the light of recent thoughts on the emergence of voluntary breathing and spoken language. This perspective helps us to appreciate the different roles that the different modalities play in human communication, as well as how they function as one integrated system despite their different roles and origins. It also offers possibilities for reconciling the ‘gesture-first hypothesis’ with that of gesture and speech having evolved together, hand in hand—or hand in mouth, rather—as one system.

2014
Vol 22 (2)
pp. 244-263
Author(s):
Nicolas Fay
Mark Ellison
Simon Garrod

This paper explores the role of iconicity in spoken language and other human communication systems. First, we concentrate on graphical and gestural communication and show how semantically motivated iconic signs play an important role in creating such communication systems from scratch. We then consider how iconic signs tend to become simplified and symbolic as the communication system matures and argue that this process is driven by repeated interactive use of the signs. Next, we examine evidence for iconicity at the level of the system in graphical communication, and finally we draw comparisons between iconicity in graphical and gestural communication systems and in spoken language.


2017
Vol 18 (3)
pp. 314-329
Author(s):
Casey J. Lister
Nicolas Fay

Following a synthesis of naturalistic and experimental studies of language creation, we propose a theoretical model of the process through which the earliest human communication systems might have arisen and evolved. Three key processes give rise to effective, efficient, and shared human communication systems: (1) motivated signs that directly resemble their meaning facilitate cognitive alignment, improving communication success; (2) behavioral alignment onto an inventory of shared sign-to-meaning mappings bolsters cognitive alignment between interacting partners; and (3) sign refinement, through interactive feedback, enhances the efficiency of the evolving communication system. Importantly, because the model is not bound to a single modality, it can describe the creation of shared sign systems across a range of contexts, informing theories of language creation and evolution.


Author(s):  
Rui P. Chaves
Michael T. Putnam

This book is about one of the most intriguing features of human communication systems: the fact that words which go together in meaning can occur arbitrarily far away from each other. The kind of long-distance dependency that this volume is concerned with has been the subject of intense linguistic and psycholinguistic research for the last half century, and offers a unique insight into the nature of grammatical structures and their interaction with cognition. The constructions in which these unbounded dependencies arise are difficult to model and come with a rather puzzling array of constraints that have defied characterization and proper explanation. For example, there are filler-gap dependencies in which the filler phrase is a plural phrase formed from the combination of each of the extracted phrases, and there are filler-gap constructions in which the filler phrase itself contains a gap that is linked to another filler phrase. What is more, different types of filler-gap dependency can compound in the same sentence. Conversely, not all kinds of filler-gap dependencies are equally licit; some are robustly ruled out by the grammar, whereas others have a less clear status because they show graded acceptability and can be made to improve in ideal contexts and conditions. This work provides a detailed survey of these linguistic phenomena and extant accounts, while also incorporating new experimental evidence to shed light on why the phenomena are the way they are and on what important research on this topic lies ahead.


2007
Vol 8 (1)
pp. 159-175
Author(s):  
John L. Locke

It has long been asserted that the evolutionary path to spoken language was paved by manual–gestural behaviors, a claim that has been revitalized in response to recent research on mirror neurons. Renewed interest in the relationship between manual and vocal behavior draws attention to its development. Here, the pointing and vocalization of 16.5-month-old infants are reported as a function of the context in which they occurred. When infants operated in a referential mode, the frequency of simultaneous vocalization and pointing exceeded the frequency of vocalization-only and pointing-only responses by a wide margin. In a non-communicative context, combinatorial effects persisted, but in weaker form. Manual–vocal signals thus appear to express the operation of an integrated system, arguably adaptive in the young from evolutionary times to the present. It was speculated, based on reported evidence, that manual behavior increases the frequency and complexity of vocal behaviors in modern infants. There may be merit in the claim that manual behavior facilitated the evolution of language because it helped make available, early in development, behaviors that under selection pressures in later ontogenetic stages elaborated into speech.


2020
pp. 026553222095150
Author(s):  
Aaron Olaf Batty

Nonverbal and other visual cues are well established as a critical component of human communication. Under most circumstances, visual information is available to aid in the comprehension and interpretation of spoken language. Citing these facts, many L2 assessment researchers have studied video-mediated listening tests through score comparisons with audio tests, by measuring the amount of time spent watching, and by attempting to determine examinee viewing behavior through self-reports. However, the specific visual cues to which examinees attend have heretofore not been measured objectively. The present research employs eye-tracking methodology to determine the amounts of time 12 participants viewed specific visual cues on a six-item, video-mediated L2 listening test. Seventy-two scanpath-overlaid videos of viewing behavior were manually coded for visual cues at 0.10-second intervals. Cued retrospective interviews based on eye-tracking data provided reasons for the observed behaviors. Faces were found to occupy the majority (81.74%) of visual dwell time, with participants largely splitting their time between the speaker’s eyes and mouth. Detected gesture viewing was negligible. The reason given for most viewing behavior was determining characters’ emotional states. These findings suggest that the primary difference between audio- and video-mediated L2 listening tests of conversational content is the absence or presence of facial expressions.
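To make the dwell-time figures concrete: coding a scanpath video at 0.10-second intervals yields a sequence of cue labels, and a cue's dwell-time share is simply its fraction of coded intervals. The minimal Python sketch below illustrates that arithmetic; the cue labels and data are invented for illustration and are not taken from Batty's actual coding scheme or materials.

```python
from collections import Counter

INTERVAL_SECONDS = 0.10  # coding granularity reported in the study

# Hypothetical coded intervals for one participant on one video clip;
# each entry is the visual cue coded for one 0.10-second interval.
coded_intervals = [
    "speaker_eyes", "speaker_eyes", "speaker_mouth",
    "speaker_mouth", "gesture", "other",
]

def dwell_time_shares(intervals):
    """Return each cue's share of total dwell time as a percentage."""
    counts = Counter(intervals)
    total = sum(counts.values())
    return {cue: 100.0 * n / total for cue, n in counts.items()}

for cue, share in sorted(dwell_time_shares(coded_intervals).items()):
    seconds = coded_intervals.count(cue) * INTERVAL_SECONDS
    print(f"{cue}: {share:.2f}% ({seconds:.1f} s)")
```

Summing shares of this kind over all participants and clips is one straightforward way to arrive at aggregate figures such as the 81.74% face dwell time reported above.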


1996
Vol 1 (1)
pp. 121-130
Author(s):  
Henry S. Thompson

An overview is given of work on the creation, collection, preparation, and publication of electronic corpora of written and spoken language undertaken at the Human Communication Research Centre at the Universities of Edinburgh and Glasgow. Four major efforts are described: the HCRC Map Task Corpus, the ECI/MC1, the MLCC project, and work on document architectures and processing regimes for SGML-encoded corpora.


2010
Vol 34 (3)
pp. 351-386
Author(s):
Nicolas Fay
Simon Garrod
Leo Roberts
Nik Swoboda

2021
Vol 12
Author(s):  
Irene M. Pepperberg

Deciphering nonhuman communication – particularly nonhuman vocal communication – has been a longstanding human quest. We are, for example, fascinated by the songs of birds and whales, the grunts of apes, the barks of dogs, and the croaks of frogs; we wonder about their potential meaning and their relationship to human language. Do these utterances express little more than emotional states, or do they convey actual bits and bytes of concrete information? Humans’ numerous attempts to decipher nonhuman systems have, however, progressed slowly. We still wonder why only a small number of species are capable of vocal learning, a trait that, because it allows for innovation and adaptation, would seem to be a prerequisite for most language-like abilities. Humans have also attempted to teach nonhumans elements of our system, using both vocal and nonvocal systems. The rationale for such training is that the extent of success in instilling symbolic reference provides some evidence for, at the very least, the cognitive underpinnings of parallels between human and nonhuman communication systems. However, separating acquisition of reference from simple object-label association is not a simple matter, as reference begins with such associations, and the point at which true reference emerges is not always obvious. I begin by discussing these points and questions, predominantly from the viewpoint of someone studying avian abilities. I end by examining the question posed by Premack: do nonhumans that have achieved some level of symbolic reference then process information differently from those that have not? I suggest the answer is likely “yes,” giving examples from my research on Grey parrots (Psittacus erithacus).

