Electrophysiological and Kinematic Correlates of Communicative Intent in the Planning and Production of Pointing Gestures and Speech

2015 ◽  
Vol 27 (12) ◽  
pp. 2352-2368 ◽  
Author(s):  
David Peeters ◽  
Mingyuan Chu ◽  
Judith Holler ◽  
Peter Hagoort ◽  
Aslı Özyürek

In everyday human communication, we often express our communicative intentions by manually pointing out referents in the material world around us to an addressee, often in tight synchronization with referential speech. This study investigated whether and how the kinematic form of index finger pointing gestures is shaped by the gesturer's communicative intentions and how this is modulated by the presence of concurrently produced speech. Furthermore, we explored the neural mechanisms underpinning the planning of communicative pointing gestures and speech. Two experiments were carried out in which participants pointed at referents for an addressee while the informativeness of their gestures and speech was varied. Kinematic and electrophysiological data were recorded online. It was found that participants prolonged the duration of the stroke and poststroke hold phase of their gesture to be more communicative, in particular when the gesture was carrying the main informational burden in their multimodal utterance. Frontal and P300 effects in the ERPs suggested the importance of intentional and modality-independent attentional mechanisms during the planning phase of informative pointing gestures. These findings contribute to a better understanding of the complex interplay between action, attention, intention, and language in the production of pointing gestures, a communicative act core to human interaction.
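The kinematic measure at the heart of this study, the duration of the stroke and post-stroke hold phases, requires segmenting each pointing gesture from the motion-tracking record before durations can be compared across conditions. Below is a minimal, hypothetical sketch of one common way to do this by thresholding fingertip speed; the sampling rate, threshold value, and segmentation rule are illustrative assumptions and not the authors' actual pipeline.

```python
import numpy as np

def segment_pointing_gesture(positions, fs=200.0, speed_threshold=0.15):
    """Split a pointing gesture into preparation, stroke, and hold phases.

    positions : (n_samples, 3) array of fingertip coordinates in metres.
    fs        : sampling rate of the motion tracker in Hz (assumed value).
    speed_threshold : speed (m/s) separating movement from holding;
                      an illustrative value, tuned per setup in practice.
    Returns phase durations in seconds, assuming the recorded segment
    ends with the post-stroke hold (retraction trimmed beforehand).
    """
    # Instantaneous speed from frame-to-frame displacement.
    speed = np.linalg.norm(np.diff(positions, axis=0), axis=1) * fs
    moving = speed > speed_threshold

    move_idx = np.flatnonzero(moving)
    if move_idx.size == 0:
        return {"preparation": 0.0, "stroke": 0.0, "hold": 0.0}

    # Stroke approximated as the final contiguous run of above-threshold samples.
    breaks = np.flatnonzero(np.diff(move_idx) > 1)
    stroke_start = move_idx[breaks[-1] + 1] if breaks.size else move_idx[0]
    stroke_end = move_idx[-1]

    return {
        "preparation": stroke_start / fs,
        "stroke": (stroke_end - stroke_start + 1) / fs,
        "hold": (len(moving) - stroke_end - 1) / fs,  # post-stroke hold
    }
```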

Author(s):  
Nik Thompson ◽  
Tanya Jane McGill

This chapter discusses the domain of affective computing and reviews the area of affective tutoring systems: e-learning applications that possess the ability to detect and appropriately respond to the affective state of the learner. A significant proportion of human communication is non-verbal or implicit, and the communication of affective state provides valuable context and insights. Computers are for all intents and purposes blind to this form of communication, creating what has been described as an “affective gap.” Affective computing aims to eliminate this gap and to foster the development of a new generation of computer interfaces that emulate a more natural human-human interaction paradigm. The domain of learning is considered to be of particular note due to the complex interplay between emotions and learning. This is discussed in this chapter along with the need for new theories of learning that incorporate affect. Next, the more commonly applicable means for inferring affective state are identified and discussed. These can be broadly categorized into methods that involve the user’s input and methods that acquire the information independent of any user input. This latter category is of interest as these approaches have the potential for more natural and unobtrusive implementation, and it includes techniques such as analysis of vocal patterns, facial expressions, and physiological state. The chapter concludes with a review of prominent affective tutoring systems in current research and promotes future directions for e-learning that capitalize on the strengths of affective computing.
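As an illustration of the second category of methods (inferring affect without explicit user input), the sketch below fits a simple classifier on physiological features such as heart rate and skin conductance. The feature set, affect labels, and model choice are assumptions made for the example, not a particular system reviewed in the chapter.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical feature matrix: one row per learner per time window.
# Columns: mean heart rate, heart-rate variability, mean skin conductance,
# skin-conductance response count -- all obtainable without user input.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))             # placeholder sensor features
y = rng.integers(0, 3, size=200)          # 0=engaged, 1=bored, 2=frustrated (assumed labels)

# A basic classifier stands in for the affect-detection component
# of an affective tutoring system.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```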


Gesture ◽  
2018 ◽  
Vol 17 (2) ◽  
pp. 245-267
Author(s):  
Viktoria A. Kettner ◽  
Jeremy I. M. Carpendale

Abstract Infants can extend their index fingers soon after birth, yet pointing gestures do not emerge until about 10 to 12 months. In the present study, we draw on the process-relational view, according to which pointing develops as infants learn how others respond to their initially non-communicative index finger use. We report on a longitudinal maternal diary study of 15 infants and describe four types of index finger use in the first year. Analysis of the observations suggests one possible developmental pathway: index finger extension becomes linked to infants’ attention around 7 to 9 months of age with the emergence of fingertip exploration and index finger extension towards out-of-reach objects infants wish to explore. Through parental responses, infants begin to use index finger touch to refer in some situations, including asking and answering questions and making requests, suggesting that some functions of pointing might originate in early index finger use.


2019 ◽  
Vol 46 (6) ◽  
pp. 1228-1237 ◽  
Author(s):  
Carina LÜKE ◽  
Juliane LEINWEBER ◽  
Ute RITTERFELD

Abstract Both walking abilities and pointing gestures in infants are associated with later language skills. Within this longitudinal study we investigate the relationship between walk onset and first observed index-finger points and their respective predictive value for later language skills. We assume that pointing, as a motor as well as a communicative skill, is a stronger predictor of later language development than walk onset. Direct observations, parent questionnaires, and standardized tests were administered in 45 children at ages 1;0, 2;0, 3;0, and 4;0. Results show that both walk onset and early index-finger pointing predict language abilities at age 2;0, but only early index-finger pointing predicts language skills at ages 3;0 and 4;0. Walk onset seems to contribute to an initial increase in language acquisition without a sustained advantage. The predictive value of first observed index-finger points, however, is strong and lasts at least until age 4;0.
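A hedged sketch of the kind of analysis that can separate the two predictors: regressing a later language score on both walk onset and age at first index-finger point in a single model, so that each predictor's contribution is estimated while controlling for the other. The variable names and toy data are hypothetical; the study's actual measures and models may differ.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 45                                            # sample size as in the study
walk_onset = rng.normal(12.5, 1.5, n)             # age in months at first steps (toy data)
point_onset = rng.normal(11.0, 1.2, n)            # age in months at first index-finger point (toy data)
# Toy outcome: earlier pointing -> better language score at age 3;0.
language_3y = 100 - 4 * (point_onset - 11) + rng.normal(0, 5, n)

# One model with both predictors, so each coefficient reflects its
# unique contribution to the later language score.
X = sm.add_constant(np.column_stack([walk_onset, point_onset]))
model = sm.OLS(language_3y, X).fit()
print(model.summary(xname=["const", "walk_onset", "point_onset"]))
```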


2017 ◽  
Vol 60 (11) ◽  
pp. 3185-3197 ◽  
Author(s):  
Carina Lüke ◽  
Ute Ritterfeld ◽  
Angela Grimminger ◽  
Ulf Liszkowski ◽  
Katharina J. Rohlfing

Purpose: This longitudinal study compared the development of hand and index-finger pointing in children with typical language development (TD) and children with language delay (LD). First, we examined whether the number and the form of pointing gestures during the second year of life are potential indicators of later LD. Second, we analyzed the influence of caregivers' gestural and verbal input on children's communicative development.
Method: Thirty children with TD and 10 children with LD were observed together with their primary caregivers in a seminatural setting in 5 sessions between the ages of 12 and 21 months. Language skills were assessed at 24 months.
Results: Compared with children with TD, children with LD used fewer index-finger points at 12 and 14 months but more pointing gestures in total at 21 months. There were no significant differences in verbal or gestural input between caregivers of children with or without LD.
Conclusions: Using more index-finger points at the beginning of the second year of life is associated with TD, whereas using more pointing gestures at the end of the second year of life is associated with delayed acquisition. Neither the verbal nor gestural input of caregivers accounted for differences in children's skills.


2020 ◽  
Author(s):  
Arunima Sarin ◽  
Mark K Ho ◽  
Justin Martin ◽  
Fiery Andrews Cushman

Humans use punishment to influence each other’s behavior. Many current theories presume that this operates as a simple form of incentive. In contrast, we show that people infer the communicative intent behind punishment, which can sometimes diverge sharply from its immediate incentive value. In other words, people respond to punishment not as a reward to be maximized, but as a communicative signal to be interpreted. Specifically, we show that people expect harmless, yet communicative, punishments to be as effective as harmful punishments (Experiment 1). In some situations, people display a systematic preference for harmless punishments over more canonical, harmful punishments (Experiment 2). People readily seek out and infer the communicative message inherent in a punishment (Experiment 3). And people expect that learning from punishment depends on the ease with which its communicative intent can be inferred (Experiment 4). Taken together, these findings demonstrate that people expect punishment to be constructed and interpreted as a communicative act.


2018 ◽  
pp. 46-53
Author(s):  
Widya Pujarama ◽  
Arif Budi Prasetya

Individual adaptability to communication media in higher education institutions, as organizational entities, can be examined from a social semiotic perspective. Following Kress, social semiotics is a theory that focuses on how semiotic resources in varied social situations and locations become meaningful signs regulating human interaction. The theory was adopted in this research to interpret the patterns and activities of communication that occurred in internationalization initiatives recorded in the WhatsApp group PSIK FISIP Universitas Brawijaya, treated as an artefact of communication. Non-participant observation was conducted on a series of seven months of conversations in the WhatsApp group as sequences of communicative acts, which were then analysed using Van Leeuwen’s four dimensions of semiotic analysis. Results indicate a distinction between administrative staff and lecturers with additional administrative functions in the way they post their messages, indexing “doers” and “thinkers,” which further conforms to their offline interaction standpoints when collaborating on internationalization activities.


2021 ◽  
Vol 8 ◽  
Author(s):  
Catharine Oertel ◽  
Patrik Jonell ◽  
Dimosthenis Kontogiorgos ◽  
Kenneth Funes Mora ◽  
Jean-Marc Odobez ◽  
...  

Listening to one another is essential to human-human interaction. In fact, we humans spend a substantial part of our day listening to other people, in private as well as in work settings. Attentive listening serves the function of gathering information for oneself, but at the same time it also signals to the speaker that they are being heard. To deduce whether our interlocutor is listening to us, we rely on reading their nonverbal cues, very much like how we use nonverbal cues to signal our own attention. Such signaling becomes more complex when we move from dyadic to multi-party interactions. Understanding how humans use nonverbal cues in a multi-party listening context not only increases our understanding of human-human communication but also aids the development of successful human-robot interactions. This paper brings together previous analyses of listener behavior in human-human multi-party interaction and provides novel insights into gaze patterns between the listeners in particular. We investigate whether the gaze patterns and feedback behavior observed in human-human dialogue are also beneficial for the perception of a robot in multi-party human-robot interaction. To answer this question, we implement an attentive listening system that generates multimodal listening behavior based on our human-human analysis. We compare our system to a baseline system that does not differentiate between different listener types in its behavior generation, and evaluate it in terms of the participants’ perception of the robot, their behavior, as well as the perception of third-party observers.
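The contrast between the attentive system and the baseline ultimately comes down to whether behavior generation is conditioned on the listener's role in the interaction. The sketch below shows a minimal rule-based version of such a generation step; the role categories, cue names, and probabilities are illustrative assumptions rather than the authors' implementation.

```python
from dataclasses import dataclass
import random

@dataclass
class ListenerState:
    role: str            # "addressee" or "side-participant" (assumed categories)
    speaker_id: str      # who currently holds the floor

def generate_listening_behavior(state: ListenerState) -> dict:
    """Pick multimodal listener cues for the robot for the next time step.

    Unlike a baseline that always emits the same behavior, the cues here
    are conditioned on the listener's role in the multi-party exchange.
    """
    if state.role == "addressee":
        # Addressed listeners gaze mostly at the speaker and give frequent feedback.
        gaze_target = state.speaker_id if random.random() < 0.8 else "other_listener"
        backchannel = random.random() < 0.3    # e.g. a nod or "mm-hm"
    else:
        # Side-participants distribute gaze more widely and give sparser feedback.
        gaze_target = state.speaker_id if random.random() < 0.5 else "other_listener"
        backchannel = random.random() < 0.1
    return {"gaze_target": gaze_target, "backchannel": backchannel}

# Example: one generation step for the robot as an addressed listener.
print(generate_listening_behavior(ListenerState(role="addressee", speaker_id="P1")))
```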


Author(s):  
Asthararianty Asthararianty

Dromology refers to the speed that characterizes progress. One of the areas affected is the culture of reading books. In the past, people read books in the conventional manner, but in recent years Internet technology has led people to read books in a different way, namely through the e-book. These changes have ultimately led to a cultural shift in communication, especially in reading books. The method used in this research is a literature study. Results from the study show that the reading culture (human interaction with the conventional book) has turned into a reading culture that is synonymous with technology and acceleration. Its characteristics, sensations, and experiences have changed. Technology (the e-book) has become the new device of culture (communication / human interaction). Keywords: book, dromology, interpersonal communication, new culture


2020 ◽  
Vol 10 (1) ◽  
Author(s):  
Dimitrios Kourtis ◽  
Pierre Jacob ◽  
Natalie Sebanz ◽  
Dan Sperber ◽  
Günther Knoblich

Abstract We investigated whether communicative cues help observers to make sense of human interaction. We recorded EEG from an observer monitoring two individuals who were occasionally communicating with each other via either mutual eye contact and/or pointing gestures, and then jointly attending to the same object or attending to different objects that were placed on a table in front of them. The analyses were focussed on the processing of the interaction outcome (i.e. presence or absence of joint attention) and showed that its interpretation is a two-stage process, as reflected in the N300 and the N400 potentials. The N300 amplitude was reduced when the two individuals shared their focus of attention, which indicates the operation of a cognitive process that involves the relatively fast identification and evaluation of actor–object relationships. On the other hand, the N400 was insensitive to the sharing or distribution of the two individuals’ attentional focus. Interestingly, the N400 was reduced when the interaction outcome was preceded either by mutual eye contact or by a perceived pointing gesture. This shows that observation of communication “opens up” the mind to a wider range of action possibilities and thereby helps to interpret unusual outcomes of social interactions.
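The N300/N400 findings rest on comparing mean ERP amplitudes in successive time windows across conditions (shared vs. non-shared attention). A minimal sketch of that comparison with MNE-Python is given below; the file name, condition labels, channel picks, and window boundaries are assumptions made for illustration only.

```python
import mne

# Hypothetical epoched EEG, with epochs labelled by interaction outcome.
epochs = mne.read_epochs("observer-epo.fif")        # assumed file name

def mean_window_amplitude(evoked, tmin, tmax, picks=("Cz", "Pz")):
    """Mean amplitude (in microvolts) over a time window and channel set."""
    windowed = evoked.copy().pick(list(picks)).crop(tmin, tmax)
    return windowed.data.mean() * 1e6               # Evoked data are in volts

# Assumed windows: N300 roughly 250-350 ms, N400 roughly 350-500 ms post-outcome.
for window, (tmin, tmax) in {"N300": (0.25, 0.35), "N400": (0.35, 0.50)}.items():
    joint = mean_window_amplitude(epochs["joint_attention"].average(), tmin, tmax)
    nonjoint = mean_window_amplitude(epochs["no_joint_attention"].average(), tmin, tmax)
    print(f"{window}: joint={joint:.2f} µV, non-joint={nonjoint:.2f} µV")
```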


2021 ◽  
pp. 137-151
Author(s):  
Patrick G. T. Healey

The most famous grand challenge for machine intelligence is human-like communication. This chapter explores two problems that need to be solved in order for machines to meet this challenge. The first is the set of technical difficulties posed by ordinary conversation. Production and comprehension in conversation are multimodal, multi-person, incremental, concurrent, and jointly managed. The fine-grained complexity of these aspects of human interaction is beyond the current state of the art but should, ultimately, be tractable. The second set of problems is foundational. Models that assume human communication is underwritten by a shared language are unable to account for the ubiquitous and systematic role misunderstanding plays in everyday interaction. As a result they also fail to explain how people adapt their language use to each new person and new situation in real time. This capability is essential for any machine that aims to engage constructively with human diversity.

