The predictive potential of hand gestures during conversation: An investigation of the timing of gestures in relation to speech

Author(s):  
Marlijn ter Bekke ◽  
Linda Drijvers ◽  
Judith Holler

In face-to-face conversation, recipients might use the bodily movements of the speaker (e.g. gestures) to facilitate language processing. It has been suggested that one way through which this facilitation may happen is prediction. However, for this to be possible, gestures would need to precede speech, and it is unclear whether this is true during natural conversation. In a corpus of Dutch conversations, we annotated hand gestures that represent semantic information and occurred during questions, as well as the word(s) corresponding most closely to the gesturally depicted meaning. Thus, we tested whether representational gestures temporally precede their lexical affiliates. Further, to see whether preceding gestures may indeed facilitate language processing, we asked whether gesture-speech asynchrony predicts the response time to the question the gesture is part of. Gestures and their strokes (the most meaningful movement component) indeed preceded the corresponding lexical information, thus demonstrating their predictive potential. However, while questions with gestures received faster responses than questions without, there was no evidence that questions with larger gesture-speech asynchronies receive faster responses. These results suggest that gestures indeed have the potential to facilitate predictive language processing, but further analyses on larger datasets are needed to test for links between asynchrony and processing advantages.

Author(s):  
Asli Özyürek

Use of language in face-to-face contexts is multimodal. Production and perception of speech take place in the context of visual articulators such as the lips, face, or hand gestures, which convey information relevant to what is expressed in speech at different levels of language. While the lips convey information at the phonological level, gestures contribute semantic, pragmatic, and syntactic information, as well as discourse cohesion. This chapter reviews recent findings showing that speech and gesture (e.g. a drinking gesture as someone says, "Would you like a drink?") interact during production and comprehension of language at the behavioral, cognitive, and neural levels. Implications of these findings for current psycholinguistic theories, and how these theories can be expanded to consider the multimodal context of language processing, are discussed.


2021 ◽  
Author(s):  
Jiaoyan Chen ◽  
Pan Hu ◽  
Ernesto Jimenez-Ruiz ◽  
Ole Magnus Holter ◽  
Denvar Antonyrajah ◽  
...  

Semantic embedding of knowledge graphs has been widely studied and used for prediction and statistical analysis tasks across various domains such as Natural Language Processing and the Semantic Web. However, less attention has been paid to developing robust methods for embedding OWL (Web Ontology Language) ontologies, which contain richer semantic information than plain knowledge graphs and have been widely adopted in domains such as bioinformatics. In this paper, we propose a random walk and word embedding based ontology embedding method named OWL2Vec*, which encodes the semantics of an OWL ontology by taking into account its graph structure, lexical information, and logical constructors. Our empirical evaluation with three real-world datasets suggests that OWL2Vec* benefits from these three different aspects of an ontology in class membership prediction and class subsumption prediction tasks. Furthermore, OWL2Vec* often significantly outperforms the state-of-the-art methods in our experiments.
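The random-walk component described above can be illustrated with a minimal sketch: walks over an ontology's structural graph produce "sentences" of entity names, which can then be fed to any word-embedding model. The toy graph, entity names, and parameters below are illustrative assumptions, not the paper's actual implementation (which also incorporates lexical information and logical constructors).

```python
import random

# Toy directed graph standing in for an ontology's structural graph
# (edges derived from, e.g., subClassOf and object-property axioms).
GRAPH = {
    "Pizza": ["Food", "hasTopping"],
    "Food": ["Thing"],
    "hasTopping": ["Topping"],
    "Topping": ["Food"],
    "Thing": [],
}

def random_walks(graph, walks_per_node=5, walk_length=4, seed=42):
    """Generate random-walk 'sentences' over the graph. Each walk is a
    list of entity names; a walk stops early at a node with no outgoing
    edges. The resulting corpus can be passed to a word-embedding model
    (e.g. skip-gram) to learn entity vectors."""
    rng = random.Random(seed)
    walks = []
    for start in graph:
        for _ in range(walks_per_node):
            walk = [start]
            node = start
            for _ in range(walk_length - 1):
                neighbours = graph.get(node, [])
                if not neighbours:
                    break
                node = rng.choice(neighbours)
                walk.append(node)
            walks.append(walk)
    return walks

walks = random_walks(GRAPH)
# Each walk is a sequence of entity names, e.g. ['Pizza', 'hasTopping', 'Topping', 'Food']
```

Treating each walk as a sentence lets standard word-embedding tooling learn vectors in which structurally related entities end up close together.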


2016 ◽  
Vol 20 (5) ◽  
pp. 917-930 ◽  
Author(s):  
Aster Dijkgraaf ◽  
Robert J. Hartsuiker ◽  
Wouter Duyck

Monolingual listeners continuously predict upcoming information. Here, we tested whether predictive language processing occurs to the same extent when bilinguals listen to their native language vs. a non-native language. Additionally, we tested whether bilinguals use prediction to the same extent as monolinguals. Dutch–English bilinguals and English monolinguals listened to constraining and neutral sentences in Dutch (bilinguals only) and in English, and viewed target and distractor pictures on a display while their eye movements were measured. There was a bias of fixations towards the target object in the constraining condition, relative to the neutral condition, before information from the target word could affect fixations. This prediction effect occurred to the same extent in native processing by bilinguals and monolinguals, but also in non-native processing. This indicates that unbalanced, proficient bilinguals can quickly use semantic information during listening to predict upcoming referents to the same extent in both of their languages.


2020 ◽  
Vol 5 (1) ◽  
pp. 431
Author(s):  
Chelsea Sanker

This work presents a perceptual study on how acoustic details and knowledge of the lexicon influence discrimination decisions. English-speaking listeners were less likely to identify phonologically matching items as the same when they differed in vowel duration, but differences in mean F0 did not have an effect. Although both are components of English contrasts, the results only provide evidence for attention to vowel duration as a potentially contrastive cue. Lexical ambiguity was a predictor of response time. Pairs with matching duration were identified more quickly than pairs with distinct duration, but only among lexically ambiguous items, indicating that lexical ambiguity mediates attention to acoustic detail. Lexical ambiguity also interacted with neighborhood density: Among lexically unambiguous words, the proportion of 'same' responses decreased with neighborhood density, but there was no effect among lexically ambiguous words. This interaction suggests that evaluating phonological similarity depends more on lexical information when the items are lexically unambiguous.


Cognition ◽  
2022 ◽  
Vol 221 ◽  
pp. 104988
Author(s):  
Duygu Özge ◽  
Jaklin Kornfilt ◽  
Katja Maquate ◽  
Aylin C. Küntay ◽  
Jesse Snedeker

Author(s):  
Verica Buchanan ◽  
Ashley R. Chinzi ◽  
Nicholas C. Day ◽  
Lidija A. Buchanan ◽  
Rachel Specht ◽  
...  

Deception has been proposed as a means to protect our cyber domain. In order to fully take advantage of this strategy, we must first understand deception from the human point of view, because it is the human cyber attacker who plans and orchestrates cyberattacks. Moreover, although various deceptive tactics are addressed in the cyber-security literature, they appear to be categorized more from the standpoint of technology than from their behavioral origins. In order to better understand the interplay between attacker and defender, and the associated cues of deception, we abstracted the cyber deception task. Participants played a modified version of Battleship either face-to-face or with a divider. Deception was significantly higher in the divider condition. Additionally, participants used patterns of deception analogous to those of cyber attackers and defenders, such as blatant lies, diversion, and honeypots. An array of behavioral cues was also observed when participants lied, including variations in tone of voice, less eye contact, shorter response times, and other physical indicators. Implications and future projects are discussed.


2020 ◽  
Vol 34 (05) ◽  
pp. 8074-8081
Author(s):  
Pavan Kapanipathi ◽  
Veronika Thost ◽  
Siva Sankalp Patel ◽  
Spencer Whitehead ◽  
Ibrahim Abdelaziz ◽  
...  

Textual entailment is a fundamental task in natural language processing. Most approaches for solving this problem use only the textual content present in training data. A few approaches have shown that information from external knowledge sources like knowledge graphs (KGs) can add value, in addition to the textual content, by providing background knowledge that may be critical for a task. However, the proposed models do not fully exploit the information in the usually large and noisy KGs, and it is not clear how it can be effectively encoded to be useful for entailment. We present an approach that complements text-based entailment models with information from KGs by (1) using Personalized PageRank to generate contextual subgraphs with reduced noise and (2) encoding these subgraphs using graph convolutional networks to capture the structural and semantic information in KGs. We evaluate our approach on multiple textual entailment datasets and show that the use of external knowledge helps the model to be robust and improves prediction accuracy. This is particularly evident in the challenging BreakingNLI dataset, where we see an absolute improvement of 5-20% over multiple text-based entailment models.
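Step (1) above, Personalized PageRank, can be sketched with a small power-iteration implementation: a random walk that restarts to a set of seed nodes (e.g. KG concepts mentioned in the premise and hypothesis), so that high-scoring nodes form a low-noise contextual subgraph around those seeds. The toy graph, seed set, and parameter values are illustrative assumptions; the paper's second step (GCN encoding of the subgraph) is not shown.

```python
def personalized_pagerank(graph, seeds, alpha=0.85, iters=50):
    """Power-iteration Personalized PageRank. With probability alpha the
    walker follows an outgoing edge; otherwise it restarts to a seed node.
    Dangling nodes return their mass to the restart distribution, so the
    scores stay a probability distribution."""
    nodes = list(graph)
    restart = {n: (1.0 / len(seeds) if n in seeds else 0.0) for n in nodes}
    rank = dict(restart)
    for _ in range(iters):
        new = {n: (1 - alpha) * restart[n] for n in nodes}
        for n in nodes:
            neighbours = graph[n]
            if neighbours:
                share = alpha * rank[n] / len(neighbours)
                for m in neighbours:
                    new[m] += share
            else:
                for m in nodes:  # dangling node: mass back to restart
                    new[m] += alpha * rank[n] * restart[m]
        rank = new
    return rank

# Hypothetical mini-KG; "dog" is the concept mentioned in the text pair.
GRAPH = {"dog": ["animal"], "cat": ["animal"], "animal": ["organism"], "organism": []}
ranks = personalized_pagerank(GRAPH, seeds={"dog"})
```

Keeping only the top-k nodes by score yields the contextual subgraph; nodes unreachable from the seeds (here, "cat") score near zero and are pruned, which is how the method reduces KG noise.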

