cues to deception
Recently Published Documents

TOTAL DOCUMENTS: 82 (five years: 16)
H-INDEX: 21 (five years: 1)

2021 ◽  
pp. 014616722110597
Author(s):  
Christopher A. Gunderson ◽  
Alysha Baker ◽  
Alona D. Pence ◽  
Leanne ten Brinke

Emotional expressions evoke predictable responses from observers; displays of sadness are commonly met with sympathy and help from others. Accordingly, people may be motivated to feign emotions to elicit a desired response. In the absence of suspicion, we predicted that emotional and behavioral responses to genuine (vs. deceptive) expressers would be guided by empirically valid cues of sadness authenticity. Consistent with this hypothesis, untrained observers (total N = 1,300) reported less sympathy and offered less help to deceptive (vs. genuine) expressers of sadness. This effect was replicated using both posed, low-stakes, laboratory-created stimuli, and spontaneous, real, high-stakes emotional appeals to the public. Furthermore, lens models suggest that sympathy reactions were guided by difficult-to-fake facial actions associated with sadness. Results suggest that naive observers use empirically valid cues to deception to coordinate social interactions, providing novel evidence that people are sensitive to subtle cues to deception.
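The lens models mentioned above relate a cue to both the criterion (authenticity) and the observer's response. A minimal sketch of that logic, with invented illustrative data (not the authors' stimuli): cue validity is the correlation between a facial-action cue and genuineness, and cue utilization is the correlation between the same cue and observers' sympathy.

```python
def pearson(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# One row per expresser: intensity of a difficult-to-fake facial action,
# whether the sadness was genuine (1) or deceptive (0), and mean sympathy.
# All numbers are hypothetical, chosen only to show the computation.
facial_action = [0.9, 0.7, 0.8, 0.2, 0.3, 0.1]
genuine       = [1,   1,   1,   0,   0,   0]
sympathy      = [6.1, 5.4, 5.8, 3.2, 3.9, 2.8]

cue_validity    = pearson(facial_action, genuine)   # cue <-> authenticity
cue_utilization = pearson(facial_action, sympathy)  # cue <-> observer response
```

When both correlations are high, as here, the cue is empirically valid and observers are actually using it, which is the pattern the abstract reports for sympathy reactions.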


2021 ◽  
Vol 11 (19) ◽  
pp. 8817
Author(s):  
Ángela Almela

In the last decade, fields such as psychology and natural language processing have devoted considerable attention to automating deception detection, developing and employing a wide array of automated and computer-assisted methods for this purpose. Similarly, another emerging research area focuses on computer-assisted deception detection using linguistics, with promising results. Accordingly, the present article first reviews the state of the art of corpus-based research on linguistic cues to deception, together with an overview of several approaches to the study of deception and of previous research into its linguistic detection. In an effort to promote corpus-based research in this context, this study explores linguistic cues to deception in written Spanish with the aid of an automatic text classification tool, using an ad hoc corpus containing ground-truth data. Interestingly, the key findings reveal that, although a set of linguistic cues contributes to the global statistical classification model, there are discursive differences across the subcorpora, with better classification results for the subcorpus containing emotionally loaded language.
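The cue-based classification described above can be sketched very simply: extract normalized counts of a few linguistic cues and apply a linear scoring rule. The cue lists, weights, and threshold below are invented for demonstration and are not the article's actual model or feature set.

```python
# Illustrative cue lexicons (hypothetical, English for readability).
FIRST_PERSON = {"i", "me", "my", "mine", "myself"}
NEGATIONS = {"no", "not", "never", "nothing"}

# Hypothetical weights: fewer self-references and more negations
# push the score toward "deceptive" in this toy model.
WEIGHTS = {"first_person": -2.0, "negations": 3.0}
BIAS = -0.2

def cue_features(text):
    """Rate of each cue per token in the text."""
    tokens = text.lower().split()
    n = max(len(tokens), 1)
    return {
        "first_person": sum(t in FIRST_PERSON for t in tokens) / n,
        "negations": sum(t in NEGATIONS for t in tokens) / n,
    }

def classify(text):
    feats = cue_features(text)
    score = BIAS + sum(WEIGHTS[k] * v for k, v in feats.items())
    return "deceptive" if score > 0 else "truthful"
```

A real system would learn the weights from the labeled corpus rather than fix them by hand, but the pipeline shape (cue extraction, then a statistical classifier) is the same.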


2021 ◽  
Author(s):  
Bruno Verschuere ◽  
Chu-Chien Lin ◽  
Sara Huismann ◽  
Bennett Kleinberg ◽  
Ewout Meijer

Could a simple rule of thumb help to find the truth? People struggle to integrate many putative cues to deception into an accurate veracity judgement. Heuristics simplify difficult decisions by ignoring most of the information and relying instead on only a few highly diagnostic cues ("use the best, ignore the rest"). We examined whether people could tell lie from truth when instructed to base their decisions on a single, diagnostic cue (verifiability and richness in detail). We show that these simple judgements enabled lay people to discriminate dishonest from honest statements, performing at or above the level of state-of-the-art, resource-intensive content analysis by trained coders. For a tech- and training-free approach, heuristics were surprisingly accurate, and they hold promise for practice.
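The single-cue heuristic above can be sketched in a few lines: score one diagnostic cue, richness in detail, and ignore everything else. The scoring proxy and threshold here are hypothetical stand-ins for a human rater's judgement, not the study's instrument.

```python
def detail_score(statement):
    """Crude proxy for richness in detail: count of distinct words."""
    return len(set(statement.lower().split()))

def heuristic_judgement(statement, threshold=8):
    # 'Use the best, ignore the rest': richer, more detailed
    # statements are judged truthful; sparse ones are judged lies.
    return "truth" if detail_score(statement) >= threshold else "lie"
```

The point of the heuristic is precisely that nothing else enters the decision; every other putative cue is deliberately discarded.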


2020 ◽  
Vol 41 (5) ◽  
pp. 993-1015
Author(s):  
Margarethe McDonald ◽  
Elizabeth Mormer ◽  
Margarita Kaushanskaya

Acoustic cues to deception on a picture-naming task were analyzed in three groups of English speakers: monolinguals, bilinguals with English as their first language, and bilinguals with English as a second language. Results revealed that all participants had longer reaction times when generating falsehoods than when producing truths, and that the effect was more robust for English as a second language bilinguals than for the other two groups. Articulation rate was higher for all groups when producing lies. Mean fundamental frequency and intensity cues were not reliable cues to deception, but there was lower variance in both of these parameters when generating false versus true labels for all participants. Results suggest that naming latency was the only cue to deception that differed by language background. These findings broadly support the cognitive-load theory of deception, suggesting that a combination of producing deceptive speech and using a second language puts an extra load on the speaker.
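The naming-latency analysis reduces to comparing mean reaction times for false versus true labels within each group. A sketch with invented numbers, chosen only to mirror the reported direction (lies slower, largest gap for L2 English speakers):

```python
from statistics import mean

# Hypothetical reaction times in milliseconds, per group and condition.
rts_ms = {
    "monolingual": {"truth": [610, 640, 625], "lie": [700, 720, 710]},
    "L1_English":  {"truth": [620, 650, 635], "lie": [715, 730, 725]},
    "L2_English":  {"truth": [660, 690, 675], "lie": [820, 850, 835]},
}

def deception_cost(group):
    """Mean RT increase (ms) when lying vs. telling the truth."""
    g = rts_ms[group]
    return mean(g["lie"]) - mean(g["truth"])

costs = {g: deception_cost(g) for g in rts_ms}
```

Under the cognitive-load account, a positive cost in every group reflects the extra processing demanded by lying, and the larger L2 cost reflects the added load of speaking a second language.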


2020 ◽  
Vol 8 ◽  
pp. 199-214
Author(s):  
Xi (Leslie) Chen ◽  
Sarah Ita Levitan ◽  
Michelle Levine ◽  
Marko Mandic ◽  
Julia Hirschberg

Humans rarely perform better than chance at lie detection. To better understand human perception of deception, we created a game framework, LieCatcher, to collect ratings of perceived deception using a large corpus of deceptive and truthful interviews. We analyzed the acoustic-prosodic and linguistic characteristics of language trusted and mistrusted by raters and compared these to characteristics of actual truthful and deceptive language to understand how perception aligns with reality. With this data we built classifiers to automatically distinguish trusted from mistrusted speech, achieving an F1 of 66.1%. We next evaluated whether the strategies raters said they used to discriminate between truthful and deceptive responses were in fact useful. Our results show that, although several prosodic and lexical features were consistently perceived as trustworthy, they were not reliable cues. Also, the strategies that judges reported using in deception detection were not helpful for the task. Our work sheds light on the nature of trusted language and provides insight into the challenging problem of human deception detection.
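The 66.1% F1 reported above combines precision and recall into one score. A quick sketch of the computation from a confusion over trusted/mistrusted predictions; the counts below are invented for illustration and happen to yield roughly that value.

```python
def f1_score(tp, fp, fn):
    """Harmonic mean of precision and recall from raw counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# e.g. 50 true positives, 30 false positives, 21 false negatives
score = f1_score(tp=50, fp=30, fn=21)  # approx. 0.662
```

Because F1 is a harmonic mean, it rewards classifiers that balance precision and recall rather than maximizing one at the expense of the other.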


PLoS ONE ◽  
2020 ◽  
Vol 15 (3) ◽  
pp. e0229486
Author(s):  
Josiah P. J. King ◽  
Jia E. Loy ◽  
Hannah Rohde ◽  
Martin Corley
