deceptive behavior
Recently Published Documents

TOTAL DOCUMENTS: 77 (FIVE YEARS: 26)
H-INDEX: 12 (FIVE YEARS: 1)

Author(s): Dario Pasquali, Jonas Gonzalez-Billandon, Alexander Mois Aroyo, Giulio Sandini, Alessandra Sciutti, et al.

Abstract: Robots intended for tasks such as teaching or caregiving have to build a long-lasting social rapport with their human partners. This requires the robot to be capable of assessing whether the partner is trustworthy. To this aim, a robot should be able to assess whether someone is lying, while preserving the pleasantness of the social interaction. We present an approach to promptly detect lies based on pupil dilation, an intrinsic marker of the cognitive load associated with lying, that can be applied in an ecological human–robot interaction autonomously led by a robot. We demonstrated the validity of the approach with an experiment in which the iCub humanoid robot engages the human partner by playing the role of a magician in a card game and detects the partner's deceptive behavior in real time. On top of that, we show how the robot can leverage the knowledge it gains about each human partner's deceptive behavior to better detect that individual's subsequent lies. We also explore whether machine learning models can improve lie detection performance both for known individuals over multiple interactions with the same partner (within-participants) and for novel partners (between-participants). The proposed setup, interaction, and models enable iCub to understand when its partners are lying, a fundamental skill for evaluating their trustworthiness and hence improving social human–robot interaction.
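The core idea of the abstract can be sketched as a threshold detector over baseline-corrected pupil dilation that adapts per partner over repeated interactions. All names, thresholds, and the adaptation rule below are illustrative assumptions, not the authors' implementation:

```python
# Hypothetical sketch: flag a response as deceptive when the peak
# baseline-corrected pupil dilation exceeds a per-partner threshold,
# and adapt that threshold over repeated interactions.

def baseline_correct(samples, baseline):
    """Subtract a pre-stimulus baseline from pupil-diameter samples (mm)."""
    return [s - baseline for s in samples]

class PupilLieDetector:
    """Threshold-based lie detector over pupil dilation (illustrative)."""

    def __init__(self, threshold_mm=0.3, learning_rate=0.2):
        self.threshold_mm = threshold_mm
        self.learning_rate = learning_rate

    def predict(self, samples, baseline):
        """True if the peak dilation exceeds the current threshold."""
        peak = max(baseline_correct(samples, baseline))
        return peak > self.threshold_mm

    def update(self, samples, baseline, was_lie):
        # Nudge the threshold toward the observed peak so that this
        # partner's subsequent lies are detected more reliably.
        peak = max(baseline_correct(samples, baseline))
        target = peak * (0.9 if was_lie else 1.1)
        self.threshold_mm += self.learning_rate * (target - self.threshold_mm)

detector = PupilLieDetector()
print(detector.predict([3.1, 3.5, 3.6], baseline=3.0))  # peak 0.6 mm > 0.3 -> True
```

A learned classifier over pupil features would replace the fixed threshold in the within- and between-participants settings the abstract describes.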


2021
Author(s): Anastasia Shuster, Lilah Inzelberg, Ori Ossmy, Liz Izakson, Yael Hanein, et al.

Author(s): Fabio Giglietto, Nicola Righetti, Luca Rossi, Giada Marino

Although the field of mis/disinformation studies has flourished during the last few years, efforts to measure the prevalence and effects of mis/disinformation are often hindered or significantly limited by the fuzziness of the phenomenon under study. Unlike approaches based on content or actor detection alone, the authors implement an integrated approach to understand the interplay between manipulative actors, deceptive behavior, and harmful content. To do so, the authors present a study of patterns of coordinated activity on Facebook, named "Coordinated Link Sharing Behavior" (CLSB), during the first and second waves of the COVID-19 pandemic in Italy. CLSB refers to coordinated shares of the same news article within a very short timeframe by networks of pages, groups, and verified public profiles. It is a strategy used to boost the reach of content by gaming the algorithms that govern the distribution of posts. Additionally, this coordinated activity has been shown to be consistently associated with the spread of problematic information before the 2018 and 2019 Italian elections. In this paper, the authors devote specific attention to discussing the methodology and tool employed in the context of the existing literature on mis/disinformation, as well as to presenting the results of the case study.
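The CLSB definition above lends itself to a simple sketch: group shares by URL and flag pairs of accounts that share the same link within a short time window. The 30-second window and the data layout are illustrative assumptions, not the coordination interval estimated by the authors' methodology:

```python
# Minimal sketch of Coordinated Link Sharing Behavior (CLSB) detection:
# find account pairs that shared the same URL within `window_s` seconds.

from collections import defaultdict
from itertools import combinations

def coordinated_pairs(shares, window_s=30):
    """shares: list of (account, url, timestamp_s) tuples.
    Returns the set of account pairs that co-shared at least one URL
    within `window_s` seconds of each other."""
    by_url = defaultdict(list)
    for account, url, ts in shares:
        by_url[url].append((ts, account))
    pairs = set()
    for events in by_url.values():
        events.sort()  # order shares of this URL by time
        for (t1, a1), (t2, a2) in combinations(events, 2):
            if a1 != a2 and abs(t2 - t1) <= window_s:
                pairs.add(frozenset((a1, a2)))
    return pairs

shares = [
    ("pageA", "http://example.org/story", 0),
    ("pageB", "http://example.org/story", 12),   # 12 s after pageA
    ("pageC", "http://example.org/story", 600),  # too late: not coordinated
]
print(coordinated_pairs(shares))  # {frozenset({'pageA', 'pageB'})}
```

In practice the resulting pairs would be assembled into a co-sharing network whose densely connected components are the candidate coordinated networks.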


2021
Vol 12
Author(s): Fee-Elisabeth Hein, Anja Leue

Deception studies emphasize the important role of event-related potentials (ERPs) in uncovering deceptive behavior via its underlying neuro-cognitive processes. The role of conflict monitoring, as indicated by the frontal N2 component, during truthful and deceptive responses was investigated in an adapted Concealed Information Test (CIT). Participants had to classify previously memorized pictures of faces as truthfully trustworthy, as truthfully untrustworthy, or as trustworthy while concealing their actual untrustworthiness (untrustworthy-probe condition). Mean, baseline-to-peak, and peak-to-peak amplitudes were calculated to examine the robustness of the ERP findings across quantification techniques. Data from 30 participants (15 female; age: M = 23.73 years, SD = 4.09) revealed longer response times and lower accuracy for deceptive compared to truthful trustworthy responses. The frontal N2 amplitude was more negative for untrustworthy-probe and truthful untrustworthy stimuli than for truthful trustworthy stimuli when measured as mean or baseline-to-peak amplitude. The results suggest that deception evokes conflict monitoring and that ERP quantification techniques are differentially sensitive to a priori hypotheses.
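The three quantification techniques named in the abstract can be sketched on a single-channel waveform. Window bounds, the sample waveform, and sampling are illustrative assumptions, not the authors' parameters; since the N2 is a negative-going component, peak measures take the window minimum:

```python
# Illustrative sketch of three ERP amplitude quantifications for a
# negative-going component such as the N2. Windows are (start, stop)
# sample indices; waveform values are in microvolts.

def mean_amplitude(waveform, window):
    """Average voltage across the component window."""
    lo, hi = window
    segment = waveform[lo:hi]
    return sum(segment) / len(segment)

def baseline_to_peak(waveform, window, baseline_window):
    """Trough in the component window relative to the baseline mean."""
    lo, hi = window
    b_lo, b_hi = baseline_window
    baseline = sum(waveform[b_lo:b_hi]) / (b_hi - b_lo)
    return min(waveform[lo:hi]) - baseline

def peak_to_peak(waveform, n2_window, preceding_window):
    """Trough relative to the preceding positive peak."""
    lo, hi = n2_window
    p_lo, p_hi = preceding_window
    return min(waveform[lo:hi]) - max(waveform[p_lo:p_hi])

erp = [0.0, 1.0, 2.0, 1.0, -1.0, -3.0, -2.0, 0.0]  # one sample per time step
print(mean_amplitude(erp, (4, 7)))            # (-1 - 3 - 2) / 3 = -2.0
print(baseline_to_peak(erp, (4, 7), (0, 2)))  # -3 - 0.5 = -3.5
print(peak_to_peak(erp, (4, 7), (1, 4)))      # -3 - 2 = -5.0
```

The three measures can diverge on the same data, which is why the study compares condition effects across all of them.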


2021
Vol 13 (13)
pp. 7012
Author(s): Cristina Chueca Vergara, Luis Ferruz Agudo

Current concerns about environmental issues have led to many new trends in technology and financial management. Within this context of digital transformation and sustainable finance, Fintech has emerged as an alternative to traditional financial institutions. Through a literature review and a case-study approach, this paper analyzes the relationship between Fintech and sustainability and the different areas of collaboration between Fintech and sustainable finance, from both a theoretical and a descriptive perspective, giving specific examples of current technological platforms. Additionally, two Fintech initiatives (Clarity AI and Pensumo) are described, as well as several proposals to improve the detection of greenwashing and other deceptive behavior by firms. The results lead to the conclusion that sustainable finance and Fintech have many aspects in common and that Fintech can make financial businesses more sustainable overall by promoting green finance. Furthermore, the paper highlights the importance of European and global regulation, mainly from the perspective of consumer protection.


2021
Author(s): Elef Schellen, Francesco Bossi, Agnieszka Wykowska

As the use of humanoid robots proliferates, an increasing number of people may find themselves face-to-"face" with a robot in everyday life. Although there is a plethora of information available on facial social cues and how we interpret them in human-human social interaction, we cannot assume that these findings transfer flawlessly to human-robot interaction. Therefore, more research on facial cues in human-robot interaction is required. This study investigated deception in a human-robot interaction context, focusing on the effect that eye contact with a robot has on honesty towards that robot. In an iterative task, participants could assist a humanoid robot by providing it with correct information, or potentially secure a reward for themselves by providing it with incorrect information. Results show that participants are increasingly honest after the robot establishes eye contact with them, but only if the eye contact is made in response to deceptive behavior; behavior is not influenced by eye contact if the participant is actively behaving honestly. These findings support the notion that humanoid robots can be perceived as, and treated like, social agents, since the effect described here mirrors one present in human-human social interaction.
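The trial structure of the iterative task can be sketched as a loop in which lying secures a reward but triggers the robot's contingent eye contact. Names and payoffs are illustrative assumptions, not the authors' experimental software:

```python
# Minimal sketch of the iterative task: on each trial the participant
# answers honestly or deceptively; the robot establishes eye contact
# only in response to a deceptive answer.

def run_trials(participant_answers, reward_per_lie=1):
    """participant_answers: list of booleans (True = honest).
    Returns (robot gaze per trial, participant's accumulated reward)."""
    gaze_log, reward = [], 0
    for honest in participant_answers:
        if honest:
            gaze_log.append("no_eye_contact")
        else:
            reward += reward_per_lie        # lying secures the reward...
            gaze_log.append("eye_contact")  # ...but triggers eye contact
    return gaze_log, reward

gaze, reward = run_trials([True, False, True])
print(gaze, reward)  # ['no_eye_contact', 'eye_contact', 'no_eye_contact'] 1
```

The reported finding is that the "eye_contact" events, because they are contingent on deception, increase honesty on subsequent trials.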


Games
2021
Vol 12 (2)
pp. 38
Author(s): Michael von Grundherr, Johanna Jauernig, Matthias Uhl

Hypocrisy is the act of claiming moral standards to which one's own behavior does not conform. Instances of hypocrisy, such as the supposedly green furniture group IKEA selling furniture made from illegally felled wood, are frequently reported in the media. In a controlled and incentivized experiment, we investigate how observers rate different types of hypocritical behavior and whether this judgment also translates into punishment. Results show that observers do indeed condemn hypocritical behavior strongly. The aversion to deceptive behavior is, in fact, so strong that even purely self-deceptive behavior is regarded as blameworthy. Observers who score high on the moral identity test have particularly strong reactions to acts of hypocrisy. The moral condemnation of hypocritical behavior, however, fails to produce a proportional amount of punishment. Punishment seems to be driven more by violation of the norm of fair distribution than by moral pretense. From the viewpoint of positive retributivism, it is problematic if neither formal nor informal punishment follows moral condemnation.


2021
Vol 2 (2)
pp. 41-55
Author(s): Enguo Wang, Li Tian, Wang Chao

Deceptive responses may be influenced by the individual's internal emotional experience and by external emotional information. Deception can evoke nervousness, fear, and other emotional experiences in the deceiver, and that emotional experience can in turn affect deceptive behavior. Building on previous studies, this paper used facial expressions as stimulus material, combined with explicit tasks, to study the impact of emotional information on the inhibition of deceptive responses. The experiment adopted the emotional Stroop paradigm and used event-related potentials (ERPs) to examine the neural mechanism by which explicit emotional information influences deception. In the explicit task, high-intensity emotions elicited greater P300 amplitudes, high-intensity negative emotions elicited greater LPC amplitudes, and deceptive responses elicited greater N200, P300, and LPC amplitudes. These results show that in explicit tasks, the impact of emotional information on deceptive responses runs through three stages of executive function: the inhibition stage, the conflict and response monitoring stage, and the implementation stage. This study also found that negative emotional information had a greater influence on deceptive responses in explicit tasks.
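The amplitude comparisons reported above boil down to averaging a component's amplitude within each cell of an emotion-by-response design. The column names and values below are illustrative assumptions, not the authors' data:

```python
# Illustrative sketch: mean ERP amplitude per (emotion, response) cell
# of an emotional Stroop design.

from collections import defaultdict

def cell_means(trials):
    """trials: list of (emotion, response, amplitude_uv) tuples.
    Returns the mean amplitude for each (emotion, response) condition."""
    sums = defaultdict(lambda: [0.0, 0])
    for emotion, response, amp in trials:
        cell = sums[(emotion, response)]
        cell[0] += amp  # running sum of amplitudes
        cell[1] += 1    # trial count for this cell
    return {cond: total / n for cond, (total, n) in sums.items()}

trials = [
    ("negative", "deceptive", 6.0),
    ("negative", "deceptive", 8.0),
    ("negative", "truthful", 4.0),
]
print(cell_means(trials))
# {('negative', 'deceptive'): 7.0, ('negative', 'truthful'): 4.0}
```

Effects such as "deceptive responses elicited greater P300 amplitudes" correspond to contrasts between these cell means.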

