Clarifying the Relations between Intellectual Humility and Misinformation Susceptibility

2021 ◽  
Author(s):  
Shauna Marie Bowes ◽  
Arber Tasimi

Misinformation is widespread and consequential. Thus, identifying psychological characteristics that might mitigate misinformation susceptibility represents a timely and pragmatically important issue. One construct that may be particularly relevant to misinformation susceptibility is intellectual humility (IH). As such, we examined whether IH is related to less misinformation susceptibility, what aspects of IH best predict misinformation susceptibility, and whether these relations are unique to IH. Across three samples, IH tended to manifest small-to-medium negative relations with misinformation susceptibility (pseudoscience, conspiracy theories, and fake news). IH measures assessing both intrapersonal and interpersonal features tended to be stronger correlates of misinformation susceptibility than measures assessing either intrapersonal or interpersonal features in isolation. These relations tended to remain robust after controlling for covariates (honesty-humility, cognitive reflection, political ideology). Future research should leverage our results to examine whether IH interventions not only reduce misinformation susceptibility but also lessen its appeal for those already committed to misinformation.
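
The phrase "after controlling for covariates" above can be illustrated with a partial-correlation sketch on simulated data; the variable names, effect sizes, and sample size below are hypothetical stand-ins, not the study's data.

```python
# Partial correlation between intellectual humility (IH) and misinformation
# susceptibility after regressing out a covariate (e.g., cognitive reflection).
# All data are simulated purely for illustration.
import numpy as np

rng = np.random.default_rng(1)
n = 500
reflection = rng.normal(size=n)                      # covariate
ih = 0.5 * reflection + rng.normal(size=n)           # IH, partly driven by covariate
suscept = -0.4 * ih + 0.1 * reflection + rng.normal(size=n)

def residualize(y, x):
    """Residuals of y after a simple linear regression on x."""
    slope = np.cov(y, x)[0, 1] / np.var(x)
    return y - slope * (x - x.mean()) - y.mean()

zero_order = np.corrcoef(ih, suscept)[0, 1]
partial = np.corrcoef(residualize(ih, reflection),
                      residualize(suscept, reflection))[0, 1]
print(f"zero-order r = {zero_order:.2f}, partial r = {partial:.2f}")
```

If the negative IH–susceptibility relation survives residualizing out the covariate, it is "robust to controlling" in the sense the abstract describes.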

2020 ◽  
Author(s):  
Jay Joseph Van Bavel ◽  
Elizabeth Ann Harris ◽  
Philip Pärnamets ◽  
Steve Rathje ◽  
Kimberly Doell ◽  
...  

The spread of misinformation, including “fake news,” propaganda, and conspiracy theories, represents a serious threat to society, as it has the potential to alter beliefs, behavior, and policy. Research is beginning to disentangle how and why misinformation spreads and to identify the processes that contribute to this social problem. We propose an integrative model to understand the social, political, and cognitive psychology risk factors that underlie the spread of misinformation and highlight strategies that might be effective in mitigating this problem. However, the spread of misinformation is a rapidly growing and evolving problem; thus, scholars need to identify and test novel solutions and work with policymakers to evaluate and deploy them. Hence, we provide a roadmap for future research, identifying where scholars should invest their energy in order to have the greatest overall impact.


2020 ◽  
Author(s):  
Sinan Alper ◽  
Fatih Bayrak ◽  
Onurcan Yilmaz

The COVID-19 pandemic has given rise to popular conspiracy theories regarding its origins and to widespread concern over the level of compliance with preventive measures. In the current preregistered research, we recruited 1,088 Turkish participants and investigated (a) individual differences associated with COVID-19 conspiracy beliefs; (b) whether such conspiracy beliefs are related to the level of preventive measures; and (c) other individual differences that might be related to preventive measures. Higher faith in intuition, uncertainty avoidance, impulsivity, generic conspiracy beliefs, religiosity, and right-wing ideology, and a lower level of cognitive reflection, were associated with a higher level of belief in COVID-19 conspiracy theories. There was no association between COVID-19 conspiracy beliefs and preventive measures, whereas perceived risk was positively, and impulsivity negatively, correlated with preventive measures. We discuss the implications and directions for future research.


2021 ◽  
Author(s):  
Gordon Pennycook ◽  
David Gertler Rand

Simply failing to consider accuracy when deciding what to share on social media has been shown to play an important role in the spread of online misinformation. Interventions that shift users’ attention towards the concept of accuracy – accuracy prompts or nudges – are therefore a promising approach to improving the quality of news content that users share, thereby reducing misinformation online. Here we test the replicability and generalizability of this effect by conducting an internal meta-analysis of 20 accuracy prompt survey experiments (total N = 26,245) that our group ran with American participants between 2017 and 2020. These experiments used a wide range of accuracy prompts, tested with a large variety of headline sets and with participants recruited from qualitatively different subject pools. We find that, overall, accuracy prompts increased sharing discernment (the difference in sharing intentions for true relative to false headlines) by 72% relative to the control, and that this effect was primarily driven by reduced sharing intentions for false headlines (a 10% reduction relative to the control). The magnitude of the accuracy prompt effect on sharing discernment did not significantly differ for headlines about politics versus COVID-19, and was larger for headline sets where users were less likely to distinguish between true and false headlines at baseline. With respect to individual-level variables, the treatment effect on sharing discernment was not significantly moderated by gender, race, or political ideology, and was significantly larger for older participants, participants higher in cognitive reflection, and participants who passed more attention-check questions. These results suggest that accuracy prompt effects are replicable, generalize across prompts and headlines, and thus offer a promising approach for fighting misinformation.
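
The discernment arithmetic described above can be made concrete with a minimal sketch; the sharing rates below are hypothetical numbers, not results from the experiments.

```python
# "Sharing discernment" = sharing intentions for true headlines minus
# sharing intentions for false headlines. Hypothetical illustration only.
def discernment(true_share_rate, false_share_rate):
    """Difference in sharing intentions: true minus false headlines."""
    return true_share_rate - false_share_rate

# Hypothetical mean sharing intentions (proportion willing to share).
control = discernment(true_share_rate=0.40, false_share_rate=0.30)    # ~0.10
treatment = discernment(true_share_rate=0.40, false_share_rate=0.27)  # ~0.13

# Relative increase in discernment attributable to the accuracy prompt.
relative_increase = (treatment - control) / control
print(f"discernment: control={control:.2f}, treatment={treatment:.2f}")
print(f"relative increase: {relative_increase:.0%}")
```

Note how a prompt that only lowers false-headline sharing (0.30 → 0.27 here) is enough to raise discernment, mirroring the abstract's finding that the effect was driven by reduced sharing of false headlines.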


2020 ◽  
Author(s):  
Olivier Klein

This is a pdf of the original typed manuscript of a lecture given in 2006. An annotated English translation will be published by the International Review of Social Psychology. In this text, Moscovici seeks to update his earlier work on the “conspiracy mentality” (1987) by considering the relationships between social representations and conspiracy mentality. Innovation in this field, Moscovici argues, will require a much more thorough description and understanding of what conspiracy theories are, what rhetoric they use, and what functions they fulfill. Specifically, Moscovici considers conspiracies as a form of counterfactual history implying a more desirable world (one in which the conspiracy did not take place) and suggests that social representation theory should tackle this phenomenon. He explicitly links conspiracy theories to works of fiction and suggests that common principles might explain their popularity. Historically, he argues, conspiracism was born twice: first, in the Middle Ages, when its primary function was to exclude and destroy what was considered heresy; and second, after the French Revolution, to delegitimize the Enlightenment, which was attributed to a small coterie of reactionaries rather than to the will of the people. Moscovici then considers four aspects (“themata”) of the conspiracy mentality: (1) the prohibition of knowledge; (2) the duality between the majority (the masses, forbidden from knowing) and “enlightened” minorities; (3) the search for a common origin, an “Ur-phenomenon” that connects historical events and provides continuity to History (he notes that such a tendency is also present in social psychological theorizing); and (4) the valorization of tradition as a bulwark against modernity. Some of Moscovici’s insights in this talk have since been borne out by contemporary research on the psychology of conspiracy theories, while many others remain fascinating avenues for future research.


Author(s):  
Giandomenico Di Domenico ◽  
Annamaria Tuan ◽  
Marco Visentin

AbstractIn the wake of the COVID-19 pandemic, unprecedent amounts of fake news and hoax spread on social media. In particular, conspiracy theories argued on the effect of specific new technologies like 5G and misinformation tarnished the reputation of brands like Huawei. Language plays a crucial role in understanding the motivational determinants of social media users in sharing misinformation, as people extract meaning from information based on their discursive resources and their skillset. In this paper, we analyze textual and non-textual cues from a panel of 4923 tweets containing the hashtags #5G and #Huawei during the first week of May 2020, when several countries were still adopting lockdown measures, to determine whether or not a tweet is retweeted and, if so, how much it is retweeted. Overall, through traditional logistic regression and machine learning, we found different effects of the textual and non-textual cues on the retweeting of a tweet and on its ability to accumulate retweets. In particular, the presence of misinformation plays an interesting role in spreading the tweet on the network. More importantly, the relative influence of the cues suggests that Twitter users actually read a tweet but not necessarily they understand or critically evaluate it before deciding to share it on the social media platform.
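
As an illustration of the kind of logistic-regression analysis described above, here is a minimal sketch predicting whether a tweet is retweeted from a few cues. The cue names, toy data, and plain gradient-descent fit are hypothetical stand-ins, not the authors' model or feature set.

```python
# Toy logistic regression: P(retweeted | tweet cues).
# Cues and labels are invented for illustration only.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(X, y, lr=0.1, epochs=5000):
    """Plain stochastic-gradient-descent logistic regression."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

# Hypothetical cues per tweet: [has_url, n_hashtags, contains_misinformation]
X = [[1, 2, 0], [0, 1, 1], [1, 3, 1], [0, 0, 0], [1, 2, 1], [0, 1, 0]]
y = [0, 1, 1, 0, 1, 0]  # 1 = retweeted

w, b = fit_logistic(X, y)
p = sigmoid(sum(wj * xj for wj, xj in zip(w, [1, 2, 1])) + b)
print(f"P(retweet | cues) = {p:.2f}")
```

The fitted weights then indicate the relative influence of each cue on retweeting, which is the kind of comparison the abstract draws between textual and non-textual cues.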


2021 ◽  
pp. 009164712110116
Author(s):  
David R. Paine ◽  
Steven J. Sandage ◽  
Joshua N. Hook ◽  
Don E. Davis ◽  
Kathryn A. Johnson

Scholars and practitioners have increasingly called for the development of social justice commitment, intercultural competence, and appreciation of diversity among ministers and helping professionals. In religious contexts, individual factors may contribute to differences in the degree to which spiritual leaders emphasize intercultural and social justice initiatives. Personality factors, such as virtues and specific moral commitments, predict the degree to which people report positive attitudes and demonstrate mature alterity. In this study, we explored the degree to which intellectual humility predicted mature alterity outcomes after controlling for the effects of five moral foundations (care, fairness, loyalty, authority, purity) in a sample of Christian seminary students in the United States. Implications and suggestions for future research are discussed for ministry and the helping professions.


Author(s):  
Vojtech Pisl ◽  
Jan Volavka ◽  
Edita Chvojkova ◽  
Katerina Cechova ◽  
Gabriela Kavalirova ◽  
...  

Understanding the predictors of belief in COVID-related conspiracy theories and of willingness to get vaccinated against COVID-19 may aid the resolution of current and future pandemics. We investigate how psychological and cognitive characteristics influence general conspiracy mentality and belief in COVID-related conspiracy theories. A cross-sectional study was conducted based on data from an online survey of Czech university students (n = 866) collected in January 2021, using multivariate linear regression and mediation analysis. Sixteen percent of respondents believed that COVID-19 is a hoax, and 17% believed that COVID-19 was intentionally created by humans. Seven percent of the variance in the hoax theory and 10% of the variance in the creation theory were explained by (in descending order of relevance) low cognitive reflection, low digital health literacy, high experience with dissociation and, to some extent, high bullshit receptivity. Belief in COVID-related conspiracy theories depended less on psychological and cognitive variables than did conspiracy mentality (16% of variance explained). The effect of digital health literacy on belief in COVID-related theories was moderated by cognitive reflection. Overall, belief in conspiracy theories related to COVID-19 was influenced by experience with dissociation, cognitive reflection, digital health literacy, and bullshit receptivity.


2021 ◽  
pp. 1-21
Author(s):  
Shahela Saif ◽  
Samabia Tehseen

Deep learning has been used in computer vision to accomplish many tasks that were previously considered too complex or resource-intensive to be feasible. One remarkable application is the creation of deepfakes. Deepfake images change or manipulate a person’s face to convey a different expression or identity using generative models. Applied to videos, deepfakes can alter facial expressions so that a person appears to deliver a speech they never actually gave. Deepfake videos pose a serious threat to legal, political, and social systems, as they can destroy a person’s credibility and reputation. Research solutions are being designed to detect such deepfake content in order to preserve privacy and combat fake news. This study details existing deepfake video creation techniques and provides an overview of the publicly available deepfake datasets. More importantly, we provide an overview of deepfake detection methods, along with a discussion of the issues, challenges, and future research directions. The study aims to present an all-inclusive overview of deepfakes by providing insights into deepfake creation techniques and the latest detection methods, facilitating the development of robust and effective deepfake detection solutions.


2018 ◽  
Vol 32 (4) ◽  
pp. 562-582 ◽  
Author(s):  
Lisa K Hartley ◽  
Joel R Anderson ◽  
Anne Pedersen

Over the past few decades, there has been a progressive implementation of policies designed to deter the arrival of people seeking protection. In Australia, this has included offshore processing and towing boats of asylum seekers away from Australian waters. In a community survey of 164 Australians, this study examined the predictive role of false beliefs about asylum seekers, prejudice, and political ideology in support for three policies. Multiple hierarchical regression models indicated that, although political ideology and prejudice were significant predictors of policy support, false beliefs were the strongest predictor. For the policy of processing asylum seekers in the community, lower endorsement of false beliefs was a significant predictor, while, for the policy of offshore processing, greater endorsement of false beliefs was a significant predictor. For the boat turn-back policy, greater endorsement of false beliefs was the strongest predictor, although increases in prejudice and a prejudice–political ideology interaction (i.e., the predictive value of prejudice was stronger for participants who identified as politically conservative) also independently predicted support. Practical implications and future research avenues are discussed.
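
The prejudice–political ideology interaction described above can be sketched as a moderated regression: the predictor, the moderator, and their product term all enter the model. The variable names and simulated data below are hypothetical illustrations, not the survey data.

```python
# Moderation sketch: does conservatism strengthen the prejudice -> policy
# support relation? Simulated data for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n = 200
prejudice = rng.normal(size=n)
conservatism = rng.normal(size=n)

# Simulate support where prejudice matters more for conservatives
# (true interaction coefficient = 0.4).
support = (0.3 * prejudice + 0.2 * conservatism
           + 0.4 * prejudice * conservatism
           + rng.normal(scale=0.5, size=n))

# Center predictors, then add the product term as its own column.
pz = prejudice - prejudice.mean()
cz = conservatism - conservatism.mean()
X = np.column_stack([np.ones(n), pz, cz, pz * cz])

beta, *_ = np.linalg.lstsq(X, support, rcond=None)
print("intercept, prejudice, conservatism, interaction:", np.round(beta, 2))
```

A reliably nonzero coefficient on the product column is what "a prejudice–political ideology interaction independently predicted support" means in regression terms.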


2021 ◽  
Vol 6 ◽  
Author(s):  
Magdalena Riedl ◽  
Carsten Schwemmer ◽  
Sandra Ziewiecki ◽  
Lisa M. Ross

Despite increasing information overload in the era of digital communication, influencers manage to draw the attention of their followers with an authentic and casual appearance. Reaching large audiences on social media, they can be considered digital opinion leaders. In the past, they predominantly appeared as experts on topics like fashion, sports, or gaming and used their status to cooperate with brands for marketing purposes. Recently, however, influencers have also turned towards more meaningful and political content. In this article, we share our perspective on the rise of political influencers, using examples of sustainability and related topics covered on Instagram. By applying a qualitative observational approach, we illustrate how influencers make political communication look easy while seamlessly integrating product promotions into their social media feeds. In this context, we discuss positive aspects of political influencers, such as contributions to education and political engagement, but also negative aspects, such as the potential amplification of radical political ideology or conspiracy theories. We conclude by highlighting political influencers as an important topic for conceptual and empirical research in the future.

