Inquisitive but Not Discerning: Deprivation Curiosity is Associated with Excessive Openness to Inaccurate Information

2021 ◽  
Author(s):  
Claire Marie Zedelius ◽  
Madeleine Gross ◽  
Jonathan Schooler

Epistemic curiosity (the desire for knowledge) is considered a catalyst for learning and innovation. The current research reveals another, darker side of curiosity, which emerges when we examine the independent contributions of the two facets that make up epistemic curiosity—interest and deprivation curiosity. In four preregistered studies (collective N = 2020), we show that interest curiosity, a facet of curiosity motivated by the joy of exploration, is associated with traits and abilities that benefit learning. These include general knowledge (Studies 1-4), intellectual humility (Studies 1-4), responsiveness to new information (Studies 1, 3 & 4), and accuracy in distinguishing real and made-up concepts (Studies 1-4). In contrast, deprivation curiosity, which is motivated by the desire to reduce uncertainty, is associated with mistakes and confusion. Individuals high in deprivation curiosity claim familiarity with novel information (Studies 1 & 3) and made-up concepts (Studies 1-4). They find meaning in pseudo-profound and pseudo-scientific “bullshit” (Studies 3 & 4) and are prone to believing and sharing disinformation or “fake news” (Study 4). To make matters worse, they lack intellectual humility (Studies 1-4) and are thus unlikely to recognize their mistakes. We find that these difficulties are not explained by narcissistic self-enhancement (Study 2) or lack of analytic thinking (Study 4), and are only partially accounted for by need for closure (Study 3). We theorize that deprivation curiosity is characterized by an indiscriminate openness to information.

2019 ◽  
Author(s):  
Robert M Ross ◽  
David Gertler Rand ◽  
Gordon Pennycook

Why is misleading partisan content believed and shared? An influential account posits that political partisanship pervasively biases reasoning, such that engaging in analytic thinking exacerbates motivated reasoning and, in turn, the acceptance of hyperpartisan content. Alternatively, it may be that susceptibility to hyperpartisan misinformation is explained by a lack of reasoning. Across two studies using different subject pools (total N = 1977), we had participants assess true, false, and hyperpartisan headlines taken from social media. We found no evidence that analytic thinking was associated with increased polarization for either judgments about the accuracy of the headlines or willingness to share the news content on social media. Instead, analytic thinking was broadly associated with an increased capacity to discern between true headlines and either false or hyperpartisan headlines. These results suggest that reasoning typically helps people differentiate between low and high quality news content, rather than facilitating political bias.
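The discernment analysis described above lends itself to a short illustration. The sketch below is hypothetical, not the authors' analysis code: the file names, column names, and the use of a Cognitive Reflection Test (CRT) score as the analytic-thinking measure are all assumptions.

```python
# Hypothetical sketch of a truth-discernment analysis (Python/pandas).
# Discernment = mean accuracy rating for true headlines minus the mean
# rating for false (or hyperpartisan) headlines, per participant, then
# correlated with an analytic-thinking score.
import pandas as pd

# Assumed input files and column names (illustrative only).
ratings = pd.read_csv("headline_ratings.csv")  # participant_id, headline_type, accuracy_rating
crt = pd.read_csv("crt_scores.csv")            # participant_id, crt_score

# Per-participant mean accuracy rating for each headline type.
means = (ratings
         .pivot_table(index="participant_id",
                      columns="headline_type",
                      values="accuracy_rating",
                      aggfunc="mean")
         .reset_index())

# Truth discernment: belief in true headlines relative to false ones.
means["discernment"] = means["true"] - means["false"]

merged = means.merge(crt, on="participant_id")

# A positive correlation would match the pattern reported in the abstract:
# more analytic thinkers separate true from false headlines more sharply.
print(merged["discernment"].corr(merged["crt_score"]))
```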


2021 ◽  
Author(s):  
Shauna Marie Bowes ◽  
Arber Tasimi

Misinformation is widespread and consequential. Thus, identifying psychological characteristics that might mitigate misinformation susceptibility represents a timely and pragmatically important issue. One construct that may be particularly relevant to misinformation susceptibility is intellectual humility (IH). As such, we examined whether IH is related to less misinformation susceptibility, what aspects of IH best predict misinformation susceptibility, and whether these relations are unique to IH. Across three samples, IH tended to manifest small-to-medium negative relations with misinformation susceptibility (pseudoscience, conspiracy theories, and fake news). IH measures assessing both intrapersonal and interpersonal features tended to be stronger correlates of misinformation susceptibility than measures assessing either intrapersonal or interpersonal features in isolation. These relations tended to remain robust after controlling for covariates (honesty-humility, cognitive reflection, political ideology). Future research should leverage our results to examine whether IH interventions not only reduce misinformation susceptibility but also lessen its appeal for those already committed to misinformation.
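The robustness claim above (IH relates to misinformation susceptibility even after controlling for honesty-humility, cognitive reflection, and political ideology) can be illustrated with a partial correlation. The sketch below residualizes both variables on the covariates and correlates the residuals; the data file and variable names are assumptions, and the abstract does not specify that this exact procedure was used.

```python
# Hypothetical partial-correlation sketch: regress intellectual humility (IH)
# and misinformation susceptibility on the covariates, then correlate the
# residuals. Variable names are assumptions, not taken from the studies.
import numpy as np
import pandas as pd

df = pd.read_csv("sample1.csv")  # assumed columns: ih, misinfo, hh, crt, ideology

def residualize(y, covariates):
    """Return residuals of y after an OLS regression on the covariates."""
    X = np.column_stack([np.ones(len(y)), covariates])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

covs = df[["hh", "crt", "ideology"]].to_numpy()
ih_resid = residualize(df["ih"].to_numpy(), covs)
misinfo_resid = residualize(df["misinfo"].to_numpy(), covs)

# A small-to-medium negative value would mirror the relations reported above.
partial_r = np.corrcoef(ih_resid, misinfo_resid)[0, 1]
print(f"partial r = {partial_r:.2f}")
```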


2001 ◽  
Vol 95 (2) ◽  
pp. 379-396 ◽  
Author(s):  
Martin Gilens

In contrast with the expectations of many analysts, I find that raw policy-specific facts, such as the direction of change in the crime rate or the amount of the federal budget devoted to foreign aid, have a significant influence on the public’s political judgments. Using both traditional survey methods and survey-based randomized experiments, I show that ignorance of policy-specific information leads many Americans to hold political views different from those they would hold otherwise. I also show that the effect of policy-specific information is not adequately captured by the measures of general political knowledge used in previous research. Finally, I show that the effect of policy-specific ignorance is greatest for Americans with the highest levels of political knowledge. Rather than serve to dilute the influence of new information, general knowledge (and the cognitive capacities it reflects) appears to facilitate the incorporation of new policy-specific information into political judgments.


Author(s):  
Aumyo Hassan ◽  
Sarah J. Barber

Repeated information is often perceived as more truthful than new information. This finding is known as the illusory truth effect, and it is typically thought to occur because repetition increases processing fluency. Because fluency and truth are frequently correlated in the real world, people learn to use processing fluency as a marker for truthfulness. Although the illusory truth effect is a robust phenomenon, almost all studies examining it have used three or fewer repetitions. To address this limitation, we conducted two experiments using a larger number of repetitions. In Experiment 1, we showed participants trivia statements up to 9 times; in Experiment 2, statements were shown up to 27 times. Later, participants rated the truthfulness of the previously seen statements and of new statements. In both experiments, we found that perceived truthfulness increased as the number of repetitions increased. However, these increases in truth ratings were logarithmic in shape. The largest increase in perceived truth came from encountering a statement for the second time, and beyond this there were incrementally smaller increases in perceived truth for each additional repetition. These findings add to our theoretical understanding of the illusory truth effect and have applications for advertising, politics, and the propagation of “fake news.”
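The central quantitative claim here is that perceived truth rises with the logarithm of the number of repetitions. The following sketch fits that functional form; the rating values are simulated for illustration and are not data from the two experiments.

```python
# Illustration of a logarithmic repetition-truth curve: each additional
# exposure adds less perceived truth than the one before.
# The mean ratings below are simulated, not taken from the studies.
import numpy as np

repetitions = np.array([1, 2, 3, 9, 27])
mean_truth = np.array([3.9, 4.4, 4.6, 4.9, 5.1])  # simulated ratings on a 1-7 scale

# Fit truth = intercept + slope * ln(repetitions).
slope, intercept = np.polyfit(np.log(repetitions), mean_truth, 1)
print(f"rating ~ {intercept:.2f} + {slope:.2f} * ln(repetitions)")

# The fitted curve implies the biggest jump comes from the second exposure,
# with diminishing gains for each later repetition.
gain_1_to_2 = slope * (np.log(2) - np.log(1))      # first -> second exposure
gain_26_to_27 = slope * (np.log(27) - np.log(26))  # 26th -> 27th exposure
print(gain_1_to_2, gain_26_to_27)  # the early gain is far larger
```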


2020 ◽  
Author(s):  
Jonathon McPhetres ◽  
David Gertler Rand ◽  
Gordon Pennycook

A major focus of current research is understanding why people fall for and share fake news on social media. While much research focuses on understanding the role of personality-level traits for those who share the news, such as partisanship and analytic thinking, characteristics of the articles themselves have not been studied. Across two pre-registered studies, we examined whether character deprecation headlines—headlines designed to deprecate someone’s character, but which have no impact on policy or legislation—increased the likelihood of self-reported sharing on social media. In Study 1 we harvested fake news from online sources and compared sharing intentions between Republicans and Democrats. Results showed that Republicans, compared to Democrats, had greater intention to share character-deprecation headlines than news with policy implications. We then applied these findings experimentally. In Study 2 we developed a set of fake news headlines matched for content across pro-Democratic and pro-Republican versions and across news focusing on a specific person (e.g., Trump) versus a generic person (e.g., a Republican). We found that, contrary to Study 1, Republicans were no more inclined toward character deprecation than Democrats. Taken together, these findings suggest that while character assassination may be a feature of pro-Republican news, it is not more attractive to Republicans than to Democrats. News with policy implications, whether fake or real, seems consistently more attractive to members of both parties regardless of whether it attempts to deprecate an opponent’s character. Thus, character deprecation in fake news may be in supply, but not in demand.


Author(s):  
Volodymyr Bazylevych ◽  
Maria Prybytko

Urgency of the research. Today, the task of analyzing the veracity of information in the news, which has filled all existing channels for obtaining information, is highly relevant. Its urgency stems from the need to prevent panic caused by inaccurate information, to debunk pseudo-scientific claims that can threaten people's lives, to combat political propaganda, and more. Target setting. This article focuses on the concept of developing a system for detecting fake news, the analysis of existing systems and their principles of operation, the principles of construction of their algorithms, and the features of their use. Actual scientific researches and issues analysis. Recent open publications, statistics, and corporate reports were reviewed. Uninvestigated parts of general matters defining. The file analysis is performed using three methods/classifiers, without the use of the PassiveAggressive classifier. Results are calculated and reported by constructing confusion matrices and computing accuracy. The research objective. The main purpose of the work is to create a system for detecting fake news on the basis of the considered materials and to achieve the highest possible accuracy. Presenting main material. Input data for the study were selected, prepared and analyzed. The data were studied using the Logistic Regression, Decision Tree and Random Forest methods/classifiers. The accuracy of detecting fake news was calculated. Conclusions. The proposed system classifies news as “fake” or “true” with an accuracy of 98-99%.
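A minimal sketch of the kind of pipeline the abstract describes is given below: labelled news texts are vectorized, and Logistic Regression, Decision Tree, and Random Forest classifiers are trained and scored with an accuracy value and a confusion matrix each. The TF-IDF feature extraction, the file name, and the column names are assumptions; the abstract does not state which text representation was used.

```python
# Sketch of a fake-news detection pipeline with the three classifiers named
# in the abstract. File name, column names, and TF-IDF features are assumed.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, confusion_matrix

df = pd.read_csv("news.csv")  # assumed columns: "text" and "label" ("fake"/"true")
X_train, X_test, y_train, y_test = train_test_split(
    df["text"], df["label"], test_size=0.2, random_state=42)

# Turn raw text into sparse TF-IDF features.
vectorizer = TfidfVectorizer(stop_words="english", max_df=0.7)
X_train_vec = vectorizer.fit_transform(X_train)
X_test_vec = vectorizer.transform(X_test)

models = {
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "Decision Tree": DecisionTreeClassifier(random_state=42),
    "Random Forest": RandomForestClassifier(n_estimators=100, random_state=42),
}

# Fit each classifier and report accuracy plus the confusion matrix,
# mirroring the evaluation described in the abstract.
for name, model in models.items():
    model.fit(X_train_vec, y_train)
    preds = model.predict(X_test_vec)
    print(name, round(accuracy_score(y_test, preds), 4))
    print(confusion_matrix(y_test, preds))
```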


Author(s):  
Kam Hou Vat

The last decade of the 20th century saw explosive growth in discussions about knowledge—knowledge work, knowledge management, knowledge-based organizations, and the knowledge economy (Cortada & Woods, 2000). Against this backdrop, enterprises, including educational institutes, are challenged to do things faster, better, and more cost-effectively in order to remain competitive in an increasingly global environment (Stalk, Evans & Shulman, 1992). There is a strong need to share knowledge in a way that makes it easier for individuals, teams, and enterprises to work together to effectively contribute to an organization’s success. This idea of knowledge sharing has been well exemplified in the notion of a learning organization (LO) (Senge, 1990; Garvin, 1993; King, 1996; Levine, 2001). Essentially, a learning organization could be considered as an organization that focuses on developing and using its information and knowledge capabilities in order to create higher-value information and knowledge, to modify behaviors to reflect new knowledge and insights, and to improve bottom-line results.

Consequently, there are many possible instances of information system (IS) design and realization that could be incorporated into a learning organization. The acronym “LOIS” (Learning Organization Information System) (Williamson & Lliopoulos, 2001) as applied to an organization is often used as a collective term representing the conglomeration of various information systems, each of which, being a functionally defined subsystem of the enterprise LOIS, is distinguished through the services it renders. For example, if a LOIS could support structured and unstructured dialogue and negotiation among the organizational members, then the LOIS subsystems might need to support reflection and creative synthesis of information and knowledge, and thus integrate working and learning. Also, if each member of an organization is believed to possess his or her own knowledge space, which is subject to some level of description and thus may be integrated into an organization’s communal knowledge space (Wiig, 1993; Davenport & Prusak, 1998; Levine, 2001), the LOIS subsystems should help document information and knowledge as it builds up, say, by electronic journals. Or, they have to make recorded information and knowledge retrievable, and individuals with information and knowledge accessible.

Collectively, a LOIS can be considered as a scheme to improve the organization’s chances for success and survival by continuously adapting to the external environment. That way, we stand a better chance of increasing social participation and shared understanding within the enterprise, and thus foster better learning. More importantly, the philosophy underlying the LOIS design should recognize that our knowledge is the amassed thought and experience of innumerable minds, and LOIS helps capture and reuse those experiences and insights in the enterprise. Indeed, the cultivation of an organization’s communal knowledge space—one that develops new forms of knowledge from that which exists among its members, based on seeing knowledge as a social phenomenon, and not merely as a ‘thing’—is fundamental to enterprises that intend to establish, grow, and nurture a learning organization, be it physical or digital (Hackbarth & Groven, 1999), where individuals grow intellectually and expand their knowledge by unlearning inaccurate information and relearning new information.
The theme of this article is to examine the knowledge processes required of the learning organization, viewed from the community-of-practice viewpoint, and to develop and sustain the communal knowledge space through suitable LOIS support so as to expand an organization’s capacity to adapt to future challenges.


Author(s):  
Hicham Hage ◽  
Esma Aïmeur ◽  
Amel Guedidi

While fake and distorted information has been part of our history, new information and communication technologies have tremendously increased its reach and proliferation speed. Indeed, in current days, fake news has become a global issue, prompting reactions from both researchers and legislators in an attempt to solve this problem. However, fake news and misinformation are part of the larger landscape of online deception. Specifically, the purpose of this chapter is to present an overview of online deception to better frame and understand the problem of fake news. In detail, this chapter offers a brief introduction to social networking sites, highlights the major factors that render individuals more susceptible to manipulation and deception, and details common manipulation and deception techniques, how they are actively used in online attacks, and their common countermeasures. The chapter concludes with a discussion on the double role of artificial intelligence in countering as well as creating fake news.


Author(s):  
Nathan Herdener ◽  
Benjamin A. Clegg ◽  
Christopher D. Wickens ◽  
C. A. P. Smith

Objective: The aim of this study was to explore the impact of prior information on spatial prediction and understanding of variability. Background: In uncertain spatial prediction tasks, such as hurricane forecasting or planning search-and-rescue operations, decision makers must consider the most likely case and the distribution of possible outcomes. Base performance on these tasks is varied (and in the case of understanding the distribution, often poor). Humans must update mental models and predictions with new information, sometimes under cognitive workload. Method: In a spatial-trajectory prediction task, participants were anchored on accurate or inaccurate information, or not anchored, regarding the future behavior of an object (both average behavior and the variability). Subsequently, they predicted an object’s future location and estimated its likelihood at multiple locations. In a second experiment, participants repeated the process under varying levels of external cognitive workload. Results: Anchoring influenced understanding of most likely predicted location, with fairly rapid adjustment following inaccurate anchors. Increasing workload resulted in decreased overall performance and an impact on the adjustment component of the task. Overconfidence was present in all conditions. Conclusion: Prior information exerted short-term influence on spatial predictions. Cognitive load impaired users’ ability to effectively adjust to new information. Accurate graphical anchors did not improve user understanding of variability. Application: Prior briefings or forecasts about spatiotemporal trajectories affect decisions even in the face of initial contradictory information. To best support spatial prediction tasks, efforts also need to be made to separate extraneous load-causing tasks from the process of integrating new information. Implications are discussed.


2020 ◽  
Vol 177 (1) ◽  
pp. 125-131 ◽  
Author(s):  
Usha M Rodrigues ◽  
Jian Xu

During the recent outbreak of coronavirus, concern about the proliferation of misleading information, rumours and myths has caused governments across the world to institute various interventionist steps to stem their flow. Each government has had to balance the dichotomy between freedom of expression and people’s right to be safe from the adverse impact of inaccurate information. Governments across the world have implemented a number of strategies to manage COVID-19, including issuing public advisories, running advertising campaigns, holding press conferences and instituting punitive regulations to combat the distribution of false and misleading information. We examine the two most populous countries’ governments’ response to the scourge of fake news during COVID-19. China and India are the most challenging nations to govern in terms of the sheer size and diversity of their populations. Each country’s government has taken several steps, within its own political system, to minimise the impact of fake news during COVID-19.

