A Bayesian-Rational Framework for Conspiracy Theories

2021 ◽  
Author(s):  
Gabriel Doyle

In our present era of fractured politics, social media, and fake news, conspiracy theories are as prominent as ever. While conspiracy theories are often dismissed as pathological or irrational reasoning, belief in at least some conspiracy theories could arise from a Bayesian-rational system that is merely wrong, rather than truly irrational. This paper lays out a framework for understanding how conspiracy theories could arise from rational Bayesian cognition, identifying four potential sources of conspiracy theory belief within a primarily rational framework: elevated prior beliefs in conspiracy theories, different likelihoods, missing non-conspiratorial explanations, and non-epistemic utilities.
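The sketch below is a minimal, hypothetical illustration of the kind of Bayesian reasoning the framework describes: it is not taken from the paper, and all hypotheses, priors, and likelihood values are invented placeholders. It shows how two of the four sources named above (an elevated prior and a missing non-conspiratorial explanation) can push a fully Bayesian posterior toward a conspiracy hypothesis without any departure from Bayes' rule.

```python
# Hypothetical illustration (not from the paper): Bayesian updating over a
# conspiracy hypothesis and some mundane alternatives, showing how an elevated
# prior or a missing non-conspiratorial explanation can raise the posterior
# probability of the conspiracy while the updating itself stays fully rational.

def posterior(priors, likelihoods):
    """Return P(hypothesis | evidence) for each hypothesis via Bayes' rule."""
    joint = {h: priors[h] * likelihoods[h] for h in priors}
    z = sum(joint.values())
    return {h: p / z for h, p in joint.items()}

# Evidence: "officials gave inconsistent statements about the event".
likelihoods = {"conspiracy": 0.9, "incompetence": 0.7, "coincidence": 0.3}

# A reasoner with a modest prior and all explanations available.
full = posterior({"conspiracy": 0.05, "incompetence": 0.60, "coincidence": 0.35},
                 likelihoods)

# Source 1: an elevated prior belief in conspiracies.
high_prior = posterior({"conspiracy": 0.40, "incompetence": 0.40, "coincidence": 0.20},
                       likelihoods)

# Source 3: the non-conspiratorial explanation "incompetence" is missing from
# the hypothesis space, so its probability mass is never considered.
missing_alt = posterior({"conspiracy": 0.125, "coincidence": 0.875},
                        {"conspiracy": 0.9, "coincidence": 0.3})

for label, dist in [("full hypothesis space", full),
                    ("elevated prior", high_prior),
                    ("missing alternative", missing_alt)]:
    print(f"{label:>22}: P(conspiracy | evidence) = {dist['conspiracy']:.2f}")
```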

Author(s):  
Giandomenico Di Domenico ◽  
Annamaria Tuan ◽  
Marco Visentin

In the wake of the COVID-19 pandemic, unprecedented amounts of fake news and hoaxes spread on social media. In particular, conspiracy theories about the effects of new technologies such as 5G circulated widely, and misinformation tarnished the reputation of brands like Huawei. Language plays a crucial role in understanding the motivational determinants of social media users in sharing misinformation, as people extract meaning from information based on their discursive resources and their skillset. In this paper, we analyze textual and non-textual cues from a panel of 4923 tweets containing the hashtags #5G and #Huawei during the first week of May 2020, when several countries were still under lockdown measures, to determine whether a tweet is retweeted and, if so, how much it is retweeted. Overall, using traditional logistic regression and machine learning, we find different effects of the textual and non-textual cues on whether a tweet is retweeted and on its ability to accumulate retweets. In particular, the presence of misinformation plays an interesting role in spreading the tweet through the network. More importantly, the relative influence of the cues suggests that Twitter users actually read a tweet but do not necessarily understand or critically evaluate it before deciding to share it on the platform.
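As a rough sketch of the first modeling step described above, the snippet below fits a logistic regression predicting whether a tweet is retweeted at all from a mix of textual and non-textual cues. The file name, column names, and feature set are hypothetical stand-ins, not the authors' actual variables; modeling how much a tweet is retweeted would additionally require a count model (e.g., negative binomial), which is omitted here.

```python
# Hypothetical sketch: logistic regression on tweet-level cues predicting
# whether a tweet is retweeted. Feature names and data loading are assumptions.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Assumed input: one row per tweet with pre-computed cues.
tweets = pd.read_csv("tweets_5g_huawei.csv")            # hypothetical file
features = ["contains_misinformation", "sentiment",     # textual cues
            "has_url", "has_media", "follower_count"]   # non-textual cues
X = tweets[features]
y = (tweets["retweet_count"] > 0).astype(int)            # retweeted or not

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))

# Coefficients indicate how each cue shifts the log-odds of being retweeted.
print(dict(zip(features, model.coef_[0].round(3))))
```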


Leadership ◽  
2019 ◽  
Vol 15 (2) ◽  
pp. 135-151 ◽  
Author(s):  
Hamid Foroughi ◽  
Yiannis Gabriel ◽  
Marianna Fotaki

This essay, and the special issue it introduces, seeks to explore leadership in a post-truth age, focusing in particular on the types of narratives and counter-narratives that characterize it and at times dominate it. We first examine the factors that are often held responsible for the rise of post-truth in politics, including the rise of relativist and postmodernist ideas, dishonest leaders and bullshit artists, the digital revolution and social media, the 2008 economic crisis, and the collapse of public trust. We develop the idea that different historical periods are characterized by specific narrative ecologies, which, by analogy to natural ecologies, can be viewed as spaces where different types of narrative and counter-narrative emerge, interact, compete, adapt, develop and die. We single out some of the dominant narrative types that characterize post-truth narrative ecologies and highlight the ability of language to ‘do things with words’, which supports both the production of ‘fake news’ and a type of narcissistic leadership that thrives in these narrative ecologies. We then examine leadership in post-truth politics more widely, focusing on the resurgence of populist and demagogic types along with the narratives that have made these types highly effective in our times. These include nostalgic narratives idealizing a fictional past and conspiracy theories aimed at arousing fears about a dangerous future.


10.2196/19458 ◽  
2020 ◽  
Vol 22 (5) ◽  
pp. e19458 ◽  
Author(s):  
Wasim Ahmed ◽  
Josep Vidal-Alaball ◽  
Joseph Downing ◽  
Francesc López Seguí

Background Since the beginning of December 2019, the coronavirus disease (COVID-19) has spread rapidly around the world, which has led to increased discussions across online platforms. These conversations have also included various conspiracies shared by social media users. Amongst them, a popular theory has linked 5G to the spread of COVID-19, leading to misinformation and the burning of 5G towers in the United Kingdom. Understanding the drivers of fake news, together with quick policies oriented to isolate and rebut misinformation, is key to combating it. Objective The aim of this study is to develop an understanding of the drivers of the 5G COVID-19 conspiracy theory and strategies to deal with such misinformation. Methods This paper performs a social network analysis and content analysis of Twitter data from a 7-day period (Friday, March 27, 2020, to Saturday, April 4, 2020) in which the #5GCoronavirus hashtag was trending on Twitter in the United Kingdom. Influential users were analyzed through social network graph clusters. Node sizes were ranked by betweenness centrality score, and the graph’s vertices were grouped into clusters using the Clauset-Newman-Moore algorithm. The topics and web sources used were also examined. Results Social network analysis identified that the two largest network structures consisted of an isolates group and a broadcast group. The analysis also revealed a lack of an authority figure actively combating such misinformation. Content analysis revealed that, of 233 sample tweets, 34.8% (n=81) contained views that 5G and COVID-19 were linked, 32.2% (n=75) denounced the conspiracy theory, and 33.0% (n=77) were general tweets not expressing any personal views or opinions. Thus, 65.2% (n=152) of tweets derived from users who did not support the conspiracy theory, which suggests that, although the topic attracted a high volume of tweets, only a handful of users genuinely believed the conspiracy. This paper also shows that fake news websites were the most popular web sources shared by users, although YouTube videos were also shared. The study also identified an account whose sole aim was to spread the conspiracy theory on Twitter. Conclusions The combination of quick and targeted interventions oriented to delegitimize the sources of fake information is key to reducing their impact. Users voicing their views against the conspiracy theory, link baiting, or sharing humorous tweets inadvertently raised the profile of the topic, suggesting that policymakers should persist in efforts to isolate opinions that are based on fake news. Many social media platforms provide users with the ability to report inappropriate content, which should be used. This study is the first to analyze the 5G conspiracy theory in the context of COVID-19 on Twitter, offering practical guidance to health authorities on how rumors may be combated during a future pandemic.
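A minimal sketch of the two network-analysis steps named in the Methods, using networkx. The edge list is an assumed placeholder for the retweet/mention interactions extracted from #5GCoronavirus tweets; the clustering step uses networkx's greedy modularity maximization, which implements the Clauset-Newman-Moore algorithm mentioned above.

```python
# Sketch of the network-analysis steps described in the abstract.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Assumed input: (retweeting_user, retweeted_user) pairs from #5GCoronavirus tweets.
edges = [("user_a", "user_b"), ("user_c", "user_b"), ("user_d", "user_e")]
G = nx.DiGraph(edges)

# Rank users by betweenness centrality (used to size nodes in the graph).
betweenness = nx.betweenness_centrality(G)
influential = sorted(betweenness, key=betweenness.get, reverse=True)[:10]

# Group vertices into clusters via Clauset-Newman-Moore greedy modularity maximization.
communities = greedy_modularity_communities(G.to_undirected())

print("Most influential users:", influential)
print("Number of clusters:", len(communities))
print("Largest cluster size:", len(max(communities, key=len)))
```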


Author(s):  
Cristina Pulido Rodríguez ◽  
Beatriz Villarejo Carballido ◽  
Gisela Redondo-Sama ◽  
Mengna Guo ◽  
Mimar Ramis ◽  
...  

Since the coronavirus health emergency was declared, a great deal of fake news has circulated around this topic, including rumours, conspiracy theories and myths. According to the World Economic Forum, fake news is one of the threats to today's societies, since this type of information circulates fast and is often inaccurate and misleading. Moreover, fake news is shared far more than evidence-based news among social media users, which can potentially lead to decisions that do not serve the individual’s best interest. Drawing on this evidence, the present study aims to compare the types of Tweets and Sina Weibo posts regarding COVID-19 that contain either false or scientifically veracious information. To that end, 1923 messages from each social media platform were retrieved, classified and compared. Results show that more false news is published and shared on Twitter than on Sina Weibo; at the same time, science-based evidence is shared more on Twitter than on Weibo, but less than false news. This stresses the need to find effective practices to limit the circulation of false information.
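One way to formalize the platform comparison described above is a chi-square test on the counts of false versus science-based messages per platform. The abstract does not name a specific test, so the sketch below is only an assumed analysis, and the counts are placeholders rather than the study's figures.

```python
# Hypothetical sketch of a platform comparison: chi-square test on the share of
# false vs. science-based messages on Twitter and Sina Weibo. Counts are placeholders.
from scipy.stats import chi2_contingency

#              false  science-based
counts = [[900, 450],   # Twitter    (hypothetical counts out of 1923 messages)
          [600, 520]]   # Sina Weibo (hypothetical counts out of 1923 messages)

chi2, p_value, dof, expected = chi2_contingency(counts)
print(f"chi2 = {chi2:.2f}, p = {p_value:.4f}")

for platform, (false_n, science_n) in zip(["Twitter", "Sina Weibo"], counts):
    total = false_n + science_n
    print(f"{platform}: {false_n/total:.1%} false, {science_n/total:.1%} science-based")
```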


10.2196/26527 ◽  
2021 ◽  
Vol 7 (4) ◽  
pp. e26527
Author(s):  
Dax Gerts ◽  
Courtney D Shelley ◽  
Nidhi Parikh ◽  
Travis Pitts ◽  
Chrysm Watson Ross ◽  
...  

Background The COVID-19 outbreak has left many people isolated within their homes; these people are turning to social media for news and social connection, which leaves them vulnerable to believing and sharing misinformation. Health-related misinformation threatens adherence to public health messaging, and monitoring its spread on social media is critical to understanding the evolution of ideas that have potentially negative public health impacts. Objective The aim of this study is to use Twitter data to explore methods to characterize and classify four COVID-19 conspiracy theories and to provide context for each of these conspiracy theories through the first 5 months of the pandemic. Methods We began with a corpus of COVID-19 tweets (approximately 120 million) spanning late January to early May 2020. We first filtered tweets using regular expressions (n=1.8 million) and used random forest classification models to identify tweets related to four conspiracy theories. Our classified data sets were then used in downstream sentiment analysis and dynamic topic modeling to characterize the linguistic features of COVID-19 conspiracy theories as they evolve over time. Results Analysis using model-labeled data was beneficial for increasing the proportion of data matching misinformation indicators. Random forest classifier metrics varied across the four conspiracy theories considered (F1 scores between 0.347 and 0.857); this performance increased as the given conspiracy theory was more narrowly defined. We showed that misinformation tweets demonstrate more negative sentiment when compared to nonmisinformation tweets and that theories evolve over time, incorporating details from unrelated conspiracy theories as well as real-world events. Conclusions Although we focus here on health-related misinformation, this combination of approaches is not specific to public health and is valuable for characterizing misinformation in general, which is an important first step in creating targeted messaging to counteract its spread. Initial messaging should aim to preempt generalized misinformation before it becomes widespread, while later messaging will need to target evolving conspiracy theories and the new facets of each as they become incorporated.
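The snippet below sketches the first two stages of the pipeline described in the Methods: regular-expression filtering of a tweet corpus, followed by a random forest classifier for one conspiracy theory. The regex pattern, file name, column names, and TF-IDF featurization are assumptions for illustration; the authors' actual filters, features, and labeling setup may differ.

```python
# Sketch (with assumed inputs) of regex filtering plus random forest classification.
import re
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import cross_val_score

tweets = pd.read_csv("covid_tweets.csv")                          # hypothetical corpus
pattern = re.compile(r"\b5g\b.*\b(covid|corona)", re.IGNORECASE)  # hypothetical filter
candidates = tweets[tweets["text"].str.contains(pattern, na=False)]

# A hand-labeled subset (hypothetical "is_misinfo" column) trains the classifier,
# which then labels the remaining filtered tweets.
labeled = candidates.dropna(subset=["is_misinfo"])
vectorizer = TfidfVectorizer(max_features=5000, ngram_range=(1, 2))
X = vectorizer.fit_transform(labeled["text"])
y = labeled["is_misinfo"].astype(int)

clf = RandomForestClassifier(n_estimators=300, random_state=0)
print("F1 (5-fold):", cross_val_score(clf, X, y, cv=5, scoring="f1").mean())

clf.fit(X, y)
unlabeled = candidates[candidates["is_misinfo"].isna()]
predictions = clf.predict(vectorizer.transform(unlabeled["text"]))
```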


Author(s):  
K. V. Asmolov ◽  

April 16, 2020 marked the sixth anniversary of the Sewol ferry disaster. The tragic event stirred South Korean society and gave rise to numerous rumors and propaganda myths, which formed the basis for the so-called “candle revolution”. They included both a conspiracy theory about the disaster itself and the thesis that the main cause of the children's deaths was the criminal inaction of the corrupt Park Geun-hye administration and of the ex-President herself. The examination of the remains of the ferry, raised in 2017 after Park's impeachment, put an end to the conspiracy theories (an explosion on board, a collision with a US submarine, etc.). Nevertheless, the question “what went wrong?” remains unanswered despite politically committed investigations. Meanwhile, most of the rumors circulating after the tragedy that brought people to the streets were never proven. Moreover, Park Geun-hye was acquitted by the court of the charges related to the Sewol. The facts show that the blame for the great number of victims lies not with the Blue House but with the local authorities, who were unable to conduct rescue operations effectively and later openly practiced window-dressing and misinformed their superiors. The President therefore did not receive timely information that would have required a rapid response on her part. As the case involved the province of Jeolla (a regional stronghold of the Democrats), the opposition, voicing a storm of criticism, had an interest in shifting the blame from the local authorities to the central government, and it succeeded in doing so. The information campaign of Park Geun-hye's opponents contributed to her downfall and to the formation in the mass consciousness of an image of the former President that played its role in later developments.


2021 ◽  
pp. 016555152098548
Author(s):  
Anastasia Giachanou ◽  
Bilal Ghanem ◽  
Paolo Rosso

The rise of social media has offered a fast and easy way for the propagation of conspiracy theories and other types of disinformation. Despite the research attention it has received, fake news detection remains an open problem, and users keep sharing articles that contain false statements but which they consider real. In this article, we focus on the role of users in the propagation of conspiracy theories, a specific type of disinformation. First, we compare the profile and psycho-linguistic patterns of online users who tend to propagate posts that support conspiracy theories and of those who propagate posts that refute them. To this end, we perform a comparative analysis over various profile, psychological and linguistic characteristics using the social media texts of users who share posts about conspiracy theories. Then, we compare the effectiveness of those characteristics for predicting whether a user is a conspiracy propagator or not. In addition, we propose ConspiDetector, a model based on a convolutional neural network (CNN) that combines word embeddings with psycho-linguistic characteristics extracted from users' tweets to detect conspiracy propagators. The results show that ConspiDetector improves performance in detecting conspiracy propagators by 8.82% over the CNN baseline in terms of the F1 metric.
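The sketch below shows one plausible shape for a ConspiDetector-style architecture as described above: a CNN over word embeddings whose pooled output is concatenated with a vector of psycho-linguistic features before classification. The layer sizes, filter widths, feature dimensionality, and the exact way the two representations are combined are illustrative assumptions, not the authors' configuration.

```python
# Minimal PyTorch sketch (assumed architecture) of a CNN text model combined
# with psycho-linguistic features for conspiracy-propagator detection.
import torch
import torch.nn as nn

class ConspiracyPropagatorCNN(nn.Module):
    def __init__(self, vocab_size, embed_dim=100, n_filters=100,
                 kernel_sizes=(3, 4, 5), n_psycho_features=20):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.convs = nn.ModuleList(
            nn.Conv1d(embed_dim, n_filters, k) for k in kernel_sizes)
        self.classifier = nn.Sequential(
            nn.Linear(n_filters * len(kernel_sizes) + n_psycho_features, 64),
            nn.ReLU(),
            nn.Linear(64, 1))

    def forward(self, token_ids, psycho_features):
        # token_ids: (batch, seq_len); psycho_features: (batch, n_psycho_features)
        x = self.embedding(token_ids).transpose(1, 2)        # (batch, embed_dim, seq_len)
        pooled = [conv(x).relu().max(dim=2).values for conv in self.convs]
        text_repr = torch.cat(pooled, dim=1)                  # CNN text representation
        combined = torch.cat([text_repr, psycho_features], dim=1)
        return self.classifier(combined)                       # logit: propagator or not

model = ConspiracyPropagatorCNN(vocab_size=30000)
logits = model(torch.randint(0, 30000, (8, 50)), torch.rand(8, 20))
print(logits.shape)  # torch.Size([8, 1])
```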


2021 ◽  
Vol 82 ◽  
pp. 159-182
Author(s):  
Maris Kuperjanov ◽  

The aim of the article is to give an overview of the first month of the novel coronavirus outbreak and of the public reactions to the news in media comments and social media environments, in both local Estonian and global contexts. The pandemic was still ongoing at the time the article was published and, with some modifications and new emphases, vernacular reactions in the media (including social media) continued to flourish. During the first month (January 2020), the growing flow of information and the rapid escalation of the situation made the topic more noticeable in both the media and social media, and thus provided a fertile basis for jokes and internet memes, legends, fake news, misinformation, conspiracy theories, etc., as was the case with earlier major epidemics and pandemics. As has also been observed previously, the consequences of some fake news, misinformation and conspiracy theories may often be more harmful for society than the disease itself. Several motifs and storylines are universal and recur as similar situations arise, both in Estonia and all over the world. The article also presents a selection of the more prominent topics and examples from social media environments during the initial phase of international awareness of the novel coronavirus.


Author(s):  
Ondřej Procházka ◽  
Jan Blommaert

Conspiracy theories are often disqualified as inadequate and deliberate forms of misinformation. In this analysis, we engage with a specific case, the conspiracy theory developed on an online New Right forum called Q about the so-called “MAGA Kid incident”, with a focus on its circulation and uptake on Facebook. Drawing on ethnomethodological principles, the analysis shows how ergoic argumentation is systematically deployed as a means of debunking rational-factual discourses about such incidents. While rationality itself is rejected, conspiracy theorists deploy “reasonable” knowledge tactics. The paper shows how conspiracy theorists skillfully mobilize social media affordances, particularly Internet memes, to promote conspiracism as a form of inclusive political activism as well as a legitimate and “critical” mode of reasoning.

