Detection of conspiracy propagators using psycho-linguistic characteristics

2021 ◽  
pp. 016555152098548
Author(s):  
Anastasia Giachanou ◽  
Bilal Ghanem ◽  
Paolo Rosso

The rise of social media has offered a fast and easy way for the propagation of conspiracy theories and other types of disinformation. Despite the research attention it has received, fake news detection remains an open problem, and users keep sharing articles that contain false statements but which they consider real. In this article, we focus on the role of users in the propagation of conspiracy theories, a specific type of disinformation. First, we compare the profile and psycho-linguistic patterns of online users who tend to propagate posts that support conspiracy theories and of those who propagate posts that refute them. To this end, we perform a comparative analysis over various profile, psychological and linguistic characteristics using the social media texts of users who share posts about conspiracy theories. Then, we compare the effectiveness of those characteristics for predicting whether a user is a conspiracy propagator or not. In addition, we propose ConspiDetector, a model based on a convolutional neural network (CNN) that combines word embeddings with psycho-linguistic characteristics extracted from users' tweets to detect conspiracy propagators. The results show that ConspiDetector improves performance in detecting conspiracy propagators by 8.82% over the CNN baseline in terms of F1 score.
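
The abstract gives no implementation details, but a minimal sketch of the kind of architecture described (a CNN over word embeddings whose pooled text representation is concatenated with psycho-linguistic feature scores before classification) might look as follows; layer sizes, feature counts, and all names are illustrative assumptions, not the authors' settings:

```python
# Minimal sketch: a CNN text classifier that concatenates convolutional
# word-embedding features with a vector of psycho-linguistic scores.
# Hyper-parameters and names are illustrative, not the paper's.
import torch
import torch.nn as nn

class ConspiracyPropagatorCNN(nn.Module):
    def __init__(self, vocab_size, embed_dim=300, n_filters=100,
                 kernel_sizes=(3, 4, 5), n_psycho_feats=80, n_classes=2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.convs = nn.ModuleList(
            nn.Conv1d(embed_dim, n_filters, k) for k in kernel_sizes
        )
        self.fc = nn.Linear(n_filters * len(kernel_sizes) + n_psycho_feats,
                            n_classes)

    def forward(self, token_ids, psycho_feats):
        # token_ids: (batch, seq_len); psycho_feats: (batch, n_psycho_feats)
        x = self.embedding(token_ids).transpose(1, 2)  # (batch, embed_dim, seq_len)
        pooled = [torch.relu(conv(x)).max(dim=2).values for conv in self.convs]
        text_repr = torch.cat(pooled, dim=1)           # learned text representation
        combined = torch.cat([text_repr, psycho_feats], dim=1)
        return self.fc(combined)                       # propagator vs. non-propagator logits
```

In this sketch the psycho-linguistic vector (e.g., LIWC-style category scores computed over a user's tweets) bypasses the convolutional layers and is fused only at the classification layer, one straightforward way to combine handcrafted and learned features.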

Author(s):  
Lena Nadarevic ◽  
Rolf Reber ◽  
Anne Josephine Helmecke ◽  
Dilara Köse

To better understand the spread of fake news in the Internet age, it is important to uncover the variables that influence the perceived truth of information. Although previous research has identified several reliable predictors of truth judgments—such as source credibility, repeated information exposure, and presentation format—little is known about their simultaneous effects. In a series of four experiments, we investigated how these factors jointly affect the perceived truth of statements (Experiments 1 and 2) and simulated social media postings (Experiments 3 and 4). Experiment 1 explored the role of source credibility (high vs. low vs. no source information) and presentation format (with vs. without a picture). In Experiments 2 and 3, we additionally manipulated repeated exposure (yes vs. no). Finally, Experiment 4 examined the role of source credibility (high vs. low) and type of repetition (congruent vs. incongruent vs. no repetition) in further detail. In sum, we found no effect of presentation format on truth judgments, but strong, additive effects of source credibility and repetition. Truth judgments were higher for information presented by credible sources than for information from non-credible sources or without a source. Moreover, congruent (i.e., verbatim) repetition increased perceived truth, whereas semantically incongruent repetition decreased perceived truth, irrespective of the source. Our findings show that people do not rely on a single judgment cue when evaluating a statement's truth but take source credibility and their meta-cognitive feelings into account.


Author(s):  
Christian Rudeloff ◽  
Stefanie Pakura ◽  
Fabian Eggers ◽  
Thomas Niemand

This manuscript analyzes start-ups' usage of different communication strategies (information, response, involvement), their underlying decision logics (effectuation, causation, strategy absence) and the respective social media success. A multitude of studies have been published on the decision logics of entrepreneurs as well as on different communication strategies. Decision logics and the corresponding strategies and actions are closely connected. Still, research on the interplay between the two areas is largely missing. This applies in particular to the effect of different decision logics and communication models on social media success. Through a combination of case studies with fuzzy-set Qualitative Comparative Analysis, this exploratory study demonstrates that different combinations of causation and strategy absence can be equally successful in generating social media engagement, whereas effectuation is detrimental to success. Furthermore, we find that two-way communication is essential to create engagement, while an information strategy alone cannot lead to social media success. This study provides new insights into the role of decision logics and connects effectuation theory with the communication literature, a field that has been dominated by causal approaches.


Author(s):  
Giandomenico Di Domenico ◽  
Annamaria Tuan ◽  
Marco Visentin

In the wake of the COVID-19 pandemic, unprecedented amounts of fake news and hoaxes spread on social media. In particular, conspiracy theories about the effects of new technologies like 5G and misinformation tarnished the reputation of brands like Huawei. Language plays a crucial role in understanding the motivational determinants of social media users in sharing misinformation, as people extract meaning from information based on their discursive resources and their skillset. In this paper, we analyze textual and non-textual cues from a panel of 4923 tweets containing the hashtags #5G and #Huawei during the first week of May 2020, when several countries were still adopting lockdown measures, to determine whether or not a tweet is retweeted and, if so, how many retweets it accumulates. Overall, through traditional logistic regression and machine learning, we found different effects of the textual and non-textual cues on whether a tweet is retweeted and on its ability to accumulate retweets. In particular, the presence of misinformation plays an interesting role in spreading the tweet on the network. More importantly, the relative influence of the cues suggests that Twitter users actually read a tweet but do not necessarily understand or critically evaluate it before deciding to share it on the platform.
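
Purely as an illustration of the kind of analysis mentioned above (not the authors' pipeline), a logistic regression on tweet-level cues could be fit as follows; the file name and feature columns are hypothetical:

```python
# Illustrative sketch: logistic regression on hypothetical textual and
# non-textual tweet cues to model whether a tweet is retweeted at all.
# A separate count model could then be fit on retweeted tweets.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Hypothetical cue columns; the paper's actual feature set may differ.
features = ["word_count", "sentiment", "contains_misinformation",
            "has_url", "has_media", "follower_count"]
tweets = pd.read_csv("tweets_5g_huawei.csv")  # assumed file name

X_train, X_test, y_train, y_test = train_test_split(
    tweets[features], tweets["was_retweeted"], test_size=0.2, random_state=42)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
print(dict(zip(features, model.coef_[0])))  # sign/size of each cue's estimated effect
```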


2019 ◽  
Author(s):  
Robert M Ross ◽  
David Gertler Rand ◽  
Gordon Pennycook

Why is misleading partisan content believed and shared? An influential account posits that political partisanship pervasively biases reasoning, such that engaging in analytic thinking exacerbates motivated reasoning and, in turn, the acceptance of hyperpartisan content. Alternatively, it may be that susceptibility to hyperpartisan misinformation is explained by a lack of reasoning. Across two studies using different subject pools (total N = 1977), we had participants assess true, false, and hyperpartisan headlines taken from social media. We found no evidence that analytic thinking was associated with increased polarization for either judgments about the accuracy of the headlines or willingness to share the news content on social media. Instead, analytic thinking was broadly associated with an increased capacity to discern between true headlines and either false or hyperpartisan headlines. These results suggest that reasoning typically helps people differentiate between low- and high-quality news content, rather than facilitating political bias.


2019 ◽  
Vol 82 (1) ◽  
pp. 42-59
Author(s):  
H Van den Bulck ◽  
A Hyzen

This contribution analyses the nexus between contemporary US populist nationalism and the post-global media ecology through the case of US radio show host and ‘most paranoid man in America’ Alex Jones and his Infowars. It evaluates the role of Alt Right alternative/activist media and global digital platforms in the success of Jones as an ideological entrepreneur. To this end, it looks at Jones’ and Infowars’ message (mostly False Flag conspiracy theories and pseudo-science-meets-popular-culture fantasy), persona as celebrity populist spectacle, business model, political alliances with the Alt Right and Trump, audience as a diverse mix of believers and ironic spectators and, most of all, media. In particular, we analyse the mix of legacy and social media and their respective roles in his rise and alleged downfall. We evaluate Jones’ efforts as an effective ideological entrepreneur, pushing his counter-hegemonic ideology from the fringes to the mainstream.


Leadership ◽  
2019 ◽  
Vol 15 (2) ◽  
pp. 135-151 ◽  
Author(s):  
Hamid Foroughi ◽  
Yiannis Gabriel ◽  
Marianna Fotaki

This essay, and the special issue it introduces, seeks to explore leadership in a post-truth age, focusing in particular on the types of narratives and counter-narratives that characterize it and at times dominate it. We first examine the factors that are often held responsible for the rise of post-truth in politics, including the rise of relativist and postmodernist ideas, dishonest leaders and bullshit artists, the digital revolution and social media, and the 2008 economic crisis and collapse of public trust. We develop the idea that different historical periods are characterized by specific narrative ecologies, which, by analogy to natural ecologies, can be viewed as spaces where different types of narrative and counter-narrative emerge, interact, compete, adapt, develop and die. We single out some of the dominant narrative types that characterize post-truth narrative ecologies and highlight the ability of language to ‘do things with words’ that supports both the production of ‘fake news’ and a type of narcissistic leadership that thrives in these narrative ecologies. We then examine leadership in post-truth politics more widely, focusing on the resurgence of populist and demagogical types along with the narratives that have made these types highly effective in our times. These include nostalgic narratives idealizing a fictional past and conspiracy theories aimed at arousing fears about a dangerous future.


Societies ◽  
2021 ◽  
Vol 11 (4) ◽  
pp. 138
Author(s):  
Raluca Buturoiu ◽  
Georgiana Udrea ◽  
Denisa-Adriana Oprea ◽  
Nicoleta Corbu

The current COVID-19 pandemic has been accompanied by the circulation of an unprecedented amount of “polluted” information, especially in the social media environment, among which are false narratives and conspiracy theories about both the pandemic and vaccination against COVID-19. The effects of such questionable information primarily concern the lack of compliance with restrictive measures and a negative attitude towards vaccination campaigns, as well as more complex social effects, such as street protests or distrust in governments and authorities in general. Even though there is a lot of scholarly attention given to these narratives in many countries, research about the profile of people who are more prone to believe or spread them is rather scarce. In this context, we investigate the role of age, compared with other socio-demographic factors (such as education and religiosity), as well as the role of the media (the frequency of news consumption, the perceived usefulness of social media, and the perceived incidence of fake information about the virus in the media) and the critical thinking disposition of people who tend to believe such misleading narratives. To address these issues, we conducted a national survey (N = 945) in April 2021 in Romania. Using a hierarchical OLS regression model, we found that people who perceive a higher incidence of fake news (β = 0.33, p < 0.001), find social media platforms more useful (β = 0.13, p < 0.001), have lower education (β = −0.17, p < 0.001), and have higher levels of religiosity (β = 0.08, p < 0.05) are more prone to believe COVID-19-related misleading narratives. At the same time, the frequency of news consumption (regardless of the type of media), critical thinking disposition, and age do not play a significant role in the profile of the believer in conspiracy theories about the COVID-19 pandemic. Somewhat surprisingly, age does not play a role in predicting belief in conspiracy theories, even though there are studies that suggest that older people are more prone to believe conspiracy narratives. As far as the media are concerned, the frequency of news media consumption does not significantly differ between believers and non-believers. We discuss these results within the context of the COVID-19 pandemic.
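
A hierarchical OLS of the kind reported can be sketched by entering predictor blocks stepwise; the data file and variable names below are placeholders, so this illustrates the procedure rather than the study's actual model:

```python
# Sketch of a hierarchical OLS in the spirit of the analysis above: predictor
# blocks are entered stepwise (socio-demographics, then media variables, then
# critical thinking). File and column names are placeholders, not the survey's
# actual items; standardized betas would additionally require z-scored inputs.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("covid19_survey_romania.csv")  # hypothetical file

formulas = [
    "belief_misleading ~ age + education + religiosity",
    "belief_misleading ~ age + education + religiosity"
    " + news_frequency + social_media_usefulness + perceived_fake_news_incidence",
    "belief_misleading ~ age + education + religiosity"
    " + news_frequency + social_media_usefulness + perceived_fake_news_incidence"
    " + critical_thinking",
]

for i, formula in enumerate(formulas, start=1):
    model = smf.ols(formula, data=df).fit()
    print(f"Block {i}: adj. R^2 = {model.rsquared_adj:.3f}")  # incremental fit per block

print(model.summary())  # coefficients and p-values for the full model
```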


Author(s):  
Feng Qian ◽  
Chengyue Gong ◽  
Karishma Sharma ◽  
Yan Liu

Fake news on social media is a major challenge, and studies have shown that fake news can propagate exponentially quickly in its early stages. Therefore, we focus on early detection of fake news and consider that only the news article text is available at the time of detection, since additional information such as user responses and propagation patterns can be obtained only after the news spreads. However, we find that historical user responses to previous articles are available and can be treated as soft semantic labels that enrich the binary label of an article by providing insights into why the article must be labeled as fake. We propose a novel Two-Level Convolutional Neural Network with User Response Generator (TCNN-URG), where TCNN captures semantic information from article text by representing it at the sentence and word level, and URG learns a generative model of user responses to article text from historical user responses, which it can use to generate responses to new articles in order to assist fake news detection. We conduct experiments on one available dataset and a larger dataset that we collected ourselves. Experimental results show that TCNN-URG outperforms baselines based on prior approaches that detect fake news from article text alone.
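
The following is a greatly simplified sketch of the two-level idea: a word-level CNN builds sentence vectors, a sentence-level CNN builds the article vector, and a stand-in module plays the role of the user response generator. The real URG is a generative model trained on historical user responses; everything here, including dimensions and names, is an illustrative assumption:

```python
# Greatly simplified sketch in the spirit of the TCNN-URG description above.
# The URG is collapsed into a feed-forward stand-in purely for illustration.
import torch
import torch.nn as nn

class TwoLevelCNN(nn.Module):
    def __init__(self, vocab_size, embed_dim=200, n_filters=64):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.word_conv = nn.Conv1d(embed_dim, n_filters, kernel_size=3, padding=1)
        self.sent_conv = nn.Conv1d(n_filters, n_filters, kernel_size=3, padding=1)

    def forward(self, token_ids):
        # token_ids: (batch, n_sentences, n_words)
        b, s, w = token_ids.shape
        x = self.embedding(token_ids.view(b * s, w)).transpose(1, 2)
        sent_vecs = torch.relu(self.word_conv(x)).max(dim=2).values   # one vector per sentence
        sent_vecs = sent_vecs.view(b, s, -1).transpose(1, 2)
        return torch.relu(self.sent_conv(sent_vecs)).max(dim=2).values  # article vector

class FakeNewsDetector(nn.Module):
    def __init__(self, vocab_size, n_filters=64, response_dim=32):
        super().__init__()
        self.tcnn = TwoLevelCNN(vocab_size, n_filters=n_filters)
        # Stand-in "URG": maps the article vector to a pseudo response
        # representation; in the paper this is learned from historical responses.
        self.urg = nn.Sequential(nn.Linear(n_filters, response_dim), nn.ReLU())
        self.classifier = nn.Linear(n_filters + response_dim, 2)

    def forward(self, token_ids):
        article_vec = self.tcnn(token_ids)
        response_vec = self.urg(article_vec)
        return self.classifier(torch.cat([article_vec, response_vec], dim=1))
```

A faithful implementation would replace the stand-in URG with a generative model trained to produce user responses conditioned on article text, as the abstract describes, and use its generated responses to support the classifier.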


2020 ◽  
Author(s):  
Jonathon McPhetres ◽  
David Gertler Rand ◽  
Gordon Pennycook

A major focus of current research is understanding why people fall for and share fake news on social media. While much research focuses on understanding the role of personality-level traits of those who share the news, such as partisanship and analytic thinking, characteristics of the articles themselves have not been studied. Across two pre-registered studies, we examined whether character-deprecation headlines—headlines designed to deprecate someone’s character, but which have no impact on policy or legislation—increased the likelihood of self-reported sharing on social media. In Study 1 we harvested fake news from online sources and compared sharing intentions between Republicans and Democrats. Results showed that, compared to Democrats, Republicans had greater intention to share character-deprecation headlines compared to news with policy implications. We then applied these findings experimentally. In Study 2 we developed a set of fake news that was matched for content across pro-Democratic and pro-Republican headlines and across news focusing on a specific person (e.g., Trump) versus a generic person (e.g., a Republican). We found that, contrary to Study 1, Republicans were no more inclined toward character deprecation than Democrats. These findings suggest that while character assassination may be a feature of pro-Republican news, it is not more attractive to Republicans than to Democrats. News with policy implications, whether fake or real, seems consistently more attractive to members of both parties regardless of whether it attempts to deprecate an opponent’s character. Thus, character deprecation in fake news may be in supply, but not in demand.

