Publisher Correction: Neutral bots probe political bias on social media

2022 · Vol 13 (1)
Author(s): Wen Chen, Diogo Pacheco, Kai-Cheng Yang, Filippo Menczer
2019
Author(s): Robert M Ross, David Gertler Rand, Gordon Pennycook

Why is misleading partisan content believed and shared? An influential account posits that political partisanship pervasively biases reasoning, such that engaging in analytic thinking exacerbates motivated reasoning and, in turn, the acceptance of hyperpartisan content. Alternatively, it may be that susceptibility to hyperpartisan misinformation is explained by a lack of reasoning. Across two studies using different subject pools (total N = 1977), we had participants assess true, false, and hyperpartisan headlines taken from social media. We found no evidence that analytic thinking was associated with increased polarization for either judgments about the accuracy of the headlines or willingness to share the news content on social media. Instead, analytic thinking was broadly associated with an increased capacity to discern between true headlines and either false or hyperpartisan headlines. These results suggest that reasoning typically helps people differentiate between low and high quality news content, rather than facilitating political bias.
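A note on measurement: in this literature, discernment is typically operationalized as a simple difference score. A plausible formalization in our notation (not necessarily the paper's exact measure):

```latex
% Truth discernment: mean perceived accuracy of true headlines minus
% mean perceived accuracy of false (or hyperpartisan) headlines.
\mathrm{discernment} = \bar{r}_{\mathrm{true}} - \bar{r}_{\mathrm{false}}
```

On this reading, the reported result is that analytic thinking raises the difference score across the board rather than widening it only for politically congenial items.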


2020 · Vol 17 (167) · pp. 20200020
Author(s): Michele Coscia, Luca Rossi

Many people view news on social media, yet the production of news items online has come under fire because of the widespread dissemination of misinformation. Social media platforms police their content in various ways. Primarily they rely on crowdsourced 'flags': users signal to the platform that a specific news item might be misleading and, if enough flags accumulate, the item is fact-checked. However, real-world data show that the most flagged news sources are also the most popular and, supposedly, reliable ones. In this paper, we show that this phenomenon can be explained by the unreasonable assumptions that current content-policing strategies make about how the online social media environment is shaped. The most realistic assumption is that confirmation bias will prevent a user from flagging a news item if they share the political bias of the news source producing it. We show, via agent-based simulations, that a model reproducing our current understanding of the social media environment will necessarily result in the most neutral and accurate sources receiving the most flags.
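The mechanism described above is simple enough to reproduce in miniature. Below is a minimal agent-based sketch in the spirit of the paper's simulations; the homophily strength, flagging threshold, and population sizes are our illustrative assumptions, not the authors' calibration.

```python
import random

random.seed(0)

# Illustrative parameters (assumed, not from the paper).
N_USERS, N_SOURCES = 2000, 50
HOMOPHILY = 3.0       # how strongly exposure favors like-minded content
FLAG_THRESHOLD = 0.5  # minimum bias gap before a user flags an item

# Political bias is a scalar in [-1, 1] for users and sources alike.
users = [random.uniform(-1, 1) for _ in range(N_USERS)]
sources = [{"bias": random.uniform(-1, 1),
            "popularity": random.uniform(0.1, 1.0),
            "flags": 0}
           for _ in range(N_SOURCES)]

for s in sources:
    for u in users:
        gap = abs(u - s["bias"])
        # Homophilous exposure: popular and like-minded content is
        # more likely to be seen at all.
        p_seen = s["popularity"] * 2 ** (-HOMOPHILY * gap)
        # Confirmation bias: only users far from the source's bias
        # flag what they see.
        if random.random() < p_seen and gap > FLAG_THRESHOLD:
            s["flags"] += 1

# Sources with the most flags tend to be popular and near the center:
# their reach spans both camps, each of which disagrees enough to flag.
for s in sorted(sources, key=lambda x: x["flags"], reverse=True)[:5]:
    print(f"bias={s['bias']:+.2f}  popularity={s['popularity']:.2f}  "
          f"flags={s['flags']}")
```

Under these assumptions, partisan sources are mostly seen by audiences that already agree with them and so go unflagged, while popular, centrist sources reach both camps and accumulate the most flags.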


2020 · Vol 34 (09) · pp. 13669-13672
Author(s): Shan Jiang, Ronald E. Robertson, Christo Wilson

Content moderation, the AI-human hybrid process of removing (toxic) content from social media to promote community health, has attracted increasing attention from lawmakers due to allegations of political bias. Hitherto, this allegation has rested on anecdote rather than logical reasoning and empirical evidence, which motivates us to audit its validity. In this paper, we first introduce two formal criteria to measure bias (i.e., independence and separation) and their contextual meanings in content moderation, and then use YouTube as a lens to investigate whether the political leaning of a video plays a role in the moderation decision for its associated comments. Our results show that when justifiable target variables (e.g., hate speech and extremeness) are controlled with propensity scoring, the likelihood of comment moderation is equal across left- and right-leaning videos.
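Both criteria come from the algorithmic-fairness literature. In common notation (ours, not necessarily the paper's), with M the moderation decision, A the political leaning of the video, and Y the justifiable target variables such as hate speech:

```latex
% Independence: the moderation decision is statistically
% independent of political leaning.
P(M = 1 \mid A = \mathrm{left}) = P(M = 1 \mid A = \mathrm{right})

% Separation: any dependence on leaning is fully explained by the
% justifiable target variables Y.
P(M = 1 \mid A = \mathrm{left},\, Y = y)
  = P(M = 1 \mid A = \mathrm{right},\, Y = y)
  \quad \text{for all } y
```

Controlling Y with propensity scoring and finding equal moderation rates is, in effect, a test of separation rather than of raw independence.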


2020 · Vol 12 (1) · pp. 357-375
Author(s): Tim Loughran, Bill McDonald

Textual analysis, implemented at scale, has become an important addition to the methodological toolbox of finance. In this review, given the proliferation of papers now using this method, we first provide an updated survey of the literature while focusing on a few broad topics—social media, political bias, and detecting fraud. We do not attempt to survey the various statistical methods and instead initially focus on the construction and use of lexicons in finance. We then center the discussion on readability as an attribute frequently incorporated in contemporaneous research, arguing that its use begs the question of what we are measuring. Finally, we discuss how the literature might build on the intent of measuring readability to measure something more appropriate and more broadly relevant—complexity.


Author(s): Alessandro Miani, Thomas Hills, Adrian Bangerter

The spread of online conspiracy theories represents a serious threat to society. To understand the content of conspiracies, here we present the language of conspiracy (LOCO) corpus. LOCO is an 88-million-token corpus composed of topic-matched conspiracy (N = 23,937) and mainstream (N = 72,806) documents harvested from 150 websites. Mimicking internet user behavior, documents were identified using Google by crossing a set of seed phrases with a set of websites. LOCO is hierarchically structured, meaning that each document is cross-nested within websites (N = 150) and topics (N = 600, at three different resolutions). A rich set of linguistic features (N = 287) and metadata includes upload date, measures of social media engagement, measures of website popularity, size, and traffic, as well as political bias and factual reporting annotations. We explored LOCO's features from different perspectives showing that documents track important societal events through time (e.g., Princess Diana's death, Sandy Hook school shooting, coronavirus outbreaks), while patterns of lexical features (e.g., deception, power, dominance) overlap with those extracted from online social media communities dedicated to conspiracy theories. By computing within-subcorpus cosine similarity, we derived a subset of the most representative conspiracy documents (N = 4,227), which, compared to other conspiracy documents, display prototypical and exaggerated conspiratorial language and are more frequently shared on Facebook. We also show that conspiracy website users navigate to websites via more direct means than mainstream users, suggesting confirmation bias. LOCO and related datasets are freely available at https://osf.io/snpcg/.
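The representativeness step lends itself to a short sketch. The following assumes TF-IDF document vectors for concreteness; LOCO itself ships a much richer feature set, so treat the vectorization choice, the toy texts, and the variable names as ours.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Stand-in texts; in practice these would be the 23,937 conspiracy
# documents from the LOCO corpus.
conspiracy_docs = [
    "they are hiding the truth about the outbreak",
    "the government is hiding the truth from the public",
    "a secret group controls the media and the government",
]

X = TfidfVectorizer(stop_words="english").fit_transform(conspiracy_docs)
sim = cosine_similarity(X)      # pairwise document similarity
np.fill_diagonal(sim, 0.0)      # ignore self-similarity
centrality = sim.mean(axis=1)   # mean similarity to the rest

# The most "representative" documents are those closest, on average,
# to the rest of their own subcorpus.
print(np.argsort(centrality)[::-1])
```

Documents scoring highest on this within-subcorpus centrality are the ones the authors single out as displaying prototypical conspiratorial language.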


2021 · pp. 146144482110333
Author(s): Brian E Weeks, Ericka Menchen-Trevino, Christopher Calabrese, Andreu Casas, Magdalena Wojcieszak

This study investigates the potential role both untrustworthy and partisan websites play in misinforming audiences by testing whether actual exposure to these sites is associated with political misperceptions. Using a sample of American adult social media users, we match data from individuals’ Internet browser histories with a survey measuring the accuracy of political beliefs. We find that visits to partisan websites are at times related to misperceptions consistent with the political bias of the site. However, we do not find strong evidence that untrustworthy websites consistently relate to false beliefs. There is also little evidence that visits to less partisan, centrist news sites are associated with more accurate political beliefs about these issues, suggesting that exposure to politically neutral news is not necessarily the antidote to misinformation. Results suggest that focusing on partisan news sites—rather than untrustworthy sites—may be fruitful to understanding how media contribute to political misperceptions.


2021 · Vol 12 (1)
Author(s): Wen Chen, Diogo Pacheco, Kai-Cheng Yang, Filippo Menczer

Social media platforms attempting to curb abuse and misinformation have been accused of political bias. We deploy neutral social bots that start by following different news sources on Twitter and track them to probe distinct biases emerging from platform mechanisms versus user interactions. We find no strong or consistent evidence of political bias in the news feed. Despite this, the news and information to which U.S. Twitter users are exposed depend strongly on the political leaning of their early connections. The interactions of conservative accounts are skewed toward the right, whereas liberal accounts are exposed to moderate content shifting their experience toward the political center. Partisan accounts, especially conservative ones, tend to receive more followers and follow more automated accounts. Conservative accounts also find themselves in denser communities and are exposed to more low-credibility content.


Author(s): Vidish Sharma, Aditya Bendapudi, Tarun Trehan, Ashutosh Sharma, Adwitiya Sinha

2018 · Vol 22 (1-2) · pp. 188-227
Author(s): Juhi Kulshrestha, Motahhare Eslami, Johnnatan Messias, Muhammad Bilal Zafar, Saptarshi Ghosh, ...

2021
Author(s): Samuel Guimarães, Fabrício Benevenuto

News consumption increasingly happens on social media websites. In this environment, all types of entities and people present themselves as news sources. These new outlets might focus on specific audiences, and some present the news less objectively. Facebook is one such platform, and it categorizes an extensive group of pages as a kind of news media. To analyze this phenomenon, it is crucial to characterize all pages that disseminate information in this ecosystem. Our main objective is to create an in-depth diagnostic of news stories and opinions, focusing on Brazilian Facebook. Our contributions are: (i) a new method to measure the political bias of Facebook pages in a given country, and (ii) a detailed characterization of a comprehensive sample of these pages.

