“Ridiculous and Untrue – FAKE NEWS!”

Author(s):  
Anna Grazulis ◽  
Ryan Rogers

Beyond the spread of fake news itself, the term “fake news” has been used by social media users and by members of the Trump administration to discredit reporting and signal disagreement with the content of a story. This study offers a series of defining traits of fake news and a corresponding experiment testing its impact. Overall, the study shows that fake news, or at least the labeling of news as fake, can affect the gratifications people derive from news. Further, it provides evidence that the impact of fake news may, in some cases, depend on whether or not the fake news aligns with preexisting beliefs.


2018 ◽  
Author(s):  
Andrea Pereira ◽  
Jay Joseph Van Bavel ◽  
Elizabeth Ann Harris

Political misinformation, often called “fake news”, represents a threat to our democracies because it prevents citizens from being appropriately informed. Evidence suggests that fake news spreads more rapidly than real news—especially when it contains political content. The present article tests three competing theoretical accounts that have been proposed to explain the rise and spread of political (fake) news: (1) the ideology hypothesis—people prefer news that bolsters their values and worldviews; (2) the confirmation bias hypothesis—people prefer news that fits their pre-existing stereotypical knowledge; and (3) the political identity hypothesis—people prefer news that allows their political in-group to fulfill certain social goals. We conducted three experiments in which American participants read news that concerned behaviors perpetrated by their political in-group or out-group, and we measured the extent to which they believed the news (Exp. 1, Exp. 2, Exp. 3) and were willing to share it on social media (Exp. 2 and 3). Results revealed that Democrats and Republicans were both more likely to believe news about the value-upholding behavior of their in-group or the value-undermining behavior of their out-group, supporting the political identity hypothesis. However, although belief was positively correlated with willingness to share on social media in all conditions, we also found that Republicans were more likely to believe and want to share apolitical fake news. We discuss the implications for theoretical explanations of political beliefs and the application of these concepts in a polarized political system.


2021 ◽  
Vol 7 (2) ◽  
pp. 205630512110197
Author(s):  
Chesca Ka Po Wong ◽  
Runping Zhu ◽  
Richard Krever ◽  
Alfred Siu Choi

While the impact of fake news on viewers, particularly marginalized media users, has been a cause of growing concern, little attention has been paid to the phenomenon of deliberately “manipulated” news published on social media by mainstream news publishers. Using qualitative content analysis and quantitative survey research, this study showed that consciously biased animated news videos released in the midst of the Umbrella Movement protests in Hong Kong affected both students’ attitudes and their participation in the protests. The findings raise concerns over the potential use of the format by media owners to promote their preferred ideologies.


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Khudejah Ali ◽  
Cong Li ◽  
Khawaja Zain-ul-abdin ◽  
Muhammad Adeel Zaffar

Purpose: As the epidemic of online fake news is causing major concerns in contexts such as politics and public health, the current study aimed to elucidate the effect of certain “heuristic cues,” or key contextual features, which may increase belief in the credibility and the subsequent sharing of online fake news. Design/methodology/approach: This study employed a 2 (news veracity: real vs fake) × 2 (social endorsements: low Facebook “likes” vs high Facebook “likes”) between-subjects experimental design (N = 239). Findings: The analysis revealed that a high number of Facebook “likes” accompanying fake news increased the perceived credibility of the material compared to a low number of “likes.” In addition, the mediation results indicated that increased perceptions of news credibility may create a situation in which readers feel that it is necessary to cognitively elaborate on the information present in the news, and this active processing finally leads to sharing. Practical implications: The results from this study help explicate what drives increased belief and sharing of fake news and can aid in refining interventions aimed at combating fake news for both communities and organizations. Originality/value: The current study expands upon existing literature, linking the use of social endorsements to perceived credibility of fake news and information, and sheds light on the causal mechanisms through which people make the decision to share news articles on social media.
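
As a minimal sketch of the mediation logic the abstract describes (endorsement cue, perceived credibility, cognitive elaboration, sharing), the following Python code fits a simple series of regressions on simulated data; the variable names, effect sizes, and data are illustrative assumptions, not the study’s materials or analysis code:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Simulated stand-in for the 2 x 2 experiment (N = 239); all values are illustrative.
    rng = np.random.default_rng(42)
    n = 239
    high_likes = rng.integers(0, 2, n)                      # 0 = low "likes", 1 = high "likes"
    fake_news = rng.integers(0, 2, n)                       # 0 = real, 1 = fake
    credibility = 3 + 0.5 * high_likes + rng.normal(0, 1, n)
    elaboration = 2 + 0.6 * credibility + rng.normal(0, 1, n)
    sharing = 1 + 0.7 * elaboration + rng.normal(0, 1, n)
    df = pd.DataFrame(dict(high_likes=high_likes, fake_news=fake_news,
                           credibility=credibility, elaboration=elaboration,
                           sharing=sharing))

    # Path a: endorsement cue (and veracity) -> perceived credibility
    print(smf.ols("credibility ~ high_likes * fake_news", data=df).fit().params)
    # Path b: perceived credibility -> cognitive elaboration
    print(smf.ols("elaboration ~ credibility", data=df).fit().params)
    # Path c: elaboration -> sharing, controlling for credibility
    print(smf.ols("sharing ~ elaboration + credibility", data=df).fit().params)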


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Mateusz Szczepański ◽  
Marek Pawlicki ◽  
Rafał Kozik ◽  
Michał Choraś

The ubiquity of social media and their deep integration into contemporary society have granted new ways to interact, exchange information, form groups, or earn money—all on a scale never seen before. These possibilities, paired with their widespread popularity, contribute to the level of impact that social media have. Unfortunately, the benefits they bring come at a cost. Social media can be employed by various entities to spread disinformation—so-called ‘fake news’—either to make a profit or to influence the behaviour of society. To reduce the impact and spread of fake news, a diverse array of countermeasures has been devised. These include linguistic approaches, which often utilise Natural Language Processing (NLP) and Deep Learning (DL). However, as the latest advancements in the Artificial Intelligence (AI) domain show, a model’s high performance is no longer enough; the explainability of the system’s decisions is equally crucial in real-life scenarios. The objective of this paper is therefore to present a novel explainability approach for BERT-based fake news detectors. The approach does not require extensive changes to the system and can be attached as an extension to operating detectors. For this purpose, two Explainable Artificial Intelligence (xAI) techniques, Local Interpretable Model-Agnostic Explanations (LIME) and Anchors, are used and evaluated on fake news data, i.e., short pieces of text forming tweets or headlines. The focus of this paper is on the explainability approach for fake news detectors, as the detectors themselves were presented in the authors’ previous work.
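
As a rough illustration of how LIME can be attached to an existing text classifier without modifying it, the Python sketch below wraps a Hugging Face text-classification pipeline in the prediction function LIME expects; the model name, class labels, and example headline are placeholder assumptions, not the detectors or data from the paper:

    import numpy as np
    from lime.lime_text import LimeTextExplainer
    from transformers import pipeline

    # Placeholder classifier; the paper's own BERT-based detectors are not reproduced here.
    clf = pipeline("text-classification", model="bert-base-uncased")

    def predict_proba(texts):
        # LIME expects a function mapping a list of strings to an (n_samples, n_classes) array.
        outputs = clf(list(texts), top_k=None)  # return scores for every class
        # Sort by label name so the probability columns line up across samples.
        return np.array([[d["score"] for d in sorted(row, key=lambda d: d["label"])]
                         for row in outputs])

    explainer = LimeTextExplainer(class_names=["real", "fake"])  # assumed label order
    explanation = explainer.explain_instance(
        "Breaking: miracle cure suppressed by doctors",  # illustrative headline
        predict_proba,
        num_features=6,   # top tokens contributing to the prediction
    )
    print(explanation.as_list())  # (token, weight) pairs for the explained class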


10.28945/4154 ◽  
2019 ◽  

Aim/Purpose: The proliferation of fake news through social media threatens to undercut the possibility of ascertaining facts and truth. This paper explores the use of ancient rhetorical tools to identify fake news generally and to see through the misinformation juggernaut of President Donald Trump. Background: The ancient rhetorical appeals described in Aristotle’s Rhetoric—ethos (character of the speaker), pathos (nature of the audience) and logos (message itself)—might be a simple, yet profound fix for the era of fake news. Also known as the rhetorical triangle and used as an aid for effective public speaking by the ancient Greeks, the three appeals can also be utilized for analyzing the main components of discourse. Methodology: Discourse analysis utilizes insights from rhetoric, linguistics, philosophy and anthropology in order to interpret written and spoken texts. Contribution: This paper analyzes Donald Trump’s effective use of Twitter and campaign rallies to create and sustain fake news. Findings: At the time of the writing of this paper, the Washington Post Trump Fact Checker had identified over 10,000 untruths uttered by the president in his first two years in office, an average of eight untruths per day. In addition, the analysis demonstrates that Trump leans heavily on ethos and pathos, almost to the exclusion of logos, in his tweets and campaign rallies, making spectacular claims which seem calculated to arouse emotions and move his base to action. Further, Trump relies heavily on epideictic rhetoric (praising and blaming), excluding forensic (legal) and deliberative rhetoric, which the ancients used for sustained arguments about the past or deliberations about the future of the state. In short, the analysis uncovers how and ostensibly why Trump creates and sustains fake news while claiming that other traditional news outlets, except for FOX News, are the actual purveyors of fake news. Recommendations for Practitioners: Information systems and communication practitioners need to be aware of the ways in which the systems they create and monitor are vulnerable to targeted attacks by purveyors of fake news. Recommendations for Researchers: Further research on the identification and proliferation of fake news from a variety of disciplines is needed in order to stem the flow of misinformation and untruths through social media. Impact on Society: The impact of fake news is largely unknown and needs to be better understood, especially during election cycles. Some researchers believe that social media constitute a fifth estate in the United States, challenging the authority of the three branches of government and the traditional press. Future Research: As noted above, further research on the identification and proliferation of fake news from a variety of disciplines is needed in order to stem the flow of misinformation and untruths through social media.


2018 ◽  
Vol 69 (4) ◽  
pp. 513-530
Author(s):  
Paul Bernal

The current ‘fake news’ phenomenon is a modern manifestation of something that has existed throughout history. The difference between what happens now and what has happened before is driven by the nature of the internet and social media – and Facebook in particular. Three key strands of Facebook’s business model – invading privacy to profile individuals, analysing mass data to profile groups, then algorithmically curating content and targeting individuals and groups for advertising – create a perfect environment for fake news. Proposals to ‘deal’ with fake news either focus on symptoms or embed us further in the algorithms that create the problem. Whilst we embrace social media, particularly as a route to news, there is little that can be done to reduce the impact of fake news and misinformation. The question is whether the benefits to freedom of expression that social media brings mean that this is a price worth paying.


Author(s):  
Antonio Badia

The recent controversy over ‘fake news’ reminds us of one of the main problems on the web today: the utilization of social media and other outlets to distribute false and misleading content. The impact of this problem is very significant. This article discusses the issue of fake content on the web. First, it defines the problem and shows that, in many cases, it is surprisingly hard to establish when a piece of news is untrue. It distinguishes the issue of fake content from issues of hate/offensive speech (while the two are related, the issues involved are somewhat different). It then surveys proposed solutions to the problem of fake content detection, both algorithmic and human. On the algorithmic side, it focuses on work on classifiers. The article shows that most algorithmic approaches have significant issues, which has led to reliance on the human approach in spite of its known limitations (subjectivity, difficulty in scaling). Finally, it closes with a discussion of potential future work.
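
As a hedged illustration of the classifier-based detection work the article surveys (not the author’s own system), a minimal supervised text classifier can be built from TF-IDF features and logistic regression; the headlines and labels below are invented purely for demonstration:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Tiny illustrative training set; a real detector would need thousands of labelled items.
    headlines = [
        "City council approves new budget after public hearing",
        "Health ministry publishes annual vaccination statistics",
        "Celebrity cures cancer with secret fruit diet, doctors furious",
        "Aliens endorse presidential candidate in leaked video",
    ]
    labels = [0, 0, 1, 1]  # 0 = legitimate, 1 = fake (illustrative labels)

    model = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2)),    # word and bigram features
        LogisticRegression(max_iter=1000),
    )
    model.fit(headlines, labels)
    print(model.predict(["Secret diet reverses ageing overnight, experts stunned"]))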


2020 ◽  
Vol 6 (2) ◽  
pp. 205630512092851
Author(s):  
Megan Ward

Vigilante groups in the United States and India have used social media to distribute their content and publicize violent spectacles for political purposes. This essay tackles the spectacle of vigilante lynchings, abductions, and threats as images of vigilante violence are spread online in support of specific candidates, state violence, and election discourse. It is important to understand not only the impact of these vigilante groups but also the communicative spectacle of their content. Using Leo R. Chavez’s understanding of early 2000s vigilante action as spectacle in service of social movements, this essay extends the analysis to modern vigilante violence online, used as dramatic political rhetoric in support of sitting administrations. Two case studies on modern vigilante violence provide insight into this phenomenon: (1) vigilante nativist militia groups across the United States in support of border militarization have kidnapped migrants in the Southwest desert, documenting these incidents to show support for the Trump Administration and the building of a border wall, and (2) vigilante mobs in India have circulated videos and media documenting lynchings of so-called “cow killers”; these attacks target Muslims in the light of growing Hindu Nationalist sentiment and political movement in the country. Localized disinformation and personal video allow vigilante content to spread across social media to recruit members for militias, as well as incite quick acts of mob violence. Furthermore, these case studies display how social media livestreams and video allow representations of violence to become attention-arresting visual acts of political discourse.


2019 ◽  
Author(s):  
Gema Revuelta

This article analyses specialist journalists’ perception of transformations in public communication on health and biomedicine in Spain over the last two decades. A total of 20 semi-structured interviews were carried out. The analysis uses the metaphorical concept of “ecosystem”. According to the interviewees, the main “environmental” changes are technological (stressing the expansion and diversity of online information and the impact of social media). They perceive a multiplication and diversification among “information source-species”. Among these, the visibility of specialist sources (researchers and healthcare professionals) and civil associations (patients and consumers) has increased, but “opportunistic species”, such as promoters of fake news and pseudo-medicine, have also emerged. Health journalists rate their profession satisfactorily, while recognising that their working “environment” has deteriorated and perceiving a threat in the dependence on clickbait and social media positioning.

