Fighting Deepfakes: Media and Internet Giants’ Converging and Diverging Strategies Against Hi-Tech Misinformation

2021 ◽  
Vol 9 (1) ◽  
pp. 291-300
Author(s):  
Ángel Vizoso ◽  
Martín Vaz-Álvarez ◽  
Xosé López-García

Deepfakes, one of the most novel forms of misinformation, have become a real challenge in the communicative environment due to their spread through online news and social media spaces. Although fake news has existed for centuries, its circulation is now more harmful than ever before, owing to the ease with which it can be produced and disseminated. At this juncture, technological development has led to the emergence of deepfakes: videos, audio recordings, or photos doctored with artificial intelligence. Since their inception in 2017, the tools and algorithms that enable the modification of faces and sounds in audiovisual content have evolved to the point where mobile apps and web services allow average users to produce such manipulations. This research shows how three renowned media outlets—The Wall Street Journal, The Washington Post, and Reuters—and three of the biggest Internet-based companies—Google, Facebook, and Twitter—are dealing with the spread of this new form of fake news. Results show that identifying deepfakes is a common practice for both types of organization. However, while the media outlets focus on training journalists to detect them, the online platforms tend to fund research projects aimed at developing or improving media forensics tools.

Author(s):  
Janet Aver Adikpo

Today, the media environment has traversed several phases of technological advancement and, as a result, there has been a shift in the production and consumption of news. This chapter conceives of fake news within the milieu of influences on information spread in society, especially in cyberspace. Applying the hierarchy of influences model to fake news, it establishes that it has become almost impossible to sustain trust and credibility through individual influences on online news content. The primary reason is that journalists are constrained by professional ethics, organizational routines, and ownership influence. Rather than verifying facts and offering supporting claims, online users without professional orientation reproduce information indiscriminately. The chapter recommends that ethics be reconsidered as a means to recreate and instil journalistic values that can contend with the fake news pandemic.


2015 ◽  
Vol 26 (4) ◽  
pp. 481-497 ◽  
Author(s):  
Lauren Feldman ◽  
P. Sol Hart ◽  
Tijana Milosevic

This study examines non-editorial news coverage in leading US newspapers as a source of ideological differences on climate change. A quantitative content analysis compared how the threat of climate change and efficacy for actions to address it were represented in climate change coverage across The New York Times, The Wall Street Journal, The Washington Post, and USA Today between 2006 and 2011. Results show that The Wall Street Journal was least likely to discuss the impacts of and threat posed by climate change and most likely to include negative efficacy information and use conflict and negative economic framing when discussing actions to address climate change. The inclusion of positive efficacy information was similar across newspapers. Also, across all newspapers, climate impacts and actions to address climate change were more likely to be discussed separately than together in the same article. Implications for public engagement and ideological polarization are discussed.


INFORMASI ◽  
2018 ◽  
Vol 48 (1) ◽  
pp. 33
Author(s):  
Fauziah Hassan ◽  
Sofia Hayati Yusoff ◽  
Siti Zobidah Omar

The symbiotic relationship between Islam and the media is inevitable. The frequency of media coverage of Islam has been widely researched, especially since the events of September 11, 2001. Since that moment, Islam has been viewed and labelled negatively by the Western media, particularly with respect to Muslims living in America and in Middle Eastern countries. This phenomenon has spread to Muslims in Southeast Asian countries such as Malaysia and Indonesia, as these two countries are believed to have connections with recent terrorist groups such as the Islamic State of Iraq and the Levant (ISIL), also known as the Islamic State in Iraq and Syria (ISIS). Therefore, this study was conducted to examine the frequency of coverage of terrorism issues as reported by The Wall Street Journal (WSJ) and to explore the news themes that emerged in that reporting. The researchers applied both quantitative and qualitative analyses to online news articles published in the WSJ from 2012 to 2013. Qualitative software, namely QSR NVivo 11, was used to store, manage, and code the news articles, while quantitative software, SPSS, was used to calculate reporting frequencies. The findings reveal that the WSJ frequently reported news related to terrorism in Malaysia and Indonesia, with Indonesia receiving a higher percentage of coverage than Malaysia. In terms of news themes, the results identified four major themes closely related to terrorism issues, such as terrorist attacks, terrorism suspects, and robbery, in both Malaysia and Indonesia.


2019 ◽  
Vol 44 (1) ◽  
pp. 24-42 ◽  
Author(s):  
Alon Sela ◽  
Orit Milo ◽  
Eugene Kagan ◽  
Irad Ben-Gal

Purpose: The purpose of this paper is to propose a novel method to enhance the spread of messages in social networks through “spreading groups.” These sub-structures of highly connected accounts intentionally echo messages among the members of the subgroup at the early stages of a spread. This echoing further boosts the spread to regions substantially larger than the initial one. The spreading accounts can be actual humans or social bots.
Design/methodology/approach: The paper reveals an interesting anomaly in information cascades on Twitter and proposes the spreading-group model to explain it. The model was tested using an agent-based simulation, real Twitter data, and questionnaires.
Findings: The messages of a few anonymous Twitter accounts spread, on average, more widely than those of well-known global financial media outlets such as The Wall Street Journal or Bloomberg. The spreading-group model (such groups are also sometimes called botnets) provides an effective mechanism that can explain these findings.
Research limitations/implications: Spreading groups are only one possible mechanism that can explain the effective spread of tweets from lesser-known accounts. The implication of this work is in showing how spreading groups can be used as a mechanism to spread messages in social networks. The construction of spreading groups is rather technical and does not require recruiting opinion leaders. As with “fake news,” we expect the topic of spreading groups and their use to manipulate information to receive growing attention in public discussion.
Practical implications: While harnessing opinion leaders to spread messages is costly, constructing spreading groups is more technical and replicable. Spreading groups are an efficient method to amplify the spread of messages in social networks.
Social implications: With the blossoming of fake news, one might tend to assess the reliability of news by the number of users involved in its spread. This heuristic can easily be fooled by spreading groups. Furthermore, spreading groups consisting of a blend of humans and computerized bots may be hard to detect. They can be used to manipulate financial markets or political campaigns.
Originality/value: The paper demonstrates an anomaly on Twitter that has not been studied before. It proposes a novel approach to spreading messages in social networks. The methods presented in the paper are valuable for anyone interested in spreading messages or an agenda, such as political actors or other agenda advocates. While social bots have been widely studied, their synchronization to increase spread is novel.
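
To make the echoing mechanism concrete, the following is a minimal, illustrative agent-based sketch in Python; it is not the authors’ simulation. The random follower graph, the 5% organic share probability, and the 20-account fully interconnected spreading group are all assumed parameters, chosen only to show how guaranteed early re-sharing inside a small group can push a cascade well beyond its initial region.

```python
import random

def simulate_cascade(followers, seed_nodes, share_prob, echo_group=frozenset()):
    """Breadth-first cascade: each newly reached account shares the message
    with probability share_prob; members of echo_group always re-share,
    mimicking the early 'echoing' of a spreading group."""
    reached = set(seed_nodes)
    frontier = list(seed_nodes)
    while frontier:
        next_frontier = []
        for account in frontier:
            shares = account in echo_group or random.random() < share_prob
            if not shares:
                continue
            for follower in followers.get(account, ()):
                if follower not in reached:
                    reached.add(follower)
                    next_frontier.append(follower)
        frontier = next_frontier
    return len(reached)

def random_follower_graph(n_accounts, avg_followers):
    """Each account is followed by a random set of other accounts."""
    return {
        a: random.sample(range(n_accounts), k=min(avg_followers, n_accounts - 1))
        for a in range(n_accounts)
    }

if __name__ == "__main__":
    random.seed(7)
    graph = random_follower_graph(n_accounts=5000, avg_followers=10)
    # A small, densely interconnected spreading group that echoes the message.
    group = frozenset(range(20))
    for member in group:  # every group member follows all other members
        graph[member] = list(group - {member}) + graph[member]
    organic = simulate_cascade(graph, seed_nodes=[0], share_prob=0.05)
    boosted = simulate_cascade(graph, seed_nodes=[0], share_prob=0.05, echo_group=group)
    print(f"cascade size without spreading group: {organic}")
    print(f"cascade size with spreading group:    {boosted}")
```

Because every group member re-shares with certainty, the message reaches many more outside followers in its first hops; this kind of early boost is what the spreading-group model uses to explain why messages from obscure accounts can, on average, out-spread those of major media outlets.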


2021 ◽  
Vol 9 (4) ◽  
pp. 198-207 ◽  
Author(s):  
Hannes Cools ◽  
Baldwin Van Gorp ◽  
Michaël Opgenhaffen

Newsroom innovation labs have been created over the last ten years to develop algorithmic news recommenders (ANRs) that suggest and summarise what counts as news. Although these ANRs are still at an early stage and have not yet been implemented across entire newsrooms, they have the potential to change how newsworkers make their daily decisions (gatekeeping) and exercise autonomy in setting the agenda (agenda-setting). First, this study focuses on the new dynamics of the ANR and how it potentially influences the newsworkers’ gatekeeping role within the newsgathering process. Second, it investigates how the dynamics of an ANR could influence the autonomy of newsworkers as media agenda setters. In order to advance our understanding of the changing dynamics of gatekeeping and agenda-setting in the newsroom, this study draws on expert interviews with 16 members of the newsroom innovation labs of The Washington Post, The Wall Street Journal, Der Spiegel, the BBC, and the public broadcaster Bayerischer Rundfunk (BR). The results show that when newsworkers interact with ANRs, they rely on suggestions and summaries to evaluate what is newsworthy, especially during a “news peak” (elections, a worldwide pandemic, etc.). With regard to the agenda-setting role, newsworkers retain full autonomy, but the ANR creates a “positive acceleration effect” on how certain topics are put on the agenda.


2020 ◽  
Vol 1 (1) ◽  
pp. 18-25
Author(s):  
Francesc Fusté-Forné

Food and gastronomy are significant ingredients of everyday leisure and lifestyle practices. Food is part of culture, and culture is part of the media. The current research analyzes the mediatization of food in legacy media. Drawing on a quantitative approach, the paper reviews food-based content in New York City newspapers. In particular, AM New York, El Diario, Metro, The New York Times, and The Wall Street Journal are studied over a period of 50 days. As a result, a total of 287 articles are analyzed. This research highlights the features of food and gastronomy content and describes the differences and similarities between traditional newspapers and free dailies. Furthermore, the role of The New York Times as a reference point in communicating about food is confirmed.


2018 ◽  
Vol 39 (2) ◽  
pp. 155-168 ◽  
Author(s):  
Kirstie Hettinga ◽  
Alyssa Appelman ◽  
Christopher Otmar ◽  
Alesandria Posada ◽  
Anne Thompson

A content analysis of corrections (N = 507) from four influential newspapers—The New York Times, The Washington Post, The Wall Street Journal, and the Los Angeles Times—shows that the papers correct errors in similar ways in terms of location, type, impact, and objectivity. Results are interpreted through democratic theory and are used to suggest ways for copy editors to proofread and fact-check most effectively.


Journalism ◽  
2017 ◽  
Vol 19 (1) ◽  
pp. 75-92 ◽  
Author(s):  
Julia Sonnevend

The article makes a case for foregrounding ‘event’ as a key concept within journalism studies before, during, and after the digital age. The article’s first part presents an overview of the existing research on events in philosophy, sociology, historiography, and journalism studies, arguing that the concept of ‘event’ has not received sufficient attention in journalism studies. The article’s second part demonstrates the need to consider ‘event’ as an essential concept of journalism studies through an empirical case study: the news coverage of the disappearance of Malaysia Airlines flight MH370 (8 March 2014) in four American news outlets, The New York Times, The Washington Post, The Wall Street Journal, and CNN. The article argues that journalists employed two strategies in their coverage: (1) they created and/or covered what the article calls ‘substitute events’, defined as minor events in the present that journalists perceived as new happenings and that led to further reporting, and (2) they turned to the past and the future for events in their reporting, extending the scope of coverage beyond the relatively eventless present. Overall, the case study shows that journalists are limited in their narration by the power of events, and are therefore eager to construct and cover events even when events are not readily available.


2019 ◽  
Vol 8 (2) ◽  
pp. 83-87
Author(s):  
Marsel Radikovich Nurkhamitov ◽  
Elena Nikolaevna Zagladina ◽  
Irina Zinov'evna Shakhnina

This article considers the military euphemisms used by English-language print media to describe conflict-related actions in the course of military developments across the globe. The significance of the research stems from the keen interest in euphemism, which has penetrated all areas of activity and especially the language of the mass media. The aim of the paper is to examine the concept and essence of euphemism and to identify the military-political euphemisms widely used in the press. The methods employed were a study of the theoretical literature on the topic, a descriptive method, and the sampling of euphemisms from Anglophone print media. The main result of the study is the finding that euphemization is a process of enormous importance in communication. Examples of euphemisms used in the contemporary English-language press, namely The New York Times, The Sun, The Telegraph, The Wall Street Journal, and The Washington Post, served as the main data for the research.


2017 ◽  
Author(s):  
Hayden Robert Lewis

This research analyzes coverage of major artificial intelligence events representing the thematic concept of "man versus machine." Rooted in grounded theory and rhetorical criticism, this research applies symbolic convergence theory and fantasy theme analysis to reporting from The New York Times, The Wall Street Journal, and The Washington Post immediately surrounding three cultural and scientific milestones in the development of artificial intelligence technology: IBM Deep Blue's 1997 defeat of chess grandmaster Garry Kasparov; IBM Watson's 2011 defeat of Jeopardy! champions Ken Jennings and Brad Rutter; and Google DeepMind AlphaGo's 2016 defeat of Lee Sedol. This research analyzes how symbolic realities are dramatized in the context of these events such that the competitions themselves represent ideological battles between humanism and technological superiority. This research also demonstrates subtle variations in how fantasy themes and rhetorical visions manifest in coverage from each outlet, amounting to what is effectively a competition for shared consciousness between these two competing ideological constructs.

