Deepfakes, misinformation and disinformation and authenticity infrastructure responses: Impacts on frontline witnessing, distant witnessing, and civic journalism

Journalism ◽  
2021 ◽  
pp. 146488492110606
Author(s):  
Sam Gregory

Frontline witnessing and civic journalism are impacted by both the rhetoric and the reality of misinformation and disinformation. This essay highlights key insights from the activities of the human rights and civic journalism network WITNESS as it seeks to prepare for new forms of media manipulation, such as deepfakes, and to ensure that an emergent “authenticity infrastructure” responds to global needs for reliable information without creating additional harms. Based on global consultations on perceived threats and prioritized solutions, its efforts are primarily targeted at synthetic media and deepfakes, which not only facilitate audiovisual falsification (including non-consensual sexual images) but also, by being embedded in societal dynamics of surveillance and civil society suppression, cast doubt on real footage and so undermine the credibility of civic media and frontline witnessing (the so-called “liar’s dividend”). They do so within a global context in which journalists and some distant-witness investigators self-identify as lacking relevant skills and capacity, and face inequitable access to detection technologies. Within this context, “authenticity infrastructure” tracks media provenance, integrity, and manipulation from camera to edit to distribution, and so provides “verification subsidies” that enable distant witnesses to properly interpret eyewitness footage. This authenticity infrastructure and related tools are rapidly moving from niche to mainstream through initiatives such as the Content Authenticity Initiative and the Coalition for Content Provenance and Authenticity, raising key questions about who participates in the production and dissemination of audiovisual information, under what circumstances, and to what effect for whom. Provenance risks being weaponized unless key concerns are integrated into infrastructure proposals and implementation.
Provenance data may be used against vulnerable witnesses, while the absence of a provenance trail, for legitimate privacy or technological-access reasons, may be used to undermine credibility. Regulatory and extra-legal co-option are also a fear as securitized “fake news” laws proliferate. Investigating both phenomena, deepfakes and emergent authenticity infrastructure(s), this paper argues, is important because it highlights the risks both of the “information disorder” of deepfakes, as they challenge the credibility and safety of frontline witnesses, and of responses to such “disorder,” which risk worsening inequities in access to mitigation tools or increasing exposure to harms from technology infrastructure.

2018 ◽  
Vol 50 ◽  
pp. 01127
Author(s):  
Aleksandr Pastukhov

The paper examines important features and developments of the doping affair involving Russian athletes as a media scandal. This communicative event is introduced through current examples taken from the German national and regional press. The paper reveals the mechanisms by which the event was formed and topicalized. The global context of the scandal is covered and exemplified through the co-referential areas “Sport” and “Olympics,” whose presentation and interpretation occur under conditions of so-called “fake news” and “media performance” strategies. The examples, presented in chronological order, reflect the communicative dynamics of the media event of the ‘doping scandal’. The paper also covers the distinguishing features of the journalistic style and informative media genres involved.


2020 ◽  
Vol 97 (2) ◽  
pp. 435-452
Author(s):  
Jason Vincent A. Cabañes

To nuance current understandings of the proliferation of digital disinformation, this article seeks to develop an approach that emphasizes the imaginative dimension of this communication phenomenon. Anchored on ideas about the sociality of communication, this piece conceptualizes how fake news and political trolling online work in relation to particular shared understandings people have of their socio-political landscape. It offers the possibility of expanding the information-oriented approach to communication taken by many journalistic interventions against digital disinformation. It particularly opens up alternatives to the problematic strategy of challenging social media manipulation solely by doubling down on objectivity and facts.


In medias res ◽  
2021 ◽  
Vol 10 (19) ◽  
pp. 2987-3008
Author(s):  
Marko Grba

This paper traces the course of the ongoing pandemic as it was reported in established world media as well as in scientific journals. The author has followed these sources since practically the beginning of the pandemic in Europe and here assesses the role and actual practice of scientists, politicians and other actors throughout the pandemic, from its beginning in China at the close of 2019 until the end of February 2021. The key questions addressed in this paper are: Why did the events of the ongoing pandemic unfold as they did, with so many misguided decisions by politicians (as well as experts at times), so much misinformation and fake news, and so many missed opportunities for decisive and life-changing action? What is the reason behind the prolonged intervals of silence in the communication chain? And what has insufficient familiarity with science, its facts, methods and means of communication, cost us in a time of global pandemic? The main thesis is that the insufficient level of scientific knowledge, and at times of basic scientific literacy, witnessed from the highest places of political power down to the so-called conspiracy theorists, has cost us all too many lives and an unforeseeable amount of suffering to come. The responsibility is shared among virtually all actors and must be given due consideration, in some cases even in courts of justice, if we are to learn the valuable lessons for the future of public health, the world economy and, indeed, the survival of humanity.


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Rajshree Varma ◽  
Yugandhara Verma ◽  
Priya Vijayvargiya ◽  
Prathamesh P. Churi

Purpose – The rapid advancement of technology in online communication and fingertip access to the Internet have allowed news channels, freelance reporters and websites to disseminate fake news to a global audience at low cost. Amid the coronavirus disease 2019 (COVID-19) pandemic, individuals are confronted with false and potentially harmful claims and stories, which may harm the vaccination process. Psychological studies reveal that the human ability to detect deception is only slightly better than chance; there is therefore a growing need to develop automated strategies to combat fake news, which traverses these platforms at an alarming rate. This paper systematically reviews existing fake news detection technologies by exploring machine learning and deep learning techniques pre- and post-pandemic, which, to the best of the authors’ knowledge, has not been done before.
Design/methodology/approach – The detailed literature review on fake news detection is divided into three major parts. The authors searched for papers on machine learning and deep learning approaches to fake news detection published no earlier than 2017. Papers were initially located through Google Scholar and then scrutinized for quality, with “Scopus” and “Web of Science” indexing kept as quality parameters. All research gaps and available databases, data pre-processing steps, feature extraction techniques and evaluation methods for current fake news detection technologies are explored and illustrated using tables, charts and trees.
Findings – The review is dissected into two approaches, machine learning and deep learning, to present a better understanding and a clear objective. The authors then offer a viewpoint on which approach is better, along with future research trends, issues and challenges for researchers, given the relevance and urgency of a detailed and thorough analysis of existing models. The paper also delves into fake news detection during COVID-19, from which it can be inferred that research and modeling are shifting toward the use of ensemble approaches.
Originality/value – The study also identifies several novel automated web-based approaches used by researchers to assess the validity of pandemic news that have proven successful, although currently reported accuracy has not yet reached consistent levels in the real world.
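The ensemble idea the review points toward can be illustrated with a deliberately minimal sketch: several weak detectors vote, and a headline is flagged only when a majority agree. The heuristics, thresholds and trigger words below are invented for illustration; the models surveyed in the review are trained machine learning and deep learning classifiers, not hand-written rules.

```python
# Toy majority-vote ensemble for flagging suspicious headlines.
# All heuristics here are illustrative placeholders for real classifiers.

def clickbait_words(text: str) -> bool:
    """Vote True if the text contains a common clickbait phrase."""
    triggers = {"shocking", "miracle", "you won't believe", "secret"}
    lower = text.lower()
    return any(t in lower for t in triggers)

def excessive_punctuation(text: str) -> bool:
    """Vote True on repeated exclamation or question marks."""
    return "!!" in text or "??" in text

def shouting(text: str) -> bool:
    """Vote True if most alphabetic characters are upper-case."""
    letters = [c for c in text if c.isalpha()]
    if not letters:
        return False
    upper = sum(c.isupper() for c in letters)
    return upper / len(letters) > 0.5

def ensemble_flag(text: str) -> bool:
    """Majority vote over the individual detectors (2 of 3 must fire)."""
    votes = [clickbait_words(text), excessive_punctuation(text), shouting(text)]
    return sum(votes) >= 2

print(ensemble_flag("SHOCKING miracle cure!!"))                 # True
print(ensemble_flag("Vaccine trial results published today."))  # False
```

In practice each voter would be a trained model (e.g. a linear classifier over TF-IDF features, or a fine-tuned neural network), but the aggregation logic, combining several imperfect detectors to outperform any one of them, is the same.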


Author(s):  
Gabriele de Seta

In China, deepfakes are commonly known as huanlian, which literally means “changing faces.” Huanlian content, including face-swapped images and video reenactments, has been circulating in China since at least 2018, at first through amateur users experimenting with machine learning models and then through the popularization of audiovisual synthesis technologies offered by digital platforms. Informed by a wealth of interdisciplinary research on media manipulation, this article aims at historicizing, contextualizing, and disaggregating huanlian in order to understand how synthetic media is domesticated in China. After briefly summarizing the global emergence of deepfakes and the local history of huanlian, I discuss three specific aspects of their development: the launch of the ZAO app in 2019 with its societal backlash and regulatory response; the commercialization of deepfakes across formal and informal markets; and the communities of practice emerging around audiovisual synthesis on platforms like Bilibili. Drawing on these three cases, the conclusion argues for the importance of situating specific applications of deep learning in their local contexts.


2021 ◽  
pp. 204388692199906
Author(s):  
Mary C Lacity

This teaching case explores the advantages and disadvantages of battling fake news with advanced information technologies, such as artificial intelligence (AI) and blockchains. Students will explore the purposes of, proliferation of, susceptibility to, and consequences of fake news and assess the efficacy of new interventions that rely on emerging technologies. Key questions students will explore: How can we properly balance freedom of speech and the prevention of fake news? What ethical guidelines should be applied to the use of AI and blockchains to ensure they do more good than harm? Will technology be enough to stop fake news?


2021 ◽  
Vol 64 (2 (246)) ◽  
pp. 127-130
Author(s):  
Magdalena Hodalska

Craig Silverman (ed.): Verification Handbook: For Disinformation and Media Manipulation, European Journalism Centre 2020, 151 pages.


Author(s):  
Richard Rogers ◽  
Sabine Niederer

This chapter gives an overview of the contemporary scholarship surrounding ‘fake news’. It discusses how the term has been deployed politically, since the mid-nineteenth century, as a barb against the free press when it publishes inconvenient truths. It also addresses how such notions have been used in reaction to novel publishing practices, up to and including current social media platforms. More generally, the scholarship can be divided into waves: the first concerned definitional issues and the production side, whilst the second has been concerned with consumption, including the question of persuasion. There is additional interest in solutions, including critiques of the idea that automation effectively addresses the problem. The chapter concludes with research strategies for studying the pervasiveness of problematic information across the internet.


2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Glenn Anderau

Abstract Fake news is a worrying phenomenon which is growing increasingly widespread, partly because of the ease with which it is disseminated online. Combating the spread of fake news requires a clear understanding of its nature. However, the use of the term in everyday language is heterogeneous and has no fixed meaning, and despite increasing philosophical attention to the topic, there is no consensus on the correct definition of “fake news” within philosophy either. This paper aims to bring clarity to the philosophical debate on fake news in two ways: first, by providing an overview of existing philosophical definitions, and second, by developing a new account of fake news. The paper identifies where there is agreement within the philosophical debate over definitions of “fake news” and isolates four key questions on which there is genuine disagreement. These concern the intentionality underlying fake news, its truth value, the question of whether fake news needs to reach a minimum audience, and the question of whether an account of fake news needs to be dynamic. By answering these four questions, I provide a novel account of “fake news”. This new definition hinges on the fact that fake news has the function of being deliberately misleading about its own status as news.

