Medien und Wahrheit

2021 ◽  

This volume discusses media ethics perspectives on truth in the context of digitalisation, while also addressing the loss of trust and interpretations of the truth in public communication. It develops theoretical classifications of ‘fake news’ and disinformation from both a sociological and a media philosophy perspective. Empirical investigations and case studies on manipulation focus on image editing and communication strategies in political debates. Moreover, the book presents problems and solutions in relation to disinformation from a journalistic perspective. The volume concludes by examining normative challenges posed by online communication in the context of machine learning and on platforms such as Twitter and YouTube. With contributions by Sybille Krämer, Simone Dietz, Günter Bentele, Charles M. Ess, Ingrid Stapf, Nikil Mukerji, Tilman Bechthold-Hengelhaupt, Christian Filk, Jan-Hinnerk Freytag, Christian Schicha, Olaf Hoffjann, Natalie Ryba, Ole Kelm, Marco Dohle, Saskia Sell, Bernd Oswald, Tobias Eberwein, Tanjev Schultz, Thomas Zeilinger, Markus Kaiser, Hektor Haarkötter, Christian Riess, Lisa Schwaiger, Mark Eisenegger and Michael Litschka.

2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Rajshree Varma ◽  
Yugandhara Verma ◽  
Priya Vijayvargiya ◽  
Prathamesh P. Churi

Purpose
The rapid advancement of technology in online communication and fingertip access to the Internet have resulted in the expedited dissemination of fake news to engage a global audience at a low cost by news channels, freelance reporters and websites. Amid the coronavirus disease 2019 (COVID-19) pandemic, individuals are confronted with these false and potentially harmful claims and stories, which may harm the vaccination process. Psychological studies reveal that the human ability to detect deception is only slightly better than chance; therefore, there is a growing need to develop automated strategies to combat fake news that traverses these platforms at an alarming rate. This paper systematically reviews existing fake news detection technologies by exploring various machine learning and deep learning techniques pre- and post-pandemic, which, to the best of the authors’ knowledge, has not been done before.

Design/methodology/approach
The detailed literature review on fake news detection is divided into three major parts. The authors searched papers no later than 2017 on deep learning and machine learning approaches to fake news detection. The papers were initially retrieved through the Google Scholar platform and then scrutinized for quality, with “Scopus” and “Web of Science” kept as quality-indexing parameters. All research gaps and available databases, data pre-processing and feature extraction techniques, and evaluation methods for current fake news detection technologies are explored and illustrated using tables, charts and trees.

Findings
The paper is dissected into two approaches, machine learning and deep learning, to present a better understanding and a clear objective. The authors then present a viewpoint on which approach is better, along with future research trends, issues and challenges for researchers, given the relevance and urgency of a detailed and thorough analysis of existing models. The paper also delves into fake news detection during COVID-19, from which it can be inferred that research and modeling are shifting toward the use of ensemble approaches.

Originality/value
The study also identifies several novel automated web-based approaches used by researchers to assess the validity of pandemic news that have proven to be successful, although currently reported accuracy has not yet reached consistent levels in the real world.


2020 ◽  
Vol 90 ◽  
pp. 26-41
Author(s):  
Nurul Hidayatul Ummah ◽  
Muchamad Sholakhuddin Al Fajri

This study examined the communication strategies used by an Indonesian national movement to teach media information literacy as an effort to fight hoaxes. In-depth online interviews and content analysis were employed to investigate how the movement campaigns for media information literacy competence in society. The findings reveal that the movement relies heavily on offline communication to establish direct engagement and provide a comprehensive understanding. In the case of online communication, the movement's Instagram content shows that it predominantly uses Instagram as a marketing tool, posting information about offline activities rather than educational content on media information literacy. This study suggests that educating the public in media information literacy through online communication should be prioritised, as 140 million Indonesians are active social media users, dominated by youth aged 18-24, who are particularly vulnerable to fake news.


Author(s):  
V.T Priyanga ◽  
J.P Sanjanasri ◽  
Vijay Krishna Menon ◽  
E.A Gopalakrishnan ◽  
K.P Soman

The widespread use of social media like Facebook, Twitter, WhatsApp, etc. has changed the way news is created and published; accessing news has become easy and inexpensive. However, the scale of usage and the inability to moderate content have made social media a breeding ground for the circulation of fake news. Fake news is deliberately created either to increase readership or to disrupt social order for political and commercial benefit. It is of paramount importance to identify and filter out fake news, especially in democratic societies. Most existing methods for detecting fake news involve traditional supervised machine learning, which has been quite ineffective. In this paper, we analyze word embedding features that can tell fake news apart from true news. We use the LIAR and ISOT datasets. We extract highly correlated news data from the entire dataset using cosine similarity and similar metrics, in order to distinguish their domains based on central topics. We then employ auto-encoders to detect and differentiate between true and fake news, while also exploring their separability through network analysis.
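
The similarity-filtering step described above can be illustrated with a short sketch. This is only an illustration under assumptions the paper does not make: TF-IDF vectors stand in for the word-embedding features, the toy headlines are invented, and scikit-learn is an arbitrary toolkit choice.

# Illustration (not the paper's code): select pairs of highly similar news
# items with cosine similarity so that later models compare articles that
# share a central topic. Data and threshold are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

news = [
    "Government announces new vaccination schedule for all provinces",
    "New vaccination schedule announced by the government today",
    "Celebrity spotted at a local restaurant over the weekend",
]

vectors = TfidfVectorizer(stop_words="english").fit_transform(news)
similarity = cosine_similarity(vectors)   # pairwise cosine similarity matrix

THRESHOLD = 0.5                           # hypothetical cut-off
correlated_pairs = [
    (i, j)
    for i in range(len(news))
    for j in range(i + 1, len(news))
    if similarity[i, j] > THRESHOLD
]
print(correlated_pairs)                   # [(0, 1)] for this toy data

The auto-encoder stage, which the paper uses to separate true from fake news, would then be trained on the embedding vectors of the filtered items; that part is not sketched here.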


Author(s):  
Giandomenico Di Domenico ◽  
Annamaria Tuan ◽  
Marco Visentin

In the wake of the COVID-19 pandemic, unprecedented amounts of fake news and hoaxes spread on social media. In particular, conspiracy theories about the effects of new technologies such as 5G circulated, and misinformation tarnished the reputation of brands like Huawei. Language plays a crucial role in understanding the motivational determinants of social media users in sharing misinformation, as people extract meaning from information based on their discursive resources and their skillset. In this paper, we analyze textual and non-textual cues from a panel of 4923 tweets containing the hashtags #5G and #Huawei during the first week of May 2020, when several countries were still adopting lockdown measures, to determine whether or not a tweet is retweeted and, if so, how often. Overall, using traditional logistic regression and machine learning, we found different effects of the textual and non-textual cues on whether a tweet is retweeted and on its ability to accumulate retweets. In particular, the presence of misinformation plays an interesting role in spreading the tweet across the network. More importantly, the relative influence of the cues suggests that Twitter users actually read a tweet but do not necessarily understand or critically evaluate it before deciding to share it on the platform.
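
A minimal sketch of the kind of first-stage model described (is a tweet retweeted at all?), with invented feature names and data; the authors' actual variables, coding and estimation strategy are not reproduced here. The "how much" stage would require a separate count model (e.g., a negative binomial regression), which this sketch omits.

# Illustration only: a first-stage logistic regression on invented
# textual / non-textual tweet cues predicting whether the tweet is retweeted.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Columns: word_count, contains_misinformation (0/1), has_url (0/1), follower_count
X = np.array([
    [12, 1, 0, 1500],
    [25, 0, 1, 300],
    [8,  1, 1, 12000],
    [30, 0, 0, 90],
    [15, 1, 0, 4000],
    [22, 0, 1, 700],
])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = retweeted at least once (toy labels)

model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X, y)
print(model.predict_proba([[18, 1, 0, 2500]]))  # [P(not retweeted), P(retweeted)]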


Symmetry ◽  
2021 ◽  
Vol 13 (4) ◽  
pp. 556
Author(s):  
Thaer Thaher ◽  
Mahmoud Saheb ◽  
Hamza Turabieh ◽  
Hamouda Chantar

Fake or false information on social media platforms is a significant challenge that deliberately misleads users through rumors, propaganda, or deceptive information about a person, organization, or service. Twitter is one of the most widely used social media platforms, especially in the Arab region, where the number of users is steadily increasing, accompanied by an increase in the rate of fake news. This has drawn the attention of researchers seeking to provide a safe online environment free of misleading information. This paper proposes a smart classification model for the early detection of fake news in Arabic tweets utilizing Natural Language Processing (NLP) techniques, Machine Learning (ML) models, and the Harris Hawks Optimizer (HHO) as a wrapper-based feature selection approach. An Arabic Twitter corpus composed of 1862 previously annotated tweets was used to assess the efficiency of the proposed model. The Bag of Words (BoW) model is applied with different term-weighting schemes for feature extraction. Eight well-known learning algorithms are investigated with varying combinations of features, including user-profile, content-based, and word features. The reported results show that Logistic Regression (LR) with Term Frequency-Inverse Document Frequency (TF-IDF) achieves the best rank. Moreover, feature selection based on the binary HHO algorithm plays a vital role in reducing dimensionality, thereby enhancing the learning model’s performance for fake news detection. Interestingly, the proposed BHHO-LR model yields an improvement of 5% compared with previous works on the same dataset.
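
The pipeline described (TF-IDF features, a logistic regression classifier, and a wrapper-based feature selector) can be sketched as follows. The Harris Hawks Optimizer itself is a metaheuristic not available in scikit-learn, so a greedy forward selector stands in for it here purely to illustrate the wrapper idea; the toy tweets and labels are invented.

# Illustration: TF-IDF features + logistic regression with wrapper-based
# feature selection. A greedy forward selector stands in for the binary
# Harris Hawks Optimizer used in the paper; tweets and labels are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LogisticRegression

tweets = [
    "breaking the minister resigned this morning",
    "official statement released on the new health policy",
    "press briefing scheduled for tomorrow at noon",
    "shocking secret cure discovered share before it is deleted",
    "this miracle drink cures every disease doctors hate it",
    "forward this message or your account will be closed tonight",
]
labels = [0, 0, 0, 1, 1, 1]  # 1 = fake, 0 = credible (toy labels)

X = TfidfVectorizer().fit_transform(tweets).toarray()
clf = LogisticRegression(max_iter=1000)

# Wrapper idea: keep the subset of term features that maximises the
# classifier's cross-validated accuracy (the role HHO plays in the paper).
selector = SequentialFeatureSelector(clf, n_features_to_select=5, cv=2)
X_selected = selector.fit_transform(X, labels)

clf.fit(X_selected, labels)
print(X_selected.shape)  # (6, 5): six tweets, five surviving term features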


2021 ◽  
Author(s):  
Lamya Alderywsh ◽  
Aseel Aldawood ◽  
Ashwag Alasmari ◽  
Farah Aldeijy ◽  
Ghadah Alqubisy ◽  
...  

BACKGROUND
There is a serious threat from fake news spreading in technologically advanced societies, including those in the Arab world, via deceptive machine-generated text. In the last decade, Arabic fake news identification has gained increased attention, and numerous detection approaches have shown some ability to find fake news across various data sources. Nevertheless, many existing approaches overlook recent advancements in fake news detection, particularly the incorporation of machine learning algorithms.

OBJECTIVE
The Tebyan project aims to address the problem of fake news by developing a detection system that employs machine learning algorithms to determine whether news is fake or real in the context of the Arab world.

METHODS
The project went through numerous phases, using an iterative methodology to develop the misinformation detection system and to contextualize fake news with respect to society's information. It consists of implementing the machine learning algorithms in Python and collecting genuine and fake news datasets. The study also assesses how information-exchanging behaviors can minimize misinformation and find the optimal source of authentication for emerging news through system testing approaches.

RESULTS
The main deliverable of this project is the Tebyan system, which allows users to check the credibility of news in Arabic newspapers. The SVM classifier, on average, exhibited the highest performance, achieving 90% on every performance measure. The results also indicate that the second-best algorithm is the linear SVC, which likewise reached 90% on the performance measures for the typical types of fake information encountered.

CONCLUSIONS
The study concludes that building the system with machine learning algorithms in Python allows rapid measurement of users' perceptions, letting them comment on and rate the credibility results and subscribe to news email services.
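
For illustration only, a comparison of an SVM and a LinearSVC on TF-IDF features mirrors the kind of evaluation reported above; the toy Arabic headlines, labels and any resulting scores are placeholders and have no relation to the Tebyan datasets or the 90% figures.

# Illustration: comparing an RBF-kernel SVM and a LinearSVC on TF-IDF
# features, echoing the kind of evaluation described above. The toy Arabic
# headlines and labels are invented for demonstration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC, LinearSVC

headlines = [
    "الحكومة تعلن عن خطة اقتصادية جديدة",
    "علاج سحري يقضي على جميع الأمراض في يوم واحد",
    "افتتاح مستشفى جديد في العاصمة الأسبوع المقبل",
    "خبر عاجل نهاية العالم غدا بحسب مصادر مجهولة",
    "وزارة الصحة تنشر إحصاءات التطعيم الرسمية",
    "مشروب شعبي يمنع الإصابة بجميع الفيروسات",
]
labels = [0, 1, 0, 1, 0, 1]  # 1 = fake, 0 = real (toy labels)

X = TfidfVectorizer().fit_transform(headlines)

for name, model in [("SVM (RBF kernel)", SVC()), ("LinearSVC", LinearSVC())]:
    scores = cross_val_score(model, X, labels, cv=2)
    print(name, scores.mean())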

