Automated Fake News Detection in the Age of Digital Libraries

2020 ◽  
Vol 39 (4) ◽  
Author(s):  
Uğur Mertoğlu ◽  
Burkay Genç

The transformation of printed media into the digital environment and the extensive use of social media have changed the concept of media literacy and people's habits of consuming news. While this faster, easier, and comparatively cheaper access offers convenience in terms of people's access to information, it comes with a significant problem: fake news. Due to the free production and consumption of large amounts of data, fact-checking systems powered by human effort are not enough to question the credibility of the information provided or to prevent its virus-like dissemination. Libraries, long known as sources of trusted information, face this problem as a result of the same difficulty. Considering that libraries all over the world are undergoing digitisation and providing digital media to their users, it is very likely that unchecked digital content will be served by the world's libraries. The solution is to develop automated mechanisms that can check the credibility of digital content served in libraries without manual validation. For this purpose, we developed an automated fake news detection system based on Turkish digital news content. Our approach can be adapted to any other language for which labelled training material exists. The developed model can be integrated into libraries' digital systems to label served news content as potentially fake whenever necessary, preventing uncontrolled dissemination of falsehoods via libraries.
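The abstract does not specify the underlying model, so the following is only a minimal sketch of how a pre-trained detector might be hooked into a library's content-serving workflow to flag items as potentially fake; the model file, probability threshold, and function name are hypothetical, not the authors' implementation.

```python
# Hedged sketch: flagging served news content with a pre-trained detector.
# The model file and the 0.5 threshold are assumptions for illustration only.
import joblib

FAKE_THRESHOLD = 0.5                                   # assumed decision threshold
model = joblib.load("fake_news_model_tr.joblib")       # hypothetical pre-trained text pipeline

def label_news_item(text: str) -> dict:
    """Return the item together with a 'potentially fake' flag for display in a digital library."""
    prob_fake = float(model.predict_proba([text])[0][1])
    return {
        "text": text,
        "prob_fake": prob_fake,
        "potentially_fake": prob_fake >= FAKE_THRESHOLD,
    }
```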

In this era of globalization, the spread of fake news through the internet is nearly unstoppable. Its dissemination cannot be tolerated, as its negative impact on society is worrying. It can also lead to more serious problems and threats, such as confusion, misconceptions, slander, and users being lured into sharing provocative lies fabricated as news on their social media. In the Malaysian context, there is a lack of platforms for detecting fake news in Malay-language articles, and most Malaysians receive news through social messaging applications. Fake news detection can be addressed with the aid of artificial intelligence, including machine learning algorithms. The objectives of this project are to propose a fake news detection model using Logistic Regression, to evaluate its performance as a fake news detection model, and to develop a web application that accepts either news content or a news URL as input. In this study, Logistic Regression was applied to detect fake news, following an established model development methodology. Based on existing studies, Logistic Regression performs well in classification tasks. In addition, a stance-detection approach is added to improve the model's accuracy. Based on the analysis, the model with stance detection yields excellent accuracy using TF-IDF features. The model is then integrated with a web service that accepts either a news URL or news content as text, which is checked for its truth level through the “FAKEBUSTER” application.
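The abstract names TF-IDF features, Logistic Regression, and a stance-detection component but gives no implementation details. The sketch below is one hedged way such a model could be assembled, using headline-body cosine similarity as a crude stance feature alongside TF-IDF body features; the file name and column layout are assumptions, not taken from the paper.

```python
# Hedged sketch: Logistic Regression on TF-IDF body features plus a simple
# headline-body cosine score standing in for stance detection.
# The file name and the columns "headline", "body", "label" are assumptions.
import numpy as np
import pandas as pd
from scipy.sparse import csr_matrix, hstack
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import normalize

df = pd.read_csv("malay_news_labelled.csv")        # hypothetical labelled Malay corpus

vec = TfidfVectorizer(max_features=30_000)
body = vec.fit_transform(df["body"])
head = vec.transform(df["headline"])

# Row-wise cosine similarity between headline and body as a crude stance feature.
stance = np.asarray(normalize(head).multiply(normalize(body)).sum(axis=1))

X = hstack([body, csr_matrix(stance)])
X_train, X_test, y_train, y_test = train_test_split(
    X, df["label"], test_size=0.2, random_state=0
)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```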


Res Rhetorica ◽  
2019 ◽  
Vol 6 (1) ◽  
Author(s):  
Isabel Morales-Sánchez ◽  
Juan Pedro Martin-Villarreal

This article analyzes the rhetorical strategies involved in the spread of texts created in a digital context. The Internet has initiated a new communicative environment that shapes the contents and circumstances of dissemination of online news and electronic literature. The digital medium affects journalism and literature through a series of rhetorical strategies aimed at persuading the audience to double click (automated interactions, clickbait, trending). These rhetorical strategies are not accepted as valid in conventional media and publishing; however, they promote the rapid dissemination of digital news and reconfigure the existing relationships between authors and readers of literary works. Our aim is to explain how the dissemination of these texts can be understood from a rhetorical viewpoint, however much the spread of fake news or the radical change in electronic literary works may be criticized. We point to the consequences of a communicative context that prioritizes immediacy, anonymity, and content democratization. Analyzing selected examples from the Spanish (social) media context demonstrates how double-click rhetoric relates to fictionalization and the backgrounding of ethos.


Author(s):  
Ujwal Patil ◽  
Prof. P. M. Chouragade

Technological advancements and qualitative improvements in artificial intelligence and deep learning have led to the creation of realistic-looking but phoney digital content known as deepfakes. These manipulated videos can quickly be shared via social media to spread fake news or disinformation, which not only harms those who are deceived but also damages social media platforms by diminishing trust. Because there are no regulatory mechanisms in place, these deepfake videos go unchecked, and untrustworthy outlets can post whatever they wish, causing confusion in society. Current solutions are unable to provide digital media history tracing and authentication, so it is essential to develop effective methods for detecting deepfake videos and for determining the source or origin of such footage. That is why we implement blockchain techniques to trace back and determine the origin of digital media; these techniques help in the effective recognition of deepfake videos and in calculating a trust factor for the user.
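The abstract proposes blockchain-based provenance without describing a concrete protocol. As a conceptual sketch only (not the authors' system), the toy ledger below records SHA-256 digests of media files in a hash-chained list so that a video's origin and integrity can later be traced; all names and structures are illustrative assumptions.

```python
# Toy hash-chained provenance ledger for media files (conceptual sketch only).
import hashlib
import json
import time

def file_digest(path: str) -> str:
    """SHA-256 digest of a media file, used as its content fingerprint."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

class ProvenanceLedger:
    def __init__(self):
        self.chain = []                                   # append-only list of blocks

    def add_record(self, media_path: str, publisher: str) -> dict:
        """Append a block linking the file's digest, its publisher, and the previous block."""
        prev_hash = self.chain[-1]["block_hash"] if self.chain else "0" * 64
        record = {
            "media_digest": file_digest(media_path),
            "publisher": publisher,
            "timestamp": time.time(),
            "prev_hash": prev_hash,
        }
        record["block_hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        self.chain.append(record)
        return record

    def trace_origin(self, media_path: str):
        """Return the earliest record matching the file's digest, if any."""
        digest = file_digest(media_path)
        return next((r for r in self.chain if r["media_digest"] == digest), None)
```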


CCIT Journal ◽  
2020 ◽  
Vol 13 (1) ◽  
pp. 20-31
Author(s):  
Untung Rahardja ◽  
Ani Wulandari ◽  
Marviola Hardini

Digital content is content in various formats, whether text, image, video, audio, or a combination, that can be read, displayed, or played by a computer and easily sent or shared through digital media. Digital content has abundant benefits, especially in promotion: when a business or organization wants to introduce a product or service it owns, it requires content such as images as promotional media. However, distributing posters to everyone you meet is out of step with current technological advancements, because it still relies on a conventional process. Therefore, to overcome this problem, social media can be used to process digital content easily and quickly. In this study, three problems are addressed using two methods, and three solutions are produced. The advantage of digital content on social media is that it can be accessed anytime and anywhere, so it is concluded that the use of digital content on social media can overcome these problems and constitutes a creativepreneur effort within the promotion system of a journal publisher.
Keywords: Digital Content, Creativepreneur, ATT Journal, Social Media


2021 ◽  
Vol 10 (5) ◽  
pp. 170
Author(s):  
Reinald Besalú ◽  
Carles Pont-Sorribes

In the context of the dissemination of fake news and the traditional media outlets’ loss of centrality, the credibility of digital news emerges as a key factor for today’s democracies. The main goal of this paper was to identify the levels of credibility that Spanish citizens assign to political news in the online environment. A national survey (n = 1669) was designed to assess how the news format affected credibility and likelihood of sharing. Four different news formats were assessed, two of them linked to traditional media (digital newspapers and digital television) and two to social media (Facebook and WhatsApp). Four experimental groups assigned a credibility score and a likelihood of sharing score to four different political news items presented in the aforementioned digital formats. The comparison between the mean credibility scores assigned to the same news item presented in different formats showed significant differences among groups, as did the likelihood of sharing the news. News items shown in a traditional media format, especially digital television, were assigned more credibility than news presented in a social media format, and participants were also more likely to share the former, revealing a more cautious attitude towards social media as a source of news.
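The abstract reports significant differences between mean credibility scores across the four format groups without naming the statistical test. As a hedged illustration only, the snippet below runs a one-way ANOVA over made-up per-group score arrays, one standard way to compare means across more than two independent groups; none of the numbers come from the survey.

```python
# Hedged illustration: comparing mean credibility scores across four news formats
# with a one-way ANOVA. The score arrays are synthetic placeholders, not survey data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
digital_newspaper = rng.normal(6.2, 1.5, 400)   # hypothetical 0-10 credibility ratings
digital_tv        = rng.normal(6.6, 1.5, 400)
facebook          = rng.normal(5.1, 1.7, 400)
whatsapp          = rng.normal(4.9, 1.8, 400)

f_stat, p_value = stats.f_oneway(digital_newspaper, digital_tv, facebook, whatsapp)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")   # p < 0.05 -> group means differ significantly
```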


Author(s):  
V.T Priyanga ◽  
J.P Sanjanasri ◽  
Vijay Krishna Menon ◽  
E.A Gopalakrishnan ◽  
K.P Soman

The widespread use of social media like Facebook, Twitter, WhatsApp, etc. has changed the way news is created and published; accessing news has become easy and inexpensive. However, the scale of usage and the inability to moderate content have made social media a breeding ground for the circulation of fake news. Fake news is deliberately created either to increase readership or to disrupt order in society for political and commercial benefit. It is of paramount importance to identify and filter out fake news, especially in democratic societies. Most existing methods for detecting fake news involve traditional supervised machine learning, which has been quite ineffective. In this paper, we analyze word embedding features that can tell apart fake news from true news. We use the LIAR and ISOT data sets. We extract highly correlated news data from the entire data set using cosine similarity and other such metrics, in order to distinguish their domains based on central topics. We then employ autoencoders to detect and differentiate between true and fake news while also exploring their separability through network analysis.
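As a hedged illustration of the first step described here (extracting highly correlated news with cosine similarity), the snippet below selects document pairs whose TF-IDF vectors exceed a similarity threshold; the vectorizer choice and the 0.8 threshold are assumptions, not values from the paper.

```python
# Hedged sketch: find highly similar news items via cosine similarity.
# The vectorizer choice and the 0.8 threshold are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "Example news article text one ...",
    "Example news article text two ...",
    "A completely unrelated piece of text.",
]
vectors = TfidfVectorizer().fit_transform(docs)
sim = cosine_similarity(vectors)                 # dense (n_docs, n_docs) similarity matrix

threshold = 0.8
pairs = [
    (i, j, sim[i, j])
    for i in range(len(docs)) for j in range(i + 1, len(docs))
    if sim[i, j] >= threshold
]
print(pairs)    # document pairs likely sharing a central topic
```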


2021 ◽  
Vol 2 (6) ◽  
Author(s):  
Danilo Vicente Batista de Oliveira ◽  
Ulysses Paulino Albuquerque

Author(s):  
Pedro Lázaro-Rodríguez

A study of digital news on public libraries in Spain is presented through media mapping and a thematic and consumption analysis based on Facebook interactions. A total of 7,629 digital news items published in 2019 were considered. The media mapping includes the evolution of the volume of news publications, the most prominent media outlets and journalists, and the sections in which most news items are published. For the thematic and consumption analysis, the 250 news items with the highest number of Facebook interactions are considered, defining 15 thematic categories. The most published topics include: new libraries and spaces, collections, and libraries from a historical perspective. The topics that generate the most interactions are the value of libraries (social, human, and cultural capital), libraries in other countries, and new libraries and spaces. The value and originality of the study lie in measuring the consumption of news and digital media through Facebook interactions. The methods used and the results obtained also provide new knowledge for Communication and Media Studies, by developing the idea of media mapping for application to other topics and media in future work, and for Librarianship, particularly through the information obtained on public libraries.
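As a small hedged illustration of the consumption analysis described above (ranking news items by Facebook interactions and aggregating them by theme), the pandas snippet below uses made-up rows; the column names and sample data are hypothetical, not drawn from the study.

```python
# Hedged sketch of the consumption analysis: rank news by Facebook interactions
# and aggregate interactions by thematic category. Data and columns are illustrative only.
import pandas as pd

news = pd.DataFrame({
    "title":           ["New library opens", "Historic archive digitised", "Libraries as social capital"],
    "outlet":          ["Outlet A", "Outlet B", "Outlet A"],
    "theme":           ["new libraries and spaces", "history", "value of libraries"],
    "fb_interactions": [1200, 300, 2100],
})

top_items = news.nlargest(250, "fb_interactions")          # top news items by consumption
interactions_by_theme = (
    top_items.groupby("theme")["fb_interactions"].sum().sort_values(ascending=False)
)
print(interactions_by_theme)
```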


2021 ◽  
Author(s):  
Lamya Alderywsh ◽  
Aseel Aldawood ◽  
Ashwag Alasmari ◽  
Farah Aldeijy ◽  
Ghadah Alqubisy ◽  
...  

BACKGROUND: Fake news spreading via deceptive machine-generated text poses a serious threat in technologically advanced societies, including those in the Arab world. In the last decade, Arabic fake news identification has gained increased attention, and numerous detection approaches have shown some ability to find fake news across various data sources. Nevertheless, many existing approaches overlook recent advancements in fake news detection, in particular the incorporation of machine learning algorithms.
OBJECTIVE: The Tebyan project aims to address the problem of fake news by developing a detection system that employs machine learning algorithms to determine whether news is fake or real in the context of the Arab world.
METHODS: The project went through numerous phases using an iterative methodology to develop the system. It consists of implementing the machine learning algorithms in Python and collecting genuine and fake news datasets. Through system testing, the study also assesses how information-exchanging behaviours can minimise misinformation and identify the optimal source for authenticating emerging news.
RESULTS: The main deliverable of the project is the Tebyan system, which allows users to check the credibility of news in Arabic newspapers. The SVM classifier, on average, exhibited the highest performance, reaching 90% on every performance measure; the second-best algorithm was the linear SVC, which also reached 90% on the performance measures for the typical types of fake information.
CONCLUSIONS: The study concludes that building the system with machine learning algorithms in Python allows users to quickly comment on and rate the credibility results and to subscribe to news email services.
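The abstract states that the system was implemented in Python and that SVM-family classifiers reached about 90% on the reported measures, without further detail. The sketch below shows one conventional way such an Arabic fake-news classifier could be set up with scikit-learn; the dataset file and column names are assumptions, not the Tebyan project's actual code.

```python
# Hedged sketch of a TF-IDF + linear SVM fake-news classifier, in the spirit of
# the Tebyan description; the file name and columns "text", "label" are assumptions.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC

df = pd.read_csv("arabic_news_labelled.csv")       # hypothetical labelled Arabic corpus
X_train, X_test, y_train, y_test = train_test_split(
    df["text"], df["label"], test_size=0.2, random_state=42, stratify=df["label"]
)

model = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2), max_features=50_000)),
    ("svm", LinearSVC()),
])
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```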

