fact checking
Recently Published Documents


TOTAL DOCUMENTS: 662 (five years: 490)
H-INDEX: 25 (five years: 8)

2023 · Vol 55 (1) · pp. 1-35
Author(s): Giannis Bekoulis, Christina Papagiannopoulou, Nikos Deligiannis

We study the fact-checking problem, which aims to identify the veracity of a given claim. Specifically, we focus on the task of Fact Extraction and VERification (FEVER) and its accompanying dataset. The task consists of the subtasks of retrieving the relevant documents (and sentences) from Wikipedia and validating whether the information in the documents supports or refutes a given claim. This task is essential and can be the building block of applications such as fake news detection and medical claim verification. In this article, we aim at a better understanding of the challenges of the task by presenting the literature in a structured and comprehensive way. We describe the proposed methods by analyzing the technical perspectives of the different approaches and discussing the performance results on the FEVER dataset, which is the most well-studied and formally structured dataset for the fact extraction and verification task. We also conduct the largest experimental study to date on identifying beneficial loss functions for the sentence retrieval component. Our analysis indicates that sampling negative sentences is important for improving performance and decreasing computational complexity. Finally, we describe open issues and future challenges, and we motivate future research on the task.
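The finding that sampling negative sentences helps the sentence retrieval component can be illustrated with a small sketch: pair each claim with its gold evidence sentences and a handful of randomly sampled non-evidence sentences, then train a scorer with a pairwise margin ranking loss. The toy scorer, the margin value, and the random embeddings below are illustrative assumptions, not the architectures or loss functions benchmarked in the article.

```python
import random
import torch
import torch.nn as nn

# Toy relevance scorer standing in for a claim-sentence encoder (hypothetical).
class PairScorer(nn.Module):
    def __init__(self, dim=32):
        super().__init__()
        self.proj = nn.Linear(2 * dim, 1)

    def forward(self, claim_vec, sent_vec):
        return self.proj(torch.cat([claim_vec, sent_vec], dim=-1)).squeeze(-1)

def retrieval_loss_with_sampled_negatives(scorer, claim_vec, gold_vecs, neg_pool,
                                          k=5, margin=1.0):
    """Pairwise margin loss: each gold evidence sentence should outscore
    k randomly sampled non-evidence sentences by at least `margin`."""
    negatives = random.sample(neg_pool, min(k, len(neg_pool)))
    loss_fn = nn.MarginRankingLoss(margin=margin)
    losses = []
    for gold in gold_vecs:
        pos = scorer(claim_vec, gold).expand(len(negatives))
        neg = torch.stack([scorer(claim_vec, n) for n in negatives])
        losses.append(loss_fn(pos, neg, torch.ones_like(pos)))
    return torch.stack(losses).mean()

# Dummy usage with random embeddings in place of encoded text.
dim = 32
scorer = PairScorer(dim)
claim = torch.randn(dim)
gold = [torch.randn(dim) for _ in range(2)]    # gold evidence sentences
pool = [torch.randn(dim) for _ in range(50)]   # non-evidence candidate sentences
loss = retrieval_loss_with_sampled_negatives(scorer, claim, gold, pool)
loss.backward()
```

Scoring only k sampled negatives per claim, rather than every candidate sentence, is what keeps the loss computation and training cost manageable, which is the trade-off the abstract's analysis points to.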


2022 · Vol 2 · pp. 8
Author(s): Rubén Arcos, Manuel Gertrudix, Cristina Arribas, Monica Cardarilli

The dissemination of purposely deceitful or misleading content to target audiences for political aims or economic purposes constitutes a threat to democratic societies and institutions, and is increasingly recognized as a major security threat, particularly after evidence and allegations of hostile foreign interference in several countries surfaced in the last five years. Disinformation can also be part of hybrid threat activities. This research paper examines findings on the effects of disinformation and addresses the question of how effective counterstrategies against digital disinformation are, with the aim of assessing the impact of responses such as the exposure and disproof of disinformation content and conspiracy theories. The paper's objective is to synthesize the main scientific findings on disinformation effects and on the effectiveness of debunking, inoculation, and forewarning strategies against digital disinformation. A mixed methodology is used, combining qualitative interpretive analysis with a structured technique for evaluating scientific literature, the systematic literature review (SLR), following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) framework.


Semantic Web · 2022 · pp. 1-35
Author(s): Katarina Boland, Pavlos Fafalios, Andon Tchechmedjiev, Stefan Dietze, Konstantin Todorov

Analyzing statements of facts and claims in online discourse is the subject of a multitude of research areas. Methods from natural language processing and computational linguistics help investigate issues such as the spread of biased narratives and falsehoods on the Web. Related tasks include fact-checking, stance detection and argumentation mining. Knowledge-based approaches, in particular works on knowledge base construction and augmentation, are concerned with mining, verifying and representing factual knowledge. While all these fields deal with strongly related notions, such as claims, facts and evidence, the terminology and conceptualisations used across and within communities vary heavily, making it hard to assess the commonalities and relations among related works and how research in one field may contribute to addressing problems in another. We survey the state of the art across the fields and research tasks of this interdisciplinary area. We assess varying definitions and propose a conceptual model – Open Claims – for claims and related notions that takes their inherent complexity into consideration, distinguishing between their meaning, linguistic representation and context. We also introduce an implementation of this model using established vocabularies and discuss applications across various tasks related to online discourse analysis.
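As a minimal illustration of representing a claim with established vocabularies, the sketch below serializes one claim, its textual (linguistic) form, its context of appearance, and a review of it using rdflib and Schema.org's Claim/ClaimReview terms. The example URIs, the claim text, and the flat rating literal are assumptions made for the sketch; this is not the authors' Open Claims implementation.

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF

SCHEMA = Namespace("https://schema.org/")
EX = Namespace("https://example.org/")  # hypothetical namespace for this sketch

g = Graph()
g.bind("schema", SCHEMA)

claim = EX["claim/42"]
review = EX["review/42"]

# The claim: its linguistic representation (text) and where it appeared (context).
g.add((claim, RDF.type, SCHEMA.Claim))
g.add((claim, SCHEMA.text, Literal("Drinking hot water cures the flu.", lang="en")))
g.add((claim, SCHEMA.appearance, URIRef("https://example.org/post/123")))

# A fact-checking review assessing the claim (rating simplified to a literal;
# Schema.org would normally expect a Rating node here).
g.add((review, RDF.type, SCHEMA.ClaimReview))
g.add((review, SCHEMA.itemReviewed, claim))
g.add((review, SCHEMA.reviewRating, Literal("False")))

print(g.serialize(format="turtle"))
```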


2022
Author(s): Fabio Giglietto, Manolo Farci, Giada Marino, Serena Mottola, Tommaso Radicioni, ...

This report presents the outcomes of a project aimed at developing and testing a prototype tool that supports and speeds up the work of fact-checkers and debunkers by surfacing and ranking potentially problematic information circulated on social media with a content-agnostic approach. The tool itself is the result of a multi-year research activity carried out within the Mapping Italian News Research Program of the University of Urbino Carlo Bo to study the strategies, tactics and goals of information operations aimed at manipulating Italian public opinion by exploiting the vulnerabilities of the contemporary media ecosystem. This research activity led to original studies, public reports, new methods, maps and tools employed to study the activity of nefarious Italian social media actors who aim to amplify the reach and impact of problematic information by coordinating their efforts. Tracking these actors proved instrumental in observing the “infodemic” unraveling during the early days of the COVID-19 outbreak in Italy. Combining this existing knowledge with a range of original tools and data sources provided by Meta’s Facebook Open Research Initiative (Fort) and by The International Fact-Checking Network (IFCN) at Poynter, the report: documents those early days by highlighting a list of widely viewed and interacted-with links circulated on Facebook; traces the establishment, growth and evolution of Italian COVID-skeptic coordinated networks on Facebook; presents a comprehensive and updated map of the activities performed by these networks of nefarious social media actors; unveils a set of original tactics and strategies employed by these actors to adjust their operations to the mitigation efforts adopted by social media platforms to reduce the spread of problematic information; describes the circulation of three specific pieces of problematic information; and provides an overview of the outcomes of the testing phase (carried out in collaboration with Facta.news) of a prototype tool that surfaces and ranks potentially problematic information circulated on social media with a content-agnostic approach.
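One way to read “content-agnostic” is that links are surfaced from their sharing patterns rather than from the text they carry, for example when many distinct accounts post the same URL within a short time window. The sketch below illustrates that idea on an invented share log; the field layout, the 60-second window, and the account threshold are assumptions for illustration and do not reproduce the project's actual tool or data.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Invented share log: (account, url, timestamp) rows for illustration only.
shares = [
    ("page_a", "https://example.org/story", datetime(2020, 3, 1, 10, 0, 5)),
    ("page_b", "https://example.org/story", datetime(2020, 3, 1, 10, 0, 40)),
    ("page_c", "https://example.org/story", datetime(2020, 3, 1, 10, 0, 55)),
    ("page_d", "https://example.org/other", datetime(2020, 3, 1, 12, 0, 0)),
]

def surface_rapidly_shared_urls(shares, window=timedelta(seconds=60), min_accounts=3):
    """Flag URLs shared by at least `min_accounts` distinct accounts within
    `window` of the earliest share, and rank them by that count."""
    by_url = defaultdict(list)
    for account, url, ts in shares:
        by_url[url].append((ts, account))
    flagged = {}
    for url, posts in by_url.items():
        posts.sort()
        first_ts = posts[0][0]
        fast_accounts = {acc for ts, acc in posts if ts - first_ts <= window}
        if len(fast_accounts) >= min_accounts:
            flagged[url] = len(fast_accounts)
    return sorted(flagged.items(), key=lambda kv: kv[1], reverse=True)

print(surface_rapidly_shared_urls(shares))
# [('https://example.org/story', 3)]
```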


Intexto · 2022 · pp. 101015
Author(s): Marta Thaís Alencar, Jacqueline Lima Dourado

This article analyzes the collaborative fact-checking of Fato ou Fake, Grupo Globo's content monitoring and verification service. To that end, the work conducts a case study based on the guidelines contained in the Group's Editorial Plan and in the Fato ou Fake section of Portal G1, which position fact-checking as a credibility strategy.


2022 · pp. 227-248
Author(s): And Algül, Gamze Sinem Kuruoğlu

Social media has become a platform where fake news is abundant, an issue that has once again made itself felt during the COVID-19 pandemic. The WHO has termed the information pollution of the COVID-19 period an “infodemic.” Because of the pandemic, access to authentic news became more important than ever in 2020. Within this context, news items verified by teyit.org and dogrulukpayi.com in 2020 were analyzed. In addition, text analysis was conducted on 161 news items originating on Twitter to identify the most commonly used words. The research suggests that news items on topics related to the public agenda, which the public feels a need to be informed about, are more likely to be fake news, and that Twitter, used widely by the public to receive news, is preferred because it makes spreading fake news easier and faster.
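The word-frequency step described here can be reproduced with a few lines of standard tooling; the sample texts and stopword list below are placeholders, not the items verified by teyit.org or dogrulukpayi.com.

```python
import re
from collections import Counter

# Placeholder texts standing in for the analyzed Twitter-origin news items.
tweets = [
    "Claim: the vaccine alters DNA. This has been debunked.",
    "Debunked claim about the vaccine circulating again.",
]
stopwords = {"the", "this", "has", "been", "about", "again"}  # illustrative list

tokens = []
for text in tweets:
    tokens += [w for w in re.findall(r"[^\W\d_]+", text.lower()) if w not in stopwords]

# Most commonly used words across the corpus.
print(Counter(tokens).most_common(5))
# [('claim', 2), ('vaccine', 2), ('debunked', 2), ('alters', 1), ('dna', 1)]
```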


2022 · Vol 18 (1) · pp. 0-0

This paper explores the dynamics of justification in the wake of a rumor outbreak on social media. Specifically, it examines the extent to which the five types of justification (descriptive argumentation, presumptive argumentation, evidentialism, truth skepticism, and epistemological skepticism) manifested in pro-rumor, anti-rumor, and doubtful voices before and after fact-checking. Content analysis was employed on 1,911 tweets related to a rumor outbreak. Non-parametric cross-tabulation was used to uncover nuances in information sharing before and after fact-checking. Augmenting the literature on the online community's susceptibility to hoaxes, the paper offers a silver lining: users are responsible enough to correct rumors during the later phase of a rumor lifecycle. This sense of public-spiritedness can be harnessed by knowledge management practitioners and public relations professionals for crowdsourced rumor refutation.
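One common way to test such a cross-tabulation non-parametrically is a chi-square test of independence between phase (before vs. after fact-checking) and voice type; a minimal sketch follows, with invented cell counts rather than the paper's coded data.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Rows: phase (before / after fact-checking); columns: voice
# (pro-rumor, anti-rumor, doubt). Cell counts are hypothetical.
table = np.array([
    [420, 150, 130],   # before fact-checking
    [210, 660, 341],   # after fact-checking
])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.1f}, dof={dof}, p={p:.4g}")
```

A small p-value would indicate that the distribution of voices differs between the two phases, the kind of shift suggested by users correcting the rumor in its later lifecycle.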

