A Review on Fact Extraction and Verification

2023 ◽  
Vol 55 (1) ◽  
pp. 1-35
Author(s):  
Giannis Bekoulis ◽  
Christina Papagiannopoulou ◽  
Nikos Deligiannis

We study the fact-checking problem, which aims to identify the veracity of a given claim. Specifically, we focus on the task of Fact Extraction and VERification (FEVER) and its accompanying dataset. The task consists of the subtasks of retrieving the relevant documents (and sentences) from Wikipedia and validating whether the information in the documents supports or refutes a given claim. This task is essential and can be the building block of applications such as fake news detection and medical claim verification. In this article, we aim at a better understanding of the challenges of the task by presenting the literature in a structured and comprehensive way. We describe the proposed methods by analyzing the technical perspectives of the different approaches and discussing the performance results on the FEVER dataset, which is the most well-studied and formally structured dataset on the fact extraction and verification task. We also conduct the largest experimental study to date on identifying beneficial loss functions for the sentence retrieval component. Our analysis indicates that sampling negative sentences is important for improving performance and decreasing computational complexity. Finally, we describe open issues and future challenges, and we motivate future research on the task.
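
The finding that sampling negative sentences helps sentence retrieval can be illustrated with a short sketch. The following is a minimal, hypothetical PyTorch example of a pairwise ranking loss over a handful of sampled negatives; the embeddings, margin and sample size are illustrative assumptions, not the setup evaluated in the article.

```python
# Illustrative sketch (not the survey's implementation): a pairwise
# ranking loss for sentence retrieval in which, for each claim, only a
# small sample of negative sentences is scored instead of all of them.
import torch
import torch.nn.functional as F

def sampled_ranking_loss(claim_emb, pos_emb, neg_embs, margin=1.0):
    """claim_emb: (d,) embedding of the claim.
    pos_emb:   (d,) embedding of a gold evidence sentence.
    neg_embs:  (k, d) embeddings of k sampled non-evidence sentences."""
    pos_score = F.cosine_similarity(claim_emb, pos_emb, dim=0)
    neg_scores = F.cosine_similarity(claim_emb.unsqueeze(0), neg_embs, dim=1)
    # Hinge loss: push the positive sentence above every sampled negative.
    return F.relu(margin - pos_score + neg_scores).mean()

# Usage with random embeddings; scoring k = 5 sampled negatives keeps the
# cost linear in k rather than in the full Wikipedia sentence pool.
claim = torch.randn(128)
positive = torch.randn(128)
negatives = torch.randn(5, 128)
print(sampled_ranking_loss(claim, positive, negatives))
```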

The Internet of Things aims to automate and add intelligence to existing processes by introducing constrained devices such as sensors and actuators. These constrained devices lack computation and memory resources and are usually battery powered for ease of deployment. Due to their limited capabilities, constrained devices usually host proprietary protocols, platforms, data formats and data structures for communication and are therefore unable to communicate with devices from different vendors. This inability leads to interoperability issues in the Internet of Things, which runs against its spirit of interconnecting billions of devices and results in isolated, vendor-locked and closed-loop deployments of IoT solutions. Various attempts have been made by industry and academia to resolve the interoperability issues among constrained devices. However, the majority of the solutions target individual layers of the communication stack and do not provide a holistic solution to the problem. More recent research has made theoretical proposals to virtualize constrained devices, abstracting their data so that it is always available to applications. We adopt this technique in our research to virtualize the entire Internet of Things network so that virtual TCP/IP-based protocols can operate on virtual networks to enable interoperability. This paper proposes the operations of the Constrained Device Virtualization Algorithm and then simulates it in CloudSim to derive performance results. The paper further highlights open issues for future research in this area.
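
The paper's Constrained Device Virtualization Algorithm is not reproduced here, but the underlying idea of abstracting a constrained device behind a virtual counterpart can be sketched. In the hypothetical Python sketch below, each physical device is mirrored by a virtual object that caches its data behind a uniform, protocol-agnostic interface; all class and method names (VirtualDevice, VirtualNetwork, push, read, query) are illustrative assumptions, not the paper's design.

```python
# Hypothetical sketch of device virtualization: each constrained device
# is mirrored by a virtual object that caches its last reading, so
# applications query a uniform interface instead of vendor protocols.
from dataclasses import dataclass, field

@dataclass
class VirtualDevice:
    device_id: str
    protocol: str                      # vendor protocol of the real device
    cache: dict = field(default_factory=dict)

    def push(self, reading: dict) -> None:
        """Called by a protocol adapter when the real device reports."""
        self.cache.update(reading)

    def read(self) -> dict:
        """Uniform, protocol-agnostic read used by applications."""
        return dict(self.cache)

class VirtualNetwork:
    """Registry of virtual devices; stands in for the virtual network layer."""
    def __init__(self):
        self.devices: dict[str, VirtualDevice] = {}

    def register(self, dev: VirtualDevice) -> None:
        self.devices[dev.device_id] = dev

    def query(self, device_id: str) -> dict:
        return self.devices[device_id].read()

net = VirtualNetwork()
sensor = VirtualDevice("temp-01", protocol="ZigBee")
net.register(sensor)
sensor.push({"temperature_c": 21.4})   # an adapter translates the vendor frame
print(net.query("temp-01"))            # {'temperature_c': 21.4}
```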


Author(s):  
Jie Gui ◽  
Xiaofeng Cong ◽  
Yuan Cao ◽  
Wenqi Ren ◽  
Jun Zhang ◽  
...  

The presence of haze significantly reduces the quality of images. Researchers have designed a variety of algorithms for image dehazing (ID) to restore the quality of hazy images. However, few studies summarize deep learning (DL) based dehazing technologies. In this paper, we conduct a comprehensive survey on recently proposed dehazing methods. Firstly, we summarize the commonly used datasets, loss functions and evaluation metrics. Secondly, we group existing research on ID into two major categories: supervised ID and unsupervised ID. The core ideas of various influential dehazing models are introduced. Finally, open issues for future research on ID are pointed out.
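
As a concrete illustration of the supervised-ID category, the sketch below shows a single training step of a toy dehazing network with an L1 reconstruction loss between the dehazed estimate and the paired clear image; the tiny CNN, learning rate and random tensors are stand-ins, not any of the surveyed models.

```python
# Minimal sketch of one supervised dehazing training step: a stand-in
# CNN maps a hazy image to a dehazed estimate and is trained with an
# L1 reconstruction loss against the paired haze-free ground truth.
import torch
import torch.nn as nn

dehaze_net = nn.Sequential(            # toy stand-in for a dehazing CNN
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 3, 3, padding=1),
)
criterion = nn.L1Loss()                # a common reconstruction loss in ID work
optimizer = torch.optim.Adam(dehaze_net.parameters(), lr=1e-4)

hazy = torch.rand(4, 3, 64, 64)        # batch of hazy inputs
clear = torch.rand(4, 3, 64, 64)       # paired clear ground truth

optimizer.zero_grad()
loss = criterion(dehaze_net(hazy), clear)
loss.backward()
optimizer.step()
print(float(loss))
```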


10.28945/4259 ◽  
2019 ◽  

Aim/Purpose: The goal of the paper is to consider how the informing phenomenon referred to as “fake news” can be characterized using existing informing science conceptual schemes. Background: A brief review of articles relating to fake news is presented, after which potential implications under a variety of informing science frameworks are considered. Methodology: Conceptual synthesis. Contribution: Informing science appears to offer a unique perspective on the fake news phenomenon. Findings: Many aspects of fake news seem consistent with complexity-based conceptual schemes in which its potential for establishing or reinforcing group membership outweighs its factual informing value. Recommendations for Practitioners: The analysis suggests that conventional approaches to combatting fake news, such as reliance on fact checking, may prove largely ineffective because they fail to address the underlying motivation for absorbing and creating fake news. Recommendations for Researchers: Acceptance of fake news may be framed as an element of a broader information seeking strategy independent of the message it conveys. Impact on Society: The societal impact of believing fake news may prove to be less important than its long-term impact on the perceived reliability of informing channels. Future Research: A broad array of research questions warranting further investigation is posed.


2020 ◽  
Author(s):  
Jay Joseph Van Bavel ◽  
Elizabeth Ann Harris ◽  
Philip Pärnamets ◽  
Steve Rathje ◽  
Kimberly Doell ◽  
...  

The spread of misinformation, including “fake news,” propaganda, and conspiracy theories, represents a serious threat to society, as it has the potential to alter beliefs, behavior, and policy. Research is beginning to disentangle how and why misinformation is spread and identify processes that contribute to this social problem. We propose an integrative model to understand the social, political, and cognitive psychology risk factors that underlie the spread of misinformation and highlight strategies that might be effective in mitigating this problem. However, the spread of misinformation is a rapidly growing and evolving problem; thus, scholars need to identify and test novel solutions and work with policy makers to evaluate and deploy them. Hence, we provide a roadmap for future research to identify where scholars should invest their energy to have the greatest overall impact.


Author(s):  
Leonardo J. Gutierrez ◽  
Kashif Rabbani ◽  
Oluwashina Joseph Ajayi ◽  
Samson Kahsay Gebresilassie ◽  
Joseph Rafferty ◽  
...  

The increase in mental illness cases around the world can be described as an urgent and serious global health threat. Around 500 million people suffer from mental disorders, among which depression, schizophrenia, and dementia are the most prevalent. Revolutionary technological paradigms such as the Internet of Things (IoT) provide us with new capabilities to detect, assess, and care for patients early. This paper comprehensively surveys work done at the intersection between IoT and mental health disorders. We evaluate multiple computational platforms, methods and devices, as well as study results and potential open issues for the effective use of IoT systems in mental health. In particular, we elaborate on relevant open challenges in the use of existing IoT solutions for mental health care, such as data acquisition issues, lack of self-organization of devices and service level agreements, and security, privacy and consent issues, which can be especially relevant given the potential impairments of some mental health patients. We aim to open the conversation for future research in this emerging area by outlining possible new paths based on the results and conclusions of this work.


Designs ◽  
2021 ◽  
Vol 5 (3) ◽  
pp. 42
Author(s):  
Eric Lazarski ◽  
Mahmood Al-Khassaweneh ◽  
Cynthia Howard

In recent years, disinformation and “fake news” have been spreading throughout the internet at rates never seen before. This has created the need for fact-checking organizations, groups that seek out claims and comment on their veracity, to emerge worldwide to stem the tide of misinformation. However, even with the many human-powered fact-checking organizations currently in operation, disinformation continues to run rampant throughout the Web, and the existing organizations are unable to keep up. This paper discusses in detail recent advances in computer science that use natural language processing to automate fact checking. It follows the entire process of automated fact checking using natural language processing, from detecting claims to fact checking to outputting results. In summary, automated fact checking works well in some cases, though generalized fact checking still needs improvement prior to widespread use.
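
To make that pipeline concrete, the sketch below strings together the three stages the paper follows (claim detection, evidence retrieval, verdict) using deliberately naive placeholder heuristics; a real system would use trained claim-detection, retrieval and natural language inference models, and every function here is a hypothetical stand-in.

```python
# Schematic of an automated fact-checking pipeline
# (claim detection -> evidence retrieval -> verdict).
# Every component here is a placeholder heuristic, not a real system.
from typing import List

def detect_claims(text: str) -> List[str]:
    # Placeholder: treat sentences containing a number as check-worthy.
    return [s.strip() for s in text.split(".")
            if s.strip() and any(ch.isdigit() for ch in s)]

def retrieve_evidence(claim: str, corpus: List[str]) -> List[str]:
    # Placeholder retrieval: rank corpus sentences by word overlap.
    claim_words = set(claim.lower().split())
    return sorted(corpus,
                  key=lambda s: len(claim_words & set(s.lower().split())),
                  reverse=True)[:3]

def verdict(claim: str, evidence: List[str]) -> str:
    # Placeholder verification; a real system would run an NLI model.
    overlap = any(set(claim.lower().split()) & set(e.lower().split())
                  for e in evidence)
    return "supported" if overlap else "not enough info"

corpus = ["The Eiffel Tower is 330 metres tall.", "Paris is in France."]
for claim in detect_claims("The Eiffel Tower is 330 metres tall."):
    evidence = retrieve_evidence(claim, corpus)
    print(claim, "->", verdict(claim, evidence))
```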


2015 ◽  
Vol 13 (1) ◽  
pp. 19-23 ◽  
Author(s):  
Richard Bull

Purpose – Information and communications technology (ICT) presents a peculiar twenty-first century conundrum, as it is both a cause of and a solution to rising carbon emissions. The growth of the digital economy is fueling increased energy consumption while affording new opportunities for reducing the environmental impacts of our daily lives. This paper responds to and builds on Patrignani and Whitehouse’s overview of Slow Tech by providing examples of how ICT can be used to reduce energy use. Encouraging examples are provided from the field of energy and buildings, and implications for wider society are raised. Design/methodology/approach – This paper builds on the previous overview, “The Clean Side of Slow Tech”, drawing on a comprehensive knowledge of the literature on the latest developments in the digital economy, energy and sustainability. Findings – This paper provides clear and encouraging signs of how ICT can contribute to sustainability by controlling systems more efficiently, facilitating behavioural change and reducing energy consumption. Future challenges and recommendations for future research are presented. Originality/value – This conceptual paper presents the latest research into the use of ICT in energy reduction and offers cautious but encouraging signs that, while the environmental impact of ICT must not be overlooked, there are benefits to be had from the digital economy.


2007 ◽  
Vol 32 (4) ◽  
pp. 808-817 ◽  
Author(s):  
Stephen S. Cheung

Over the past decade, research interest has risen in the direct effects of temperature on exercise capacity and tolerance, particularly in the heat. Two major paradigms have been proposed for how hyperthermia may contribute to voluntary fatigue during exercise in the heat. One suggests that voluntary exhaustion occurs upon the approach or attainment of a critical internal temperature through impairment in a variety of physiological systems. An alternate perspective proposes that thermal inputs modulate the regulation of self-paced workload to minimize heat storage. This review seeks to summarize recent research leading to the development of these two models for hyperthermia and fatigue and to explore possible bridges between them. Key areas for future research and development into voluntary exhaustion in the heat include (i) the development of valid and non-invasive means to measure brain temperature, (ii) understanding variability in perception and physiological responses to heat stress across individuals, (iii) extrapolating laboratory studies to field settings, (iv) understanding the failure in behavioural and physiological thermoregulation that leads to exertional heat illness, and (v) the integration of physiological and psychological parameters limiting voluntary exercise in the heat.


2021 ◽  
Vol 13 (1) ◽  
pp. 149-157 ◽  
Author(s):  
Nereida Carrillo ◽  
Marta Montagut

Media literacy of schoolchildren is a key political goal worldwide: institutions and citizens consider media literacy training essential – among other aspects – to combat falsehoods and generate healthy public opinion in democratic contexts. In Spain, various media literacy projects address this phenomenon, one of which is ‘Que no te la cuelen’ (‘Don’t be fooled’, QNTLC). The project, developed by the authors of this viewpoint, is implemented through theoretical–practical workshops aimed at public and private secondary school pupils (academic years 2018–19, 2019–20 and 2020–21), based around training in fake news detection strategies and online fact-checking tools for students and teachers. This viewpoint describes and reflects on the initiative, conducted in 36 training sessions with schoolchildren aged 14–16 years attending schools in Madrid, Valencia and Barcelona. The workshops are based on van Dijk’s media literacy model, with a special focus on the ‘informational skills’ dimension. The amount of information available through all kinds of online platforms demands extra effort in selecting, evaluating and sharing information, and the workshop addresses this process through seven steps: suspect; read, listen or watch carefully; check the source; look for other reliable sources; check the data and location; be conscious of your own bias; and decide whether or not to share the information. The QNTLC sessions teach and train these skills by combining gamification strategies – online quizzes, verification challenges, ‘infoxication’ dynamics in class – with public deliberation among students. Participants’ engagement and stakeholders’ interest in the programme suggest that this kind of training is important or, at least, attracts the attention of these collectives in the Spanish context.

