Verification task: recently published documents

Total documents: 136 (last five years: 26)
H-index: 16 (last five years: 2)

2023 ◽  
Vol 55 (1) ◽  
pp. 1-35
Author(s):  
Giannis Bekoulis ◽  
Christina Papagiannopoulou ◽  
Nikos Deligiannis

We study the fact-checking problem, which aims to identify the veracity of a given claim. Specifically, we focus on the task of Fact Extraction and VERification (FEVER) and its accompanying dataset. The task consists of the subtasks of retrieving the relevant documents (and sentences) from Wikipedia and validating whether the information in the documents supports or refutes a given claim. This task is essential and can serve as a building block for applications such as fake news detection and medical claim verification. In this article, we aim to provide a better understanding of the task's challenges by presenting the literature in a structured and comprehensive way. We describe the proposed methods by analyzing the technical perspectives of the different approaches and discussing the performance results on the FEVER dataset, which is the most well-studied and formally structured dataset on the fact extraction and verification task. We also conduct the largest experimental study to date on identifying beneficial loss functions for the sentence retrieval component. Our analysis indicates that sampling negative sentences is important for improving performance and decreasing the computational complexity. Finally, we describe open issues and future challenges, and we motivate future research on the task.
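As an illustration of the kind of objective examined in that experimental study, the sketch below shows a pairwise ranking loss for sentence retrieval with sampled negative sentences. The function names, margin value, and scoring interface are illustrative assumptions, not the authors' implementation.

    import random

    def sentence_ranking_loss(score, claim, evidence_sents, candidate_sents,
                              num_negatives=5, margin=1.0):
        # `score(claim, sentence)` is any relevance model returning a float;
        # higher means the sentence is judged more relevant to the claim.
        pool = [s for s in candidate_sents if s not in evidence_sents]
        negatives = random.sample(pool, k=min(num_negatives, len(pool)))
        losses = []
        for pos in evidence_sents:
            for neg in negatives:
                # Hinge term: penalise a sampled negative that outranks true evidence.
                losses.append(max(0.0, margin - score(claim, pos) + score(claim, neg)))
        return sum(losses) / max(1, len(losses))

Sampling only a handful of negatives per claim, rather than scoring every candidate sentence, is what keeps the training cost manageable while still pushing evidence sentences above non-evidence ones.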


2021 ◽  
Author(s):  
Ettore Ambrosini ◽  
Francesca Peressotti ◽  
Marisa Gennari ◽  
Silvia Benavides ◽  
Maria Montefinese

The efficient use of knowledge requires semantic control processes to retrieve context-relevant information. So far, it is well established that semantic knowledge, as measured with vocabulary tests, does not decline in aging. Yet, it is still unclear whether controlled retrieval (the context-driven retrieval of very specific aspects of semantic knowledge) declines in aging, following the same fate as other forms of cognitive control. Here, we tackled this issue by comparing the performance of younger and older native Italian speakers during a semantic feature verification task. To manipulate the control demands, we parametrically varied the semantic significance, a measure of the salience of the target feature for the cue concept. Compared to their younger counterparts, older adults showed a greater performance disruption (in terms of reaction times) as the significance value of the target feature decreased. This result suggests that older people have difficulties in regulating activation within semantic representations, such that they fail to handle non-dominant (or weakly activated) yet task-relevant semantic information.


Author(s):  
Yu-Sheng Lin ◽  
Zhe-Yu Liu ◽  
Yu-An Chen ◽  
Yu-Siang Wang ◽  
Ya-Liang Chang ◽  
...  

We study explainable AI (XAI) for the face recognition task, particularly face verification. Face verification has become a crucial task in recent years and has been deployed in many applications, such as access control, surveillance, and automatic personal log-on for mobile devices. With the increasing amount of data, deep convolutional neural networks can achieve very high accuracy on the face verification task. Beyond exceptional performance, deep face verification models need more interpretability so that we can trust the results they generate. In this article, we propose a novel similarity metric, called explainable cosine (xCos), that comes with a learnable module that can be plugged into most verification models to provide meaningful explanations. With the help of xCos, we can see which parts of the two input faces are similar, where the model pays attention, and how the local similarities are weighted to form the output xCos score. We demonstrate the effectiveness of our proposed method on LFW and various competitive benchmarks, not only providing novel and desirable model interpretability for face verification but also maintaining accuracy when plugged into existing face recognition models.
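The core computation behind such a metric can be sketched as a weighted sum of local cosine similarities over a spatial grid of face features. The shapes, names, and the fixed attention input below are illustrative assumptions, not the published architecture (which learns the attention weights from data).

    import numpy as np

    def xcos_like_score(feat_a, feat_b, attention):
        # feat_a, feat_b: (H, W, D) grids of local face features from the two images;
        # attention: (H, W) non-negative weights over grid cells (summing to 1).
        num = np.sum(feat_a * feat_b, axis=-1)
        denom = (np.linalg.norm(feat_a, axis=-1) *
                 np.linalg.norm(feat_b, axis=-1)) + 1e-8
        local_cos = num / denom                       # (H, W) map: which face regions match
        return float(np.sum(attention * local_cos))   # weighted sum gives the overall score

The local_cos map and the attention map are exactly the two artifacts that make the score inspectable: one shows where the faces agree, the other shows how much each region contributed.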


2021 ◽  
pp. 1-18
Author(s):  
Wim Fias ◽  
Muhammet Ikbal Sahan ◽  
Daniel Ansari ◽  
Ian M. Lyons

This fMRI study aimed at unraveling the neural basis of learning alphabet–arithmetic facts, as a proxy for the transition from slow, effortful, procedural counting-based processing to the fast and effortless processing that occurs once arithmetic addition facts have been learned. Neural changes were tracked while participants solved alphabet–arithmetic problems in a verification task (e.g., F + 4 = J). Problems were repeated across four learning blocks. Two neural networks with opposed learning-related changes were identified. Activity in a network consisting of basal ganglia and parieto-frontal areas decreased with learning, in line with a reduced involvement of procedure-based processing. Conversely, activity in a network involving the left angular gyrus and, to a lesser extent, the hippocampus gradually increased with learning, evidencing the gradual involvement of retrieval-based processing. Connectivity analyses gave insight into the functional relationship between the two networks: despite their opposing learning-related trajectories, the two networks became more integrated. Taking alphabet–arithmetic as a proxy for learning arithmetic, the present results have implications for current theories of learning arithmetic facts and can give direction to future developments.
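As a worked illustration of the task itself (not part of the study), an alphabet-arithmetic equation can be verified by counting forward through the alphabet from the letter operand:

    from string import ascii_uppercase

    def verify_alphabet_sum(letter, addend, probe):
        # Is `letter + addend = probe` true?  F + 4 = J holds because F is the
        # 6th letter and the 10th letter is J.
        idx = ascii_uppercase.index(letter) + addend
        return idx < len(ascii_uppercase) and ascii_uppercase[idx] == probe

    # verify_alphabet_sum("F", 4, "J") -> True;  verify_alphabet_sum("C", 3, "G") -> False

Early in training, participants presumably perform something like this counting procedure; with practice, the answer is retrieved directly, which is the transition the study tracks.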


Author(s):  
Martin Blicha ◽  
Antti E. J. Hyvärinen ◽  
Jan Kofroň ◽  
Natasha Sharygina

The use of propositional logic and systems of linear inequalities over the reals is a common means of modeling software for formal verification. Craig interpolants constitute a central building block in this setting for over-approximating reachable states, e.g., as candidates for inductive loop invariants. Interpolants for a linear system can be efficiently computed from a Simplex refutation by applying Farkas' lemma. However, these interpolants do not always suit the verification task; in the worst case, they can even prevent the verification algorithm from converging. This work introduces decomposed interpolants, a fundamental extension of Farkas interpolants, obtained by identifying and separating independent components of the interpolant structure using methods from linear algebra. We also present an efficient polynomial algorithm to compute decomposed interpolants and analyse its properties. We show experimentally that the use of decomposed interpolants in model checking results in immediate convergence on instances where state-of-the-art approaches diverge. Moreover, since the approach is based on the efficient Simplex method, it is very competitive in general.
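For orientation, a simplified sketch of the construction (not the paper's exact formulation): write the A-part of an unsatisfiable constraint system as $Ax \le a$. A Simplex refutation supplies non-negative Farkas coefficients $k_A$, and the Farkas interpolant is the single summed inequality

    $I_{\mathrm{Far}}:\quad k_A^{\top} A x \le k_A^{\top} a, \qquad k_A \ge 0.$

A decomposed interpolant instead splits the coefficient vector as $k_A = k_1 + \dots + k_m$ with each $k_j \ge 0$, chosen via linear-algebraic analysis of the constraint structure so that each partial sum still stays over the vocabulary shared with the B-part, and takes the conjunction

    $I_{\mathrm{dec}}:\quad \bigwedge_{j=1}^{m} \bigl(k_j^{\top} A x \le k_j^{\top} a\bigr).$

Summing the conjuncts recovers $I_{\mathrm{Far}}$, so $I_{\mathrm{dec}}$ implies the Farkas interpolant while preserving more of the structure of $A$, which is what can make it a better candidate for inductive invariants.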


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Fabian C. G. van den Berg ◽  
Peter de Weerd ◽  
Lisa M. Jonkman

Fingers facilitate number learning and arithmetic processing in early childhood. The current study investigated whether images of early-learned, culturally typical (canonical) finger montring patterns presenting smaller (2, 3, 4) or larger (7, 8, 9) quantities still facilitate adults' performance and neural processing in a math verification task. Twenty-eight adults verified solutions to simple addition problems that were shown in the form of canonical or non-canonical finger-number montring patterns while event-related potentials (ERPs) were measured. Results showed more accurate and faster sum verification when sum solutions were shown by canonical (versus non-canonical) finger patterns. Canonical finger montring patterns 2–4 led to faster responses regardless of whether they presented correct or incorrect sum solutions and elicited an enhanced early right-parietal P2p response, whereas canonical configurations 7–9 only facilitated performance in correct sum solution trials without evoking P2p effects. The later central-parietal P3 was enhanced for all canonical finger patterns irrespective of numerical range. These combined results provide behavioral and brain evidence that canonical cardinal finger patterns still facilitate adults' number processing. They further suggest that finger montring configurations of the numbers 2–4 have stronger internalized associations with other magnitude representations, possibly established through their mediating role in the developmental phase in which children acquire the numerical meaning of the first four number symbols.


2021 ◽  
pp. 174702182110226
Author(s):  
Jasinta D. M. Dewi ◽  
Jeanne Bagnoud ◽  
Catherine Thevenot

In this study, 17 adult participants were trained to solve alphabet-arithmetic problems using a production task (e.g., C + 3 = ?). The evolution of their performance across 12 practice sessions was compared to the results obtained in past studies using verification tasks (e.g., is C + 3 = F correct?). We show that, irrespective of the experimental paradigm used, there is no evidence for a shift from counting to retrieval during training. However, and again regardless of the paradigm, problems with the largest addend constitute an exception to the general pattern of results. Unlike other problems, their answers seem to be deliberately memorized by participants relatively early during training. All in all, we conclude that verification and production tasks lead to similar patterns of results, and both can therefore confidently be used to discuss current theories of learning. Still, deliberate memorization of problems with the largest addend appears earlier and more often in a production task than in a verification task. This last result is discussed in light of retrieval models.


2021 ◽  
Vol 30 ◽  
pp. 206
Author(s):  
Tyler Zarus Knowlton ◽  
Paul Pietroski ◽  
Alexander Williams ◽  
Justin Halberda ◽  
Jeffrey Lidz

Quantificational determiners have meanings that are "conservative" in the following sense: in sentences, repeating a determiner's internal argument within its external argument is logically insignificant. Using a verification task to probe which sets (or properties) of entities are represented when participants evaluate sentences, we test the predictions of three potential explanations for the cross-linguistic yet substantive conservativity constraint. According to "lexical restriction" views, words like every express relations that are exhibited by pairs of sets, but only some of these relations can be expressed with determiners. An "interface filtering" view retains the relational conception of determiner meanings, while replacing appeal to lexical filters (on relations of the relevant type) with special rules for interpreting the combination of a quantificational expression (Det NP) with its syntactic context, together with a ban on meanings that lead to triviality. The contrasting idea of "ordered predication" is that determiners do not express genuine relations. Instead, the second argument provides the scope of a monadic quantifier, while the first argument selects the domain for that quantifier: the sequences with respect to which it is evaluated. On this view, a determiner's two arguments each have a different logical status, suggesting that they might have a different psychological status as well. We find evidence that this is the case: when evaluating sentences like "every big circle is blue", participants mentally group the things specified by the determiner's first argument (e.g., the big circles) but not the things specified by the second argument (e.g., the blue things) or the intersection of both (e.g., the big blue circles). These results suggest that the phenomenon of conservativity is due to ordered predication.
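For reference, the conservativity property the abstract alludes to can be written as the equivalence (a standard formulation, added here only for illustration)

    $\mathrm{Det}(A)(B) \;\Leftrightarrow\; \mathrm{Det}(A)(A \cap B),$

so that, for example, "every big circle is blue" is true in exactly the same situations as "every big circle is a big circle that is blue": repeating the internal argument (big circle) inside the external argument changes nothing.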


2021 ◽  
Vol 12 (1) ◽  
Author(s):  
Federico Centrone ◽  
Niraj Kumar ◽  
Eleni Diamanti ◽  
Iordanis Kerenidis

In recent years, many computational tasks have been proposed as candidates for showing a quantum computational advantage, that is, an advantage in the time needed to perform the task using a quantum rather than a classical machine. Nevertheless, practical demonstrations of such an advantage remain particularly challenging because of the difficulty in bringing together all necessary theoretical and experimental ingredients. Here, we show an experimental demonstration of a quantum computational advantage in a prover-verifier interactive setting, where the computational task consists in the verification of an NP-complete problem by a verifier who only receives limited information about the proof sent by an untrusted prover in the form of a series of unentangled quantum states. We provide a simple linear-optical implementation that can perform this verification task efficiently (within a few seconds), while we also provide strong evidence that, for a fixed proof size, a classical computer would take much longer (assuming only that it takes exponential time to solve an NP-complete problem). While our computational advantage concerns a specific task in a scenario of mostly theoretical interest, it brings us a step closer to potentially useful applications, such as server-client quantum computing.


PeerJ ◽  
2021 ◽  
Vol 9 ◽  
pp. e10489
Author(s):  
Sonia Y. Cárdenas ◽  
Juan Silva-Pereyra ◽  
Belén Prieto-Corona ◽  
Susana A. Castro-Chavira ◽  
Thalía Fernández

Introduction: Dyscalculia is a specific learning disorder affecting the ability to learn certain math processes, such as arithmetic fact retrieval. The group of children with dyscalculia is very heterogeneous, in part due to variability in their working memory (WM) deficits. To assess the brain response to arithmetic fact retrieval, we applied an arithmetic verification task during an event-related potential (ERP) recording. Two effects have been reported: the N400 effect (higher negative amplitude for the incongruent than for the congruent condition), associated with arithmetic incongruency and caused by the arithmetic priming effect, and the LPC effect (higher positive amplitude for the incongruent compared to the congruent condition), associated with a reevaluation process and modulated by the plausibility of the presented condition. This study aimed to (a) compare arithmetic processing between children with dyscalculia and children with good academic performance (GAP) using ERPs during an addition verification task and (b) explore, among children with dyscalculia, the relationship between WM and ERP effects.
Materials and Methods: EEGs of 22 children with dyscalculia (DYS group) and 22 children with GAP (GAP group) were recorded during the performance of an addition verification task. ERPs synchronized with the probe stimulus were computed separately for the congruent and incongruent probes, including only epochs with correct answers. Mixed two-way ANOVAs for response times and correct answers were conducted. Comparisons between groups and correlation analyses using ERP amplitude data were carried out through multivariate nonparametric permutation tests.
Results: The GAP group obtained more correct answers than the DYS group. An arithmetic N400 effect was observed in the GAP group but not in the DYS group. Both groups displayed an LPC effect. The larger the LPC amplitude, the higher the WM index. Two subgroups were found within the DYS group: one with an average WM index and the other with a below-average WM index. These subgroups displayed different ERP patterns.
Discussion: The results indicated that the group of children with dyscalculia was very heterogeneous and therefore failed to show a robust LPC effect. Some of these children had WM deficits. When WM deficits were considered together with dyscalculia, an atypical ERP pattern that reflected their processing difficulties emerged. Their lack of the arithmetic N400 effect suggested that the processing at this step was not useful enough to produce an answer; thus, it was necessary to reevaluate the arithmetic-calculation process (LPC) in order to deliver a correct answer.
Conclusion: Given that dyscalculia is a very heterogeneous deficit, studies examining dyscalculia should consider exploring deficits in WM, because the whole group of children with dyscalculia seems to contain at least two subpopulations that differ in their calculation process.

