Falsification

Author(s):  
Brian D. Earp

This chapter evaluates falsification. Contemporary philosophers of science tend to look down on falsifiability as overly simplistic. Nevertheless, among many practising scientists the notion is still regarded as a useful, if imperfect, heuristic for judging the strength of a hypothesis in terms of its ability to generate new insights when combined with careful observation. Falsification also relates to self-correction in science. Erroneous findings often make their way into the literature. If subsequent researchers conduct the same experiment as the original and it fails to yield the same finding, they are often described as having 'falsified' the original result, that is, shown it to be incorrect. In this way, mistakes, false alarms, and other non-reproducible outputs are thought to be identifiable and therefore correctable. Self-correction in science through falsification requires 'direct' replications. The chapter then considers the importance of auxiliary assumptions.

2019, Vol 30 (3), pp. 157-168
Author(s):  
Helmut Hildebrandt, Jana Schill, Jana Bördgen, Andreas Kastrup, Paul Eling

Abstract. This article explores the possibility of differentiating between patients suffering from Alzheimer's disease (AD) and patients with other kinds of dementia by focusing on false alarms (FAs) in a picture recognition task (PRT). In Study 1, we compared AD and non-AD patients on the PRT and found that FAs discriminated well between these groups. Study 2 aimed to improve the discriminatory power of the FA score on the PRT by adding associated pairs. Here, too, the FA score differentiated well between AD and non-AD patients, though the discriminatory power did not improve. The findings suggest that AD patients show a liberal response bias. Taken together, these studies suggest that FAs in picture recognition are of major importance for the clinical diagnosis of AD.
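The 'liberal response bias' finding can be made concrete with standard signal detection theory, which separates discriminability (d') from response bias (the criterion c). The sketch below is a generic illustration, not the authors' analysis; the example hit and false-alarm rates are invented.

```python
from statistics import NormalDist

def sdt_measures(hit_rate: float, fa_rate: float) -> tuple[float, float]:
    """Return (d_prime, criterion_c) from hit and false-alarm rates.

    d' = z(H) - z(FA) measures discriminability;
    c  = -(z(H) + z(FA)) / 2 measures response bias, with c < 0
    indicating a liberal bias (a tendency to respond 'old'/'seen').
    """
    z = NormalDist().inv_cdf
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -(z(hit_rate) + z(fa_rate)) / 2
    return d_prime, criterion

# Invented rates for illustration: a high FA rate with ordinary hits
# yields c < 0, i.e. the liberal bias attributed to AD patients.
print(sdt_measures(hit_rate=0.85, fa_rate=0.40))  # d' ~ 1.29, c ~ -0.39
```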


2006
Author(s):  
Stephen R. Dixon, Christopher D. Wickens, Jason S. McCarley

2013
Author(s):  
Angelica Szani, Katherine Bowers, Lucienne Pereira-Pasarin, Marianne E. Lloyd

Sensors, 2021, Vol 21 (5), pp. 1643
Author(s):  
Ming Liu, Shichao Chen, Fugang Lu, Mengdao Xing, Jingbiao Wei

For target detection in complex scenes of synthetic aperture radar (SAR) images, false alarms in land areas are hard to eliminate, especially near the coastline. To address this problem, an algorithm based on the fusion of multiscale superpixel segmentations is proposed in this paper. First, the SAR image is partitioned at several superpixel segmentation scales. For the superpixels at each scale, land-sea segmentation is achieved by judging their statistical properties. Then, the land-sea segmentation result obtained at each scale is combined with the result of a constant false alarm rate (CFAR) detector to eliminate the false alarms located in the land areas of the SAR image. Finally, to enhance the robustness of the proposed algorithm, the detection results obtained at the different scales are fused to produce the final target detection. Experimental results on real SAR images verify the effectiveness of the proposed algorithm.
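As a rough illustration of how the CFAR stage and the fused land-sea segmentation interact, here is a minimal cell-averaging CFAR sketch in Python. It assumes exponentially distributed clutter and a precomputed boolean `sea_mask` standing in for the fused multiscale segmentation; it is not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def ca_cfar(power: np.ndarray, sea_mask: np.ndarray,
            guard: int = 2, train: int = 8, pfa: float = 1e-4) -> np.ndarray:
    """Cell-averaging CFAR on a SAR power image, restricted to sea pixels.

    power    : non-negative intensity image
    sea_mask : boolean array, True where the (fused) land-sea
               segmentation labels a pixel as sea
    """
    big = 2 * (guard + train) + 1      # outer window edge length
    small = 2 * guard + 1              # guard window edge length
    n = big**2 - small**2              # number of training cells
    # Mean clutter power in the training ring around each cell.
    ring_mean = (uniform_filter(power, big) * big**2
                 - uniform_filter(power, small) * small**2) / n
    # CA-CFAR scaling factor for exponentially distributed clutter.
    alpha = n * (pfa ** (-1.0 / n) - 1.0)
    detections = power > alpha * ring_mean
    # Masking with the land-sea segmentation removes land false alarms.
    return detections & sea_mask
```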


2021, Vol 11 (9), pp. 3763
Author(s):  
Yunlong Zou, Jinyu Zhao, Yuanhao Wu, Bin Wang

Space object recognition in high Earth orbits (between 2,000 km and 36,000 km) is affected by moonlight and clouds, resulting in bright or saturated image areas and uneven image backgrounds. It is difficult to separate dim objects from such complex backgrounds with gray thresholding alone. In this paper, we present a segmentation method for star images with complex backgrounds based on the correlation between space objects and one-dimensional (1D) Gaussian morphology, shifting the focus from gray thresholding to correlation thresholding. We fit 1D Gaussian functions to groups of five consecutive image columns under a minimum mean-square-error rule, and the correlation coefficients between the column data and the fitted functions are used to extract objects and stars. Lateral correlation is then repeated around the identified objects and stars to recover their complete outlines, and false alarms are removed by thresholding two quantities: the standard deviation, and the ratio of mean squared error to variance. We analyze the selection process for each threshold, and experimental results demonstrate that the proposed correlation segmentation method has clear advantages in complex backgrounds, making it attractive for object detection and tracking on cloudy, brightly moonlit nights.
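A minimal sketch of the correlation-thresholding idea: fit a 1D Gaussian to a group of five consecutive columns by a coarse minimum-MSE search, then use the resulting correlation coefficient, rather than a gray threshold, to decide whether the group contains a star or object. The grid of candidate centres and widths, and the 0.8 threshold, are illustrative assumptions, not values from the paper.

```python
import numpy as np

def column_group_correlation(cols: np.ndarray) -> float:
    """Correlate a group of image columns with its best-fitting 1D Gaussian.

    cols : (rows, 5) array of five consecutive image columns.
    Returns the Pearson correlation between the mean column profile and
    the 1D Gaussian that minimises mean squared error over a coarse grid,
    mimicking the paper's minimum-MSE rule.
    """
    profile = cols.mean(axis=1)
    x = np.arange(profile.size)
    best_r, best_err = 0.0, np.inf
    for mu in x:                              # candidate centres
        for sigma in (1.0, 2.0, 3.0):         # candidate widths
            g = np.exp(-0.5 * ((x - mu) / sigma) ** 2)
            # least-squares amplitude/offset fit of g to the profile
            a, b = np.polyfit(g, profile, 1)
            err = np.mean((a * g + b - profile) ** 2)
            if err < best_err:
                best_err = err
                best_r = np.corrcoef(g, profile)[0, 1]
    return best_r

# A column group is kept as a star/object candidate when its correlation
# exceeds a threshold (correlation thresholding, not gray thresholding).
# is_candidate = column_group_correlation(cols) > 0.8  # threshold illustrative
```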


2021, Vol 11 (1)
Author(s):  
Fintan Nagle, Alan Johnston

Abstract. Encoding and recognising complex natural sequences provides a challenge for human vision. We found that observers could recognise a previously presented segment of a video of a hearth fire when it was embedded in a longer sequence. Recognition performance declined when the test video was spatially inverted, but not when it was hue-reversed or temporally reversed. Sampled motion degraded forwards/reversed playback discrimination, indicating that observers were sensitive to the asymmetric pattern of motion of the flames. For brief targets, performance increased with target length. More generally, performance depended on the relative lengths of the target and the embedding sequence. Increased errors with embedding-sequence length were driven by positive responses to non-target sequences (false alarms) rather than by omissions. Taken together, these observations favour interpreting performance in terms of an incremental decision-making model based on a sequential statistical analysis in which evidence accrues for one of two alternatives. We also suggest that prediction could provide a means of generating and evaluating evidence in such a sequential analysis model.
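The 'sequential statistical analysis in which evidence accrues for one of two alternatives' is essentially Wald's sequential probability ratio test. A minimal sketch, with an invented per-frame evidence stream rather than any model fitted to the authors' data:

```python
import math
import random

def sprt(sample_llr, alpha: float = 0.05, beta: float = 0.05,
         max_steps: int = 1000) -> str:
    """Wald's sequential probability ratio test: accumulate per-frame
    log-likelihood ratios and stop at the first boundary crossing."""
    upper = math.log((1 - beta) / alpha)   # accept 'target seen'
    lower = math.log(beta / (1 - alpha))   # accept 'target absent'
    llr = 0.0
    for _ in range(max_steps):
        llr += sample_llr()
        if llr >= upper:
            return "seen"
        if llr <= lower:
            return "not seen"
    return "undecided"

# Illustrative evidence stream: each frame slightly favours 'seen'.
random.seed(0)
print(sprt(lambda: random.gauss(0.1, 1.0)))
```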


2021, Vol 10 (4), pp. 199
Author(s):  
Francisco M. Bellas Aláez, Jesus M. Torres Palenzuela, Evangelos Spyrakos, Luis González Vilas

This work presents new prediction models based on recent developments in machine learning, such as Random Forest (RF) and AdaBoost, and compares them with more classical approaches, i.e., support vector machines (SVMs) and neural networks (NNs). The models predict Pseudo-nitzschia spp. blooms in the Galician Rias Baixas. This work builds on a previous study by the authors (doi.org/10.1016/j.pocean.2014.03.003) but uses an extended database (from 2002 to 2012) and new algorithms. Our results show that RF and AdaBoost give better predictions than SVMs and NNs, with improved performance metrics and a better balance between sensitivity and specificity. The classical approaches show higher sensitivities, but at the cost of lower specificity and a higher percentage of false alarms (lower precision). These results suggest that the newer algorithms (RF and AdaBoost) adapt better to unbalanced datasets. Our models could be implemented operationally to establish a short-term prediction system.
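A hedged sketch of the kind of comparison reported here, using scikit-learn with default hyperparameters on synthetic unbalanced data (the real study used an extended 2002-2012 oceanographic database and tuned models):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic unbalanced data standing in for the bloom/no-bloom records.
X, y = make_classification(n_samples=2000, n_features=10, weights=[0.9],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

models = {
    "RF": RandomForestClassifier(random_state=0),
    "AdaBoost": AdaBoostClassifier(random_state=0),
    "SVM": SVC(),
}
for name, model in models.items():
    y_hat = model.fit(X_tr, y_tr).predict(X_te)
    tn, fp, fn, tp = confusion_matrix(y_te, y_hat).ravel()
    sens, spec = tp / (tp + fn), tn / (tn + fp)  # low specificity = more FAs
    print(f"{name}: sensitivity={sens:.2f} specificity={spec:.2f}")
```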


2021, Vol 28 (2)
Author(s):  
Sebastian Nielebock, Robert Heumüller, Kevin Michael Schott, Frank Ortmeier

Abstract. Lack of experience, inadequate documentation, and sub-optimal API design frequently cause developers to make mistakes when re-using third-party implementations. Such API misuses can result in unintended behavior, performance losses, or software crashes. Current research therefore aims to detect such misuses automatically by comparing the way a developer used an API to previously inferred patterns of correct API usage. While research has made significant progress, these techniques have not yet been adopted in practice, in part because no process exists that integrates seamlessly with software development workflows. In particular, existing approaches do not consider how to collect the source code samples from which patterns are inferred. An inadequate collection can cause API usage pattern miners to infer irrelevant patterns, which leads to false alarms instead of true API misuses. In this paper, we target this problem (a) by providing a method that increases the likelihood of finding relevant and true-positive patterns for a given set of code changes, agnostic to the concrete static, intra-procedural mining technique, and (b) by introducing a concept for just-in-time API misuse detection, which analyzes changes at the time of commit. In particular, we introduce different lightweight code search and filtering strategies and evaluate them on two real-world API misuse datasets to determine their usefulness in finding relevant intra-procedural API usage patterns. Our main results are that (1) commit-based search with subsequent filtering effectively decreases the amount of code to be analyzed, (2) method-level filtering is superior to file-level filtering, (3) project-internal and project-external code search find solutions for different types of misuses and are thus complementary, and (4) incorporating prior knowledge of the misused API into the search has a negligible effect.
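As an illustration of the commit-based, method-level filtering idea, here is a deliberately crude Python sketch that lists identifiers touched by a commit's added lines; a real implementation would resolve methods through an AST parser. The regex heuristic and function name are assumptions for illustration only.

```python
import re
import subprocess

def changed_identifiers(commit: str, repo: str = ".") -> set[str]:
    """Crude method-level filter: collect identifiers that appear before
    '(' on a commit's added lines, as a proxy for the methods and API
    calls the change touches. Hypothetical helper, not the paper's tool."""
    diff = subprocess.run(
        ["git", "-C", repo, "show", "--unified=0", commit],
        capture_output=True, text=True, check=True).stdout
    names: set[str] = set()
    for line in diff.splitlines():
        if line.startswith("+") and not line.startswith("+++"):
            names.update(re.findall(r"\b(\w+)\s*\(", line))
    return names

# Only code related to these identifiers would be handed to the pattern
# miner, shrinking both the search space and the false-alarm surface.
# print(changed_identifiers("HEAD"))
```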

