Detecting hydrological connectivity using causal inference from time-series: synthetic and real karstic study cases

2021 ◽  
Author(s):  
Damien Delforge ◽  
Olivier de Viron ◽  
Marnik Vanclooster ◽  
Michel Van Camp ◽  
Arnaud Watlet

Abstract. We investigate the potential of causal inference methods (CIMs) to reveal hydrological connections from time-series. Four CIMs are selected according to two criteria: linear or nonlinear, and bivariate or multivariate. A priori, multivariate and nonlinear CIMs are best suited for revealing hydrological connections because they can capture nonlinear processes and account for confounding factors such as rainfall, evapotranspiration, or seasonality. The four methods are applied to a synthetic case and a real karstic study case. The synthetic experiment indicates that, unlike the other methods, the multivariate nonlinear framework has a low false-positive rate and allows for ruling out a connection between two disconnected reservoirs forced with similar effective precipitation. However, the multivariate nonlinear method appears unstable when applied to real cases, making the overall meaning of the causal links uncertain. Nevertheless, all CIMs bring valuable insights into the system’s dynamics, making them a cost-effective and recommendable tool for exploring data. Still, causal inference remains bound to subjective choices and operational constraints made while building the dataset or constraining the analysis. As a result, the robustness of the conclusions that the CIMs can draw deserves to be questioned, especially with real and imperfect data. Therefore, alongside research perspectives, we encourage a flexible, informed, and limit-aware use of CIMs, without omitting any other approach that aims at the causal understanding of a system.
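
The abstract does not name the specific CIMs used, but a bivariate linear analysis of the kind it contrasts with the multivariate nonlinear framework can be illustrated with a Granger-causality test. The sketch below is only an assumption-laden example: the synthetic rainfall and discharge series, the exponential recession kernel, and the lag choice are all invented for illustration, and statsmodels' Granger test is not necessarily the method used in the paper.

```python
# Illustrative sketch of a bivariate linear causality test between two
# hypothetical karst time series (synthetic rainfall forcing a spring discharge).
# This is not the paper's actual CIM; it only shows the bivariate linear idea.
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(42)
n = 500
rain = rng.gamma(shape=2.0, scale=1.0, size=n)                     # synthetic forcing
kernel = np.exp(-np.arange(10) / 3.0)                              # simple recession response
discharge = np.convolve(rain, kernel, mode="full")[:n]
discharge += 0.1 * rng.standard_normal(n)                          # observation noise

data = pd.DataFrame({"discharge": discharge, "rain": rain})
# Tests whether lagged 'rain' (second column) helps predict 'discharge' (first column).
results = grangercausalitytests(data[["discharge", "rain"]], maxlag=5)
```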

2005 ◽  
Vol 129 (12) ◽  
pp. 1575-1584 ◽  
Author(s):  
Rose C. Anton ◽  
Thomas M. Wheeler

Abstract Context.—Preoperative fine-needle aspiration of thyroid lesions has greatly diminished the need for surgical evaluation. However, because thyroid nodules are common lesions, many still require surgical intervention and represent a substantial number of cases that the pathologist encounters in the frozen section laboratory. Objective.—Comprehensive reviews of frozen section indications, as well as gross, cytologic, and histologic features of the most common and diagnostically important thyroid and parathyroid lesions, are presented to provide a guideline for proper triage and management of these cases in the frozen section laboratory. The most common pitfalls are discussed in an attempt to avoid discordant diagnoses. Data Sources.—Thyroid lobectomy, subtotal or total thyroidectomy, and parathyroid biopsy or parathyroidectomy cases are included in this review. Conclusions.—The frozen section evaluation of thyroid and parathyroid lesions remains a highly accurate procedure with a low false-positive rate. Gross inspection, complemented by cytologic and histologic review, provides the surgeon with the rapid, reliable, cost-effective information necessary for optimum patient care.


2020 ◽  
Vol 47 (10) ◽  
pp. 749-756
Author(s):  
José A. Sainz ◽  
María R. Torres ◽  
Ignacio Peral ◽  
Reyes Granell ◽  
Manuel Vargas ◽  
...  

Introduction: Contingent cell-free (cf) DNA screening on the basis of first-trimester combined test (FCT) results has emerged as a cost-effective strategy for screening of trisomy 21 (T21). Objectives: To assess the performance, patient uptake, and cost of contingent cfDNA screening and to compare them with those of the established FCT. Methods: This is a prospective cohort study including all singleton pregnancies attending their FCT for screening of T21 at 2 university hospitals in southern Spain. When the FCT risk was ≥1:50, there were major fetal malformations, or the nuchal translucency was ≥3.5 mm, women were recommended invasive testing (IT); if the risk was between 1:50 and 1:270, women were recommended cfDNA testing; and for risks below 1:270, no further testing was recommended. Detection rate (DR), false-positive rate (FPR), patient uptake, and associated costs were evaluated. Results: We analyzed 10,541 women, including 46 T21 cases. The DR of our contingent strategy was 89.1% (41/46) at a 1.4% (146/10,541) FPR. Uptake of cfDNA testing was 91.2% (340/373), and the overall IT rate was 2.0%. The total cost of our strategy was €1,462,895.7, similar to the €1,446,525.7 had cfDNA testing not been available. Conclusions: Contingent cfDNA screening shows a high DR, low IT rate, and high uptake at a cost similar to that of traditional screening.
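
The contingent triage rule described in the Methods can be written down directly. The sketch below is a minimal rendering of that rule, with risks expressed as 1:x so that a numerically larger denominator means lower risk; the function and argument names are hypothetical.

```python
# Minimal sketch of the contingent triage rule described in the abstract.
# Risks are expressed as 1:x, so a larger denominator means a lower risk.
def triage(fct_risk_denominator: float,
           major_malformation: bool = False,
           nuchal_translucency_mm: float = 0.0) -> str:
    """Return the recommended follow-up for one pregnancy."""
    if (fct_risk_denominator <= 50
            or major_malformation
            or nuchal_translucency_mm >= 3.5):
        return "invasive testing"            # risk >= 1:50, malformation, or NT >= 3.5 mm
    if fct_risk_denominator <= 270:
        return "cfDNA testing"               # risk between 1:50 and 1:270
    return "no further testing"              # risk below 1:270

assert triage(40) == "invasive testing"
assert triage(150) == "cfDNA testing"
assert triage(5000) == "no further testing"
```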


2019 ◽  
Vol 489 (2) ◽  
pp. 2117-2129 ◽  
Author(s):  
Paul J Morris ◽  
Nachiketa Chakraborty ◽  
Garret Cotter

ABSTRACT Time-series analysis allows for the determination of the Power Spectral Density (PSD) and Probability Density Function (PDF) for astrophysical sources. The former illustrates the distribution of power at various time-scales, typically taking a power-law form, while the latter characterizes the distribution of the underlying stochastic physical processes, with Gaussian and lognormal functional forms both physically motivated. In this paper, we use artificial time series generated using the prescription of Timmer & Koenig to investigate connections between the PDF and PSD. PDFs calculated for these artificial light curves are less likely to be well described by a Gaussian functional form for steep (Γ ≳ 1) PSD indices due to weak non-stationarity. Using the Fermi LAT monthly light curve of the blazar PKS 2155-304 as an example, we prescribe and calculate a false positive rate that indicates how likely the PDF is to be attributed an incorrect functional form. Here, we generate large numbers of artificial light curves with intrinsically normally distributed PDFs and with statistical properties consistent with observations. These are used to evaluate the probabilities that either Gaussian or lognormal functional forms better describe the PDF. We use this prescription to show that PKS 2155-304 requires a high prior probability of having a normally distributed PDF, P(G) ≥ 0.82, for the calculated PDF to prefer a Gaussian functional form over a lognormal one. We present possible choices of prior and evaluate the probability that PKS 2155-304 has a lognormally distributed PDF for each.
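
As a rough illustration of the workflow the abstract describes, the sketch below simulates a light curve with a power-law PSD following the Timmer & Koenig (1995) prescription and then compares Gaussian and lognormal descriptions of its flux PDF by maximum likelihood. The slope, length, and shifting of the series are arbitrary choices, and the simple likelihood comparison stands in for, but does not reproduce, the paper's false-positive-rate and prior-probability calculations.

```python
import numpy as np
from scipy import stats

def timmer_koenig(n, slope, dt=1.0, rng=None):
    """Simulate a time series with PSD ~ f**(-slope) (Timmer & Koenig 1995)."""
    rng = rng or np.random.default_rng()
    freqs = np.fft.rfftfreq(n, d=dt)
    amp = np.zeros_like(freqs)
    amp[1:] = freqs[1:] ** (-slope / 2.0)          # sqrt of the power spectrum
    re = rng.standard_normal(len(freqs)) * amp
    im = rng.standard_normal(len(freqs)) * amp
    im[0] = 0.0                                    # DC term is real
    if n % 2 == 0:
        im[-1] = 0.0                               # Nyquist term must be real
    return np.fft.irfft(re + 1j * im, n=n)

lc = timmer_koenig(n=4096, slope=1.5)

# Compare Gaussian and lognormal fits to the flux PDF by total log-likelihood.
shifted = lc - lc.min() + 1e-6                     # lognormal requires positive values
ll_gauss = stats.norm.logpdf(shifted, *stats.norm.fit(shifted)).sum()
ll_lognorm = stats.lognorm.logpdf(shifted, *stats.lognorm.fit(shifted, floc=0)).sum()
print("Gaussian preferred" if ll_gauss > ll_lognorm else "lognormal preferred")
```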


2019 ◽  
Vol 623 ◽  
pp. A39 ◽  
Author(s):  
Michael Hippke ◽  
René Heller

We present a new method to detect planetary transits from time-series photometry, the transit least squares (TLS) algorithm. TLS searches for transit-like features while taking the stellar limb darkening and the planetary ingress and egress into account. We have optimized TLS for both the signal detection efficiency (SDE) of small planets and computational speed. TLS analyses the entire, unbinned phase-folded light curve. We compensated for the higher computational load by (i) using algorithms such as “Mergesort” (for the trial orbital phases) and by (ii) restricting the trial transit durations to a smaller range that encompasses all known planets, and using stellar density priors where available. A typical K2 light curve, including 80 d of observations at a cadence of 30 min, can be searched with TLS in ∼10 s of real time on a standard laptop computer, as fast as the widely used box least squares (BLS) algorithm. We perform a transit injection-and-retrieval experiment of Earth-sized planets around Sun-like stars using synthetic light curves with 110 ppm white noise per 30 min cadence, corresponding to a photometrically quiet Kp = 12 star observed with Kepler. We determine the SDE thresholds for both BLS and TLS to reach a false positive rate of 1% to be SDE = 7 in both cases. The resulting true positive (or recovery) rates are ∼93% for TLS and ∼76% for BLS, implying more reliable detections with TLS. We also test TLS on the K2 light curve of the TRAPPIST-1 system and find six of the seven Earth-sized planets using an iterative search for increasingly lower signal detection efficiency; the phase-folded transit of the seventh planet is affected by a stellar flare. TLS is more reliable than BLS in finding any kind of transiting planet, but it is particularly suited for the detection of small planets in long time series from Kepler, TESS, and PLATO. We make our Python implementation of TLS publicly available.
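
The abstract states that the authors' Python implementation is public (the transitleastsquares package), and that SDE = 7 corresponds to a ~1% false positive rate in their injection test. The snippet below is a hedged usage sketch along those lines: the input files are hypothetical placeholders, and the attribute names follow the package documentation as best recalled, so they should be verified against the installed version.

```python
# Hedged usage sketch of the public TLS package (transitleastsquares).
# File names are hypothetical; attribute names (SDE, period) should be checked
# against the installed package version.
import numpy as np
from transitleastsquares import transitleastsquares

t = np.load("time.npy")        # hypothetical detrended K2-like light curve: time in days
flux = np.load("flux.npy")     # normalized flux

model = transitleastsquares(t, flux)
results = model.power()        # period search over trial periods and durations

SDE_THRESHOLD = 7              # ~1% false positive rate per the paper's injection test
if results.SDE >= SDE_THRESHOLD:
    print(f"Candidate transit at P = {results.period:.4f} d (SDE = {results.SDE:.1f})")
else:
    print("No significant transit signal")
```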


2019 ◽  
Vol 128 (4) ◽  
pp. 970-995
Author(s):  
Rémy Sun ◽  
Christoph H. Lampert

Abstract We study the problem of automatically detecting whether a given multi-class classifier operates outside of its specifications (out-of-specs), i.e. on input data from a different distribution than the one it was trained for. This is an important problem to solve on the road towards creating reliable computer vision systems for real-world applications, because the quality of a classifier’s predictions cannot be guaranteed if it operates out-of-specs. Previously proposed methods for out-of-specs detection make decisions on the level of single inputs. This, however, is insufficient to achieve low false positive rates and high true positive rates at the same time. In this work, we describe a new procedure named KS(conf), based on statistical reasoning. Its main component is a classical Kolmogorov–Smirnov test that is applied to the set of predicted confidence values for batches of samples. Working with batches instead of single samples allows increasing the true positive rate without negatively affecting the false positive rate, thereby overcoming a crucial limitation of single-sample tests. We show by extensive experiments using a variety of convolutional network architectures and datasets that KS(conf) reliably detects out-of-specs situations even under conditions where other tests fail. It furthermore has a number of properties that make it an excellent candidate for practical deployment: it is easy to implement, adds almost no overhead to the system, works with any classifier that outputs confidence scores, and requires no a priori knowledge about how the data distribution could change.
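
The core idea, a Kolmogorov–Smirnov test on batches of predicted confidence values, can be sketched with a two-sample KS test against confidences collected on in-specs validation data. This is only an approximation of KS(conf) under assumed inputs; the actual procedure's calibration and threshold details are not reproduced, and the toy distributions below are invented.

```python
import numpy as np
from scipy.stats import ks_2samp

def out_of_specs(batch_confidences, val_confidences, alpha=0.01):
    """Flag a batch as out-of-specs if its confidence distribution differs
    significantly from the validation-set confidences (two-sample KS test)."""
    stat, p_value = ks_2samp(batch_confidences, val_confidences)
    return p_value < alpha

# Toy example: in-specs validation confidences vs. a shifted deployment batch.
rng = np.random.default_rng(0)
val = rng.beta(8, 2, size=5000)            # typical high confidences on in-specs data
batch = rng.beta(4, 4, size=200)           # distribution shift at deployment
print(out_of_specs(batch, val))            # expected: True (batch flagged)
```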


2009 ◽  
Vol 2009 ◽  
pp. 1-11 ◽  
Author(s):  
Lourens Waldorp

As a consequence of misspecification of the hemodynamic response and noise variance models, tests on general linear model coefficients are not valid. Robust estimation of the variance of the general linear model (GLM) coefficients in fMRI time series is therefore essential. In this paper an alternative method to estimate the variance of the GLM coefficients accurately is suggested and compared to other methods. The alternative, referred to as the sandwich, is based primarily on the fact that the time series are obtained from multiple exchangeable stimulus presentations. The analytic results show that the sandwich is unbiased. Using this result, it is possible to obtain an exact statistic that maintains the 5% false positive rate. Extensive Monte Carlo simulations show that the sandwich is robust against misspecification of the autocorrelations and of the hemodynamic response model. The sandwich is, in many circumstances, robust, computationally efficient, and flexible with respect to correlation structures across the brain. In contrast, the smoothing approach can be robust only to a certain extent, and only when the smoothing parameter is chosen with specific knowledge of the circumstances.
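
For orientation, the textbook form of a sandwich variance estimate for GLM coefficients is sketched below. This is the generic White-type estimator under an invented toy design, not the paper's fMRI-specific version, which additionally pools information over repeated, exchangeable stimulus presentations.

```python
import numpy as np

def ols_sandwich(X, y):
    """OLS coefficients with a generic (White-type) sandwich covariance:
    (X'X)^-1 X' diag(e^2) X (X'X)^-1."""
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    resid = y - X @ beta
    meat = X.T @ (X * resid[:, None] ** 2)      # X' diag(e_i^2) X
    cov_beta = XtX_inv @ meat @ XtX_inv         # bread * meat * bread
    return beta, cov_beta

# Toy design: intercept plus a boxcar-like regressor, with heteroscedastic noise.
rng = np.random.default_rng(1)
n = 200
X = np.column_stack([np.ones(n), np.tile([0, 0, 1, 1], n // 4)])
y = X @ np.array([1.0, 0.5]) + rng.standard_normal(n) * (1 + X[:, 1])
beta, cov = ols_sandwich(X, y)
print(beta, np.sqrt(np.diag(cov)))              # coefficients and robust standard errors
```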


2007 ◽  
Vol 23 (2) ◽  
pp. 192-204 ◽  
Author(s):  
Ingolf Griebsch ◽  
Rachel L. Knowles ◽  
Jacqueline Brown ◽  
Catherine Bull ◽  
Christopher Wren ◽  
...  

Objectives: Congenital heart defects (CHD) are an important cause of death and morbidity in early childhood, but the effectiveness of alternative newborn screening strategies in preventing the collapse or death—before diagnosis—of infants with treatable but life-threatening defects is uncertain. We assessed their effectiveness and efficiency to inform policy and research priorities. Methods: We compared the effectiveness of clinical examination alone and clinical examination with either pulse oximetry or screening echocardiography in making a timely diagnosis of life-threatening CHD or in diagnosing clinically significant CHD. We contrasted their cost-effectiveness, using a decision-analytic model based on 100,000 live births, and assessed future research priorities using value-of-information analysis. Results: Clinical examination alone, pulse oximetry, and screening echocardiography achieved 34.0, 70.6, and 71.3 timely diagnoses per 100,000 live births, respectively. This corresponds to an additional cost per additional timely diagnosis of £4,894 for pulse oximetry and £4,496,666 for screening echocardiography. The equivalent costs for clinically significant CHD are £1,489 and £36,013, respectively. Key determinants of cost-effectiveness are detection rates and screening test costs. The false-positive rate is very high with screening echocardiography (5.4 percent), but lower with pulse oximetry (1.3 percent) or clinical examination alone (0.5 percent). Conclusions: Adding pulse oximetry to clinical examination is likely to be a cost-effective newborn screening strategy for CHD, but further research is required before this policy can be recommended. Screening echocardiography is unlikely to be cost-effective unless the detection of all clinically significant CHD is considered beneficial and a 5 percent false-positive rate is acceptable.
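
The incremental cost per additional timely diagnosis reported above is a ratio of extra programme cost to extra diagnoses gained. A minimal sketch of that arithmetic is given below; the timely-diagnosis counts come from the abstract, while the extra-cost inputs are arbitrary placeholders, since the abstract reports only the resulting ratios.

```python
def incremental_cost_per_diagnosis(extra_cost_gbp: float,
                                   diagnoses_new: float,
                                   diagnoses_base: float) -> float:
    """Extra programme cost divided by extra timely diagnoses gained
    (per 100,000 live births)."""
    return extra_cost_gbp / (diagnoses_new - diagnoses_base)

# Timely diagnoses per 100,000 live births, from the abstract.
clinical, oximetry, echo = 34.0, 70.6, 71.3

# The extra-cost figures below are hypothetical placeholders, not study data.
print(incremental_cost_per_diagnosis(100_000, oximetry, clinical))    # oximetry vs clinical
print(incremental_cost_per_diagnosis(1_000_000, echo, clinical))      # echo vs clinical
```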


2019 ◽  
Vol 485 (5) ◽  
pp. 558-563
Author(s):  
V. F. Kravchenko ◽  
V. I. Ponomaryov ◽  
V. I. Pustovoit ◽  
E. Rendon-Gonzalez

A new computer-aided detection (CAD) system for lung nodule detection and selection in computed tomography scans is proposed and implemented. The method consists of the following stages: preprocessing based on thresholding and morphological filtration, the formation of suspicious regions of interest using a priori information, the detection of lung nodules by applying the fractal dimension transformation, the computation of informative texture features for the identified lung nodules, and their classification with the SVM and AdaBoost algorithms. A physical interpretation of the proposed CAD system is given, and its block diagram is constructed. The simulation results demonstrate the advantages of the new approach in terms of standard criteria such as sensitivity and the false-positive rate.
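
The final classification stage of such a pipeline can be sketched with scikit-learn's SVM and AdaBoost classifiers, as below. The feature matrix here is random placeholder data standing in for the paper's fractal and texture features; the preprocessing, region-of-interest formation, and fractal-dimension steps are not reproduced.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# X: one row of texture features per candidate region; y: 1 = nodule, 0 = non-nodule.
# Random placeholders stand in for the paper's fractal/texture features.
rng = np.random.default_rng(0)
X = rng.standard_normal((400, 12))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.standard_normal(400) > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
ada = AdaBoostClassifier(n_estimators=100, random_state=0)
for name, clf in [("SVM", svm), ("AdaBoost", ada)]:
    clf.fit(X_tr, y_tr)
    pred = clf.predict(X_te)
    sensitivity = (pred[y_te == 1] == 1).mean()
    false_positive_rate = (pred[y_te == 0] == 1).mean()
    print(f"{name}: sensitivity={sensitivity:.2f}, FPR={false_positive_rate:.2f}")
```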


PeerJ ◽  
2021 ◽  
Vol 9 ◽  
pp. e10861
Author(s):  
Hyemin Han

Previous research has shown the potential value of Bayesian methods in fMRI (functional magnetic resonance imaging) analysis. For instance, the results from Bayes factor-applied second-level fMRI analysis showed a higher hit rate compared with frequentist second-level fMRI analysis, suggesting greater sensitivity. Although the method reported more positives as a result of the higher sensitivity, it was able to maintain a reasonable level of selectivity in terms of the false positive rate. Moreover, employment of the multiple comparison correction method to update the default prior distribution significantly improved the performance of Bayesian second-level fMRI analysis. However, previous studies have utilized the default prior distribution and did not consider the nature of each individual study. Thus, in the present study, a method to adjust the Cauchy prior distribution based on a priori information, which can be acquired from the results of relevant previous studies, was proposed and tested. The Cauchy prior distribution was adjusted based on the contrast, noise strength, and proportion of true positives estimated from a meta-analysis of relevant previous studies. Both simulated images and real contrast images from two previous studies were used to evaluate the performance of the proposed method. The results showed that employment of the prior adjustment method improved the performance of Bayesian second-level fMRI analysis.
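
A voxelwise Bayes-factor test with an adjustable Cauchy prior scale can be sketched with pingouin's JZS Bayes factor for a one-sample t-test, as below. The prior scale used here is an arbitrary example and the data are simulated; the paper's actual adjustment rule (derived from contrast, noise strength, and true-positive proportion estimates) is not reproduced, and the pingouin call should be checked against the installed version.

```python
# Hedged sketch: voxelwise one-sample Bayes-factor test with an adjusted Cauchy
# prior scale r. The value r = 0.5 is an arbitrary example, not the paper's rule.
import numpy as np
from scipy import stats
import pingouin as pg

def voxelwise_bf(contrast_images: np.ndarray, prior_scale: float = 0.5) -> np.ndarray:
    """contrast_images: (n_subjects, n_voxels) second-level contrast values.
    Returns one BF10 per voxel from a one-sample t-test against zero."""
    n_subjects, _ = contrast_images.shape
    t_vals = stats.ttest_1samp(contrast_images, popmean=0.0, axis=0).statistic
    return np.array([pg.bayesfactor_ttest(t, nx=n_subjects, r=prior_scale)
                     for t in t_vals])

rng = np.random.default_rng(0)
data = rng.standard_normal((20, 100)) + 0.3      # weak simulated effect in every voxel
bf10 = voxelwise_bf(data, prior_scale=0.5)
print((bf10 > 3).mean())                         # fraction of voxels with BF10 > 3
```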

