A data-driven post-filter design based on spatially and temporally smoothed a priori SNR

Author(s):  
Huajun Yu ◽  
Tim Fingscheidt


Geophysics ◽  
1998 ◽  
Vol 63 (3) ◽  
pp. 1053-1061 ◽  
Author(s):  
Margaret J. Eppstein ◽  
David E. Dougherty

We propose a practical new method for 3-D traveltime tomography. The method combines an efficient approximation to the extended Kalman filter for rapid, accurate, nonlinear tomography, with the concept of data‐driven zonation, in which the dimensionality and geometry of the parameterization are dynamically determined using cluster analysis and region merging by random field union. The Bayesian filter uses geostatistics as it recursively incorporates measurements in an optimal (minimum‐variance) manner. Geologic knowledge is introduced through a priori estimates of the parameter field and its spatial covariance. Conditional estimates of the parameter number, geometry, value, and covariance are evolved. An initial decomposition of the 3-D domain into 2-D slices, the simplified filter design, and the data‐driven reduction in parameter dimensionality, all contribute to make the method computationally feasible for large 3-D domains. The method is verified by the inversion of crosswell seismic traveltimes to 3-D estimates of seismic slowness in four synthetic heterogeneous domains. Starting with homogeneous, fully distributed slowness fields, and no knowledge of the true covariance structure, the method is able to accurately and efficiently resolve the structure and values of markedly different domains.
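The minimum-variance (Kalman) measurement update at the core of such a Bayesian filter can be sketched for a linearized traveltime problem d = Gm + noise. The function below is a generic illustration with hypothetical variable names; the authors' efficient approximation, 2-D slice decomposition and zonation logic are omitted:

```python
import numpy as np

def kalman_update(m, P, G, d, R):
    """One minimum-variance measurement update for a linearized
    traveltime problem d = G m + noise (illustrative sketch).

    m : prior slowness estimate, shape (n,)
    P : prior parameter covariance, shape (n, n)
    G : sensitivity (ray-path length) matrix, shape (k, n)
    d : observed traveltimes, shape (k,)
    R : data-noise covariance, shape (k, k)
    """
    S = G @ P @ G.T + R              # innovation covariance
    K = P @ G.T @ np.linalg.inv(S)   # Kalman gain
    m_post = m + K @ (d - G @ m)     # conditional (posterior) mean
    P_post = P - K @ G @ P           # conditional covariance
    return m_post, P_post
```

Recursively applying this update as new traveltimes arrive is what makes the estimate conditional on all measurements seen so far.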


Author(s):  
Laure Fournier ◽  
Lena Costaridou ◽  
Luc Bidaut ◽  
Nicolas Michoux ◽  
Frederic E. Lecouvet ◽  
...  

Abstract Existing quantitative imaging biomarkers (QIBs) are associated with known biological tissue characteristics and follow a well-understood path of technical, biological and clinical validation before incorporation into clinical trials. In radiomics, novel data-driven processes extract numerous visually imperceptible statistical features from the imaging data with no a priori assumptions on their correlation with biological processes. The selection of relevant features (radiomic signature) and incorporation into clinical trials therefore requires additional considerations to ensure meaningful imaging endpoints. Also, the number of radiomic features tested means that power calculations would result in sample sizes impossible to achieve within clinical trials. This article examines how the process of standardising and validating data-driven imaging biomarkers differs from those based on biological associations. Radiomic signatures are best developed initially on datasets that represent diversity of acquisition protocols as well as diversity of disease and of normal findings, rather than within clinical trials with standardised and optimised protocols as this would risk the selection of radiomic features being linked to the imaging process rather than the pathology. Normalisation through discretisation and feature harmonisation are essential pre-processing steps. Biological correlation may be performed after the technical and clinical validity of a radiomic signature is established, but is not mandatory. Feature selection may be part of discovery within a radiomics-specific trial or represent exploratory endpoints within an established trial; a previously validated radiomic signature may even be used as a primary/secondary endpoint, particularly if associations are demonstrated with specific biological processes and pathways being targeted within clinical trials. 
Key Points
• Data-driven processes like radiomics risk false discoveries due to the high dimensionality of the dataset compared to the sample size, making adequate diversity of the data, cross-validation and external validation essential to mitigate the risks of spurious associations and overfitting.
• Use of radiomic signatures within clinical trials requires multistep standardisation of the image acquisition, image analysis and data mining processes.
• Biological correlation may be established after clinical validation but is not mandatory.
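As an illustration of the discretisation step mentioned above, a common radiomics pre-processing choice is fixed-bin-width grey-level discretisation. The function below is a generic sketch; the bin width and origin are arbitrary defaults for illustration, not values from the article:

```python
import numpy as np

def discretise_fixed_bin_width(image, bin_width=25.0, origin=0.0):
    """Fixed-bin-width intensity discretisation (illustrative).

    Maps each voxel intensity to an integer grey level so that
    texture features computed downstream are comparable across
    acquisition protocols with different intensity scales.
    """
    img = np.asarray(image, dtype=float)
    # Grey level 1 covers [origin, origin + bin_width), level 2 the next bin, etc.
    return np.floor((img - origin) / bin_width).astype(int) + 1
```

Feature harmonisation across scanners (e.g. batch-effect correction) would then operate on features computed from these discretised volumes.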


Geosciences ◽  
2021 ◽  
Vol 11 (2) ◽  
pp. 99 ◽  
Author(s):  
Yueqi Gu ◽  
Orhun Aydin ◽  
Jacqueline Sosa

Post-earthquake relief zone planning is a multidisciplinary optimization problem that requires delineating zones so as to minimize the loss of life and property. In this study, we offer an end-to-end workflow to define relief zone suitability and equitable relief service zones for Los Angeles (LA) County. In particular, we address the impact of a tsunami, given LA's high spatial complexity: the clustering of population along the coastline and a complicated inland fault system. We design data-driven earthquake relief zones from a wide variety of inputs, including geological features, population, and public safety. Data-driven zones were generated by solving the p-median problem with the Teitz–Bart algorithm, without any a priori knowledge of optimal relief zones. We define metrics to determine the optimal number of relief zones as part of the proposed workflow. Finally, we measure the impacts of a tsunami in LA County by comparing data-driven relief zone maps for cases with and without a tsunami. Our results show that the impact of the tsunami on the relief zones can extend up to 160 km inland from the study area.
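The p-median step can be sketched with a minimal Teitz–Bart vertex-substitution heuristic: start from a random set of p facilities and repeatedly swap a facility for a non-facility whenever the swap lowers the total assignment distance. The distance matrix, random start and first-improvement swap rule below are a simplified illustration, not the study's GIS implementation:

```python
import numpy as np

def teitz_bart(dist, p, seed=0):
    """Teitz-Bart vertex-substitution heuristic for the p-median problem.

    dist : (n, n) matrix of distances from demand points to candidate sites
    p    : number of facilities (relief zone centres) to select
    Returns the selected site indices (a local optimum under single swaps).
    """
    rng = np.random.default_rng(seed)
    n = dist.shape[0]
    medians = set(rng.choice(n, size=p, replace=False).tolist())

    def cost(selection):
        # Each demand point is served by its nearest selected facility.
        return dist[:, sorted(selection)].min(axis=1).sum()

    improved = True
    while improved:
        improved = False
        for cand in range(n):
            if cand in medians:
                continue
            for m in list(medians):
                trial = (medians - {m}) | {cand}
                if cost(trial) < cost(medians):
                    medians = trial   # accept the first improving swap
                    improved = True
                    break
    return sorted(medians)
```

In the workflow described above, the resulting medians would seed the relief service zones, with each location assigned to its nearest centre.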


2021 ◽  
Author(s):  
Geza Halasz ◽  
Michela Sperti ◽  
Matteo Villani ◽  
Umberto Michelucci ◽  
Piergiuseppe Agostoni ◽  
...  

BACKGROUND Several models have been developed to predict mortality in patients with COVID-19 pneumonia, but only a few have demonstrated sufficient discriminatory capacity. Machine-learning algorithms represent a novel approach to data-driven prediction of clinical outcomes, with advantages over statistical modelling. OBJECTIVE To develop the Piacenza score, a machine-learning-based score, to predict 30-day mortality in patients with COVID-19 pneumonia. METHODS The study comprised 852 patients with COVID-19 pneumonia admitted to the Guglielmo da Saliceto Hospital (Italy) from February to November 2020. The patients' medical history, demographic and clinical data were collected in electronic health records. The overall patient dataset was randomly split into derivation and test cohorts. The score was obtained with the Naïve Bayes classifier and externally validated on 86 patients admitted to Centro Cardiologico Monzino (Italy) in February 2020. Using a forward-search algorithm, six features were identified: age; mean corpuscular haemoglobin concentration; PaO2/FiO2 ratio; temperature; previous stroke; gender. The Brier index was used to evaluate the model's ability to stratify and predict observed outcomes. A user-friendly website, available at https://covid.7hc.tech, was designed and developed to enable fast and easy use of the tool by the end user (i.e., the physician). To support customization of the Piacenza score, we added a personalized version of the algorithm to the website, which computes an optimized mortality risk score for a single patient when some variables used by the Piacenza score are not available. In this case, the Naïve Bayes classifier is re-trained on the same derivation cohort but with a different set of patient characteristics. We also compared the Piacenza score with the 4C score and with a Naïve Bayes algorithm using 14 features chosen a priori.
RESULTS The Piacenza score showed an AUC of 0.78 (95% CI 0.74-0.84, Brier score 0.19) in the internal validation cohort and 0.79 (95% CI 0.68-0.89, Brier score 0.16) in the external validation cohort, an accuracy comparable to that of the 4C score and the Naïve Bayes model with a priori chosen features, which achieved AUCs of 0.78 (95% CI 0.73-0.83, Brier score 0.26) and 0.80 (95% CI 0.75-0.86, Brier score 0.17), respectively. CONCLUSIONS A personalized machine-learning-based score with purely data-driven feature selection is feasible and effective for predicting mortality in patients with COVID-19 pneumonia.
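The forward-search feature selection described above can be sketched as a greedy loop around a Naïve Bayes classifier scored with the Brier index. The minimal Gaussian Naïve Bayes and in-sample scoring below are an illustrative simplification (the study used separate derivation and test cohorts and clinical variables), not the authors' implementation:

```python
import numpy as np

class GaussianNB:
    """Minimal Gaussian Naive Bayes classifier (illustrative sketch)."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.theta_, self.var_, self.prior_ = [], [], []
        for c in self.classes_:
            Xc = X[y == c]
            self.theta_.append(Xc.mean(axis=0))
            self.var_.append(Xc.var(axis=0) + 1e-9)   # variance floor for stability
            self.prior_.append(len(Xc) / len(X))
        return self

    def predict_proba(self, X):
        logp = []
        for mu, var, prior in zip(self.theta_, self.var_, self.prior_):
            loglik = -0.5 * (np.log(2 * np.pi * var) + (X - mu) ** 2 / var)
            logp.append(np.log(prior) + loglik.sum(axis=1))
        logp = np.array(logp).T
        logp -= logp.max(axis=1, keepdims=True)       # stabilise before exp
        p = np.exp(logp)
        return p / p.sum(axis=1, keepdims=True)

def brier(y, p1):
    """Brier score: mean squared error of the predicted probabilities."""
    return np.mean((p1 - y) ** 2)

def forward_search(X, y, max_feats):
    """Greedy forward selection: at each step add the feature that most
    reduces the Brier score of the refitted classifier."""
    chosen = []
    while len(chosen) < max_feats:
        best_j, best_b = None, np.inf
        for j in range(X.shape[1]):
            if j in chosen:
                continue
            cols = chosen + [j]
            p1 = GaussianNB().fit(X[:, cols], y).predict_proba(X[:, cols])[:, 1]
            if brier(y, p1) < best_b:
                best_j, best_b = j, brier(y, p1)
        chosen.append(best_j)
    return chosen
```

Re-training the classifier on a reduced feature set, as the personalized website version does when some variables are missing, amounts to calling `fit` again on the same derivation data with different columns.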


2019 ◽  
Vol 29 ◽  
Author(s):  
S. de Vos ◽  
S. Patten ◽  
E. C. Wit ◽  
E. H. Bos ◽  
K. J. Wardenaar ◽  
...  

Abstract Aims The mechanisms underlying both depressive and anxiety disorders remain poorly understood. One of the reasons for this is the lack of a valid, evidence-based system to classify persons into specific subtypes based on their depressive and/or anxiety symptomatology. In order to do this without a priori assumptions, non-parametric statistical methods seem the optimal choice. Moreover, to define subtypes according to their symptom profiles and inter-relations between symptoms, network models may be very useful. This study aimed to evaluate the potential usefulness of this approach. Methods A large community sample from the Canadian general population (N = 254 443) was divided into data-driven clusters using non-parametric k-means clustering. Participants were clustered according to their (co)variation around the grand mean on each item of the Kessler Psychological Distress Scale (K10). Next, to evaluate cluster differences, semi-parametric network models were fitted in each cluster and node centrality indices and network density measures were compared. Results A five-cluster model was obtained from the cluster analyses. Network density varied across clusters, and was highest for the cluster of people with the lowest K10 severity ratings. In three cluster networks, depressive symptoms (e.g. feeling depressed, restless, hopeless) had the highest centrality. In the remaining two clusters, symptom networks were characterised by a higher prominence of somatic symptoms (e.g. restlessness, nervousness). Conclusion Finding data-driven subtypes based on psychological distress using non-parametric methods can be a fruitful approach, yielding clusters of persons that differ in illness severity as well as in the structure and strengths of inter-symptom relationships.
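The clustering step can be illustrated with a plain Lloyd's k-means over respondents' item-score profiles. This is a generic sketch only (the study clustered (co)variation around the grand mean of the K10 items with a non-parametric variant), with deterministic farthest-first seeding added so the example is reproducible:

```python
import numpy as np

def kmeans(X, k, n_iter=100):
    """Lloyd's k-means with deterministic farthest-first seeding.

    X : (n_respondents, n_items) matrix of item scores
    Returns (labels, centres); labels[i] is the cluster of respondent i.
    """
    # Farthest-first initialisation: start at X[0], then repeatedly take
    # the point farthest from all centres chosen so far.
    centres = [X[0]]
    for _ in range(1, k):
        d = np.min([((X - c) ** 2).sum(axis=1) for c in centres], axis=0)
        centres.append(X[int(np.argmax(d))])
    centres = np.array(centres)

    for _ in range(n_iter):
        # Assign each respondent to the nearest centre.
        labels = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1).argmin(axis=1)
        # Move each centre to the mean of its assigned respondents.
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centres[j] for j in range(k)])
        if np.allclose(new, centres):
            break
        centres = new
    return labels, centres
```

Per-cluster network models (as in the study) would then be fitted separately to the respondents in each resulting cluster.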


2020 ◽  
Vol 8 (1) ◽  
pp. 89-119
Author(s):  
Nathalie Vissers ◽  
Pieter Moors ◽  
Dominique Genin ◽  
Johan Wagemans

Artistic photography is an interesting, but often overlooked, medium within the field of empirical aesthetics. Grounded in an art–science collaboration with art photographer Dominique Genin, this project focused on the relationship between the complexity of a photograph and its aesthetic appeal (beauty, pleasantness, interest). An artistic series of 24 semi-abstract photographs that play with multiple layers, recognisability vs unrecognisability, and complexity was specifically created and selected for the project. A large-scale online study with a broad range of individuals (n = 453, varying in age, gender and art expertise) was set up. Exploratory data-driven analyses revealed two clusters of individuals who responded differently to the photographs. Despite the semi-abstract nature of the photographs, differences seemed to be driven more consistently by the 'content' of a photograph than by its complexity level. No consistent differences were found between clusters in age, gender or art expertise. Together, these results highlight the importance of exploratory, data-driven work in empirical aesthetics to complement and nuance findings from hypothesis-driven studies, as it allows researchers to go beyond a priori assumptions, to explore underlying clusters of participants with different response patterns, and to point towards new avenues for future research. Data and code for the analyses reported in this article can be found at https://osf.io/2fws6/.

