data worth
Recently Published Documents


TOTAL DOCUMENTS: 59 (five years: 15)
H-INDEX: 15 (five years: 2)

2022 ◽  
Vol 65 (1) ◽  
pp. 17-19
Author(s):  
Keith Kirkpatrick

What is your private data worth, to you and to the companies willing to pay you for it?


Author(s):  
Brian C. Brajcich ◽  
Bryan E. Palis ◽  
Ryan McCabe ◽  
Leticia Nogueira ◽  
Daniel J. Boffa ◽  
...  
Keyword(s):  

Ground Water ◽  
2021 ◽  
Author(s):  
Moritz Gosses ◽  
Thomas Wöhling
Keyword(s):  

2021 ◽  
Author(s):  
Cécile Coulon ◽  
Alexandre Pryet ◽  
Jean-Michel Lemieux

<p>In coastal areas, seawater intrusion is a main driver of groundwater salinization, and numerical models are widely used to support sustainable groundwater management. Sharp interface models, in which mixing between freshwater and seawater is not explicitly simulated, have fast run times that enable parameter estimation and uncertainty analysis. These are essential steps for decision-support modeling; however, their implementation in sharp interface models has remained limited, and few guidelines exist regarding which observations to use and which processing and weighting strategies to employ. We developed a data assimilation framework for a regional sharp interface model designed for management purposes. We built a sharp interface model for an island aquifer using the SWI2 package for MODFLOW. We then extracted freshwater head observations from shallow wells, pumping wells and deep open wells, and observations of the seawater-freshwater interface from deep open wells, time-domain electromagnetic (TDEM) and electrical resistivity tomography (ERT) surveys. After quantification of measurement uncertainties, parameter estimation was conducted with PEST and a data worth analysis was carried out using a linear approach. Model residuals provided insight into the potential of different observation groups to constrain parameter estimation, and the data worth analysis provided insight into the importance of these groups in reducing the uncertainty of model forecasts. Overall, a satisfactory fit was obtained between simulated and observed data, although observations from deep open wells were biased. While observations from deep open wells and geophysical surveys had a low signal-to-noise ratio, parameter estimation effectively reduced predictive uncertainty. Interface observations, especially from geophysical surveys, were essential to reduce the uncertainty of model forecasts. The use of different types of observations is discussed, and recommendations are provided for future data collection strategies in coastal aquifers. This framework was developed for the Magdalen Islands (Quebec, Canada) and could be applied more systematically for sharp interface seawater intrusion modeling.</p>
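The linear ("first-order second-moment") data worth calculation described in the abstract can be sketched in a few lines: the worth of an observation group is the additional reduction in forecast variance obtained when that group is assimilated. The matrices below are toy values, not the Magdalen Islands model; PEST performs an analogous computation from a model Jacobian.

```python
import numpy as np

# Hypothetical FOSM (first-order second-moment) data worth sketch.
# J: Jacobian of observations w.r.t. parameters, Sigma: prior parameter
# covariance, R: observation noise covariance, y: sensitivity of the
# forecast to the parameters. All values here are illustrative.

def posterior_cov(J, Sigma, R):
    """Schur-complement update of the prior parameter covariance."""
    G = Sigma @ J.T @ np.linalg.inv(J @ Sigma @ J.T + R)
    return Sigma - G @ J @ Sigma

def forecast_variance(y, Sigma):
    """Linearized variance of a scalar forecast with sensitivity y."""
    return float(y @ Sigma @ y)

rng = np.random.default_rng(0)
n_par, n_obs = 4, 6
J = rng.normal(size=(n_obs, n_par))   # all observations
Sigma = np.eye(n_par)                 # prior parameter uncertainty
R = 0.1 * np.eye(n_obs)               # measurement noise
y = rng.normal(size=n_par)            # forecast sensitivity vector

prior_var = forecast_variance(y, Sigma)
# Worth of an observation group = extra reduction in forecast variance
# when that group (here, the last two rows of J) is included.
var_without = forecast_variance(y, posterior_cov(J[:4], Sigma, R[:4, :4]))
var_with = forecast_variance(y, posterior_cov(J, Sigma, R))
worth = var_without - var_with
```

Because the posterior covariance can only shrink as observations are added, the computed worth is never negative; groups whose rows of the Jacobian overlap the forecast sensitivity reduce the most variance.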


2020 ◽  
Author(s):  
Moritz Gosses ◽  
Thomas Wöhling

<p>Physically-based groundwater models allow highly detailed spatial resolution, parameterization and process representation, among other advantages. Unfortunately, their size and complexity make many model applications computationally demanding. This is especially problematic for uncertainty and data worth analysis methods, which often require many model runs.</p><p>To alleviate the high computational demand of applying groundwater models for data worth analysis, we combine two different solutions:</p><ol><li>the use of surrogate models as faster alternatives to a complex model, and</li> <li>a robust data worth analysis method based on linear predictive uncertainty estimation, coupled with highly efficient null-space Monte Carlo techniques.</li> </ol><p>We compare the performance of a complex benchmark model of a real-world aquifer in New Zealand to two different surrogate models: a spatially and parametrically simplified version of the complex model, and a projection-based surrogate model created with proper orthogonal decomposition (POD). We generate predictive uncertainty estimates with all three models using linearization techniques implemented in the PEST Toolbox (Doherty 2016) and calculate the worth of existing, “future” and “parametric” data in relation to predictive uncertainty. To partially account for the non-uniqueness of the model parameters, we use null-space Monte Carlo methods (Doherty 2016) to efficiently generate a multitude of calibrated model parameter sets, which are used to compute the variability of the data worth estimates generated by the three models.</p><p>Comparison between the results of the complex benchmark model and the two surrogates shows good agreement for both surrogates in estimating the worth of the existing data sets for various model predictions. The simplified surrogate model has difficulties estimating the worth of “future” data and is unable to reproduce “parametric” data worth due to its simplified parameter representation. The POD model successfully reproduced both “future” and “parametric” data worth for different predictions. Many of its data worth estimates exhibit high variance, however, demonstrating the need for robust data worth methods such as the one presented here, which can account (to some degree) for parameter non-uniqueness.</p><p>Literature:</p><p>Doherty, J., 2016. PEST: Model-Independent Parameter Estimation - User Manual. Watermark Numerical Computing, 6th Edition.</p>
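The projection-based surrogate mentioned in the abstract rests on proper orthogonal decomposition: a truncated SVD of a matrix of model-state snapshots yields a low-dimensional basis onto which the full state is projected. A minimal, generic sketch with synthetic snapshots (not the authors' New Zealand model):

```python
import numpy as np

# Generic POD sketch: build a reduced basis from snapshots that, by
# construction, live in a 5-dimensional subspace plus small noise.
rng = np.random.default_rng(1)
basis_true = rng.normal(size=(200, 5))      # 200 model cells
coeffs = rng.normal(size=(5, 30))           # 30 snapshots
snapshots = basis_true @ coeffs + 1e-6 * rng.normal(size=(200, 30))

# POD: truncated SVD of the snapshot matrix gives the reduced basis.
U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
r = 5                                       # number of retained modes
Phi = U[:, :r]                              # reduced basis (200 x r)

# Project a full state down to r coefficients and reconstruct it.
x_full = snapshots[:, 0]
x_reduced = Phi.T @ x_full                  # r numbers instead of 200
x_approx = Phi @ x_reduced

rel_err = np.linalg.norm(x_full - x_approx) / np.linalg.norm(x_full)
```

The surrogate then evolves only the r coefficients, which is why POD can preserve parameter-related behavior that a structurally simplified model loses.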


2020 ◽  
Author(s):  
Falk Heße ◽  
Lars Isachsen ◽  
Sebastian Müller ◽  
Attinger Sabine

<p>Characterizing the subsurface of our planet is an important task. Yet compared to many other fields, subsurface characterization is always burdened by large uncertainties, caused by a general lack of data and the large spatial variability of many subsurface properties. Owing to their comparably low cost, pumping tests are regularly used to characterize groundwater aquifers. The classical approach is to identify the parameters of a conceptual subsurface model by fitting an analytical expression to the measured drawdown. One drawback of classical pumping-test analysis is the assumption that a single representative parameter value exists for the whole aquifer; consequently, it cannot account for spatial heterogeneity. To address this limitation, a number of studies have proposed extensions of both Thiem’s and Theis’ formulas. Using these extensions, geostatistical parameters such as the mean, variance and correlation length of a heterogeneous conductivity field can be estimated from pumping tests.</p><p>While these methods have demonstrated their ability to estimate such geostatistical parameters, their data worth has rarely been investigated within a Bayesian framework. This is particularly relevant since recent developments in Bayesian inference facilitate the derivation of informative prior distributions for these parameters. Here, informative means that the prior is based on currently available background data and may therefore substantially influence the posterior distribution. If this is the case, the actual data worth of pumping tests, as well as of other subsurface characterization methods, may be lower than assumed.</p><p>To investigate this possibility, we implemented a series of numerical pumping tests in a synthetic model based on the Herten aquifer. Using informative prior distributions, we derived the posterior distributions over the mean, variance and correlation length of the synthetic heterogeneous conductivity field. Our results show that informative priors already substantially lower the data worth of pumping tests for the mean and variance, whereas the estimation of the correlation length remains mostly unaffected. These results suggest that, with an increasing amount of background data, the data worth of pumping tests may fall even lower, meaning that more informative techniques for subsurface characterization will be needed in the future.</p>
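The core effect described above — an informative prior leaving less uncertainty for the data to remove — can be illustrated with a toy conjugate-normal update. The numbers are illustrative and have nothing to do with the Herten model:

```python
import numpy as np

# Toy illustration: the more informative the prior on a parameter
# (here, the mean of a normal), the less a pumping test can still
# reduce its uncertainty, i.e. the lower the test's data worth.

def posterior_var(prior_var, noise_var, n_obs):
    """Posterior variance of a normal mean after n_obs noisy observations."""
    return 1.0 / (1.0 / prior_var + n_obs / noise_var)

noise_var = 0.5           # observation noise of the synthetic test
n_obs = 20                # number of drawdown observations

vague_prior = 10.0        # little background data
informative_prior = 0.2   # substantial background data

# Data worth measured as prior-to-posterior variance reduction.
worth_vague = vague_prior - posterior_var(vague_prior, noise_var, n_obs)
worth_informative = informative_prior - posterior_var(
    informative_prior, noise_var, n_obs
)
```

With the vague prior, nearly all of the prior variance is removed by the test; with the informative prior, most of the reduction has already been achieved by the background data, so the same observations are worth far less.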


Water ◽  
2020 ◽  
Vol 12 (3) ◽  
pp. 736
Author(s):  
Anis Younes ◽  
Qian Shao ◽  
Thierry Alex Mara ◽  
Husam Musa Baalousha ◽  
Marwan Fahs

Accurate simulation of flow and contaminant transport through unsaturated soils requires adequate knowledge of the soil parameters. This study deals with the hydraulic characterization of soils using laboratory experiments. A new strategy is developed that combines global sensitivity analysis (GSA) and Bayesian data-worth analysis (DWA) to select the data that ensure a good estimation of the soil properties. The strategy is applied to the estimation of soil properties from a laboratory infiltration experiment. Results show that GSA identifies the regions and periods of high sensitivity for each parameter, and thereby the observations likely to contain information for a successful calibration. Further, the sensitivity exhibits nonlinear behavior, with regions of strong influence and regions of weak influence inside the parameter space. Bayesian DWA, performed a priori, quantifies the improvement in the posterior uncertainty of the estimated parameters when a type of measurement is added. The results reveal that an accurate estimation of the soil properties can be obtained if the target parameter values are located in the regions of high influence in the parameter space.
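A variance-based first-order sensitivity index of the kind used in GSA, S_i = Var(E[Y|X_i]) / Var(Y), can be estimated by binning Monte Carlo samples. A generic sketch with a stand-in test function (not the infiltration model of the study):

```python
import numpy as np

# Binned Monte Carlo estimate of the first-order sensitivity index
# S_i = Var(E[Y|X_i]) / Var(Y) for each input of a toy model.

def first_order_index(x, y, n_bins=20):
    """Estimate Var(E[Y|X]) / Var(Y) by conditioning on quantile bins of X."""
    edges = np.quantile(x, np.linspace(0, 1, n_bins + 1))
    idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, n_bins - 1)
    cond_means = np.array([y[idx == b].mean() for b in range(n_bins)])
    counts = np.array([(idx == b).sum() for b in range(n_bins)])
    var_cond = np.average((cond_means - y.mean()) ** 2, weights=counts)
    return var_cond / y.var()

rng = np.random.default_rng(2)
n = 50_000
x1, x2, x3 = (rng.uniform(-1, 1, n) for _ in range(3))
# Stand-in model: x1 dominates, x2 is weak, x3 is nearly irrelevant.
y = 5.0 * x1 + 1.0 * x2 + 0.1 * x3 + rng.normal(scale=0.1, size=n)

s1 = first_order_index(x1, y)
s2 = first_order_index(x2, y)
s3 = first_order_index(x3, y)
```

The indices rank the inputs by the share of output variance each explains alone, which is exactly the information GSA uses to flag the regions and observation periods worth measuring.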


Author(s):  
Tobias Lampprecht ◽  
David Salb ◽  
Marek Mauser ◽  
Huub van de Wetering ◽  
Michael Burch ◽  
...  

Formula One races provide a wealth of data worth investigating. Although the time-varying data has a clear structure, it is quite challenging to analyze for further properties. Here, the focus is on a visual classification of events, drivers, and time periods. As a first step, the Formula One data is visually encoded with a line plot visual metaphor reflecting the dynamic lap times; a classification of the races based on the visual outcomes of these line plots is then presented. The visualization tool is web-based, starts with a calendar-based overview representation, and provides several interactively linked views on the data. To illustrate the usefulness of the approach, Formula One data from several years and race locations is visually explored. The chapter discusses algorithmic, visual, and perceptual limitations that might occur during the visual classification of time-series data such as Formula One races.
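A minimal sketch of the preprocessing that typically underlies such lap-time line plots — cumulative race time per driver and the per-lap gap to the leader — with invented lap times (the chapter's actual encoding may differ):

```python
import numpy as np

# Invented lap times (seconds per lap) for two hypothetical drivers;
# real Formula One data would come from timing feeds or archives.
lap_times = {
    "driver_a": [90.1, 89.8, 90.5, 91.2],
    "driver_b": [90.4, 90.0, 89.9, 90.1],
}

# Cumulative race time after each lap, per driver.
cumulative = {d: np.cumsum(t) for d, t in lap_times.items()}

# Leader time at each lap = minimum cumulative time over all drivers;
# the gap-to-leader curves are what the line plot metaphor draws.
leader = np.min(np.vstack(list(cumulative.values())), axis=0)
gaps = {d: c - leader for d, c in cumulative.items()}
```

Plotting one gap curve per driver over the lap index produces the characteristic crossing lines (overtakes) and diverging lines (fading pace) that the visual classification relies on.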


2020 ◽  
Vol 228 ◽  
pp. 103554
Author(s):  
E. Essouayed ◽  
E. Verardo ◽  
A. Pryet ◽  
R.L. Chassagne ◽  
O. Atteia
