Modelling temperature variations in polar snow using DAISY

1997 ◽  
Vol 43 (143) ◽  
pp. 180-191 ◽  
Author(s):  
E. M. Morris ◽  
H. -P. Bader ◽  
P. Weilenmann

Abstract: A physics-based snow model has been calibrated using data collected at Halley Bay, Antarctica, during the International Geophysical Year. Variations in snow temperature and density are well-simulated using values for the model parameters within the range reported from other polar field experiments. The effect of uncertainty in the parameter values on the accuracy of the predictions is no greater than the effect of instrumental error in the input data. Thus, this model can be used with parameters determined a priori rather than by optimization. The model has been validated using an independent data set from Halley Bay and then used to estimate 10 m temperatures on the Antarctic Peninsula plateau over the last half-century.
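The abstract does not reproduce the model equations, but the core of a physics-based snow temperature model of this kind is one-dimensional heat conduction through a layered snowpack. The following is a minimal sketch of that idea only, not the DAISY scheme itself; the grid, the density-dependent conductivity parameterization, the explicit time stepping, and the sinusoidal surface forcing are all assumptions made for illustration.

```python
import numpy as np

# Minimal 1-D heat-conduction sketch of a layered snowpack (not the DAISY scheme).
# Assumptions: fixed grid, effective conductivity k = 2.22*(rho/rho_ice)**2
# (an assumed empirical form), explicit time stepping, prescribed surface
# temperature and a fixed basal temperature.

def step_temperature(T, rho, dz, dt, T_surface, T_base=-20.0):
    """Advance the snow temperature profile T (deg C) by one explicit time step."""
    c_ice = 2100.0                      # specific heat of ice, J kg-1 K-1
    k = 2.22 * (rho / 917.0) ** 2       # effective conductivity, W m-1 K-1 (assumed)
    T = T.copy()
    T[0], T[-1] = T_surface, T_base     # boundary conditions
    flux = k[:-1] * np.diff(T) / dz     # heat flux between adjacent nodes, W m-2
    # Interior nodes: dT/dt = d/dz (k dT/dz) / (rho * c)
    T[1:-1] += dt * np.diff(flux) / (rho[1:-1] * c_ice * dz)
    return T

# Example: 10 m profile at 0.1 m resolution, uniform density 400 kg m-3,
# hourly steps (stable for these grid and conductivity values).
dz, dt = 0.1, 3600.0
rho = np.full(101, 400.0)
T = np.full(101, -20.0)
for hour in range(24 * 6):              # six days of toy diurnal forcing
    T_surf = -20.0 + 10.0 * np.sin(2 * np.pi * hour / 24)
    T = step_temperature(T, rho, dz, dt, T_surf)
```

In a real calibration, parameters such as the effective conductivity would be the quantities tuned against measured temperature and density profiles, which is the exercise the abstract describes.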


Geophysics ◽  
2005 ◽  
Vol 70 (1) ◽  
pp. J1-J12 ◽  
Author(s):  
Lopamudra Roy ◽  
Mrinal K. Sen ◽  
Donald D. Blankenship ◽  
Paul L. Stoffa ◽  
Thomas G. Richter

Interpretation of gravity data warrants uncertainty estimation because of its inherent nonuniqueness. Although the uncertainties in model parameters cannot be completely eliminated, estimates of them can aid in the meaningful interpretation of results. Here we have employed a simulated annealing (SA)-based technique in the inversion of gravity data to derive multilayered earth models consisting of two- and three-dimensional bodies. In our approach, we assume that the density contrast is known, and we solve for the coordinates or shapes of the causative bodies, resulting in a nonlinear inverse problem. We attempt to sample the model space extensively so as to estimate several equally likely models. We then use all the models sampled by SA to construct an approximate marginal posterior probability density function (PPD) in model space, along with several orders of its moments. The correlation matrix clearly shows the interdependence of different model parameters and the corresponding trade-offs. Such correlation plots are used to study the effect of a priori information in reducing the uncertainty in the solutions. We also investigate the use of derivative information to obtain better depth resolution and to reduce the underlying uncertainties. We applied the technique to two synthetic data sets and to an airborne-gravity data set collected over Lake Vostok, East Antarctica, for which a priori constraints were derived from available seismic and radar profiles. The inversion results produced depths of the lake in the survey area along with the thickness of sediments. The resulting uncertainties are interpreted in terms of the experimental geometry and data error.
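As a rough illustration of the sampling strategy described above (not the authors' code), the sketch below runs a simulated-annealing random walk over a single model parameter, the depth of a buried sphere of known density contrast, keeps every sampled model, and forms an approximate marginal PPD as a weighted histogram. The forward model, noise level, step size, and cooling schedule are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical forward model: gravity anomaly of a buried sphere with known
# density contrast, observed along a profile x (all values are assumptions).
G = 6.674e-11
x = np.linspace(-2000.0, 2000.0, 41)                    # station positions, m
def forward(depth, radius=300.0, drho=500.0):
    mass = 4.0 / 3.0 * np.pi * radius**3 * drho
    return 1e5 * G * mass * depth / (x**2 + depth**2) ** 1.5   # mGal

sigma = 0.02                                            # assumed noise level, mGal
d_obs = forward(800.0) + rng.normal(0.0, sigma, x.size) # synthetic noisy data

def misfit(depth):
    r = d_obs - forward(depth)
    return 0.5 * np.sum((r / sigma) ** 2)

# Simulated annealing over depth; keep every sampled model to build the PPD.
depth, E = 500.0, misfit(500.0)
samples, energies = [], []
for k in range(5000):
    T = 10.0 * 0.999 ** k                               # geometric cooling schedule
    trial = depth + rng.normal(0.0, 25.0)               # random perturbation, m
    if trial > 0:
        E_trial = misfit(trial)
        if rng.random() < np.exp(min(0.0, (E - E_trial) / T)):
            depth, E = trial, E_trial
    samples.append(depth)
    energies.append(E)

# Approximate marginal PPD: weight each sampled model by exp(-misfit), then bin.
w = np.exp(-(np.array(energies) - min(energies)))
hist, edges = np.histogram(samples, bins=40, weights=w, density=True)
print("posterior mean depth ~", np.average(samples, weights=w))
```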


2019 ◽  
Author(s):  
Wiktor Młynarski ◽  
Michal Hledík ◽  
Thomas R. Sokolowski ◽  
Gašper Tkačik

Normative theories and statistical inference provide complementary approaches for the study of biological systems. A normative theory postulates that organisms have adapted to efficiently solve essential tasks, and proceeds to mathematically work out testable consequences of such optimality; parameters that maximize the hypothesized organismal function can be derived ab initio, without reference to experimental data. In contrast, statistical inference focuses on efficient utilization of data to learn model parameters, without reference to any a priori notion of biological function, utility, or fitness. Traditionally, these two approaches were developed independently and applied separately. Here we unify them in a coherent Bayesian framework that embeds a normative theory into a family of maximum-entropy "optimization priors." This family defines a smooth interpolation between a data-rich inference regime (characteristic of "bottom-up" statistical models) and a data-limited ab initio prediction regime (characteristic of "top-down" normative theory). We demonstrate the applicability of our framework using data from the visual cortex, the retina, and C. elegans, and argue that the flexibility it affords is essential to address a number of fundamental challenges relating to inference and prediction in complex, high-dimensional biological problems.
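One compact way to state the interpolation described above (the notation below is assumed for illustration, not quoted from the paper): if U(θ) is the hypothesized organismal utility and P(D | θ) the likelihood of the data, a maximum-entropy "optimization prior" weights parameters by their utility, with a single scalar β controlling the trade-off between the two regimes.

```latex
P(\theta \mid D) \;\propto\; P(D \mid \theta)\, P_\beta(\theta),
\qquad
P_\beta(\theta) \;\propto\; \exp\!\big(\beta\, U(\theta)\big)
```

For β → 0 the prior is flat and inference is purely data-driven (the bottom-up limit); as β → ∞ the prior concentrates on the utility-maximizing parameters and predictions become ab initio (the top-down limit).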


2019 ◽  
Vol 67 (5) ◽  
pp. 1453-1485 ◽  
Author(s):  
Shipra Agrawal ◽  
Vashist Avadhanula ◽  
Vineet Goyal ◽  
Assaf Zeevi

We consider a dynamic assortment selection problem where in every round the retailer offers a subset (assortment) of N substitutable products to a consumer, who selects one of these products according to a multinomial logit (MNL) choice model. The retailer observes this choice, and the objective is to dynamically learn the model parameters while optimizing cumulative revenues over a selling horizon of length T. We refer to this exploration–exploitation formulation as the MNL-Bandit problem. Existing methods for this problem follow an explore-then-exploit approach, which estimates parameters to a desired accuracy and then, treating these estimates as if they are the correct parameter values, offers the optimal assortment based on these estimates. These approaches require certain a priori knowledge of “separability,” determined by the true parameters of the underlying MNL model, and this in turn is critical in determining the length of the exploration period. (Separability refers to the distinguishability of the true optimal assortment from the other suboptimal alternatives.) In this paper, we give an efficient algorithm that simultaneously explores and exploits, without a priori knowledge of any problem parameters. Furthermore, the algorithm is adaptive in the sense that its performance is near optimal in the “well-separated” case as well as the general parameter setting where this separation need not hold.
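The epoch-based idea behind exploring and exploiting simultaneously can be sketched as follows (a simplified illustration with assumed notation and constants, not the paper's exact algorithm): offer the current assortment repeatedly until a no-purchase occurs, use the purchase counts in that epoch as unbiased estimates of the MNL preference weights, add an optimistic UCB-style bonus, and re-optimize the assortment.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)

N, K = 6, 3                                  # products, max assortment size (assumed)
v_true = rng.uniform(0.1, 0.9, N)            # true MNL weights (no-purchase weight = 1)
revenue = rng.uniform(0.5, 1.0, N)           # per-product revenues (assumed)

def choose(S):
    """Simulate one MNL choice from assortment S; return product index or -1 (no purchase)."""
    w = v_true[list(S)]
    p = np.concatenate(([1.0], w)) / (1.0 + w.sum())
    j = rng.choice(len(S) + 1, p=p)
    return -1 if j == 0 else S[j - 1]

def best_assortment(v_hat):
    """Brute-force expected-revenue maximization over assortments of size <= K."""
    best, best_val = None, -1.0
    for k in range(1, K + 1):
        for S in combinations(range(N), k):
            w = v_hat[list(S)]
            val = np.dot(revenue[list(S)], w) / (1.0 + w.sum())
            if val > best_val:
                best, best_val = list(S), val
    return best

counts = np.zeros(N)        # total purchases of each product
epochs = np.zeros(N)        # number of epochs in which each product was offered
v_ucb = np.ones(N)          # optimistic weight estimates

for epoch in range(2000):
    S = best_assortment(v_ucb)
    while True:             # offer S repeatedly until a no-purchase ends the epoch
        c = choose(S)
        if c < 0:
            break
        counts[c] += 1
    epochs[np.array(S)] += 1
    v_bar = counts / np.maximum(epochs, 1)   # unbiased estimate of each weight
    bonus = np.sqrt(2.0 * np.log(epoch + 2) / np.maximum(epochs, 1))
    v_ucb = v_bar + bonus                    # UCB-style optimism (constants assumed)

print("estimated weights:", np.round(counts / np.maximum(epochs, 1), 2))
print("true weights     :", np.round(v_true, 2))
```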


1999 ◽  
Vol 09 (03) ◽  
pp. 195-202 ◽  
Author(s):  
JOSÉ ALFREDO FERREIRA COSTA ◽  
MÁRCIO LUIZ DE ANDRADE NETTO

Determining the structure of data without prior knowledge of the number of clusters or any information about their composition is a problem of interest in many fields, such as image analysis, astrophysics, and biology. Partitioning a set of n patterns in a p-dimensional feature space must be done such that those in a given cluster are more similar to each other than to the rest. As there are approximately [Formula: see text] possible ways of partitioning the patterns among K clusters, finding the best solution is very hard when n is large. The search space grows even larger when the number of partitions is not known a priori. Although the self-organizing feature map (SOM) can be used to visualize clusters, the automation of knowledge discovery by SOM is a difficult task. This paper proposes region-based image processing methods to post-process the U-matrix obtained after the unsupervised learning performed by the SOM. Mathematical morphology is applied to identify regions of neurons that are similar. The number of regions and their labels are found automatically, and they are related to the number of clusters in a multivariate data set. New data can be classified by labeling them according to the best-matching neuron. Simulations using data sets drawn from finite mixtures of p-variate normal densities are presented, along with the advantages and drawbacks of the method.
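A minimal sketch of the post-processing idea (not the authors' implementation): from a trained SOM grid, compute the U-matrix of inter-neuron distances, threshold it, label the connected low-distance regions, and classify new data by the region label of their best-matching unit. The toy map, the threshold, and the use of simple connected-component labeling in place of a full morphological pipeline are assumptions here.

```python
import numpy as np
from scipy.ndimage import label

rng = np.random.default_rng(2)

# Assume a trained SOM given as a (rows, cols, dim) weight array; here a toy
# 10x10 map whose left and right halves sit near two Gaussian clusters.
rows, cols, dim = 10, 10, 2
weights = np.empty((rows, cols, dim))
weights[:, :5] = rng.normal([0.0, 0.0], 0.1, (rows, 5, dim))
weights[:, 5:] = rng.normal([3.0, 3.0], 0.1, (rows, 5, dim))

# U-matrix: mean distance from each neuron to its grid neighbours.
umatrix = np.zeros((rows, cols))
for i in range(rows):
    for j in range(cols):
        neigh = [(i + di, j + dj) for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1))
                 if 0 <= i + di < rows and 0 <= j + dj < cols]
        umatrix[i, j] = np.mean([np.linalg.norm(weights[i, j] - weights[a, b])
                                 for a, b in neigh])

# Simplified morphological step: threshold the U-matrix and label connected
# low-distance regions; each region is taken as one cluster
# (label 0 marks the high-distance boundary neurons excluded by the mask).
mask = umatrix < umatrix.mean()          # assumed threshold choice
labels, n_clusters = label(mask)
print("clusters found:", n_clusters)

def classify(x):
    """Return the region label of the best-matching unit (BMU) for a new pattern."""
    d = np.linalg.norm(weights - x, axis=2)
    i, j = np.unravel_index(np.argmin(d), d.shape)
    return labels[i, j]

print("label near first cluster :", classify(np.array([0.1, -0.1])))
print("label near second cluster:", classify(np.array([3.1, 2.9])))
```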


Psychometrika ◽  
2020 ◽  
Vol 85 (3) ◽  
pp. 684-715
Author(s):  
Luca Stefanutti ◽  
Debora de Chiusole ◽  
Pasquale Anselmi ◽  
Andrea Spoto

Abstract: A probabilistic framework for the polytomous extension of knowledge space theory (KST) is proposed. It consists of a probabilistic model, called the polytomous local independence model, that is developed as a generalization of the basic local independence model. The algorithms for computing "maximum likelihood" (ML) and "minimum discrepancy" (MD) estimates of the model parameters have been derived and tested in a simulation study. Results show that the algorithms differ in their capability of recovering the true parameter values. The ML algorithm correctly recovers the true values, regardless of the manipulated variables. This is not entirely true for the MD algorithm. Finally, the model has been applied to a real polytomous data set collected in the area of psychological assessment. Results show that it can be successfully applied in practice, paving the way to a number of applications of KST outside the area of knowledge and learning assessment.
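For context, the dichotomous basic local independence model that the polytomous model generalizes assigns to each response pattern R a probability built from state probabilities π_K, careless-error parameters β_q, and lucky-guess parameters η_q (standard KST notation; the paper's contribution is the polytomous generalization of these item-level terms):

```latex
P(R) \;=\; \sum_{K \in \mathcal{K}} \pi_K
\prod_{q \in K \setminus R} \beta_q
\prod_{q \in K \cap R} \bigl(1-\beta_q\bigr)
\prod_{q \in R \setminus K} \eta_q
\prod_{q \notin R \cup K} \bigl(1-\eta_q\bigr)
```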


Geophysics ◽  
2006 ◽  
Vol 71 (3) ◽  
pp. J41-J50 ◽  
Author(s):  
Tim van Zon ◽  
Kabir Roy-Chowdhury

Structural inversion of gravity data — deriving robust images of the subsurface by delineating lithotype boundaries using density anomalies — is an important goal in a range of exploration settings (e.g., ore bodies, salt flanks). Application of conventional inversion techniques in such cases, using ℓ2-norms and regularization, produces smooth results and is thus suboptimal. We investigate an ℓ1-norm-based approach which yields structural images without the need for explicit regularization. The density distribution of the subsurface is modeled with a uniform grid of cells. The density of each cell is inverted by minimizing the ℓ1-norm of the data misfit using linear programming (LP) while satisfying a priori density constraints. The estimate of the noise level in a given data set is used to qualitatively determine an appropriate parameterization. The 2.5D and 3D synthetic tests adequately reconstruct the structure of the test models. The quality of the inversion depends upon a good prior estimate of the minimum depth of the anomalous body. A comparison of our results with those obtained using truncated singular value decomposition (TSVD) on a noisy synthetic data set favors the LP-based method. There are two advantages in using LP for structural inversion of gravity data. First, it offers a natural way to incorporate a priori information regarding the model parameters. Second, it produces subsurface images with sharp boundaries (structure).
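The reduction of an ℓ1 data-misfit minimization to a linear program is standard and can be sketched as follows (a toy example with an assumed sensitivity matrix, not the authors' code): introducing auxiliary variables t_i ≥ |(Gm − d)_i| turns the problem into minimizing Σ t_i under linear constraints, and the a priori density constraints enter directly as bounds on the model cells.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(3)

# Toy linear gravity problem d = G m + noise, with an assumed random sensitivity
# matrix G (n_data x n_cells) and a blocky true density-contrast model m_true.
n_data, n_cells = 60, 20
G = rng.uniform(0.0, 1.0, (n_data, n_cells)) / n_data
m_true = np.zeros(n_cells)
m_true[8:13] = 500.0                          # kg/m^3, a single dense block
d = G @ m_true + rng.normal(0.0, 1e-2, n_data)

# L1 misfit  min_m sum_i |(G m - d)_i|  as an LP:
#   variables x = [m (n_cells), t (n_data)], minimize sum(t)
#   subject to  G m - t <= d   and  -G m - t <= -d,
#   with a priori bounds 0 <= m_j <= 1000 (assumed density constraints).
c = np.concatenate([np.zeros(n_cells), np.ones(n_data)])
A_ub = np.block([[G, -np.eye(n_data)],
                 [-G, -np.eye(n_data)]])
b_ub = np.concatenate([d, -d])
bounds = [(0.0, 1000.0)] * n_cells + [(0.0, None)] * n_data

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
m_est = res.x[:n_cells]
print("recovered model:", np.round(m_est, 1))
```

Linear programs attain their optima at vertices of the feasible region, which is one way to see why the recovered models tend to be blocky rather than smooth.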


Weed Science ◽  
2004 ◽  
Vol 52 (6) ◽  
pp. 1034-1038 ◽  
Author(s):  
David W. Fischer ◽  
R. Gordon Harvey ◽  
Thomas T. Bauman ◽  
Sam Phillips ◽  
Stephen E. Hart ◽  
...  

Variation in crop–weed interference relationships has been shown for a number of crop–weed mixtures and may have an important influence on weed management decision-making. Field experiments were conducted at seven locations over 2 yr to evaluate variation in common lambsquarters interference in field corn and whether a single set of model parameters could be used to estimate corn grain yield loss throughout the northcentral United States. Two coefficients (I and A) of a rectangular hyperbola were estimated for each data set using nonlinear regression analysis. The I coefficient represents corn yield loss as weed density approaches zero, and A represents maximum percent yield loss. Estimates of both coefficients varied between years at Wisconsin, and I varied between years at Michigan. When locations with similar sample variances were combined, estimates of both I and A varied. Common lambsquarters interference caused the greatest corn yield reduction in Michigan (100%) and had the least effect in Minnesota, Nebraska, and Indiana (0% yield loss). Variation in the I and A parameters resulted in variation in estimates of a single-year economic threshold (0.32 to 4.17 plants m−1 of row). Results of this study fail to support the use of a common yield loss–weed density function for all locations.
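The rectangular hyperbola referred to above is the standard Cousens yield-loss function, in which percent yield loss Y_L at weed density d rises with initial slope I and saturates at the asymptote A:

```latex
Y_L \;=\; \frac{I\,d}{1 + I\,d/A}
```

so that Y_L ≈ I d at low densities and Y_L → A as d → ∞, matching the interpretation of I and A given in the abstract.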


Author(s):  
Sang-Woo Park ◽  
Aviroop Mukherjee ◽  
Frank Gross ◽  
Paul P. Jovanis

The detailed analysis of preexisting crash and noncrash data representing an estimated 16 million vehicle miles of travel has revealed strong consistency between crash analysis using data from the 1980s and field experiments conducted in the 1990s. Time of day of driving is associated with crash risk: night and early morning driving has elevated risk in the range of 20% to 70% compared with daytime driving. Overall, 16 of 27 night and early morning driving schedules had elevated risk. Irregular schedules with primarily night and early morning driving had relative risk increases of 30% to 80%. In addition, there remains a persistent finding of increased crash risk associated with hours of driving, with risk increases of 30% to more than 80% compared with the first hour of driving. These increases are less than previously reported and are of similar magnitude to the risk increases caused by multiday schedules. Finally, there is some evidence, although it is far from persuasive, that risk increases may be associated with significant off-duty time, in some cases in the range of 24 to 48 h. The implication is that “restart” programs should be approached with caution. Areas for additional research include further studies of crash risk associated with extended off-duty time, closer examination of irregular schedules that better reflect truckload operations, and analysis of irregular schedules with primarily daytime driving (largely nonexistent in this data set) to further explore the effect of variability.


1983 ◽  
Vol 20 (2) ◽  
pp. 405-408 ◽  
Author(s):  
Paul Kabaila

In this paper we answer the following question: is there any a priori reason for supposing that there is no more than one set of ARMA model parameters minimising the one-step-ahead prediction error when the true system is not in the model set?
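In frequency-domain terms (notation assumed here, not taken from the paper), the quantity being minimised is the asymptotic one-step-ahead prediction error of the candidate ARMA model, with AR and MA polynomials A_θ and C_θ, applied to a true process with spectral density f_y:

```latex
V(\theta) \;=\; \frac{1}{2\pi}\int_{-\pi}^{\pi}
\left|\frac{A_\theta\!\left(e^{i\omega}\right)}{C_\theta\!\left(e^{i\omega}\right)}\right|^{2}
f_y(\omega)\,\mathrm{d}\omega
```

The question is whether V(θ) can be guaranteed, a priori, to have a unique minimiser over the model set when f_y itself is not attainable within it.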

