Integration of GRACE (Gravity Recovery and Climate Experiment) data with traditional data sets for a better understanding of the time-dependent water partitioning in African watersheds

Geology ◽  
2011 ◽  
Vol 39 (5) ◽  
pp. 479-482 ◽  
Author(s):  
Mohamed Ahmed ◽  
Mohamed Sultan ◽  
John Wahr ◽  
Eugene Yan ◽  
Adam Milewski ◽  
...  


Author(s):  
Peter Heidrich ◽  
Thomas Götz

Vector-borne diseases can usually be examined with a vector–host model like the SIRUV model. This, however, depends on parameters that contain detailed information about the mosquito population that we usually do not know. For this reason, in this article, we reduce the SIRUV model to an SIR model with a time-dependent and periodic transmission rate β(t). Since the living conditions of the mosquitoes depend on the local weather conditions, meteorological data sets are incorporated into the model in order to achieve more realistic behavior. The resulting SIR model is fitted to existing data sets of hospitalized dengue cases in Jakarta (Indonesia) and Colombo (Sri Lanka) using numerical optimization based on Pontryagin’s maximum principle. A preceding data analysis shows that the results of this parameter fit are within a realistic range and thus allow further investigations. Based on this, various simulations are carried out and the prediction quality of the model is examined.
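
As a concrete illustration of the reduced model, the following Python sketch integrates an SIR system with a periodic transmission rate β(t). The sinusoidal forcing, rate constants, and population size are illustrative assumptions rather than the values fitted in the paper, where β(t) is instead estimated from the hospital data via Pontryagin’s maximum principle.

```python
import numpy as np
from scipy.integrate import solve_ivp

def beta(t, beta0=0.35, amplitude=0.3, period=365.0):
    """Periodic transmission rate; the sinusoidal form is an assumption."""
    return beta0 * (1.0 + amplitude * np.sin(2.0 * np.pi * t / period))

def sir_rhs(t, y, gamma=0.1, N=1e7):
    """SIR right-hand side with the time-dependent beta(t) above."""
    S, I, R = y
    new_infections = beta(t) * S * I / N
    return [-new_infections, new_infections - gamma * I, gamma * I]

N = 1e7                        # population size (illustrative)
y0 = [N - 100.0, 100.0, 0.0]   # initial susceptible, infected, recovered
sol = solve_ivp(sir_rhs, (0.0, 730.0), y0, max_step=1.0)

print("infected at day 730:", sol.y[1, -1])
```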


2009 ◽  
Author(s):  
T. J. Jackson ◽  
J. C. Shi ◽  
R. Bindlish ◽  
M. Cosh ◽  
L. Chai ◽  
...  

Paleobiology ◽  
1978 ◽  
Vol 4 (2) ◽  
pp. 135-149 ◽  
Author(s):  
Howard R. Lasker

The accuracy of numerical summaries of data from the fossil record has been hotly contested in recent years. In this paper I present a computer simulation which mimics the process of preservation and the resultant loss of data in survivorship records. The simulation accepts as its input a hypothetical “original” record and a time-dependent model of preservational loss. These parameters are used to generate a “preserved” record, and the “original” and “preserved” data sets are then compared. Eleven hypothetical original records having different patterns of diversity and turnover were “preserved” in this manner. Indices of diversity, origination, extinction, turnover, and longevity were evaluated for each of five different models of preservational bias. All of the indices behaved in a single characteristic fashion. Taxonomic data that contained large fluctuations (of the order of 100%) were preserved accurately. Similarly, the large-scale changes in preserved records matched original distributions. Records with small fluctuations (30%) were variably preserved, and when such fluctuations were present in the preserved record they did not always correlate with events in the original record. Longevities were more accurately preserved than were other forms of taxonomic data. The results are believed to reflect the levels of accuracy obtainable from generic and familial data and suggest ways in which data sets warranting more detailed study may be singled out.
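
The preservation experiment described above can be sketched in a few lines: generate a hypothetical “original” survivorship record, thin it with a time-dependent preservation probability, and compare diversity counts before and after. The record, the loss model, and all parameter values below are invented for illustration; the paper evaluates eleven original records against five bias models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "original" record: each taxon is an (origination, extinction)
# interval on a discretized time axis. All values are invented.
n_taxa, n_bins = 200, 50
orig = rng.integers(0, n_bins - 10, n_taxa)
ext = np.minimum(orig + 1 + rng.integers(0, 10, n_taxa), n_bins - 1)

def preservation_prob(t):
    """One illustrative time-dependent loss model: older bins preserve worse."""
    return 0.3 + 0.6 * t / (n_bins - 1)

# A taxon enters the "preserved" record only if at least one occurrence
# within its range survives; its preserved range spans the survivors.
preserved = []
for o, e in zip(orig, ext):
    hits = [t for t in range(o, e + 1) if rng.random() < preservation_prob(t)]
    if hits:
        preserved.append((min(hits), max(hits)))

def diversity(record):
    """Number of taxa extant in each time bin."""
    d = np.zeros(n_bins, dtype=int)
    for o, e in record:
        d[o:e + 1] += 1
    return d

print("original diversity: ", diversity(list(zip(orig, ext))))
print("preserved diversity:", diversity(preserved))
```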


2018 ◽  
Author(s):  
Yen Ting Lin ◽  
Nicolas E. Buchler

Understanding how stochastic gene expression is regulated in biological systems using snapshots of single-cell transcripts requires state-of-the-art methods of computational analysis and statistical inference. A Bayesian approach to statistical inference is the most complete method for model selection and uncertainty quantification of kinetic parameters from single-cell data. However, this approach is impractical because current numerical algorithms are too slow to handle typical models of gene expression. To solve this problem, we first show that time-dependent mRNA distributions of discrete-state models of gene expression are dynamic Poisson mixtures, whose mixing kernels are characterized by a piecewise-deterministic Markov process. We combined this analytical result with a kinetic Monte Carlo algorithm to create a hybrid numerical method that accelerates the calculation of time-dependent mRNA distributions by 1000-fold compared to current methods. We then integrated the hybrid algorithm into an existing Monte Carlo sampler to estimate the Bayesian posterior distribution of many different, competing models in a reasonable amount of time. We validated our method of accelerated Bayesian inference on several synthetic data sets. Our results show that kinetic parameters can be reasonably constrained for modestly sampled data sets, if the model is known a priori. If the model is unknown, the Bayesian evidence can be used to rigorously quantify the likelihood of a model relative to other models from the data. We demonstrate that Bayesian evidence selects the true model and outperforms approximate metrics, e.g., the Bayesian Information Criterion (BIC) or Akaike Information Criterion (AIC), often used for model selection.
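
A minimal sketch of the dynamic-Poisson-mixture idea for the classic two-state (telegraph) gene model is given below, assuming illustrative rate constants: promoter switching is sampled with a kinetic Monte Carlo step, the Poisson intensity evolves deterministically between switches (a piecewise-deterministic Markov process), and the mRNA count is a Poisson draw from the resulting mixing kernel. This is a simplified reading of the construction, not the authors’ implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative rates for a two-state (telegraph) gene, not fitted values.
k_on, k_off = 0.5, 1.0   # promoter switching rates
k_tx, delta = 20.0, 1.0  # transcription and mRNA degradation rates

def sample_mrna(T=10.0):
    """One draw from the dynamic Poisson mixture at time T: the intensity
    lam follows a PDMP (deterministic decay/production between kinetic
    Monte Carlo switching events), and the count is Poisson(lam)."""
    t, state, lam = 0.0, 0, 0.0
    while True:
        rate = k_on if state == 0 else k_off
        tau = rng.exponential(1.0 / rate)   # waiting time to the next switch
        dt = min(tau, T - t)
        # Deterministic flow between switches: d(lam)/dt = k_tx*state - delta*lam
        target = k_tx * state / delta
        lam = target + (lam - target) * np.exp(-delta * dt)
        t += dt
        if t >= T:
            break
        state = 1 - state                   # promoter flips ON <-> OFF
    return rng.poisson(lam)

samples = np.array([sample_mrna() for _ in range(5000)])
print("mean mRNA:", samples.mean(), "Fano factor:", samples.var() / samples.mean())
```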


Author(s):  
Emily Coco ◽  
Radu Iovita

Archaeologists typically define cultural areas on the basis of similarities between the types of material culture present in sites. The similarity is assessed in order of discovery, with newer sites being evaluated against older ones. Despite evidence for time-dependent site loss due to taphonomy, little attention has been paid to how this impacts archaeological interpretations about the spatial extents of material culture similarity. This paper tests the hypothesis that spatially incomplete data sets result in the detection of larger regions of similarity. To avoid assumptions about cultural processes, we apply subsampling algorithms to a naturally occurring, spatially distributed data set of soil types. We show that there is a negative relationship between the percentage of points used to evaluate similarity across space and the absolute distances to the first minimum in similarity for soil classifications at multiple spatial scales. This negative relationship indicates that incomplete spatial data sets lead to an overestimation of the area over which things are similar. Moreover, the location of the point from which the calculation begins can determine the size of the region of similarity. This has important implications for how we interpret the spatial extent of similarity in material culture over large distances in prehistory.
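
The subsampling logic can be illustrated with a synthetic categorical surface standing in for the soil data: compute a similarity-versus-distance curve from a starting point, locate the first minimum in similarity, and repeat at different subsampling fractions. Everything below, including the similarity definition, is a hypothetical simplification of the authors’ procedure.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic categorical surface standing in for the soil data:
# the class of each point depends on a coarse spatial region.
n = 4000
xy = rng.uniform(0, 100, (n, 2))
labels = (xy[:, 0] // 25).astype(int) + 4 * (xy[:, 1] // 25).astype(int)

def distance_to_first_minimum(xy, labels, start_idx, bins=20):
    """Similarity-vs-distance curve from a start point; returns the distance
    at the first local minimum of similarity (hypothetical measure)."""
    ref = labels[start_idx]
    d = np.linalg.norm(xy - xy[start_idx], axis=1)
    edges = np.linspace(0, d.max(), bins + 1)
    sim = np.full(bins, np.nan)
    for i in range(bins):
        mask = (d >= edges[i]) & (d < edges[i + 1])
        if mask.any():
            sim[i] = (labels[mask] == ref).mean()
    for i in range(1, bins - 1):
        if sim[i] < sim[i - 1] and sim[i] < sim[i + 1]:
            return 0.5 * (edges[i] + edges[i + 1])
    return edges[-1]

# Compare a sparse subsample with the full data set from the same start point.
for frac in (0.05, 1.0):
    idx = rng.choice(n, int(frac * n), replace=False)
    idx = np.concatenate(([0], idx))    # keep the start point in the sample
    dist = distance_to_first_minimum(xy[idx], labels[idx], 0)
    print(f"fraction={frac:.2f}: first similarity minimum at distance {dist:.1f}")
```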


Author(s):  
K. N. Makris ◽  
I. Vonta

This paper deals with the presentation and study of alternative coupling techniques for maximum and minimum values between data sets; namely, the problem examined in this work is the possible appearance of maximum or minimum values in different data sets at the same or neighboring time points. The data can be time-dependent (time series) or non-time-dependent. In this work, the analysis is focused on time series, and novel indices are defined in order to measure whether the values of N sets of data attain their maximum or minimum values at the same time instances or at very close instances. For this purpose, two methods are compared: a direct method and an indirect method. The indirect method is based on matrices of dimensionless indicators, denoted by μ_MKN, and the direct method is based on a variance-type measure, denoted by V_MKN.
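
The intuition behind the direct method can be sketched as follows, under the (hypothetical) simplification that coupling is summarized by the variance of the time indices at which each series attains its extremum; the actual V_MKN measure defined in the paper is more elaborate.

```python
import numpy as np

def extremum_time_variance(series_list, mode="max"):
    """Variance of the time indices of the extrema across N series
    (small variance = extrema occur at the same or neighboring times)."""
    pick = np.argmax if mode == "max" else np.argmin
    times = np.array([pick(s) for s in series_list])
    return times.var()

t = np.linspace(0, 4 * np.pi, 400)
coupled = [np.sin(t + 0.05 * k) for k in range(5)]    # nearly aligned maxima
uncoupled = [np.sin(t + 1.5 * k) for k in range(5)]   # widely shifted maxima

print("coupled series:  ", extremum_time_variance(coupled))
print("uncoupled series:", extremum_time_variance(uncoupled))
```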


2018 ◽  
Vol 63 (4) ◽  
pp. 361-376 ◽  
Author(s):  
Sajjad Farashi

Correct interpretation of neural mechanisms depends on the accurate detection of neuronal activities, which become visible as spikes in the electrical activity of neurons. In the present work, a novel entropy-based method is proposed for spike detection, which exploits the fact that transient spike events change the entropy level of the neural time series. In this regard, the time-dependent entropy method can be used for detecting spike times: the entropy of a selected segment of the neural time series is calculated using a sliding-window approach, and the times of the events are highlighted by sharp peaks in the output of the time-dependent entropy method. It is shown that the length of the sliding window determines the resolution of the time series in entropy space; therefore, the calculation is performed with different window lengths to obtain a multiresolution transform. The final decision threshold for detecting spike events is applied to the point-wise product of the time-dependent entropy calculations at the different resolutions. The proposed detection method has been assessed using several simulated and real neural data sets. The results show that the proposed method detects spikes at their exact times and, compared with other traditional methods, achieves a relatively lower false alarm rate.
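
A minimal sketch of the sliding-window entropy idea is shown below: Shannon entropy of the amplitude histogram is computed in windows of several lengths, deviations from the baseline entropy are multiplied point-wise across resolutions, and a threshold flags candidate spike times. Window lengths, bin count, and threshold are illustrative choices, not the paper’s settings.

```python
import numpy as np

rng = np.random.default_rng(3)

def sliding_entropy(x, win, bins=16):
    """Shannon entropy of the amplitude histogram in each sliding window."""
    h = np.full(len(x), np.nan)
    for i in range(len(x) - win + 1):
        counts, _ = np.histogram(x[i:i + win], bins=bins)
        p = counts[counts > 0] / win
        h[i + win // 2] = -(p * np.log(p)).sum()
    return np.where(np.isnan(h), np.nanmedian(h), h)   # pad edges with baseline

# Synthetic trace: background noise plus three brief spike events.
x = rng.normal(0, 1, 2000)
for s in (400, 900, 1500):
    x[s:s + 5] += 8.0

# Multiresolution: one entropy trace per window length, combined via a
# point-wise product of deviations from the baseline entropy level.
product = np.ones(len(x))
for win in (32, 64, 128):
    h = sliding_entropy(x, win)
    dev = np.abs(h - np.median(h))
    product *= dev / dev.max()

threshold = 0.5 * product.max()          # illustrative decision threshold
events = np.flatnonzero(product > threshold)
print("samples flagged as spike events:", events)
```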


2010 ◽  
Vol 73 (2) ◽  
pp. 200-212 ◽  
Author(s):  
Karolien Scheerlinck ◽  
Bernard De Baets ◽  
Ivan Stefanov ◽  
Veerle Fievez
