Inference and Validation of the Structure of Lotka-Volterra Models

2021 ◽  
Author(s):  
Eberhard Voit ◽  
Jacob Davis ◽  
Daniel Olivenca

Abstract: For close to a century, Lotka-Volterra (LV) models have been used to investigate interactions among populations of different species. For a few species, these investigations are straightforward. However, with the arrival of large and complex microbiomes, unprecedentedly rich data have become available and await analysis. In particular, these data require us to ask which microbial populations of a mixed community affect other populations, whether these influences are activating or inhibiting, and how the interactions change over time. Here we present two new inference strategies for interaction parameters that are based on a new algebraic LV inference (ALVI) method. One strategy uses different survivor profiles of communities grown under similar conditions, while the other pertains to time series data. In addition, we address the question of whether observation data are compliant with the LV structure or require a richer modeling format.
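
For orientation, the generalized Lotka-Volterra system whose interaction parameters such inference targets is conventionally written as follows (standard textbook notation, not taken from the paper itself):

```latex
\frac{dx_i}{dt} = x_i \left( r_i + \sum_{j=1}^{n} a_{ij} x_j \right), \qquad i = 1, \dots, n
```

where x_i is the abundance of population i, r_i its intrinsic growth rate, and a_ij the interaction coefficient indicating whether population j activates (a_ij > 0) or inhibits (a_ij < 0) population i. The ALVI strategies described above infer the a_ij either from survivor profiles or from time series data.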


1968 ◽  
Vol 8 (2) ◽  
pp. 308-309
Author(s):  
Mohammad Irshad Khan

It is alleged that agricultural output in poor countries responds very little to movements in prices and costs because of subsistence-oriented production and self-produced inputs. The work of Gupta and Majid is concerned with the empirical verification of the responsiveness of farmers to prices and marketing policies in a backward region. The authors' analysis of the responsiveness of farmers to economic incentives is based on two sets of data (concerning sugarcane, a cash crop, and paddy, a subsistence crop) collected from the district of Deoria in Eastern U.P. (Uttar Pradesh), a chronically foodgrain-deficit region in northern India. In one set, they have aggregate time-series data at the district level and, in the other, they have obtained data from a survey of five villages selected from 170 villages around Padrauna town in Deoria.


2021 ◽  
Vol 3 (1) ◽  
Author(s):  
Hitoshi Iuchi ◽  
Michiaki Hamada

Abstract: Time-course experiments using parallel sequencers have the potential to uncover gradual changes in cells over time that cannot be observed in a two-point comparison. An essential step in time-series data analysis is the identification of temporal differentially expressed genes (TEGs) under two conditions (e.g. control versus case). Model-based approaches, which are typical TEG detection methods, often set one parameter (e.g. degree or degrees of freedom) for one dataset. This approach risks modeling linearly increasing genes with higher-order functions, or fitting cyclic gene expression with linear functions, thereby leading to false positives/negatives. Here, we present a Jonckheere–Terpstra–Kendall (JTK)-based non-parametric algorithm for TEG detection. Benchmarks using simulation data show that the JTK-based approach outperforms existing methods, especially in long time-series experiments. Additionally, application of JTK to time-series RNA-seq data from seven tissue types across developmental stages in mouse and rat suggested that the wave pattern, rather than the difference in expression levels, contributes to TEG identification by JTK. This suggests that JTK is a suitable algorithm when focusing on expression patterns over time rather than expression levels, such as in comparisons between different species. These results show that JTK is an excellent candidate for TEG detection.
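
The abstract does not give implementation details; purely as an illustration of a rank-based trend statistic in the Jonckheere–Terpstra/Kendall family, the sketch below scores each gene by Kendall's tau against time. The kendalltau call is a stand-in, not the authors' JTK implementation, and the data are synthetic.

```python
# Illustration only: rank-based trend scoring of genes over time.
# kendalltau is used as a stand-in for a JTK-type statistic.
import numpy as np
from scipy.stats import kendalltau

def trend_scores(expr, timepoints):
    """expr: (genes x samples) expression matrix; timepoints: sampling times."""
    scores = []
    for gene_values in expr:
        tau, p_value = kendalltau(timepoints, gene_values)
        scores.append((tau, p_value))
    return np.array(scores)

# Toy example: three genes observed at six time points
expr = np.array([[1, 2, 3, 4, 5, 6],    # monotonically increasing
                 [6, 5, 4, 3, 2, 1],    # monotonically decreasing
                 [3, 1, 4, 1, 5, 9]])   # no clear trend
print(trend_scores(expr, timepoints=np.arange(6)))
```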


2021 ◽  
Author(s):  
Sadnan Al Manir ◽  
Justin Niestroy ◽  
Maxwell Adam Levinson ◽  
Timothy Clark

Introduction: Transparency of computation is a requirement for assessing the validity of computed results and research claims based upon them, and it is essential for access to, assessment of, and reuse of computational components. These components may be subject to methodological or other challenges over time. While reference to archived software and/or data is increasingly common in publications, a single machine-interpretable, integrative representation of how results were derived, one that supports defeasible reasoning, has been absent. Methods: We developed the Evidence Graph Ontology, EVI, in OWL 2, with a set of inference rules, to provide deep representations of supporting and challenging evidence for computations, services, software, data, and results, across arbitrarily deep networks of computations, in connected or fully distinct processes. EVI integrates FAIR practices on data and software with important concepts from provenance models and argumentation theory. It extends PROV for additional expressiveness, with support for defeasible reasoning. EVI treats any computational result or component of evidence as a defeasible assertion, supported by a directed acyclic graph (DAG) of the computations, software, data, and agents that produced it. Results: We have successfully deployed EVI for very-large-scale predictive analytics on clinical time-series data. Every result may reference its own evidence graph as metadata, which can be extended when subsequent computations are executed. Discussion: Evidence graphs support transparency and defeasible reasoning on results. They are first-class computational objects and reference the datasets and software from which they are derived. They support fully transparent computation, with challenge and support propagation. The EVI approach may be extended to include instruments, animal models, and critical experimental reagents.
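
Purely as an illustrative sketch (not the EVI/OWL 2 representation itself), an evidence graph can be pictured as a directed acyclic graph whose nodes are data, software, computations, and results, and whose ancestor set records everything a result's validity depends on. All identifiers below are hypothetical.

```python
# Illustration only: a minimal evidence DAG, not the EVI ontology or PROV.
import networkx as nx

g = nx.DiGraph()
# Edges point from supporting evidence to the node it supports.
g.add_edge("input_dataset_v1", "computation_run_42", relation="usedBy")
g.add_edge("analysis_software_v2.1", "computation_run_42", relation="usedBy")
g.add_edge("computation_run_42", "result_figure_3", relation="supports")

# Walking the ancestors of a result recovers its full supporting evidence;
# a later challenge to the software would propagate to the result along this path.
print(nx.ancestors(g, "result_figure_3"))
```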


2012 ◽  
Vol 1 (1) ◽  
pp. 10-22
Author(s):  
Nateson C ◽  
Suganya D

The present study seeks to analyse the volatility of the popular stock index SENSEX. It is based on closing-price time series data for the SENSEX covering the period from 3 January 2000 to 30 June 2011. The year 2008 recorded higher volatility than the other years of the study. Volatility fell in 2009 from the 2008 high, and the following years were comparatively calmer. In 2000, volatility was also higher, signifying enhanced market activity. The overall daily volatility of the SENSEX was approximately 1.70%, while the annualized value was approximately 25%-26%. Events reported around daily returns in excess of +/-5% have also been identified.
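
The relationship between the daily and annualized figures is the usual square-root-of-time scaling of return volatility; the sketch below shows the standard computation on synthetic prices (the 252 trading-day convention is an assumption, as the paper's exact convention is not stated in the abstract).

```python
# Hedged sketch: annualizing daily return volatility via sqrt-of-time scaling.
import numpy as np

def annualized_volatility(close_prices, trading_days=252):
    log_returns = np.diff(np.log(close_prices))
    daily_vol = log_returns.std(ddof=1)
    return daily_vol, daily_vol * np.sqrt(trading_days)

# Synthetic closing prices with ~1.7% daily volatility, for illustration only
prices = 10000 * np.exp(np.cumsum(np.random.normal(0.0, 0.017, size=500)))
daily, annual = annualized_volatility(prices)
print(f"daily ~ {daily:.2%}, annualized ~ {annual:.2%}")
```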


2021 ◽  
Author(s):  
Erik Otović ◽  
Marko Njirjak ◽  
Dario Jozinović ◽  
Goran Mauša ◽  
Alberto Michelini ◽  
...  

In this study, we compared the performance of machine learning models trained using transfer learning with that of models trained from scratch on time series data. Four machine learning models were used for the experiment. Two models were taken from the field of seismology, and the other two were general-purpose models for working with time series data. The accuracy of the selected models was systematically observed and analyzed when switching within the same domain of application (seismology), as well as between mutually different domains of application (seismology, speech, medicine, finance). In seismology, we used two databases of local earthquakes (one in counts, and the other with the instrument response removed) and a database of global earthquakes for predicting earthquake magnitude; the other datasets targeted classifying spoken words (speech), predicting stock prices (finance) and classifying muscle movement from EMG signals (medicine).

In practice, it is very demanding and sometimes impossible to collect labeled datasets large enough to successfully train a machine learning model. Therefore, in our experiment, we used reduced datasets of 1,500 and 9,000 data instances to mimic such conditions. Using the same scaled-down datasets, we trained two sets of machine learning models: those that used transfer learning for training and those that were trained from scratch. We compared the performances between pairs of models in order to draw conclusions about the utility of transfer learning. To confirm the validity of the obtained results, we repeated the experiments several times and applied statistical tests to confirm the significance of the results. The study shows when, within this experimental framework, the transfer of knowledge improved model accuracy and model convergence rate.

Our results show that it is possible to achieve better performance and faster convergence by transferring knowledge from the domain of global earthquakes to the domain of local earthquakes, and sometimes also vice versa. However, improvements in seismology can sometimes also be achieved by transferring knowledge from the medical and audio domains. The results show that the transfer of knowledge between other domains brought even more significant improvements than those within the field of seismology. For example, models in the field of sound recognition achieved much better performance compared to classical models, and the domain of sound recognition proved very compatible with knowledge from other domains. We came to similar conclusions for the domains of medicine and finance. Ultimately, the paper offers suggestions on when transfer learning is useful, and the explanations offered can provide a good starting point for knowledge transfer using time series data.
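
As a schematic of the transfer-learning arm of such an experiment (the architecture, file names, and hyperparameters below are assumptions, not the authors' setup), one loads a network pretrained on the source domain, freezes its feature-extraction layers, and fine-tunes a new head on the small target dataset:

```python
# Schematic only: fine-tuning a pretrained time-series model on a small target set.
# Paths, layer choices and hyperparameters are hypothetical.
import tensorflow as tf

base = tf.keras.models.load_model("pretrained_source_domain.h5")
base.trainable = False  # freeze the pretrained weights

# Reuse everything up to the penultimate layer as a feature extractor.
features = tf.keras.Model(base.input, base.layers[-2].output)

inputs = tf.keras.Input(shape=base.input_shape[1:])
outputs = tf.keras.layers.Dense(1)(features(inputs))  # new head, e.g. magnitude regression
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4), loss="mse")

# x_small, y_small: one of the reduced target-domain sets (1,500 or 9,000 instances)
# model.fit(x_small, y_small, epochs=50, validation_split=0.2)
```

Training the same architecture from random initialization on the same reduced set gives the from-scratch baseline against which the fine-tuned model is compared.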


2019 ◽  
Vol 14 (2) ◽  
pp. 182-207 ◽  
Author(s):  
Benoît Faye ◽  
Eric Le Fur

Abstract: This article tests the stability of the main hedonic wine price coefficients over time. We draw on an extensive literature review to identify the most frequently used methodology and define a standard hedonic model. We estimate this model on monthly subsamples of a worldwide auction database of the most commonly exchanged fine wines. This provides, for each attribute, a monthly time series of hedonic coefficients from 2003 to 2014. Using a multivariate autoregressive model, we then study the stability of these coefficients over time and test for structural or cyclical changes related to fluctuations in general price levels. We find that most hedonic coefficients are variable and exhibit either structural or cyclical variations over time. These findings cast doubt on the relevance of both short- and long-run hedonic estimations. (JEL Classifications: C13, C22, D44, G11)
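
The standard hedonic specification re-estimated on each monthly subsample is typically of the semi-log form shown below (notation assumed here, not taken from the article):

```latex
\ln P_{it} = \beta_{0t} + \sum_{k} \beta_{kt} \, x_{ik} + \varepsilon_{it}
```

where P_it is the auction price of wine i in month t and x_ik are its attributes; the resulting monthly coefficient series beta_kt for each attribute is what the multivariate autoregressive model then examines for structural and cyclical changes.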


2020 ◽  
Vol 496 (1) ◽  
pp. 629-637
Author(s):  
Ce Yu ◽  
Kun Li ◽  
Shanjiang Tang ◽  
Chao Sun ◽  
Bin Ma ◽  
...  

ABSTRACT Time series data of celestial objects are commonly used to study valuable and unexpected objects such as extrasolar planets and supernovae in time domain astronomy. Due to the rapid growth of data volume, traditional manual methods are becoming extremely hard, even infeasible, for continuously analysing accumulated observation data. To meet such demands, we designed and implemented a special tool named AstroCatR that can efficiently and flexibly reconstruct time series data from large-scale astronomical catalogues. AstroCatR can load original catalogue data from Flexible Image Transport System (FITS) files or databases, match each item to determine which object it belongs to, and finally produce time series data sets. To support high-performance parallel processing of large-scale data sets, AstroCatR uses the extract-transform-load (ETL) pre-processing module to create sky zone files and balance the workload. The matching module uses the overlapped indexing method and an in-memory reference table to improve accuracy and performance. The output of AstroCatR can be stored in CSV files or transformed into other formats as needed. Simultaneously, the module-based software architecture ensures the flexibility and scalability of AstroCatR. We evaluated AstroCatR with actual observation data from the three Antarctic Survey Telescopes (AST3). The experiments demonstrate that AstroCatR can efficiently and flexibly reconstruct all time series data by setting relevant parameters and configuration files. Furthermore, the tool is approximately 3× faster than methods using relational database management systems at matching massive catalogues.
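
AstroCatR's internals are not reproduced here; purely as an illustration of the cross-matching idea it implements (grouping repeated catalogue detections of the same object into a per-object time series), a small astropy-based sketch might look like the following. File and column names (ra, dec, mag, mjd, id) are hypothetical.

```python
# Illustration only, not AstroCatR: match one epoch's detections to a reference
# object table by sky position, then extend each object's time series.
import astropy.units as u
from astropy.coordinates import SkyCoord
from astropy.table import Table

reference = Table.read("reference_objects.fits")   # one row per known object
epoch = Table.read("new_epoch_catalogue.fits")     # detections from one observation

ref_coords = SkyCoord(reference["ra"], reference["dec"], unit="deg")
det_coords = SkyCoord(epoch["ra"], epoch["dec"], unit="deg")

# Nearest-neighbour match; keep pairs closer than a 2 arcsec tolerance.
idx, sep2d, _ = det_coords.match_to_catalog_sky(ref_coords)
matched = sep2d < 2 * u.arcsec

for det_row, ref_index in zip(epoch[matched], idx[matched]):
    object_id = reference["id"][ref_index]
    # Append (mjd, mag) to the time series of object_id.
    print(object_id, det_row["mjd"], det_row["mag"])
```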

