Web-gLV: A Web Based Platform for Lotka-Volterra Based Modeling and Simulation of Microbial Populations

2019 ◽  
Vol 10 ◽  
Author(s):  
Bhusan K. Kuntal ◽  
Chetan Gadgil ◽  
Sharmila S. Mande

The affordability of high throughput DNA sequencing has allowed us to explore the dynamics of microbial populations in various ecosystems. Mathematical modeling and simulation of such microbiome time series data can help in gaining a better understanding of bacterial communities. In this paper, we present Web-gLV, a GUI-based interactive platform for generalized Lotka-Volterra (gLV) based modeling and simulation of microbial populations. The tool can be used to generate mathematical models with automatic parameter estimation and to predict future trajectories through numerical simulation. We also demonstrate the utility of our tool on a few publicly available datasets. The case studies demonstrate the ease with which the tool can be used by biologists to model bacterial populations and simulate their dynamics to gain biological insights. We expect Web-gLV to be a valuable contribution to the field of ecological modeling and metagenomic systems biology.
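
For readers unfamiliar with the underlying equations, the gLV system takes the form dx_i/dt = x_i (mu_i + sum_j a_ij x_j). A minimal simulation sketch is given below; it is not Web-gLV's own code, and the two-species growth rates and interaction coefficients are illustrative assumptions.

```python
# Minimal generalized Lotka-Volterra (gLV) simulation sketch.
# Not Web-gLV code; the two-species parameters below are illustrative assumptions.
import numpy as np
from scipy.integrate import solve_ivp

mu = np.array([0.8, 0.4])                  # intrinsic growth rates
A = np.array([[-1.0, -0.5],                # interaction matrix a_ij
              [-0.3, -1.0]])

def glv(t, x):
    # dx_i/dt = x_i * (mu_i + sum_j a_ij * x_j)
    return x * (mu + A @ x)

sol = solve_ivp(glv, (0.0, 50.0), [0.1, 0.1], t_eval=np.linspace(0, 50, 200))
print(sol.y[:, -1])                        # abundances at the end of the simulation
```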

Author(s):  
Tobias Lampprecht ◽  
David Salb ◽  
Marek Mauser ◽  
Huub van de Wetering ◽  
Michael Burch ◽  
...  

Formula One races provide a wealth of data worth investigating. Although the time-varying data has a clear structure, it is challenging to analyze for further properties. Here the focus is on a visual classification of events, drivers, and time periods. As a first step, the Formula One data is visually encoded using a line plot metaphor reflecting the dynamic lap times; a classification of the races is then derived from the visual outcomes of these line plots. The visualization tool is web-based and provides several interactively linked views of the data, starting with a calendar-based overview representation. To illustrate the usefulness of the approach, Formula One data from several years and race locations is visually explored. The chapter discusses algorithmic, visual, and perceptual limitations that might occur during the visual classification of time-series data such as Formula One races.
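
As a rough illustration of the line-plot encoding described above, the sketch below plots lap times per driver; the lap times and driver names are fabricated placeholders, not actual Formula One data or the chapter's tool.

```python
# Sketch of the lap-time line-plot metaphor; the data here are made-up placeholders.
import matplotlib.pyplot as plt

laps = list(range(1, 11))
lap_times = {                              # seconds per lap, purely illustrative
    "Driver A": [92.1, 91.8, 91.5, 91.9, 95.4, 91.6, 91.4, 91.3, 91.7, 91.2],
    "Driver B": [92.5, 92.0, 91.9, 92.1, 92.0, 91.8, 96.2, 91.9, 91.8, 91.6],
}

for driver, times in lap_times.items():
    plt.plot(laps, times, marker="o", label=driver)   # one line per driver

plt.xlabel("Lap")
plt.ylabel("Lap time (s)")
plt.title("Dynamic lap times (illustrative)")
plt.legend()
plt.show()
```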


2009 ◽  
Vol 63 (3) ◽  
Author(s):  
Michal Čižniar ◽  
Marián Podmajerský ◽  
Tomáš Hirmajer ◽  
Miroslav Fikar ◽  
Abderrazak Latifi

Abstract The estimation of parameters in semi-empirical models is essential in numerous areas of engineering and applied science. In many cases, these models are described by a set of ordinary-differential equations or by a set of differential-algebraic equations. Due to the presence of non-convexities of functions participating in these equations, current gradient-based optimization methods can guarantee only locally optimal solutions. This deficiency can have a marked impact on the operation of chemical processes from the economical, environmental and safety points of view and it thus motivates the development of global optimization algorithms. This paper presents a global optimization method which guarantees ɛ-convergence to the global solution. The approach consists in the transformation of the dynamic optimization problem into a nonlinear programming problem (NLP) using the method of orthogonal collocation on finite elements. Rigorous convex underestimators of the nonconvex NLP problem are employed within the spatial branch-and-bound method and solved to global optimality. The proposed method was applied to two example problems dealing with parameter estimation from time series data.
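
For contrast with the global method above, the sketch below fits ODE parameters to time series data with a standard gradient-based least-squares solver, the kind of local approach whose limitations motivate the paper; the reaction model, data, and starting guess are fabricated, and this is not the collocation plus branch-and-bound algorithm itself.

```python
# Local least-squares fit of ODE parameters from time-series data.
# NOT the paper's collocation + branch-and-bound method; it only illustrates the
# kind of estimation problem (and the local-optimum pitfall) that motivates a
# global approach. Model, data, and parameter values are fabricated.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def model(t, y, k1, k2):
    # simple series reaction A -> B -> C, states y = [A, B]
    A, B = y
    return [-k1 * A, k1 * A - k2 * B]

t_obs = np.linspace(0, 10, 21)
true = (0.6, 0.25)
y_obs = solve_ivp(model, (0, 10), [1.0, 0.0], t_eval=t_obs, args=true).y

def residuals(p):
    y_sim = solve_ivp(model, (0, 10), [1.0, 0.0], t_eval=t_obs, args=tuple(p)).y
    return (y_sim - y_obs).ravel()

fit = least_squares(residuals, x0=[0.1, 0.1])   # gradient-based: local optimum only
print(fit.x)                                    # close to (0.6, 0.25) from this start
```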


2021 ◽  
Author(s):  
Eberhard Voit ◽  
Jacob Davis ◽  
Daniel Olivenca

Abstract For close to a century, Lotka-Volterra (LV) models have been used to investigate interactions among populations of different species. For a few species, these investigations are straightforward. However, with the arrival of large and complex microbiomes, unprecedentedly rich data have become available and await analysis. In particular, these data require us to ask which microbial populations of a mixed community affect other populations, whether these influences are activating or inhibiting and how the interactions change over time. Here we present two new inference strategies for interaction parameters that are based on a new algebraic LV inference (ALVI) method. One strategy uses different survivor profiles of communities grown under similar conditions, while the other pertains to time series data. In addition, we address the question of whether observation data are compliant with the LV structure or require a richer modeling format.
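
The time-series strategy can be illustrated with a generic regression-based LV inference sketch (not the authors' ALVI algorithm): approximate each species' per-capita growth rate by a finite-difference log-derivative and regress it on the abundance vector. The function name and the assumption of densely sampled, noise-free abundances are illustrative.

```python
# Generic regression-based LV inference sketch (not the authors' ALVI algorithm):
# approximate d(ln x_i)/dt by finite differences and regress it on the abundance
# vector to recover growth rates mu_i and interaction coefficients a_ij.
# The time series X would come from data; here it is assumed to be given.
import numpy as np

def infer_lv(X, t):
    """X: (n_times, n_species) abundances; t: (n_times,) sample times."""
    dlog = np.gradient(np.log(X), t, axis=0)        # per-capita growth rates
    design = np.column_stack([np.ones(len(t)), X])  # [1, x_1, ..., x_n]
    # Solve dlog[:, i] ~= design @ [mu_i, a_i1, ..., a_in] for each species i
    coeffs, *_ = np.linalg.lstsq(design, dlog, rcond=None)
    mu, A = coeffs[0], coeffs[1:].T                 # A[i, j] = a_ij
    return mu, A
```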


2016 ◽  
Vol 11 (4) ◽  
pp. 624-633
Author(s):  
Dylan Keon ◽  
Cherri M. Pancake ◽  
Ben Steinberg ◽  
Harry Yeh ◽  
...  

In spite of advances in numerical modeling and computer power, coastal buildings and infrastructure are still designed and evaluated for tsunami hazards based on parametric criteria with engineering “conservatism,” largely because complex numerical simulations require time and resources in order to obtain adequate results with sufficient resolution. This is especially challenging when running multiple scenarios across a range of probabilistic tsunami occurrences. Numerical computations with high temporal and spatial resolution also yield extremely large datasets, which are necessary for quantifying the uncertainties associated with tsunami hazard evaluation. Here, we introduce a new web-based tool, the Data Explorer, which facilitates the exploration and extraction of numerical tsunami simulation data. The underlying concepts are not new, but the Data Explorer is unique in its ability to retrieve time series data from massive output datasets in less than a second, the fact that it runs in a standard web browser, and its user-centric approach. To demonstrate the tool’s performance and utility, two hypothetical example cases are presented. Its usability, together with essentially instantaneous retrieval of data, makes simulation-based analysis and subsequent quantification of uncertainties accessible, enabling a path to future design decisions based on science rather than relying solely on expert judgment.
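
Sub-second retrieval of this kind generally relies on indexed, chunked storage of the simulation output. The sketch below shows one way such a point-wise extraction could look; the file name, dataset name, and HDF5 layout are assumptions, and this is not the Data Explorer's implementation.

```python
# Sketch of point-wise time-series extraction from a large gridded simulation
# output. NOT the Data Explorer's implementation; the file name, dataset name,
# and chunked layout are assumptions for illustration.
import h5py

def extract_time_series(path, i, j, dataset="wave_height"):
    """Return the full time series at grid cell (i, j) without loading the grid."""
    with h5py.File(path, "r") as f:
        # dataset assumed to be stored as (time, y, x), chunked along time,
        # so a single-cell slice touches only a small part of the file
        return f[dataset][:, i, j]

# series = extract_time_series("tsunami_run_042.h5", i=120, j=300)
```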


mBio ◽  
2018 ◽  
Vol 9 (1) ◽  
Author(s):  
Hidetoshi Inamine ◽  
Stephen P. Ellner ◽  
Peter D. Newell ◽  
Yuan Luo ◽  
Nicolas Buchon ◽  
...  

ABSTRACT A priority in gut microbiome research is to develop methods to investigate ecological processes shaping microbial populations in the host from readily accessible data, such as fecal samples. Here, we demonstrate that these processes can be inferred from the proportion of ingested microorganisms that is egested and their egestion time distribution, by using general mathematical models that link within-host processes to statistics from fecal time series. We apply this framework to Drosophila melanogaster and its gut bacterium Acetobacter tropicalis. Specifically, we investigate changes in their interactions following ingestion of a food bolus containing bacteria in a set of treatments varying the following key parameters: the density of exogenous bacteria ingested by the flies (low/high) and the association status of the host (axenic or monoassociated with A. tropicalis). At 5 h post-ingestion, ~35% of the intact bacterial cells have transited through the gut with the food bolus and ~10% are retained in a viable and culturable state, leaving ~55% that have likely been lysed in the gut. Our models imply that lysis and retention occur over a short spatial range within the gut when the bacteria are ingested from a low density, but more broadly in the host gut when ingested from a high density, by both gnotobiotic and axenic hosts. Our study illustrates how time series data complement the analysis of static abundance patterns to infer ecological processes as bacteria traverse the host. Our approach can be extended to investigate how different bacterial species interact within the host to understand the processes shaping microbial community assembly.

IMPORTANCE A major challenge to our understanding of the gut microbiome in animals is that it is profoundly difficult to investigate the fate of ingested microbial cells as they travel through the gut. Here, we created mathematical tools to analyze microbial dynamics in the gut from the temporal pattern of their abundance in fecal samples, i.e., without direct observation of the dynamics, and validated them with Drosophila fruit flies. Our analyses revealed that over 5 h after ingestion, most bacteria have likely died in the host or have been egested as intact cells, while some living cells have been retained in the host. Bacterial lysis or retention occurred across a larger area of the gut when flies ingest bacteria from high densities than when flies ingest bacteria from low densities. Our mathematical tools can be applied to other systems, including the dynamics of gut microbial populations and communities in humans.
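
A back-of-the-envelope mass balance underlies the reported fractions: whatever is neither egested intact nor retained viable is inferred to have been lysed. The short sketch below simply restates that arithmetic using the abstract's approximate numbers and is not the authors' model code.

```python
# Mass balance behind the reported fractions (approximate numbers taken from the
# abstract, not the authors' model code): of the ingested cells, whatever is
# neither egested intact nor retained viable is inferred to have been lysed.
ingested = 1.0            # normalize the ingested bolus to 1
egested_intact = 0.35     # ~35% transited through the gut with the food bolus
retained_viable = 0.10    # ~10% retained in a viable, culturable state
lysed = ingested - egested_intact - retained_viable
print(f"inferred lysed fraction: {lysed:.2f}")   # ~0.55
```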


2010 ◽  
Vol 219 (4) ◽  
pp. 042034 ◽  
Author(s):  
S Chilingaryan ◽  
A Beglarian ◽  
A Kopmann ◽  
S Vöcking

2016 ◽  
Vol 78 ◽  
pp. 97-105 ◽  
Author(s):  
Joeseph P. Smith ◽  
Timothy S. Hunter ◽  
Anne H. Clites ◽  
Craig A. Stow ◽  
Tad Slawecki ◽  
...  

2021 ◽  
Vol 10 (2) ◽  
pp. 68
Author(s):  
Juan Bacilio Guerrero Escamilla ◽  
Arquímedes Avilés Vargas

This paper presents the elements involved in building a panel data model that combines cross-sectional and time series dimensions, together with the assumptions required for its application, focusing on the main components of panel data modelling: model construction, parameter estimation, and validation. Following the methodology of operations research, a practical application is carried out to estimate the number of kidnapping cases in Mexico from several economic indicators. Of the two types of panel data model analyzed, the best fit is obtained with the random-effects model, and the most significant variables are gross domestic product growth and the informal employment rate in each state over the period 2010 to 2019. The exercise thus illustrates that panel data models fit the data better than alternatives such as linear regression or time series analysis alone.
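
A random-effects fit of this kind might be sketched as below, assuming the linearmodels package and a hypothetical state-by-year DataFrame; the file and column names are stand-ins, not the authors' data.

```python
# Random-effects panel sketch in the spirit of the study; the CSV file, its
# column names, and the (state, year) index are hypothetical stand-ins.
import pandas as pd
import statsmodels.api as sm
from linearmodels.panel import RandomEffects

# df: one row per (state, year), 2010-2019, with columns
#   "kidnappings", "gdp_growth", "informal_employment_rate"
df = pd.read_csv("panel_mexico.csv").set_index(["state", "year"])

exog = sm.add_constant(df[["gdp_growth", "informal_employment_rate"]])
model = RandomEffects(df["kidnappings"], exog)
print(model.fit().summary)
```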

