Mathematical Model for Small Size Time Series Data of Bacterial Secondary Metabolic Pathways

2018 · Vol 12 · pp. 117793221877507
Author(s): Daisuke Tominaga, Hideo Kawaguchi, Yoshimi Hori, Tomohisa Hasunuma, Chiaki Ogino, ...

Measuring the concentrations of metabolites and estimating the reaction rates of each step of a metabolic pathway are important for improving microorganisms used to maximize the production of materials. Although the reaction pathway must be identified for such an improvement, doing so is not easy. Numerous reaction steps have been reported; however, which steps are actually activated varies or changes with the conditions. Furthermore, to build mathematical models for a dynamical analysis, the reaction mechanisms and parameter values must be known; to date, however, sufficient information has not been published in many cases. In addition, experimental observations are expensive. A new mathematical approach that is applicable to small sample data and requires no detailed reaction information is therefore strongly needed. The S-system is one such model, and it can use smaller samples than other ordinary differential equation models. We propose a simplified S-system that requires minimal numbers of samples for a dynamic analysis of metabolic pathways. We applied the model to the phenyl lactate production pathway of Escherichia coli. The obtained model suggests which reaction steps are actually activated and where feedback inhibitions act within the pathway.
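The S-system form mentioned above can be sketched concretely. Below is a minimal illustration with hypothetical rate constants and kinetic orders (not the values fitted for the E. coli pathway): each metabolite follows a difference of two power-law terms, and even a two-variable toy pathway relaxes to an analytically checkable steady state.

```python
# S-system sketch: dX_i/dt = alpha_i * prod_j X_j**g_ij - beta_i * prod_j X_j**h_ij
# All parameter values below are hypothetical, chosen only for illustration.

def s_system_step(x, alpha, beta, g, h, dt):
    """One explicit-Euler step of an S-system ODE."""
    n = len(x)
    dx = []
    for i in range(n):
        prod_in, prod_out = 1.0, 1.0
        for j in range(n):
            prod_in *= x[j] ** g[i][j]
            prod_out *= x[j] ** h[i][j]
        dx.append(alpha[i] * prod_in - beta[i] * prod_out)
    return [x[i] + dt * dx[i] for i in range(n)]

# Two-step toy pathway: constant influx to X0, then X0 -> X1.
alpha = [1.0, 0.8]
beta = [0.8, 0.5]
g = [[0.0, 0.0],   # influx to X0 is constant
     [1.0, 0.0]]   # production of X1 depends on X0
h = [[1.0, 0.0],   # consumption of X0
     [0.0, 1.0]]   # degradation of X1

x = [0.5, 0.1]
for _ in range(2000):
    x = s_system_step(x, alpha, beta, g, h, dt=0.01)
# Steady state: alpha0 = beta0*X0 gives X0 = 1.25; alpha1*X0 = beta1*X1 gives X1 = 2.0.
```

The attraction of the power-law form is visible here: fitting it to time series needs only the exponents and rate constants, not a mechanistic rate law per reaction step.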

2019
Author(s): Christie A. Bahlai, Elise F. Zipkin

Environmental factors interact with the internal rules of population regulation, sometimes perturbing systems into alternate dynamics through changes in parameter values. Yet pinpointing when such changes occur in naturally fluctuating populations is difficult. An algorithmic approach that can identify the timing and magnitude of parameter shifts would facilitate understanding of abrupt ecological transitions, with potential to inform conservation and management of species. The “Dynamic Shift Detector” is an algorithm to identify changes in the parameter values governing temporal fluctuations in populations with nonlinear dynamics. The algorithm examines population time series data for the presence, location, and magnitude of parameter shifts. It uses an iterative approach to fit subsets of the time series data, then ranks the fit of break point combinations using model selection, assigning a relative weight to each break. We examined the performance of the Dynamic Shift Detector with simulations and two case studies. Under low environmental/sampling noise, the break point sets selected by the Dynamic Shift Detector contained the true simulated breaks with 70-100% accuracy. The weighting tool generally assigned breaks intentionally placed in simulated data (i.e., true breaks) weights averaging >0.8, and breaks due to sampling error (i.e., erroneous breaks) weights averaging <0.2. In our case study examining an invasion process, the algorithm identified shifts in population cycling associated with variations in resource availability. The shifts identified for the conservation case study highlight a decline process that generally coincided with changing management practices affecting the availability of host plant resources. When interpreted in the context of species biology, the Dynamic Shift Detector algorithm can aid management decisions and identify critical time periods related to species’ dynamics. In an era of rapid global change, such tools can provide key insights into the conditions under which population parameters, and their corresponding dynamics, can shift.

Author Summary: Populations naturally fluctuate in abundance, and the rules governing these fluctuations are a result of both internal (density-dependent) and external (environmental) processes. For these reasons, pinpointing when changes in populations occur is difficult. In this study, we develop a novel break-point analysis tool for population time series data. Using a density-dependent model to describe a population’s underlying dynamic process, our tool iterates through all possible break point combinations (i.e., abrupt changes in parameter values) and applies information-theoretic decision tools (i.e., Akaike’s Information Criterion corrected for small sample sizes) to determine best fits. Here, we develop the approach, simulate data under a variety of conditions to demonstrate its utility, and apply the tool to two case studies: an invasion of the multicolored Asian lady beetle and declining monarch butterflies. The Dynamic Shift Detector algorithm identified parameter changes that correspond to known environmental change events in both case studies.
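A stripped-down version of the break-point search can be sketched as follows. This is a single-break illustration using a Ricker-type density-dependent model and simulated data; the published algorithm considers all break point combinations and assigns AICc-based weights, which this sketch omits.

```python
import math
import random

def fit_ricker(ns):
    """Least-squares fit of log(N[t+1]/N[t]) = r - (r/K)*N[t]; returns the RSS."""
    xs = ns[:-1]
    ys = [math.log(ns[t + 1] / ns[t]) for t in range(len(ns) - 1)]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))

def aicc(rss, n, k):
    """Akaike's Information Criterion corrected for small sample sizes."""
    return n * math.log(rss / n) + 2 * k + 2 * k * (k + 1) / (n - k - 1)

def best_single_break(ns, min_seg=6):
    """Compare the no-break fit to every single break point via AICc."""
    n_obs = len(ns) - 1
    best_score, best_break = aicc(fit_ricker(ns), n_obs, 2), None
    for b in range(min_seg, len(ns) - min_seg):
        rss = fit_ricker(ns[:b + 1]) + fit_ricker(ns[b:])
        score = aicc(rss, n_obs, 4)  # two segments, two parameters each
        if score < best_score:
            best_score, best_break = score, b
    return best_break

# Simulate a Ricker population whose carrying capacity drops at t = 15.
random.seed(1)
ns = [5.0]
for t in range(30):
    r, K = (0.5, 50.0) if t < 15 else (0.5, 20.0)
    ns.append(ns[-1] * math.exp(r * (1 - ns[-1] / K) + random.gauss(0, 0.05)))
brk = best_single_break(ns)  # should land near the true break at t = 15
```

Because the break model pays a four-parameter AICc penalty against the two-parameter no-break model, a break is only reported when the improvement in fit is substantial, which is the same information-theoretic trade-off the abstract describes.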


Author(s): Cuong Truong Ngoc, Xiao Xu, Hwan-Seong Kim, Duy Anh Nguyen, Sam-Sang You

This paper deals with a three-dimensional (3D) model of the competitive Lotka-Volterra equations to investigate the nonlinear dynamics and control strategy of container terminal throughput and capacity. Dynamical behaviors are explored in depth by using eigenvalue evaluation, bifurcation analysis, and time-series data. The dynamical analysis shows the stability and bifurcation of the competition and collaboration among multiple container terminals in maritime transportation. Based on the chaotic analysis, sliding mode control theory has been utilized for the optimization of port operations under disruptions. Extensive numerical simulations have been conducted to validate the efficacy and reliability of the presented control algorithms. In particular, the closed-loop system has been assessed through chaotic suppression and synchronization strategies for port management. Finally, the presented fundamental techniques can be utilized to provide managerial insights and solutions for efficient seaport operations, allowing more timely and cost-effective decision making for port authorities in such a highly competitive environment.
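The underlying 3D competitive Lotka-Volterra system is easy to state and simulate. The sketch below uses hypothetical growth rates and competition coefficients (not the paper's calibrated values) and integrates with explicit Euler; for the coefficients chosen, the three "terminals" settle on a stable coexistence equilibrium rather than a chaotic regime.

```python
def clv_step(x, r, a, dt):
    """One explicit-Euler step of the competitive Lotka-Volterra system
    dx_i/dt = r_i * x_i * (1 - sum_j a_ij * x_j)."""
    return [xi + dt * ri * xi * (1.0 - sum(aij * xj for aij, xj in zip(row, x)))
            for xi, ri, row in zip(x, r, a)]

# Hypothetical growth rates and competition matrix for three terminals.
r = [1.0, 0.8, 1.2]
a = [[1.0, 0.3, 0.2],
     [0.4, 1.0, 0.3],
     [0.2, 0.5, 1.0]]
x = [0.3, 0.3, 0.3]
for _ in range(20000):
    x = clv_step(x, r, a, dt=0.005)
# The trajectory settles on the coexistence equilibrium solving A x = 1,
# here approximately x = (0.722, 0.535, 0.588).
```

Stronger cross-competition terms can destabilize this equilibrium and produce the cyclic or chaotic behavior the paper analyzes with bifurcation diagrams and eigenvalues.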


2010 · Vol 4 · pp. BBI.S5983
Author(s): Daisuke Tominaga

Time series of gene expression often exhibit periodic behavior under the influence of multiple signal pathways and can be represented by a model that incorporates multiple harmonics and noise. Most such data, observed using DNA microarrays, consist of only a few sampling points in time, whereas most periodicity detection methods require a relatively large number of sampling points. We have previously developed a detection algorithm based on the discrete Fourier transform and Akaike's information criterion. Here we demonstrate the performance of the algorithm for small-sample time series data through a comparison with conventional and newly proposed periodicity detection methods based on a statistical analysis of the power of harmonics. We show that this method has higher sensitivity for data consisting of multiple harmonics and is more robust against noise than other methods. Although a “combinatorial explosion” occurs for large datasets, the computational time is not a problem for small-sample datasets. The MATLAB/GNU Octave script of the algorithm is available on the author's web site: http://www.cbrc.jp/%7Etominaga/piccolo/
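The combination of a discrete Fourier transform with AIC-based model selection can be sketched in a few lines. This is a simplified stand-in for the published algorithm (greedy strongest-first harmonic selection rather than an exhaustive search), with an illustrative two-harmonic test signal.

```python
import cmath
import math
import random

def harmonic_model_aic(xs, max_harmonics=3):
    """Add DFT harmonics strongest-first and keep the model (mean plus chosen
    harmonics) that minimizes AIC = n*ln(RSS/n) + 2*(number of parameters)."""
    n = len(xs)
    coeffs = [sum(xs[t] * cmath.exp(-2j * math.pi * k * t / n)
                  for t in range(n)) / n for k in range(n)]
    strength = {k: abs(coeffs[k]) for k in range(1, (n + 1) // 2)}
    order = sorted(strength, key=strength.get, reverse=True)

    def rss(keep):
        total = 0.0
        for t in range(n):
            v = coeffs[0]
            for k in keep:  # add harmonic k and its conjugate partner n-k
                v += coeffs[k] * cmath.exp(2j * math.pi * k * t / n)
                v += coeffs[n - k] * cmath.exp(2j * math.pi * (n - k) * t / n)
            total += (xs[t] - v.real) ** 2
        return total

    best_keep, best_aic = [], None
    for h in range(max_harmonics + 1):
        keep = order[:h]
        aic = n * math.log(rss(keep) / n + 1e-12) + 2 * (1 + 2 * h)
        if best_aic is None or aic < best_aic:
            best_keep, best_aic = keep, aic
    return sorted(best_keep)

# Small-sample test signal: harmonics 5 and 10 over 25 points, plus noise.
random.seed(0)
n = 25
xs = [math.sin(2 * math.pi * 5 * t / n) + 0.3 * math.cos(2 * math.pi * 10 * t / n)
      + random.gauss(0, 0.1) for t in range(n)]
sel = harmonic_model_aic(xs)
```

The AIC penalty (two parameters per harmonic: amplitude and phase) is what stops the model from absorbing noise harmonics, which is the key to working with few sampling points.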


2016 · Vol 55 (10) · pp. 2165-2180
Author(s): Takeshi Watanabe, Takahiro Takamatsu, Takashi Y. Nakajima

Variation in surface solar irradiance is investigated using ground-based observation data. The solar irradiance analyzed in this paper is scaled by the solar irradiance at the top of the atmosphere and is thus dimensionless. Three metrics are used to evaluate the variation in solar irradiance: the mean, the standard deviation, and sample entropy. Sample entropy is a value representing the complexity of time series data, but it has seldom been used in investigations of solar irradiance. In analyses of solar irradiance, sample entropy represents the manner of its fluctuation: large sample entropy corresponds to rapid fluctuation and a high ramp rate, whereas small sample entropy suggests weak or slow fluctuations. The three metrics are used to cluster 47 ground-based observation stations in Japan into groups with similar features of variation in surface solar irradiance. This new approach clarifies regional features of variation in solar irradiance. The results of this study can be applied to renewable-energy engineering.
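The clustering step can be sketched with plain k-means on per-station feature vectors. The six stations and their (mean, standard deviation, sample entropy) features below are hypothetical, chosen only to show how stations with similar fluctuation character group together; the paper clusters 47 real stations.

```python
def kmeans(points, k, iters=20):
    """Plain k-means with deterministic farthest-first initialization;
    returns a cluster label per point."""
    def d2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))

    centers = [points[0]]
    while len(centers) < k:  # seed each new center far from existing ones
        centers.append(max(points, key=lambda p: min(d2(p, c) for c in centers)))
    labels = [0] * len(points)
    for _ in range(iters):
        labels = [min(range(k), key=lambda c: d2(p, centers[c])) for p in points]
        for c in range(k):
            members = [p for p, l in zip(points, labels) if l == c]
            if members:
                centers[c] = [sum(col) / len(members) for col in zip(*members)]
    return labels

# Hypothetical normalized (mean, std, SampEn) features for six stations.
features = [
    [0.55, 0.10, 0.30], [0.54, 0.11, 0.32],  # stable, slowly varying sites
    [0.45, 0.20, 0.90], [0.44, 0.21, 0.95],  # rapidly fluctuating sites
    [0.50, 0.15, 0.60], [0.51, 0.16, 0.62],  # intermediate sites
]
labels = kmeans(features, k=3)
```

Stations with similar ramp behavior end up in the same cluster, which is the regional-grouping result the abstract describes.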


Entropy · 2018 · Vol 20 (8) · pp. 579
Author(s): Samira Ahmadi, Nariman Sepehri, Christine Wu, Tony Szturm

Sample entropy (SampEn) has been used to quantify the regularity or predictability of human gait signals. There are studies on the appropriate use of this measure for inter-stride spatio-temporal gait variables. However, the sensitivity of this measure to preprocessing of the signal and to varying values of the template size (m), tolerance size (r), and sampling rate has not been studied when applied to “whole” gait signals, that is, the entire time series obtained from force or inertial sensors. This study systematically investigates the sensitivity of SampEn of the center-of-pressure displacement in the mediolateral direction (ML COP-D) to varying parameter values and to two preprocessing methods: filtering out the high-frequency components and resampling the signals to have the same average number of data points per stride. The discriminatory ability of SampEn is studied by comparing the treadmill walk-only (WO) condition to the dual-task (DT) condition. The results suggest that SampEn maintains the directional difference between the two walking conditions across varying parameter values, showing a significant increase from the WO to the DT condition, especially when signals are low-pass filtered. Moreover, when gait speed differs between test conditions, signals should be low-pass filtered and resampled to have the same average number of data points per stride.
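SampEn itself is compact to implement, which makes the parameter sensitivity (m, r, sampling rate) easy to explore. A minimal sketch follows; the periodic-versus-noise example signals are illustrative, not gait data, and the tolerance is expressed as the common choice of a fraction of the signal's standard deviation.

```python
import math
import random

def sample_entropy(xs, m=2, r_factor=0.2):
    """SampEn(m, r) = -ln(A/B): B counts pairs of length-m templates within
    tolerance r under the Chebyshev distance, A the same for length m+1.
    Self-matches are excluded; r = r_factor * standard deviation of xs."""
    n = len(xs)
    mean = sum(xs) / n
    r = r_factor * math.sqrt(sum((x - mean) ** 2 for x in xs) / n)

    def matches(mm):
        count = 0
        for i in range(n - mm):
            for j in range(i + 1, n - mm):
                if max(abs(xs[i + k] - xs[j + k]) for k in range(mm)) <= r:
                    count += 1
        return count

    b, a = matches(m), matches(m + 1)
    return -math.log(a / b) if a > 0 and b > 0 else float("inf")

# A regular (periodic) signal should score lower than white noise.
random.seed(42)
regular = [math.sin(2 * math.pi * t / 20) for t in range(200)]
noise = [random.gauss(0, 1) for _ in range(200)]
se_regular = sample_entropy(regular)
se_noise = sample_entropy(noise)
```

Changing m, r_factor, or the sampling rate shifts both values, which is exactly why the study checks whether the WO-versus-DT difference survives across those choices.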


Author(s): Ali I. Hashmi, Bogdan I. Epureanu

A novel method of damage detection for systems exhibiting chaotic dynamics is presented. The algorithm reconstructs variations of system parameters without the need for explicit equations of motion or knowledge of the nominal parameter values. The concept of a Sensitivity Vector Field (SVF) is developed: a construct that captures geometrical deformations of the system's dynamical attractor in state space. These fields are collected by means of Point Cloud Averaging (PCA) applied to discrete time series data from the system under healthy (nominal parameter values) and damaged (varied parameter values) conditions. Test variations are reconstructed from an optimal basis of SVF snapshots, which is generated by means of proper orthogonal decomposition. The method is applied to two system models, a magneto-elastic oscillator (MEO) and an atomic force microscope (AFM), and is shown to be highly accurate and capable of identifying multiple simultaneous variations. Its success on these two models indicates a potential for highly accurate sample readings by exploiting recently observed chaotic vibrations.
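The reconstruction step can be sketched in miniature. Each snapshot SVF below is a hypothetical displacement of attractor points under a unit change of one parameter (the numbers are invented, not from the AFM/MEO models); a test field is then expressed in that basis by least squares, and the coefficients estimate the simultaneous parameter variations.

```python
# Toy version of reconstructing parameter variations from sensitivity
# vector fields; all field values here are hypothetical.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def reconstruct_variation(svf1, svf2, test_field):
    """Solve the 2x2 normal equations for test = c1*svf1 + c2*svf2."""
    a11, a12, a22 = dot(svf1, svf1), dot(svf1, svf2), dot(svf2, svf2)
    b1, b2 = dot(svf1, test_field), dot(svf2, test_field)
    det = a11 * a22 - a12 * a12
    c1 = (b1 * a22 - b2 * a12) / det
    c2 = (b2 * a11 - b1 * a12) / det
    return c1, c2

# Two basis fields sampled at five attractor points (hypothetical).
svf_k = [0.10, -0.05, 0.20, 0.00, -0.10]  # sensitivity to a stiffness change
svf_c = [0.00, 0.15, -0.05, 0.10, 0.05]   # sensitivity to a damping change
# Test field produced by a simultaneous variation of 0.3 and -0.2.
test = [0.3 * a - 0.2 * b for a, b in zip(svf_k, svf_c)]
c_k, c_c = reconstruct_variation(svf_k, svf_c, test)  # recovers (0.3, -0.2)
```

In the full method, proper orthogonal decomposition supplies an optimal low-dimensional basis from many SVF snapshots before this projection is performed.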


2014 · Vol 22 (2) · pp. 319-349
Author(s): Aldeida Aleti, Irene Moser, Indika Meedeniya, Lars Grunske

All commonly used stochastic optimisation algorithms have to be parameterised to perform effectively. Adaptive parameter control (APC) is an effective method for this purpose: it repeatedly adjusts parameter values during the optimisation process for optimal algorithm performance, assigning parameter values for a given iteration based on previously measured performance. In recent research, time series prediction has been proposed as a method of projecting the probabilities used for parameter value selection. In this work, we examine the suitability of a variety of prediction methods for projecting future parameter performance from previous data. All considered prediction methods make assumptions that the time series data must satisfy for the predictions to be accurate. Looking specifically at the parameters of evolutionary algorithms (EAs), we find that all standard EA parameters, with the exception of population size, largely conform to the assumptions made by the considered prediction methods. Evaluating the performance of these prediction methods, we find that linear regression provides the best results, by a very small and statistically insignificant margin. Regardless of the prediction method, predictive parameter control outperforms state-of-the-art parameter control methods when the performance data adheres to the assumptions made by the prediction method. When a parameter's performance data does not adhere to those assumptions, the use of prediction has no notable adverse impact on the algorithm's performance.
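The predictive selection loop can be sketched as follows. This is an illustrative reduction, assuming a linear-regression forecaster (the best performer reported above) and hypothetical performance histories for two candidate mutation rates; the actual APC framework is more general.

```python
def linear_forecast(ys):
    """Least-squares line through (t, ys[t]); returns the prediction at t = len(ys)."""
    n = len(ys)
    xs = range(n)
    mx, my = (n - 1) / 2, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    return my + b * (n - mx)

def selection_probs(history):
    """Map each candidate parameter value to a selection probability
    proportional to its forecast performance (floored at a small epsilon)."""
    preds = {v: max(linear_forecast(h), 1e-6) for v, h in history.items()}
    total = sum(preds.values())
    return {v: p / total for v, p in preds.items()}

# Hypothetical per-iteration performance histories for two mutation rates.
history = {0.01: [0.50, 0.55, 0.60, 0.66],   # improving
           0.10: [0.70, 0.65, 0.59, 0.54]}   # declining
probs = selection_probs(history)
```

Even though 0.10 has the higher historical mean, the forecast favours 0.01 because its trend is upward; reacting to the projected rather than the past performance is the point of predictive parameter control.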


2015 · Vol 32 (4) · pp. 793-826
Author(s): Brian D.O. Anderson, Manfred Deistler, Elisabeth Felsenstein, Bernd Funovits, Lukas Koelbl, ...

This paper is concerned with the problem of identifiability of the parameters of a high-frequency multivariate autoregressive model from mixed-frequency time series data. We demonstrate identifiability for generic parameter values using the population second moments of the observations. In addition, we present a constructive algorithm for recovering the parameter values and establish the continuity of the mapping attaching the high-frequency parameters to these population second moments. These structural results are obtained using two alternative tools, namely, extended Yule-Walker equations and blocking of the output process. The cases of stock and flow variables, as well as of general linear transformations of high-frequency data, are treated. Finally, we briefly discuss how our constructive identifiability results can be used for parameter estimation based on the sample second moments.
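The Yule-Walker building block behind these results can be illustrated in the simplest scalar case (a deliberate simplification, not the paper's mixed-frequency construction): for an AR(1) process x_t = a·x_{t-1} + e_t, the autocovariances satisfy γ(1) = a·γ(0), so the parameter is recovered from second moments alone.

```python
import random

def autocov(xs, lag):
    """Sample autocovariance of xs at the given lag."""
    n = len(xs)
    mean = sum(xs) / n
    return sum((xs[t] - mean) * (xs[t + lag] - mean)
               for t in range(n - lag)) / n

# Simulate an AR(1) process with known coefficient a = 0.6.
random.seed(3)
a_true = 0.6
x, xs = 0.0, []
for _ in range(20000):
    x = a_true * x + random.gauss(0, 1)
    xs.append(x)

# Yule-Walker: gamma(1) = a * gamma(0), so a is a ratio of second moments.
a_hat = autocov(xs, 1) / autocov(xs, 0)
```

The paper's extended Yule-Walker equations generalize exactly this idea: the high-frequency AR coefficients are obtained from (population) autocovariances, even when some series are only observed at a lower frequency.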


Complexity · 2019 · Vol 2019 · pp. 1-15
Author(s): Heather A. Harrington, Kenneth L. Ho, Nicolette Meshkat

We present a method for rejecting competing models from noisy time-course data that does not rely on parameter inference. First, we characterize ordinary differential equation models in terms of only the measurable variables using differential-algebra elimination. This procedure yields input-output equations, which serve as invariants for time series data. We then develop a model comparison test using linear algebra and statistics to reject incorrect models on the basis of their invariants. The algorithm exploits the dynamic properties encoded in the structure of the model equations without recourse to parameter values; in this sense, the approach is parameter-free. We demonstrate the method by discriminating between different models from mathematical biology.
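A toy version of invariant-based rejection shows the mechanism (this is an illustration, not the paper's differential-algebra pipeline): two candidate input-output equations, x' + k·x = 0 and x' + k = 0, are tested against data by least squares in the unknown parameter, and a large residual rejects a model without ever identifying the true k.

```python
import math

def residual(col, b):
    """Least-squares residual of min over theta of ||col*theta - b||
    for a single-column design matrix."""
    theta = sum(a * y for a, y in zip(col, b)) / sum(a * a for a in col)
    return math.sqrt(sum((a * theta - y) ** 2 for a, y in zip(col, b)))

# Sample x(t) = 2*exp(-0.5*t) and estimate x' by central differences.
dt = 0.01
ts = [i * dt for i in range(1, 400)]
xs = [2 * math.exp(-0.5 * t) for t in ts]
dxs = [(xs[i + 1] - xs[i - 1]) / (2 * dt) for i in range(1, len(xs) - 1)]
xm = xs[1:-1]

# Invariant of the true model, x' + k*x = 0: residual should be near zero.
r_decay = residual(xm, [-d for d in dxs])
# Invariant of the wrong model, x' + k = 0: no constant k fits, so a
# large residual rejects it.
r_const = residual([1.0] * len(dxs), [-d for d in dxs])
```

The test never needs the value k = 0.5; it only asks whether some parameter value makes the invariant hold, which is the parameter-free character of the method.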


2019 · Vol 23 (9) · pp. 3603-3629
Author(s): Gabriel C. Rau, Vincent E. A. Post, Margaret Shanafield, Torsten Krekeler, Eddie W. Banks, ...

Hydraulic head and gradient measurements underpin practically all investigations in hydrogeology. There is sufficient information in the literature to suggest that head measurement errors can impede the reliable detection of flow directions and significantly increase the uncertainty of groundwater flow rate calculations. Yet educational textbooks contain limited content regarding measurement techniques, and studies rarely report on measurement errors. The objective of our study is to review currently accepted standard operating procedures in hydrological research and to determine the smallest head gradients that can be resolved. To this aim, we first systematically investigate the systematic and random measurement errors involved in collecting time-series information on hydraulic head at a given location: (1) geospatial position, (2) point of head, (3) depth to water, and (4) water level time series. Then, by propagating the random errors, we find that, with current standard practice, horizontal head gradients <10^-4 are resolvable only at distances ⪆170 m. Further, it takes extraordinary effort to measure hydraulic head gradients <10^-3 over distances <10 m. In reality, accuracy will be worse than our theoretical estimates because of the many possible systematic errors. Regional flow on a scale of kilometres or more can be inferred with current best-practice methods, but processes such as vertical flow within an aquifer cannot be determined until more accurate and precise measurement methods are developed. Finally, we offer a concise set of recommendations for water level, hydraulic head, and gradient time-series measurements. We anticipate that our work will contribute to improving the quality of head time-series data in the hydrogeological sciences and provide a starting point for the development of universal measurement protocols for water level data collection.
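The error-propagation argument is simple to reproduce in outline. The sketch below assumes a combined random error of about 0.012 m per head measurement; that figure is an illustrative assumption chosen to be consistent with the distances quoted above, not a value taken from the paper's error budget.

```python
import math

def gradient_error(sigma1, sigma2, distance):
    """Standard error of the head gradient (h1 - h2)/L between two wells,
    propagating independent random errors in the two head measurements."""
    return math.sqrt(sigma1 ** 2 + sigma2 ** 2) / distance

def min_resolvable_distance(sigma_h, gradient):
    """Distance beyond which a given gradient exceeds its own standard
    error, assuming equal error sigma_h at both measurement points."""
    return math.sqrt(2) * sigma_h / gradient

# With ~1.2 cm random error per head measurement (assumed), a 1e-4
# gradient only rises above its standard error once the wells are
# roughly 170 m apart.
d = min_resolvable_distance(0.012, 1e-4)
err = gradient_error(0.012, 0.012, 170.0)
```

The same arithmetic shows why small gradients over short distances are so demanding: at 10 m separation, resolving a 10^-3 gradient requires per-head errors well under a centimetre, before any systematic errors are considered.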

