View on a mechanistic model of Chlorella vulgaris in incubated shake flasks

Author(s):  
Fabian Kuhfuß ◽  
Veronika Gassenmeier ◽  
Sahar Deppe ◽  
George Ifrim ◽  
Tanja Hernández Rodríguez ◽  
...  

Abstract Kinetic growth models are a useful tool for a better understanding of microalgal cultivation and for optimizing cultivation conditions. The evaluation of such models requires experimental data that are laborious to generate in bioreactor settings. The experimental shake flask setting used in this study allows 12 experiments to be run at the same time, with 6 individual light intensities and light durations. In this way, 54 biomass data sets were generated for the cultivation of the microalga Chlorella vulgaris. To identify the model parameters, a stepwise parameter estimation procedure was applied. First, light-associated model parameters were estimated using additional measurements of local light intensities at different heights within the medium at different biomass concentrations. Next, substrate-related model parameters were estimated using experiments for which biomass and nitrate data were available. Afterwards, growth-related model parameters were estimated by applying an extensive cross-validation procedure.
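As an illustration of the stepwise procedure described above, the following sketch first fits light-attenuation parameters to local light-intensity measurements and then, with those held fixed, fits growth-related parameters to biomass time courses. The model forms (Lambert-Beer attenuation, a Monod-type light term), parameter names, and data are illustrative assumptions, not the paper's actual kinetic model.

```python
# Minimal sketch of a stepwise estimation workflow (illustrative model forms
# and synthetic data; the paper's kinetic model and parameter names may differ).
import numpy as np
from scipy.optimize import curve_fit
from scipy.integrate import solve_ivp

# Step 1: light-associated parameters from local light-intensity measurements,
# assuming Lambert-Beer-type attenuation I(z) = I0 * exp(-(k_bg + k_x * X) * z).
def local_light(zX, I0, k_bg, k_x):
    z, X = zX
    return I0 * np.exp(-(k_bg + k_x * X) * z)

z = np.tile(np.linspace(0.005, 0.03, 6), 3)        # measurement depths [m]
X = np.repeat([0.2, 0.8, 1.6], 6)                  # biomass concentrations [g/L]
rng = np.random.default_rng(0)
I_meas = local_light((z, X), 120.0, 5.0, 80.0) * rng.normal(1, 0.03, z.size)
(I0_hat, k_bg_hat, k_x_hat), _ = curve_fit(local_light, (z, X), I_meas, p0=[100, 1, 50])

# Steps 2/3: growth-related parameters from biomass time courses, with the
# light parameters held fixed at the values estimated above.
def biomass_ode(t, y, mu_max, K_I, I0, depth):
    X = y[0]
    zz = np.linspace(0, depth, 50)
    I_local = I0 * np.exp(-(k_bg_hat + k_x_hat * X) * zz)
    mu = mu_max * np.mean(I_local / (K_I + I_local))   # depth-averaged light limitation
    return [mu * X]

def simulate(t, mu_max, K_I):
    sol = solve_ivp(biomass_ode, (t[0], t[-1]), [0.1], t_eval=t,
                    args=(mu_max, K_I, 100.0, 0.03))
    return sol.y[0]

t_obs = np.linspace(0, 96, 9)                      # cultivation time [h]
X_obs = simulate(t_obs, 0.06, 30.0) * rng.normal(1, 0.05, t_obs.size)
(mu_max_hat, K_I_hat), _ = curve_fit(simulate, t_obs, X_obs, p0=[0.05, 20.0])
print(mu_max_hat, K_I_hat)
```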

2013 ◽  
Vol 427-429 ◽  
pp. 1506-1509
Author(s):  
Yong Yan Yu

A robust estimation procedure is necessary to estimate the true model parameters in computer vision. Evaluating multiple-model estimation in the presence of outliers is a fundamentally different task from the single-model problem. Although many diverse multi-model estimation algorithms exist, it is difficult to pick an effective and suitable approach. We therefore present a novel quantitative evaluation of multi-model estimation algorithms: efficiency may be evaluated either by examining the asymptotic efficiency of the algorithms or by running them on a series of data sets of increasing size. To this end, we create a dedicated testing dataset and introduce a performance metric, Strongest-Intersection, together with a model-aware correctness criterion. Finally, we show the validity of the evaluation strategy with a line-fitting experiment.
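A rough, self-contained stand-in for the kind of evaluation the abstract describes: synthetic multi-line data with outliers, a simple sequential RANSAC estimator, and runs over data sets of increasing size. The authors' testing dataset, the Strongest-Intersection metric, and their model-aware correctness criterion are not reproduced here; everything below is an illustrative assumption.

```python
# Illustrative evaluation harness for a multi-model (multi-line) estimator on
# synthetic data with outliers; a stand-in for the paper's dataset and metric.
import numpy as np

def fit_line(pts):
    # total least squares line through a point set: returns (n, d) with n . p = d
    c = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - c)
    n = vt[-1]
    return n, n @ c

def sequential_ransac(pts, n_models=2, iters=500, tol=0.05, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    models, remaining = [], pts.copy()
    for _ in range(n_models):
        best_inliers = None
        for _ in range(iters):
            sample = remaining[rng.choice(len(remaining), 2, replace=False)]
            n, d = fit_line(sample)
            inliers = np.abs(remaining @ n - d) < tol
            if best_inliers is None or inliers.sum() > best_inliers.sum():
                best_inliers = inliers
        models.append(fit_line(remaining[best_inliers]))   # refit on consensus set
        remaining = remaining[~best_inliers]                # remove explained points
    return models

rng = np.random.default_rng(1)
for n_pts in (100, 400, 1600):                     # data sets of increasing size
    x = rng.uniform(0, 1, n_pts // 2)
    line1 = np.c_[x, 0.5 * x + 0.1] + rng.normal(0, 0.01, (n_pts // 2, 2))
    line2 = np.c_[x, -0.3 * x + 0.8] + rng.normal(0, 0.01, (n_pts // 2, 2))
    outliers = rng.uniform(0, 1, (n_pts // 4, 2))
    pts = np.vstack([line1, line2, outliers])
    print(n_pts, sequential_ransac(pts, rng=rng))
```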


Stats ◽  
2018 ◽  
Vol 2 (1) ◽  
pp. 15-31
Author(s):  
Arslan Nasir ◽  
Haitham Yousof ◽  
Farrukh Jamal ◽  
Mustafa Korkmaz

In this work, we introduce a new Burr XII power series class of distributions, which is obtained by compounding exponentiated Burr XII and power series distributions and has a strong physical motivation. The new distribution contains several important lifetime models. We derive explicit expressions for the ordinary and incomplete moments and generating functions. Maximum likelihood estimation of the model parameters is discussed and the estimation procedure is presented. We assess the performance of the maximum likelihood estimators in terms of biases, standard deviations, and mean squared errors by means of two simulation studies. The usefulness of the new model is illustrated by means of three real data sets. The newly proposed models provide consistently better fits than other competitive models for these data sets.
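A minimal sketch of the kind of bias/MSE simulation study mentioned above, using the plain Burr XII distribution shipped with SciPy as a stand-in for the compounded Burr XII power series class introduced in the paper.

```python
# Minimal sketch of a bias / MSE simulation study for maximum likelihood
# estimators, using SciPy's plain Burr XII distribution as a stand-in for the
# compounded Burr XII power series class introduced in the paper.
import numpy as np
from scipy.stats import burr12

true_c, true_d = 2.0, 3.0          # illustrative shape parameters
n, replications = 200, 500
rng = np.random.default_rng(0)

estimates = []
for _ in range(replications):
    sample = burr12.rvs(true_c, true_d, size=n, random_state=rng)
    c_hat, d_hat, _, _ = burr12.fit(sample, floc=0, fscale=1)   # MLE with loc/scale fixed
    estimates.append((c_hat, d_hat))

estimates = np.array(estimates)
bias = estimates.mean(axis=0) - np.array([true_c, true_d])
mse = ((estimates - np.array([true_c, true_d])) ** 2).mean(axis=0)
print("bias:", bias, "MSE:", mse)
```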


2016 ◽  
Vol 46 (1) ◽  
pp. 88-100 ◽  
Author(s):  
Kana Kamimura ◽  
Barry Gardiner ◽  
Sylvain Dupont ◽  
Dominique Guyon ◽  
Céline Meredieu

Maritime pine (Pinus pinaster Aiton) forests in the Aquitaine region, southwestern France, suffered catastrophic damage from storms Martin (1999) and Klaus (2009), and more damage is expected in the future due to forest structural change and climate change. Developing risk assessment methods is therefore one of the keys to finding forest management strategies that reduce future damage. In this paper, we evaluated two approaches for calculating the wind damage risk to individual trees, using damage data sets from two storm events. Airflow models were coupled either with a mechanistic model (GALES) or with a bias-reduced logistic regression model to discriminate between damaged and undamaged trees. The mechanistic approach successfully discriminated the trees for different storms, but only in locations with soil conditions similar to those where the model parameters were obtained in previous field experiments. The statistical approach successfully discriminated the trees only when applied to data similar to those used to build the models, and it did not work at an acceptable level for other data sets. One variable, decade of stand establishment, was significant in all statistical models, suggesting that site preparation and tree establishment could be a key factor related to wind damage in this region.
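A minimal sketch in the spirit of the statistical approach: a logistic regression discriminating damaged from undamaged trees. The feature names and data are made up for illustration, and plain logistic regression is used in place of the bias-reduced (Firth-type) fit mentioned in the abstract.

```python
# Minimal sketch of the statistical approach: logistic regression separating
# damaged from undamaged trees. Features and data are illustrative; the paper
# uses a bias-reduced (Firth-type) fit, replaced here by plain logistic regression.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 500
trees = pd.DataFrame({
    "height_m": rng.normal(18, 4, n),
    "dbh_cm": rng.normal(30, 8, n),
    "establishment_decade": rng.choice([1970, 1980, 1990], n),
    "max_gust_ms": rng.normal(35, 6, n),
})
# synthetic damage outcome loosely tied to tree slenderness and gust speed
slenderness = trees["height_m"] / (trees["dbh_cm"] / 100)
logit = 0.05 * (slenderness - 60) + 0.15 * (trees["max_gust_ms"] - 35)
trees["damaged"] = rng.random(n) < 1 / (1 + np.exp(-logit))

model = LogisticRegression(max_iter=1000)
scores = cross_val_score(model, trees.drop(columns="damaged"), trees["damaged"],
                         cv=5, scoring="roc_auc")
print("cross-validated AUC:", scores.mean())
```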


1999 ◽  
Vol 39 (10-11) ◽  
pp. 193-196
Author(s):  
J. Petersen ◽  
J. G. Petrie

The release of heavy metal species from deposits of solid waste materials originating from minerals processing operations poses a serious environmental risk should such species migrate beyond the boundaries of the deposit into the surrounding environment. Legislation increasingly places the liability for wastes with the operators of the process that generates them. The costs of long-term monitoring and clean-up following a potential critical leakage have to be factored into the overall project plan from the outset. Assessment of the potential for a particular waste material to generate a harmful leachate is therefore directly relevant for estimating the environmental risk associated with the planned disposal operation. A rigorous mechanistic model is proposed, which allows prediction of the time-dependent generation of a leachate from a solid mineral waste deposit. Model parameters are obtained from a suitably designed laboratory waste assessment methodology on a relatively small sample of the prospective waste material. The parameters are not specific to the laboratory environment in which they were obtained but are also valid for full-scale heap modelling. In this way the model, combined with the assessment methodology, becomes a powerful tool for meaningful assessment of the risks associated with solid waste disposal strategies.


Water ◽  
2021 ◽  
Vol 13 (4) ◽  
pp. 463
Author(s):  
Gopinathan R. Abhijith ◽  
Leonid Kadinski ◽  
Avi Ostfeld

The formation of bacterial regrowth and disinfection by-products is ubiquitous in chlorinated water distribution systems (WDSs) operated with organic loads. A generic, easy-to-use mechanistic model describing the fundamental processes governing the interrelationship between chlorine, total organic carbon (TOC), and bacteria, intended for analyzing spatiotemporal water quality variations in WDSs, was developed using EPANET-MSX. The representation of multispecies reactions was simplified to minimize the number of interdependent model parameters, and physicochemical/biological processes that cannot be determined experimentally were neglected. The model was applied to analyze the effects of source water characteristics and water residence time on controlling bacterial regrowth and trihalomethane (THM) formation in two well-tested systems under chlorinated and non-chlorinated conditions. The results established that a 100% increase in the free chlorine concentration and a 50% reduction in the TOC at the source produced a 5.87-log decrease in bacteriological activity at the expense of a 60% increase in THM formation. The sensitivity study showed the impact of the operating conditions and the network characteristics in determining parameter sensitivities to model outputs. The maximum specific growth rate constant for bulk-phase bacteria was found to be the most sensitive parameter for the predicted bacterial regrowth.
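A minimal sketch of the kind of coupled bulk-phase chlorine/TOC/bacteria kinetics that EPANET-MSX multispecies models express, written here as plain ODEs. The rate laws and constants below are hypothetical placeholders, not the calibrated model from the paper.

```python
# Minimal sketch of coupled bulk-phase chlorine / TOC / bacteria kinetics of the
# kind implemented in EPANET-MSX multispecies models. Rate laws and constants
# are hypothetical, not the calibrated model from the paper.
import numpy as np
from scipy.integrate import solve_ivp

k_cl_toc = 0.05   # chlorine consumption by TOC        [L/(mg*h)]
k_inact  = 2.0    # chlorine inactivation of bacteria  [L/(mg*h)]
mu_max   = 0.3    # max specific growth rate           [1/h]
K_s      = 0.5    # half-saturation for TOC            [mg/L]
Y        = 1e6    # cells produced per mg TOC consumed

def rhs(t, y):
    cl, toc, bact = y
    growth = mu_max * toc / (K_s + toc) * bact
    return [
        -k_cl_toc * cl * toc,                 # chlorine decay with organic load
        -growth / Y,                          # TOC consumed by bacterial growth
        growth - k_inact * cl * bact,         # regrowth minus chlorine inactivation
    ]

# initial chlorine 1.0 mg/L, TOC 2.5 mg/L, 1e3 cells/L; 72 h residence time
sol = solve_ivp(rhs, (0, 72), [1.0, 2.5, 1e3], t_eval=np.linspace(0, 72, 100))
print(sol.y[:, -1])
```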


Mathematics ◽  
2021 ◽  
Vol 9 (16) ◽  
pp. 1850
Author(s):  
Rashad A. R. Bantan ◽  
Farrukh Jamal ◽  
Christophe Chesneau ◽  
Mohammed Elgarhy

Unit distributions are commonly used in probability and statistics to describe useful quantities with values between 0 and 1, such as proportions, probabilities, and percentages. Some unit distributions are defined in a natural analytical manner, and the others are derived through the transformation of an existing distribution defined in a greater domain. In this article, we introduce the unit gamma/Gompertz distribution, founded on the inverse-exponential scheme and the gamma/Gompertz distribution. The gamma/Gompertz distribution is known to be a very flexible three-parameter lifetime distribution, and we aim to transpose this flexibility to the unit interval. First, we check this aspect with the analytical behavior of the primary functions. It is shown that the probability density function can be increasing, decreasing, “increasing-decreasing” and “decreasing-increasing”, with pliant asymmetric properties. On the other hand, the hazard rate function has monotonically increasing, decreasing, or constant shapes. We complete the theoretical part with some propositions on stochastic ordering, moments, quantiles, and the reliability coefficient. Practically, to estimate the model parameters from unit data, the maximum likelihood method is used. We present some simulation results to evaluate this method. Two applications using real data sets, one on trade shares and the other on flood levels, demonstrate the importance of the new model when compared to other unit models.
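A sketch of the construction described above, assuming the standard three-parameter gamma/Gompertz density and the inverse-exponential transform Y = exp(-X); the paper's exact parametrization may differ. The sample is drawn by inverse-transform sampling and the unit-distribution parameters are then recovered by numerical maximum likelihood.

```python
# Sketch of the unit gamma/Gompertz construction via Y = exp(-X), where X follows
# a gamma/Gompertz law, plus a numerical MLE on unit data. The three-parameter
# pdf below is the standard gamma/Gompertz form; the paper's parametrization may differ.
import numpy as np
from scipy.optimize import minimize

def gg_logpdf(x, b, s, beta):
    # gamma/Gompertz: f(x) = b s e^{bx} beta^s / (beta - 1 + e^{bx})^{s+1}, x >= 0
    return np.log(b) + np.log(s) + b * x + s * np.log(beta) \
           - (s + 1) * np.log(beta - 1 + np.exp(b * x))

def unit_gg_neg_loglik(params, y):
    b, s, beta = np.exp(params)               # optimise on the log scale
    x = -np.log(y)
    return -np.sum(gg_logpdf(x, b, s, beta) - np.log(y))   # change of variables: |dx/dy| = 1/y

# inverse-transform sampling of X (using F(x) = 1 - [beta/(beta-1+e^{bx})]^s), then Y = exp(-X)
rng = np.random.default_rng(0)
b, s, beta = 0.5, 2.0, 3.0
u = rng.random(1000)
x = np.log(beta * (1 - u) ** (-1 / s) - beta + 1) / b
y = np.exp(-x)

res = minimize(unit_gg_neg_loglik, x0=np.log([1.0, 1.0, 1.0]), args=(y,),
               method="Nelder-Mead")
print(np.exp(res.x))   # estimated (b, s, beta)
```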


2021 ◽  
Vol 11 (15) ◽  
pp. 6998
Author(s):  
Qiuying Li ◽  
Hoang Pham

Many NHPP software reliability growth models (SRGMs) have been proposed to assess software reliability during the past 40 years, but most of them model the fault detection process (FDP) in one of two ways. The first is to ignore the fault correction process (FCP), i.e., to assume that faults are removed instantaneously once the failure they cause is detected. In real software development, however, this is not realistic, because fault removal takes time: the faults causing failures cannot always be removed at once, and detected failures become more and more difficult to correct as testing progresses. The second way is to model the fault correction process through the time delay between fault detection and fault correction, where the delay has been assumed to be constant, a function of time, or a random variable following some distribution. In this paper, some useful approaches to modeling dual fault detection and correction processes are discussed. Dependencies between the fault counts of the two processes are considered instead of a fault-correction time delay. A model is proposed that integrates the fault-detection and fault-correction processes and incorporates a fault introduction rate and testing coverage rate into the software reliability evaluation. The model parameters are estimated using the Least Squares Estimation (LSE) method. The descriptive and predictive performance of the proposed model and of other existing NHPP SRGMs is investigated using three real data sets and four criteria. The results show that the new model can be significantly more effective in yielding better reliability estimation and prediction.
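A minimal sketch of least-squares estimation for an NHPP mean value function, using the classical Goel-Okumoto form m(t) = a(1 - e^(-bt)) and made-up cumulative fault counts as a stand-in for the dual detection/correction model proposed in the paper.

```python
# Least-squares fit of a classic NHPP mean value function (Goel-Okumoto,
# m(t) = a * (1 - exp(-b t))) to cumulative fault counts. A stand-in for the
# paper's dual fault-detection/correction model; the data below are illustrative.
import numpy as np
from scipy.optimize import curve_fit

def mean_value(t, a, b):
    return a * (1.0 - np.exp(-b * t))

t_weeks = np.arange(1, 21)
cum_faults = np.array([ 5, 11, 16, 22, 27, 31, 35, 38, 41, 44,
                       46, 48, 50, 51, 53, 54, 55, 56, 56, 57])

(a_hat, b_hat), _ = curve_fit(mean_value, t_weeks, cum_faults, p0=[60, 0.1])
sse = np.sum((cum_faults - mean_value(t_weeks, a_hat, b_hat)) ** 2)
print(f"a = {a_hat:.1f} expected total faults, b = {b_hat:.3f}/week, SSE = {sse:.1f}")
```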


2020 ◽  
Vol 70 (1) ◽  
pp. 145-161 ◽  
Author(s):  
Marnus Stoltz ◽  
Boris Baeumer ◽  
Remco Bouckaert ◽  
Colin Fox ◽  
Gordon Hiscott ◽  
...  

Abstract We describe a new and computationally efficient Bayesian methodology for inferring species trees and demographics from unlinked binary markers. Likelihood calculations are carried out using diffusion models of allele frequency dynamics combined with novel numerical algorithms. The diffusion approach allows for analysis of data sets containing hundreds or thousands of individuals. The method, which we call Snapper, has been implemented as part of the BEAST2 package. We conducted simulation experiments to assess numerical error, computational requirements, and accuracy in recovering known model parameters. A reanalysis of soybean SNP data demonstrates that the models implemented in Snapp and Snapper can be difficult to distinguish in practice, a characteristic which we tested with further simulations. We demonstrate the scale of analysis possible using a SNP data set sampled from 399 freshwater turtles in 41 populations. [Bayesian inference; diffusion models; multi-species coalescent; SNP data; species trees; spectral methods.]


2018 ◽  
Vol 612 ◽  
pp. A70 ◽  
Author(s):  
J. Olivares ◽  
E. Moraux ◽  
L. M. Sarro ◽  
H. Bouy ◽  
A. Berihuete ◽  
...  

Context. Membership analyses of the DANCe and Tycho + DANCe data sets provide the largest and least contaminated sample of Pleiades candidate members to date. Aims. We aim to reassess the different proposals for the number surface density of the Pleiades in the light of the new and most complete list of candidate members, and to infer the parameters of the most adequate model. Methods. We compute the Bayesian evidence and Bayes factors for variations of the classical radial models. These include elliptical symmetry and luminosity segregation. As a by-product of the model comparison, we obtain posterior distributions for each set of model parameters. Results. We find that the model comparison results depend on the spatial extent of the region used for the analysis. For a circle of 11.5 parsecs around the cluster centre (the most homogeneous and complete region), we find no compelling reason to abandon King’s model, although the Generalised King model introduced here has slightly better fitting properties. Furthermore, we find strong evidence against radially symmetric models when compared to the elliptic extensions. Finally, we find that including mass segregation in the form of luminosity segregation in the J band is strongly supported in all our models. Conclusions. We have put the question of the projected spatial distribution of the Pleiades cluster on a solid probabilistic framework, and inferred its properties using the most exhaustive and least contaminated list of Pleiades candidate members available to date. Our results suggest, however, that this sample may still lack about 20% of the expected number of cluster members. Therefore, this study should be revised when the completeness and homogeneity of the data can be extended beyond the 11.5 parsec limit. Such a study will allow a more precise determination of the Pleiades spatial distribution, its tidal radius, ellipticity, number of objects and total mass.
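A minimal sketch of fitting one of the radial models under comparison, the classical King (1962) surface-density profile, to projected member positions by maximum likelihood on synthetic data. The full Bayesian evidence computation used in the paper is not reproduced; only the maximized log-likelihood is reported.

```python
# Minimal sketch of fitting a radial King (1962) surface-density profile to
# projected member positions by maximum likelihood. The paper compares full
# Bayesian evidences of several such models; here only the maximized
# log-likelihood of the classical King profile is computed, on synthetic data.
import numpy as np
from scipy.optimize import minimize
from scipy.integrate import quad

def king_profile(r, r_c, r_t):
    # unnormalized King surface density, zero beyond the tidal radius r_t
    term = 1.0 / np.sqrt(1 + (r / r_c) ** 2) - 1.0 / np.sqrt(1 + (r_t / r_c) ** 2)
    return np.where(r < r_t, np.maximum(term, 0) ** 2, 0.0)

def neg_loglik(params, r_obs):
    r_c, r_t = np.exp(params)
    norm, _ = quad(lambda r: 2 * np.pi * r * king_profile(r, r_c, r_t), 0, r_t)
    dens = king_profile(r_obs, r_c, r_t) / norm        # normalized surface density
    if np.any(dens <= 0):
        return np.inf
    return -np.sum(np.log(dens))

# synthetic projected radii (parsecs) drawn from a rough cluster-like profile
rng = np.random.default_rng(0)
r_obs = np.abs(rng.normal(0, 2.0, 800))
r_obs = r_obs[r_obs < 11.5]                            # mimic the 11.5 pc analysis region

res = minimize(neg_loglik, x0=np.log([2.0, 15.0]), args=(r_obs,), method="Nelder-Mead")
print("r_c, r_t =", np.exp(res.x), "max log-likelihood =", -res.fun)
```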


2007 ◽  
Vol 97 (3) ◽  
pp. 2516-2524 ◽  
Author(s):  
Anne C. Smith ◽  
Sylvia Wirth ◽  
Wendy A. Suzuki ◽  
Emery N. Brown

Accurate characterizations of behavior during learning experiments are essential for understanding the neural bases of learning. Whereas learning experiments often give subjects multiple tasks to learn simultaneously, most analyze subject performance separately on each individual task. This analysis strategy ignores the true interleaved presentation order of the tasks and cannot distinguish learning behavior from response preferences that may represent a subject's biases or strategies. We present a Bayesian analysis of a state-space model for characterizing simultaneous learning of multiple tasks and for assessing behavioral biases in learning experiments with interleaved task presentations. Under the Bayesian analysis the posterior probability densities of the model parameters and the learning state are computed using Markov chain Monte Carlo methods. Measures of learning, including the learning curve, the ideal observer curve, and the learning trial translate directly from our previous likelihood-based state-space model analyses. We compare the Bayesian and current likelihood-based approaches in the analysis of a simulated conditioned T-maze task and of an actual object–place association task. Modeling the interleaved learning feature of the experiments along with the animal's response sequences allows us to disambiguate actual learning from response biases. The implementation of the Bayesian analysis using the WinBUGS software provides an efficient way to test different models without developing a new algorithm for each model. The new state-space model and the Bayesian estimation procedure suggest an improved, computationally efficient approach for accurately characterizing learning in behavioral experiments.
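A minimal sketch of the state-space idea, assuming a Gaussian random-walk learning state observed through Bernoulli correct/incorrect responses via a logistic link. Instead of the WinBUGS MCMC used in the paper, a simple grid-based forward filter illustrates how the learning curve can be tracked.

```python
# Minimal sketch of the state-space learning model idea: a Gaussian random-walk
# cognitive state x_t drives Bernoulli correct/incorrect responses through a
# logistic link. The paper estimates states and parameters with MCMC (WinBUGS);
# here a simple grid-based forward filter illustrates the model.
import numpy as np

def forward_filter(responses, sigma=0.2, grid=np.linspace(-5, 5, 401)):
    # p(x_t | responses_1..t) on a grid, for a fixed random-walk std sigma
    prior = np.exp(-grid ** 2 / 2)                        # vague initial state
    prior /= prior.sum()
    p_correct = 1 / (1 + np.exp(-grid))                   # logistic observation model
    step = np.exp(-(grid[:, None] - grid[None, :]) ** 2 / (2 * sigma ** 2))
    step /= step.sum(axis=0, keepdims=True)               # random-walk transition kernel
    estimates, post = [], prior
    for y in responses:
        pred = step @ post                                 # predict the next state
        like = p_correct if y == 1 else 1 - p_correct      # update with the trial outcome
        post = pred * like
        post /= post.sum()
        estimates.append(p_correct @ post)                 # E[P(correct) | data so far]
    return np.array(estimates)

# simulated trial outcomes from a subject that gradually learns the task
rng = np.random.default_rng(0)
true_x = np.cumsum(rng.normal(0.08, 0.15, 60)) - 1.0
responses = rng.random(60) < 1 / (1 + np.exp(-true_x))
print(forward_filter(responses.astype(int))[-5:])          # end of the learning curve
```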

