Large-scale kinetic metabolic models of Pseudomonas putida for a consistent design of metabolic engineering strategies

2019 ◽  
Author(s):  
Milenko Tokic ◽  
Ljubisa Miskovic ◽  
Vassily Hatzimanikatis

Abstract
A high tolerance of Pseudomonas putida to toxic compounds and its ability to grow on a wide variety of substrates make it a promising candidate for the industrial production of biofuels and biochemicals. Engineering this organism for improved performance and predicting its metabolic responses upon genetic perturbations require reliable descriptions of its metabolism in the form of stoichiometric and kinetic models. In this work, we developed large-scale kinetic models of P. putida to predict metabolic phenotypes and design metabolic engineering interventions for the production of biochemicals. The developed kinetic models contain 775 reactions and 245 metabolites. We started with a gap-filling and thermodynamic curation of iJN1411, the genome-scale model of P. putida KT2440. We then applied the redGEM and lumpGEM algorithms to systematically reduce the curated iJN1411 model, and we created three core stoichiometric models of different complexity that describe the central carbon metabolism of P. putida. Using the medium-complexity core model as a scaffold, we employed the ORACLE framework to generate populations of large-scale kinetic models for two studies. In the first study, the developed kinetic models successfully captured the experimentally observed metabolic responses to several single-gene knockouts of a wild-type strain of P. putida KT2440 growing on glucose. In the second study, we used the developed models to propose metabolic engineering interventions for improved robustness of this organism to the stress condition of increased ATP demand. Overall, we demonstrated the potential and predictive capabilities of the developed kinetic models, which allow for the rational design and optimization of recombinant P. putida strains for improved production of biofuels and biochemicals.
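The ORACLE step mentioned above builds a *population* of kinetic models anchored to one reference steady state by sampling unknown quantities (such as enzyme saturations) and back-calculating the kinetic parameters. A minimal sketch of that idea for a single Michaelis-Menten reaction follows; the reaction, reference values, and sampling range are hypothetical and not taken from the paper or the ORACLE code:

```python
import random

# ORACLE-style population sketch (illustrative): for v = Vmax*S/(Km + S),
# sample the unknown degree of saturation sigma = S/(Km + S) and
# back-calculate (Vmax, Km) so that every model reproduces the reference
# steady-state flux v_ref at concentration S_ref.

random.seed(1)
v_ref, S_ref = 2.0, 0.5   # hypothetical reference flux and concentration

population = []
for _ in range(1000):
    sigma = random.uniform(0.01, 0.99)   # sampled degree of saturation
    Km = S_ref * (1 - sigma) / sigma     # Km consistent with that saturation
    Vmax = v_ref / sigma                 # Vmax that reproduces v_ref exactly
    population.append((Vmax, Km))

# Each sampled model reproduces the reference steady state:
Vmax, Km = population[0]
print(abs(Vmax * S_ref / (Km + S_ref) - v_ref) < 1e-9)  # True
```

The population then spans the kinetics that are all consistent with the same observed physiology, which is what allows downstream analyses over model ensembles rather than a single parameterization.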

2018 ◽  
Author(s):  
Tuure Hameri ◽  
Georgios Fengos ◽  
Meric Ataman ◽  
Ljubisa Miskovic ◽  
Vassily Hatzimanikatis

Abstract
Large-scale kinetic models are used for designing, predicting, and understanding the metabolic responses of living cells. Kinetic models are particularly attractive for the biosynthesis of target molecules in cells, as they are typically better than other types of models at capturing the complex cellular biochemistry. Using simpler stoichiometric models as scaffolds, kinetic models are built around a steady-state flux profile and a metabolite concentration vector that are typically determined via optimization. However, as the underlying optimization problem is underdetermined, even after incorporating available experimental omics data, one cannot uniquely determine the operational configuration in terms of metabolic fluxes and metabolite concentrations. As a result, some reactions can operate in either the forward or reverse direction while still agreeing with the observed physiology. Here, we analyze how the underlying uncertainty in intracellular fluxes and concentrations affects the predictions of constructed kinetic models and their design in metabolic engineering and systems biology studies. To this end, we integrated the omics data of optimally grown Escherichia coli into a stoichiometric model and constructed populations of non-linear large-scale kinetic models of alternative steady-state solutions consistent with the physiology of the E. coli aerobic metabolism. We performed metabolic control analysis (MCA) on these models, highlighting that MCA-based metabolic engineering decisions are strongly affected by the selected steady state and appear to be more sensitive to concentration values than to flux values. To incorporate this into future studies, we propose a workflow for moving towards more reliable and robust predictions that are consistent with all alternative steady-state solutions. This workflow can be applied to all kinetic models to improve the consistency and accuracy of their predictions.
Additionally, we show that, irrespective of the alternative steady-state solution, increased activity of phosphofructokinase and decreased ATP maintenance requirements would improve cellular growth of optimally grown E. coli.
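The metabolic control analysis used above can be illustrated on a toy two-step pathway (a minimal sketch, not the authors' E. coli model): the flux control coefficients C_i = ∂ln J/∂ln k_i, estimated here by finite differences, must obey the summation theorem Σ C_i = 1.

```python
# Toy MCA illustration on an unbranched pathway S0 -> S1 -> P with
#   v1 = k1 * (S0 - S1 / Keq)   and   v2 = k2 * S1.
# Flux control coefficients are estimated by central finite differences.

def steady_state_flux(k1, k2, S0=10.0, Keq=2.0):
    # Solve v1 = v2 for S1, then return the pathway flux J = v2.
    S1 = k1 * S0 / (k2 + k1 / Keq)
    return k2 * S1

def flux_control_coefficient(i, k1, k2, rel=1e-6):
    # Central-difference estimate of dln(J)/dln(k_i).
    base = steady_state_flux(k1, k2)
    if i == 1:
        up = steady_state_flux(k1 * (1 + rel), k2)
        dn = steady_state_flux(k1 * (1 - rel), k2)
    else:
        up = steady_state_flux(k1, k2 * (1 + rel))
        dn = steady_state_flux(k1, k2 * (1 - rel))
    return (up - dn) / (2 * rel * base)

C1 = flux_control_coefficient(1, k1=1.0, k2=0.5)
C2 = flux_control_coefficient(2, k1=1.0, k2=0.5)
print(C1, C2, C1 + C2)  # the two coefficients sum to ~1
```

In real model populations, such coefficients are computed for every enzyme and every sampled steady state; the abstract's point is that their values, and hence the engineering decisions drawn from them, depend on which alternative steady state was selected.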


2019 ◽  
Author(s):  
Saratram Gopalakrishnan ◽  
Satyakam Dash ◽  
Costas Maranas

Abstract
Kinetic models predict metabolic flows by directly linking metabolite concentrations and enzyme levels to reaction fluxes. Robust parameterization of organism-level kinetic models that faithfully reproduce the effect of different genetic or environmental perturbations remains an open challenge due to the intractability of existing algorithms. This paper introduces K-FIT, an accelerated kinetic parameterization workflow that leverages a novel decomposition approach to identify steady-state fluxes in response to genetic perturbations, followed by a gradient-based update of kinetic parameters until predictions simultaneously agree with the fluxomic data in all perturbed metabolic networks. The applicability of K-FIT to large-scale models is demonstrated by parameterizing an expanded kinetic model for E. coli (307 reactions and 258 metabolites) using fluxomic data from six mutants. The thousand-fold speed-up afforded by K-FIT over meta-heuristic approaches is transformational, enabling follow-up robustness-of-inference analyses and optimal design of experiments to inform metabolic engineering strategies.
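The fit-to-fluxomics loop described above, alternating a steady-state solve with a gradient step on the kinetic parameters, can be sketched on a toy two-step pathway. This is only an illustration of the general idea, not the published K-FIT algorithm; the model, the "measured" data, and the step size are all hypothetical:

```python
# Sketch of a kinetic-parameter fit against fluxomic data from perturbed
# networks: (1) solve each perturbed model to steady state, (2) compare
# predicted and "measured" fluxes, (3) take a gradient step on the parameter.

def steady_state_flux(k1, k2, S0=10.0, Keq=2.0):
    # Two-step pathway S0 -> S1 -> P with v1 = k1*(S0 - S1/Keq), v2 = k2*S1.
    return k2 * k1 * S0 / (k2 + k1 / Keq)

# Hypothetical fluxomic data: wild type (k2 = 0.5) and a knockdown mutant
# (k2 = 0.25), generated from a "true" k1 = 1.0 that the fit should recover.
perturbations = [0.5, 0.25]
measured = [steady_state_flux(1.0, k2) for k2 in perturbations]

def loss(k1):
    # Sum-of-squares mismatch over all perturbed networks simultaneously.
    return sum((steady_state_flux(k1, k2) - y) ** 2
               for k2, y in zip(perturbations, measured))

k1, lr, h = 3.0, 0.05, 1e-6   # poor initial guess, step size, FD step
for _ in range(500):
    grad = (loss(k1 + h) - loss(k1 - h)) / (2 * h)
    k1 -= lr * grad
print(round(k1, 3))  # recovers ~1.0
```

K-FIT's actual contribution is making this kind of iteration tractable at the scale of hundreds of reactions, where each "steady-state solve" is itself a nontrivial computation.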


2019 ◽  
Author(s):  
Tuure Hameri ◽  
Georgios Fengos ◽  
Vassily Hatzimanikatis

Abstract
Significant efforts have been made in building large-scale kinetic models of cellular metabolism in the past two decades. However, most kinetic models published to date remain focused on central carbon pathways or are built around ad hoc reduced models without clear justification of their derivation and usage. Systematic algorithms exist for reducing genome-scale metabolic reconstructions to build thermodynamically feasible and consistently reduced stoichiometric models. However, it has not previously been studied how network complexity affects the Metabolic Sensitivity Coefficients (MSCs) of large-scale kinetic models built around consistently reduced models. We systematically reduced the iJO1366 Escherichia coli genome-scale metabolic reconstruction (GEM) to build three stoichiometric models of variable size. Since the reduced models are expansions around the core subsystems for which the reduction was performed, the models are modular. We propose a method for scaling up the flux profile and the concentration vector reference steady-states from the smallest model to the larger ones, whilst preserving maximum equivalency. Populations of non-linear kinetic models, preserving similarity in kinetic parameters, were built around the reference steady-states and their MSCs were computed. The analysis of the populations of MSCs for the reduced models shows that metabolic engineering strategies - independent of network complexity - can be designed using our proposed workflow. These findings suggest that we can successfully construct reduced kinetic models from a GEM without losing information relevant to the scope of the study. Our proposed workflow can serve as an approach for testing the suitability of a model for answering certain study-specific questions.
Author Summary
Kinetic models of metabolism are very useful tools for metabolic engineering. However, they are generated ad hoc because, to our knowledge, there exists no standardized procedure for constructing kinetic models of metabolism. We sought to systematically investigate the effect of model complexity and size on sensitivity characteristics. Hence, we used the redGEM and lumpGEM algorithms to build the backbone of three consistently and modularly reduced stoichiometric models from the iJO1366 genome-scale model for aerobically grown E. coli. These three models were of increasing complexity in terms of network topology and served as the basis for building populations of kinetic models. We proposed for the first time a way of scaling up steady-states of the metabolic fluxes and the metabolite concentrations from one kinetic model to another, and developed a workflow for fixing kinetic parameters between the models in order to preserve equivalency. We performed metabolic control analysis (MCA) around the populations of kinetic models and used their MCA control coefficients as measurable outputs to compare the three models. We demonstrated that we can systematically reduce genome-scale models to construct kinetic models of different complexity levels for a phenotype that, independent of network complexity, lead to mostly consistent MCA-based metabolic engineering conclusions.


2016 ◽  
Vol 35 ◽  
pp. 148-159 ◽  
Author(s):  
Stefano Andreozzi ◽  
Anirikh Chakrabarti ◽  
Keng Cher Soh ◽  
Anthony Burgard ◽  
Tae Hoon Yang ◽  
...  

2013 ◽  
Vol 14 (2) ◽  
Author(s):  
Noor Fachrizal

Biomass such as agricultural and urban waste has enormous potential as an energy resource rather than being merely an environmental problem. Organic waste can be converted into energy in the form of liquid fuel, solids, and syngas using pyrolysis. The pyrolysis process yields a higher liquid fraction when it can be driven into a fast or flash regime, which can be achieved with microwave heating. This research starts by developing a laboratory apparatus for microwave-assisted pyrolysis of biomass and conducting preliminary experiments to demonstrate that this method can drive the process properly and safely. A commercial oven has been modified into the laboratory apparatus and works safely; initial experiments have been carried out, the process quickly yields bio-oil and charcoal, and several parameters have been obtained. Further experiments are still needed to determine the parameters in more detail. The results may be used to design a small-scale continuous production system, which can then be developed into a large-scale model applicable for commercial use.


Energies ◽  
2021 ◽  
Vol 14 (10) ◽  
pp. 2760
Author(s):  
Ruiye Li ◽  
Peng Cheng ◽  
Hai Lan ◽  
Weili Li ◽  
David Gerada ◽  
...  

Within large turboalternators, excessive local temperatures and spatially distributed temperature differences can accelerate the deterioration of electrical insulation as well as lead to deformation of components, which may cause major machine malfunctions. In order to homogenise the stator axial temperature distribution whilst reducing the maximum stator temperature, this paper presents a novel non-uniform radial ventilation duct design methodology. To reduce the huge computational costs resulting from the large-scale model, the stator is decomposed into several single ventilation duct subsystems (SVDSs) along the axial direction, with each SVDS connected in series through the air gap flow rate. The electromagnetic and thermal performances within each SVDS are calculated by the finite element method (FEM) and computational fluid dynamics (CFD), respectively. To improve the optimisation efficiency, a radial basis function neural network (RBFNN) model is employed to approximate the finite element analysis, while a novel isometric sampling method (ISM) is designed to trade off the cost and accuracy of the process. It is found that the proposed methodology can provide optimal SVDS design schemes with uniform axial temperature distribution, and the required computation cost is markedly reduced. Finally, results based on a 15 MW turboalternator show that the peak temperature can be reduced by 7.3 °C (6.4%). The proposed methodology can be applied to the coupled electromagnetic-thermal design and optimisation of other electrical machines with long axial dimensions.
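The surrogate-modelling step described above can be illustrated with a minimal sketch: replace an expensive simulation with a Gaussian radial-basis-function interpolant fitted to a handful of samples, then query the cheap surrogate instead. The test function, sample layout, and kernel width below are illustrative assumptions, not the paper's RBFNN or ISM:

```python
import numpy as np

# RBF surrogate sketch: fit a Gaussian RBF interpolant to samples of an
# "expensive" simulation, then evaluate the surrogate at unsampled points.

def expensive_simulation(x):
    # Stand-in for an FEM/CFD evaluation.
    return np.sin(x) + 0.1 * x

centers = np.linspace(0.0, 6.0, 13)   # design-of-experiments sample points
y = expensive_simulation(centers)
sigma = 0.5                           # kernel width (chosen ~ sample spacing)

def phi(x, c):
    # Gaussian radial basis function.
    return np.exp(-((x - c) ** 2) / (2 * sigma ** 2))

# Interpolation condition: solve Phi @ w = y for the RBF weights.
Phi = phi(centers[:, None], centers[None, :])
w = np.linalg.solve(Phi, y)

def surrogate(x):
    return phi(x, centers) @ w

# The surrogate tracks the expensive function between sample points.
err = abs(surrogate(2.3) - expensive_simulation(2.3))
print(float(err))
```

In the paper's setting each "sample" is a full FEM/CFD run of one SVDS, so the payoff of the surrogate is that the optimiser only pays that cost at the design-of-experiments points.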


Processes ◽  
2021 ◽  
Vol 9 (7) ◽  
pp. 1257
Author(s):  
Xiaoyong Gao ◽  
Yue Zhao ◽  
Yuhong Wang ◽  
Xin Zuo ◽  
Tao Chen

In this paper, a new Lagrange relaxation based decomposition algorithm for integrated offshore oil production planning optimization is presented. In our previous study (Gao et al., Computers and Chemical Engineering, 2020, 133, 106674), a multiperiod mixed-integer nonlinear programming (MINLP) model considering both well operation and flow assurance simultaneously was proposed. However, due to the large-scale nature of the problem, i.e., many oil wells and a long planning time cycle, it is difficult to obtain a satisfactory solution in a reasonable time. As an effective method, Lagrange relaxation based decomposition algorithms can provide tighter bounds and thus result in a smaller duality gap. Specifically, Lagrange multipliers are introduced to relax the coupling constraints of multi-batch units, yielding several moderate-scale subproblems, and the dual problem is constructed for iteration. As a result, the original integrated large-scale model is decomposed into several single-batch subproblems that are solved simultaneously by commercial solvers. Computational results show that the proposed method can reduce the solving time by up to 43% or even more, while the planning results remain close to those obtained from the original model. Moreover, the larger the problem size, the greater the advantage of the proposed LR algorithm over the original model.
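The decomposition scheme can be sketched on a toy problem (illustrative quadratic costs, not the paper's MINLP): relax the coupling constraint with a multiplier, solve the resulting independent subproblems in closed form, and update the multiplier by gradient ascent on the dual:

```python
# Lagrange relaxation sketch: allocate a shared production target D across
# two "wells" with quadratic operating costs a_i * x_i^2, relaxing the
# coupling constraint x1 + x2 >= D with a multiplier lam.

def subproblem(a, lam, cap=10.0):
    # Relaxed per-unit subproblem: min a*x^2 - lam*x over [0, cap],
    # which has the closed-form minimiser x = lam / (2a), clipped to bounds.
    return min(max(lam / (2 * a), 0.0), cap)

a = [1.0, 2.0]   # hypothetical per-well cost coefficients
D = 3.0          # coupled production target
lam = 0.0
for _ in range(100):
    x = [subproblem(ai, lam) for ai in a]       # independent subproblems
    lam = max(0.0, lam + 0.5 * (D - sum(x)))    # dual (multiplier) update

print([round(v, 3) for v in x], round(lam, 3))
# converges to x1 = 2, x2 = 1 with marginal cost lam = 4
```

The key structural point mirrors the paper: once the coupling constraint is priced into the objective via the multiplier, the subproblems no longer interact and can be solved in parallel, here in closed form, there by commercial solvers.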


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Giuseppe Giacopelli ◽  
Domenico Tegolo ◽  
Emiliano Spera ◽  
Michele Migliore

Abstract
The brain's structural connectivity plays a fundamental role in determining how neuron networks generate, process, and transfer information within and between brain regions. The underlying mechanisms are extremely difficult to study experimentally and, in many cases, large-scale model networks are of great help. However, the implementation of these models relies on experimental findings that are often sparse and limited. Their predictive power ultimately depends on how closely a model's connectivity represents the real system. Here we argue that the data-driven probabilistic rules widely used to build neuronal network models may not be appropriate to represent the dynamics of the corresponding biological system. To solve this problem, we propose a new mathematical framework able to use sparse and limited experimental data to quantitatively reproduce the structural connectivity of biological brain networks at the cellular level.
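A data-driven probabilistic rule of the kind the abstract questions can be sketched as follows; the exponential-decay form p(d) = p0·exp(-d/λ) and all parameter values are illustrative assumptions, not the paper's proposed framework:

```python
import math
import random

# Distance-dependent probabilistic connectivity rule (the conventional
# approach the abstract critiques): connect neuron i -> j with probability
# p(d) = p0 * exp(-d / lam), where d is the distance between the somata.

random.seed(0)
n, p0, lam = 200, 0.5, 100.0  # hypothetical cell count and rule parameters
pos = [(random.uniform(0, 500), random.uniform(0, 500)) for _ in range(n)]

edges = []
for i in range(n):
    for j in range(n):
        if i == j:
            continue
        d = math.dist(pos[i], pos[j])
        if random.random() < p0 * math.exp(-d / lam):
            edges.append((i, j))

print(len(edges))  # number of directed connections in the sampled network
```

Rules like this reproduce average pairwise statistics but, as the abstract argues, may miss higher-order structure of the real connectome, which is the gap the proposed framework aims to close.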

