A Cybernetic Approach to Modeling Lipid Metabolism in Mammalian Cells

Processes ◽  
2018 ◽  
Vol 6 (8) ◽  
pp. 126 ◽  
Author(s):  
Lina Aboulmouna ◽  
Shakti Gupta ◽  
Mano Maurya ◽  
Frank DeVilbiss ◽  
Shankar Subramaniam ◽  
...  

The goal-oriented control policies of cybernetic models have been used to predict metabolic phenomena such as the behavior of gene knockout strains, complex substrate uptake patterns, and dynamic metabolic flux distributions. Cybernetic theory builds on the principle that metabolic regulation is driven towards attaining goals that correspond to an organism’s survival or displaying a specific phenotype in response to a stimulus. Here, we have modeled prostaglandin (PG) metabolism in mouse bone marrow-derived macrophage (BMDM) cells stimulated by Kdo2-Lipid A (KLA) and adenosine triphosphate (ATP), using cybernetic control variables. Prostaglandins are a well-characterized set of inflammatory lipids derived from arachidonic acid. The transcriptomic and lipidomic data for prostaglandin biosynthesis and conversion were obtained from the LIPID MAPS database. The model parameters were estimated using a two-step hybrid optimization approach: a genetic algorithm was used to determine a population of near-optimal parameter values, and a generalized constrained nonlinear optimization employing a gradient search method was used to further refine them. We validated our model by predicting an independent data set: the prostaglandin response of KLA-primed, ATP-stimulated BMDM cells. We show that the cybernetic model captures the complex regulation of PG metabolism and provides a reliable description of PG formation.
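
As a rough illustration of the two-step hybrid estimation described above, the sketch below runs a population-based global search followed by constrained gradient refinement using SciPy. Differential evolution stands in for the genetic algorithm, and the saturating-response model, data, and bounds are hypothetical placeholders rather than the paper's PG network.

```python
# A minimal sketch of the two-step hybrid estimation, assuming a
# hypothetical saturating-response model in place of the paper's PG
# network. differential_evolution (an evolutionary global search)
# stands in for the genetic algorithm; SLSQP performs the constrained
# gradient-based refinement.
import numpy as np
from scipy.optimize import differential_evolution, minimize

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 50)
true_params = np.array([1.2, 0.3])
data = true_params[0] * (1 - np.exp(-true_params[1] * t)) \
       + rng.normal(0, 0.02, t.size)

def sse(params):
    vmax, k = params
    model = vmax * (1 - np.exp(-k * t))  # hypothetical response model
    return np.sum((model - data) ** 2)

bounds = [(0.01, 10.0), (0.01, 5.0)]

# Step 1: population-based global search for near-optimal parameters.
coarse = differential_evolution(sse, bounds, seed=1, tol=1e-8)

# Step 2: gradient-based constrained refinement from the global result.
refined = minimize(sse, coarse.x, method="SLSQP", bounds=bounds)
print(refined.x)  # should land close to true_params
```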

Author(s):  
Kyungwon Kang ◽  
Hesham A. Rakha

Drivers of merging vehicles decide when to merge by considering surrounding vehicles in adjacent lanes in their deliberation process. Conflicts between drivers of the subject vehicles (i.e., merging vehicles) in an auxiliary lane and lag vehicles in the adjacent lane are typical near freeway on-ramps. This paper models the decision-making process for merging maneuvers using a game-theoretical approach. The proposed model is based on the noncooperative decision making of two players, that is, the drivers of the subject and lag vehicles, without consideration of advanced communication technologies. In the decision-making process, the drivers of the subject vehicles elect to accept gaps, and the drivers of lag vehicles either yield or block the action of the subject vehicle. Corresponding payoff functions for the two players were formulated to describe their respective maneuvers. To estimate the model parameters, a bi-level optimization approach was used. The Next Generation Simulation (NGSIM) data set was used for model calibration and validation. The data set defined the moment the game started, and the interaction was modeled as a continuous sequence of games until a decision was made. The merging decision-making model was then validated with an independent data set. The validation results reveal that the proposed model provides considerable prediction accuracy, with correct predictions 84% of the time.
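
The core game structure can be sketched as a bimatrix game between the subject (merging) driver and the lag driver. The payoff numbers below are hypothetical; in the paper the payoff functions are estimated from trajectory data via bi-level optimization.

```python
# A minimal sketch of the two-player merging game: the subject driver
# chooses to merge or wait, the lag driver chooses to yield or block.
# Payoff values are hypothetical placeholders.
actions_subject = ["merge", "wait"]
actions_lag = ["yield", "block"]

# payoff[(a_s, a_l)] = (subject payoff, lag payoff)
payoff = {
    ("merge", "yield"): (3.0, 1.0),
    ("merge", "block"): (-5.0, -5.0),  # conflict / near-collision
    ("wait",  "yield"): (0.0, 0.5),
    ("wait",  "block"): (0.5, 2.0),
}

def pure_nash(payoff):
    """Enumerate pure-strategy Nash equilibria of the bimatrix game."""
    equilibria = []
    for a_s in actions_subject:
        for a_l in actions_lag:
            u_s, u_l = payoff[(a_s, a_l)]
            best_s = all(u_s >= payoff[(b, a_l)][0] for b in actions_subject)
            best_l = all(u_l >= payoff[(a_s, b)][1] for b in actions_lag)
            if best_s and best_l:
                equilibria.append((a_s, a_l))
    return equilibria

print(pure_nash(payoff))  # e.g. [('merge', 'yield'), ('wait', 'block')]
```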


2019 ◽  
Vol XVI (2) ◽  
pp. 1-11
Author(s):  
Farrukh Jamal ◽  
Hesham Mohammed Reyad ◽  
Soha Othman Ahmed ◽  
Muhammad Akbar Ali Shah ◽  
Emrah Altun

A new three-parameter continuous model called the exponentiated half-logistic Lomax distribution is introduced in this paper. Basic mathematical properties of the proposed model were investigated, including raw and incomplete moments, skewness, kurtosis, generating functions, Rényi entropy, Lorenz, Bonferroni and Zenga curves, probability weighted moments, the stress-strength model, order statistics, and record statistics. The model parameters were estimated using the maximum likelihood criterion, and the behaviours of these estimates were examined through a simulation study. The applicability of the new model is illustrated by applying it to a real data set.
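
As a hedged sketch of fitting such a three-parameter model by maximum likelihood, the code below assumes the exponentiated half-logistic-G construction F(x) = [G(x)/(2 − G(x))]^λ with a Lomax(α, β) baseline G; the paper's exact parameterization may differ, so treat the density used here as an assumption.

```python
# A hedged maximum-likelihood sketch, assuming the exponentiated
# half-logistic-G family F(x) = (G(x) / (2 - G(x)))**lam with a
# Lomax(alpha, beta) baseline G. Under that assumption the density is
# f(x) = 2*lam*g(x)*G(x)**(lam-1) / (2 - G(x))**(lam+1).
import numpy as np
from scipy import stats
from scipy.optimize import minimize

def neg_loglik(theta, x):
    alpha, beta, lam = theta
    base = stats.lomax(c=alpha, scale=beta)  # baseline Lomax
    G, logg = base.cdf(x), base.logpdf(x)
    logf = (np.log(2.0) + np.log(lam) + logg
            + (lam - 1.0) * np.log(G) - (lam + 1.0) * np.log(2.0 - G))
    return -np.sum(logf)

# Simulate from the assumed model by inverse transform:
# U uniform => G = 2*U**(1/lam) / (1 + U**(1/lam)), then x = base.ppf(G).
rng = np.random.default_rng(42)
true_alpha, true_beta, true_lam = 2.5, 1.0, 1.8
w = rng.uniform(size=500) ** (1.0 / true_lam)
x = stats.lomax(c=true_alpha, scale=true_beta).ppf(2 * w / (1 + w))

res = minimize(neg_loglik, x0=[1.0, 1.0, 1.0], args=(x,),
               method="L-BFGS-B", bounds=[(1e-3, None)] * 3)
print(res.x)  # estimated (alpha, beta, lam)
```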


2017 ◽  
Vol 37 (1) ◽  
pp. 1-12 ◽  
Author(s):  
Haluk Ay ◽  
Anthony Luscher ◽  
Carolyn Sommerich

Purpose
The purpose of this study is to design and develop a testing device to simulate interaction between human hand–arm dynamics, right-angle (RA) computer-controlled power torque tools and joint-tightening task-related variables.

Design/methodology/approach
The testing rig can simulate a variety of tools, tasks and operator conditions. The device includes custom data-acquisition electronics and graphical user interface-based software. The simulation of the human hand–arm dynamics is based on the rig’s four-bar mechanism-based design and mechanical components that provide adjustable stiffness (via pneumatic cylinder) and mass (via plates) and non-adjustable damping. The stiffness and mass values used are based on an experimentally validated hand–arm model that includes a database of model parameters. This database covers gender and working posture, corresponding to experienced tool operators from a prior study.

Findings
The rig measures tool handle force and displacement responses simultaneously. Peak force and displacement coefficients of determination (R2) between rig estimations and human testing measurements were 0.98 and 0.85, respectively, for the same set of tools, tasks and operator conditions. The rig also provides predicted tool operator acceptability ratings, using a data set from a prior study of discomfort in experienced operators during torque tool use.

Research limitations/implications
Deviations from linearity may influence handle force and displacement measurements. Stiction (Coulomb friction) in the overall rig, as well as in the air cylinder piston, is neglected. The rig’s mechanical damping is not adjustable, despite the fact that human hand–arm damping varies with respect to gender and working posture. Deviations from these assumptions may affect the correlation of the handle force and displacement measurements with those of human testing for the same tool, task and operator conditions.

Practical implications
This test rig will allow the rapid assessment of the ergonomic performance of DC torque tools, saving considerable time in lineside applications and reducing the risk of worker injury. DC torque tools are an extremely effective way of increasing production rate and improving torque accuracy. Being a complex dynamic system, however, the performance of DC torque tools varies in each application. Changes in worker mass, damping and stiffness, as well as joint stiffness and tool program, make each application unique. This test rig models all of these factors and allows quick assessment.

Social implications
The use of this tool test rig will help to identify and understand risk factors that contribute to musculoskeletal disorders (MSDs) associated with the use of torque tools. Tool operators are subjected to large impulsive handle reaction forces, as joint torque builds up while tightening a fastener. Repeated exposure to such forces is associated with muscle soreness, fatigue and physical stress, which are also risk factors for upper extremity injuries (MSDs; e.g. tendinosis, myofascial pain). Eccentric exercise exertions are known to cause damage to muscle tissue in untrained individuals and affect subsequent performance.

Originality/value
The rig provides a novel means for quantitative, repeatable dynamic evaluation of RA powered torque tools and objective selection of tightening programs. Compared to current static tool assessment methods, dynamic testing provides a more realistic tool assessment relative to the tool operator’s experience. This may lead to improvements in tool or controller design and reduction in associated musculoskeletal discomfort in operators.


2021 ◽  
Vol 12 (1) ◽  
Author(s):  
Ratnasekhar Ch ◽  
Guillaume Rey ◽  
Sandipan Ray ◽  
Pawan K. Jha ◽  
Paul C. Driscoll ◽  
...  

Abstract Circadian clocks coordinate mammalian behavior and physiology, enabling organisms to anticipate 24-hour cycles. Transcription–translation feedback loops are thought to drive these clocks in most mammalian cells. However, red blood cells (RBCs), which lack a nucleus and cannot perform transcription or translation, nonetheless exhibit circadian redox rhythms. Here we show that human RBCs display circadian regulation of glucose metabolism, which is required to sustain daily redox oscillations. We found daily rhythms of metabolite levels and flux through glycolysis and the pentose phosphate pathway (PPP). We show that inhibition of critical enzymes in either pathway abolished 24-hour rhythms in metabolic flux and redox oscillations, and we determined that metabolic oscillations are necessary for redox rhythmicity. Furthermore, metabolic flux rhythms also occur in nucleated cells and persist when the core transcriptional circadian clockwork is absent, in Bmal1 knockouts. Thus, we propose that rhythmic glucose metabolism is an integral process in circadian rhythms.


2021 ◽  
Vol 15 ◽  
pp. 174830262199962
Author(s):  
Patrick O Kano ◽  
Moysey Brio ◽  
Jacob Bailey

The Weeks method for the numerical inversion of the Laplace transform utilizes a Möbius transformation which is parameterized by two real quantities, σ and b. Proper selection of these parameters depends highly on the Laplace space function F(s) and is generally a nontrivial task. In this paper, a convolutional neural network is trained to determine optimal values for these parameters for the specific case of the matrix exponential. The matrix exponential e^A is estimated by numerically inverting the corresponding resolvent matrix (sI − A)^{-1} via the Weeks method at (σ, b) pairs provided by the network. For illustration, classes of square real matrices of size three to six are studied. For these small matrices, the Cayley–Hamilton theorem and rational approximations can be utilized to obtain values to compare with the results from the network-derived estimates. The network was trained by minimizing the error of the matrix exponentials from the Weeks method over a large data set spanning (σ, b) pairs. Network training using the Jacobi identity as a metric was found to yield a self-contained approach that does not require a truth matrix exponential for comparison.
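
To make the inversion idea concrete, the sketch below recovers exp(A) by numerically inverting the resolvent (sI − A)^{-1} along the Bromwich contour with a plain trapezoidal rule. This is a deliberately simpler quadrature standing in for the Weeks (Laguerre-expansion) method; the contour abscissa plays the same role as the Weeks parameter σ.

```python
# A minimal sketch of recovering exp(A) from the resolvent (sI - A)^{-1}
# by trapezoidal quadrature of the Bromwich integral. This simpler
# quadrature stands in for the Weeks method used in the paper; accuracy
# improves with a larger half-width K*h and smaller step h.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
n = A.shape[0]
I = np.eye(n)

sigma = max(np.linalg.eigvals(A).real) + 1.0  # contour right of the spectrum
h, K = 0.05, 8000                             # quadrature step and half-count

# exp(A) at t=1: (1/2pi) * integral of e^{sigma + i*w} R(sigma + i*w) dw
E = np.zeros((n, n), dtype=complex)
for w in h * np.arange(-K, K + 1):
    s = sigma + 1j * w
    E += np.exp(s) * np.linalg.inv(s * I - A)
E *= h / (2 * np.pi)

print(np.max(np.abs(E.real - expm(A))))  # quadrature error vs. scipy's expm
```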


2021 ◽  
Vol 22 (1) ◽  
Author(s):  
Ryan B. Patterson-Cross ◽  
Ariel J. Levine ◽  
Vilas Menon

Abstract
Background
Generating and analysing single-cell data has become a widespread approach to examine tissue heterogeneity, and numerous algorithms exist for clustering these datasets to identify putative cell types with shared transcriptomic signatures. However, many of these clustering workflows rely on user-tuned parameter values, tailored to each dataset, to identify a set of biologically relevant clusters. Although experienced users often develop an intuition for the optimal range of clustering parameters on each data set, the lack of systematic approaches to identify this range can be daunting to new users of any given workflow. In addition, an optimal parameter set does not guarantee that all clusters are equally well-resolved, given the heterogeneity of transcriptomic signatures in most biological systems.

Results
Here, we illustrate a subsampling-based approach (chooseR) that simultaneously guides parameter selection and characterizes cluster robustness. Through bootstrapped iterative clustering across a range of parameters, chooseR was used to select parameter values for two distinct clustering workflows (Seurat and scVI). In each case, chooseR identified parameters that produced biologically relevant clusters from both well-characterized (human PBMC) and complex (mouse spinal cord) datasets. Moreover, it provided a simple “robustness score” for each of these clusters, facilitating the assessment of cluster quality.

Conclusion
chooseR is a simple, conceptually understandable tool that can be used flexibly across clustering algorithms, workflows, and datasets to guide clustering parameter selection and characterize cluster robustness.
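
The idea of scoring cluster robustness by repeated subsampling can be sketched as follows. KMeans on synthetic blobs stands in for the Seurat/scVI workflows, and the mean pairwise co-clustering frequency used here is a simplified stand-in for chooseR's actual robustness score.

```python
# A conceptual sketch of subsampling-based cluster robustness in the
# spirit of chooseR: repeatedly subsample, recluster, and score each
# reference cluster by how often its members co-cluster. KMeans and the
# scoring rule are simplified stand-ins, not chooseR's implementation.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=4, random_state=0)
ref_labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

n, n_boot = X.shape[0], 30
co = np.zeros((n, n))    # co-clustering counts per cell pair
seen = np.zeros((n, n))  # times a pair appeared in the same subsample
rng = np.random.default_rng(0)

for b in range(n_boot):
    idx = rng.choice(n, size=int(0.8 * n), replace=False)
    sub = KMeans(n_clusters=4, n_init=10, random_state=b).fit_predict(X[idx])
    same = (sub[:, None] == sub[None, :]).astype(float)
    seen[np.ix_(idx, idx)] += 1.0
    co[np.ix_(idx, idx)] += same

freq = np.divide(co, seen, out=np.zeros_like(co), where=seen > 0)
for k in np.unique(ref_labels):
    members = np.where(ref_labels == k)[0]
    pair = freq[np.ix_(members, members)]
    # mean off-diagonal co-clustering frequency as a robustness score
    score = (pair.sum() - np.trace(pair)) / (len(members) * (len(members) - 1))
    print(f"cluster {k}: robustness {score:.2f}")
```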


2020 ◽  
Vol 70 (1) ◽  
pp. 145-161 ◽  
Author(s):  
Marnus Stoltz ◽  
Boris Baeumer ◽  
Remco Bouckaert ◽  
Colin Fox ◽  
Gordon Hiscott ◽  
...  

Abstract We describe a new and computationally efficient Bayesian methodology for inferring species trees and demographics from unlinked binary markers. Likelihood calculations are carried out using diffusion models of allele frequency dynamics combined with novel numerical algorithms. The diffusion approach allows for analysis of data sets containing hundreds or thousands of individuals. The method, which we call Snapper, has been implemented as part of the BEAST2 package. We conducted simulation experiments to assess numerical error, computational requirements, and accuracy in recovering known model parameters. A reanalysis of soybean SNP data demonstrates that the models implemented in Snapp and Snapper can be difficult to distinguish in practice, a characteristic which we tested with further simulations. We demonstrate the scale of analysis possible using a SNP data set sampled from 399 freshwater turtles in 41 populations. [Bayesian inference; diffusion models; multi-species coalescent; SNP data; species trees; spectral methods.]
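
For intuition about the diffusion models mentioned above, the sketch below simulates the neutral Wright–Fisher diffusion, dX = sqrt(X(1 − X)/(2N)) dW, by Euler–Maruyama. Snapper itself evaluates such diffusions with spectral methods for likelihood calculation rather than by simulation; the population size and step settings here are illustrative.

```python
# An illustrative Euler-Maruyama simulation of the neutral Wright-Fisher
# diffusion, dX = sqrt(X(1 - X)/(2N)) dW, the kind of allele-frequency
# model underlying Snapper's likelihood calculations.
import numpy as np

rng = np.random.default_rng(1)
N, dt, steps, n_paths = 1000, 0.1, 5000, 5
x = np.full(n_paths, 0.5)           # initial allele frequency

for _ in range(steps):
    noise = rng.standard_normal(n_paths)
    x += np.sqrt(np.clip(x * (1 - x), 0, None) / (2 * N) * dt) * noise
    x = np.clip(x, 0.0, 1.0)        # absorb at fixation/loss boundaries

print(x)  # frequencies after drift; some paths may have fixed or been lost
```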


2013 ◽  
Vol 2013 ◽  
pp. 1-13 ◽  
Author(s):  
Helena Mouriño ◽  
Maria Isabel Barão

Missing-data problems are extremely common in practice. To achieve reliable inferential results, we need to take this feature of the data into account. Suppose that the univariate data set under analysis has missing observations. This paper examines the impact of selecting an auxiliary complete data set, whose underlying stochastic process is to some extent interdependent with the former, to improve the efficiency of the estimators for the relevant parameters of the model. The Vector AutoRegressive (VAR) Model has proven to be an extremely useful tool for capturing the dynamics of bivariate time series. We propose maximum likelihood estimators for the parameters of the VAR(1) Model based on a monotone missing-data pattern. The estimators’ precision is also derived. Afterwards, we compare the bivariate modelling scheme with its univariate counterpart. More precisely, the univariate data set with missing observations is modelled by an AutoRegressive Moving Average (ARMA(2,1)) Model. We also analyse the behaviour of the AutoRegressive Model of order one, AR(1), due to its practical importance. We focus on the mean value of the main stochastic process. Through simulation studies, we conclude that the estimator based on the VAR(1) Model is preferable to those derived from the univariate context.
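
For reference, complete-data maximum likelihood estimation of a bivariate VAR(1) reduces to ordinary least squares, as in the sketch below; the paper's contribution is extending the MLE to a monotone missing-data pattern, which this sketch does not implement.

```python
# A minimal sketch of VAR(1) estimation on complete bivariate data;
# stacked ordinary least squares coincides with the conditional
# Gaussian MLE here. All parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(0)
Phi_true = np.array([[0.6, 0.2],
                     [0.1, 0.5]])   # stable: eigenvalues 0.7 and 0.4
c_true = np.array([1.0, -0.5])

T = 2000
Y = np.zeros((T, 2))
for t in range(1, T):
    Y[t] = c_true + Phi_true @ Y[t - 1] + rng.standard_normal(2) * 0.3

# Regress Y_t on [1, Y_{t-1}]; B stacks the intercept and coefficients.
X = np.hstack([np.ones((T - 1, 1)), Y[:-1]])
B, *_ = np.linalg.lstsq(X, Y[1:], rcond=None)
c_hat, Phi_hat = B[0], B[1:].T
print(c_hat)    # close to c_true
print(Phi_hat)  # close to Phi_true
```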


Geophysics ◽  
2016 ◽  
Vol 81 (4) ◽  
pp. U25-U38 ◽  
Author(s):  
Nuno V. da Silva ◽  
Andrew Ratcliffe ◽  
Vetle Vinje ◽  
Graham Conroy

Parameterization lies at the center of anisotropic full-waveform inversion (FWI) with multiparameter updates, because FWI aims to update both the long and short wavelengths of the perturbations, and the parameterization must accommodate this. Recently, there has been an intensive effort to determine the optimal parameterization, centering the discussion mainly on the analysis of radiation patterns for each parameterization and aiming to determine which is best suited for multiparameter inversion. We have developed a new parameterization in the scope of FWI, based on the concept of kinematically equivalent media, as originally proposed in other areas of seismic data analysis. Our analysis is also based on radiation patterns, as well as on the relation between perturbations of this set of parameters and perturbations in traveltime. The radiation pattern reveals that this parameterization combines some of the characteristics of parameterizations with one velocity and two Thomsen parameters and of parameterizations using two velocities and one Thomsen parameter. The study of traveltime perturbation with respect to model-parameter perturbation shows that the new parameterization is less ambiguous in relating these quantities than other, more commonly used parameterizations. We conclude that our new parameterization is well-suited for inverting diving waves, which are of paramount importance for carrying out practical FWI successfully. We demonstrate that the new parameterization produces good inversion results with synthetic and real data examples. In the real data example, from the Central North Sea, the inverted models show good agreement with the geologic structures, leading to an improvement of the seismic image and the flatness of the common image gathers.
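
The notion of kinematically equivalent media can be illustrated with the standard Thomsen relations v_nmo = v_p sqrt(1 + 2δ) and v_h = v_p sqrt(1 + 2ε): distinct (v_p, ε, δ) triples can produce the same kinematic velocities that diving waves constrain. The sketch below uses these textbook relations only; it is not the paper's specific new parameterization.

```python
# An illustration of "kinematically equivalent" parameter trade-offs in
# a VTI medium using the standard Thomsen relations; this shows how two
# different (vp, eps, delta) triples can share the kinematic velocities,
# and is not the paper's new parameterization.
import numpy as np

def kinematic_velocities(vp, eps, delta):
    v_nmo = vp * np.sqrt(1.0 + 2.0 * delta)  # short-spread NMO velocity
    v_h = vp * np.sqrt(1.0 + 2.0 * eps)      # horizontal P velocity
    return v_nmo, v_h

# A first (vp, eps, delta) triple ...
v_nmo, v_h = kinematic_velocities(2000.0, 0.20, 0.10)

# ... and a second one chosen to reproduce the same kinematic velocities.
vp2 = 2100.0
delta2 = 0.5 * ((v_nmo / vp2) ** 2 - 1.0)
eps2 = 0.5 * ((v_h / vp2) ** 2 - 1.0)
print(kinematic_velocities(vp2, eps2, delta2))  # matches (v_nmo, v_h)
```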

