StressGenePred: a twin prediction model architecture for classifying the stress types of samples and discovering stress-related genes in Arabidopsis

BMC Genomics ◽  
2019 ◽  
Vol 20 (S11) ◽  
Author(s):  
Dongwon Kang ◽  
Hongryul Ahn ◽  
Sangseon Lee ◽  
Chai-Jin Lee ◽  
Jihye Hur ◽  
...  

Abstract Background: Recently, a number of studies have investigated how plants respond to stress at the cellular and molecular level by measuring gene expression profiles over time. As a result, sets of time-series gene expression data for the stress response are available in databases. With these data, an integrated analysis of multiple stresses is possible, which identifies stress-responsive genes with higher specificity because considering multiple stresses can capture the effects of interference between them. To analyze such data, a machine learning model needs to be built. Results: In this study, we developed StressGenePred, a neural-network-based machine learning method, to integrate time-series transcriptome data of multiple stress types. StressGenePred is designed to detect single-stress-specific biomarker genes by using a simple feature embedding method, a twin neural network model, and Confident Multiple Choice Learning (CMCL) loss. The twin neural network model consists of a biomarker gene discovery model and a stress type prediction model that share the same logical layer to reduce training complexity. The CMCL loss makes the twin model select biomarker genes that respond specifically to a single stress. In experiments using Arabidopsis gene expression data for four major environmental stresses (heat, cold, salt, and drought), StressGenePred classified the types of stress more accurately than the limma feature embedding method and the support vector machine and random forest classification methods. In addition, StressGenePred discovered known stress-related genes with higher specificity than the Fisher method. Conclusions: StressGenePred is a machine learning method for identifying stress-related genes and predicting stress types in an integrated analysis of multiple stress time-series transcriptome data. The method can also be applied to other phenotype-gene association studies.
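The twin layout described in the abstract, two heads sharing one layer, can be sketched in a few lines of numpy. This is a minimal illustration, not the authors' implementation: the single shared dense layer, the ReLU/softmax head for stress classification, and the weight-magnitude scores used for biomarker ranking are all assumed choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n_genes, n_stresses, hidden = 100, 4, 16

# Shared "logical" layer (assumption: one dense layer shared by both
# sub-models; the abstract only states that a layer is shared).
W_shared = rng.normal(scale=0.1, size=(n_genes, hidden))

# Head 1: stress-type prediction over the four stress classes.
W_pred = rng.normal(scale=0.1, size=(hidden, n_stresses))

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def predict_stress(x):
    """Classify a feature-embedded expression profile into one of 4 stresses."""
    h = np.maximum(0, x @ W_shared)          # ReLU on the shared layer
    return softmax(h @ W_pred)

# Head 2: biomarker discovery -- rank genes by the magnitude of their
# weights into the shared layer (a simple attribution heuristic).
def biomarker_scores():
    return np.abs(W_shared).sum(axis=1)

x = rng.normal(size=(1, n_genes))            # one expression profile
p = predict_stress(x)                        # stress-type probabilities
scores = biomarker_scores()                  # one score per gene
```

Because both heads read the same shared weights, training either task updates the representation the other task uses, which is the stated motivation for the twin design.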

2017 ◽  
Author(s):  
Anthony Szedlak ◽  
Spencer Sims ◽  
Nicholas Smith ◽  
Giovanni Paternostro ◽  
Carlo Piermarocchi

Abstract Modern time series gene expression and other omics data sets have enabled unprecedented resolution of the dynamics of cellular processes such as cell cycle and response to pharmaceutical compounds. In anticipation of the proliferation of time series data sets in the near future, we use the Hopfield model, a recurrent neural network based on spin glasses, to model the dynamics of cell cycle in HeLa (human cervical cancer) and S. cerevisiae cells. We study some of the rich dynamical properties of these cyclic Hopfield systems, including the ability of populations of simulated cells to recreate experimental expression data and the effects of noise on the dynamics. Next, we use a genetic algorithm to identify sets of genes which, when selectively inhibited by local external fields representing gene silencing compounds such as kinase inhibitors, disrupt the encoded cell cycle. We find, for example, that inhibiting the set of four kinases BRD4, MAPK1, NEK7, and YES1 in HeLa cells causes simulated cells to accumulate in the M phase. Finally, we suggest possible improvements and extensions to our model.
Author Summary: Cell cycle – the process in which a parent cell replicates its DNA and divides into two daughter cells – is an upregulated process in many forms of cancer. Identifying gene inhibition targets to regulate cell cycle is important to the development of effective therapies. Although modern high throughput techniques offer unprecedented resolution of the molecular details of biological processes like cell cycle, analyzing the vast quantities of the resulting experimental data and extracting actionable information remains a formidable task. Here, we create a dynamical model of the process of cell cycle using the Hopfield model (a type of recurrent neural network) and gene expression data from human cervical cancer cells and yeast cells. We find that the model recreates the oscillations observed in experimental data. Tuning the level of noise (representing the inherent randomness in gene expression and regulation) to the “edge of chaos” is crucial for the proper behavior of the system. We then use this model to identify potential gene targets for disrupting the process of cell cycle. This method could be applied to other time series data sets and used to predict the effects of untested targeted perturbations.
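The cyclic Hopfield dynamics described above can be illustrated with asymmetric Hebbian couplings that map each stored pattern onto the next, so the network traverses a limit cycle instead of settling into a fixed point. A toy numpy sketch, assuming synchronous sign updates and orthogonal ±1 patterns standing in for cell-cycle phases (an illustration of the mechanism, not the paper's HeLa model):

```python
import numpy as np

# Sylvester construction of an 8x8 Hadamard matrix: its rows are
# mutually orthogonal +/-1 vectors, which we use as cell states.
H = np.array([[1]])
for _ in range(3):
    H = np.kron(H, np.array([[1, 1], [1, -1]]))

patterns = H[1:4]                      # three "cell-cycle phase" patterns
T, N = patterns.shape

# Asymmetric Hebbian couplings that push each phase onto the next,
# encoding a cycle rather than fixed-point attractors.
J = sum(np.outer(patterns[(t + 1) % T], patterns[t]) for t in range(T)) / N

def step(s, noise=0.0, rng=None):
    """One synchronous update; `noise` flips each unit with that probability."""
    s = np.sign(J @ s)
    if noise and rng is not None:
        flip = rng.random(N) < noise
        s = np.where(flip, -s, s)
    return s

s = patterns[0].copy()
for _ in range(T):                     # a noiseless run returns to the start
    s = step(s)
```

Raising `noise` degrades the cycle, which is a crude analogue of tuning the system between rigid and chaotic regimes; the gene-silencing fields of the paper would enter as an extra bias term inside `step`.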


2019 ◽  
Vol 14 (6) ◽  
pp. 551-561
Author(s):  
Shengxian Cao ◽  
Yu Wang ◽  
Zhenhao Tang

Background: Time-series expression data of genes contain relations among different genes, which are difficult to model precisely. Slime-forming bacteria are one of the three major harmful bacteria types in industrial circulating cooling water systems. Objective: This study aimed at constructing a gene regulation network (GRN) for slime-forming bacteria to understand the microbial fouling mechanism. Methods: For this purpose, an Adaptive Elman Neural Network (AENN) is proposed to reveal the relationships among genes from gene expression time series. The parameters of the Elman neural network were optimized adaptively by a Genetic Algorithm (GA), and a Pearson correlation analysis was applied to discover the relationships among genes. In addition, gene expression data of slime-forming bacteria obtained by transcriptome sequencing are presented. Results: To evaluate the proposed method, we compared several alternative data-driven approaches, including a Neural Fuzzy Recurrent Network (NFRN), a basic Elman Neural Network (ENN), and an ensemble network. The experimental results on simulated and real datasets demonstrate that the proposed approach has promising performance for modeling Gene Regulation Networks (GRNs). We also applied the proposed method to GRN construction for slime-forming bacteria, and a GRN of six genes was constructed. Conclusion: The proposed GRN construction method can effectively extract the regulations among genes. This is also the first report of a GRN for slime-forming bacteria.
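The defining feature of an Elman network, feeding the previous hidden state back in as a "context" input, can be sketched as a bare forward pass. The dimensions, weights, and tanh nonlinearity below are illustrative, and the GA parameter optimization and Pearson correlation steps of the method are omitted:

```python
import numpy as np

rng = np.random.default_rng(1)
n_genes, hidden = 6, 8   # six genes, matching the constructed GRN

# Elman network: the hidden state at t-1 is fed back as context at t,
# which lets the model capture temporal dependencies among genes.
W_in  = rng.normal(scale=0.3, size=(hidden, n_genes))
W_ctx = rng.normal(scale=0.3, size=(hidden, hidden))
W_out = rng.normal(scale=0.3, size=(n_genes, hidden))

def elman_forward(series):
    """Predict the next expression vector from each time point in turn."""
    h = np.zeros(hidden)               # context starts empty
    preds = []
    for x in series:
        h = np.tanh(W_in @ x + W_ctx @ h)   # context feedback loop
        preds.append(W_out @ h)
    return np.array(preds)

series = rng.normal(size=(10, n_genes))     # toy expression time series
preds = elman_forward(series)               # one prediction per time point
```

In the AENN setting, a GA would search over these weight matrices (and hyperparameters) to minimize prediction error on the measured expression series.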


2020 ◽  
Vol 36 (Supplement_2) ◽  
pp. i762-i769
Author(s):  
Shohag Barman ◽  
Yung-Keun Kwon

Abstract Summary: In systems biology, it is challenging to accurately infer a regulatory network from time-series gene expression data, and a variety of methods have been proposed. Most of them, however, are computationally inefficient for inferring very large networks because of the increasing number of candidate regulatory genes. Although a recent approach called GABNI (genetic algorithm-based Boolean network inference) was presented to resolve this problem using a genetic algorithm, there is room for performance improvement because it employed a limited representation model of regulatory functions. We therefore devised a novel genetic algorithm combined with a neural network for Boolean network inference, where a neural network represents the regulatory function instead of the incomplete Boolean truth table used in GABNI. In addition, our new method extends the range of the time-step lag parameter between the regulatory and the target genes for a more flexible representation of the regulatory function. Extensive simulations with gene expression datasets of artificial and real networks were conducted to compare our method with five well-known existing methods, including GABNI. Our proposed method significantly outperformed them in terms of both structural and dynamics accuracy. Conclusion: Our method can be a promising tool for inferring a large-scale Boolean regulatory network from time-series gene expression data. Availability and implementation: The source code is freely available at https://github.com/kwon-uou/NNBNI. Supplementary information: Supplementary data are available at Bioinformatics online.
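The core idea, replacing a Boolean truth table with a small neural unit, can be miniaturized to a single threshold neuron fit to a toy regulatory rule. Everything here is illustrative: the rule is hypothetical, and a plain random search stands in for the genetic algorithm of the paper.

```python
import numpy as np
from itertools import product

# Hypothetical target rule for one gene: x3(t+1) = x1(t) AND NOT x2(t).
def target(x1, x2):
    return int(x1 and not x2)

states = np.array(list(product([0, 1], repeat=2)))   # all regulator states
labels = np.array([target(a, b) for a, b in states])

# A threshold unit as the "neural" regulatory function, in place of a
# truth table: output 1 iff w . x + b > 0.
def boolean_unit(w, b, x):
    return (x @ w + b > 0).astype(int)

# Random search standing in for the GA: find weights whose threshold
# unit reproduces the target rule on every regulator state.
rng = np.random.default_rng(2)
best = None
for _ in range(1000):
    w, b = rng.uniform(-1, 1, size=2), rng.uniform(-1, 1)
    if np.array_equal(boolean_unit(w, b, states), labels):
        best = (w, b)
        break
```

The payoff of the neural representation is that it remains well defined even when some input combinations were never observed in the time series, which is exactly where a partial truth table breaks down.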


Author(s):  
Natal A. W. van Riel ◽  
Christian A. Tiemann ◽  
Peter A. J. Hilbers ◽  
Albert K. Groen

Temporal multi-omics data can provide information about the dynamics of disease development and therapeutic response. However, statistical analysis of high-dimensional time-series data is challenging. Here we develop a novel approach to model temporal metabolomic and transcriptomic data by combining machine learning with metabolic models. ADAPT (Analysis of Dynamic Adaptations in Parameter Trajectories) performs metabolic trajectory modeling by introducing time-dependent parameters in differential equation models of metabolic systems. ADAPT translates structural uncertainty in the model, such as missing information about regulation, into a parameter estimation problem that is solved by iterative learning. We have now extended ADAPT to include both metabolic and transcriptomic time-series data by introducing a regularization function in the learning algorithm. The ADAPT learning algorithm was (re)formulated as a multi-objective optimization problem in which the estimation of trajectories of metabolic parameters is constrained by the metabolite data and refined by gene expression data. ADAPT was applied to a model of hepatic lipid and plasma lipoprotein metabolism to predict metabolic adaptations that are induced upon pharmacological treatment of mice by a Liver X receptor (LXR) agonist. We investigated the excessive accumulation of triglycerides (TG) in the liver resulting in the development of hepatic steatosis. ADAPT predicted that 80% of hepatic TG accumulation after LXR activation originates from an increased influx of free fatty acids. The model also correctly estimated that TG was stored in the cytosol rather than transferred to nascent very-low-density lipoproteins. Through model-based integration of temporal metabolic and gene expression data we discovered that increased free fatty acid influx, rather than de novo lipogenesis, is the main driver of LXR-induced hepatic steatosis. This study illustrates how ADAPT provides estimates for biomedically important parameters that cannot be measured directly, explaining (side-)effects of pharmacological treatment with LXR agonists.
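ADAPT's central move, turning structural uncertainty into a time-dependent parameter that is re-estimated along the measured trajectory, can be reduced to a one-metabolite toy model. The model, rate constant, and explicit-Euler scheme below are illustrative assumptions, not the authors' hepatic lipid model:

```python
import numpy as np

# Toy metabolic model: d[TG]/dt = influx(t) - k * [TG].
# ADAPT's idea in miniature: let a parameter (`influx`) vary over time
# and re-estimate it on each interval so the simulated trajectory
# tracks the measured metabolite data.
k, dt = 0.5, 0.1
t_grid = np.arange(0, 5, dt)

# "Measured" data generated from a hidden time-varying influx.
true_influx = 1.0 + 0.5 * np.sin(t_grid)
x_data = np.zeros_like(t_grid)
for i in range(1, len(t_grid)):
    x_data[i] = x_data[i-1] + dt * (true_influx[i-1] - k * x_data[i-1])

# Recover the influx trajectory by inverting the Euler step on each
# interval -- a one-parameter-per-step stand-in for ADAPT's iterative
# trajectory estimation.
est_influx = (x_data[1:] - x_data[:-1]) / dt + k * x_data[:-1]
```

In the full method the per-step estimation is a regularized, multi-objective fit against both metabolite and gene expression data rather than this exact algebraic inversion, but the shape of the problem, one parameter trajectory constrained by a measured state trajectory, is the same.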


2015 ◽  
Vol 11 (11) ◽  
pp. 3137-3148
Author(s):  
Nazanin Hosseinkhan ◽  
Peyman Zarrineh ◽  
Hassan Rokni-Zadeh ◽  
Mohammad Reza Ashouri ◽  
Ali Masoudi-Nejad

Gene co-expression analysis is one of the main aspects of systems biology that uses high-throughput gene expression data.


2021 ◽  
pp. 190-200
Author(s):  
Lesia Mochurad ◽  
Yaroslav Hladun

The paper considers a method for analyzing the psychophysical state of a person via a psychomotor indicator, the finger tapping test. A mobile-phone app that generalizes the classic tapping test was developed for the experiments. The tool collects samples and analyzes them both as individual experiments and as a dataset as a whole. Using statistical methods and hyperparameter optimization, the data are examined for anomalies, and an algorithm for reducing their number is developed. A machine learning model is used to predict different features of the dataset. These experiments reveal the structure of the data obtained with the finger tapping test and informed how to conduct future experiments for better generalization of the model. The developed anomaly removal method can be used in further research to increase the accuracy of the model. The model itself is a multilayer recurrent neural network, which is well suited to time-series classification. The model's error is 1.5% on a synthetic dataset and 5% on real data from a similar distribution.


Cell Cycle ◽  
2018 ◽  
Vol 17 (4) ◽  
pp. 486-491 ◽  
Author(s):  
Nicolas Borisov ◽  
Victor Tkachev ◽  
Maria Suntsova ◽  
Olga Kovalchuk ◽  
Alex Zhavoronkov ◽  
...  

2021 ◽  
Vol 118 (40) ◽  
pp. e2026053118
Author(s):  
Miles Cranmer ◽  
Daniel Tamayo ◽  
Hanno Rein ◽  
Peter Battaglia ◽  
Samuel Hadden ◽  
...  

We introduce a Bayesian neural network model that can accurately predict not only if, but also when a compact planetary system with three or more planets will go unstable. Our model, trained directly from short N-body time series of raw orbital elements, is more than two orders of magnitude more accurate at predicting instability times than analytical estimators, while also reducing the bias of existing machine learning algorithms by nearly a factor of three. Despite being trained on compact resonant and near-resonant three-planet configurations, the model demonstrates robust generalization to both nonresonant and higher multiplicity configurations, in the latter case outperforming models fit to that specific set of integrations. The model computes instability estimates up to 10^5 times faster than a numerical integrator, and unlike previous efforts provides confidence intervals on its predictions. Our inference model is publicly available in the SPOCK (https://github.com/dtamayo/spock) package, with training code open sourced (https://github.com/MilesCranmer/bnn_chaos_model).
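The confidence intervals mentioned above come from treating the network's weights as a distribution rather than a point estimate: sample many weight draws, run the forward pass for each, and report quantiles of the predictions. A toy numpy sketch of that mechanism, with an independent-Gaussian stand-in for the weight posterior (this is a generic illustration, not the SPOCK model or its API):

```python
import numpy as np

rng = np.random.default_rng(3)

n_in, hidden, n_samples = 6, 16, 500
x = rng.normal(size=n_in)               # stand-in for orbital-element features

# Posterior stand-in: independent Gaussians over both weight matrices.
W1 = rng.normal(scale=0.3, size=(n_samples, hidden, n_in))
W2 = rng.normal(scale=0.3, size=(n_samples, 1, hidden))

# One forward pass per posterior sample gives a predictive distribution.
preds = np.array([(W2[i] @ np.tanh(W1[i] @ x)).item() for i in range(n_samples)])
lo, hi = np.quantile(preds, [0.05, 0.95])   # 90% predictive interval
```

The spread between `lo` and `hi` is what a single deterministic network cannot provide: a calibrated statement of how uncertain the instability-time estimate is for this particular input.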

