Probabilistic Prediction of Unknown Metabolic and Signal-Transduction Networks

Genetics ◽  
2001 ◽  
Vol 159 (3) ◽  
pp. 1291-1298 ◽  
Author(s):  
Shawn M Gomez ◽  
Shaw-Hwa Lo ◽  
Andrey Rzhetsky

Abstract Regulatory networks provide control over complex cell behavior in all kingdoms of life. Here we describe a statistical model, based on representing proteins as collections of domains or motifs, which predicts unknown molecular interactions within these biological networks. Using known protein-protein interactions of Saccharomyces cerevisiae as training data, we were able to predict the links within this network with only 7% false-negative and 10% false-positive error rates. We also use Markov chain Monte Carlo simulation for the prediction of networks with maximum probability under our model. This model can be applied across species, where interaction data from one (or several) species can be used to infer interactions in another. In addition, the model is extensible and can be analogously applied to other molecular data (e.g., DNA sequences).
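The 7% false-negative and 10% false-positive figures above can be made concrete. A minimal sketch, in Python with a hypothetical toy network (this is not the authors' model, which scores interactions from domain composition), of how such rates are computed for predicted links against a known interaction set:

```python
# Hedged sketch: false-negative and false-positive error rates for predicted
# network links, evaluated against a set of known interactions. The protein
# names and link sets below are illustrative assumptions.

def error_rates(true_links, predicted_links, all_pairs):
    """Return (false_negative_rate, false_positive_rate)."""
    true_links = set(true_links)
    predicted_links = set(predicted_links)
    negatives = set(all_pairs) - true_links
    fn = len(true_links - predicted_links)   # real links that were missed
    fp = len(predicted_links - true_links)   # spurious links that were predicted
    fn_rate = fn / len(true_links) if true_links else 0.0
    fp_rate = fp / len(negatives) if negatives else 0.0
    return fn_rate, fp_rate

# Toy network: 4 proteins, all candidate pairs, known vs predicted links.
pairs = [("A", "B"), ("A", "C"), ("A", "D"), ("B", "C"), ("B", "D"), ("C", "D")]
known = [("A", "B"), ("B", "C"), ("C", "D"), ("A", "D")]
predicted = [("A", "B"), ("B", "C"), ("C", "D"), ("A", "C")]

fn_rate, fp_rate = error_rates(known, predicted, pairs)
```

Here one of four known links is missed (false-negative rate 0.25) and one of two non-links is predicted (false-positive rate 0.5); at genome scale the same counts are taken over the whole candidate-pair set.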

2019 ◽  
Vol 19 (6) ◽  
pp. 413-425 ◽  
Author(s):  
Athanasios Alexiou ◽  
Stylianos Chatzichronis ◽  
Asma Perveen ◽  
Abdul Hafeez ◽  
Ghulam Md. Ashraf

Background: Recent studies reveal the importance of protein-protein interactions for physiological functions and biological structures. Numerous stochastic and algorithmic methods have been published to date for modeling the complex nature of biological systems. Objective: Computational modeling of biological networks remains a challenging task, and the formulation of complex cellular interactions is a research field of great interest. In this review paper, several computational methods for the modeling of gene regulatory networks (GRN) and protein-protein interactions (PPI) are presented in detail. Methods: Several well-known GRN and PPI models are presented and discussed in this review, including: graph representations, Boolean networks, generalized logical networks, Bayesian networks, relevance networks, graphical Gaussian models, weight matrices, the reverse-engineering approach, evolutionary algorithms, the forward-modeling approach, deterministic models, static models, hybrid models, stochastic models, Petri nets, BioAmbients calculus, and differential equations. Results: GRN and PPI methods have already been applied in various clinical processes with potentially positive results, establishing promising diagnostic tools. Conclusion: The literature contains many stochastic algorithms focused on the simulation, analysis, and visualization of the various biological networks and their dynamic interactions, which are reviewed and described in depth in this paper.
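Of the models listed in this review, the relevance network is among the simplest to sketch: genes are linked whenever their expression profiles correlate beyond a threshold. A minimal illustration, with made-up expression data and an assumed cutoff of 0.9 (not taken from any specific reviewed method):

```python
# Hedged sketch of a relevance network: edges connect genes whose expression
# profiles have absolute Pearson correlation >= threshold. Data are invented.

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def relevance_network(profiles, threshold=0.9):
    """Edge (g, h) whenever |corr| of the two expression profiles >= threshold."""
    genes = sorted(profiles)
    return [(g, h) for i, g in enumerate(genes) for h in genes[i + 1:]
            if abs(pearson(profiles[g], profiles[h])) >= threshold]

expression = {
    "gene1": [1.0, 2.0, 3.0, 4.0],
    "gene2": [2.1, 3.9, 6.2, 8.0],   # tracks gene1, so they get linked
    "gene3": [5.0, 1.0, 4.0, 2.0],   # unrelated pattern, no edge
}
edges = relevance_network(expression)
```

Thresholded correlation is the defining idea of relevance networks; the other reviewed formalisms (Boolean, Bayesian, Petri-net, differential-equation models) add logic, probability, or dynamics on top of such a graph skeleton.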


1990 ◽  
Vol 15 (1) ◽  
pp. 39-52 ◽  
Author(s):  
Huynh Huynh

False positive and false negative error rates are studied for competency testing in which examinees are permitted to retake the test if they fail to pass. Formulae are provided for the beta-binomial and Rasch models, and estimates based on these two models are compared for several typical situations. Although Rasch estimates are expected to be more accurate than beta-binomial estimates, the differences between them are not substantial in a number of practical situations. Under relatively general conditions, when test retaking is permitted, the probability of making a false negative error is zero. Under the same conditions, given that an examinee is a true nonmaster, the conditional probability of making a false positive error for this examinee is one.
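The limiting argument in the last two sentences can be made concrete: with unlimited retakes, a true master is never finally failed, while any nonmaster with a nonzero per-attempt pass probability eventually passes. A minimal sketch under an assumed binomial test model (the item count, cutoff, and ability value are illustrative, not from the paper's formulae):

```python
# Hedged sketch: with retakes allowed, the probability of passing at least
# once approaches 1 for anyone with a nonzero single-attempt pass probability,
# so the false-positive probability for a nonmaster tends to one and the
# false-negative probability for a master tends to zero.

from math import comb

def pass_probability(theta, n_items, cutoff):
    """P(score >= cutoff) on one attempt, binomial with success prob theta."""
    return sum(comb(n_items, k) * theta**k * (1 - theta)**(n_items - k)
               for k in range(cutoff, n_items + 1))

def eventual_pass_probability(theta, n_items, cutoff, attempts):
    """P(passing at least once within the given number of attempts)."""
    p = pass_probability(theta, n_items, cutoff)
    return 1.0 - (1.0 - p) ** attempts

# A nonmaster (theta = 0.5) on a 10-item test with a cutoff score of 8:
single = pass_probability(0.5, 10, 8)                # about 0.0547 per attempt
many = eventual_pass_probability(0.5, 10, 8, 200)    # approaches 1 with retakes
```

Even a 5.5% per-attempt chance compounds to near-certainty over many retakes, which is exactly the conditional false-positive probability of one described above.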


2020 ◽  
pp. jclinpath-2020-206726
Author(s):  
Cornelia Margaret Szecsei ◽  
Jon D Oxley

Aim: To examine the effects of specialist reporting on error rates in prostate core biopsy diagnosis. Method: Biopsies were reported by eight specialist uropathologists over 3 years. New cancer diagnoses were double-reported, and all biopsies were reviewed for the multidisciplinary team (MDT) meeting. Diagnostic alterations were recorded in supplementary reports, and error rates were compared with those from a decade previously. Results: 2600 biopsies were reported; 64.1% contained adenocarcinoma, a 19.7% increase. The false-positive error rate had fallen from 0.4% to 0.06%. The false-negative error rate had increased from 1.5% to 1.8%, but represented fewer absolute errors owing to the increased cancer incidence. Conclusions: Specialisation and double-reporting have reduced false-positive errors. MDT review of negative cores continues to identify a very small number of false-negative errors. Our data represent a 'gold standard' for prostate biopsy diagnostic error rates. Increased use of MRI-targeted biopsies may alter error rates and their future clinical significance.


2015 ◽  
Vol 36 (6) ◽  
pp. 3671 ◽  
Author(s):  
Gilberto Rodrigues Liska ◽  
Fortunato Silva de Menezes ◽  
Marcelo Angelo Cirillo ◽  
Flávio Meira Borém ◽  
Ricardo Miguel Cortez ◽  
...  

Automatic classification methods have been widely used in numerous situations, and the boosting method is known for its use of a classification algorithm that takes a set of training data and constructs a classifier from reweighted versions of that set. Given this characteristic, the aim of this study is to assess a sensory experiment involving acceptance tests of specialty coffees, with reference to both trained and untrained consumer groups. For the consumer group, four sensory characteristics were evaluated, namely aroma, body, sweetness, and final score, attributed to four types of specialty coffees. To obtain a classification rule that discriminates trained from untrained tasters, we used conventional Fisher's linear discriminant analysis (LDA) and discriminant analysis via the boosting algorithm (AdaBoost). The criteria used to compare the two approaches were sensitivity, specificity, false positive rate, false negative rate, and accuracy. Additionally, to evaluate the performance of the classifiers, success and error rates were obtained by Monte Carlo simulation over 100 replicas of a random partition assigning 70% of the data to the training set and the remainder to the test set. It was concluded that the boosting method applied to discriminant analysis yielded a higher sensitivity rate for the trained panel, at a value of 80.63%, and hence a reduced false-negative rate of 19.37%. Thus, the boosting method may be used to improve the LDA classifier for discrimination of trained tasters.
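The evaluation protocol described above (100 Monte Carlo replicas of a random 70/30 split, scored by sensitivity, specificity, false-positive rate, false-negative rate, and accuracy) can be sketched as follows. A trivial one-dimensional mean-threshold classifier on well-separated synthetic scores stands in for LDA and AdaBoost, so the numbers are illustrative only:

```python
# Hedged sketch of the Monte Carlo evaluation protocol: repeated random
# 70/30 splits, each scored with the five criteria named in the abstract.
# The classifier and data are placeholders, not the study's models.

import random

def metrics(truth, pred):
    """Sensitivity, specificity, FP/FN rates, and accuracy for binary labels."""
    tp = sum(t and p for t, p in zip(truth, pred))
    tn = sum((not t) and (not p) for t, p in zip(truth, pred))
    fp = sum((not t) and p for t, p in zip(truth, pred))
    fn = sum(t and (not p) for t, p in zip(truth, pred))
    sens = tp / (tp + fn) if tp + fn else 1.0   # guard splits missing a class
    spec = tn / (tn + fp) if tn + fp else 1.0
    return {"sensitivity": sens, "specificity": spec,
            "false_negative_rate": 1 - sens, "false_positive_rate": 1 - spec,
            "accuracy": (tp + tn) / len(truth)}

def monte_carlo(data, labels, n_replicas=100, train_frac=0.7, seed=1):
    """Score a mean-threshold classifier over repeated random splits."""
    rng = random.Random(seed)
    results = []
    for _ in range(n_replicas):
        idx = list(range(len(data)))
        rng.shuffle(idx)
        cut = int(train_frac * len(idx))
        train, test = idx[:cut], idx[cut:]
        threshold = sum(data[i] for i in train) / len(train)  # "training" step
        pred = [data[i] > threshold for i in test]
        truth = [labels[i] for i in test]
        results.append(metrics(truth, pred))
    return results

# Well-separated synthetic scores: untrained around 0-2, trained around 10-12.
data = [i / 10 for i in range(20)] + [10 + i / 10 for i in range(20)]
labels = [False] * 20 + [True] * 20
replicas = monte_carlo(data, labels)
```

Swapping the threshold rule for a real LDA or AdaBoost fit, while keeping the same split-and-score loop, reproduces the study's comparison design.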


2021 ◽  
Author(s):  
Brennan Klein ◽  
Erik Hoel ◽  
Anshuman Swain ◽  
Ross Griebenow ◽  
Michael Levin

Abstract The internal workings of biological systems are notoriously difficult to understand. Due to the prevalence of noise and degeneracy in evolved systems, in many cases the workings of everything from gene regulatory networks to protein–protein interactome networks remain black boxes. One consequence of this black-box nature is that it is unclear at which scale to analyze biological systems to best understand their function. We analyzed the protein interactomes of over 1800 species, containing in total 8 782 166 protein–protein interactions, at different scales. We show the emergence of higher order ‘macroscales’ in these interactomes and that these biological macroscales are associated with lower noise and degeneracy and therefore lower uncertainty. Moreover, the nodes in the interactomes that make up the macroscale are more resilient compared with nodes that do not participate in the macroscale. These effects are more pronounced in interactomes of eukaryota, as compared with prokaryota; these results hold even after sensitivity tests in which we recalculate the emergent macroscales under network simulations that add different edge weights to the interactomes. This points to plausible evolutionary adaptation for macroscales: biological networks evolve informative macroscales to gain benefits of both being uncertain at lower scales to boost their resilience, and also being ‘certain’ at higher scales to increase their effectiveness at information transmission. Our work explains some of the difficulty in understanding the workings of biological networks, since they are often most informative at a hidden higher scale, and demonstrates the tools to make these informative higher scales explicit.


2020 ◽  
Author(s):  
Silke D. Kühlwein ◽  
Nensi Ikonomi ◽  
Julian D. Schwab ◽  
Johann M. Kraus ◽  
K. Lenhard Rudolph ◽  
...  

Abstract Biological processes are rarely a consequence of single protein interactions but rather of complex regulatory networks. However, interaction graphs cannot adequately capture temporal changes. Among models that investigate dynamics, Boolean network models can approximate simple features of interaction graphs while also integrating dynamics. Nevertheless, dynamic analyses are time-consuming and, with a growing number of nodes, may become infeasible. Therefore, we set up a method to identify minimal sets of nodes that are able to determine network dynamics. This approach can depict dynamics without exhaustively calculating the complete network dynamics. Applying it to a variety of biological networks, we identified small sets of nodes sufficient to determine the dynamic behavior of the whole system. Further characterization of these sets showed that the majority of dynamic decision-makers were not static hubs. Our work suggests a paradigm shift, unraveling a new class of nodes distinct from static hubs and able to determine network dynamics.
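The setting above can be sketched on a toy example: exhaustively simulate a small synchronous Boolean network, collect its attractors, and then find a smallest node subset whose states alone distinguish the attractors. The 3-node network and this "determining set" criterion are illustrative assumptions, not the authors' algorithm:

```python
# Hedged sketch: synchronous Boolean network dynamics on a toy 3-node network,
# exhaustive attractor search, and a brute-force smallest "determining" subset.

from itertools import combinations, product

# Hypothetical update rules for nodes (x, y, z), applied synchronously:
# x is repressed by y, y is repressed by x (a toggle switch), z copies x.
def update(state):
    x, y, z = state
    return (not y, not x, x)

def attractors(rules, n=3):
    """Simulate all 2^n start states; return the set of attractors (cycles)."""
    found = set()
    for start in product([False, True], repeat=n):
        seen, state = {}, start
        while state not in seen:
            seen[state] = len(seen)
            state = rules(state)
        first = seen[state]  # index where the trajectory re-entered itself
        cycle = tuple(sorted(s for s, i in seen.items() if i >= first))
        found.add(cycle)
    return found

def determining_set(atts, n=3):
    """Smallest node subset whose values alone distinguish all attractors."""
    for size in range(1, n + 1):
        for nodes in combinations(range(n), size):
            proj = [frozenset(tuple(s[i] for i in nodes) for s in a)
                    for a in atts]
            if len(set(proj)) == len(proj):   # projections all distinct
                return nodes
    return tuple(range(n))

atts = attractors(update)       # two fixed points plus one 2-cycle
subset = determining_set(atts)  # a single node suffices here
```

Even this tiny toggle switch shows the paper's point in miniature: the full state space has 8 states, but one node's value already separates all three attractors, so the dynamics can be summarized without exhaustive bookkeeping over every node.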


2019 ◽  
Author(s):  
Scott D. Blain ◽  
Julia Longenecker ◽  
Rachael Grazioplene ◽  
Bonnie Klimes-Dougan ◽  
Colin G. DeYoung

Positive symptoms of schizophrenia and its extended phenotype—often termed psychoticism or positive schizotypy—are characterized by the inclusion of novel, erroneous mental contents. One promising framework for explaining positive symptoms involves “apophenia,” conceptualized here as a disposition toward false positive errors. Apophenia and positive symptoms have shown relations to Openness to Experience, and all of these constructs involve tendencies toward pattern seeking. Nonetheless, few studies have investigated the relations between psychoticism and non-self-report indicators of apophenia, let alone the role of normal personality variation. The current research used structural equation models to test associations between psychoticism, openness, intelligence, and non-self-report indicators of apophenia comprising false positive error rates on a variety of computerized tasks. In Sample 1, 1193 participants completed digit identification, theory of mind, and emotion recognition tasks. In Sample 2, 195 participants completed auditory signal detection and semantic word association tasks. Openness and psychoticism were positively correlated. Self-reported psychoticism, openness, and their shared variance were positively associated with apophenia, as indexed by false positive error rates, whether or not intelligence was controlled for. Apophenia was not associated with other personality traits, and openness and psychoticism were not associated with false negative errors. Standardized regression paths from openness-psychoticism to apophenia were in the range of .61 to .75. Findings provide insights into the measurement of apophenia and its relation to personality and psychopathology. Apophenia and pattern seeking may be promising constructs for unifying openness with the psychosis spectrum and for providing an explanation of positive symptoms. Results are discussed in the context of possible adaptive characteristics of apophenia, as well as potential risk factors for the development of psychotic disorders.


2020 ◽  
Author(s):  
Léo P.M. Diaz ◽  
Michael P.H. Stumpf

Abstract Network inference is a notoriously challenging problem. Inferred networks are associated with high uncertainty and are likely riddled with false positive and false negative interactions. Especially for biological networks, we do not have good ways of judging the performance of inference methods against real networks, and instead we often rely solely on performance against simulated data. Gaining confidence in networks inferred from real data therefore requires establishing reliable validation methods. Here, we argue that the expectation of mixing patterns in biological networks such as gene regulatory networks offers a reasonable starting point: interactions are more likely to occur between nodes with similar biological functions. We can quantify this behaviour using the assortativity coefficient, and here we show that the resulting heuristic, functional assortativity, offers a reliable and informative route for comparing different inference algorithms.
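The assortativity coefficient mentioned above can be sketched directly: Newman's discrete (attribute) assortativity compares the fraction of edges within categories against what category frequencies alone would predict. A minimal implementation over an edge list, with hypothetical functional annotations as the node categories:

```python
# Hedged sketch: Newman's discrete assortativity coefficient computed from a
# normalized mixing matrix e, with r = (tr(e) - sum_i a_i^2) / (1 - sum_i a_i^2)
# for undirected graphs. The toy genes and annotations are invented.

def attribute_assortativity(edges, category):
    """Assortativity of an undirected edge list over a node->category map."""
    cats = sorted(set(category.values()))
    idx = {c: i for i, c in enumerate(cats)}
    k = len(cats)
    e = [[0.0] * k for _ in range(k)]
    for u, v in edges:  # count both directions so the mixing matrix is symmetric
        e[idx[category[u]]][idx[category[v]]] += 1.0
        e[idx[category[v]]][idx[category[u]]] += 1.0
    total = sum(sum(row) for row in e)
    e = [[x / total for x in row] for row in e]
    trace = sum(e[i][i] for i in range(k))       # within-category edge fraction
    a = [sum(row) for row in e]                  # marginal category frequencies
    sum_ab = sum(ai * ai for ai in a)
    return (trace - sum_ab) / (1.0 - sum_ab)

# Toy gene network: same-function genes mostly interact with each other.
function = {"g1": "cycle", "g2": "cycle", "g3": "repair", "g4": "repair"}
edges = [("g1", "g2"), ("g3", "g4"), ("g1", "g3")]
r = attribute_assortativity(edges, function)
```

An r near 1 means edges stay within functional categories, as the functional-assortativity heuristic expects of a well-inferred regulatory network; a value near 0 means mixing is no better than chance, flagging an inference likely riddled with spurious links.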


2020 ◽  
Author(s):  
Erik Hoel ◽  
Brennan Klein ◽  
Anshuman Swain ◽  
Ross Grebenow ◽  
Michael Levin

Abstract The internal workings of biological systems are notoriously difficult to understand. Due to the prevalence of noise and degeneracy in evolved systems, in many cases the workings of everything from gene regulatory networks to protein-protein interactome networks remain black boxes. One consequence of this black-box nature is that it is unclear at which scale to analyze biological systems to best understand their function. We analyzed the protein interactomes of over 1800 species, containing in total 8,782,166 protein-protein interactions, at different scales. We demonstrate the emergence of higher order ‘macroscales’ in these interactomes and that these biological macroscales are associated with lower noise and degeneracy and therefore lower uncertainty. Moreover, the nodes in the interactomes that make up the macroscale are more resilient compared to nodes that do not participate in the macroscale. These effects are more pronounced in interactomes of Eukaryota, as compared to Prokaryota. This points to plausible evolutionary adaptation for macroscales: biological networks evolve informative macroscales to gain benefits of both being uncertain at lower scales to boost their resilience, and also being ‘certain’ at higher scales to increase their effectiveness at information transmission. Our work explains some of the difficulty in understanding the workings of biological networks, since they are often most informative at a hidden higher scale, and demonstrates the tools to make these informative higher scales explicit.


2009 ◽  
Vol 27 (4) ◽  
pp. 241-250
Author(s):  
Mark P. Widrlechner ◽  
Janette R. Thompson ◽  
Emily J. Kapler ◽  
Kristen Kordecki ◽  
Philip M. Dixon ◽  
...  

Abstract Accurate methods to predict the naturalization of non-native woody plants are key components of risk-management programs being considered by nursery and landscape professionals. The objective of this study was to evaluate four decision-tree models to predict naturalization (first tested in Iowa) on two new sets of data for non-native woody plants cultivated in the Chicago region. We identified life-history traits and native ranges for 193 species (52 known to naturalize and 141 not known to naturalize) in two study areas within the Chicago region. We used these datasets to test four models (one continental-scale and three regional-scale) as a form of external validation. Application of the continental-scale model resulted in classification rates of 72–76%, horticulturally limiting (false positive) error rates of 20–24%, and biologically significant (false negative) error rates of 5–6%. Two regional modifications to the continental model gave increased classification rates (85–93%) and generally lower horticulturally limiting error rates (16–22%), but similar biologically significant error rates (5–8%). A simpler method, the CART model developed from the Iowa data, resulted in lower classification rates (70–72%) and higher biologically significant error rates (8–10%), but, to its credit, it also had much lower horticulturally limiting error rates (5–10%). A combination of models to capture both high classification rates and low error rates will likely be the most effective until improved protocols based on multiple regional datasets can be developed and validated.
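The combination idea in the closing sentence has a simple mechanical form: flagging a species when either model flags it drives down the biologically significant (false negative) rate, while requiring both models to agree drives down the horticulturally limiting (false positive) rate. A minimal sketch with invented predictions (not the study's models or data):

```python
# Hedged sketch: combining two screening models by union ("either flags")
# versus intersection ("both flag") trades false negatives against false
# positives. The truth labels and model outputs below are illustrative.

def rates(truth, pred):
    """(false_positive_rate, false_negative_rate) for binary predictions."""
    fp = sum(p and not t for t, p in zip(truth, pred))
    fn = sum(t and not p for t, p in zip(truth, pred))
    n_neg = sum(not t for t in truth)
    n_pos = sum(truth)
    return fp / n_neg, fn / n_pos

# truth[i]: does species i naturalize?  model_a / model_b: model verdicts.
truth   = [True, True, True, True, False, False, False, False]
model_a = [True, True, True, False, True, False, False, False]
model_b = [True, True, False, True, False, True, False, False]

either = [a or b for a, b in zip(model_a, model_b)]   # catches more invaders
both = [a and b for a, b in zip(model_a, model_b)]    # rejects fewer safe plants
```

On this toy data each model alone has 25% rates of both kinds; the union eliminates false negatives at the cost of false positives, and the intersection does the reverse, which is why a combination of models must be tuned to the error type that matters most.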

