Pathway Biology Approach to Medicine

Author(s):  
Peter Ghazal

An increasing number of biological experiments and, more recently, clinically based studies are being conducted using large-scale genomic, proteomic and metabolomic techniques, which generate high-dimensional data sets. Such approaches require the adoption of both hypothesis- and data-driven strategies in the analysis and interpretation of results. In particular, data-mining and pattern-recognition methodologies have proven especially useful in this field. The increasing amount of information available from high-throughput experiments has initiated a move away from focussed, single-gene and single-protein investigations.

Abstract Systems biology provides a new approach to studying, analyzing, and ultimately controlling biological processes. Biological pathways represent a key sub-system level of organization that seamlessly performs complex information-processing and control tasks. The aim of pathway biology is to map and understand the cause-effect relationships and dependencies associated with the complex interactions of biological networks and systems. Drugs that therapeutically modulate the biological processes of disease are often developed with limited knowledge of the underlying complexity of their specific targets. Considering this combinatorial complexity from the outset might help identify potential causal relationships that could lead to a better understanding of the drug-target biology, as well as provide new biomarkers for modelling diagnosis and treatment response in patients. This chapter discusses the use of a pathway biology approach to modelling biological processes and providing a new framework for experimental medicine in the post-genomic era.

2019 ◽  
Vol 20 (1) ◽  
Author(s):  
Tuan-Minh Nguyen ◽  
Adib Shafi ◽  
Tin Nguyen ◽  
Sorin Draghici

Abstract

Background Many high-throughput experiments compare two phenotypes, such as disease vs. healthy, with the goal of understanding the underlying biological phenomena characterizing the given phenotype. Because of the importance of this type of analysis, more than 70 pathway analysis methods have been proposed so far. These can be categorized into two main categories: non-topology-based (non-TB) and topology-based (TB). Although some review papers discuss this topic from different aspects, there is no systematic, large-scale assessment of such methods. Furthermore, the majority of pathway analysis approaches rely on the assumption that p values are uniform under the null hypothesis, which is often not true.

Results This article presents the most comprehensive comparative study of pathway analysis methods available to date. We compare the actual performance of 13 widely used pathway analysis methods in over 1085 analyses. These comparisons were performed using 2601 samples from 75 human disease data sets and 121 samples from 11 knockout mouse data sets. In addition, we investigate the extent to which each method is biased under the null hypothesis. Together, these data and results constitute a reliable benchmark against which future pathway analysis methods could and should be tested.

Conclusion Overall, the results show that no method is perfect. In general, TB methods appear to perform better than non-TB methods. This is somewhat expected, since TB methods take into consideration the structure of the pathway, which is meant to describe the underlying phenomena. We also discover that most, if not all, of the listed approaches are biased and can produce skewed results under the null.
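The uniformity assumption questioned above is easy to probe empirically. A minimal sketch, with all counts and distributions chosen for illustration rather than taken from the study: generate two-group data with no real difference, collect the per-comparison p values, and check them against Uniform(0, 1).

```python
# Sketch of a null-bias check: both groups are drawn from the same
# distribution, so the resulting p values should be ~Uniform(0, 1).
# Sample sizes and trial counts are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def null_pvalues(n_per_group=10, n_trials=500):
    """p values from two-sample t-tests under a true null."""
    pvals = []
    for _ in range(n_trials):
        a = rng.normal(size=n_per_group)
        b = rng.normal(size=n_per_group)
        pvals.append(stats.ttest_ind(a, b).pvalue)
    return np.array(pvals)

p = null_pvalues()
# Kolmogorov-Smirnov test against Uniform(0, 1): a small KS p value
# would flag the kind of null bias the article measures in real methods.
ks = stats.kstest(p, "uniform")
print(f"KS statistic {ks.statistic:.3f}, p = {ks.pvalue:.3f}")
```

A biased method would fail this check, concentrating p values near 0 or 1 even when no signal exists.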


2009 ◽  
Vol 37 (2) ◽  
pp. 460-465 ◽  
Author(s):  
Deborah G. Maddocks ◽  
Medhat S. Alberry ◽  
George Attilakos ◽  
Tracey E. Madgett ◽  
Kin Choi ◽  
...  

After the revolutionary detection of ffDNA (free fetal DNA) in maternal circulation by real-time PCR in 1997 and advances in molecular techniques, NIPD (non-invasive prenatal diagnosis) is now a clinical reality. Non-invasive diagnosis using ffDNA has been implemented, allowing the detection of paternally inherited alleles, sex-linked conditions and some single-gene disorders and is a viable indicator of predisposition to certain obstetric complications [e.g. PET (pre-eclampsia)]. To date, the major use of ffDNA genotyping in the clinic has been for the non-invasive detection of the pregnancies that are at risk of HDFN (haemolytic disease of the fetus and newborn). This has seen numerous clinical services arising across Europe and many large-scale NIPD genotyping studies taking place using maternal plasma. Because of the interest in performing NIPD and the speed at which the research in this area was developing, the SAFE (Special Non-Invasive Advances in Fetal and Neonatal Evaluation) NoE (Network of Excellence) was founded. The SAFE project was set up to implement routine, cost-effective NIPD and neonatal screening through the creation of long-term partnerships within and beyond the European Community and has played a major role in the standardization of non-invasive RHD genotyping. Other research using ffDNA has focused on the amount of ffDNA present in the maternal circulation, with a view to pre-empting various complications of pregnancy. One of the key areas of interest in the non-invasive arena is the prenatal detection of aneuploid pregnancies, particularly Down's syndrome. Owing to the high maternal DNA background, detection of ffDNA from maternal plasma is very difficult; consequently, research in this area is now more focused on ffRNA to produce new biomarkers.


2015 ◽  
Vol 2015 ◽  
pp. 1-14 ◽  
Author(s):  
Md. Altaf-Ul-Amin ◽  
Tetsuo Katsuragi ◽  
Tetsuo Sato ◽  
Shigehiko Kanaya

Recently, biology has become a data-intensive science because of the huge data sets produced by high-throughput molecular biological experiments in diverse areas, including genomics, transcriptomics, proteomics, and metabolomics. These huge data sets have paved the way for system-level analysis of the processes and subprocesses of the cell. For system-level understanding, the elements of a system are first connected on the basis of their mutual relations to form a network. The construction and analysis of such biological networks have become highly popular among omics researchers. In this review, we briefly discuss both the biological background and the topological properties of the major types of omics networks, to facilitate a comprehensive understanding and to conceptualize the foundation of network biology.
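The construction step described above can be sketched in a few lines: treat each measured element as a node and add an edge whenever a mutual relation holds. Here the relation is an absolute Pearson correlation above a cutoff; the synthetic data and the 0.8 threshold are illustrative choices, not a recommendation.

```python
# Minimal network construction from omics-style profiles: nodes are
# elements, edges are strong pairwise correlations. One correlated
# pair is engineered into otherwise random data.
import numpy as np

rng = np.random.default_rng(1)
n_elements, n_samples = 6, 50
profiles = rng.normal(size=(n_elements, n_samples))
profiles[1] = profiles[0] + 0.1 * rng.normal(size=n_samples)  # correlated pair

corr = np.corrcoef(profiles)
cutoff = 0.8
edges = [(i, j) for i in range(n_elements)
         for j in range(i + 1, n_elements)
         if abs(corr[i, j]) >= cutoff]
print("edges:", edges)  # the engineered pair (0, 1) should appear
```

Real omics networks replace the correlation rule with relation types appropriate to the data: physical binding for protein networks, shared reactions for metabolic networks, and so on.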


2012 ◽  
Vol 279 (1745) ◽  
pp. 4156-4164 ◽  
Author(s):  
Hsuan-Chao Chiu ◽  
Christopher J. Marx ◽  
Daniel Segrè

Epistasis between mutations in two genes is thought to reflect an interdependence of their functions. While sometimes epistasis is predictable using mechanistic models, its roots seem, in general, hidden in the complex architecture of biological networks. Here, we ask how epistasis can be quantified based on the mathematical dependence of a system-level trait (e.g. fitness) on lower-level traits (e.g. molecular or cellular properties). We first focus on a model in which fitness is the difference between a benefit and a cost trait, both pleiotropically affected by mutations. We show that despite its simplicity, this model can be used to analytically predict certain properties of the ensuing distribution of epistasis, such as a global negative bias, resulting in antagonism between beneficial mutations, and synergism between deleterious ones. We next extend these ideas to derive a general expression for epistasis given an arbitrary functional dependence of fitness on other traits. This expression demonstrates how epistasis relative to fitness can emerge despite the absence of epistasis relative to lower level traits, leading to a formalization of the concept of independence between biological processes. Our results suggest that epistasis may be largely shaped by the pervasiveness of pleiotropic effects and modular organization in biological networks.
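The central observation above, that fitness-level epistasis can emerge even when mutations act independently on lower-level traits, can be reproduced with a toy calculation. The fitness map and numbers below are illustrative choices, not the paper's exact model; the additive-scale definition eps = f_ab - f_a - f_b + f_0 is one common convention.

```python
# Two mutations act purely additively on a benefit trait and a cost
# trait (no trait-level epistasis), yet a nonlinear fitness map
# produces nonzero epistasis at the fitness level.

def fitness(benefit, cost):
    # An arbitrary nonlinear map from lower-level traits to fitness;
    # the paper's benefit-minus-cost model is another member of this family.
    return benefit / (1.0 + cost)

b0, c0 = 1.0, 1.0          # wild-type trait values
db1, dc1 = 0.2, 0.1        # mutation 1's additive effects on benefit, cost
db2, dc2 = 0.3, 0.2        # mutation 2's additive effects

f0  = fitness(b0, c0)
f1  = fitness(b0 + db1, c0 + dc1)
f2  = fitness(b0 + db2, c0 + dc2)
f12 = fitness(b0 + db1 + db2, c0 + dc1 + dc2)

epsilon = f12 - f1 - f2 + f0   # additive-scale epistasis on fitness
print(f"epistasis on fitness: {epsilon:.4f}")
```

With these numbers epsilon comes out slightly negative, echoing the global negative bias the abstract describes, even though nothing in the trait-level effects interacts.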


2015 ◽  
Author(s):  
Lihua Zou

Despite large-scale efforts to systematically map the cancer genome, little is known about how the interplay of genetic and epigenetic alterations shapes the architecture of the transcriptome of human cancer. With the goal of constructing a system-level view of the deregulated pathways in cancer cells, we systematically investigated the functional organization of the transcriptomes of 10 tumor types using data sets generated by The Cancer Genome Atlas (TCGA) project. Our analysis indicates that the human cancer transcriptome is organized into well-conserved modules of co-expressed genes. In particular, our analysis identified a set of conserved gene modules with distinct cancer hallmark themes involving cell cycle regulation, angiogenesis, innate and adaptive immune response, differentiation, metabolism and regulation of protein phosphorylation. Our analysis provided a global view of the convergent transcriptome architecture of human cancer. The results of our analysis can serve as a foundation to link diverse genomic alterations to common transcriptomic features in human cancer.
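Module detection of the kind described above is typically done by clustering genes on a co-expression distance. A minimal sketch with synthetic data (real inputs would be TCGA expression matrices, and production pipelines use more elaborate methods such as WGCNA):

```python
# Cluster genes into co-expression modules: two hidden "drivers" each
# generate a group of correlated genes, and hierarchical clustering on
# correlation distance recovers the two modules.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

rng = np.random.default_rng(2)
n_samples = 40
base_a = rng.normal(size=n_samples)   # driver of module A
base_b = rng.normal(size=n_samples)   # driver of module B

# Six genes: three tracking each driver, plus measurement noise.
genes = np.array([base_a + 0.2 * rng.normal(size=n_samples) for _ in range(3)]
                 + [base_b + 0.2 * rng.normal(size=n_samples) for _ in range(3)])

dist = 1.0 - np.corrcoef(genes)        # correlation distance between genes
iu = np.triu_indices_from(dist, k=1)   # condensed form for scipy's linkage
tree = linkage(dist[iu], method="average")
modules = fcluster(tree, t=2, criterion="maxclust")
print("module labels:", modules)       # first three genes vs last three
```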


2011 ◽  
Vol 11 (2-3) ◽  
pp. 323-360 ◽  
Author(s):  
MARTIN GEBSER ◽  
TORSTEN SCHAUB ◽  
SVEN THIELE ◽  
PHILIPPE VEBER

Abstract We introduce an approach to detecting inconsistencies in large biological networks by using answer set programming. To this end, we build upon a recently proposed notion of consistency between biochemical/genetic reactions and high-throughput profiles of cell activity. We then present an approach based on answer set programming to check the consistency of large-scale data sets. Moreover, we extend this methodology to provide explanations for inconsistencies by determining minimal representations of conflicts. In practice, this can be used to identify unreliable data or to indicate missing reactions.
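The article encodes its consistency notion in answer set programming; the plain-Python sketch below illustrates only the underlying idea, on a made-up signed influence graph and observation profile: a node's observed change (+1/-1) is consistent if some regulator's observed change, combined with the edge sign, can explain it.

```python
# Toy sign-consistency check between a signed influence graph and
# observed up/down calls. Graph, signs, and observations are invented
# for illustration.

edges = {            # (regulator, target) -> edge sign (+1 activation, -1 inhibition)
    ("a", "b"): +1,
    ("a", "c"): -1,
}
observed = {"a": +1, "b": +1, "c": +1}   # measured up/down calls

def inconsistent_nodes(edges, observed):
    """Targets whose observed sign no incoming influence can explain."""
    bad = []
    targets = {t for (_, t) in edges}
    for t in targets:
        influences = [observed[r] * s for (r, t2), s in edges.items()
                      if t2 == t and r in observed]
        if influences and observed[t] not in influences:
            bad.append(t)
    return sorted(bad)

print(inconsistent_nodes(edges, observed))  # "c" is up but only inhibited
```

Here node c is observed up, but its only regulator is an activator that represses it, so c is flagged; the ASP formulation additionally computes minimal conflict sets rather than single flagged nodes.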


2017 ◽  
Author(s):  
Joseph L. Natale ◽  
David Hofmann ◽  
Damián G. Hernández ◽  
Ilya Nemenman

Much of contemporary systems biology owes its success to the abstraction of a network: the idea that diverse kinds of molecular, cellular, and organismal species and interactions can be modeled as relational nodes and edges in a graph of dependencies. Since the advent of high-throughput data acquisition technologies in fields such as genomics, metabolomics, and neuroscience, the automated inference and reconstruction of such interaction networks directly from large sets of activation data, commonly known as reverse-engineering, has become a routine procedure. Whereas early attempts at network reverse-engineering focused predominantly on producing maps of system architectures with minimal predictive modeling, reconstructions now play instrumental roles in answering questions about the statistics and dynamics of the underlying systems they represent. Many of these predictions have clinical relevance, suggesting novel paradigms for drug discovery and disease treatment. While other reviews focus predominantly on the details and effectiveness of individual network inference algorithms, here we examine the emerging field as a whole. We first summarize several key application areas in which inferred networks have made successful predictions. We then outline the two major classes of reverse-engineering methodologies, emphasizing that the type of prediction one aims to make dictates the algorithms one should employ. We conclude by discussing whether recent breakthroughs justify the computational costs of large-scale reverse-engineering sufficiently to admit it as a mainstay in the quantitative analysis of living systems.
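A toy version of the reverse-engineering task this review surveys: infer a directed interaction from activation time series. The lagged-correlation score, dynamics, and parameters below are illustrative; real inference algorithms in this field are far more elaborate.

```python
# Infer edge direction from time series: y is driven by x with a
# one-step lag, so the lagged correlation x -> y is strong while
# y -> x is near zero.
import numpy as np

rng = np.random.default_rng(3)
T = 300
x = rng.normal(size=T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.8 * x[t - 1] + 0.2 * rng.normal()

def lagged_corr(src, dst, lag=1):
    """Correlation between src at time t and dst at time t + lag."""
    return float(np.corrcoef(src[:-lag], dst[lag:])[0, 1])

score_xy = lagged_corr(x, y)   # true driving direction
score_yx = lagged_corr(y, x)   # spurious reverse direction
print(f"x->y {score_xy:.2f}, y->x {score_yx:.2f}")
```

The asymmetry between the two scores is what lets the inferred network be directed, which is one dividing line between the purely statistical and the dynamical classes of methods the review contrasts.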


2021 ◽  
Vol 12 (1) ◽  
Author(s):  
Dragan Maric ◽  
Jahandar Jahanipour ◽  
Xiaoyang Rebecca Li ◽  
Aditi Singh ◽  
Aryan Mobiny ◽  
...  

Abstract Mapping biological processes in brain tissues requires piecing together numerous histological observations of multiple tissue samples. We present a direct method that generates readouts for a comprehensive panel of biomarkers from serial whole-brain slices, characterizing all major brain cell types, at scales ranging from subcellular compartments, individual cells, and local multi-cellular niches to whole-brain regions from each slice. We use iterative cycles of optimized 10-plex immunostaining with 10-color epifluorescence imaging to accumulate highly enriched image datasets from individual whole-brain slices, from which seamless signal-corrected mosaics are reconstructed. Specific fluorescent signals of interest are isolated computationally, rejecting autofluorescence, imaging noise, cross-channel bleed-through, and cross-labeling. Reliable large-scale cell detection and segmentation are achieved using deep neural networks. Cell phenotyping is performed by analyzing unique biomarker combinations over appropriate subcellular compartments. This approach can accelerate pre-clinical drug evaluation and system-level brain histology studies by simultaneously profiling multiple biological processes in their native anatomical context.
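The phenotyping step described above can be caricatured as a rule table over thresholded marker signals. The marker names, thresholds, and rules below are invented for illustration and are not the paper's actual panel or decision logic.

```python
# Assign each detected cell a type from its combination of thresholded
# biomarker intensities (all values hypothetical).

RULES = {                       # tuple of positive markers -> phenotype
    ("NeuN",): "neuron",
    ("GFAP",): "astrocyte",
    ("Iba1",): "microglia",
}

def phenotype(intensities, threshold=0.5):
    """Return the phenotype whose marker set exactly matches the
    cell's above-threshold markers, else 'unclassified'."""
    positive = tuple(sorted(m for m, v in intensities.items() if v >= threshold))
    for markers, label in RULES.items():
        if tuple(sorted(markers)) == positive:
            return label
    return "unclassified"

cells = [
    {"NeuN": 0.9, "GFAP": 0.1, "Iba1": 0.0},
    {"NeuN": 0.0, "GFAP": 0.8, "Iba1": 0.1},
    {"NeuN": 0.6, "GFAP": 0.7, "Iba1": 0.0},   # ambiguous double-positive
]
print([phenotype(c) for c in cells])
```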


Author(s):  
Andrew Reid ◽  
Julie Ballantyne

In an ideal world, assessment should be synonymous with effective learning and reflect the intricacies of the subject area. It should also be aligned with the ideals of education: to provide equitable opportunities for all students to achieve and to allow both appropriate differentiation for varied contexts and students and comparability across various contexts and students. This challenge is made more difficult in circumstances in which the contexts are highly heterogeneous, for example in the state of Queensland, Australia. Assessment in music challenges schooling systems in unique ways because teaching and learning in music are often naturally differentiated and diverse, yet assessment often calls for standardization. While each student and teacher has individual, evolving musical pathways in life, the syllabus and the system require consistency and uniformity. The challenge, then, is to provide diverse, equitable, and quality opportunities for all children to learn and achieve to the best of their abilities. This chapter discusses the design and implementation of large-scale curriculum as experienced in secondary schools in Queensland, Australia. The experiences detailed explore the possibilities offered through externally moderated school-based assessment. Also discussed is the centrality of system-level clarity of purpose, principles and processes, and the provision of supportive networks and mechanisms to foster autonomy for a diverse range of music educators and contexts. Implications for education systems that desire diversity, equity, and quality are discussed, and the conclusion provokes further conceptualization and action on behalf of students, teachers, and the subject area of music.

