Hierarchical Harmonization of Atom-Resolved Metabolic Reactions Across Metabolic Databases


Metabolites ◽  
2021 ◽  
Vol 11 (7) ◽  
pp. 431
Author(s):  
Huan Jin ◽  
Hunter N. B. Moseley

Metabolic models have proven to be useful tools in systems biology and have been successfully applied to various research fields in a wide range of organisms. A relatively complete metabolic network is a prerequisite for deriving reliable metabolic models. The first step in constructing a metabolic network is to harmonize compounds and reactions across different metabolic databases. However, effectively integrating data from various sources remains a major challenge. Incomplete and inconsistent atomistic details in compound representations across databases are a major limiting factor. Here, we optimized a subgraph isomorphism detection algorithm to validate generic compound pairs. Moreover, we defined a set of harmonization relationship types between compounds to deal with inconsistent chemical details while successfully capturing atom-level characteristics, enabling more complete compound harmonization across metabolic databases. In total, 15,704 compound pairs across the KEGG (Kyoto Encyclopedia of Genes and Genomes) and MetaCyc databases were detected. Furthermore, utilizing the classification of compound pairs and the EC (Enzyme Commission) numbers of reactions, we established hierarchical relationships between metabolic reactions, enabling the harmonization of 3856 reaction pairs. In addition, we created and used atom-specific identifiers to evaluate the consistency of atom mappings within and between harmonized reactions, detecting some consistency issues between the reaction and compound descriptions in these metabolic databases.
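The validation step above hinges on subgraph isomorphism detection between molecular graphs. As a minimal illustration of the idea (a brute-force sketch, not the authors' optimized algorithm; the toy heavy-atom compound graphs below are hypothetical):

```python
from itertools import permutations

def is_subgraph_isomorphic(pattern, target):
    """Brute-force subgraph isomorphism check: can `pattern` be
    embedded in `target` with matching atom labels and bonds?

    Each graph is (atoms, bonds): atoms maps index -> element,
    bonds is a set of frozensets of bonded atom-index pairs.
    Only feasible for very small molecular graphs.
    """
    p_atoms, p_bonds = pattern
    t_atoms, t_bonds = target
    p_idx = list(p_atoms)
    for perm in permutations(t_atoms, len(p_idx)):
        mapping = dict(zip(p_idx, perm))
        # Atom elements must match under the candidate mapping.
        if any(p_atoms[i] != t_atoms[mapping[i]] for i in p_idx):
            continue
        # Every pattern bond must map onto an existing target bond.
        if all(frozenset(mapping[i] for i in bond) in t_bonds
               for bond in p_bonds):
            return True
    return False

# Heavy-atom toy graphs: the ethanol backbone (C-C-O) embeds
# in the 1-propanol backbone (C-C-C-O).
ethanol = ({0: "C", 1: "C", 2: "O"},
           {frozenset({0, 1}), frozenset({1, 2})})
propanol = ({0: "C", 1: "C", 2: "C", 3: "O"},
            {frozenset({0, 1}), frozenset({1, 2}), frozenset({2, 3})})
print(is_subgraph_isomorphic(ethanol, propanol))  # True
```

Practical implementations use pruned search (e.g., VF2-style matching) rather than exhaustive permutation, but the atom-label and bond-preservation conditions are the same.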


2015 ◽  
Author(s):  
Kazuhiro Takemoto

The evolution of species habitat range is an important topic across a wide range of research fields. In higher organisms, habitat range evolution is generally associated with genetic events such as gene duplication. However, the specific factors that determine habitat variability remain unclear at higher levels of biological organization (e.g., biochemical networks). One widely accepted hypothesis developed from both theoretical and empirical analyses is that habitat variability promotes network modularity; however, this relationship has not yet been directly tested in higher organisms. Therefore, I investigated the relationship between habitat variability and metabolic network modularity using compound and enzymatic networks in flies and mammals. Contrary to expectation, there was no clear positive correlation between habitat variability and network modularity. As an exception, network modularity increased with habitat variability in the enzymatic networks of flies. However, the observed association was likely an artifact, and the frequency of gene duplication appears to be the main factor contributing to network modularity. These findings raise the question of whether or not there is a general mechanism for habitat range expansion at a higher level (i.e., above the gene scale). This study suggests that the currently widely accepted hypothesis for habitat variability should be reconsidered.
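Network modularity in studies like this one is typically quantified with the Newman-Girvan measure Q, which compares within-community edge density against a degree-preserving random expectation. A minimal sketch of the computation (the toy graph and community labels are illustrative, not data from the study):

```python
def modularity(edges, communities):
    """Newman-Girvan modularity Q for an undirected graph.

    edges: list of (u, v) pairs; communities: dict node -> community id.
    Q = (1/2m) * sum_ij [A_ij - k_i*k_j/(2m)] * delta(c_i, c_j)
    """
    m = len(edges)
    degree = {}
    for u, v in edges:
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1
    adjacent = {(u, v) for u, v in edges} | {(v, u) for u, v in edges}
    q = 0.0
    for i in degree:
        for j in degree:
            if communities[i] != communities[j]:
                continue  # delta term: only same-community pairs count
            a_ij = 1.0 if (i, j) in adjacent else 0.0
            q += a_ij - degree[i] * degree[j] / (2.0 * m)
    return q / (2.0 * m)

# Two triangles joined by a single edge: a clearly modular network.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
parts = {0: "a", 1: "a", 2: "a", 3: "b", 4: "b", 5: "b"}
print(round(modularity(edges, parts), 3))  # 0.357
```

Higher Q indicates denser within-community connectivity than expected by chance; the hypothesis under test is whether Q correlates with habitat variability.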


2020 ◽  
Vol 38 (15_suppl) ◽  
pp. e17567-e17567
Author(s):  
Samira Masoudi ◽  
Sherif Mehralivand ◽  
Stephanie Harmon ◽  
Stephanie Walker ◽  
Peter A. Pinto ◽  
...  

e17567 Background: Patients diagnosed with prostate cancer undergo computed tomography (CT) for pretreatment staging to rule out bone metastases. However, detection and classification of bone lesions on CT is challenging and subject to inter-reader variability. We present a cascaded deep learning algorithm for automatic detection and classification of bone lesions on staging CT in patients diagnosed with prostate cancer. Methods: CT scans from 56 patients with histopathologically proven prostate cancer were included. An expert radiologist annotated the extent of individual bone lesions (N = 4217) and labelled all regions as either benign or malignant. All scans were anonymized and normalized at the patient level prior to training. Our method is a two-stage framework: 1) a detection algorithm: inspired by the YOLOv3 detection method, we designed a network with a Darknet-53 backbone pretrained on the COCO dataset and four final scaling blocks to accommodate the wide range of lesion diameters; 2) a classification algorithm: we formed a binary classifier based on ResNet-50 pretrained on the ImageNet dataset. We used a 90%/10% train/validation split for this study. To facilitate the learning process, horizontal flipping, relative zooming, and mean weighted averaging were used for data augmentation in stage 1. In contrast, the classification algorithm took advantage of synthesized patches generated by a Deep Convolutional Generative Adversarial Network (DC-GAN) for augmentation. Results: We achieved real-time performance (~120 ms per slice) on our validation set, with a median penalty of 0.3 (0.02-0.78) false positives per true positive within each patient. Overall performance of our detection algorithm was 81% sensitivity and 86% positive predictive value.
In stage 2, we obtained an accuracy of 89% for correct classification of benign versus malignant bone lesions with no augmentation, which improved to 91% when we incorporated the augmented data for training. Conclusions: Our two-stage algorithm sequentially detects and classifies bone lesions on CT of prostate cancer patients with strong performance. To further improve our results and their generalizability, we are accruing more data from different centers. Eventually, with a larger dataset, both algorithms will be cascaded and trained as a whole unit to become a single tool for fully automatic detection and classification, serving as an aid for radiologists who read staging CTs.
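The reported sensitivity and positive predictive value follow directly from detection counts. A quick sketch of the formulas (the counts below are hypothetical, chosen only to reproduce the reported percentages):

```python
def detection_metrics(tp, fp, fn):
    """Sensitivity (recall) and positive predictive value (precision)
    from detection counts: true positives, false positives, false negatives."""
    sensitivity = tp / (tp + fn)  # fraction of real lesions that were found
    ppv = tp / (tp + fp)          # fraction of detections that are real lesions
    return sensitivity, ppv

# Hypothetical counts chosen only to illustrate the formulas.
sens, ppv = detection_metrics(tp=81, fp=13, fn=19)
print(f"sensitivity={sens:.2f}, PPV={ppv:.2f}")  # sensitivity=0.81, PPV=0.86
```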


2019 ◽  
Vol 28 (3) ◽  
pp. 1257-1267 ◽  
Author(s):  
Priya Kucheria ◽  
McKay Moore Sohlberg ◽  
Jason Prideaux ◽  
Stephen Fickas

Purpose: An important predictor of postsecondary academic success is an individual's reading comprehension skills. Postsecondary readers apply a wide range of behavioral strategies to process text for learning purposes. Currently, no tools exist to detect a reader's use of strategies. The primary aim of this study was to develop Read, Understand, Learn, & Excel, an automated tool designed to detect reading strategy use, and to explore its accuracy in detecting strategies when students read digital, expository text. Method: An iterative design was used to develop the computer algorithm for detecting 9 reading strategies. Twelve undergraduate students read 2 expository texts that were equated for length and complexity. A human observer documented the strategies employed by each reader, whereas the computer used digital sequences to detect the same strategies. Data were then coded and analyzed to determine agreement between the 2 sources of strategy detection (i.e., the computer and the observer). Results: Agreement between the computer- and human-coded strategies was 75% or higher for 6 of the 9 strategies. Only 3 of the 9 strategies (previewing content, evaluating the amount of remaining text, and periodic review and/or iterative summarizing) had less than 60% agreement. Conclusion: Read, Understand, Learn, & Excel provides proof of concept that a reader's approach to engaging with academic text can be objectively and automatically captured. Clinical implications and suggestions to improve the sensitivity of the code are discussed. Supplemental Material: https://doi.org/10.23641/asha.8204786
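Agreement between the human observer and the computer can be computed as simple per-event percent agreement. A minimal sketch (the strategy labels below are invented for illustration; the study's actual coding scheme may differ):

```python
def percent_agreement(human, computer):
    """Per-event percent agreement between two coders.

    human, computer: equal-length lists of strategy labels assigned
    to the same sequence of reading events.
    """
    if len(human) != len(computer):
        raise ValueError("codings must cover the same events")
    matches = sum(h == c for h, c in zip(human, computer))
    return 100.0 * matches / len(human)

# Invented labels: 4 of 6 events coded identically by both sources.
human    = ["highlight", "lookup", "review", "lookup", "preview", "highlight"]
computer = ["highlight", "lookup", "review", "skim",   "preview", "lookup"]
print(round(percent_agreement(human, computer), 1))  # 66.7
```

Percent agreement is the simplest such measure; chance-corrected statistics (e.g., Cohen's kappa) are often preferred when some strategies are much more frequent than others.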


Author(s):  
Grigorii I. Nesmeyanov

The article formulates the main questions related to the concept of context. Context is considered as a current interdisciplinary field of research. There are many definitions of context in dictionaries and across the humanities (including scientific disciplines). In connection with this issue, various methodological approaches have arisen in the humanities, which can be designated by the umbrella term "contextual". Using the example of one such approach, the sociological poetics of the Bakhtin Circle, the author substantiates the possibility of creating an interdisciplinary classification of contextual approaches. That classification may include scientific developments from different years and research fields, including: philosophical hermeneutics, a number of approaches in Russian and foreign literary theory (M.M. Bakhtin, Yu.M. Lotman, B.M. Eichenbaum, F. Moretti, A. Compagnon, etc.), intellectual history, discourse analysis, etc.


2021 ◽  
pp. 104973232199379
Author(s):  
Olaug S. Lian ◽  
Sarah Nettleton ◽  
Åge Wifstad ◽  
Christopher Dowrick

In this article, we qualitatively explore the manner and style in which medical encounters between patients and general practitioners (GPs) are mutually conducted, as exhibited in situ in 10 consultations sourced from the One in a Million: Primary Care Consultations Archive in England. Our main objectives are to identify interactional modes, to develop a classification of these modes, and to uncover how modes emerge and shift both within and between consultations. Deploying an interactional perspective and a thematic and narrative analysis of consultation transcripts, we identified five distinctive interactional modes: question and answer (Q&A) mode, lecture mode, probabilistic mode, competition mode, and narrative mode. Most modes are GP-led. Mode shifts within consultations generally map on to the chronology of the medical encounter. Patient-led narrative modes are initiated by patients themselves, which demonstrates agency. Our classification of modes derives from complete naturally occurring consultations, covering a wide range of symptoms, and may have general applicability.


Computers ◽  
2021 ◽  
Vol 10 (6) ◽  
pp. 82
Author(s):  
Ahmad O. Aseeri

Deep learning-based methods have emerged as one of the most effective and practical solutions to a wide range of medical problems, including the diagnosis of cardiac arrhythmias. A critical step toward early diagnosis of many heart dysfunctions is the accurate detection and classification of cardiac arrhythmias, which can be achieved via electrocardiograms (ECGs). Motivated by the desire to enhance conventional clinical methods for diagnosing cardiac arrhythmias, we introduce an uncertainty-aware deep learning-based predictive model for accurate large-scale classification of cardiac arrhythmias, successfully trained and evaluated on three benchmark medical datasets. In addition, because the quantification of uncertainty estimates is vital for clinical decision-making, our method incorporates a probabilistic approach to capture the model's uncertainty using a Bayesian-based approximation method, without introducing additional parameters or significant changes to the network's architecture. Although many arrhythmia classification solutions with various ECG feature engineering techniques have been reported in the literature, the AI-based probabilistic method introduced in this paper outperforms existing methods, achieving F1 scores of 98.62% and 96.73% on the MIT-BIH dataset (20 annotations), 99.23% and 96.94% on the INCART dataset (eight annotations), and 97.25% and 96.73% on the BIDMC dataset (six annotations), for the deep ensemble and probabilistic modes, respectively. We also demonstrate the method's high performance and statistical reliability in numerical experiments on language modeling using the gating mechanism of recurrent neural networks.
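One common Bayesian-approximation approach of the kind described is to run several stochastic forward passes (e.g., Monte Carlo dropout) and score uncertainty as the entropy of the mean predictive distribution. A minimal sketch of that scoring step, not the authors' exact formulation:

```python
import math

def predictive_entropy(samples):
    """Entropy (nats) of the mean predictive distribution over T
    stochastic forward passes, a common Bayesian-approximation
    uncertainty score (e.g., with Monte Carlo dropout).

    samples: list of T probability vectors over the same classes.
    """
    n_classes = len(samples[0])
    mean = [sum(s[k] for s in samples) / len(samples)
            for k in range(n_classes)]
    return -sum(p * math.log(p) for p in mean if p > 0.0)

# Confident prediction: every pass agrees -> low entropy.
confident = [[0.98, 0.01, 0.01]] * 5
# Uncertain prediction: passes disagree -> higher entropy.
uncertain = [[0.9, 0.05, 0.05], [0.1, 0.8, 0.1],
             [0.3, 0.3, 0.4], [0.5, 0.25, 0.25], [0.2, 0.6, 0.2]]
print(predictive_entropy(confident) < predictive_entropy(uncertain))  # True
```

In a clinical setting, high-entropy predictions can be flagged for human review rather than accepted automatically.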


Metabolites ◽  
2021 ◽  
Vol 11 (2) ◽  
pp. 113
Author(s):  
Julia Koblitz ◽  
Sabine Will ◽  
S. Riemer ◽  
Thomas Ulas ◽  
Meina Neumann-Schaal ◽  
...  

Genome-scale metabolic models are of high interest in a number of different research fields. Flux balance analysis (FBA) and other mathematical methods allow the prediction of the steady-state behavior of metabolic networks under different environmental conditions. However, many existing applications for flux optimizations do not provide a metabolite-centric view on fluxes. Metano is a standalone, open-source toolbox for the analysis and refinement of metabolic models. While flux distributions in metabolic networks are predominantly analyzed from a reaction-centric point of view, the Metano methods of split-ratio analysis and metabolite flux minimization also allow a metabolite-centric view on flux distributions. In addition, we present MMTB (Metano Modeling Toolbox), a web-based toolbox for metabolic modeling including a user-friendly interface to Metano methods. MMTB assists during bottom-up construction of metabolic models by integrating reaction and enzymatic annotation data from different databases. Furthermore, MMTB is especially designed for non-experienced users by providing an intuitive interface to the most commonly used modeling methods and offering novel visualizations. Additionally, MMTB allows users to upload their models, which can in turn be explored and analyzed by the community. We introduce MMTB by two use cases, involving a published model of Corynebacterium glutamicum and a newly created model of Phaeobacter inhibens.
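Split-ratio analysis takes the metabolite-centric view described above: for a given metabolite, it reports the share of its total consuming (or producing) flux carried by each reaction. A minimal sketch with a toy branch point (reaction names, stoichiometry, and flux values are invented for illustration, not taken from Metano or MMTB):

```python
def split_ratios(fluxes, stoichiometry, metabolite):
    """Metabolite-centric split ratios: the share of the total
    consuming flux of `metabolite` carried by each reaction.

    fluxes: dict reaction -> flux value (assumed non-negative here).
    stoichiometry: dict reaction -> dict metabolite -> coefficient
    (negative = consumed, positive = produced).
    """
    consumption = {}
    for rxn, coeffs in stoichiometry.items():
        coeff = coeffs.get(metabolite, 0.0)
        if coeff < 0:  # this reaction consumes the metabolite
            consumption[rxn] = -coeff * fluxes.get(rxn, 0.0)
    total = sum(consumption.values())
    return {rxn: v / total for rxn, v in consumption.items()}

# Toy branch point: pyruvate consumed by two competing pathways.
stoich = {
    "pdh": {"pyr": -1.0, "accoa": 1.0},   # pyruvate dehydrogenase
    "ldh": {"pyr": -1.0, "lac": 1.0},     # lactate dehydrogenase
    "pyk": {"pep": -1.0, "pyr": 1.0},     # produces pyruvate; excluded
}
flux = {"pdh": 7.5, "ldh": 2.5, "pyk": 10.0}
print(split_ratios(flux, stoich, "pyr"))  # {'pdh': 0.75, 'ldh': 0.25}
```

The flux values themselves would come from an FBA solution; the split ratios then summarize how a branch-point metabolite is partitioned between pathways.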


Animals ◽  
2021 ◽  
Vol 11 (3) ◽  
pp. 673
Author(s):  
Alexandra L. Whittaker ◽  
Yifan Liu ◽  
Timothy H. Barker

The Mouse Grimace Scale (MGS) was developed 10 years ago as a method for assessing pain through the characterisation of changes in five facial features or action units. The strength of the technique is that it is proposed to be a measure of spontaneous or non-evoked pain. The time is opportune to map all of the research into the MGS, with a particular focus on the methods used and the technique's utility across a range of mouse models. A comprehensive scoping review of the academic literature was performed. A total of 48 articles met our inclusion criteria and were included in this review. The MGS has been employed mainly in the evaluation of acute pain, particularly in the pain and neuroscience research fields. There has, however, been use of the technique in a wide range of fields, and based on limited evidence it does appear to have utility for pain assessment across a spectrum of animal models. Use of the method allows the detection of pain of longer duration, up to a month after the initial insult. There has been less use of the technique with real-time methods, and this is an area in need of further research.


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Sakthi Kumar Arul Prakash ◽  
Conrad Tucker

This work investigates the ability to classify misinformation in online social media networks in a manner that avoids the need for ground-truth labels. Rather than approach the classification problem as a task for humans or machine learning algorithms, this work leverages user–user and user–media (i.e., media likes) interactions to infer the type of information (fake vs. authentic) being spread, without needing to know the actual details of the information itself. To study the inception and evolution of user–user and user–media interactions over time, we create an experimental platform that mimics the functionality of real-world social media networks. We develop a graphical model that considers the evolution of this network topology to model the uncertainty (entropy) propagation when fake and authentic media disseminate across the network. The creation of a real-world social media network enables a wide range of hypotheses to be tested pertaining to users, their interactions with other users, and with media content. The discovery that the entropy of user–user and user–media interactions approximates fake and authentic media likes enables us to classify fake media in an unsupervised manner.
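The entropy signal at the heart of this approach can be illustrated with the Shannon entropy of a user's like distribution across media items. A minimal sketch (the interaction counts are invented for illustration, not data from the experimental platform):

```python
import math

def interaction_entropy(counts):
    """Shannon entropy (bits) of an interaction distribution,
    e.g. how a user's likes spread across media items."""
    total = sum(counts)
    probs = [c / total for c in counts if c > 0]
    return -sum(p * math.log2(p) for p in probs)

# A user whose likes concentrate on one item vs. one whose likes
# spread evenly: concentrated interactions carry lower entropy.
focused = interaction_entropy([9, 1])
diffuse = interaction_entropy([5, 5])
print(focused < diffuse)  # True
```

In the unsupervised setting described, such entropy profiles of user–user and user–media interactions serve as the classification signal, with no ground-truth labels on the media itself.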

