Findings from the Section on Bioinformatics and Translational Informatics

2017, Vol. 26 (01), pp. 188-192
Author(s): H. Dauchel, T. Lecroq

Summary Objective: To summarize excellent current research and propose a selection of best papers published in 2016 in the field of Bioinformatics and Translational Informatics with applications in the health domain and clinical care. Methods: We provide a synopsis of the articles selected for the IMIA Yearbook 2017, from which we attempt to derive a synthetic overview of current and future activities in the field. As in 2016, a first step of selection was performed by querying MEDLINE with a list of MeSH descriptors complemented by a list of terms adapted to the section coverage. Each section editor separately evaluated the set of 951 articles returned, and the evaluation results were merged to retain 15 candidate best papers for peer review. Results: The selection and evaluation process of papers published in the Bioinformatics and Translational Informatics field yielded four excellent articles, focusing this year on the secondary use and massive integration of multi-omics data for cancer genomics and non-cancer complex diseases. The papers present methods to study the functional impact of genetic variations, either at the level of transcription or at the level of pathways and networks. Conclusions: Current research activities in Bioinformatics and Translational Informatics with applications in the health domain continue to explore new algorithms and statistical models to manage, integrate, and interpret large-scale genomic datasets. As addressed by some of the selected papers, future trends include the question of international collaborative sharing of clinical and omics data, and the implementation of intelligent systems to enhance routine medical genomics.
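
The yearbook abstracts in this section all describe the same first-pass selection step: a MEDLINE query combining MeSH descriptors with section-specific free-text terms and a publication-year filter. As a rough illustration of such a query only (the editors' actual search string is not given here; the MeSH terms, free-text term, and date range below are assumptions), the following Python sketch uses the public NCBI E-utilities esearch endpoint:

```python
import requests

# NCBI E-utilities "esearch" endpoint (public, documented by NCBI).
ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

# Hypothetical query: a few MeSH descriptors plus a free-text term and a
# publication-year filter, loosely mimicking a yearbook-style first pass.
term = (
    '("Computational Biology"[MeSH] OR "Genomics"[MeSH] OR '
    '"translational bioinformatics"[Title/Abstract]) '
    'AND ("2016/01/01"[PDAT] : "2016/12/31"[PDAT])'
)

resp = requests.get(
    ESEARCH,
    params={"db": "pubmed", "term": term, "retmode": "json", "retmax": 20},
    timeout=30,
)
resp.raise_for_status()
result = resp.json()["esearchresult"]

print("Total matching articles:", result["count"])
print("First PMIDs:", result["idlist"])
```

In the workflow described above, the returned set (951 articles for the 2017 Yearbook) would then be evaluated separately by each section editor before the results are merged.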


2016, Vol. 25 (01), pp. 207-210
Author(s): T. Lecroq, H. Dauchel

Summary Objectives: To summarize excellent current research and propose a selection of best papers published in 2015 in the field of Bioinformatics and Translational Informatics with application in the health domain and clinical care. Method: We provide a synopsis of the articles selected for the IMIA Yearbook 2016, from which we attempt to derive a synthetic overview of current and future activities in the field. As in the previous year, a first step of selection was performed by querying MEDLINE with a list of MeSH descriptors complemented by a list of terms adapted to the section. Each section editor separately evaluated the set of 1,566 articles, and the evaluation results were merged to retain 14 articles for peer review. Results: The selection and evaluation process of this Yearbook's section on Bioinformatics and Translational Informatics yielded four excellent articles, focusing this year on the management of large-scale datasets and on genomic medicine, that are mainly new method-based papers. Three articles explore the high potential of re-analysing previously collected data, here from The Cancer Genome Atlas (TCGA) project, and one article presents an original analysis of genomic data from sub-Saharan African populations. Conclusions: Current research activities in Bioinformatics and Translational Informatics with application in the health domain continue to explore new algorithms and statistical models to manage and interpret large-scale genomic datasets. From population-wide genome sequencing for cataloging genomic variants to understanding their functional impact on pathways and molecular interactions for a given pathology, making sense of large genomic data requires a sustained effort to address the issue of clinical translation for precise diagnostics and personalized medicine.


2015, Vol. 24 (01), pp. 170-173
Author(s): T. Lecroq, L. F. Soualmia

Summary Objectives: To summarize excellent current research in the field of Bioinformatics and Translational Informatics with application in the health domain and clinical care. Method: We provide a synopsis of the articles selected for the IMIA Yearbook 2015, from which we attempt to derive a synthetic overview of current and future activities in the field. As in the previous year, a first step of selection was performed by querying MEDLINE with a list of MeSH descriptors complemented by a list of terms adapted to the section. Each section editor separately evaluated the set of 1,594 articles, and the evaluation results were merged to retain 15 articles for peer review. Results: The selection and evaluation process of this Yearbook’s section on Bioinformatics and Translational Informatics yielded four excellent articles regarding data management and genome medicine that are mainly tool-based papers. In the first article, the authors present PPISURV, a tool for uncovering the role of specific genes in cancer survival outcome. The second article describes the classifier PredictSNP, which combines six well-performing tools for predicting disease-related mutations. In the third article, by presenting a high-coverage map of the human proteome obtained with high-resolution mass spectrometry, the authors highlight the need for mass spectrometry to complement genome annotation. The fourth article is also related to patient survival and decision support: the authors present data mining methods for large-scale datasets of past transplants, with the objective of identifying chances of survival. Conclusions: The current research activities still attest to the continuous convergence of Bioinformatics and Medical Informatics, with a focus this year on dedicated tools and methods to advance clinical care. Indeed, there is a need for powerful tools for managing and interpreting complex, large-scale genomic and biological datasets, but also a need for user-friendly tools developed for clinicians in their daily practice. All these recent research and development efforts contribute to the challenge of translating the obtained results into clinical impact, towards personalized medicine.
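
PredictSNP is summarized above as a classifier that combines several individual prediction tools. Purely as a generic sketch of how such a consensus can be formed (a simple majority vote; not PredictSNP's actual weighting scheme, and the tool names and calls below are invented), one could write:

```python
from collections import Counter

def consensus_call(tool_predictions):
    """Majority vote over individual tool calls.

    tool_predictions: dict mapping tool name -> 'deleterious' or 'neutral'.
    Returns the majority label and the fraction of tools supporting it.
    """
    votes = Counter(tool_predictions.values())
    label, count = votes.most_common(1)[0]
    return label, count / len(tool_predictions)

# Hypothetical calls from six tools for one variant (illustration only).
calls = {
    "tool_A": "deleterious",
    "tool_B": "deleterious",
    "tool_C": "neutral",
    "tool_D": "deleterious",
    "tool_E": "deleterious",
    "tool_F": "neutral",
}
label, support = consensus_call(calls)
print(f"Consensus: {label} (supported by {support:.0%} of tools)")
```

A real consensus classifier would additionally calibrate how much each constituent tool is trusted rather than counting all votes equally.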


2014, Vol. 23 (01), pp. 212-214
Author(s): L. F. Soualmia, T. Lecroq

Summary Objective: To summarize excellent current research in the field of Bioinformatics and Translational Informatics with application in the health domain. Method: We provide a synopsis of the articles selected for the IMIA Yearbook 2014, from which we attempt to derive a synthetic overview of current and future activities in the field. A first step of selection was performed by querying MEDLINE with a list of MeSH descriptors complemented by a list of terms adapted to the section. Each section editor independently evaluated the set of 1,851 articles, and 15 articles were retained for peer review. Results: The selection and evaluation process of this Yearbook’s section on Bioinformatics and Translational Informatics yielded three excellent articles regarding data management and genome medicine. In the first article, the authors present VEST (Variant Effect Scoring Tool), a supervised machine learning tool for prioritizing variants found in exome sequencing projects that are more likely to be involved in human Mendelian diseases. In the second article, the authors show how to infer the surnames of male individuals by cross-referencing anonymous, publicly available genomic data from the Y chromosome with public genealogy databases. The third article presents a statistical framework called iCluster+ that can perform pattern discovery in integrated cancer genomic data; this framework was able to identify distinct tumor subtypes in colon cancer. Conclusions: The current research activities still attest to the continuous convergence of Bioinformatics and Medical Informatics, with a focus this year on large-scale biological, genomic, and Electronic Health Record data. Indeed, there is a need for powerful tools for managing and interpreting complex data, but also a need for user-friendly tools developed for clinicians in their daily practice. All these recent research and development efforts contribute to the challenge of translating results into clinical impact, and even moving towards personalized medicine in the near future.
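
VEST is described above as a supervised machine learning tool for prioritizing variants. As a generic sketch of that style of approach only (not VEST's actual features, training data, or model; the features, labels, and variant names below are synthetic), one can train a classifier on labelled variants and rank new variants by their predicted probability of being disease-related:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic training set: each row is a variant described by two toy numeric
# features (imagine a conservation score and an allele-frequency-derived score);
# labels mark variants treated as disease-related (1) or benign (0).
X_train = rng.random((200, 2))
y_train = (X_train[:, 0] > 0.6).astype(int)  # fabricated labelling rule for the demo

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# Score new (synthetic) variants and rank them by predicted probability.
X_new = rng.random((5, 2))
scores = clf.predict_proba(X_new)[:, 1]
for i in np.argsort(scores)[::-1]:
    print(f"variant_{i}: score = {scores[i]:.2f}")
```

The ranking step is the point: in an exome project, such scores are used to push the most promising candidates to the top of the review list rather than to make a final clinical call.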


2016, Vol. 25 (01), pp. 184-187
Author(s): J. Charlet, L. F. Soualmia

Summary Objectives: To summarize excellent current research in the field of Knowledge Representation and Management (KRM) within the health and medical care domain. Method: We provide a synopsis of the 2016 IMIA selected articles as well as a related synthetic overview of the current and future field activities. A first step of the selection was performed by querying MEDLINE with a list of MeSH descriptors complemented by a list of terms adapted to the KRM section. In the second step, the two section editors separately evaluated the set of 1,432 articles. The third step consisted of a collective work that merged the evaluation results to retain 15 articles for peer review. Results: The selection and evaluation process of this Yearbook’s section on Knowledge Representation and Management yielded four excellent and interesting articles regarding semantic interoperability for health care, achieved by gathering heterogeneous sources (knowledge and data) and by auditing ontologies. In the first article, the authors present a solution based on standards and Semantic Web technologies to access distributed and heterogeneous datasets in the domain of breast cancer clinical trials. The second article describes a knowledge-based recommendation system that relies on ontologies and Semantic Web rules in the context of dietary management of chronic diseases. The third article relates to concept recognition and text mining used to derive a model of common human diseases and a phenotypic network of common diseases. In the fourth article, the authors highlight the need for auditing SNOMED CT and propose a crowd-based method for ontology engineering. Conclusions: The current research activities further illustrate the continuous convergence of Knowledge Representation and Medical Informatics, with a focus this year on dedicated tools and methods to advance clinical care by proposing solutions to the problem of semantic interoperability. Indeed, there is a need for powerful tools able to manage and interpret complex, large-scale, and distributed datasets and knowledge bases, but also a need for user-friendly tools developed for clinicians in their daily practice.
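
The first two articles summarized above rely on ontologies and Semantic Web standards (RDF, SPARQL, rules) to integrate heterogeneous sources. As a minimal, generic sketch of that style of integration (not the systems described in the papers; the vocabulary, URIs, and data below are invented), the Python rdflib library can load two small RDF fragments standing in for different sources into one graph and answer a single SPARQL query over the merged data:

```python
from rdflib import Graph

# Two tiny RDF fragments standing in for heterogeneous sources
# (a clinical record and a trial registry); the vocabulary is made up.
source_clinical = """
@prefix ex: <http://example.org/> .
ex:patient42 ex:hasDiagnosis ex:BreastCancer .
"""

source_trials = """
@prefix ex: <http://example.org/> .
ex:trial7 ex:targetsCondition ex:BreastCancer ;
          ex:title "Hypothetical adjuvant therapy trial" .
"""

g = Graph()
g.parse(data=source_clinical, format="turtle")
g.parse(data=source_trials, format="turtle")

# One SPARQL query over the merged graph: find trials matching a patient's diagnosis.
query = """
PREFIX ex: <http://example.org/>
SELECT ?patient ?trial ?title WHERE {
  ?patient ex:hasDiagnosis ?condition .
  ?trial ex:targetsCondition ?condition ;
         ex:title ?title .
}
"""
for patient, trial, title in g.query(query):
    print(f"{patient} could be matched to {trial}: {title}")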


2013, Vol. 22 (01), pp. 175-177
Author(s): L. F. Soualmia, T. Lecroq

Summary Objectives: To summarize excellent current research in the field of Bioinformatics and Translational Informatics with application in the health domain and evidence-based medicine. Method: We provide a synopsis of the articles selected for the IMIA Yearbook 2013, from which we attempt to derive a synthetic overview of current and future activities in the field. Three steps of selection were performed by querying PubMed and Web of Science. A first set of 5,549 articles was refined into a second set of 1,272 articles, from which 15 articles were retained for peer review. Results: The selection and evaluation process of this Yearbook's section on Bioinformatics and Translational Informatics yielded four excellent articles regarding the Human Genome and Medicine. Exploiting genomic data depends on having the appropriate reference annotation available; the first article describes the goal of the GENCODE Consortium, which is to produce and publish the GENCODE human reference gene set, composed of merged manual and automatic annotations that are frequently updated from public experimental databases. The quality of genome sequencing is platform-dependent; in the second article, Huvariome, a generic database independent of the sequencing technologies, helps to identify errors and inconsistencies in sequencing. Detecting rare gene variants will be of great importance for understanding complex diseases in patients; this is the aim of the third study. Finally, in the last article, the plasma DNA of healthy individuals and of patients suffering from cancer is compared. Conclusions: The current research activities attest to the continuous convergence of Bioinformatics and Medical Informatics for clinical practice. For instance, a direct use of high-throughput sequencing technologies for patients could aid the diagnosis of complex diseases (such as cancer) without invasive surgery (such as a biopsy), using only blood analysis. However, ongoing genomic tests will generate massive amounts of data and will imply new trends in the near future: “Big Data” and smart health management.


2021, Vol. 12
Author(s): Eleanor G. Seaby, Heidi L. Rehm, Anne O’Donnell-Luria

Rare genetic disorders, while individually rare, are collectively common. They represent some of the most severe disorders affecting patients worldwide with significant morbidity and mortality. Over the last decade, advances in genomic methods have significantly uplifted diagnostic rates for patients and facilitated novel and targeted therapies. However, many patients with rare genetic disorders still remain undiagnosed as the genetic etiology of only a proportion of Mendelian conditions has been discovered to date. This article explores existing strategies to identify novel Mendelian genes and how these discoveries impact clinical care and therapeutics. We discuss the importance of data sharing, phenotype-driven approaches, patient-led approaches, utilization of large-scale genomic sequencing projects, constraint-based methods, integration of multi-omics data, and gene-to-patient methods. We further consider the health economic advantages of novel gene discovery and speculate on potential future methods for improved clinical outcomes.
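
One of the gene-discovery strategies listed above, constraint-based methods, prioritizes genes that are depleted of damaging variation in large population datasets. As a rough, generic illustration only (not the metric of any particular study; the gene names and counts below are hypothetical), the sketch ranks genes by an observed/expected ratio of loss-of-function variants, where a low ratio flags a gene as a candidate for being intolerant to loss of function:

```python
# Hypothetical per-gene counts: observed loss-of-function (LoF) variants in a
# population cohort vs. the number expected under a neutral mutation model.
genes = {
    "GENE_A": {"observed": 2, "expected": 25.0},
    "GENE_B": {"observed": 18, "expected": 20.0},
    "GENE_C": {"observed": 0, "expected": 12.5},
}

def oe_ratio(observed, expected):
    """Observed/expected LoF ratio; low values suggest LoF intolerance."""
    return observed / expected

ranked = sorted(genes.items(), key=lambda kv: oe_ratio(**kv[1]))
for name, counts in ranked:
    print(f"{name}: o/e = {oe_ratio(**counts):.2f}")
```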


2020, Vol. 21 (4), pp. 1374
Author(s): Pascal Steffen, Jemma Wu, Shubhang Hariharan, Hannah Voss, Vijay Raghunath, ...

Proteomics and genomics discovery experiments generate increasingly large result tables, necessitating more researcher time to convert the biological data into new knowledge. Literature review is an important step in this process and can be tedious for large-scale experiments. An informed and strategic decision about which biomolecule targets should be pursued in follow-up experiments thus remains a considerable challenge. To streamline and formalise this process of literature retrieval and analysis for discovery-based ‘omics data, and to provide a decision-facilitating support tool for follow-up experiments, we present OmixLitMiner, a package written in the computational language R. The tool automates the retrieval of literature from PubMed based on UniProt protein identifiers, gene names, and their synonyms, combined with a user-defined contextual keyword search (i.e., gene ontology based). The search strategy is programmed to allow either strict or more lenient literature retrieval, and the outputs are assigned to three categories describing how well characterized a regulated gene or protein is. The category helps in deciding which gene/protein follow-up experiments may be performed to gain new knowledge, and which already known biomarkers need not be followed up further. We demonstrate the tool’s usefulness in a retrospective study assessing three cancer proteomics publications and one cancer genomics publication. Using the tool, we were able to corroborate most of the decisions in these papers as well as detect additional biomolecule leads that may be valuable for future research.
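
OmixLitMiner itself is an R package; as a language-agnostic sketch of the underlying idea described above (count PubMed records mentioning a gene together with a contextual keyword, then bin the result into one of three categories), the following Python snippet queries the public NCBI E-utilities endpoint. The category thresholds and the example genes/keyword are invented for illustration and are not those of the tool:

```python
import requests

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_hit_count(gene, keyword):
    """Number of PubMed records mentioning the gene together with a context keyword."""
    term = f'"{gene}"[Title/Abstract] AND "{keyword}"[Title/Abstract]'
    resp = requests.get(
        ESEARCH,
        params={"db": "pubmed", "term": term, "retmode": "json", "retmax": 0},
        timeout=30,
    )
    resp.raise_for_status()
    return int(resp.json()["esearchresult"]["count"])

def categorize(count):
    # Invented thresholds: 1 = well characterized in this context,
    # 2 = some published evidence, 3 = little or no published evidence.
    if count >= 20:
        return 1
    if count >= 1:
        return 2
    return 3

for gene in ["TP53", "BRCA1"]:
    n = pubmed_hit_count(gene, "breast cancer")
    print(f"{gene}: {n} hits -> category {categorize(n)}")
```

In a discovery experiment, category-3 candidates are the ones most likely to yield new knowledge from follow-up work, while category-1 hits largely correspond to already known biomarkers.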


Impact, 2019, Vol. 2019 (10), pp. 44-46
Author(s): Masato Edahiro, Masaki Gondo

The pace of technological advancement is ever-increasing, and intelligent systems, such as those found in robots and vehicles, have become larger and more complex. These intelligent systems have a heterogeneous structure, comprising a mixture of modules, such as artificial intelligence (AI) and powertrain control modules, that facilitate large-scale numerical calculation and real-time periodic processing functions. Information technology expert Professor Masato Edahiro, from the Graduate School of Informatics at Nagoya University in Japan, explains that concurrent advances in semiconductor research have led to the miniaturisation of semiconductors, allowing a greater number of processors to be mounted on a single chip and increasing potential processing power. 'In addition to general-purpose processors such as CPUs, a mixture of multiple types of accelerators such as GPGPU and FPGA has evolved, producing a more complex and heterogeneous computer architecture,' he says. Edahiro and his partners have been working on eMBP, a model-based parallelizer (MBP) that offers a mapping system for efficiently and automatically generating parallel code for multi- and many-core systems. Once the hardware description is written, eMBP bridges the gap between software and hardware, giving hardware vendors an efficient ecosystem while sparing software vendors the need to adapt their code to each particular platform.
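
eMBP generates parallel code from model descriptions; the details of its mapping system are not given here. Purely as a toy illustration of the general idea of mapping independent blocks of a model onto multiple cores (the block names and computations below are invented, and this is in no way the eMBP tool), a Python sketch might run each block in its own worker process:

```python
from concurrent.futures import ProcessPoolExecutor

# Invented, independent "model blocks"; in a real model-based design these
# would come from the model's dataflow description, not be hand-written.
def block_sensor_fusion(x):
    return sum(x) / len(x)

def block_path_planning(x):
    return max(x) - min(x)

def block_diagnostics(x):
    return sorted(x)[len(x) // 2]

BLOCKS = {
    "sensor_fusion": block_sensor_fusion,
    "path_planning": block_path_planning,
    "diagnostics": block_diagnostics,
}

def run_parallel(inputs):
    """Map each independent block onto its own worker process."""
    with ProcessPoolExecutor() as pool:
        futures = {name: pool.submit(fn, inputs) for name, fn in BLOCKS.items()}
        return {name: fut.result() for name, fut in futures.items()}

if __name__ == "__main__":
    print(run_parallel([3.0, 1.0, 4.0, 1.5, 9.0]))
```

An automatic parallelizer additionally analyses dependencies between blocks and targets heterogeneous accelerators, which is precisely what hand-written mappings like this cannot scale to.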


Author(s): Irina Gaus, Klaus Wieczorek, Juan Carlos Mayor, Thomas Trick, José-Luis García Siñeriz, ...

The evolution of the engineered barrier system (EBS) of geological repositories for radioactive waste has been the subject of many research programmes during the last decade. The emphasis of the research activities was on the elaboration of a detailed understanding of the complex thermo-hydro-mechanical-chemical (THM-C) processes that are expected to evolve in the near field during the early post-closure period. It is important to understand the coupled THM-C processes and their evolution in the EBS during the early post-closure phase so that it can be confirmed that the safety functions will be fulfilled. In particular, it needs to be ensured that interactions during the resaturation phase (heat pulse, gas generation, non-uniform water uptake from the host rock) do not affect the performance of the EBS in terms of its safety-relevant parameters (e.g. swelling pressure, hydraulic conductivity, diffusivity). The 7th Framework PEBS project (Long Term Performance of Engineered Barrier Systems) aims at providing in-depth process understanding to constrain the conceptual and parametric uncertainties in the context of long-term safety assessment. As part of the PEBS project, a series of laboratory and underground research laboratory (URL) experiments is envisaged to describe the EBS behaviour after repository closure, when resaturation is taking place. This paper targets the very early post-closure period, when the EBS is subjected to high temperatures and unsaturated conditions with a low but increasing moisture content. So far, the detailed thermo-hydraulic behaviour of a bentonite EBS in a clay host rock has not been evaluated at a large scale in response to temperatures of up to 140°C at the canister surface, as produced by high-level waste (HLW) and spent fuel and anticipated in some of the designs considered. Furthermore, earlier THM experiments have shown that upscaling of thermal conductivity and its dependency on water content and/or humidity from the laboratory scale to the field scale needs further attention. This early post-closure thermal behaviour will be elucidated by the HE-E experiment, a 1:2-scale heating experiment set up at the Mont Terri rock laboratory, which started in June 2011. It will characterise in detail the thermal conductivity at a large scale in both pure bentonite and a bentonite-sand mixture, as well as in the Opalinus Clay host rock. The HE-E experiment is specifically designed as a large-scale model validation experiment, and a modelling programme was launched in parallel with the different experimental steps. Scoping calculations were run to support the experimental design, and prediction exercises taking the final design into account are foreseen. Calibration and prediction/validation will follow, making use of the obtained THM dataset. This benchmarking of THM process models and codes should enhance confidence in the predictive capability of the recently developed numerical tools. The ultimate aim is to be able to extrapolate the key parameters that might influence the fulfilment of the safety functions defined for the long-term steady state.
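
The abstract stresses that the thermal conductivity of the buffer depends strongly on water content and that upscaling this dependency is a key question for the HE-E experiment. Purely as a generic illustration (not a PEBS or HE-E model; the dry/saturated end-member values and the linear mixing rule below are assumptions), effective thermal conductivity is often estimated between dry and fully saturated states as a function of the degree of saturation:

```python
def effective_conductivity(saturation, k_dry, k_sat):
    """Linear interpolation between dry and fully saturated thermal
    conductivity as a function of the degree of saturation (0..1).
    A deliberately simple mixing rule for illustration only."""
    s = min(max(saturation, 0.0), 1.0)
    return k_dry + s * (k_sat - k_dry)

# Assumed end-member values in W/(m·K), placeholders rather than measurements.
K_DRY, K_SAT = 0.4, 1.3

for s in (0.0, 0.25, 0.5, 0.75, 1.0):
    lam = effective_conductivity(s, K_DRY, K_SAT)
    print(f"Sr = {s:.2f} -> lambda ≈ {lam:.2f} W/(m·K)")
```

In the experiment itself, the relationship between saturation and conductivity is measured at scale in bentonite, the bentonite-sand mixture, and the Opalinus Clay, and then used to calibrate and validate the coupled THM codes.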

