A Comprehensive Assessment of Somatic Mutation Calling in Cancer Genomes

2014 ◽  
Author(s):  
Tyler S Alioto ◽  
Sophia Derdak ◽  
Timothy A Beck ◽  
Paul C Boutros ◽  
Lawrence Bower ◽  
...  

The emergence of next-generation DNA sequencing technology is enabling high-resolution cancer genome analysis. Large-scale projects like the International Cancer Genome Consortium (ICGC) are systematically scanning cancer genomes to identify recurrent somatic mutations. Second-generation DNA sequencing, however, is still an evolving technology, and procedures, both experimental and analytical, are constantly changing. Thus the research community is still defining a set of best practices for cancer genome data analysis, with no single protocol emerging to fulfil this role. Here we describe an extensive benchmark exercise to identify and resolve issues of somatic mutation calling. Whole-genome sequence datasets comprising tumor-normal pairs from two different types of cancer, chronic lymphocytic leukaemia and medulloblastoma, were shared within the ICGC, and submissions of somatic mutation calls were compared to verified mutations and to each other. Varying strategies to call mutations, incomplete awareness of sources of artefacts, and even lack of agreement on what constitutes an artefact or a real mutation manifested in widely varying mutation call rates and somewhat low concordance among submissions. We conclude that somatic mutation calling remains an unsolved problem. However, we have identified many easily remedied issues, which we present here. Our study highlights critical issues that need to be addressed before this valuable technology can be routinely used to inform clinical decision-making.
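To make the comparison step described above concrete, the sketch below (not the consortium's actual pipeline; the file names and the simple tab-separated call format are assumptions for illustration) scores each submitted SNV call set against a verified truth set and measures pairwise concordance between submissions as Jaccard indices.

```python
# Illustrative only: compare submitted somatic SNV call sets against verified
# mutations and against each other. File names and format are hypothetical.
from itertools import combinations

def load_calls(path):
    """Read a minimal tab-separated call file: chrom, pos, ref, alt."""
    calls = set()
    with open(path) as fh:
        for line in fh:
            if line.startswith("#"):
                continue
            chrom, pos, ref, alt = line.rstrip("\n").split("\t")[:4]
            calls.add((chrom, int(pos), ref, alt))
    return calls

def precision_recall(calls, truth):
    """Fraction of calls that are verified, and fraction of verified mutations recovered."""
    tp = len(calls & truth)
    precision = tp / len(calls) if calls else 0.0
    recall = tp / len(truth) if truth else 0.0
    return precision, recall

def pairwise_jaccard(submissions):
    """Concordance between every pair of submissions (intersection over union)."""
    scores = {}
    for a, b in combinations(submissions, 2):
        union = submissions[a] | submissions[b]
        scores[(a, b)] = len(submissions[a] & submissions[b]) / len(union) if union else 1.0
    return scores

if __name__ == "__main__":
    truth = load_calls("verified_mutations.tsv")                   # hypothetical file
    submissions = {name: load_calls(f"{name}.tsv")                 # hypothetical submissions
                   for name in ("pipeline_A", "pipeline_B", "pipeline_C")}
    for name, calls in submissions.items():
        print(name, precision_recall(calls, truth))
    print(pairwise_jaccard(submissions))
```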

2013 ◽  
Vol 59 (1) ◽  
pp. 127-137 ◽  
Author(s):  
Nardin Samuel ◽  
Thomas J Hudson

BACKGROUND Sequencing of cancer genomes has become a pivotal method for uncovering and understanding the deregulated cellular processes driving tumor initiation and progression. Whole-genome sequencing is evolving toward becoming less costly and more feasible on a large scale; consequently, thousands of tumors are being analyzed with these technologies. Interpreting these data in the context of tumor complexity poses a challenge for cancer genomics. CONTENT The sequencing of large numbers of tumors has revealed novel insights into oncogenic mechanisms. In particular, we highlight the remarkable insight into the pathogenesis of breast cancers that has been gained through comprehensive and integrated sequencing analysis. The analysis and interpretation of sequencing data, however, must be considered in the context of heterogeneity within and among tumor samples. Only by adequately accounting for the underlying complexity of cancer genomes will the potential of genome sequencing be understood and subsequently translated into improved management of patients. SUMMARY The paradigm of personalized medicine holds promise if patient tumors are thoroughly studied as unique and heterogeneous entities and clinical decisions are made accordingly. Associated challenges will be ameliorated by continued collaborative efforts among research centers that coordinate the sharing of mutation, intervention, and outcomes data to assist in the interpretation of genomic data and to support clinical decision-making.


2021 ◽  
Vol 28 (1) ◽  
pp. e100251
Author(s):  
Ian Scott ◽  
Stacey Carter ◽  
Enrico Coiera

Machine learning algorithms are being used to screen and diagnose disease, prognosticate and predict therapeutic responses. Hundreds of new algorithms are being developed, but whether they improve clinical decision making and patient outcomes remains uncertain. If clinicians are to use algorithms, they need to be reassured that key issues relating to their validity, utility, feasibility, safety and ethical use have been addressed. We propose a checklist of 10 questions that clinicians can ask of those advocating for the use of a particular algorithm, but which do not expect clinicians, as non-experts, to demonstrate mastery over what can be highly complex statistical and computational concepts. The questions are: (1) What is the purpose and context of the algorithm? (2) How good were the data used to train the algorithm? (3) Were there sufficient data to train the algorithm? (4) How well does the algorithm perform? (5) Is the algorithm transferable to new clinical settings? (6) Are the outputs of the algorithm clinically intelligible? (7) How will this algorithm fit into and complement current workflows? (8) Has use of the algorithm been shown to improve patient care and outcomes? (9) Could the algorithm cause patient harm? and (10) Does use of the algorithm raise ethical, legal or social concerns? We provide examples where an algorithm may raise concerns and apply the checklist to a recent review of diagnostic imaging applications. This checklist aims to assist clinicians in assessing algorithm readiness for routine care and to identify situations where further refinement and evaluation are required prior to large-scale use.
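Questions 4 (performance) and 5 (transferability) lend themselves to a simple local check before deployment. The sketch below shows one way a site might run it, assuming a labelled local validation export and a column of risk scores produced by the algorithm under review; the file and column names are hypothetical.

```python
# Minimal local-validation sketch for checklist questions 4 and 5.
# File and column names are hypothetical.
import pandas as pd
from sklearn.metrics import roc_auc_score, brier_score_loss
from sklearn.calibration import calibration_curve

local = pd.read_csv("local_validation_cohort.csv")      # labelled local cases
y_true = local["outcome"].to_numpy()                     # observed outcome (0/1)
y_prob = local["algorithm_risk_score"].to_numpy()        # scores from the algorithm under review

print("AUROC on local data:", roc_auc_score(y_true, y_prob))   # discrimination (Q4)
print("Brier score:", brier_score_loss(y_true, y_prob))        # accuracy of the predicted probabilities

# Calibration in the new setting (Q5): predicted vs observed risk per decile
observed, predicted = calibration_curve(y_true, y_prob, n_bins=10)
for p, o in zip(predicted, observed):
    print(f"predicted {p:.2f} -> observed {o:.2f}")
```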


2005 ◽  
Vol 28 (2) ◽  
pp. 90-96 ◽  
Author(s):  
C. Pollock

Peritoneal sclerosis is an almost invariable consequence of peritoneal dialysis. In most circumstances it is “simple” sclerosis, manifesting clinically with an increasing peritoneal transport rate and loss of ultrafiltration capacity. In contrast, encapsulating peritoneal sclerosis is a life-threatening and usually irreversible condition, associated with bowel obstruction, malnutrition and death. It is unknown whether common etiological factors underlie the development of these two clinically and pathologically distinct forms of peritoneal sclerosis. The majority of studies to date have investigated factors that contribute to “simple” sclerosis, although it remains possible that similar mechanisms are amplified in patients who develop encapsulating peritoneal sclerosis. The cellular elements that promote peritoneal sclerosis include the mesothelial cells, peritoneal fibroblasts and inflammatory cells. Factors that stimulate these cells to promote peritoneal fibrosis and neoangiogenesis, both inherent in the development of peritoneal sclerosis, include cytokines that are induced by exposure of the peritoneal membrane to high concentrations of glucose, advanced glycation of the peritoneal membrane and oxidative stress. The cumulative exposure to bioincompatible dialysate is likely to have an etiological role, as the duration of dialysis correlates with the likelihood of developing peritoneal sclerosis. Indeed, peritoneal dialysis using more biocompatible fluids has been shown to reduce the development of peritoneal sclerosis. The individual contribution of the factors implicated in the development of peritoneal sclerosis will only be determined by large-scale peritoneal biopsy registries, which will be able to prospectively incorporate clinical and histological data and support clinical decision making.


2020 ◽  
Vol 3 (Supplement_1) ◽  
pp. 28-30
Author(s):  
A Kundra ◽  
T Ritchie ◽  
M Ropeleski

Abstract Background Fecal Calprotectin (FC) is helpful in distinguishing functional from organic bowel disease. It has also proven useful in monitoring disease activity in inflammatory bowel disease (IBD). The uptake of its use in clinical practice has increased considerably, though access varies significantly. Studies exploring current practice patterns among GI specialists and how to optimize its use are limited. In 2017, Kingston Health Sciences Centre (KHSC) began funding FC testing at no cost to patients. Aims We aimed to better understand practice patterns of gastroenterologists in IBD patients where there is in-house access to FC assays, and to generate hypotheses regarding its optimal use in IBD monitoring. We hypothesize that FC is not being used in a regular manner for monitoring of IBD patients. Methods A retrospective chart audit study was done on all KHSC patients who had FC testing completed from 2017–2018. Qualitative data were gathered from dictated reports using rigorous set definitions regarding indication for the test, change in clinical decision making, and frequency patterns of testing. Specifically, a change in use for colonoscopy or in medical therapy was coded only if the dictated note was clear that a decision hinged largely on the FC result. Frequency of testing was based on test order date. Reactive testing was coded as tests ordered to confirm a clinical flare. Variable testing was coded where monitoring tests varied in intervals greater than 3 months and crossed over the other set frequency codes. Quantitative data regarding FC test values and dates were also collected. These data were then analyzed using descriptive statistics. Results Of the 834 patients in our study, 7 were under 18 years old and were excluded. 562 (67.34%) of these patients had a pre-existing diagnosis of IBD; 193 (34%) with Ulcerative Colitis (UC), 369 (66%) with Crohn’s Disease (CD). FC testing changed the clinician’s decision for medical therapy in 12.82% of cases and use for colonoscopy 13.06% of the time for all comers. Of the FC tests, 79.8% were sent in a variable frequency pattern and 2.68% with reactive intent. The remaining 17.5% were monitored with a regular pattern, with 8.57% of patients having their FC monitored at regular intervals greater than 6 months, 7.68% every 6 months, and 1.25% less than 6 months. The average FC level was 356.2 µg/mL in patients with UC and 330.6 µg/mL in patients with CD. The mean time interval from 1st to 2nd test was 189.6 days. Conclusions FC testing changed clinical decisions regarding medical therapy and use for colonoscopy about 13% of the time. FC testing was done variably 79.8% of the time, whereas 17.5% of patients had a regular FC monitoring schedule. An optimal monitoring interval for IBD flares using FC for maximal clinical benefit has yet to be determined. Large-scale studies will be required to answer this question. Funding Agencies None
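As a rough illustration of the descriptive analysis reported above, the sketch below computes mean FC by diagnosis and the first-to-second-test interval from a flat export of test results; the file and column names are assumptions, not the actual KHSC chart structure.

```python
# Hedged sketch of the descriptive statistics described in the Methods; the
# export format (one row per FC test) and column names are hypothetical.
import pandas as pd

fc = pd.read_csv("fc_tests.csv", parse_dates=["test_date"])   # patient_id, test_date, fc_value, diagnosis

# Mean FC value by diagnosis (e.g., UC vs CD)
print(fc.groupby("diagnosis")["fc_value"].mean())

# Interval from 1st to 2nd test per patient, in days
fc = fc.sort_values(["patient_id", "test_date"])
fc["gap_days"] = fc.groupby("patient_id")["test_date"].diff().dt.days
first_gap = fc.groupby("patient_id")["gap_days"].first()      # first non-null gap = 1st-to-2nd interval
print("Mean interval from 1st to 2nd test:", first_gap.mean(), "days")
```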


2020 ◽  
Author(s):  
Philippe Delmas ◽  
Assunta Fiorentino ◽  
Matteo Antonini ◽  
Severine Vuilleumier ◽  
Guy Stotzer ◽  
...  

Abstract Background: Patient safety is a top priority of the health professions. In emergency departments, the clinical decision making of triage nurses must be of the highest reliability. However, studies have repeatedly found that nurses over- or undertriage a considerable portion of cases, which can have major consequences for patient management. Among the factors that might explain this inaccuracy, workplace distractors have been pointed to without ever being the focus of specific investigation, owing in particular to the challenge of assessing them in care settings. Consequently, the use of a serious game reproducing a work environment comprising distractors would afford a unique opportunity to explore their impact on the quality of nurse emergency triage. Methods/Design: A factorial design will be used to test the acceptability and feasibility of a serious game created to explore the primary effects of distractors on emergency nurse triage accuracy. A sample of 80 emergency nurses will be randomised across three experimental groups exposed to different distractor conditions and one control group not exposed to distractors. Specifically, experimental group A will be exposed to noise distractors only; experimental group B to task interruptions only; and experimental group C to both types combined. Each group will engage in the serious game to complete 20 clinical vignettes in two hours. For each clinical vignette, a gold standard will be determined by experts. Pre-tests will be planned with clinicians and specialised emergency nurses to examine their interaction with the first version of the serious game. Discussion: This study will shed light on the acceptability and feasibility of a serious game in the field of emergency triage. It will also advance knowledge of the possible effects of exposure to common environmental distractors on nurse triage accuracy. Finally, this pilot study will inform planned large-scale studies of emergency nurse practice using serious games.
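A minimal sketch of the allocation and scoring logic implied by this design follows; the sample size and the four arms come from the abstract, while the triage scale convention and all function names are assumptions.

```python
# Sketch only: equal allocation of 80 nurses across the four arms and scoring of
# the 20 vignettes against an expert gold standard. Conventions are assumed.
import random

GROUPS = ["control", "noise_only", "interruptions_only", "noise_and_interruptions"]

def randomise(nurse_ids, seed=2020):
    """Shuffle and split the nurses into equal arms (80 nurses -> 20 per arm)."""
    ids = list(nurse_ids)
    random.Random(seed).shuffle(ids)
    per_arm = len(ids) // len(GROUPS)
    return {g: ids[i * per_arm:(i + 1) * per_arm] for i, g in enumerate(GROUPS)}

def triage_accuracy(responses, gold_standard):
    """Share of vignettes triaged exactly to the expert level, plus over- and
    undertriage rates (assuming a lower number means a more urgent level)."""
    n = len(gold_standard)
    exact = sum(responses[v] == gold_standard[v] for v in gold_standard)
    over = sum(responses[v] < gold_standard[v] for v in gold_standard)
    under = sum(responses[v] > gold_standard[v] for v in gold_standard)
    return {"exact": exact / n, "overtriage": over / n, "undertriage": under / n}

allocation = randomise(range(1, 81))
print({group: len(members) for group, members in allocation.items()})
```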


2021 ◽  
Author(s):  
V. Sah

The number of druggable tumor-specific molecular aberrations has increased significantly over the last decade, with biomarker-matched therapies demonstrating a major survival advantage in many cancer types. Therefore, molecular pathology has been critical not just for tumor detection and prognosis, but also for clinical decision-making in everyday practice. The advent of next-generation sequencing technology and the proliferation of large-scale tumor molecular profiling services at universities worldwide have transformed the field of precision oncology. As systematic genomic studies become more accessible in clinical and laboratory environments, healthcare professionals face the difficult challenge of outcome analysis and translation. This study summarizes existing and future approaches to implementing precision cancer medicine, outlining the obstacles and possible strategies for facilitating the understanding and maximization of molecular profiling findings. Beyond tumor DNA sequencing, we discuss innovative molecular characterization techniques such as transcriptomics, immunophenotyping, epigenetic profiling, and single-cell analysis. Additionally, we discuss present and future uses of liquid biopsies for evaluating blood-based biomarkers such as circulating tumor cells and nucleic acids. Finally, the shortcomings of genotype-based treatments give insight into opportunities to extend personalized medicine beyond genomics.


Author(s):  
Joaquin Mateo ◽  
Johann S. de Bono

The aim of precision medicine is to select the best treatment option for each patient at the appropriate time in the natural history of the disease, based on understanding the molecular makeup of the tumor, with the ultimate objective of improving patient survival and quality of life. To achieve it, we must identify functionally distinct subtypes of cancers and, critically, have multiple therapy options available to match to these functional subtypes. As a result of the development of better and less costly next-generation sequencing assays, we can now interrogate the cancer genome, enabling us to use the DNA sequence itself for biomarker studies in drug development. The success of DNA-based biomarkers requires analytical validation and careful clinical qualification in prospective clinical trials. In this article, we review some of the challenges the scientific community is facing as a consequence of this sequencing revolution: reclassifying cancers based on biologic/phenotypic clusters relevant to clinical decision making; adapting how we conduct clinical trials; and adjusting our frameworks for regulatory approvals of biomarker technologies and drugs. Ultimately, we must ensure that this revolution can be safely implemented into routine clinical practice and benefit patients.


Stroke ◽  
2021 ◽  
Vol 52 (Suppl_1) ◽  
Author(s):  
Gurkamal Kaur ◽  
Jose Dominguez ◽  
Rosa Semaan ◽  
Leanne Fuentes ◽  
Jonathan Ogulnick ◽  
...  

Introduction: Subarachnoid hemorrhage (SAH) can be a devastating neurologic condition that leads to cardiac arrest (CA) and, ultimately, poor clinical outcomes. The existing literature on this subject reveals a dismal prognosis, though it is based on relatively small sample sizes. We aimed to further elucidate the incidence, mortality rates, and outcomes of CA patients with SAH using large-scale population data. Methods: A retrospective cohort study was conducted using the National Inpatient Sample (NIS) database. Patients included in the study met criteria based on International Classification of Diseases (ICD), 9th and 10th edition, codes for non-traumatic SAH, CA of unspecified cause, and CA due to other underlying conditions between 2008 and 2014. For all regression analyses, a p-value of <0.05 was considered statistically significant. Results: We identified 170,869 patients hospitalized for non-traumatic SAH. Within these, there was a 3.17% incidence of CA. The mortality rate in CA with SAH was 82% (vs non-CA 18.4%, p<0.001). Of the survivors of CA with SAH, 15.7% were discharged to special facilities and services (vs non-CA 37.6%, p<0.0001). The remaining 2.3% were discharged home (vs non-CA 44.0%, p<0.0001). A higher NIS SAH severity score (NIS-SSS) was a predictor of CA in SAH patients (p<0.0001). Patients treated with aneurysm clipping and coiling had lower odds of CA (p<0.0001). Conclusion: The study confirms the poor prognosis of patients with CA and SAH using large-scale population data. Patients who underwent aneurysm treatment showed a lower association with CA. The findings presented here provide useful data for clinical decision making and for guiding goals-of-care discussions with family members. Further studies may identify interventions and protocols for the treatment of these severely ill patients.
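The kind of regression reported above can be sketched as follows on a hypothetical extract of discharge records; the column names and coding are illustrative and do not reflect the actual NIS file layout.

```python
# Illustrative sketch of the regression analyses described above; the extract,
# column names, and coding are hypothetical, not the real NIS layout.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

nis = pd.read_csv("nis_sah_extract.csv")   # one row per non-traumatic SAH hospitalisation

# Outcome: in-hospital cardiac arrest (0/1); predictors: NIS-SSS and aneurysm treatment
model = smf.logit("cardiac_arrest ~ nis_sss + clipping_or_coiling", data=nis).fit()
print(model.summary())

# Odds ratios with 95% CIs; p < 0.05 treated as statistically significant, as in the study
odds = pd.concat([np.exp(model.params).rename("OR"), np.exp(model.conf_int())], axis=1)
print(odds)
```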


2014 ◽  
Vol 23 (01) ◽  
pp. 14-20 ◽  
Author(s):  
K. Verspoor ◽  
F. Martin-Sanchez

Summary Objectives: To summarise current research that takes advantage of “Big Data” in health and biomedical informatics applications. Methods: Survey of trends in this work, and exploration of literature describing how large-scale structured and unstructured data sources are being used to support applications from clinical decision making and health policy, to drug design and pharmacovigilance, and further to systems biology and genetics. Results: The survey highlights ongoing development of powerful new methods for turning such large-scale, and often complex, data into information that provides new insights into human health in a range of different areas. Consideration of this body of work identifies several important paradigm shifts that are facilitated by Big Data resources and methods: in clinical and translational research, from hypothesis-driven research to data-driven research, and in medicine, from evidence-based practice to practice-based evidence. Conclusions: The increasing scale and availability of large quantities of health data require strategies for data management, data linkage, and data integration beyond the limits of many existing information systems, and substantial effort is underway to meet those needs. As our ability to make sense of these data improves, the value of the data will continue to increase. Health systems, genetics and genomics, and population and public health: all areas of biomedicine stand to benefit from Big Data and the associated technologies.


2013 ◽  
Vol 31 (15) ◽  
pp. 1825-1833 ◽  
Author(s):  
Eliezer M. Van Allen ◽  
Nikhil Wagle ◽  
Mia A. Levy

The scale of tumor genomic profiling is rapidly outpacing human cognitive capacity to make clinical decisions without the aid of tools. New frameworks are needed to help researchers and clinicians process the information emerging from the explosive growth in both the number of tumor genetic variants routinely tested and the respective knowledge to interpret their clinical significance. We review the current state, limitations, and future trends in methods to support the clinical analysis and interpretation of cancer genomes. This includes the processes of genome-scale variant identification, including tools for sequence alignment, tumor–germline comparison, and molecular annotation of variants. The process of clinical interpretation of tumor variants includes classification of the effect of the variant, reporting the results to clinicians, and enabling the clinician to make a clinical decision based on the genomic information integrated with other clinical features. We describe existing knowledge bases, databases, algorithms, and tools for identification and visualization of tumor variants and their actionable subsets. With the decreasing cost of tumor gene mutation testing and the increasing number of actionable therapeutics, we expect the methods for analysis and interpretation of cancer genomes to continue to evolve to meet the needs of patient-centered clinical decision making. The science of computational cancer medicine is still in its infancy; however, there is a clear need to continue the development of knowledge bases, best practices, tools, and validation experiments for successful clinical implementation in oncology.
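As a toy illustration of the tumor-germline comparison and molecular annotation steps described above, the sketch below subtracts matched-normal calls from tumor calls and attaches annotations from a small lookup; the variants and knowledge-base entries are purely illustrative, and real pipelines work from aligned reads and full VCFs.

```python
# Toy illustration of tumor-germline comparison and molecular annotation; real
# pipelines operate on aligned reads and VCFs, and the entries here are examples.
def somatic_candidates(tumor_calls, germline_calls):
    """Variants present in the tumor but absent from the matched normal."""
    return tumor_calls - germline_calls

def annotate(variants, knowledge_base):
    """Attach gene/actionability annotations from a (hypothetical) knowledge-base lookup."""
    return {v: knowledge_base.get(v, {"gene": None, "actionable": False}) for v in variants}

# Variants keyed as (chromosome, position, ref, alt)
tumor = {("7", 140453136, "A", "T"), ("17", 7577121, "G", "A")}
normal = {("17", 7577121, "G", "A")}                                     # shared germline variant
kb = {("7", 140453136, "A", "T"): {"gene": "BRAF", "actionable": True}}  # illustrative entry

somatic = somatic_candidates(tumor, normal)
print(annotate(somatic, kb))
```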

