Abstract 628: Determinants of quality of next-generation sequencing output from the strand-specific TruSight Tumor Sequencing Panel in a clinical diagnostic setting

Author(s):  
Swati Garg ◽  
Mahadeo A. Sukhai ◽  
Mariam Thomas ◽  
Michelle Mah ◽  
Tong Zhang ◽  
...  
2020 ◽  
Vol 18 (9) ◽  
pp. 1150-1155
Author(s):  
Alexandra O. Sokolova ◽  
Brian H. Shirts ◽  
Eric Q. Konnick ◽  
Ginger J. Tsai ◽  
Bernardo H.L. Goulart ◽  
...  

With the promise and potential of clinical next-generation sequencing for tumor and germline testing to impact treatment and outcomes of patients with cancer, there are also risks of oversimplification, misinterpretation, and missed opportunities. These issues risk limiting clinical benefit and, at worst, perpetuating false conclusions that could lead to inappropriate treatment selection, avoidable toxicity, and harm to patients. This report presents 5 case studies illustrating challenges and opportunities in clinical next-generation sequencing interpretation and clinical application in solid tumor oncologic care. First is a case tracing an ATM mutation to a hematopoietic clone rather than to the tumor. Second is a case illustrating the potential for tumor sequencing to suggest germline variants associated with a hereditary cancer syndrome. Third are 2 cases showing the potential for reclassifying a germline variant of uncertain significance when it is considered alongside family history and tumor sequencing results. Finally, we describe a case illustrating the challenges of using microsatellite instability to predict tumor response to immune checkpoint inhibitors. The common theme of the case studies is the importance of examining clinical context alongside expert review and interpretation, which together highlight an expanding role for contextual examination and multidisciplinary expert review through molecular tumor boards.
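As a purely illustrative aside (not part of the report, and with hypothetical thresholds), the question raised by the first case, whether a variant detected on tumor sequencing arose from a hematopoietic clone, the germline, or the tumor itself, is often approached by comparing variant allele fractions (VAFs) between tumor tissue and matched blood. The Python sketch below encodes that kind of heuristic; all cutoffs and the example values are placeholders, not criteria from the report.

```python
# Illustrative sketch only: crude VAF-based triage of a variant's likely origin.
# Thresholds are hypothetical placeholders, not validated clinical criteria.

def likely_origin(tumor_vaf: float, blood_vaf: float,
                  tumor_purity: float, chip_threshold: float = 0.02) -> str:
    """Very rough classification of a variant's likely compartment of origin."""
    if blood_vaf >= chip_threshold and blood_vaf >= tumor_vaf:
        # Present in blood at a fraction comparable to or higher than in tumor:
        # consistent with a hematopoietic clone (CHIP) rather than the tumor.
        return "hematopoietic clone (CHIP) suspected"
    if abs(blood_vaf - 0.5) < 0.1:
        # ~50% VAF in blood is the classic signature of a heterozygous germline variant.
        return "possible germline variant"
    if tumor_vaf > blood_vaf and tumor_vaf <= tumor_purity:
        return "likely somatic, tumor-derived"
    return "indeterminate; molecular tumor board review needed"

# Example: an ATM variant at 12% VAF in tumor but 18% in matched blood (60% tumor purity)
print(likely_origin(tumor_vaf=0.12, blood_vaf=0.18, tumor_purity=0.60))
```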


2017 ◽  
Vol 7 (1) ◽  
Author(s):  
Mike J. Wilkinson ◽  
Claudia Szabo ◽  
Caroline S. Ford ◽  
Yuval Yarom ◽  
Adam E. Croxford ◽  
...  

PLoS ONE ◽  
2016 ◽  
Vol 11 (2) ◽  
pp. e0149393 ◽  
Author(s):  
Kendrick B. Turner ◽  
Jennifer Naciri ◽  
Jinny L. Liu ◽  
George P. Anderson ◽  
Ellen R. Goldman ◽  
...  

PeerJ ◽  
2021 ◽  
Vol 9 ◽  
pp. e11842
Author(s):  
Yen-Yi Liu ◽  
Bo-Han Chen ◽  
Chih-Chieh Chen ◽  
Chien-Shun Chiou

With the reduction in the cost of next-generation sequencing, whole-genome sequencing (WGS)–based methods such as core-genome multilocus sequence typing (cgMLST) have been widely used. However, gene-based methods require raw reads to be assembled into contigs, which can introduce errors into the assemblies. Because the robustness of cgMLST depends on the quality of the assemblies, WGS results should be assessed from sequencing through assembly. In this study, we investigated how robustly genes from reference genomes are recovered under different read lengths, read depths, and assemblers. Different combinations of read lengths and read depths were simulated from the complete genomes of three common food-borne pathogens: Escherichia coli, Listeria monocytogenes, and Salmonella enterica. We found that assembly quality was mainly affected by read depth, irrespective of the assembler used. In addition, we suggest several cutoff values for future cgMLST experiments, and we recommend combinations of read length, read depth, and assembler that offer a favorable cost-performance trade-off for cgMLST.
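To make the read-length and read-depth arithmetic concrete, here is a minimal back-of-the-envelope sketch. It is not the study's simulation-and-assembly pipeline, and the genome size, read length, and depths below are placeholder values: it converts a target mean depth into a read count for a given genome size and read length, and estimates the fraction of bases covered at least once under a simple Poisson (Lander-Waterman) model.

```python
import math

def reads_needed(genome_len: int, read_len: int, depth: float) -> int:
    """Number of single-end reads required to reach a target mean depth."""
    return math.ceil(depth * genome_len / read_len)

def expected_covered_fraction(depth: float) -> float:
    """Expected fraction of bases covered at least once, assuming uniformly
    distributed reads (Lander-Waterman / Poisson approximation)."""
    return 1.0 - math.exp(-depth)

# Example: an E. coli-sized genome (~4.6 Mb) with 150 bp reads at several target depths
genome_len = 4_600_000
for depth in (10, 30, 50, 100):
    n = reads_needed(genome_len, read_len=150, depth=depth)
    frac = expected_covered_fraction(depth)
    print(f"{depth:>4}x -> {n:,} reads, ~{frac:.6f} of bases covered at least once")
```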


2017 ◽  
Author(s):  
Christoph Endrullat

Introduction: Second-generation sequencing, better known as next-generation sequencing (NGS), is a cutting-edge technology in the life sciences and the current foundation for unravelling nucleotide sequences. Since the advent of the first platforms in 2005, the number of NGS platform types has grown over the past 10 years, as has the variety of possible applications. Higher throughput, lower cost, and better data quality spurred a range of enterprises to develop new NGS devices, while economic pressures and competition, driven by the expensive workflows of obsolete systems and the falling cost of market-leading platforms, simultaneously accelerated the disappearance of several companies. Owing to this fast development, NGS is currently characterized by a lack of standard operating procedures, quality management/quality assurance specifications, and proficiency testing schemes, by even fewer approved standards, and by high costs and uncertain data quality. On the one hand, appropriate standardization approaches have already been pursued by various initiatives and projects in the form of accreditation checklists, technical notes, and guidelines for the validation of NGS workflows. On the other hand, these approaches are located exclusively in the US, owing to the origins of NGS there, so there is an obvious lack of European-based standardization initiatives. An additional problem is the validity of prospective standards across different NGS applications: because specific areas such as clinical diagnostics face the highest demands and regulations, standards established there will not necessarily be applicable or reasonable elsewhere. These points emphasize the importance of standardization in NGS, mainly addressing the laboratory workflows that are the prerequisite and foundation for sufficient quality of downstream results.

Methods: This work was based on a platform-dependent and platform-independent systematic literature review as well as personal communications with, among others, Illumina, Inc., ISO/TC 276, and DIN NA 057-06-02 AA 'Biotechnology'.

Results: Before formulating specific standard proposals and collecting current de facto standards, the problems of standardization in NGS itself were identified and interpreted. To this end, a variety of standardization approaches and projects from organizations, societies, and companies were reviewed.

Conclusions: A considerable number of NGS standardization efforts already exist; however, the majority target the bioinformatics processing pipeline in the context of "Big Data". An essential prerequisite is therefore the simplification and standardization of wet-laboratory workflows, because these steps directly affect the final data quality; hence there is a need to formulate experimental procedures that ensure sufficient quality of the final data output.
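As a purely illustrative sketch of the kind of "final data output quality" metric such a wet-laboratory standard might specify (the metric choice, threshold, and file name are assumptions, not proposals from this work), the following Python snippet computes the fraction of base calls at or above Q30 from a Phred+33-encoded FASTQ file.

```python
def q30_fraction(fastq_path: str, threshold: int = 30) -> float:
    """Fraction of base calls with Phred quality >= threshold in a FASTQ file."""
    total = passing = 0
    with open(fastq_path) as handle:
        for i, line in enumerate(handle):
            if i % 4 == 3:                 # every 4th line of a FASTQ record holds qualities
                for char in line.strip():
                    q = ord(char) - 33     # Phred+33 encoding
                    total += 1
                    passing += q >= threshold
    return passing / total if total else 0.0

# Hypothetical usage with a placeholder file name
print(f"Q30 fraction: {q30_fraction('run_R1.fastq'):.3f}")
```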


2012 ◽  
Vol 30 (11) ◽  
pp. 1033-1036 ◽  
Author(s):  
Amy S Gargis ◽  
Lisa Kalman ◽  
Meredith W Berry ◽  
David P Bick ◽  
David P Dimmock ◽  
...  
