An Objective Method for Evaluating Next-Generation Sequencing Panels

2018 ◽  
Vol 34 (3) ◽  
pp. 139-143 ◽  
Author(s):  
Kaitlin Angione ◽  
Melissa Gibbons ◽  
Scott Demarest

Purpose: Next-generation sequencing panels are particularly useful for identifying genetic diagnoses in patients with nonspecific clinical findings because they allow many genes to be analyzed at once. The purpose of this study was to develop a simple, objective system for evaluating the quality of available next-generation sequencing panels. Methods: A list of potentially important features of next-generation sequencing panels, generated from the literature, was evaluated for accessibility and objectivity and distilled into a "core" set of quality features. This core set was then applied in a clinical setting using epilepsy panels as an example. Panels at 8 laboratories were rated on several objective measures to create a scoring system that differentiated between labs in a clinically meaningful way. Results: There was substantial variability across the 6 "core" test criteria, allowing the creation of a scoring system that clearly distinguished labs based on the identified strengths and weaknesses of each panel. Conclusion: We have demonstrated an objective method for comparing next-generation sequencing panels that can be applied or adapted to any clinical phenotype for which genetic testing is available. This method offers an unbiased approach to determining the ideal test for a given indication at a given time.
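The abstract does not publish the rubric itself, but the general approach, rating each panel on a small set of objective criteria and summing the points into a comparable score, can be sketched as below. The criteria names, point scales, and example data here are hypothetical illustrations, not the authors' published system.

```python
# Hypothetical sketch of a panel-scoring system in the spirit of the abstract:
# each lab's panel is rated on a few objective "core" criteria and the
# per-criterion points are summed into one comparable score.
# Criteria, thresholds, and example data are illustrative only.

CRITERIA = {
    "gene_coverage":      lambda p: 2 if p["gene_count"] >= 100 else 1,
    "del_dup_analysis":   lambda p: 2 if p["del_dup"] else 0,
    "turnaround":         lambda p: 2 if p["tat_weeks"] <= 4 else 1,
    "variant_policy":     lambda p: 1 if p["vus_policy"] else 0,
}

def score_panel(panel: dict) -> int:
    """Sum the points awarded by each objective criterion."""
    return sum(rule(panel) for rule in CRITERIA.values())

panels = {
    "Lab A": {"gene_count": 120, "del_dup": True,  "tat_weeks": 3, "vus_policy": True},
    "Lab B": {"gene_count": 70,  "del_dup": False, "tat_weeks": 6, "vus_policy": True},
}

# Rank labs from strongest to weakest panel.
for lab, panel in sorted(panels.items(), key=lambda kv: -score_panel(kv[1])):
    print(f"{lab}: {score_panel(panel)} points")
```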

2017 ◽  
Vol 7 (1) ◽  
Author(s):  
Mike J. Wilkinson ◽  
Claudia Szabo ◽  
Caroline S. Ford ◽  
Yuval Yarom ◽  
Adam E. Croxford ◽  
...  

PLoS ONE ◽  
2016 ◽  
Vol 11 (2) ◽  
pp. e0149393 ◽  
Author(s):  
Kendrick B. Turner ◽  
Jennifer Naciri ◽  
Jinny L. Liu ◽  
George P. Anderson ◽  
Ellen R. Goldman ◽  
...  

PeerJ ◽  
2021 ◽  
Vol 9 ◽  
pp. e11842 ◽
Author(s):  
Yen-Yi Liu ◽  
Bo-Han Chen ◽  
Chih-Chieh Chen ◽  
Chien-Shun Chiou

With the reduction in the cost of next-generation sequencing, whole-genome sequencing (WGS)–based methods such as core-genome multilocus sequence typing (cgMLST) have been widely adopted. However, gene-based methods require raw reads to be assembled into contigs, which can introduce errors into the assemblies. Because the robustness of cgMLST depends on the quality of the assemblies, WGS results should be assessed from sequencing through assembly. In this study, we investigated how read length, read depth, and choice of assembler affect the recovery of genes from reference genomes. Different combinations of read lengths and read depths were simulated from the complete genomes of three common food-borne pathogens: Escherichia coli, Listeria monocytogenes, and Salmonella enterica. We found that assembly quality was mainly affected by read depth, irrespective of the assembler used. In addition, we suggest several cutoff values for future cgMLST experiments, and we recommend combinations of read length, read depth, and assembler that offer a better cost-performance trade-off for cgMLST.
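The arithmetic underlying such simulations is simple: depth of coverage equals total sequenced bases divided by genome size. A minimal sketch of how one might size a simulated read set for each pathogen follows; the genome sizes are approximate and the read lengths and target depths are generic examples, not the cutoffs recommended by the study.

```python
# Read depth = (number of reads x read length) / genome size, so the number
# of single-end reads needed for a target depth follows directly.
# Genome sizes are approximate; depths and read lengths are generic examples.

GENOME_SIZES = {                       # base pairs, approximate
    "Escherichia coli":        4_600_000,
    "Listeria monocytogenes":  3_000_000,
    "Salmonella enterica":     4_800_000,
}

def reads_needed(genome_size: int, depth: float, read_length: int) -> int:
    """Number of reads required to reach `depth`-fold coverage."""
    return round(depth * genome_size / read_length)

for species, size in GENOME_SIZES.items():
    for read_length in (100, 150, 250):    # typical short-read lengths (bp)
        for depth in (20, 50, 100):        # example target depths (x)
            n = reads_needed(size, depth, read_length)
            print(f"{species}: {read_length} bp reads at {depth}x -> {n:,} reads")
```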


2017 ◽  
Author(s):  
Christoph Endrullat

Introduction: Second-generation sequencing, better known as next-generation sequencing (NGS), is a cutting-edge technology in the life sciences and the current foundation for determining nucleotide sequences. Since the advent of the first platforms in 2005, the number of NGS platform types has grown over the past 10 years, as has the variety of possible applications. Higher throughput, lower cost, and better data quality gave a range of companies an incentive to develop new NGS devices, while economic pressures, namely the expensive workflows of obsolete systems and the falling costs of market-leading platforms, simultaneously drove several companies out of the market. Owing to this rapid development, NGS currently lacks standard operating procedures, quality management/quality assurance specifications, and proficiency testing systems, and has even fewer approved standards, while costs remain high and data quality uncertain. On the one hand, standardization efforts have already been undertaken by various initiatives and projects in the form of accreditation checklists, technical notes, and guidelines for validating NGS workflows. On the other hand, because NGS originated in the US, these approaches are located almost exclusively there, and European-based standardization initiatives are clearly lacking. A further problem is the validity of prospective standards across different NGS applications: standards established for areas with the most demanding requirements and regulations, such as clinical diagnostics, will not be applicable or reasonable in other applications. These points emphasize the importance of standardizing NGS laboratory workflows in particular, since they are the prerequisite and foundation for sufficient quality in downstream results. Methods: This work was based on a platform-dependent and platform-independent systematic literature review as well as personal communications with, among others, Illumina, Inc., ISO/TC 276, and DIN NA 057-06-02 AA 'Biotechnology'. Results: Before specific standard proposals were formulated and current de facto standards collected, the problems of standardization in NGS itself were identified and interpreted. To this end, a variety of standardization approaches and projects from organizations, societies, and companies were reviewed. Conclusions: A distinct number of NGS standardization efforts already exist; however, most target the bioinformatics processing pipeline in the context of "Big Data". An essential prerequisite, therefore, is the simplification and standardization of wet-laboratory workflows, because these steps directly affect the final data quality, and experimental procedures must be formulated to ensure sufficient quality of the final data output.


2012 ◽  
Vol 30 (11) ◽  
pp. 1033-1036 ◽  
Author(s):  
Amy S Gargis ◽  
Lisa Kalman ◽  
Meredith W Berry ◽  
David P Bick ◽  
David P Dimmock ◽  
...  

2017 ◽  
Vol 141 (11) ◽  
pp. 1544-1557 ◽  
Author(s):  
Sophia Yohe ◽  
Bharat Thyagarajan

Context.— Next-generation sequencing (NGS) is a technology used by many laboratories to test for inherited disorders and tumor mutations. This technology is new to many practicing pathologists, who may not be familiar with the uses, methodology, and limitations of NGS. Objective.— To familiarize pathologists with several aspects of NGS, including current and expanding uses; methodology, including wet-bench aspects, bioinformatics, and interpretation; validation and proficiency; limitations; and issues related to the integration of NGS data into patient care. Data Sources.— The review is based on peer-reviewed literature and personal experience using NGS in a clinical setting at a major academic center. Conclusions.— The clinical applications of NGS will increase as the technology, bioinformatics, and resources evolve to address the limitations and improve the quality of results. The challenge for clinical laboratories is to ensure that testing is clinically relevant and cost-effective and can be integrated into clinical care.


2021 ◽  
Vol 4 (11) ◽  
pp. e202101113 ◽
Author(s):  
Maximilian Sprang ◽  
Matteo Krüger ◽  
Miguel A Andrade-Navarro ◽  
Jean-Fred Fontaine

More and more next-generation sequencing (NGS) data are made available every day. However, the quality of these data is not always guaranteed. Available quality-control tools require considerable expertise to interpret the multiplicity of quality features correctly. Moreover, it is usually difficult to know whether a quality feature is relevant under all experimental conditions. The NGS community would therefore benefit greatly from condition-specific, data-driven guidelines derived from the many publicly available experiments, which reflect routinely generated NGS data. In this work, we characterized well-known quality guidelines and related features in large datasets and concluded that they are too limited to assess the quality of a given NGS file accurately. We therefore present new data-driven guidelines derived from the statistical analysis of many public datasets, using quality features calculated by common bioinformatics tools. With this approach, we confirm the high relevance of genome mapping statistics for assessing data quality, and we demonstrate the limited scope of some quality features that are not relevant under all conditions. Our guidelines are available at https://cbdm.uni-mainz.de/ngs-guidelines.
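As a concrete illustration of the mapping statistics the abstract highlights, the genome mapping rate can be computed directly from an aligned BAM file. A minimal sketch using pysam is shown below; the file name is a placeholder and the 90% threshold is an illustrative cutoff, not one of the published data-driven guidelines.

```python
# Minimal sketch: compute the genome mapping rate of an aligned BAM file.
# "example.bam" is a placeholder; the 90% cutoff is illustrative only.
import pysam

def mapping_rate(bam_path: str) -> float:
    """Fraction of primary reads that aligned to the reference."""
    total = mapped = 0
    with pysam.AlignmentFile(bam_path, "rb") as bam:
        for read in bam.fetch(until_eof=True):
            if read.is_secondary or read.is_supplementary:
                continue                  # count each read only once
            total += 1
            if not read.is_unmapped:
                mapped += 1
    return mapped / total if total else 0.0

rate = mapping_rate("example.bam")
print(f"mapping rate: {rate:.1%}")
if rate < 0.90:
    print("low mapping rate: possible contamination or wrong reference")
```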


