Deconvolution of Nucleic-acid Length Distributions: A Gel Electrophoresis Analysis Tool and Applications

2019 ◽  
Vol 47 (16) ◽  
pp. e92-e92
Author(s):  
Riccardo Ziraldo ◽  
Massa J. Shoura ◽  
Andrew Z. Fire ◽  
Stephen D. Levene

Abstract Next-generation DNA-sequencing (NGS) technologies, which are designed to streamline the acquisition of massive amounts of sequencing data, are nonetheless dependent on various preparative steps to generate DNA fragments of required concentration, purity and average size (molecular weight). Current automated electrophoresis systems for DNA- and RNA-sample quality control, such as Agilent’s Bioanalyzer® and TapeStation® products, are costly to acquire and use; they also provide limited information for samples having broad size distributions. Here, we describe a software tool that helps determine the size distribution of DNA fragments in an NGS library, or other DNA sample, based on gel-electrophoretic line profiles. The software, developed as an ImageJ plug-in, allows for straightforward processing of gel images, including lane selection and fitting of univariate functions to intensity distributions. The user selects the option of fitting either discrete profiles, in cases where discrete gel bands are visible, or continuous profiles, in which multiple bands are buried under a single broad peak. The method requires only modest imaging capabilities and is a cost-effective, rigorous alternative characterization method to augment existing techniques for library quality control.
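Size calling from lane profiles of this kind typically rests on a semi-log mobility calibration against a size ladder; the sketch below illustrates that standard relationship only and is not the plug-in's actual code (function names are invented):

```python
import math

def calibrate(ladder_pos, ladder_sizes):
    """Least-squares fit of log10(size) = a + b * distance, the classic
    semi-log calibration relating gel mobility to fragment length."""
    n = len(ladder_pos)
    ys = [math.log10(s) for s in ladder_sizes]
    xbar = sum(ladder_pos) / n
    ybar = sum(ys) / n
    b = (sum((x - xbar) * (y - ybar) for x, y in zip(ladder_pos, ys))
         / sum((x - xbar) ** 2 for x in ladder_pos))
    a = ybar - b * xbar
    return a, b

def size_at(distance, a, b):
    """Convert a migration distance back to an estimated fragment size."""
    return 10 ** (a + b * distance)

# two-point ladder: 1000 bp migrates to distance 10, 100 bp to distance 20
a, b = calibrate([10, 20], [1000, 100])
est = size_at(15, a, b)   # ~316 bp: halfway between the bands on the log scale
```

With a calibration like this in hand, a fitted intensity profile over migration distance can be mapped point by point into a fragment-length distribution.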


2019 ◽  
Vol 47 (W1) ◽  
pp. W166-W170 ◽  
Author(s):  
Julie Krainer ◽  
Andreas Weinhäusel ◽  
Karel Hanak ◽  
Walter Pulverer ◽  
Seza Özen ◽  
...  

Abstract DNA methylation is one of the major epigenetic modifications and has frequently demonstrated its suitability as a diagnostic and prognostic biomarker. In addition to chip- and sequencing-based epigenome-wide methylation profiling methods, targeted bisulfite sequencing (TBS) has been established as a cost-effective approach for routine diagnostics and target-validation applications. Yet an easy-to-use tool for the analysis of TBS data in combination with array-based methylation results has been missing. Consequently, we have developed EPIC-TABSAT, a user-friendly web-based application for the analysis of targeted sequencing data that additionally allows the integration of array-based methylation results. The tool can handle multiple targets as well as multiple sequencing files in parallel, and covers the complete data-analysis workflow from the calculation of quality metrics to methylation calling and interactive result presentation. The graphical user interface offers an unprecedented way to interpret TBS data alone or in combination with array-based methylation studies. Together with the computation of target-specific epialleles, it is useful in validation, research and routine diagnostic environments. EPIC-TABSAT is freely accessible to all users at https://tabsat.ait.ac.at/.
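The methylation-calling step mentioned in this abstract boils down, per CpG site, to the fraction of unconverted (C) reads among C and T reads after bisulfite treatment; a minimal sketch of that arithmetic (not EPIC-TABSAT's implementation):

```python
def methylation_level(c_count, t_count):
    """Per-CpG methylation fraction from bisulfite-converted reads:
    unconverted C reads indicate methylation; C->T reads indicate none."""
    depth = c_count + t_count
    if depth == 0:
        return None   # no coverage, no call
    return c_count / depth

# toy coverage at three CpG sites: (C reads, T reads)
sites = [(18, 2), (5, 15), (0, 20)]
levels = [methylation_level(c, t) for c, t in sites]
# levels -> [0.9, 0.25, 0.0]
```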


2021 ◽  
Author(s):  
Jianhong Hu ◽  
Viktoriya Korchina ◽  
Hana Zouk ◽  
Maegan V. Harden ◽  
David Murdock ◽  
...  

Background: Next-generation DNA sequencing (NGS) has been rapidly adopted by clinical testing laboratories for the detection of germline and somatic genetic variants. The complexity of sample processing in a clinical DNA sequencing laboratory creates multiple opportunities for sample-identification errors, demanding stringent quality-control procedures. Methods: We utilized DNA genotyping via a 96-SNP PCR panel, applied at sample acquisition and compared against the final sequence, to track sample identity throughout the sequencing pipeline. The panel's inclusion of sex SNPs also provides a mechanism for comparing the genotype-predicted sex to the sex recorded at sample collection. This approach was implemented in the clinical genomic testing pathways of the multi-center Electronic Medical Records and Genomics (eMERGE) Phase III program. Results: We identified 110 inconsistencies among 25,015 clinical samples (0.44%) when comparing the 96-SNP PCR panel data to the sex provided on the test requisition. The panel's genetic-sex predictions were confirmed using additional SNP sites in the sequencing data or high-density hybridization-based genotyping arrays. The results identified clerical errors; samples from transgender participants and from stem-cell or bone-marrow transplant patients; and undetermined sample mix-ups. Conclusion: The 96-SNP PCR panel provides a cost-effective, robust tool for tracking samples within DNA sequencing laboratories, while the ability to predict sex from genotyping data provides an additional quality-control measure for all procedures, beginning with sample collection. While not sufficient to detect all sample mix-ups, matching genetic against reported sex can give an estimate of the error rate in sample-collection systems.
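The sex-concordance check described here can be sketched as comparing a genotype-derived prediction with the recorded sex; the thresholds and function names below are illustrative assumptions, not the panel's validated cutoffs:

```python
def predicted_sex(x_het_fraction, y_call_rate,
                  het_cutoff=0.1, y_cutoff=0.5):
    """Toy genetic-sex prediction from sex-chromosome SNPs: males show
    near-zero X heterozygosity plus Y-SNP calls. Cutoffs are invented
    for illustration."""
    if y_call_rate >= y_cutoff and x_het_fraction < het_cutoff:
        return "M"
    if y_call_rate < y_cutoff and x_het_fraction >= het_cutoff:
        return "F"
    return "undetermined"

def flag_mismatches(samples):
    """samples: list of (sample_id, recorded_sex, x_het, y_rate).
    Returns IDs whose predicted sex contradicts the recorded sex."""
    return [sid for sid, rec, xh, yr in samples
            if predicted_sex(xh, yr) not in (rec, "undetermined")]

samples = [("S1", "F", 0.35, 0.00),
           ("S2", "M", 0.02, 0.95),
           ("S3", "F", 0.01, 0.90)]   # likely swap or clerical error
# flag_mismatches(samples) -> ["S3"]
```

Flagged samples would then be resolved manually, as in the study, into clerical errors, legitimate discordances or true mix-ups.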


2018 ◽  
Author(s):  
Wang Xi ◽  
Yan Gao ◽  
Zhangyu Cheng ◽  
Chaoyun Chen ◽  
Maozhen Han ◽  
...  

Abstract Quality control in next-generation sequencing has become increasingly important as the technique has become widely used. Tools have been developed for filtering possible contaminants in the sequencing data of species with a known reference genome. Unfortunately, these tools require reference genomes for all the species involved, including the contaminants. This precludes many real-life samples for which the complete genome of the target species is unavailable and which are contaminated with unknown microbial species. In this work we propose QC-Blind, a novel quality-control pipeline for removing contaminants without any use of reference genomes. The pipeline requires only very little information from the marker genes of the target species. The entire pipeline consists of unsupervised read assembly, contig binning, read clustering and marker-gene assignment. When evaluated on in silico, ab initio and in vivo datasets, QC-Blind proved effective in removing unknown contaminants with high specificity and accuracy, while preserving most of the genomic information of the target bacterial species. Therefore, QC-Blind could serve well in situations where limited information is available for both the target and the contaminating species. Importance At present, many sequencing projects are still performed on potentially contaminated samples, which calls their accuracy into question. However, current reference-based quality-control methods are limited, as they need the genome of either the target species or the contaminants. In this work we propose QC-Blind, a novel quality-control pipeline for removing contaminants without any use of reference genomes. When evaluated on in silico, ab initio and in vivo datasets, QC-Blind proved effective in removing unknown contaminants with high specificity and accuracy, while preserving most of the genomic information of the target bacterial species. Therefore, QC-Blind is suitable for real-life samples where limited information is available for both the target and the contaminating species.
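Reference-free contig binning of the kind used in such pipelines groups contigs by sequence composition alone; as a loose illustration of the idea (far simpler than QC-Blind's actual binning), here is a two-cluster split on GC content:

```python
def gc(seq):
    """Fraction of G/C bases, a crude composition signature."""
    return sum(b in "GC" for b in seq) / len(seq)

def two_means_1d(values, iters=20):
    """Minimal 1-D 2-means clustering: assign each value to the nearer
    of two centroids, then recompute the centroids."""
    c = [min(values), max(values)]
    for _ in range(iters):
        groups = ([], [])
        for v in values:
            groups[abs(v - c[1]) < abs(v - c[0])].append(v)
        c = [sum(g) / len(g) if g else c[i] for i, g in enumerate(groups)]
    return [int(abs(v - c[1]) < abs(v - c[0])) for v in values]

contigs = ["ATATATATAT", "GCGCGCGCGC", "ATATGTATAT", "GGCCGCGGCC"]
labels = two_means_1d([gc(s) for s in contigs])
# AT-rich contigs land in one bin, GC-rich contigs in the other
```

Real binners use richer signatures (e.g. tetranucleotide frequencies and coverage), but the clustering principle is the same.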


2018 ◽  
Author(s):  
Zhiwu An ◽  
Fuzhou Gong ◽  
Yan Fu

We have developed PTMiner, the first software tool for automated, confident filtering, localization and annotation of protein post-translational modifications identified by open (mass-tolerant) searches of large tandem mass spectrometry datasets. The performance of the software was validated on carefully designed simulation data.


Author(s):  
Eric S Tvedte ◽  
Mark Gasser ◽  
Benjamin C Sparklin ◽  
Jane Michalski ◽  
Carl E Hjelmen ◽  
...  

Abstract The newest generation of DNA sequencing technology is highlighted by the ability to generate sequence reads hundreds of kilobases in length. Pacific Biosciences (PacBio) and Oxford Nanopore Technologies (ONT) have pioneered competing long-read platforms, with more recent work focused on improving sequencing throughput and per-base accuracy. We used whole-genome sequencing data produced by three PacBio protocols (Sequel II CLR, Sequel II HiFi, RS II) and two ONT protocols (Rapid Sequencing and Ligation Sequencing) to compare assemblies of the bacterium Escherichia coli and the fruit fly Drosophila ananassae. In both organisms tested, Sequel II assemblies had the highest consensus accuracy, even after accounting for differences in sequencing throughput. ONT and PacBio CLR produced the longest reads, compared with PacBio RS II and HiFi, and genome contiguity was highest when assembling these datasets. ONT Rapid Sequencing libraries had the fewest chimeric reads, in addition to superior quantification of E. coli plasmids, versus ligation-based libraries. The quality of assemblies can be enhanced by adopting hybrid approaches, using Illumina libraries for bacterial genome assembly or for polishing eukaryotic genome assemblies, and an ONT-Illumina hybrid approach would be more cost-effective for many users. Genome-wide DNA methylation could be detected using both technologies; however, ONT libraries enabled the identification of a broader range of known E. coli methyltransferase recognition motifs, in addition to undocumented D. ananassae motifs. The ideal choice of long-read technology may depend on several factors, including the question or hypothesis under examination. No single technology outperformed the others in all metrics examined.
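Contiguity comparisons like those in this study are conventionally summarized by the N50 statistic; a minimal sketch of that metric (not tied to any particular assembler's tooling):

```python
def n50(contig_lengths):
    """N50: the contig length L such that contigs of length >= L
    together cover at least half of the total assembly span."""
    total = sum(contig_lengths)
    acc = 0
    for length in sorted(contig_lengths, reverse=True):
        acc += length
        if acc * 2 >= total:
            return length

# toy assembly with total span 135:
# the 60 bp contig alone covers less than half, so N50 falls at 30
# n50([60, 30, 20, 15, 10]) -> 30
```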


Author(s):  
Anne Krogh Nøhr ◽  
Kristian Hanghøj ◽  
Genis Garcia Erill ◽  
Zilong Li ◽  
Ida Moltke ◽  
...  

Abstract Estimation of relatedness between pairs of individuals is important in many areas of genetic research. When estimating relatedness, it is important to account for admixture if it is present. However, the methods that can account for admixture all take genotype data as input, which is a problem for low-depth next-generation sequencing (NGS) data, from which genotypes are called with high uncertainty. Here we present a software tool, NGSremix, for maximum-likelihood estimation of relatedness between pairs of admixed individuals from low-depth NGS data, which takes the uncertainty of the genotypes into account via genotype likelihoods. Using both simulated and real NGS data for admixed individuals with an average depth of 4x or below, we show that our method works well and clearly outperforms the commonly used state-of-the-art relatedness estimation methods PLINK, KING, relateAdmix and ngsRelate, which all perform quite poorly. Hence, NGSremix is a useful new tool for estimating relatedness in admixed populations from low-depth NGS data. NGSremix is implemented in C/C++ as multi-threaded software and is freely available on GitHub at https://github.com/KHanghoj/NGSremix.
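Genotype likelihoods of the kind such methods consume can be sketched, for a biallelic site, with a simple symmetric base-error model; this toy version (not NGSremix's implementation) shows why a 4x site remains genuinely uncertain:

```python
def genotype_likelihoods(n_ref, n_alt, err=0.01):
    """Normalised likelihoods of genotypes (RR, RA, AA) given counts of
    ref/alt-supporting reads, assuming independent reads with a
    symmetric base-error rate `err` and a flat prior."""
    l_rr = (1 - err) ** n_ref * err ** n_alt       # all alts are errors
    l_ra = 0.5 ** (n_ref + n_alt)                  # each read 50/50
    l_aa = err ** n_ref * (1 - err) ** n_alt       # all refs are errors
    total = l_rr + l_ra + l_aa
    return [l / total for l in (l_rr, l_ra, l_aa)]

# at ~4x depth, 3 ref reads and 1 alt read still favour the heterozygote,
# because a real alt read is far more probable than a 1% sequencing error
probs = genotype_likelihoods(3, 1)
```

Passing these per-site likelihoods downstream, instead of a single hard genotype call, is what lets low-depth methods propagate uncertainty correctly.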


2014 ◽  
Vol 31 (7) ◽  
pp. 788-810 ◽  
Author(s):  
Claudia Paciarotti ◽  
Giovanni Mazzuto ◽  
Davide D’Ettorre

Purpose – The purpose of this paper is to propose a cost-effective, time-saving and easy-to-use failure modes and effects analysis (FMEA) system applied to the quality control of supplied products. The traditional FMEA has been modified and adapted to fit the features and requirements of quality control. The paper introduces a new, revised FMEA approach in which the “failure” concept is replaced with a “defect” concept. Design/methodology/approach – The typical FMEA parameters have been modified, and a non-linear scale has been introduced to better evaluate them. In addition, two weight functions have been introduced into the risk priority number (RPN) calculation in order to consider critical situations previously ignored, and the RPN is assigned to several similar products in order to reduce the problem of complexity. Findings – A complete procedure is provided to assist managers in deciding on critical suppliers; the creation of homogeneous product families overcomes the complexity of a single-product-code approach; and the relative importance of factors is evaluated in the RPN definition. Originality/value – This different approach supports quality-control managers by acting as a structured and “friendly” decision support system: the quality-control manager can easily evaluate critical situations and simulate different scenarios of corrective action in order to choose the best one. This FMEA technique is a dynamic tool, and the process performed is iterative. The method has been applied in a small-to-medium enterprise that produces hydromassage bathtubs, showers and spas and commercializes bathroom furniture. The application in the firm was carried out by a cross-functional, multidisciplinary team.
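The weighted-RPN idea can be illustrated with exponent weights on the classic severity × occurrence × detection product; the weighting form below is an assumption for illustration, not the paper's calibrated weight functions:

```python
def weighted_rpn(severity, occurrence, detection,
                 w_s=1.0, w_o=1.0, w_d=1.0):
    """Weighted risk priority number: the classic RPN = S * O * D,
    with exponent weights so that one factor can be made to dominate.
    With all weights at 1.0 this reduces to the traditional RPN."""
    return (severity ** w_s) * (occurrence ** w_o) * (detection ** w_d)

# unweighted: 5 * 3 * 2 = 30
base = weighted_rpn(5, 3, 2)
# doubling the severity weight inflates scores for high-severity defects
stressed = weighted_rpn(5, 3, 2, w_s=2.0)   # 25 * 3 * 2 = 150
```

A scheme like this lets two defects with equal traditional RPNs be ranked differently when one of them is severity-driven, which is the kind of critical situation the paper's weight functions aim to surface.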


Author(s):  
Utkarsh Jain ◽  
CS Pundir ◽  
Shaivya Gupta ◽  
Nidhi Chauhan

Recent advancements in nanotechnology for the biosynthesis of metal nanoparticles through numerous techniques have shown multidimensional developments. One among many facets of nanotechnology is to procure and adopt advancements in green technology over chemical-reduction synthesis, and this adoption of green nanotechnology leads to a new dimension of nanobiotechnology. As one such effort, this study emphasizes the synthesis of MgO nanoparticles using green technology, eliminating chemical-reduction methods. Different characterization techniques, such as UV–Vis spectroscopy, transmission electron microscopy and dynamic light scattering, were used to carry out the experiments. The average size of the MgO nanoparticles was in the range of 85–95 nm when synthesized from various sources. The plant extracts were capable of producing MgO nanoparticles efficiently and exhibited good results during cyclic voltammetry and electrochemical impedance spectroscopy studies. The electrode modified with MgO nanoparticles (plant extract) showed good stability (90 days) and high conductivity. This study reports a cost-effective and environment-friendly method for the synthesis of MgO nanoparticles using plant extracts. The process is rapid, simple and convenient, and can be used as an alternative to chemical methods.

