Auto-qPCR; a python-based web app for automated and reproducible analysis of qPCR data

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Gilles Maussion ◽  
Rhalena A. Thomas ◽  
Iveta Demirova ◽  
Gracia Gu ◽  
Eddie Cai ◽  
...  

Abstract: Quantifying changes in DNA and RNA levels is essential in numerous molecular biology protocols. Quantitative real-time PCR (qPCR) techniques have evolved to become commonplace; however, data analysis includes many time-consuming and cumbersome steps, which can lead to mistakes and misinterpretation of data. To address these bottlenecks, we have developed open-source Python software to automate the processing of result spreadsheets from qPCR machines, employing calculations usually performed manually. Auto-qPCR is a tool that saves time when computing qPCR data, helping to ensure reproducibility of qPCR experiment analyses. Our web-based app (https://auto-q-pcr.com/) is easy to use and does not require programming knowledge or software installation. Using Auto-qPCR, we provide examples of data treatment, display and statistical analyses for four different data processing modes within one program: (1) DNA quantification to identify genomic deletion or duplication events; (2) assessment of gene expression levels using an absolute model; and relative quantification (3) with or (4) without a reference sample. Our open-access Auto-qPCR software saves the time of manual data analysis and provides a more systematic workflow, minimizing the risk of errors. Our program constitutes a new tool that can be incorporated into bioinformatic and molecular biology pipelines in clinical and research labs.
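
The relative-quantification modes correspond to the standard delta-delta-Ct calculations that are otherwise done by hand in a spreadsheet. As a minimal illustrative sketch (not the Auto-qPCR source; the function name, input layout and simple replicate averaging are assumptions), quantification relative to a reference sample looks like:

import numpy as np

def relative_quantification(ct_target, ct_housekeeping,
                            ref_ct_target, ref_ct_housekeeping):
    """Classic delta-delta-Ct: normalize a target gene to a
    housekeeping gene, then to a reference (control) sample.
    All inputs are lists of technical-replicate Ct values."""
    # Delta-Ct for the test sample and the reference sample
    d_ct_sample = np.mean(ct_target) - np.mean(ct_housekeeping)
    d_ct_ref = np.mean(ref_ct_target) - np.mean(ref_ct_housekeeping)
    # Delta-delta-Ct, then fold change assuming ~100% efficiency (base 2)
    dd_ct = d_ct_sample - d_ct_ref
    return 2.0 ** (-dd_ct)

# Example: the target amplifies ~2 cycles earlier in the test sample
fold = relative_quantification([24.1, 24.3], [18.0, 18.1],
                               [26.2, 26.0], [18.1, 17.9])
print(f"fold change vs reference: {fold:.2f}")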

2021 ◽  
Author(s):  
Gilles Maussion ◽  
Rhalena A. Thomas ◽  
Iveta Demirova ◽  
Gracia Gu ◽  
Eddie Cai ◽  
...  

Abstract: Quantifying changes in DNA and RNA levels is an essential component of any molecular biology toolkit. Quantitative real-time PCR (qPCR) techniques, in both clinical and basic research labs, have evolved to become both routine and standardized. However, the analysis of qPCR data includes many steps that are time consuming and cumbersome, which can lead to mistakes and misinterpretation of data. To address this bottleneck, we have developed open-source software, written in Python, to automate the processing of CSV output files from any qPCR machine, using standard calculations that are usually performed manually. Auto-qPCR is a tool that saves time when computing this type of data, helping to ensure standardization of qPCR experiment analyses. Unlike other software packages that process qPCR data, our web-based app (http://auto-q-pcr.com/) is easy to use and does not require programming knowledge or software installation. Additionally, we provide examples of four different data processing modes within one program: (1) cDNA quantification to identify genomic deletion or duplication events, (2) assessment of gene expression levels using an absolute model, (3) relative quantification, and (4) relative quantification with a reference sample. Auto-qPCR also includes options for statistical analysis of the data. Using this software, we performed an analysis of differential gene expression following initial data processing and provide graphs of the findings prepared through the Auto-qPCR program. Thus, our open-access Auto-qPCR software saves the time of manual data analysis and provides a more systematic workflow, minimizing the risk of errors inherent in manual analysis. Our program constitutes a new tool that can be incorporated into bioinformatic and molecular biology pipelines in clinical and research labs.
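
For the absolute model mentioned above, quantities are typically read off a standard curve of Ct against log10 of known input amounts. A hedged sketch of that interpolation (illustrative only, with assumed names and example values; not taken from the Auto-qPCR code):

import numpy as np

def absolute_quantity(ct_values, std_amounts, std_cts):
    """Fit a standard curve (Ct vs log10 amount) from serial
    dilutions of known quantity, then interpolate unknowns."""
    log_amounts = np.log10(std_amounts)
    slope, intercept = np.polyfit(log_amounts, std_cts, 1)
    # Invert Ct = slope * log10(q) + intercept for each unknown
    return 10 ** ((np.asarray(ct_values) - intercept) / slope)

# 10-fold dilution series: 1e6 .. 1e2 copies, ~3.32 cycles apart
std = [1e6, 1e5, 1e4, 1e3, 1e2]
cts = [15.1, 18.5, 21.8, 25.2, 28.4]
print(absolute_quantity([20.0, 23.0], std, cts))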


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Delphine Larivière ◽  
Laura Wickham ◽  
Kenneth Keiler ◽  
Anton Nekrutenko ◽  

Abstract

Background: Significant progress has been made in advancing and standardizing tools for human genomic and biomedical research. Yet, the field of next-generation sequencing (NGS) analysis for microorganisms (including multiple pathogens) remains fragmented, lacks accessible and reusable tools, is hindered by local computational resource limitations, and does not offer widely accepted standards. One such "problem area" is the analysis of Transposon Insertion Sequencing (TIS) data. TIS allows probing of almost the entire genome of a microorganism by introducing random insertions of transposon-derived constructs. The impact of the insertions on survival and growth under specific conditions provides precise information about genes affecting specific phenotypic characteristics. A wide array of tools has been developed to analyze TIS data, and among the variety of options available it is often difficult to identify which one can provide a reliable and reproducible analysis.

Results: Here we sought to understand the challenges and propose reliable practices for the analysis of TIS experiments. Using data from two recent TIS studies, we have developed a series of workflows that include multiple tools for data de-multiplexing, promoter sequence identification, transposon flank alignment, and read count repartition across the genome. Particular attention was paid to quality control procedures, such as determining the optimal tool parameters for the analysis and removal of contamination.

Conclusions: Our work provides an assessment of the currently available tools for TIS data analysis. It offers ready-to-use workflows that can be invoked by anyone in the world using our public Galaxy platform (https://usegalaxy.org). To lower the entry barriers, we have also developed interactive tutorials explaining the details of TIS data analysis procedures at https://bit.ly/gxy-tis.
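
The final workflow step, repartitioning read counts across the genome, amounts to tallying how many mapped transposon flanks fall inside each annotated gene. A minimal sketch of that tally (hypothetical data structures; the published workflows use dedicated Galaxy tools rather than this code):

from collections import Counter

def counts_per_gene(insertion_positions, genes):
    """Tally transposon insertion sites per gene.
    insertion_positions: iterable of 0-based genomic coordinates
                         of mapped transposon flanks.
    genes: list of (name, start, end) intervals, end exclusive."""
    counts = Counter()
    for pos in insertion_positions:
        for name, start, end in genes:
            if start <= pos < end:
                counts[name] += 1
    return counts

genes = [("geneA", 0, 1000), ("geneB", 1500, 2600)]
hits = [12, 40, 40, 980, 1600, 2100, 2599]
print(counts_per_gene(hits, genes))
# Counter({'geneA': 4, 'geneB': 3})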


2001 ◽  
Vol 114 (10) ◽  
pp. 1797-1798
Author(s):  
K. Plant

Essential Molecular Biology, Vol. 1, 2nd edn, edited by T. A. Brown. Oxford University Press (2000). 240 pages. ISBN 0-19-963642-7. £29.95.

I have heard it said (though not to my face) that practical molecular biology is somewhat akin to cookery, and I have to admit (though not to my fellow molecular biologists) that there is an element of truth in this. Of course, our ovens are cooler, and our pie dishes smaller, but often it is a case of mixing ingredients in the right proportions and baking at 37°C for an hour. In this book Brown becomes the Delia Smith of molecular biology, starting with how to boil an egg before proceeding to more complex recipes. It is utterly and unashamedly aimed at the complete novice.

As more and more branches of biology use molecular techniques, and as a constant flow of graduates take up the yoke of research, there will always be a demand for this type of manual. Of course, it is possible to find variants of a lot of these methods on the Internet, but, as these often include only the protocol, the complete novice is probably better off with a specifically designed manual. Another option is to go for the kit approach, but, in the same way that opening a tin of beans doesn't make you a chef, I wholeheartedly agree with Brown when he says, 'do not get the idea that using kits is the same as being a molecular biologist.' In most branches of biology a bit of genuine molecular expertise can only enhance one's future job prospects!

One of the things I liked about this book is its no-nonsense style, particularly in those chapters written by the editor. There is plenty of sound advice, not just on the molecular techniques but on how to be a good scientist in general. Although the advice starts with the basics, it isn't patronising to those experienced in other fields. One piece of advice that particularly tickled me was that, if your hand is too unsteady to load a gel, you should give up caffeine; I'm not sure whether the pain would be worth the gain!

The first chapter deals with all the basic issues, from planning (not just how to do it, but whether it is worthy of your time, which is something we should all think about occasionally) to safety (which nasties you'll be using and what precautions to take, with internet sites referenced to fill in the details) and what equipment you'll need to run the experiments. In a nutshell, the rest of the book deals with microbiology for molecular biologists and molecular biology for everyone else. This includes DNA and RNA isolation, electrophoresis and cloning (generating, propagating and identifying recombinant DNA molecules, not the Dolly-the-sheep variety). There is a second volume to the set, which (based on the contents of the first edition) should cover making and screening libraries, the polymerase chain reaction, sequencing and gene expression studies. Bear in mind that to get very far you will need to buy the second volume, which is not yet published.

It has been more than a decade since the first edition of this well-known and respected manual was published, so one would think its first update is about due. However, compared with the first edition, most chapters have very few changes. This is probably in the nature of such a basic manual - for example, good microbial practice doesn't change much. Only a couple of chapters have been extensively rewritten; those describing DNA extraction now include more recent resin-based methods.

So to the crux of the matter: would I recommend buying it? Well, if you're a complete novice with little backup, I definitely think it is worth investing in a decent manual, and this one does have a nice comfortable feel to it. If you've already got a copy of the previous edition and are wondering whether to upgrade, I would say that the few improvements in these very basic techniques do not really make it worth spending the £30 that this volume costs. That said, I rather suspect that the second volume, which deals with more complex techniques, will show far more technical advances and should complete your progression from culinary incompetence to cordon bleu.


2019 ◽  
Vol 26 (3) ◽  
pp. 165-171
Author(s):  
Jean-Marc Cavaillon

André Boivin (1895–1949) started his career in Marseille as a biochemist. Soon after the discovery of insulin, he worked on its purification, allowing for the treatment of local patients. He later moved to Strasbourg and set up a microtitration technique for small carbon molecules and a method for quantifying purine and pyrimidine bases. His main scientific contribution occurred in Bucharest, where he was recruited to organize the teaching of medicinal chemistry. Together with Ion and Lydia Mesrobeanu at the Cantacuzene Institute, he was the first to characterize the biochemical nature of endotoxins, which he termed the "glucido-lipidic antigen." After joining the Institut Pasteur annex near Paris, he worked with Gaston Ramon, pursuing his research on smooth and rough LPS. Additionally, with Albert Delaunay, he researched the formation of exotoxins and antibodies (Abs). He was appointed assistant director of the Institut Pasteur in 1940. He initiated research on bacterial DNA and RNA, and was the first to hypothesize how RNA fits into gene function. In 1947 he moved for a second time to Strasbourg, accepting a position as Professor of Biological Chemistry. After his premature death at the age of 54, the French academies mourned his loss and recognized him as one of their outstanding masters of biochemistry, microbiology, immunology, and molecular biology.


2012 ◽  
Vol 2012 ◽  
pp. 1-24 ◽  
Author(s):  
Vaishali Katju

The gene duplication process has exhibited far greater promiscuity in the creation of paralogs with novel exon-intron structures than anticipated even by Ohno. In this paper I explore the history of the field, from the neo-Darwinian synthesis through Ohno's formulation of the canonical model for the evolution of gene duplicates, culminating in the present genomic era. I delineate the major tenets of Ohno's model and discuss its failure to encapsulate the full complexity of the duplication process as revealed in the era of genomics. I discuss the diverse classes of paralogs originating from both DNA- and RNA-mediated duplication events and their evolutionary potential for assuming radically altered functions, as well as the degree to which they can function free from the constraining pressure of gene conversion. Lastly, I explore theoretical population-genetic considerations of how the effective population size (Ne) of a species may influence the probability of emergence of genes with radically altered functions.
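
That last point can be made concrete with the standard diffusion-approximation result from population genetics (a textbook formula, not one derived in this paper): a new variant with selection coefficient s and initial frequency p = 1/(2N) reaches fixation with probability

u(p) = \frac{1 - e^{-4 N_e s p}}{1 - e^{-4 N_e s}}

so when 4 Ne s >> 1, selection dominates drift and beneficial duplicate-derived variants fix far more readily, whereas when 4 Ne |s| << 1 the duplicate behaves as effectively neutral and u(p) ≈ p.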


2018 ◽  
Author(s):  
Yulia Panina ◽  
Arno Germond ◽  
Brit G. David ◽  
Tomonobu M. Watanabe ◽  

Abstract: The real-time quantitative polymerase chain reaction (qPCR) is routinely used for the quantification of nucleic acids and is considered the gold standard in the field of relative nucleic acid measurements. The efficiency of the qPCR reaction is one of the most important parameters that needs to be determined, reported, and incorporated into data analysis in any qPCR experiment. The Minimum Information for Publication of Quantitative Real-Time PCR Experiments (MIQE) guidelines recognize the calibration curve as the method of choice for estimation of qPCR efficiency. The precision of this method has been reported to be between SD = 0.007 (3 replicates) and SD = 0.022 (no replicates). In this manuscript we present a novel approach to analysing qPCR data obtained by running a dilution series. Unlike previously developed methods, our method relies on a new formula that describes pairwise relationships between data points on separate amplification curves and thus draws on extensive statistics (hundreds of estimates). A comparison of our method with the classical calibration curve by Monte Carlo simulation shows that our approach can almost double the precision of efficiency and gene expression ratio estimations on the same dataset.
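
For reference, the calibration-curve method that the manuscript benchmarks against estimates efficiency from the slope of Ct versus log10(dilution), via E = 10^(-1/slope) - 1. A brief sketch of that classical estimate (illustrative, with assumed names and example values; this is not the authors' pairwise method):

import numpy as np

def efficiency_from_dilution_series(dilutions, cts):
    """Classical calibration-curve estimate of qPCR efficiency.
    dilutions: relative template amounts (e.g. 1, 0.1, 0.01, ...)
    cts: measured Ct for each dilution."""
    slope, _ = np.polyfit(np.log10(dilutions), cts, 1)
    return 10 ** (-1.0 / slope) - 1.0  # 1.0 means 100% efficient

# Ideal 10-fold series: Ct rises ~3.32 cycles per dilution step
print(efficiency_from_dilution_series(
    [1, 0.1, 0.01, 0.001], [18.0, 21.32, 24.64, 27.96]))
# ~1.0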


2021 ◽  
Author(s):  
Stefan Buck ◽  
Lukas Pekarek ◽  
Neva Caliskan

Optical tweezers is a single-molecule technique that allows probing of intra- and intermolecular interactions that govern complex biological processes involving molecular motors, protein-nucleic acid interactions and protein/RNA folding. Recent developments in instrumentation have eased and accelerated optical tweezers data acquisition, but analysis of the data remains challenging. Here, to enable high-throughput data analysis, we developed an automated Python-based analysis pipeline called POTATO (Practical Optical Tweezers Analysis TOol). POTATO automatically processes the high-frequency raw data generated by force-ramp experiments and identifies (un)folding events using predefined parameters. After segmentation of the force-distance trajectories at the identified (un)folding events, sections of the curve can be fitted independently to worm-like chain and freely jointed chain models, and the work applied to the molecule can be calculated by numerical integration. Furthermore, the tool allows plotting of constant-force data and fitting of the Gaussian distance distribution over time. All these features are wrapped in a user-friendly graphical interface (https://github.com/REMI-HIRI/POTATO), which allows researchers without programming knowledge to perform sophisticated data analysis.
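
The fitted models and the work integral are standard polymer-physics pieces. A hedged sketch of the Marko-Siggia worm-like chain interpolation formula and trapezoidal work integration (illustrative; the parameter values are assumptions, and POTATO's own fitting code lives in the linked repository):

import numpy as np

kT = 4.114  # pN*nm at room temperature

def wlc_force(x, Lp, Lc):
    """Marko-Siggia worm-like chain interpolation formula.
    x: extension (nm), Lp: persistence length (nm),
    Lc: contour length (nm). Returns force in pN."""
    r = x / Lc
    return (kT / Lp) * (0.25 / (1 - r) ** 2 - 0.25 + r)

# Work applied along a force-distance curve, by trapezoidal rule
x = np.linspace(10, 280, 200)          # nm, below Lc = 300 nm
f = wlc_force(x, Lp=50.0, Lc=300.0)    # pN
work = np.trapz(f, x)                  # pN*nm
print(f"work: {work:.1f} pN*nm (~{work / kT:.1f} kT)")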


2017 ◽  
Vol 3 ◽  
pp. e129 ◽  
Author(s):  
Bruno Contrino ◽  
Eric Miele ◽  
Ronald Tomlinson ◽  
M. Paola Castaldi ◽  
Piero Ricchiuto

Background: Mass spectrometry (MS)-based chemoproteomics has recently become a main tool to identify and quantify cellular target protein interactions with ligands/drugs in drug discovery. The complexity associated with these new types of data requires scientists with a limited computational background to perform systematic data quality controls as well as to visualize the results derived from the analysis to enable rapid decision making. To date, there are no readily accessible platforms specifically designed for chemoproteomics data analysis.

Results: We developed a Shiny-based web application named DOSCHEDA (Down Stream Chemoproteomics Data Analysis) to assess the quality of chemoproteomics experiments, to filter peptide intensities based on linear correlations between replicates, and to perform statistical analysis based on the experimental design. To increase its accessibility, DOSCHEDA is designed to be used with minimal user input and does not require programming knowledge. Typical inputs can be protein fold changes or peptide intensities obtained from Proteome Discoverer, MaxQuant or other similar software. DOSCHEDA aggregates results from bioinformatics analyses performed on the input dataset into a dynamic interface, encompasses interactive graphics and enables customized output reports.

Conclusions: DOSCHEDA is implemented entirely in the R language. It can be launched on any system with R installed, including Windows, Mac OS and Linux distributions. DOSCHEDA is hosted on a Shiny server at https://doscheda.shinyapps.io/doscheda and is also available as a Bioconductor package (http://www.bioconductor.org/).
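
DOSCHEDA itself is an R/Bioconductor package; purely to illustrate the replicate-correlation filtering idea it describes, here is a hedged Python sketch (function name, threshold and data are assumptions, not DOSCHEDA's implementation): fit a line between two replicate intensity vectors and drop peptides whose residuals are outliers.

import numpy as np

def filter_peptides(rep1, rep2, n_sd=2.0):
    """Fit a line between two replicate intensity vectors and
    drop peptides whose residual exceeds n_sd standard deviations.
    Returns a boolean mask of peptides to keep."""
    rep1, rep2 = np.asarray(rep1, float), np.asarray(rep2, float)
    slope, intercept = np.polyfit(rep1, rep2, 1)
    residuals = rep2 - (slope * rep1 + intercept)
    return np.abs(residuals) <= n_sd * residuals.std()

rep1 = [5.1, 6.0, 7.2, 8.1, 9.0, 6.5]
rep2 = [5.0, 6.1, 7.0, 8.3, 9.2, 2.0]   # last peptide disagrees
print(filter_peptides(rep1, rep2))
# [ True  True  True  True  True False]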

