Evaluation of TagSeq, a reliable low-cost alternative for RNAseq

2016
Author(s):
Brian Keith Lohman
Jesse N Weber
Daniel I Bolnick

RNAseq is a relatively new tool for ecological genetics that offers researchers insight into changes in gene expression in response to a myriad of natural or experimental conditions. However, standard RNAseq methods (e.g., Illumina TruSeq® or NEBNext®) can be cost prohibitive, especially when study designs require large sample sizes. Consequently, RNAseq is often underused as a method, or is applied to small sample sizes that confer poor statistical power. Low-cost RNAseq methods could therefore enable far greater and more powerful applications of transcriptomics in ecological genetics and beyond. Standard mRNAseq is costly partly because portions of the full length of every transcript are sequenced. Such whole-mRNA data are redundant for estimates of relative gene expression. TagSeq is an alternative method that focuses sequencing effort on the 3′ end of mRNAs, thereby reducing the necessary sequencing depth per sample, and thus cost. Here we present a revised TagSeq protocol and compare its performance against NEBNext®, the gold-standard whole-mRNAseq method. We built both TagSeq and NEBNext® libraries from the same biological samples, each spiked with control RNAs. We found that TagSeq measured the control RNA distribution more accurately than NEBNext®, at a fraction (~10%) of the cost per sample. The higher accuracy of TagSeq was particularly apparent for transcripts of moderate to low abundance. Technical replicates of TagSeq libraries are highly correlated with one another and with NEBNext® results. Overall, we show that our modified TagSeq protocol is an efficient alternative to traditional whole mRNAseq, offering researchers comparable data at greatly reduced cost.
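The accuracy claim rests on spike-in control RNAs whose input concentrations are known in advance. As a minimal sketch of that comparison, assuming hypothetical count vectors and input concentrations (the actual data and normalization pipeline are not given in the abstract), one can correlate observed counts with known inputs on a log scale for each library type:

```python
import numpy as np
from scipy import stats

# Hypothetical data: known spike-in input concentrations and observed
# counts from TagSeq and NEBNext libraries built from the same samples.
known_conc = np.array([0.5, 1, 2, 4, 8, 16, 32, 64], dtype=float)
tagseq_counts = np.array([12, 26, 49, 105, 198, 410, 798, 1620], dtype=float)
nebnext_counts = np.array([9, 31, 44, 88, 230, 380, 910, 1500], dtype=float)

def spikein_accuracy(counts, conc):
    """Pearson correlation between log2 counts and log2 known input:
    a simple proxy for how faithfully a library measures abundance."""
    r, _ = stats.pearsonr(np.log2(counts), np.log2(conc))
    return r

print("TagSeq  r =", round(spikein_accuracy(tagseq_counts, known_conc), 3))
print("NEBNext r =", round(spikein_accuracy(nebnext_counts, known_conc), 3))
```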

2021
Vol 11 (1)
Author(s):
Florent Le Borgne
Arthur Chatton
Maxime Léger
Rémi Lenain
Yohann Foucher

Abstract In clinical research, there is growing interest in the use of propensity score-based methods to estimate causal effects. G-computation (GC) is an alternative that offers high statistical power. Machine learning is also increasingly used because of its possible robustness to model misspecification. In this paper, we propose an approach that combines machine learning and G-computation when both the outcome and the exposure status are binary, and that can deal with small samples. We evaluated the performance of several methods through simulations, including penalized logistic regressions, a neural network, a support vector machine, boosted classification and regression trees, and a super learner. We considered six scenarios characterised by various sample sizes, numbers of covariates, and relationships between covariates, exposure statuses, and outcomes. We also illustrate the application of these methods by estimating the efficacy of barbiturates prescribed during the first 24 h of an episode of intracranial hypertension. For estimating the individual outcome probabilities in the two counterfactual worlds required by GC, the super learner tended to outperform the other approaches in terms of both bias and variance, especially for small sample sizes. The support vector machine also performed well, but its mean bias was slightly higher than that of the super learner. In the investigated scenarios, G-computation combined with the super learner was a performant method for drawing causal inferences, even from small sample sizes.
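In sketch form, G-computation with a machine-learning outcome model amounts to fitting a model for the outcome given exposure and covariates, then averaging its predictions over the two counterfactual worlds. A minimal illustration on simulated data, with a penalized logistic regression standing in for the paper's super learner (the data, variable names, and model choice here are all illustrative assumptions, not the authors' pipeline):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Simulated toy data: two covariates, a binary exposure, a binary outcome.
n = 200
X = rng.normal(size=(n, 2))
a = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))            # exposure depends on X
p_y = 1 / (1 + np.exp(-(0.5 * a + X[:, 0] - 0.5 * X[:, 1])))
y = rng.binomial(1, p_y)

# Outcome model Q(A, X); the paper uses a super learner, but a penalized
# logistic regression stands in here for brevity.
Q = LogisticRegression(penalty="l2", C=1.0).fit(np.column_stack([a, X]), y)

# G-computation: predict each subject's outcome probability in the two
# counterfactual worlds (everyone exposed vs. everyone unexposed), then average.
p1 = Q.predict_proba(np.column_stack([np.ones(n), X]))[:, 1]
p0 = Q.predict_proba(np.column_stack([np.zeros(n), X]))[:, 1]

rd = np.mean(p1) - np.mean(p0)
or_ = (np.mean(p1) / (1 - np.mean(p1))) / (np.mean(p0) / (1 - np.mean(p0)))
print(f"Marginal risk difference: {rd:.3f}, marginal odds ratio: {or_:.3f}")
```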


2016
Vol 2 (1)
pp. 41-54
Author(s):
Ashleigh Saunders
Karen E. Waldie

Purpose – Autism spectrum disorder (ASD) is a lifelong neurodevelopmental condition for which there is no known cure. The rate of psychiatric comorbidity in autism is extremely high, which raises questions about the nature of the co-occurring symptoms. It is unclear whether these additional conditions are true comorbid conditions or can simply be accounted for by the ASD diagnosis. The paper aims to discuss this issue. Design/methodology/approach – A number of questionnaires and a computer-based task were used in the current study. The authors asked participants about symptoms of ASD, attention deficit hyperactivity disorder (ADHD) and anxiety, as well as overall adaptive functioning. Findings – The results demonstrate that each condition, in its pure form, can be clearly differentiated from the others (and from neurotypical controls). Further analyses revealed that when ASD occurs together with anxiety, anxiety appears to be a separate condition. In contrast, there is no clear behavioural profile when ASD and ADHD co-occur. Research limitations/implications – First, due to small sample sizes, some analyses were targeted to specific groups (i.e., comparing the ADHD and ASD groups to the comorbid ADHD+ASD group). Larger sample sizes would have given the statistical power to perform a full-scale comparative analysis of all experimental groups when split by their comorbid conditions. Second, males were over-represented in the ASD group and females were over-represented in the anxiety group, owing to the uneven gender balance in the prevalence of these conditions. Lastly, the main profiling techniques used were questionnaires. Clinical interviews would have been preferable, as they give a more objective account of behavioural difficulties. Practical implications – The rate of psychiatric comorbidity in autism is extremely high, which raises questions about the nature of the co-occurring symptoms. It is unclear whether these additional conditions are true comorbid conditions or can simply be accounted for by the ASD diagnosis. Social implications – This information will be important not only to healthcare practitioners administering a diagnosis, but also to therapists who need to apply evidence-based treatment to comorbid and stand-alone conditions. Originality/value – This study is the first to investigate the nature of co-existing conditions in ASD in a New Zealand population.


2019
Vol 147 (2)
pp. 763-769
Author(s):
D. S. Wilks

Abstract Quantitative evaluation of the flatness of the verification rank histogram can be approached through formal hypothesis testing. Traditionally, the familiar χ2 test has been used for this purpose. Recently, two alternatives—the reliability index (RI) and an entropy statistic (Ω)—have been suggested in the literature. This paper presents approximations to the sampling distributions of these latter two rank histogram flatness metrics, and compares the statistical power of tests based on the three statistics, in a controlled setting. The χ2 test is generally most powerful (i.e., most sensitive to violations of the null hypothesis of rank uniformity), although for overdispersed ensembles and small sample sizes, the test based on the entropy statistic Ω is more powerful. The RI-based test is preferred only for unbiased forecasts with small ensembles and very small sample sizes.
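For concreteness, all three flatness statistics can be computed directly from a rank histogram's bin counts, as sketched below. The counts are hypothetical, and the RI and Ω formulas follow one common normalization from the ensemble-verification literature, which may differ in detail from the paper's definitions:

```python
import numpy as np
from scipy import stats

# Hypothetical verification rank histogram: the observation's rank among
# k - 1 = 8 ensemble members gives k = 9 possible ranks; n = 450 cases.
counts = np.array([60, 52, 48, 45, 44, 47, 49, 51, 54])
n, k = counts.sum(), len(counts)
expected = n / k

# Chi-squared test of flatness (ranks uniform under the null hypothesis).
chi2 = ((counts - expected) ** 2 / expected).sum()
p_chi2 = stats.chi2.sf(chi2, df=k - 1)

# Entropy statistic: equals 1 for a perfectly flat histogram, < 1 otherwise
# (one common normalization; the paper's exact definition may differ).
p = counts / n
omega = -(p * np.log(p)).sum() / np.log(k)

# Reliability index: summed absolute deviation of the bin frequencies
# from uniformity (again, one common definition).
ri = np.abs(p - 1 / k).sum()

print(f"chi2 = {chi2:.2f} (p = {p_chi2:.3f}), omega = {omega:.4f}, RI = {ri:.4f}")
```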


2015
Vol 13 (04)
pp. 1550018
Author(s):
Kevin Lim
Zhenhua Li
Kwok Pui Choi
Limsoon Wong

Transcript-level quantification is often measured across two groups of patients to aid the discovery of biomarkers and the detection of biological mechanisms involving these biomarkers. When sample size is small, statistical tests lack power and the false discovery rate is high. Yet many experiments have very few samples (≤ 5). This creates the impetus for a method that can discover biomarkers and mechanisms under very small sample sizes. We present a powerful method, ESSNet, that identifies subnetworks consistently across independent datasets of the same disease phenotypes even under very small sample sizes. The key idea of ESSNet is to fragment large pathways into smaller subnetworks and compute a statistic that discriminates the subnetworks between two phenotypes. We do not greedily select genes for inclusion based on differential expression, but instead rely on gene-expression-level ranking within a phenotype, which is shown to be stable even under extremely small sample sizes. We test our subnetworks against null distributions obtained by array rotation; this preserves the gene–gene correlation structure and is suitable for datasets with small sample sizes, allowing us to consistently identify relevant subnetworks. For most other methods, this consistency drops to less than 10% when tested on datasets with only two samples per phenotype, whereas ESSNet achieves an average consistency of 58% (72% when we consider genes within the subnetworks) and remains superior when sample size is large. We further show that the subnetworks identified by ESSNet are well supported by references in the biological literature. ESSNet and supplementary material are available at http://compbio.ddns.comp.nus.edu.sg:8080/essnet.
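As a rough illustration of the within-phenotype ranking idea (this is not ESSNet's exact statistic, and the expression values, subnetwork, and scoring rule are simulated assumptions), a subnetwork can be scored by how much its genes' expression ranks shift between phenotypes:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy expression matrices: rows = genes, columns = samples (2 per phenotype,
# mimicking the extreme small-sample setting the paper targets).
n_genes = 100
expr_a = rng.lognormal(size=(n_genes, 2))
expr_b = rng.lognormal(size=(n_genes, 2))
expr_b[:10] *= 3          # genes 0-9 are up-shifted in phenotype B

def phenotype_ranks(expr):
    """Rank genes by mean expression within a phenotype -- the quantity
    the paper reports to be stable even at tiny sample sizes."""
    means = expr.mean(axis=1)
    return means.argsort().argsort()   # rank 0 = lowest expression

# Score a subnetwork (a set of gene indices) by its mean rank shift between
# phenotypes -- a simplified stand-in for ESSNet's discriminating statistic.
ranks_a, ranks_b = phenotype_ranks(expr_a), phenotype_ranks(expr_b)
subnetwork = np.arange(10)
score = np.mean(ranks_b[subnetwork] - ranks_a[subnetwork])
print("Subnetwork rank-shift score:", score)
```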


2021
pp. 016327872110243
Author(s):
Donna Chen
Matthew S. Fritz

Although the bias-corrected (BC) bootstrap is an often-recommended method for testing mediation due to its higher statistical power relative to other tests, it has also been found to have elevated Type I error rates with small sample sizes. Under limitations on participant recruitment, obtaining a larger sample size is not always feasible. Thus, this study examines whether alternative corrections for bias in the BC bootstrap test of mediation can achieve equal levels of statistical power at small sample sizes without the associated increase in Type I error. A simulation study was conducted to compare Efron and Tibshirani's original correction for bias, z0, to six alternative corrections: (a) the mean, (b–e) the Winsorized mean with 10%, 20%, 30%, and 40% trimming in each tail, and (f) the medcouple (a robust skewness measure). Both Type I error (given a medium effect size for one regression slope and zero for the other) and statistical power (a small effect size for both regression slopes) varied most at small sample sizes. Recommendations for applied researchers are made based on the results. An empirical example using data from the ATLAS drug prevention intervention study illustrates these results. Limitations and future directions are discussed.
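For reference, Efron and Tibshirani's z0 measures how far the point estimate sits from the median of the bootstrap distribution and shifts the percentile endpoints accordingly. A minimal sketch of the BC bootstrap for the indirect effect ab on simulated mediation data (the data-generating values and sample size are illustrative):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Toy mediation data, X -> M -> Y, with a deliberately small n.
n = 30
x = rng.normal(size=n)
m = 0.4 * x + rng.normal(size=n)
y = 0.4 * m + rng.normal(size=n)

def indirect(x, m, y):
    """a*b: slope of M on X, times slope of Y on M controlling for X."""
    a = np.polyfit(x, m, 1)[0]
    b = np.linalg.lstsq(np.column_stack([m, x, np.ones_like(x)]), y,
                        rcond=None)[0][0]
    return a * b

ab_hat = indirect(x, m, y)
boot = np.array([indirect(*(arr[idx] for arr in (x, m, y)))
                 for idx in (rng.integers(0, n, n) for _ in range(2000))])

# z0: the bias correction, from the share of bootstrap estimates below ab_hat.
z0 = stats.norm.ppf(np.mean(boot < ab_hat))

# BC interval: percentile endpoints shifted by 2*z0 on the normal scale.
z_crit = stats.norm.ppf([0.025, 0.975])
lo, hi = np.percentile(boot, 100 * stats.norm.cdf(2 * z0 + z_crit))
print(f"ab = {ab_hat:.3f}, BC 95% CI = [{lo:.3f}, {hi:.3f}]")
```

The study's alternative corrections replace the median-based z0 above with corrections derived from the mean, Winsorized means, or the medcouple of the bootstrap distribution.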


2019
Vol 35 (20)
pp. 3996-4003
Author(s):
Insha Ullah
Sudhir Paul
Zhenjie Hong
You-Gan Wang

Abstract Motivation Under two biologically different conditions, we are often interested in identifying differentially expressed genes. The assumption of equal variances across the two groups is usually violated for many of the genes that must be filtered or ranked. In these cases, exact tests are unavailable and Welch's approximate test is the most reliable one. Welch's test involves two layers of approximation: approximating the distribution of the statistic by a t-distribution, which in turn depends on approximate degrees of freedom. This study attempts to improve upon Welch's approximate test by avoiding one layer of approximation. Results We introduce a new distribution that generalizes the t-distribution and propose a Monte Carlo based test that uses only one layer of approximation for statistical inference. Experimental results based on extensive simulation studies show that the Monte Carlo based test enhances statistical power and performs better than Welch's t-approximation, especially when the equal-variance assumption is violated and the group with the larger variance has the smaller sample size. We analyzed two gene-expression datasets: the childhood acute lymphoblastic leukemia gene-expression dataset with 22,283 genes and the Golden Spike dataset, produced by a controlled experiment, with 13,966 genes. The new test identified additional genes of interest in both datasets, some of which have been shown in the medical literature to play important roles. Availability and implementation The R package mcBFtest is available on CRAN, and R scripts to reproduce all reported results are available at the GitHub repository, https://github.com/iullah1980/MCTcodes. Supplementary information Supplementary data are available at Bioinformatics online.
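The two layers of approximation are easy to see in code: the statistic is referred to a t-distribution whose degrees of freedom are themselves approximated (Welch–Satterthwaite). A minimal sketch contrasting that with a Monte Carlo null in the spirit of the paper (simulated data; not the authors' exact procedure, which uses their generalized t-distribution):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

def welch_stat(x, y):
    """Welch's t statistic for two groups with unequal variances."""
    return (x.mean() - y.mean()) / np.sqrt(x.var(ddof=1) / len(x)
                                           + y.var(ddof=1) / len(y))

# Unequal variances, with the higher-variance group having fewer samples --
# the setting where the paper reports the largest gains.
x = rng.normal(0.0, 3.0, size=5)
y = rng.normal(0.0, 1.0, size=12)
t_obs = welch_stat(x, y)

# Layer 1 + 2: t-distribution with Satterthwaite-approximated df.
vx, vy = x.var(ddof=1) / len(x), y.var(ddof=1) / len(y)
df = (vx + vy) ** 2 / (vx ** 2 / (len(x) - 1) + vy ** 2 / (len(y) - 1))
p_welch = 2 * stats.t.sf(abs(t_obs), df)

# Monte Carlo alternative: simulate the statistic's null distribution
# directly, avoiding the df approximation.
null = np.array([welch_stat(rng.normal(0, x.std(ddof=1), len(x)),
                            rng.normal(0, y.std(ddof=1), len(y)))
                 for _ in range(5000)])
p_mc = np.mean(np.abs(null) >= abs(t_obs))
print(f"Welch p = {p_welch:.3f}, Monte Carlo p = {p_mc:.3f}")
```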


2009
Vol 4 (3)
pp. 294-298
Author(s):
Tal Yarkoni

Vul, Harris, Winkielman, and Pashler (2009; this issue) argue that correlations in many cognitive neuroscience studies are grossly inflated due to a widespread tendency to use nonindependent analyses. In this article, I argue that Vul et al.'s primary conclusion is correct, but for different reasons than they suggest. I demonstrate that the primary cause of grossly inflated correlations in whole-brain fMRI analyses is not nonindependence, but the pernicious combination of small sample sizes and stringent alpha-correction levels. Far from defusing Vul et al.'s conclusions, the simulations presented here suggest that the level of inflation may be even worse than Vul et al.'s empirical analysis would suggest.
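The selection mechanism is straightforward to simulate: under a stringent alpha and a small n, only sample correlations far above the true value cross the significance threshold, so the significant correlations are inflated on average. A minimal sketch (the true correlation, n, and alpha are illustrative choices, not values from the article):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Small sample + stringent, correction-style alpha: the Yarkoni scenario.
true_r, n, alpha, n_sims = 0.3, 20, 0.001, 10000

sig_rs = []
for _ in range(n_sims):
    z = rng.multivariate_normal([0, 0], [[1, true_r], [true_r, 1]], size=n)
    r, p = stats.pearsonr(z[:, 0], z[:, 1])
    if p < alpha:
        sig_rs.append(r)   # only "significant" correlations get reported

print(f"True r = {true_r}; mean significant r = {np.mean(sig_rs):.2f} "
      f"({len(sig_rs)} of {n_sims} simulations significant)")
```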


2019
Vol 18 (1)
Author(s):
Susanne H. Hodgson
Julius Muller
Helen E. Lockstone
Adrian V. S. Hill
Kevin Marsh
...

Abstract Background Transcriptional profiling of the human immune response to malaria has been used to identify diagnostic markers, understand the pathogenicity of severe disease and dissect the mechanisms of naturally acquired immunity (NAI). However, interpreting this body of work is difficult given considerable variation in study design, definition of disease, patient selection and methodology. This work details a comprehensive review of gene expression profiling (GEP) of the human immune response to malaria, to determine how this technology has been applied to date, the instances where it has advanced understanding of NAI, and the extent of methodological variability between studies, so as to allow informed comparison of data and interpretation of results. Methods Datasets in the gene expression omnibus (GEO) matching the search terms 'plasmodium', 'malaria', 'sporozoite', 'merozoite' or 'gametocyte', together with 'Homo sapiens', were identified and the associated publications analysed. Datasets of gene expression changes in relation to malaria vaccines were excluded. Results Twenty-three GEO datasets and 25 related publications were included in the final review. All datasets related to Plasmodium falciparum infection, except two that related to Plasmodium vivax infection. The majority of datasets included samples from individuals infected with malaria 'naturally' in the field (n = 13, 57%); however, some related to controlled human malaria infection (CHMI) studies (n = 6, 26%) or cells stimulated with Plasmodium in vitro (n = 6, 26%). The majority of studies examined gene expression changes relating to the blood stage of the parasite. Significant heterogeneity between datasets was identified in terms of study design, sample type, platform used and method of analysis. Seven datasets specifically investigated transcriptional changes associated with NAI to malaria; in the majority of these, the evidence supported suppression of the innate pro-inflammatory response as an important mechanism. However, further interpretation of this body of work was limited by heterogeneity between studies and small sample sizes. Conclusions GEP in malaria is a potentially powerful tool, but to date studies have been hypothesis-generating, with small sample sizes and widely varying methodology. As CHMI studies are increasingly performed in endemic settings, there will be growing opportunity to use GEP to understand detailed time-course changes in the host response and, in greater detail, the mechanisms of NAI.

