CHIPIN: ChIP-seq inter-sample normalization based on signal invariance across transcriptionally constant genes

2021, Vol 22 (1)
Author(s): Lélia Polit, Gwenneg Kerdivel, Sebastian Gregoricchio, Michela Esposito, Christel Guillouf, ...

Abstract
Background: Multiple studies rely on ChIP-seq experiments to assess the effect of gene modulation and drug treatments on protein binding and chromatin structure. However, most methods commonly used for the normalization of ChIP-seq binding intensity signals across conditions, e.g., normalization to the same number of reads, either assume a constant signal-to-noise ratio across conditions or base the estimates of correction factors on genomic regions with intrinsically different signals between conditions. Inaccurate normalization of ChIP-seq signal may, in turn, lead to erroneous biological conclusions.
Results: We developed a new R package, CHIPIN, that allows normalizing ChIP-seq signals across different conditions/samples when spike-in information is not available but gene expression data are at hand. Our normalization technique is based on the assumption that, on average, no differences in ChIP-seq signals should be observed in the regulatory regions of genes whose expression levels are constant across samples/conditions. In addition to normalizing ChIP-seq signals, CHIPIN provides as output a number of graphs and calculates statistics allowing the user to assess the efficiency of the normalization and qualify the specificity of the antibody used. Beyond ChIP-seq, CHIPIN can be used without restriction on open chromatin ATAC-seq or DNase hypersensitivity data. We validated the CHIPIN method on several ChIP-seq data sets and documented its superior performance in comparison to several commonly used normalization techniques.
Conclusions: The CHIPIN method provides a new way for ChIP-seq signal normalization across conditions when spike-in experiments are not available. The method is implemented in a user-friendly R package available on GitHub: https://github.com/BoevaLab/CHIPIN
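The core assumption above can be illustrated with a minimal base-R sketch (not the CHIPIN API): each sample is rescaled so that its average signal over the regulatory regions of constant-expression genes matches a common reference. The matrix and index names are purely illustrative.

```r
# Minimal sketch of CHIPIN-style normalization (illustration only, not the package API):
# rescale each sample so that its mean ChIP-seq signal over the regulatory regions
# of constant-expression genes matches a common reference.

# 'signal' is a hypothetical samples x regions matrix of ChIP-seq coverage;
# 'constant_genes' indexes the columns for regulatory regions of constant-expression genes.
normalize_to_constant_genes <- function(signal, constant_genes) {
  # mean signal of each sample over the invariant regions
  sample_means <- rowMeans(signal[, constant_genes, drop = FALSE])
  # common target: the grand mean across samples
  target <- mean(sample_means)
  # linear scaling factor per sample
  factors <- target / sample_means
  sweep(signal, 1, factors, `*`)
}
```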

SAGE Open, 2016, Vol 6 (4), pp. 215824401666822
Author(s): Simon Grund, Oliver Lüdtke, Alexander Robitzsch

The treatment of missing data can be difficult in multilevel research because state-of-the-art procedures such as multiple imputation (MI) may require advanced statistical knowledge or a high degree of familiarity with certain statistical software. In the missing data literature, pan has been recommended for MI of multilevel data. In this article, we provide an introduction to MI of multilevel missing data using the R package pan, and we discuss its possibilities and limitations in accommodating typical questions in multilevel research. To make pan more accessible to applied researchers, we make use of the mitml package, which provides a user-friendly interface to the pan package and several tools for managing and analyzing multiply imputed data sets. We illustrate the use of pan and mitml with two empirical examples that represent common applications of multilevel models, and we discuss how these procedures may be used in conjunction with other software.
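A brief sketch of the pan-via-mitml workflow described above follows; the data set, variable names, and two-level model are placeholders, so treat this as a template rather than a verbatim recipe.

```r
# Sketch of multilevel MI with pan via the mitml interface.
# 'studentData' with outcome 'mathScore', covariate 'ses' and grouping variable
# 'school' is an assumed example data set.
library(mitml)
library(lme4)

# impute 'mathScore' and 'ses' with a random intercept per school
fml <- mathScore + ses ~ 1 + (1 | school)
imp <- panImpute(studentData, formula = fml, n.burn = 5000, n.iter = 100, m = 20)

implist <- mitmlComplete(imp, print = "all")                  # list of completed data sets
fits <- with(implist, lmer(mathScore ~ ses + (1 | school)))   # analyse each data set
testEstimates(fits)                                           # pool results (Rubin's rules)
```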


Author(s): Henrik Baktoft, Karl Ø. Gjelland, Finn Økland, Jennifer S. Rehage, Jonathan R. Rodemann, ...

Abstract The R package yaps was introduced in 2017 as a transparent, open-source alternative to closed-source, manufacturer-provided solutions for estimating positions of fish (and other aquatic animals) tagged with acoustic transmitters. Although yaps is open source and transparent, the process from raw detections to final tracks has proved challenging for many potential users, effectively preventing most of them from accessing the benefits of using yaps. In particular, the crucial step of synchronizing the hydrophone arrays has proved to be an obstacle. To make yaps more approachable to the wider fish-tracking community, we have developed and added user-friendly functions that assist users in the complex process of synchronizing the data. Here, we introduce these functions and a six-step protocol that provides an example workflow users can adapt as a template for applying yaps to their own data. Using example data collected by an array of Vemco VR2 hydrophones, the protocol walks the user through the entire process from raw data to final tracks. Example data sets and complete code for reproducing the results are provided.
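A condensed sketch of such a workflow (synchronisation first, then track estimation) is given below. Function and argument names follow our reading of the yaps documentation and should be checked against the manual of the installed version; all input objects are placeholders.

```r
# Condensed sketch of a yaps workflow: synchronise the hydrophone array, then
# estimate a track. Input objects (my_sync_data, my_detections, my_hydros,
# my_toa_matrix) are placeholders; check argument names against the yaps manual.
library(yaps)

# steps 1-3: prepare synchronisation input, fit and apply the sync model
inp_sync   <- getInpSync(sync_dat = my_sync_data, max_epo_diff = 120,
                         min_hydros = 2, time_keeper_idx = 1,
                         fixed_hydros_idx = 1:4, n_offset_day = 2, n_ss_day = 2)
sync_model <- getSyncModel(inp_sync)
synced     <- applySync(toa = my_detections, hydros = my_hydros,
                        sync_model = sync_model)

# steps 4-6: assemble the model input from the synchronised time-of-arrival
# matrix and run the position estimation
inp   <- getInp(hydros = my_hydros, toa = my_toa_matrix, E_dist = "Mixture",
                n_ss = 2, pingType = "sbi", sdInits = 1)
track <- runYaps(inp, silent = TRUE)
```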


2021, pp. 1250-1258
Author(s): Yilin Wu, Huei-Chung Huang, Li-Xuan Qin

PURPOSE: Accurate assessment of a molecular classifier that guides patient care is of paramount importance in precision oncology. Recent years have seen an increasing use of external validation for such assessment. However, little is known about how it is affected by ubiquitous unwanted variations in test data because of disparate experimental handling and by the use of data normalization for alleviating such variations.
METHODS: In this paper, we studied these issues using two microarray data sets for the same set of tumor samples and additional data simulated by resampling under various levels of signal-to-noise ratio and different designs for array-to-sample allocation.
RESULTS: We showed that (1) unwanted variations can lead to biased classifier assessment and (2) data normalization mitigates the bias to varying extents depending on the specific method used. In particular, frozen normalization methods for test data outperform their conventional forms in terms of both reducing the bias in accuracy estimation and increasing robustness to handling effects. We make available our benchmarking tool as an R package on GitHub for performing such evaluation on additional methods for normalization and classification.
CONCLUSION: Our findings thus highlight the importance of proper test-data normalization for valid assessment by external validation and call for caution on the choice of normalization method for molecular classifier development.
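The contrast between conventional and "frozen" normalization of test data can be illustrated with a small sketch of frozen quantile normalization, assuming a genes-by-samples training matrix. This is a generic illustration of the concept, not the authors' benchmarking package.

```r
# Illustration of "frozen" quantile normalization: the reference distribution is
# estimated once from the training set and then applied, unchanged, to each test
# sample one at a time, so test-set handling effects cannot leak into the reference.
frozen_reference <- function(train) {
  # train: genes x samples matrix; reference = mean of the sorted columns
  rowMeans(apply(train, 2, sort))
}

apply_frozen_qn <- function(x, reference) {
  # x: one test sample; map its ranks onto the frozen reference quantiles
  reference[rank(x, ties.method = "first")]
}

# toy usage
set.seed(1)
train <- matrix(rnorm(5000), nrow = 500)   # 500 genes x 10 training arrays
test  <- rnorm(500, mean = 0.3)            # one test array with a handling shift
ref   <- frozen_reference(train)
test_normalized <- apply_frozen_qn(test, ref)
```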


F1000Research, 2016, Vol 4, pp. 1521
Author(s): Charlotte Soneson, Michael I. Love, Mark D. Robinson

High-throughput sequencing of cDNA (RNA-seq) is used extensively to characterize the transcriptome of cells. Many transcriptomic studies aim at comparing either abundance levels or the transcriptome composition between given conditions, and as a first step, the sequencing reads must be used as the basis for abundance quantification of transcriptomic features of interest, such as genes or transcripts. Various quantification approaches have been proposed, ranging from simple counting of reads that overlap given genomic regions to more complex estimation of underlying transcript abundances. In this paper, we show that gene-level abundance estimates and statistical inference offer advantages over transcript-level analyses, in terms of performance and interpretability. We also illustrate that the presence of differential isoform usage can lead to inflated false discovery rates in differential gene expression analyses on simple count matrices but that this can be addressed by incorporating offsets derived from transcript-level abundance estimates. We also show that the problem is relatively minor in several real data sets. Finally, we provide an R package (tximport) to help users integrate transcript-level abundance estimates from common quantification pipelines into count-based statistical inference engines.
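For readers wanting to apply the offset-based correction described above, a brief sketch of a typical tximport-to-DESeq2 pipeline follows; the file paths, sample table, and tx2gene mapping are placeholders.

```r
# Sketch: import transcript-level estimates (e.g. from salmon) at the gene level
# and pass them, with average-transcript-length offsets, to DESeq2.
# 'samples' (a data.frame with a 'run' and 'condition' column) and 'tx2gene'
# (transcript IDs in column 1, gene IDs in column 2) are assumed to exist.
library(tximport)
library(DESeq2)

files <- file.path("quants", samples$run, "quant.sf")
names(files) <- samples$run

txi <- tximport(files, type = "salmon", tx2gene = tx2gene)

# DESeq2 uses txi$counts together with txi$length as offsets, which corrects
# for differential isoform usage across samples
dds <- DESeqDataSetFromTximport(txi, colData = samples, design = ~ condition)
dds <- DESeq(dds)
res <- results(dds)
```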


2021
Author(s): Renato Augusto Ferreira Lima, Andrea Sanchez-Tapia, Sara R. Mortara, Hans Steege, Marinez F. Siqueira

Species records from biological collections are becoming increasingly available online. This unprecedented availability of records has largely supported recent studies in taxonomy, biogeography, macro-ecology, and biodiversity conservation. Biological collections vary in their documentation and notation standards, which have changed through time. For different reasons, neither collections nor data repositories perform the editing, formatting and standardization of the data, leaving these tasks to the final users of the species records (e.g. taxonomists, ecologists and conservationists). These tasks are challenging, particularly when working with millions of records from hundreds of biological collections. To help collection curators and final users perform those tasks, we introduce plantR, an open-source package that provides a comprehensive toolbox to manage species records from biological collections. The package is accompanied by a proposed reproducible workflow to manage this type of data in taxonomy, ecology and biodiversity conservation. It is implemented in R and designed to handle relatively large data sets as fast as possible. Although initially designed to handle plant species records, many plantR features also apply to other groups of organisms, provided that the data structure is similar. The plantR workflow includes tools to (1) download records from different data repositories, (2) standardize typical fields associated with species records, (3) validate the locality, geographical coordinates, taxonomic nomenclature and species identifications, including the retrieval of duplicates across collections, and (4) summarize and export records, including the construction of species checklists with vouchers. Other R packages provide tools for some of the workflow steps described above, but in addition to the new features and resources related to data editing and validation, the greatest strength of plantR is that it provides a comprehensive and user-friendly workflow in a single environment, performing all tasks from data retrieval to export. Thus, plantR can help researchers to better assess data quality and avoid data leakage in a wide variety of studies using species records.
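A sketch of the four-step workflow might look as follows; the function names are taken from the plantR tutorial as we understand it and should be verified against the current package manual, and the species name is purely illustrative.

```r
# Sketch of the four-step plantR workflow (check function names against the manual).
library(plantR)

# (1) download records from a data repository (GBIF shown; species name illustrative)
occs <- rgbif2(species = "Euterpe edulis")

# (2) standardize typical fields (collection codes, collectors, localities,
#     coordinates, taxonomy)
occs <- formatDwc(gbif_data = occs)
occs <- formatOcc(occs)
occs <- formatLoc(occs)
occs <- formatCoord(occs)
occs <- formatTax(occs)

# (3) validate localities, coordinates, taxonomy and identifications; flag duplicates
occs <- validateLoc(occs)
occs <- validateCoord(occs)
occs <- validateTax(occs)
occs <- validateDup(occs)

# (4) summarize and export, including a species checklist with vouchers
summaryData(occs)
checklist <- checkList(occs)
```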


2021, Vol 3 (2)
Author(s): Robin H van der Weide, Teun van den Brand, Judith H I Haarhuis, Hans Teunissen, Benjamin D Rowland, ...

Abstract Conformation-capture approaches such as Hi-C can elucidate chromosome structure at a genome-wide scale. Hi-C datasets are large and require specialised software. Here, we present GENOVA: a user-friendly software package to analyse and visualise chromosome conformation capture (3C) data. GENOVA is an R package that includes the most common Hi-C analyses, such as compartment and insulation score analysis. It can create annotated heatmaps to visualise the contact frequency at a specific locus and aggregate Hi-C signal over user-specified genomic regions, such as those derived from ChIP-seq data. Finally, our package supports output from the major mapping pipelines. We demonstrate the capabilities of GENOVA by analysing Hi-C data from HAP1 cell lines in which the cohesin subunits SA1 and SA2 were knocked out. We find that ΔSA1 cells gain intra-TAD interactions and increase compartmentalisation, whereas ΔSA2 cells have longer loops and a less compartmentalised genome. These results suggest that cohesin-SA1 forms longer loops, while cohesin-SA2 plays a role in forming and maintaining intra-TAD interactions. Our data support the model that 3D genome structure arises from the counter-balancing of loop formation on the one hand and compartmentalisation on the other. By differentially controlling loops, cohesin-SA1 and cohesin-SA2 therefore also affect nuclear compartmentalisation. We show that GENOVA is an easy-to-use R package that allows researchers to explore Hi-C data in great detail.
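A condensed sketch of a GENOVA session covering the analyses mentioned above is shown below; function names and arguments follow our reading of the GENOVA documentation and should be checked against the package manual, and all file paths and input objects are illustrative.

```r
# Sketch of a GENOVA session: load two Hi-C samples, compute compartment and
# insulation scores, and aggregate signal over a user-supplied set of loops.
library(GENOVA)

# load Hi-C contacts from a mapping pipeline (.hic shown; paths illustrative)
wt   <- load_contacts(signal_path = "WT.hic",   resolution = 20e3, sample_name = "WT")
dSA1 <- load_contacts(signal_path = "dSA1.hic", resolution = 20e3, sample_name = "SA1-KO")

# compartment and insulation score analyses
cs <- compartment_score(list(wt, dSA1))
is <- insulation_score(list(wt, dSA1), window = 25)

# aggregate Hi-C signal over user-specified regions (e.g. loops from a BEDPE file,
# or peaks derived from ChIP-seq data); 'loops' is an assumed input object
apa <- APA(list(wt, dSA1), bedpe = loops)
visualise(apa)
```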


2019, Vol 47 (18), pp. e111-e111
Author(s): Changlin Wan, Wennan Chang, Yu Zhang, Fenil Shah, Xiaoyu Lu, ...

Abstract A key challenge in modeling single-cell RNA-seq data is to capture the diversity of gene expression states regulated by different transcriptional regulatory inputs across individual cells, which is further complicated by the large number of observed zero and low expressions. We developed a left-truncated mixture Gaussian (LTMG) model derived from the kinetic relationships between transcriptional regulatory inputs, mRNA metabolism and abundance in single cells. LTMG infers expression multi-modality across single cells, while dropouts and low expressions are treated as left-truncated. We demonstrated that LTMG achieves significantly better goodness of fit on a large number of scRNA-seq data sets compared with three other state-of-the-art models. Our biological assumption about low non-zero expressions, the rationale for the multi-modality setting, and the capability of LTMG to extract expression states specific to cell types or functions are validated on independent experimental data sets. A differential gene expression test and a co-regulation module identification method are further developed. We experimentally validated that our differential expression test has higher sensitivity and specificity compared with five other popular methods. The co-regulation analysis is capable of retrieving gene co-regulation modules corresponding to perturbed transcriptional regulations. A user-friendly R package with all the analysis power is available at https://github.com/zy26/LTMGSCA.
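The treatment of dropouts and low expressions can be illustrated with a toy log-likelihood for a single left-truncated Gaussian component (a conceptual sketch, not the LTMGSCA implementation): values at or below a threshold Zcut contribute the probability mass below Zcut rather than a density value.

```r
# Toy illustration of the left-truncation idea in LTMG (not the LTMGSCA code):
# expression values at or below a threshold 'zcut' are not taken at face value;
# instead they contribute the probability mass below zcut to the likelihood.
log_lik_truncated_normal <- function(x, mu, sigma, zcut) {
  below <- x <= zcut
  ll <- numeric(length(x))
  # reliably observed values: usual Gaussian log-density
  ll[!below] <- dnorm(x[!below], mean = mu, sd = sigma, log = TRUE)
  # dropouts / low values: cumulative mass below the truncation point
  ll[below]  <- pnorm(zcut, mean = mu, sd = sigma, log.p = TRUE)
  sum(ll)
}

# toy usage: log-expression of one gene across cells, with many zeros
x <- c(rep(0, 40), rnorm(60, mean = 2, sd = 0.5))
log_lik_truncated_normal(x, mu = 2, sigma = 0.5, zcut = 0.1)
```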


Author(s): Belen Jimenez Mena, Hugo Flávio, Romina Henriques, Alice Manuzzi, Miguel Ramos, ...

Targeted sequencing is an increasingly popular Next Generation Sequencing (NGS) approach for studying populations by focusing sequencing efforts on specific parts of the genome of a species of interest. Methodologies and tools for designing targeted baits are scarce but in high demand. Here, we present specific guidelines and considerations for designing capture sequencing experiments for population genetics, covering both neutral genomic regions and regions subject to selection. We describe the bait design process for three diverse fish species: Atlantic salmon, Atlantic cod and tiger shark, which was carried out in our research group, and provide an evaluation of the performance of our approach across both historical and modern samples. The workflow used for designing these three bait sets has been implemented in the R package supeRbaits, which encompasses our considerations and guidelines for bait design to benefit researchers and practitioners. The supeRbaits R package is user-friendly and versatile; it is written in C++ and implemented in R. supeRbaits and its manual are available from GitHub: https://github.com/BelenJM/supeRbaits
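As a generic illustration of one step in bait design (tiling candidate baits across target regions), a small base-R sketch follows; this is not the supeRbaits API, and the bait length and overlap are arbitrary choices.

```r
# Generic illustration of the tiling step in bait design (not the supeRbaits API):
# cover each target region with candidate baits of fixed length and a chosen overlap.
tile_baits <- function(regions, bait_size = 120, overlap = 60) {
  # regions: data.frame with columns chrom, start, end (1-based, inclusive)
  step <- bait_size - overlap
  do.call(rbind, lapply(seq_len(nrow(regions)), function(i) {
    starts <- seq(regions$start[i],
                  max(regions$start[i], regions$end[i] - bait_size + 1),
                  by = step)
    data.frame(chrom = regions$chrom[i], start = starts, end = starts + bait_size - 1)
  }))
}

# toy usage: two target regions (e.g. around candidate loci under selection)
targets <- data.frame(chrom = c("chr1", "chr2"),
                      start = c(1000, 5000),
                      end   = c(1400, 5200))
head(tile_baits(targets))
```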


2016
Author(s): Mathieu Gautier, Alexander Klassmann, Renaud Vitalis

Abstract Identifying genomic regions with unusually high local haplotype homozygosity represents a powerful strategy to characterize candidate genes responding to natural or artificial positive selection. To that end, statistics measuring the extent of haplotype homozygosity within (e.g., EHH, iHS) and between (Rsb or XP-EHH) populations have been proposed in the literature. The rehh package for R was previously developed to facilitate genome-wide scans for selection based on the analysis of long-range haplotypes. However, its performance was not sufficient to cope with the growing size of available data sets. Here we propose a major upgrade of the rehh package, which includes improved processing of the input files, a faster algorithm to enumerate haplotypes, and multi-threading. As illustrated by the analysis of large human haplotype data sets, these improvements decrease the computation time by more than an order of magnitude. This new version of rehh will thus allow iHS-, Rsb- or XP-EHH-based scans to be performed on large data sets. The package rehh 2.0 is available from the CRAN repository (http://cran.r-project.org/web/packages/rehh/index.html) together with help files and a detailed manual.
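A sketch of a typical rehh scan corresponding to the statistics mentioned above follows; file names are placeholders, and the calls should be checked against the rehh manual for the installed version.

```r
# Sketch of a genome-wide rehh scan: within-population (iHS) and
# between-population (Rsb, XP-EHH) statistics for one chromosome.
library(rehh)

# read phased haplotypes and the SNP map for each population (placeholder files)
hh_pop1 <- data2haplohh(hap_file = "pop1_chr1.hap", map_file = "chr1.map")
hh_pop2 <- data2haplohh(hap_file = "pop2_chr1.hap", map_file = "chr1.map")

# per-site integrated EHH statistics; this is the step accelerated in rehh 2.0
scan1 <- scan_hh(hh_pop1)
scan2 <- scan_hh(hh_pop2)

# within-population statistic (iHS) and between-population statistics (Rsb, XP-EHH)
ihs   <- ihh2ihs(scan1)
rsb   <- ines2rsb(scan_pop1 = scan1, scan_pop2 = scan2)
xpehh <- ies2xpehh(scan_pop1 = scan1, scan_pop2 = scan2)
```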


Geophysics, 2012, Vol 77 (2), pp. V21-V29
Author(s): Ismael Vera Rodriguez, David Bonar, Mauricio Sacchi

Noise contamination is an important problem in microseismic data processing because of the low magnitude of the seismic events induced during fluid injection. In this study, a noncoherent noise attenuation technique based on a constrained time-frequency transform is presented. When applied to 1C data, the transform corresponds to a sparse representation of the microseismic signal in terms of a dictionary of complex Ricker wavelets. The use of complex wavelets has the advantage that signals with arbitrary phase can be represented with enhanced sparsity. A synthetic example illustrates the superior performance of the sparse constraint for denoising objectives when compared to standard least-squares regularization. Because the arrival time and frequency content of any wavefront are equivalent in the three components of a single receiver, the sparse transform is extended to 3C data by requiring the three components to share the same sparsity pattern in the time-frequency plane. Application of the 3C sparse transform to synthetic and real microseismic data sets demonstrates the advantages of this technique when the denoised results are compared against the original and low-pass filtered versions of the noisy data. Furthermore, a comparison of hodograms between original, low-pass, and denoised traces shows that the denoising process preserves the phase and relative amplitude information present in the input data. The benefits of the 3C transform are highlighted particularly in cases where the wave arrivals are measured in the three components of a receiver but are visible in only two components because of the prevailing signal-to-noise ratio.
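As a toy illustration of the sparse-constraint idea (kept in R for consistency with the rest of this listing, using real rather than complex Ricker wavelets, and not the authors' implementation), the sketch below denoises a synthetic trace by iterative soft-thresholding over a Ricker-wavelet dictionary, i.e. least squares with an L1 penalty on the dictionary coefficients.

```r
# Toy sketch: sparse representation of a noisy trace in a dictionary of
# time-shifted Ricker wavelets, solved with ISTA (iterative soft-thresholding).
ricker <- function(t, f) (1 - 2 * (pi * f * t)^2) * exp(-(pi * f * t)^2)

n  <- 256
dt <- 0.002
t  <- (seq_len(n) - 1) * dt

# dictionary: Ricker wavelets at several centre frequencies and all time shifts
freqs <- c(20, 40, 60)
D <- do.call(cbind, lapply(freqs, function(f)
  sapply(seq_len(n), function(k) ricker(t - t[k], f))))
D <- sweep(D, 2, sqrt(colSums(D^2)), "/")   # unit-norm atoms

# noisy synthetic trace: one 40 Hz arrival plus random noise
set.seed(1)
y <- ricker(t - 0.25, 40) + rnorm(n, sd = 0.2)

# ISTA: x <- soft(x + eta * t(D) %*% (y - D %*% x), eta * lambda)
soft   <- function(z, thr) sign(z) * pmax(abs(z) - thr, 0)
eta    <- 1 / norm(D, type = "2")^2         # step size from the spectral norm
lambda <- 0.1
x <- rep(0, ncol(D))
for (i in 1:200) x <- soft(x + eta * drop(crossprod(D, y - D %*% x)), eta * lambda)

y_denoised <- drop(D %*% x)                 # sparse reconstruction of the signal
```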

