High Throughput Data Analysis
Recently Published Documents

TOTAL DOCUMENTS: 33 (five years: 13)
H-INDEX: 8 (five years: 2)

Gene ◽  
2021 ◽  
pp. 146111
Author(s):  
Erfan Sharifi ◽  
Niusha Khazaei ◽  
Nicholas W. Kieran ◽  
Sahel Jahangiri Esfahani ◽  
Abdulshakour Mohammadnia ◽  
...  

2021 ◽  
Author(s):  
Zuguang Gu ◽  
Daniel Huebschmann

Consensus partitioning is an unsupervised method widely used in high-throughput data analysis to reveal subgroups and to assess the stability of the classification. However, standard consensus partitioning procedures perform poorly when many stable subgroups must be identified, for two main reasons: (1) subgroups with small differences are difficult to separate when they are detected simultaneously with subgroups with large differences, and (2) the stability of the classification generally decreases as the number of subgroups increases. In this work, we propose a new strategy that addresses both issues by applying consensus partitioning in a hierarchical procedure. We demonstrate that hierarchical consensus partitioning can efficiently reveal more subgroups, and we tested its performance on a DNA methylation dataset containing a large number of subgroups. Hierarchical consensus partitioning is implemented in the R package cola, with comprehensive functionality for analysis and visualization. The analysis can also be automated with as few as two lines of code, which generate a detailed HTML report containing the complete analysis.
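As a rough illustration of the automated workflow mentioned above, a minimal R sketch might look like the following. Function names are taken from the cola Bioconductor documentation and should be checked against the installed version; `mat` is a hypothetical feature-by-sample matrix supplied by the user.

```r
## Minimal sketch of an automated hierarchical consensus partitioning run (assumed cola API).
## `mat`: numeric matrix, rows = features (e.g. methylation probes), columns = samples.
library(cola)

mat <- adjust_matrix(mat)                        # optional clean-up of the input matrix
rh  <- hierarchical_partition(mat)               # hierarchical consensus partitioning
cola_report(rh, output_dir = "cola_hc_report")   # detailed HTML report of the full analysis
```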


Metabolites ◽  
2021 ◽  
Vol 11 (7) ◽  
pp. 456
Author(s):  
Pejman Salahshouri ◽  
Modjtaba Emadi-Baygi ◽  
Mahdi Jalili ◽  
Faiz M. Khan ◽  
Olaf Wolkenhauer ◽  
...  

The human gut microbiota plays a dual key role in maintaining human health or inducing disorders, for example, obesity, type 2 diabetes, and cancers such as colorectal cancer (CRC). High-throughput data analyses, such as metagenomics and metabolomics, have shown the diverse effects of alterations in dynamic bacterial populations on the initiation and progression of colorectal cancer. However, the microbiome and human cells constantly influence each other, so it is not appropriate to study them independently. Genome-scale metabolic modeling is a well-established mathematical framework that describes the dynamic behavior of these two axes at the system level. In this study, we created community microbiome models of three conditions during colorectal cancer progression, including carcinoma, adenoma, and healthy status, and showed how changes in the microbial population influence intestinal secretions. Overall, our findings showed that alterations in the gut microbiome might provoke mutations and transform adenomas into carcinomas. These alterations include the secretion of mutagenic metabolites such as H2S, NO compounds, spermidine, and TMA, as well as the reduction of butyrate. Furthermore, we found that the colorectal cancer microbiome can promote inflammation, cancer progression (e.g., angiogenesis), and cancer prevention (e.g., apoptosis) by increasing or decreasing certain metabolites such as histamine, glutamine, and pyruvate. Thus, modulating the gut microbiome could be a promising strategy for the prevention and treatment of CRC.


2020 ◽  
Author(s):  
Alexandru Al. Ecovoiu ◽  
Iulian Cristian Ghita ◽  
David Ioan Mihail Chifiriuc ◽  
Iulian Constantin Ghionoiu ◽  
Andrei Mihai Ciuca ◽  
...  

Transposon annotation is a very dynamic field of genomics, and various tools designed to support this bioinformatics endeavor have been reported. Genome ARTIST (GA) software was initially developed for mapping artificial transposons mobilized during insertional mutagenesis projects. Now, the new functions of GA_v2 qualify it as an effective companion for mapping and annotation of class II natural transposons in assembled genomes, contigs, or sequencing reads.

Tabular export of mapping and annotation data for subsequent high-throughput data analysis, export of a list of flanking sequences around either the insertion coordinates or the target site duplications (TSDs), and generation of a consensus sequence for the respective flanking sequences are all key assets of GA_v2. Additionally, we developed two accompanying short scripts that enable the user to annotate transposons present in assembled genomes and to use various annotations offered by FlyBase for the Drosophila melanogaster genome.

Herein, we present the applicability of GA_v2 for a preliminary annotation of the class II transposon P-element in the genome of D. melanogaster strain Horezu, Romania, which was sequenced with Nanopore technology in our laboratory. Our results indicate that GA_v2 is a reliable tool to be integrated into pipelines designed to perform transposon annotation in newly sequenced genomes.

GA_v2 is open-source software compatible with Ubuntu, Mac OS, and Windows, and is available at https://github.com/genomeartist/genomeartist and at www.genomeartist.ro.
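For readers unfamiliar with the flanking-sequence step described above, the general idea can be sketched in R as follows. This is not GA_v2 itself but a hedged, generic illustration using Biostrings, with hypothetical file names and a hypothetical insertion table containing `contig` and `position` columns.

```r
## Hedged sketch: extract fixed-width windows around insertion coordinates and
## build a consensus of the flanking sequences. File names and columns are hypothetical.
library(Biostrings)

genome     <- readDNAStringSet("assembly.fasta")            # assembled genome or contigs
insertions <- read.table("insertions.tsv", header = TRUE)   # columns: contig, position
flank      <- 50                                             # bases on each side of the insertion

flanks <- DNAStringSet(lapply(seq_len(nrow(insertions)), function(i) {
  ctg <- as.character(insertions$contig[i])
  pos <- insertions$position[i]
  subseq(genome[[ctg]], start = pos - flank, end = pos + flank)  # assumes interior coordinates
}))

consensusString(flanks)   # consensus of the equal-width flanking windows
```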


Medicine ◽  
2020 ◽  
Vol 99 (20) ◽  
pp. e20340
Author(s):  
Wei Zhou ◽  
Jiarui Wu ◽  
Xinkui Liu ◽  
Mengwei Ni ◽  
Ziqi Meng ◽  
...  

2020 ◽  
Author(s):  
Erfan Sharifi ◽  
Niusha Khazaei ◽  
Nicholas Kieran ◽  
Sahel Jahangiri Esfahani ◽  
Abdulshakour Mohammadnia ◽  
...  

2019 ◽  
Author(s):  
David C. Handler ◽  
Paul A. Haynes

The multiple testing problem is a well-known statistical stumbling block in high-throughput data analysis, where large-scale repetition of statistical tests introduces unwanted noise into the results. While approaches exist to overcome the multiple testing problem, these methods focus on theoretical statistical adjustments rather than incorporating experimentally derived measures to ensure appropriately tailored analysis parameters. Here, we introduce a method for estimating inter-replicate variability in reference samples of a quantitative proteomics experiment using permutation analysis, which can act as a modulator of multiple testing corrections such as the Benjamini-Hochberg ordered Q value test. We refer to this as a 'same-same' analysis, since the method uses six biological replicates of the reference sample and determines, through non-redundant triplet pairwise comparisons, the level of quantitative noise inherent in the system. The method can be used to produce an experiment-specific Q value cutoff that achieves a specified false discovery rate at the quantitation level, such as 1%. The same-same method is applicable to any experimental set that incorporates six replicates of a reference sample. To facilitate access to this approach, we have developed a same-same analysis R module that is freely available and ready to use via the internet.
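The triplet-comparison idea can be sketched conceptually as follows. This is not the authors' released R module; it is a hedged illustration assuming a hypothetical matrix `ref` of protein intensities whose six columns are the reference replicates.

```r
## Conceptual sketch of a 'same-same' noise estimate: split six reference replicates
## into every non-redundant 3-vs-3 comparison and collect protein-wise p-values.
## `ref`: hypothetical numeric matrix, rows = proteins, columns = 6 reference replicates.

splits <- combn(6, 3)                    # all 20 triplets of replicate indices
keep   <- splits[, splits[1, ] == 1]     # fix replicate 1 in group A -> 10 non-redundant splits

null_p <- apply(keep, 2, function(grpA) {
  grpB <- setdiff(1:6, grpA)
  apply(ref, 1, function(x) t.test(x[grpA], x[grpB])$p.value)
})

## Benjamini-Hochberg-adjusted q-values pooled over the same-same comparisons.
## One simple reading of an experiment-specific cutoff: the q-value below which
## only ~1% of these same-same (noise-only) tests would be called significant.
null_q <- p.adjust(as.vector(null_p), method = "BH")
quantile(null_q, 0.01)
```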


Toxics ◽  
2019 ◽  
Vol 7 (3) ◽  
pp. 41 ◽  
Author(s):  
Jingchuan Xue ◽  
Yunjia Lai ◽  
Chih-Wei Liu ◽  
Hongyu Ru

The proposal of the “exposome” concept represents a shift in the research paradigm for studying exposure-disease relationships, from an isolated and partial approach to a systematic and agnostic one. Nevertheless, implementing the exposome faces a variety of challenges, including measurement techniques and data analysis. Here we focus on the chemical exposome, which refers to the mixtures of chemical pollutants people are exposed to from the embryo stage onwards. We review current chemical exposome measurement approaches, with a focus on those based on mass spectrometry. We further explore strategies for implementing the concept of the chemical exposome and discuss the available chemical exposome studies. Early progress in chemical exposome research is outlined, and major challenges are highlighted. In conclusion, efforts toward the chemical exposome have only uncovered the tip of the iceberg, and further advances in measurement techniques, computational tools, high-throughput data analysis, and standardization may enable more exciting discoveries concerning the role of the exposome in human health and disease.

