Spatial Meta-transcriptomes of human and murine intestines

2021 ◽  
Author(s):  
Lin Lyu ◽  
Ru Feng ◽  
Xue Li ◽  
Xiaofei Yu ◽  
GuoQiang Chen ◽  
...  

We developed an analysis pipeline that extracts microbial sequences from Spatial Transcriptomic data and assigns taxonomic labels to them, generating a spatial microbial abundance matrix alongside the default host expression matrix and enabling simultaneous analysis of host expression and microbial distribution. We applied the pipeline to both human and murine intestinal datasets and validated the spatial microbial abundance information with alternative assays. Finally, we present a few biological insights that can be gained from this novel type of data. In summary, this proof-of-concept work demonstrates the feasibility of Spatial Meta-transcriptomic analysis and paves the way for future experimental optimization.
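To make the idea concrete, the following is a minimal sketch of how such a spot-by-taxon abundance matrix could be assembled, assuming host-unmapped reads that carry Visium-style spot barcodes (CB tag) and per-read calls from a Kraken2-style classifier; it illustrates the general approach rather than the authors' actual pipeline, and the file names are hypothetical.

```python
# Minimal sketch: build a spot x taxon microbial abundance matrix from
# (1) host-unmapped reads in a spatial transcriptomics BAM that carry spot
#     barcodes in the CB tag (Visium-style convention, assumed), and
# (2) per-read taxonomic calls in Kraken2-style classifier output.
# Illustrative only; not the authors' pipeline.
import pysam
import pandas as pd

def read_barcodes_of_unmapped(bam_path):
    """Map read name -> spot barcode for host-unmapped reads."""
    barcodes = {}
    with pysam.AlignmentFile(bam_path, "rb") as bam:
        for read in bam.fetch(until_eof=True):
            if read.is_unmapped and read.has_tag("CB"):
                barcodes[read.query_name] = read.get_tag("CB")
    return barcodes

def read_taxon_calls(kraken_path):
    """Map read name -> taxon ID from Kraken2-style output (classified lines only)."""
    calls = {}
    with open(kraken_path) as fh:
        for line in fh:
            status, read_name, taxid = line.rstrip("\n").split("\t")[:3]
            if status == "C":
                calls[read_name] = taxid
    return calls

def spatial_abundance_matrix(bam_path, kraken_path):
    """Spot x taxon count matrix, analogous to the host expression matrix."""
    barcodes = read_barcodes_of_unmapped(bam_path)
    calls = read_taxon_calls(kraken_path)
    records = [(barcodes[r], t) for r, t in calls.items() if r in barcodes]
    df = pd.DataFrame(records, columns=["spot", "taxon"])
    return pd.crosstab(df["spot"], df["taxon"])

# Example (hypothetical file names):
# matrix = spatial_abundance_matrix("possorted_genome_bam.bam", "kraken2_output.tsv")
```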

2019 ◽  
Vol 20 (3) ◽  
pp. 720
Author(s):  
Nídia de Sousa ◽  
Gustavo Rodriguez-Esteban ◽  
Ivan Colagè ◽  
Paolo D’Ambrosio ◽  
Jack van Loon ◽  
...  

The possibility that humans might live beyond Earth, on another planet, has attracted the attention of numerous scientists around the world. One of the greatest difficulties is that humans cannot live in an extraterrestrial environment without proper equipment. In addition, the consequences of chronic gravity alterations on the human body are not known. Here, we used planarians as a model system to test how gravity fluctuations could affect complex organisms. Planarians are an ideal system, since they can regenerate any missing body part and continuously renew their tissues. We performed a transcriptomic analysis of animals subjected to simulated microgravity (Random Positioning Machine, RPM) (s-µg) and hypergravity (8 g), and observed that the transcriptional levels of several genes are affected. Surprisingly, we found the largest differences in the s-µg group. The results of the transcriptomic analysis were validated, demonstrating that our transcriptomic data are reliable. We also found that, in a sensitive context such as Hippo signaling silencing, gravity fluctuations potentiate the increase in cell proliferation. Our data reveal that changes in gravity severely affect genetic transcription and that these alterations potentiate molecular disorders that could promote the development of multiple diseases, such as cancer.
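For illustration, here is a minimal sketch of how genes responding to altered gravity might be flagged from a genes-by-samples count table, assuming normalized counts, simple per-gene Welch tests, and hypothetical column names; it is not the authors' statistical workflow.

```python
# Minimal sketch: flag genes whose expression differs between simulated
# microgravity (s-ug) and 1 g control samples, given a genes x samples
# table of normalized counts. Illustrative only.
import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.stats.multitest import multipletests

def differential_genes(counts, smug_cols, control_cols, alpha=0.05):
    """Return genes with FDR < alpha plus their log2 fold change."""
    log2fc, pvals = [], []
    for _, row in counts.iterrows():
        a = row[smug_cols].to_numpy(dtype=float)
        b = row[control_cols].to_numpy(dtype=float)
        log2fc.append(np.log2((a.mean() + 1.0) / (b.mean() + 1.0)))
        pvals.append(stats.ttest_ind(a, b, equal_var=False).pvalue)  # Welch test
    reject, qvals, _, _ = multipletests(pvals, alpha=alpha, method="fdr_bh")
    out = pd.DataFrame({"log2FC": log2fc, "qvalue": qvals}, index=counts.index)
    return out[reject]

# Example (hypothetical column names):
# hits = differential_genes(counts, ["sug_1", "sug_2", "sug_3"], ["ctl_1", "ctl_2", "ctl_3"])
```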


2020 ◽  
Author(s):  
Timothy J. Hackmann

Microbes can metabolize more chemical compounds than any other group of organisms. As a result, their metabolism is of interest to investigators across biology. Despite this interest, information on the metabolism of specific microbes is hard to access. The information is buried in the text of books and journals, and investigators have no easy way to extract it. Here we investigate whether neural networks can extract this information and predict metabolic traits. For proof of concept, we predicted two traits: whether microbes carry out one type of metabolism (fermentation) or produce one metabolite (acetate). We collected written descriptions of 7,021 species of bacteria and archaea from Bergey’s Manual. We read the descriptions and manually identified (labeled) which species were fermentative or produced acetate. We then trained neural networks to predict these labels. In total, we identified 2,364 species as fermentative and 1,009 species as also producing acetate. Neural networks could predict which species were fermentative with 97.3% accuracy. Accuracy was even higher (98.6%) when predicting species that also produce acetate. We used these predictions to draw phylogenetic trees of species with these traits. The resulting trees were close to the actual trees (drawn using the labels). Previous counts of fermentative species are 4-fold lower than our own; for acetate-producing species, they are 100-fold lower. This undercounting confirms the past difficulty of extracting metabolic traits from text. Our approach with neural networks can extract this information efficiently and accurately. It paves the way for putting more metabolic traits into databases, giving investigators easy access to this information.
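As an illustration of the general approach, here is a minimal sketch of a text classifier for a binary metabolic trait; the TF-IDF features, the small feed-forward network, and the input file species_descriptions.csv are assumptions, not the study's actual architecture or data format.

```python
# Minimal sketch: predict a binary metabolic trait (e.g. "fermentative")
# from written species descriptions. Assumes a CSV with "description"
# and "fermentative" (0/1) columns; TF-IDF plus a small feed-forward
# network stands in for whatever architecture the study used.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

data = pd.read_csv("species_descriptions.csv")        # hypothetical file
X_text, y = data["description"], data["fermentative"]

vectorizer = TfidfVectorizer(max_features=20000, ngram_range=(1, 2))
X = vectorizer.fit_transform(X_text)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y)

clf = MLPClassifier(hidden_layer_sizes=(128,), max_iter=300, random_state=0)
clf.fit(X_train, y_train)

print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```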


2017 ◽  
Author(s):  
Derrick J. Thrasher ◽  
Bronwyn G. Butcher ◽  
Leonardo Campagna ◽  
Michael S. Webster ◽  
Irby J. Lovette

Information on genetic relationships among individuals is essential to many studies of the behavior and ecology of wild organisms. Parentage and relatedness assays based on large numbers of SNP loci hold substantial advantages over the microsatellite markers traditionally used for these purposes. We present a double-digest restriction site-associated DNA sequencing (ddRAD-seq) analysis pipeline that simultaneously achieves the SNP discovery and genotyping steps and is optimized to return a statistically powerful set of SNP markers (typically 150-600 after stringent filtering) from large numbers of individuals (up to 240 per run). We explore the tradeoffs inherent in this approach through a set of experiments in a species with a complex social system, the variegated fairy-wren (Malurus lamberti), and further validate it in a phylogenetically broad set of other bird species. Through direct comparisons with a parallel dataset from a robust panel of highly variable microsatellite markers, we show that this ddRAD-seq approach yields substantially improved power to discriminate among potential relatives and considerably more precise estimates of relatedness coefficients. The pipeline is designed to be universally applicable to all bird species (and, with minor modifications, to many other taxa), to be cost- and time-efficient, and to be replicable across independent runs, so that genotype data from different study periods can be combined and analyzed as field samples accumulate.
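For readers unfamiliar with SNP-based relatedness, the sketch below computes one simple moment estimator from a 0/1/2 genotype matrix; it is illustrative only and is not the estimator or software used in the study, which also handles missing data and filtering.

```python
# Minimal sketch: a simple pairwise relatedness estimate from a biallelic
# SNP genotype matrix (individuals x SNPs, coded as 0/1/2 copies of the
# alternate allele). Standard GRM-style moment estimator, for illustration.
import numpy as np

def relatedness_matrix(genotypes):
    """genotypes: (n_individuals, n_snps) array of 0/1/2 calls."""
    g = np.asarray(genotypes, dtype=float)
    p = g.mean(axis=0) / 2.0                    # alternate allele frequency per SNP
    keep = (p > 0.0) & (p < 1.0)                # drop monomorphic SNPs
    g, p = g[:, keep], p[keep]
    z = (g - 2.0 * p) / np.sqrt(2.0 * p * (1.0 - p))   # standardized genotypes
    return z @ z.T / z.shape[1]                 # n x n relatedness estimates

# Full sibs should average ~0.5 and unrelated pairs ~0.0:
# r = relatedness_matrix(geno)   # geno from, e.g., a filtered VCF
```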


2011 ◽  
Vol 2 (1) ◽  
pp. 171-197
Author(s):  
Andrew Gargett

We propose a novel dual-processing model of linguistic routinisation, specifically of formulaic expressions (from relatively fixed idioms through to looser collocational phenomena). The model is formalised using the Dynamic Syntax (DS) formal account of language processing, whereby we make a specific extension to the core DS lexical architecture to capture the dynamics of linguistic routinisation. This extension is inspired by work within cognitive science more broadly. DS has a range of attractive modelling features, such as full incrementality, as well as recent accounts of how resources of the core grammar can model a range of dialogue phenomena, all of which we deploy in our account. The result is not only a fully incremental model of formulaic language but one that extends straightforwardly to routinised dialogue phenomena. We consider this approach a proof of concept of how interdisciplinary work within cognitive science holds out the promise of meeting challenges faced by modellers of dialogue and discourse.


2019 ◽  
Author(s):  
Jinbing Bai ◽  
Ileen Jhaney ◽  
Jessica Wells

BACKGROUND: Cloud computing for microbiome data sets can significantly increase working efficiencies and expedite the translation of research findings into clinical practice. The Amazon Web Services (AWS) cloud provides an invaluable option for microbiome data storage, computation, and analysis. OBJECTIVE: The goals of this study were to develop a microbiome data analysis pipeline using the AWS cloud and to conduct a proof-of-concept test for microbiome data storage, processing, and analysis. METHODS: A multidisciplinary team was formed to develop and test a reproducible microbiome data analysis pipeline built on multiple AWS cloud services for storage, computation, and data analysis. The pipeline was tested with two data sets: 19 vaginal microbiome samples and 50 gut microbiome samples. RESULTS: Using AWS features, we developed a microbiome data analysis pipeline that included Amazon Simple Storage Service (S3) for microbiome sequence storage, Linux Elastic Compute Cloud (EC2) instances (ie, servers) for data computation and analysis, and security keys to create and manage encryption for the pipeline. Bioinformatics and statistical tools (ie, Quantitative Insights Into Microbial Ecology 2 and RStudio) were installed within the Linux EC2 instances to run the microbiome statistical analysis. The pipeline was operated through command-line interfaces on the Linux or Mac operating systems. Using this new pipeline, we successfully processed and analyzed the 50 gut microbiome samples within 4 hours at very low cost (a c4.4xlarge EC2 instance costs $0.80 per hour). Gut microbiome findings regarding diversity, taxonomy, and abundance analyses were easily shared within our research team. CONCLUSIONS: Building a microbiome data analysis pipeline with the AWS cloud is feasible. The pipeline is highly reliable, computationally powerful, and cost effective. Our AWS-based microbiome analysis pipeline provides an efficient tool for microbiome data analysis.
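As a rough illustration of the storage-plus-compute pattern described here, the sketch below stages FASTQ files in S3 with boto3 and launches an EC2 instance that would run the analysis on boot; the bucket name, file names, AMI, and key pair are placeholders, and this is not the authors' exact configuration.

```python
# Minimal sketch: stage microbiome FASTQ files in S3 and launch an EC2
# instance that runs the analysis on boot. Bucket name, file names, AMI ID,
# and key pair are placeholders; the general AWS pattern, not the paper's setup.
import boto3

BUCKET = "my-microbiome-bucket"          # hypothetical bucket name

s3 = boto3.client("s3")
for fastq in ["sample01_R1.fastq.gz", "sample01_R2.fastq.gz"]:   # placeholder files
    s3.upload_file(fastq, BUCKET, f"raw/{fastq}")

startup_script = f"""#!/bin/bash
aws s3 sync s3://{BUCKET}/raw /data/raw
# ... run the QIIME 2 / RStudio analysis here, then push results back ...
aws s3 sync /data/results s3://{BUCKET}/results
"""

ec2 = boto3.client("ec2")
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",     # placeholder AMI with the tools installed
    InstanceType="c4.4xlarge",           # instance type cited in the paper
    MinCount=1,
    MaxCount=1,
    KeyName="my-keypair",                # placeholder key pair for SSH access
    UserData=startup_script,             # runs at first boot
)
```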


Author(s):  
Victoria Tidman

The landmark paper discussed in this chapter is ‘Epidural morphine in treatment of pain’, published by Behar et al. in 1979. This small case series from the 1970s was the first to highlight the use of epidural morphine for pain. It consisted of only ten patients, all of whom were administered 2 mg of morphine epidurally. Patients suffering from both acute and chronic pain had a significant reduction in pain within 2–3 minutes, and the effect lasted 6–24 hours. The authors went on to postulate that morphine produced its effect by a direct action on specific opioid receptors in the substantia gelatinosa. Although morphine is rarely used epidurally, this paper paved the way for the use of epidural opioids in many different pain conditions.


RSC Advances ◽  
2015 ◽  
Vol 5 (100) ◽  
pp. 82169-82178 ◽  
Author(s):  
Schaack Béatrice ◽  
Liu Wei ◽  
Thiéry Alain ◽  
Auger Aurélien ◽  
Hochepied Jean-François ◽  
...  

This paper highlights the way in which eukaryotic cell- and bacteria-based biochips are relevant for nanotoxicological risk assessment.


2017 ◽  
Vol 23 (3) ◽  
pp. 351-373
Author(s):  
David Buckingham ◽  
Josh Bongard

In some evolutionary robotics experiments, evolved robots are transferred from simulation to reality, while sensor/motor data flows back from reality to improve the next transferral. We envision a generalization of this approach: a simulation-to-reality pipeline. In this pipeline, increasingly embodied agents flow up through a sequence of increasingly physically realistic simulators, while data flows back down to improve the next transferral between neighboring simulators; physical reality is the last link in this chain. As a first proof of concept, we introduce a two-link chain: a fast yet low-fidelity (lo-fi) simulator hosts minimally embodied agents, which gradually evolve controllers and morphologies to colonize a slow yet high-fidelity (hi-fi) simulator. The agents are thus physically scaffolded. We show here that, given the same computational budget, these physically scaffolded robots reach higher performance in the hi-fi simulator than robots that evolve only in the hi-fi simulator, but only for a sufficiently difficult task. These results suggest that a simulation-to-reality pipeline may strike a good balance between accelerating evolution in simulation and anchoring the results in reality, free the investigator from having to prespecify the robot's morphology, and pave the way to scalable, automated robot-generating systems.
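To make the scaffolding idea concrete, the following is a minimal sketch in which part of the evolutionary budget is spent under a cheap stand-in "lo-fi" fitness function before evolution continues under a costlier "hi-fi" one; the evaluators and genome encoding are placeholders, not the paper's simulators or robot representations.

```python
# Minimal sketch of the two-link "physical scaffolding" idea: spend part of
# the compute budget evolving under a cheap low-fidelity evaluator, then
# continue evolution under the expensive high-fidelity evaluator, seeded
# with the lo-fi survivors. Evaluators and genomes are placeholders.
import random

GENOME_LEN, POP_SIZE = 32, 50

def random_genome():
    return [random.uniform(-1.0, 1.0) for _ in range(GENOME_LEN)]

def mutate(genome, sigma=0.1):
    return [g + random.gauss(0.0, sigma) for g in genome]

def evolve(population, evaluate, generations):
    """Simple truncation-selection loop under a given fitness evaluator."""
    for _ in range(generations):
        scored = sorted(population, key=evaluate, reverse=True)
        parents = scored[: POP_SIZE // 2]
        population = parents + [mutate(random.choice(parents)) for _ in parents]
    return population

# Stand-in evaluators: lo-fi is a cheap surrogate, hi-fi is the target objective.
def lofi_fitness(genome):
    return -sum(g * g for g in genome)

def hifi_fitness(genome):
    return -sum((g - 0.1) ** 2 for g in genome)

# Scaffolded run: most generations in lo-fi, the remainder in hi-fi.
pop = [random_genome() for _ in range(POP_SIZE)]
pop = evolve(pop, lofi_fitness, generations=80)
pop = evolve(pop, hifi_fitness, generations=20)
best = max(pop, key=hifi_fitness)
```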

