An evolutionary recent IFN-IL-6-CEBP axis is linked to monocyte expansion and tuberculosis severity in humans

2019 ◽  
Author(s):  
Murilo Delgobo ◽  
Daniel A. G. B. Mendes ◽  
Edgar Kozlova ◽  
Edroaldo Lummertz Rocha ◽  
Gabriela F. Rodrigues-Luiz ◽  
...  

Monocyte counts are increased during human tuberculosis (TB) but it has not been determined whether Mycobacterium tuberculosis (Mtb) directly regulates myeloid commitment. We demonstrated that exposure to Mtb directs primary human CD34+ cells to differentiate into monocytes/macrophages. In vitro myeloid conversion did not require type I or type II IFN signaling. In contrast, Mtb enhanced IL-6 responses by CD34+ cell cultures and IL-6R neutralization inhibited myeloid differentiation and decreased mycobacterial growth in vitro. Integrated systems biology analysis of transcriptomic, proteomic and genomic data of large data sets of healthy controls and TB patients established the existence of a myeloid IL-6/IL6R/CEBP gene module associated with disease severity. Furthermore, genetic and functional analysis revealed the IL6/IL6R/CEBP gene module has undergone recent evolutionary selection, including Neanderthal introgression and human pathogen adaptation, connected to systemic monocyte counts. These results suggest Mtb co-opts an evolutionary recent IFN-IL6-CEBP feed-forward loop, increasing myeloid differentiation linked to severe TB in humans.

eLife ◽  
2019 ◽  
Vol 8 ◽  


2020 ◽  
Author(s):  
Adam Pond ◽  
Seongwon Hwang ◽  
Berta Verd ◽  
Benjamin Steventon

Machine learning approaches are becoming increasingly widespread and are now present in most areas of research. Their recent surge can be explained in part by our ability to generate and store enormous amounts of data with which to train these models. The requirement for large training sets also limits further potential applications of machine learning, particularly in fields where data tend to be scarce, such as developmental biology. However, recent research indicates that machine learning and Big Data can sometimes be decoupled to train models with modest amounts of data. In this work we set out to train a CNN-based classifier to stage zebrafish tail buds at four different stages of development using small, information-rich data sets. Our results show that two- and three-dimensional convolutional neural networks can be trained to stage developing zebrafish tail buds based on both morphological and gene expression confocal microscopy images, achieving in each case up to 100% test accuracy. Importantly, we show that high accuracy can be achieved with data set sizes of under 100 images, much smaller than the typical training set size for a convolutional neural network. Furthermore, our classifier shows that it is possible to stage isolated embryonic structures without the need to refer to classic developmental landmarks in the whole embryo, which will be particularly useful for staging 3D culture in vitro systems such as organoids. We hope that this work will provide a proof of principle that helps dispel the myth that large data set sizes are always required to train CNNs, and encourage researchers in fields where data are scarce to also apply ML approaches.

Author summary: The application of machine learning approaches currently hinges on the availability of large data sets with which to train the models. However, recent research has shown that large data sets might not always be required. In this work we set out to see whether we could use small confocal microscopy image data sets to train a convolutional neural network (CNN) to stage zebrafish tail buds at four different stages in their development. We found that high test accuracies can be achieved with data set sizes of under 100 images, much smaller than the typical training set size for a CNN. This work also shows that we can robustly stage the embryonic development of isolated structures, without the need to refer back to landmarks in the whole embryo. This constitutes an important methodological advance for staging organoids and other 3D culture in vitro systems. This work shows that prohibitively large data sets are not always required to train CNNs, and we hope it will encourage others to apply the power of machine learning to their areas of study even if data are scarce.
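The study presumably trains its classifiers in a standard deep-learning framework; as a minimal sketch of the core operation such a network applies to a confocal image, here is a valid-mode 2D convolution in pure Python. The `edge_kernel` and the toy image are invented for the example, not taken from the paper.

```python
def conv2d(image, kernel):
    """Valid-mode 2D convolution (strictly, cross-correlation) of a
    grayscale image (list of rows) with a small kernel."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            acc = 0.0
            for di in range(kh):
                for dj in range(kw):
                    acc += image[i + di][j + dj] * kernel[di][dj]
            row.append(acc)
        out.append(row)
    return out

# A vertical-edge filter: responds where intensity changes left to right.
edge_kernel = [[1.0, 0.0, -1.0],
               [1.0, 0.0, -1.0],
               [1.0, 0.0, -1.0]]

# Toy 5x5 "image": bright left half, dark right half.
img = [[1.0, 1.0, 0.0, 0.0, 0.0]] * 5
fmap = conv2d(img, edge_kernel)  # 3x3 feature map
```

A CNN stacks many such learned kernels with nonlinearities and pooling; staging then reduces to classifying the resulting feature maps.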


2015 ◽  
Vol 2015 ◽  
pp. 1-11 ◽  
Author(s):  
David J. M. Lewis ◽  
Mark P. Lythgoe

Advances in “omics” technologies (transcriptomics, proteomics, metabolomics, genomics/epigenomics, etc.), allied with statistical and bioinformatics tools, are providing insights into basic mechanisms of vaccine and adjuvant efficacy or inflammation/reactogenicity. Predictive biomarkers of relatively frequent inflammatory reactogenicity may be identified in systems vaccinology studies involving tens or hundreds of participants and used to screen new vaccines and adjuvants in in vitro, ex vivo, animal, or human models. The identification of rare events (such as those observed with the initial rotavirus vaccine or suspected autoimmune complications) will require interrogation of large data sets and population-based research before systems vaccinology can be applied. The Innovative Medicines Initiative-funded public-private project BIOVACSAFE is an initial attempt to systematically identify biomarkers of relatively common inflammatory events after adjuvanted immunization using human, animal, and population-based models. Discriminatory profiles or biomarkers are being identified, which require validation in large trials involving thousands of participants before they can be generalized. Ultimately, it is hoped that the knowledge gained from such initiatives will provide tools for industry, academia, and regulators to select optimal noninflammatory but immunogenic and effective vaccine-adjuvant combinations, thereby shortening product development cycles and identifying unsuitable vaccine candidates that would otherwise fail in expensive late-stage development or postmarketing.


Author(s):  
John A. Hunt

Spectrum-imaging is a useful technique for comparing different processing methods on very large data sets that are identical for each method. This paper compares methods of electron energy-loss spectroscopy (EELS) quantitative analysis on the Al-Li system. The spectrum-image analyzed here was obtained from an Al-10 at% Li foil aged to produce δ' precipitates that can span the foil thickness. Two 1024-channel EELS spectra offset in energy by 1 eV were recorded and stored at each pixel in the 80x80 spectrum-image (25 Mbytes). An energy range of 39-89 eV (20 channels/eV) is represented. During processing, the spectra are either subtracted to create an artifact-corrected difference spectrum, or the energy offset is numerically removed and the spectra are added to create a normal spectrum. The spectrum-images are processed into 2D floating-point images using the methods and software described in [1].
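The two per-pixel operations described above (subtraction, and offset removal followed by addition) can be sketched in a few lines of Python. This is an illustration of the arithmetic only, with a toy offset of 2 channels rather than the 20 channels (1 eV at 20 channels/eV) used in the study; the function names are invented.

```python
def difference_spectrum(spec_a, spec_b):
    """Channel-by-channel subtraction of two spectra acquired with a small
    energy offset; fixed detector-channel artifacts cancel in the difference."""
    return [a - b for a, b in zip(spec_a, spec_b)]

def aligned_sum(spec_a, spec_b, offset_channels):
    """Numerically remove the energy offset and add the spectra.
    spec_b starts `offset_channels` channels higher in energy than spec_a,
    so channel i of spec_b lines up with channel i + offset of spec_a."""
    n = len(spec_a) - offset_channels
    return [spec_a[i + offset_channels] + spec_b[i] for i in range(n)]

# Toy signal: spec_b is spec_a shifted up by 2 channels in energy.
signal = list(range(30))
spec_a = signal
spec_b = [signal[i + 2] for i in range(28)]
summed = aligned_sum(spec_a, spec_b, 2)   # aligned channels add coherently
diff = difference_spectrum([0, 1, 2, 3, 4], [1, 1, 1, 1, 1])
```

In the study itself this arithmetic would run once per pixel over the 80x80 spectrum-image.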


Author(s):  
Thomas W. Shattuck ◽  
James R. Anderson ◽  
Neil W. Tindale ◽  
Peter R. Buseck

Individual particle analysis involves the study of tens of thousands of particles using automated scanning electron microscopy and elemental analysis by energy-dispersive x-ray emission spectroscopy (EDS). EDS produces large data sets that must be analyzed using multivariate statistical techniques. A complete study uses cluster analysis, discriminant analysis, and factor or principal components analysis (PCA). The three techniques are used in the study of particles sampled during the FeLine cruise to the mid-Pacific Ocean in the summer of 1990. The mid-Pacific aerosol provides information on long-range particle transport, iron deposition, sea salt ageing, and halogen chemistry.

Aerosol particle data sets suffer from a number of difficulties for pattern recognition using cluster analysis. There is a great disparity in the number of observations per cluster and in the range of the variables in each cluster. The variables are not normally distributed, they are subject to considerable experimental error, and many values are zero because of finite detection limits. Many of the clusters show considerable overlap because of natural variability, agglomeration, and chemical reactivity.
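The abstract does not say how the authors preprocessed their skewed, zero-inflated variables before clustering; one common remedy, shown here purely as a hedged sketch, is a log1p transform (which keeps zeros finite) followed by per-variable z-scoring so cluster distances are not dominated by the widest-ranging element.

```python
import math

def standardize(columns):
    """Log-transform and z-score each variable (given as a list of columns).
    log1p maps the many below-detection-limit zeros to 0 rather than -inf;
    z-scoring puts all variables on a comparable scale for clustering."""
    out = []
    for col in columns:
        logged = [math.log1p(v) for v in col]
        mean = sum(logged) / len(logged)
        var = sum((v - mean) ** 2 for v in logged) / len(logged)
        sd = math.sqrt(var) or 1.0   # guard against constant columns
        out.append([(v - mean) / sd for v in logged])
    return out

# Tiny example: one variable with a zero (detection limit) and one high value.
z = standardize([[0.0, math.e - 1.0]])
```

Transforms like this address the scale disparity but not cluster overlap, which the abstract attributes to genuine physical mixing.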


Author(s):  
Mykhajlo Klymash ◽  
Olena Hordiichuk-Bublivska ◽  
Ihor Tchaikovskyi ◽  
Oksana Urikova

This article investigates the processing of large arrays of information in distributed systems. Singular value decomposition (SVD) is used to reduce the amount of data processed by eliminating redundancy. Dependencies of computational efficiency for distributed systems were obtained using the MPI message-passing protocol and the MapReduce model of node interaction. The efficiency of each technology was analyzed for different data sizes: non-distributed systems are inefficient for large volumes of information due to low computing performance, so distributed systems employing singular value decomposition are proposed to reduce the amount of information processed. The study of systems using the MPI protocol and the MapReduce model yielded the dependence of calculation time on the number of processes, which confirms the expediency of distributed computing when processing large data sets. It was also found that distributed systems using the MapReduce model work much more efficiently than MPI, especially with large amounts of data, whereas MPI performs calculations more efficiently for small amounts of information. As data sets grow, it is advisable to use the MapReduce model.
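The SVD-based reduction step can be illustrated in plain Python with power iteration, which recovers only the dominant singular triple; keeping the top few triples instead of the full matrix is the redundancy elimination the article describes. This sketch is single-process and illustrative only — the article's distributed MPI/MapReduce implementations are not shown, and `top_singular` is an invented name. The input matrix is assumed nonzero.

```python
def top_singular(A, iters=200):
    """Power iteration for the dominant singular triple (sigma, u, v) of a
    matrix A given as a list of rows. sigma * u * v^T is the best rank-1
    approximation of A, the crudest form of SVD data reduction."""
    m, n = len(A), len(A[0])
    v = [1.0] * n
    for _ in range(iters):
        # u = A v, normalized
        u = [sum(A[i][j] * v[j] for j in range(n)) for i in range(m)]
        norm_u = sum(x * x for x in u) ** 0.5
        u = [x / norm_u for x in u]
        # v = A^T u; its norm converges to the top singular value
        v = [sum(A[i][j] * u[i] for i in range(m)) for j in range(n)]
        sigma = sum(x * x for x in v) ** 0.5
        v = [x / sigma for x in v]
    return sigma, u, v

# Rank-1 toy matrix: the second row is half the first, so one triple
# reconstructs it exactly.
A = [[2.0, 4.0],
     [1.0, 2.0]]
sigma, u, v = top_singular(A)
```

Storing `(sigma, u, v)` takes m + n + 1 numbers instead of m * n, which is where the data reduction comes from; a distributed version would parallelize the two matrix-vector products across nodes.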


2018 ◽  
Vol 2018 (6) ◽  
pp. 38-39
Author(s):  
Austa Parker ◽  
Yan Qu ◽  
David Hokanson ◽  
Jeff Soller ◽  
Eric Dickenson ◽  
...  

Computers ◽  
2021 ◽  
Vol 10 (4) ◽  
pp. 47
Author(s):  
Fariha Iffath ◽  
A. S. M. Kayes ◽  
Md. Tahsin Rahman ◽  
Jannatul Ferdows ◽  
Mohammad Shamsul Arefin ◽  
...  

A programming contest generally involves the host presenting a set of logical and mathematical problems to the contestants, who are required to write computer programs capable of solving them. An online judge system automates the judging of the programs submitted by the users; online judges are systems designed for the reliable evaluation of submitted source code. Traditional online judging platforms are not ideally suited to programming labs, as they do not support partial scoring or efficient detection of plagiarized code. In light of this, we present in this paper an online judging framework capable of automatically scoring code by efficiently detecting plagiarized content and the level of accuracy of the code. Our system detects plagiarism by computing fingerprints of programs and comparing the fingerprints instead of the whole files. We used winnowing to select fingerprints among the k-gram hash values of a source code, generated by the Rabin–Karp algorithm. The proposed system is compared with existing online judging platforms to show its superiority in terms of time efficiency, correctness, and feature availability. In addition, we evaluated our system on large data sets, comparing its run time with MOSS, a widely used plagiarism detection tool.
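The fingerprinting pipeline named above (Rabin–Karp k-gram hashing followed by winnowing) can be sketched as follows. The hash base, modulus, and the parameters k and w are placeholder choices for the example, not values from the paper.

```python
def kgram_hashes(text, k, base=257, mod=(1 << 61) - 1):
    """Rolling Rabin–Karp hashes of every k-gram of `text`."""
    if len(text) < k:
        return []
    h = 0
    for ch in text[:k]:
        h = (h * base + ord(ch)) % mod
    hashes = [h]
    top = pow(base, k - 1, mod)  # weight of the character sliding out
    for i in range(k, len(text)):
        h = (h - ord(text[i - k]) * top) % mod  # drop the oldest character
        h = (h * base + ord(text[i])) % mod     # append the new one
        hashes.append(h)
    return hashes

def winnow(hashes, w):
    """Winnowing: from each window of w consecutive hashes, keep the
    rightmost minimum together with its position. The resulting set is
    the document's fingerprint."""
    fingerprints = set()
    for i in range(len(hashes) - w + 1):
        window = hashes[i:i + w]
        m = min(window)
        j = i + max(idx for idx, h in enumerate(window) if h == m)
        fingerprints.add((j, m))
    return fingerprints

# Two programs are then compared by the overlap of their fingerprint
# hash sets rather than by scanning the whole files.
fp = winnow(kgram_hashes("the quick brown fox", 5), 4)
```

In practice the source would first be normalized (whitespace, identifiers) before hashing, so that renaming variables does not defeat the comparison.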


2021 ◽  
Author(s):  
Věra Kůrková ◽  
Marcello Sanguineti
Keyword(s):  
