59 Insights into Complex Traits from Human Genetics

2021 ◽  
Vol 99 (Supplement_3) ◽  
pp. 30-31
Author(s):  
Kathryn Kemper

Abstract Genomic selection has been implemented successfully in many livestock industries for genetic improvement. However, genomic selection provides limited insight into the genetic mechanisms underlying variation in complex traits. In contrast, human genetics has a focus on understanding genetic architecture and the origins of quantitative trait variation. This presentation will discuss a number of examples from human genetics which can inform our understanding of the nature of variation in complex traits. So-called ‘monogenic’ conditions, for example, are proving to have more complex genetic architecture than naïve expectations might suggest. Massive data sets of millions of people are also enabling longstanding questions to be addressed. Traits such as height, for example, are affected by a very large but finite number of loci. We can reconcile seemingly disparate heritability estimates from different experimental designs by accounting for assortative mating. The presentation will provide a brief update on current approaches to genomic prediction in human genetics and discuss the implications of these findings for understanding and predicting complex traits in livestock.
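As a rough illustration of the assortative-mating point (the infinitesimal-model sketch below is standard textbook material, not taken from the presentation): with a phenotypic correlation r between mates and equilibrium heritability h², the correlation between mates' breeding values is roughly m ≈ r·h². The additive genetic variance is then inflated at equilibrium, while the within-family segregation variance is not, which is one way estimates from population-level and within-family designs can come to differ:

```latex
% Recursion for additive genetic variance under assortative mating
% (infinitesimal model; V_{A,0} is the base, linkage-equilibrium variance,
%  m ~ r h^2 is the correlation between mates' breeding values)
V_{A,t+1} = \frac{1+m}{2}\, V_{A,t} + \frac{1}{2}\, V_{A,0}
\quad\Longrightarrow\quad
V_{A,\infty} = \frac{V_{A,0}}{1-m} \approx \frac{V_{A,0}}{1 - r\,h^{2}}
```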

Author(s):  
Ruth Johnson ◽  
Kathryn S. Burch ◽  
Kangcheng Hou ◽  
Mario Paciuc ◽  
Bogdan Pasaniuc ◽  
...  

Abstract A key question in human genetics is understanding the proportion of SNPs modulating a particular phenotype or the proportion of susceptibility SNPs for a disease, termed polygenicity. Previous studies have observed that complex traits tend to be highly polygenic, opposing the previous belief that only a handful of SNPs contribute to a trait. Beyond these genome-wide estimates, the distribution of polygenicity across genomic regions as well as the genomic factors that affect regional polygenicity remain poorly understood. A reason for this gap is that methods for estimating polygenicity utilize SNP effect sizes from GWAS. However, estimating regional polygenicity from GWAS effect sizes involves untangling the correlation between SNPs due to LD, leading to intractable computations for even a small number of SNPs. In this work, we propose a scalable method, BEAVR, to estimate the regional polygenicity of a trait given marginal effect sizes from GWAS and LD information. We implement a Gibbs sampler to estimate the posterior distribution of the regional polygenicity and derive a fast, algorithmic update to circumvent the computational bottlenecks associated with LD. The runtime of our algorithm is 𝒪(MK) for M SNPs and K susceptibility SNPs, where the number of susceptibility SNPs is typically K ≪ M. By modeling the full LD structure, we show that BEAVR provides unbiased estimates of polygenicity compared to previous methods that only partially model LD. Finally, we show how estimates of regional polygenicity for BMI, eczema, and high cholesterol provide insight into the regional genetic architecture of each trait.
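A minimal sketch of the kind of summary-statistic Gibbs sampler the abstract describes is given below. It is written against a generic spike-and-slab model with marginal effects bhat, an LD matrix R, and GWAS sample size n; the function name, priors, and fixed hyperparameters (sigma_b2, sigma_e2, pi) are illustrative assumptions, not the BEAVR implementation itself.

```python
"""Illustrative Gibbs sampler for regional polygenicity from GWAS summary
statistics and LD -- a simplified spike-and-slab sketch, not the authors'
BEAVR implementation. Assumes standardized genotypes, a known residual
variance, and a fixed prior effect-size variance."""
import numpy as np

def gibbs_regional_polygenicity(bhat, R, n, sigma_b2=1e-3, sigma_e2=1.0,
                                pi=0.01, n_iter=2000, burn_in=500, seed=0):
    """bhat: marginal GWAS effects (M,); R: LD matrix (M, M); n: GWAS sample size.
    Returns the posterior mean of the regional polygenicity K/M."""
    rng = np.random.default_rng(seed)
    M = len(bhat)
    beta = np.zeros(M)                 # current joint (causal-scale) effects
    delta = np.zeros(M, dtype=bool)    # current susceptibility-SNP indicators
    v = 1.0 / (n / sigma_e2 + 1.0 / sigma_b2)   # conditional posterior variance
    Rbeta = R @ beta                   # running R @ beta, updated incrementally
    poly_samples = []
    for it in range(n_iter):
        for j in range(M):
            # residual marginal effect of SNP j given all other current effects
            r_j = bhat[j] - (Rbeta[j] - R[j, j] * beta[j])
            mu = v * n * r_j / sigma_e2
            # posterior odds that SNP j is a susceptibility SNP
            log_odds = (np.log(pi / (1 - pi))
                        + 0.5 * np.log(v / sigma_b2)
                        + 0.5 * mu**2 / v)
            p_incl = 1.0 / (1.0 + np.exp(-log_odds))
            new_beta = rng.normal(mu, np.sqrt(v)) if rng.random() < p_incl else 0.0
            # update R @ beta only when the effect actually changes; SNPs that
            # stay at zero are skipped, so a sweep costs roughly O(MK), not O(M^2)
            if new_beta != beta[j]:
                Rbeta += R[:, j] * (new_beta - beta[j])
                beta[j] = new_beta
            delta[j] = new_beta != 0.0
        if it >= burn_in:
            poly_samples.append(delta.mean())
    return float(np.mean(poly_samples))
```

In use, one would run this per LD block (for example, gibbs_regional_polygenicity(bhat, R, n=50000) on a single region) and compare the returned fractions across regions; the skipped updates for zero-effect SNPs are what echo the 𝒪(MK) runtime quoted above.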


Author(s):  
A. Salman Avestimehr ◽  
Seyed Mohammadreza Mousavi Kalan ◽  
Mahdi Soltanolkotabi

Abstract Dealing with the sheer size and complexity of today’s massive data sets requires computational platforms that can analyze data in a parallelized and distributed fashion. A major bottleneck in such modern distributed computing environments is that some of the worker nodes may run slowly. These nodes, a.k.a. stragglers, can significantly slow down computation, as the slowest node may dictate the overall computational time. A recent computational framework, called encoded optimization, creates redundancy in the data to mitigate the effect of stragglers. In this paper, we develop a novel mathematical understanding of this framework, demonstrating its effectiveness in much broader settings than was previously understood. We also analyze the convergence behavior of iterative encoded optimization algorithms, allowing us to characterize fundamental trade-offs between convergence rate, data set size, accuracy, computational load (or data redundancy), and straggler toleration in this framework.
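The toy simulation below illustrates one simplified reading of this idea for least squares: the data are premultiplied by a random encoding matrix with redundant rows, the encoded rows are split across workers, and each gradient step uses only the workers that respond first. The matrix sizes, learning rate, and straggler model are illustrative assumptions, not the scheme or analysis from the paper.

```python
"""Toy simulation of encoded gradient descent for least squares with stragglers.
Illustrative only -- not the paper's exact encoding scheme or analysis."""
import numpy as np

rng = np.random.default_rng(1)
n, d = 1200, 20
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)

# Encode with redundancy: S has more rows than n, so a subset of encoded rows
# still approximates the original least-squares objective in expectation.
num_workers, rows_per_worker = 8, 200            # 1600 encoded rows > 1200 original
S = rng.normal(size=(num_workers * rows_per_worker, n)) / np.sqrt(n)
X_enc, y_enc = S @ X, S @ y
worker_rows = np.split(np.arange(num_workers * rows_per_worker), num_workers)

def encoded_gd(num_stragglers=2, lr=0.05, iters=300):
    """Gradient descent using only the fastest (num_workers - num_stragglers) workers."""
    w = np.zeros(d)
    for _ in range(iters):
        # each iteration a random subset of workers is slow and gets dropped
        fast = rng.choice(num_workers, num_workers - num_stragglers, replace=False)
        grad = np.zeros(d)
        for i in fast:
            rows = worker_rows[i]
            residual = X_enc[rows] @ w - y_enc[rows]
            grad += X_enc[rows].T @ residual
        # averaging over the rows actually used gives an unbiased-looking step;
        # tolerating more stragglers means noisier gradients and slower convergence
        w -= lr * grad / (len(fast) * rows_per_worker)
    return w

w_hat = encoded_gd()
print("parameter error:", np.linalg.norm(w_hat - w_true))
```

Raising num_stragglers in this sketch shows the trade-off the abstract refers to: more straggler tolerance for the same data redundancy comes at the cost of convergence rate.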


2022 ◽  
pp. 41-67
Author(s):  
Vo Ngoc Phu ◽  
Vo Thi Ngoc Tran

Machine learning (ML), neural networks (NNs), evolutionary algorithms (EAs), and fuzzy systems (FSs) have been prominent in computer science for many years and have been applied to many different areas. They have contributed substantially to the development of large corporations and organizations, which in turn generate enormous amounts of information and massive data sets (MDSs). These big data sets (BDSs) pose challenges for many commercial applications and research efforts, and numerous ML, NN, EA, and FS algorithms have been developed to handle them. To support this work, the authors survey the NN algorithms available for large-scale data sets (LSDSs) in this chapter. Finally, they present a novel NN model for BDSs in both a sequential environment (SE) and a distributed network environment (DNE).


Author(s):  
Joseph L. Breault

The National Academy of Sciences convened in 1995 for a conference on massive data sets. The presentation on health care noted that “massive applies in several dimensions . . . the data themselves are massive, both in terms of the number of observations and also in terms of the variables . . . there are tens of thousands of indicator variables coded for each patient” (Goodall, 1995, paragraph 18). We multiply this by the number of patients in the United States, which is hundreds of millions.
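As a rough back-of-the-envelope calculation with round figures (the exact counts are not given in the source):

```latex
10^{4}\ \text{variables per patient} \times 3\times 10^{8}\ \text{patients}
\approx 3\times 10^{12}\ \text{recorded values}
```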


2020 ◽  
Vol 10 (6) ◽  
pp. 1343-1358
Author(s):  
Ernesto Iadanza ◽  
Rachele Fabbri ◽  
Džana Bašić-ČiČak ◽  
Amedeo Amedei ◽  
Jasminka Hasic Telalovic

Abstract This article aims to provide a thorough overview of the use of Artificial Intelligence (AI) techniques in studying the gut microbiota and its role in the diagnosis and treatment of some important diseases. The association between the microbiota and disease, together with its clinical relevance, remains difficult to interpret. Advances in AI techniques, such as Machine Learning (ML) and Deep Learning (DL), can help clinicians process and interpret these massive data sets. Two research groups were involved in this scoping review, working in two different parts of Europe: Florence and Sarajevo. The papers included in the review describe the use of ML or DL methods applied to the study of the human gut microbiota. In total, 1109 papers were considered, and after screening, a final set of 16 articles was included in the scoping review. The reviewed papers applied a range of AI techniques: eleven evaluated only ML algorithms (ranging from one to eight algorithms applied to a single dataset), while the remaining five examined both ML and DL algorithms. The most frequently applied ML algorithm was Random Forest, which also exhibited the best performance.
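For context, a minimal sketch of the most common pipeline in the reviewed papers, a Random Forest classifier trained on microbial relative abundances, might look as follows; the synthetic data, taxa counts, and parameters are placeholders and are not drawn from any of the 16 studies.

```python
"""Minimal sketch: Random Forest on gut-microbiota relative abundances.
Synthetic placeholder data; cohort size, taxa count, and labels are assumptions."""
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n_samples, n_taxa = 120, 300                                   # hypothetical cohort and taxa counts
abundances = rng.dirichlet(np.ones(n_taxa), size=n_samples)    # relative abundances, rows sum to 1
labels = rng.integers(0, 2, size=n_samples)                    # 1 = disease, 0 = healthy (synthetic)

clf = RandomForestClassifier(n_estimators=500, random_state=0)
scores = cross_val_score(clf, abundances, labels, cv=5, scoring="roc_auc")
print("cross-validated AUC: %.2f +/- %.2f" % (scores.mean(), scores.std()))
```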


2017 ◽  
Vol 35 (11) ◽  
pp. 1026-1028 ◽  
Author(s):  
Martin Steinegger ◽  
Johannes Söding
