Identification of essential regulatory elements in the human genome

2018 ◽  
Author(s):  
Alex Wells ◽  
David Heckerman ◽  
Ali Torkamani ◽  
Li Yin ◽  
Bing Ren ◽  
...  

The identification of essential regulatory elements is central to the understanding of the consequences of genetic variation. Here we use novel genomic data and machine learning techniques to map essential regulatory elements and to guide functional validation. We train an XGBoost model using 38 functional and structural features, including genome essentiality metrics, 3D genome organization and enhancer reporter STARR-seq data, to differentiate between pathogenic and control non-coding genetic variants. We validate the accuracy of the predictions using data from tiling-deletion-based and CRISPR interference screens of cis-regulatory element activity. In neurodevelopmental disorders, the model (ncER, non-coding Essential Regulation) maps essential genomic segments within deletions and rearranged topologically associated domains linked to human disease. We show that the approach successfully identifies essential regulatory elements in the human genome.
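As a rough illustration of the modelling step described in this abstract, the sketch below trains a gradient-boosted (XGBoost) classifier on synthetic stand-ins for the 38 per-variant features. The feature values, labels and hyperparameters are assumptions for demonstration, not the published ncER configuration.

```python
# Minimal sketch of an XGBoost pathogenic-vs-control variant classifier.
# Features, labels and hyperparameters are illustrative assumptions.
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Stand-in for the 38 functional/structural features per variant
# (e.g. essentiality metrics, 3D-contact features, STARR-seq signal).
X = rng.normal(size=(5000, 38))
y = rng.integers(0, 2, size=5000)  # 1 = pathogenic, 0 = control (synthetic)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

model = XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.1,
                      eval_metric="auc")
model.fit(X_tr, y_tr)

scores = model.predict_proba(X_te)[:, 1]  # per-variant pathogenicity score
print("held-out AUC:", roc_auc_score(y_te, scores))
```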

2019 ◽  
Vol 10 (1) ◽  
Author(s):  
Alex Wells ◽  
David Heckerman ◽  
Ali Torkamani ◽  
Li Yin ◽  
Jonathan Sebat ◽  
...  

Abstract A gene is considered essential if loss of function results in loss of viability or fitness, or in disease. This concept is well established for coding genes; however, non-coding regions are thought less likely to be determinants of critical functions. Here we train a machine learning model using functional, mutational and structural features, including new genome essentiality metrics, 3D genome organization and enhancer reporter data, to identify deleterious variants in non-coding regions. We assess the model's functional correlates using data from tiling-deletion-based and CRISPR interference screens of cis-regulatory element activity in over 3 Mb of genome sequence. Finally, we explore two use cases involving indels and the disruption of enhancers associated with a developmental disease. We rank variants in the non-coding genome according to their predicted deleteriousness. The model prioritizes non-coding regions associated with the regulation of important genes and with cell viability, an in vitro surrogate of essentiality.
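The genome-wide ranking step can be illustrated with a percentile transform over model scores. This is only one plausible way to express ranked deleteriousness, and the scores below are synthetic; the exact ncER scoring scheme may differ.

```python
# Sketch: turn per-variant model scores into genome-wide percentile ranks.
import numpy as np
from scipy.stats import rankdata

scores = np.random.default_rng(1).random(1_000_000)  # stand-in model outputs

# Percentile rank in [0, 100]: higher = predicted more deleterious.
percentiles = 100.0 * rankdata(scores) / len(scores)

top = np.argsort(percentiles)[::-1][:10]
print("ten highest-ranked positions:", top)
```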


2020 ◽  
Vol 36 (Supplement_2) ◽  
pp. i692-i699
Author(s):  
Marleen M. Nieboer ◽  
Jeroen de Ridder

Abstract Motivation Although structural variants (SVs) play an important role in cancer, methods to predict their effects, especially for SVs in non-coding regions, are lacking, so these variants are often overlooked in the clinic. Non-coding SVs may disrupt the boundaries of Topologically Associated Domains (TADs), thereby affecting interactions between genes and regulatory elements such as enhancers. However, it is not known when such alterations are pathogenic. Although machine learning techniques are a promising solution to this question, representing the large number of interactions that an SV can disrupt in a single feature matrix is not trivial. Results We introduce svMIL: a method to predict the effects of pathogenic TAD boundary-disrupting SVs based on multiple instance learning, which circumvents the need for a traditional feature matrix by grouping SVs into bags that can contain any number of disruptions. We demonstrate that svMIL can predict SV pathogenicity, measured through same-sample gene expression aberration, for various cancer types. In addition, our approach reveals that somatic pathogenic SVs alter different regulatory interactions than somatic non-pathogenic SVs and germline SVs. Availability and implementation All code for svMIL is publicly available on GitHub: https://github.com/UMCUGenetics/svMIL. Supplementary information Supplementary data are available at Bioinformatics online.
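A minimal sketch of the multiple-instance idea, assuming synthetic data: each SV is a bag holding a variable number of disrupted-interaction feature vectors, and pooling embeds each bag into one fixed-length vector so an ordinary classifier can be trained without a traditional per-instance feature matrix. svMIL's actual formulation differs in detail.

```python
# Toy multiple-instance-learning baseline: bags of variable size,
# embedded via mean/max pooling. Illustrative only, not svMIL itself.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_bag(label, n_features=8):
    """A bag with a random number of instance feature vectors."""
    n_inst = rng.integers(1, 12)
    x = rng.normal(size=(n_inst, n_features))
    if label:  # shift a few instances in "pathogenic" bags
        x[: max(1, n_inst // 3)] += 1.5
    return x

labels = rng.integers(0, 2, size=400)
bags = [make_bag(l) for l in labels]

# Embed each variable-size bag as one fixed-length vector (mean + max),
# sidestepping the need for a per-instance feature matrix.
X = np.stack([np.concatenate([b.mean(axis=0), b.max(axis=0)]) for b in bags])

clf = LogisticRegression(max_iter=1000).fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```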


Author(s):  
Todor D. Ganchev

In this chapter we review various computational models of locally recurrent neurons and discuss the architecture of some archetypal locally recurrent neural networks (LRNNs) that are based on them. Generalizations of these structures are discussed as well. Furthermore, we point to a number of real-world applications of LRNNs that have been reported in past and recent publications. These applications involve classification or prediction of temporal sequences, discovery and modeling of spatial and temporal correlations, process identification and control, and so on. Validation experiments reported in these developments provide evidence that locally recurrent architectures are capable of identifying and exploiting temporal and spatial correlations (i.e., the context in which events occur), which is the main reason for their advantageous performance compared with that of their non-recurrent counterparts or other comparable machine learning techniques.
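To make the notion of local recurrence concrete, the sketch below implements a single neuron whose feedback is confined to its own past outputs (a short IIR-style tap) rather than spanning the layer. The weights and feedback order are arbitrary illustrative choices.

```python
# One locally recurrent neuron: feedback is local to the unit, filtering
# its own past activations, rather than a global recurrent connection.
import numpy as np

def locally_recurrent_neuron(x, w_in, a_fb, bias=0.0):
    """x: (T,) input sequence; w_in: input weight; a_fb: feedback taps."""
    T, order = len(x), len(a_fb)
    y = np.zeros(T)
    for t in range(T):
        # local feedback: a short filter over the neuron's own past outputs
        fb = sum(a_fb[k] * y[t - 1 - k] for k in range(order) if t - 1 - k >= 0)
        y[t] = np.tanh(w_in * x[t] + fb + bias)
    return y

t = np.linspace(0, 4 * np.pi, 200)
out = locally_recurrent_neuron(np.sin(t), w_in=1.0, a_fb=[0.5, -0.2])
print(out[:5])
```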


Author(s):  
Bhavani Thuraisingham

Data mining is the process of posing queries to large quantities of data and extracting often previously unknown information using mathematical, statistical, and machine-learning techniques. Data mining has many applications in a number of areas, including marketing and sales, medicine, law, manufacturing, and, more recently, homeland security. Using data mining, one can uncover hidden dependencies between terrorist groups as well as possibly predict terrorist events based on past experience. One particular data-mining technique that is being investigated a great deal for homeland security is link analysis, where links are drawn between various nodes in order to detect hidden relationships.
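A toy link-analysis sketch, assuming a fabricated entity graph: indirect chains between two entities of interest surface as shortest paths, and high-degree nodes flag candidate hubs for closer review.

```python
# Toy link analysis over a fabricated entity graph (not real data).
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("person_A", "account_1"), ("account_1", "shell_co"),
    ("shell_co", "account_2"), ("account_2", "person_B"),
    ("person_A", "phone_X"), ("phone_X", "person_C"),
])

# A "hidden link": the shortest chain of intermediaries between A and B.
print(nx.shortest_path(G, "person_A", "person_B"))

# Entities bridging many others are candidate hubs for closer review.
print(sorted(G.degree, key=lambda kv: kv[1], reverse=True)[:3])
```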


2020 ◽  
Vol 11 (1) ◽  
Author(s):  
R. J. Shalloo ◽  
S. J. D. Dann ◽  
J.-N. Gruse ◽  
C. I. D. Underwood ◽  
A. F. Antoine ◽  
...  

Abstract Laser wakefield accelerators promise to revolutionize many areas of accelerator science. However, one of the greatest challenges to their widespread adoption is the difficulty in control and optimization of the accelerator outputs due to coupling between input parameters and the dynamic evolution of the accelerating structure. Here, we use machine learning techniques to automate a 100 MeV-scale accelerator, which optimized its outputs by simultaneously varying up to six parameters, including the spectral and spatial phase of the laser and the plasma density and length. Most notably, the model built by the algorithm enabled optimization of the laser evolution that might otherwise have been missed in single-variable scans. Subtle tuning of the laser pulse shape caused an 80% increase in electron beam charge, despite the pulse length changing by just 1%.
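One common choice for this kind of few-measurement, multi-parameter tuning problem is Gaussian-process Bayesian optimization. The sketch below runs it over six illustrative inputs against a synthetic stand-in for a measured beam-charge signal; it does not reproduce the authors' actual setup, objective or bounds.

```python
# Gaussian-process Bayesian optimization over six coupled inputs
# (scikit-optimize). The objective is a synthetic stand-in for a
# measured beam-charge diagnostic; bounds are illustrative.
import numpy as np
from skopt import gp_minimize

def negative_beam_charge(params):
    """Pretend diagnostic: peaked around an unknown optimum, with noise."""
    x = np.asarray(params)
    optimum = np.array([0.2, -0.1, 0.3, 0.0, 0.5, -0.3])
    return -np.exp(-np.sum((x - optimum) ** 2)) + 0.01 * np.random.randn()

# Six inputs, e.g. spectral phase terms, focus, plasma density and length.
bounds = [(-1.0, 1.0)] * 6

result = gp_minimize(negative_beam_charge, bounds, n_calls=40, random_state=0)
print("best settings:", np.round(result.x, 3))
print("best (negated) charge:", result.fun)
```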


Author(s):  
Jonathan Becker ◽  
Aveek Purohit ◽  
Zheng Sun

The USARSim group at NIST developed a simulated robot that operated in the Unreal Tournament 3 (UT3) gaming environment. They used a software PID controller to control the robot in UT3 worlds. Unfortunately, the PID controller did not work well, so NIST asked us to develop a better controller using machine learning techniques. In the process, we characterized the software PID controller and the robot's behavior in UT3 worlds. Using data collected from our simulations, we compared different machine learning techniques, including linear regression and reinforcement learning (RL). Finally, we implemented an RL-based controller in Matlab and ran it in the UT3 environment via a TCP/IP link between Matlab and UT3.
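The sketch below contrasts the two controller families mentioned here, a software PID loop and a tabular Q-learning (RL) controller, on a trivial one-dimensional plant. All gains, discretizations and rewards are illustrative choices, not the NIST/UT3 configuration.

```python
# Toy comparison: PID vs tabular Q-learning, both driving a 1-D plant
# toward a setpoint. Illustrative gains and rewards only.
import numpy as np

target = 0.8  # setpoint for the 1-D plant

# --- PID controller ---
def pid_step(error, state, kp=1.0, ki=0.1, kd=0.05, dt=0.1):
    integral, prev_err = state
    integral += error * dt
    deriv = (error - prev_err) / dt
    return kp * error + ki * integral + kd * deriv, (integral, error)

pos, state = 0.0, (0.0, 0.0)
for _ in range(200):
    u, state = pid_step(target - pos, state)
    pos += 0.05 * np.clip(u, -1.0, 1.0)  # actuator-limited plant update
print("PID final position:", round(pos, 3))

# --- tabular Q-learning controller on the discretized error ---
rng = np.random.default_rng(0)
n_states, actions = 21, np.array([-1.0, 0.0, 1.0])
Q = np.zeros((n_states, len(actions)))

def bucket(err):  # map error in [-1, 1] to a discrete state index
    return int(np.clip((err + 1) / 2 * (n_states - 1), 0, n_states - 1))

pos = 0.0
for _ in range(5000):
    s = bucket(target - pos)
    a = rng.integers(len(actions)) if rng.random() < 0.1 else int(Q[s].argmax())
    pos += 0.05 * actions[a]
    r = -abs(target - pos)  # reward: stay near the setpoint
    Q[s, a] += 0.1 * (r + 0.95 * Q[bucket(target - pos)].max() - Q[s, a])
print("RL final position:", round(pos, 3))
```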


2018 ◽  
Vol 7 (4) ◽  
pp. 2738
Author(s):  
P. Srinivas Rao ◽  
Jayadev Gyani ◽  
G. Narsimha

In online social networks, phony account detection, i.e. distinguishing genuine user accounts from forged ones, is a major task. The fundamental objective of a phony account detection framework is to identify and remove fake accounts from social networking sites. This work approaches the detection of phony accounts with a rule-based framework, evolutionary algorithms and fuzzy techniques. First, the most essential attributes are extracted, including personal attributes, similarity measures and genuine user activity such as reviews, tweets and comments; a linear combination of these attributes indicates the significance of each review, tweet or comment. To compute the similarity measure, a combined strategy based on the artificial bee colony algorithm and fuzzy logic is used. Second, the weights of the user attributes are tuned from social network activities and transactions using a genetic algorithm. Finally, a rank-logic framework computes the final score of a user's activities. The proposed approach is compared with existing user behavioral analysis techniques based on machine learning, using the crowdflower_sample and genuine_accounts_sample datasets from Facebook and Twitter. The results demonstrate that the proposed strategy outperforms the aforementioned methods.
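One component of this pipeline, tuning attribute weights with an evolutionary algorithm, can be sketched as follows. The data, attribute semantics and GA settings are synthetic assumptions rather than the authors' implementation.

```python
# Illustrative genetic algorithm tuning the weights of user-profile
# attributes so a weighted score separates genuine from phony accounts.
# Data, attribute names and GA settings are synthetic assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_attrs = 600, 5  # e.g. profile age, friend ratio, post rate...
X = rng.normal(size=(n_users, n_attrs))
y = rng.integers(0, 2, size=n_users)  # 1 = genuine, 0 = phony (synthetic)
X[y == 1] += 0.8                      # genuine users score higher on average

def fitness(w):
    """Accuracy of thresholding the weighted attribute score at 0."""
    return ((X @ w > 0).astype(int) == y).mean()

pop = rng.normal(size=(30, n_attrs))  # population of weight vectors
for gen in range(40):
    fit = np.array([fitness(w) for w in pop])
    parents = pop[np.argsort(fit)[-10:]]               # selection
    children = parents[rng.integers(10, size=(20,))]   # clone parents
    children += rng.normal(scale=0.2, size=children.shape)  # mutation
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(w) for w in pop])]
print("best weights:", np.round(best, 2), " accuracy:", round(fitness(best), 3))
```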

