Evaluating Representations for Gene Ontology Terms

2019 ◽  
Author(s):  
Dat Duong ◽  
Ankith Uppunda ◽  
Lisa Gai ◽  
Chelsea Ju ◽  
James Zhang ◽  
...  

Abstract
Protein functions can be described by Gene Ontology (GO) terms, allowing us to compare the functions of two proteins by measuring the similarity of the terms assigned to them. Recent works have applied neural network models to derive vector representations for GO terms and compute similarity scores for these terms by comparing their vector embeddings. There are two typical ways to embed GO terms into vectors: a model can embed either the definitions of the terms or the topology of the terms in the ontology. In this paper, we design three tasks to critically evaluate the GO embeddings of two recent neural network models, and further introduce additional models for embedding GO terms, adapted from three popular neural network frameworks not yet explored in previous works: Graph Convolutional Network (GCN), Embeddings from Language Models (ELMo), and Bidirectional Encoder Representations from Transformers (BERT). Task 1 studies edge cases where the GO embeddings may not provide meaningful similarity scores for GO terms. We find that all neural network based methods fail to produce high similarity scores for related terms when these terms have low Information Content values. Task 2 is a canonical task which estimates how well GO embeddings can compare functions of two orthologous genes or two interacting proteins. The best neural network methods for this task are those that embed GO terms using their definitions, and the differences among such methods are small. Task 3 evaluates how GO embeddings affect the performance of GO annotation methods, which predict whether a protein should be labeled by certain GO terms. When the annotation datasets contain many samples for each GO label, GO embeddings do not improve the classification accuracy. Machine learning GO annotation methods often remove rare GO labels from the training datasets so that the model parameters can be efficiently trained. 
We evaluate whether GO embeddings can improve prediction of rare labels unseen in the training datasets, and find that GO embeddings based on the BERT framework achieve the best results in this setting. We present our embedding methods and three evaluation tasks as the basis for future research on this topic.
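As a concrete illustration of comparing two GO terms through their embeddings, the sketch below computes cosine similarity between two hypothetical embedding vectors. The vectors, their dimensionality, and the choice of cosine similarity are illustrative assumptions, not the paper's exact scoring functions:

```python
import numpy as np

def cosine_similarity(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical 4-dimensional embeddings for two GO terms
emb_a = np.array([0.2, 0.7, 0.1, 0.4])
emb_b = np.array([0.3, 0.6, 0.0, 0.5])

score = cosine_similarity(emb_a, emb_b)  # a value in [-1, 1]
```

Two proteins can then be compared by aggregating such pairwise term scores over their annotation sets, which is the setting Tasks 1 and 2 probe.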

Author(s):  
Joarder Kamruzzaman ◽  
Ruhul A. Sarker ◽  
Rezaul K. Begg

In today’s global market economy, currency exchange rates play a vital role in the national economies of trading nations. In this chapter, we present an overview of neural network-based forecasting models for foreign currency exchange (forex) rates. To demonstrate the suitability of neural networks for forex forecasting, a case study on the rates of six different currencies against the Australian dollar is presented. We used three different learning algorithms in this case study, and a comparison based on several performance metrics and trading profitability is provided. Future research directions for enhancing neural network models are also discussed.
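Comparing forecasters on "several performance metrics" can be sketched with standard error measures. The metric choices below (MAE, RMSE, directional symmetry) and the toy exchange-rate series are illustrative assumptions, not necessarily the exact metrics used in the chapter:

```python
import numpy as np

def forecast_metrics(actual, predicted):
    """Common error metrics for evaluating exchange-rate forecasts."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    err = actual - predicted
    mae = np.mean(np.abs(err))             # mean absolute error
    rmse = np.sqrt(np.mean(err ** 2))      # root mean squared error
    # Directional symmetry: fraction of steps where the predicted
    # direction of change matches the actual direction of change.
    ds = np.mean(np.sign(np.diff(actual)) == np.sign(np.diff(predicted)))
    return {"MAE": mae, "RMSE": rmse, "DS": ds}

# Toy series of made-up exchange rates and model forecasts
actual = [0.70, 0.71, 0.69, 0.72, 0.73]
predicted = [0.70, 0.70, 0.70, 0.71, 0.74]
metrics = forecast_metrics(actual, predicted)
```

Directional symmetry matters for trading profitability: a forecast with small errors but the wrong sign of movement still loses money.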


2005 ◽  
Vol 11 (3) ◽  
pp. 301-328 ◽  
Author(s):  
Sen Cheong Kon ◽  
Lindsay W. Turner

In times of tourism uncertainty, practitioners need short-term forecasting methods. This study compares the forecasting accuracy of the basic structural method (BSM) and the neural network method to find the best structure for neural network models. Data for arrivals to Singapore are used to test the analysis while the naïve and Holt-Winters methods are used for base comparison of simpler models. The results confirm that the BSM remains a highly accurate method and that correctly structured neural models can outperform BSM and the simpler methods in the short term, and can also use short data series. These findings make neural methods significant candidates for future research.
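The naïve baseline used for comparison is simple to state: each forecast equals the previous observation, scored here with MAPE. The arrival numbers below are made up for illustration; this is a minimal sketch of the baseline only, not the study's BSM or neural models:

```python
import numpy as np

def naive_forecast(series):
    """Naive method: each forecast equals the previous observation."""
    return np.asarray(series, float)[:-1]

def mape(actual, forecast):
    """Mean absolute percentage error, a standard accuracy measure."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

# Made-up monthly arrival counts (thousands)
arrivals = [510, 530, 525, 560, 555, 580]
preds = naive_forecast(arrivals)     # forecasts for periods 2..6
error = mape(arrivals[1:], preds)    # MAPE in percent
```

Any candidate model, neural or structural, has to beat this baseline's MAPE to justify its added complexity.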


Author(s):  
B. Sureshkumar ◽  
V. Vijayan ◽  
S. Dinesh ◽  
K. Rajaguru

Milling is one of the important manufacturing processes in the production industry. The study and analysis of milling process parameters such as spindle speed, feed rate, and depth of cut are important for process planning engineers. The responses are temperature, surface roughness, machining time, feed force, thrust force, and cutting force. The main aim of this study is to find the effects of these parameters in the face milling of Monel K400 workpiece material with a tungsten carbide insert. The theoretical investigation is carried out with neural network modelling, and neural network models with a 3-1-6 structure are considered. The developed neural network models show close agreement with experimental values. The results of these experiments will be useful for future research on the same type of operation.
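One reading of the "3-1-6 structure" is a network mapping 3 inputs (spindle speed, feed rate, depth of cut) through one hidden layer to 6 outputs (the six responses). The sketch below shows such a forward pass with random placeholder weights and an assumed hidden-layer width, purely to illustrate the shape of the mapping, not the fitted model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed interpretation: 3 inputs, one hidden layer, 6 outputs
# (temperature, roughness, machining time, feed/thrust/cutting forces).
# Weights are random placeholders, not values trained on milling data.
n_in, n_hidden, n_out = 3, 8, 6
W1, b1 = rng.normal(size=(n_in, n_hidden)), np.zeros(n_hidden)
W2, b2 = rng.normal(size=(n_hidden, n_out)), np.zeros(n_out)

def forward(x):
    h = np.tanh(x @ W1 + b1)   # hidden layer with tanh activation
    return h @ W2 + b2         # linear output layer for the 6 responses

x = np.array([1200.0, 0.1, 0.5])  # speed (rpm), feed (mm/rev), depth (mm)
y = forward(x)                    # one predicted value per response
```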


2018 ◽  
Author(s):  
Simen Tennøe ◽  
Geir Halnes ◽  
Gaute T. Einevoll

Abstract
Computational models in neuroscience typically contain many parameters that are poorly constrained by experimental data. Uncertainty quantification and sensitivity analysis provide rigorous procedures to quantify how the model output depends on this parameter uncertainty. Unfortunately, the application of such methods is not yet standard within the field of neuroscience.

Here we present Uncertainpy, an open-source Python toolbox, tailored to perform uncertainty quantification and sensitivity analysis of neuroscience models. Uncertainpy aims to make it easy and quick to get started with uncertainty analysis, without any need for detailed prior knowledge. The toolbox allows uncertainty quantification and sensitivity analysis to be performed on already existing models without needing to modify the model equations or model implementation. Uncertainpy bases its analysis on polynomial chaos expansions, which are more efficient than the more standard Monte-Carlo based approaches.

Uncertainpy is tailored for neuroscience applications by its built-in capability for calculating characteristic features in the model output. The toolbox does not merely perform a point-to-point comparison of the “raw” model output (e.g. membrane voltage traces), but can also calculate the uncertainty and sensitivity of salient model response features such as spike timing, action potential width, mean interspike interval, and other features relevant for various neural and neural network models. Uncertainpy comes with several common models and features built in, and including custom models and new features is easy.

The aim of the current paper is to present Uncertainpy for the neuroscience community in a user-oriented manner. To demonstrate its broad applicability, we perform an uncertainty quantification and sensitivity analysis on three case studies relevant for neuroscience: the original Hodgkin-Huxley point-neuron model for action potential generation, a multi-compartmental model of a thalamic interneuron implemented in the NEURON simulator, and a sparsely connected recurrent network model implemented in the NEST simulator.

Significance Statement
A major challenge in computational neuroscience is to specify the often large number of parameters that define the neuron and neural network models. Many of these parameters have an inherent variability, and some may even be actively regulated and change with time. It is important to know how the uncertainty in model parameters affects the model predictions. To address this need we here present Uncertainpy, an open-source Python toolbox tailored to perform uncertainty quantification and sensitivity analysis of neuroscience models.
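To illustrate the kind of analysis Uncertainpy automates, the hand-rolled sketch below propagates parameter uncertainty through a toy membrane time-constant model with plain Monte Carlo sampling. Uncertainpy itself uses the more efficient polynomial chaos expansions, and its actual API is not shown here; the model, parameter ranges, and crude sensitivity proxy are all illustrative assumptions:

```python
import numpy as np

# Toy "model": membrane time constant tau = R * C with uncertain parameters.
rng = np.random.default_rng(42)
n_samples = 10_000
R = rng.uniform(80e6, 120e6, n_samples)     # membrane resistance (ohm)
C = rng.uniform(0.8e-9, 1.2e-9, n_samples)  # membrane capacitance (F)

tau = R * C                      # model output (seconds)
mean, std = tau.mean(), tau.std()  # uncertainty of the output

# Crude first-order sensitivity proxy: squared correlation of each
# parameter with the output (a stand-in for proper Sobol indices).
s_R = np.corrcoef(R, tau)[0, 1] ** 2
s_C = np.corrcoef(C, tau)[0, 1] ** 2
```

With symmetric ±20% ranges on both parameters, R and C explain roughly equal shares of the output variance, which is the kind of conclusion a sensitivity analysis delivers.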


2020 ◽  
Author(s):  
Dat Duong ◽  
Lisa Gai ◽  
Ankith Uppunda ◽  
Don Le ◽  
Eleazar Eskin ◽  
...  

Abstract
Predicting functions for novel amino acid sequences is a long-standing research problem. The UniProt database, which contains protein sequences annotated with Gene Ontology (GO) terms, is one commonly used training dataset for this problem. Predicting protein functions can then be viewed as a multi-label classification problem where the input is an amino acid sequence and the output is a set of GO terms. Recently, deep convolutional neural network (CNN) models have been introduced to annotate GO terms for protein sequences. However, the CNN architecture can only model close-range interactions between amino acids in a sequence. In this paper, first, we build a novel GO annotation model based on the Transformer neural network. Unlike the CNN architecture, the Transformer models all pairwise interactions among the amino acids within a sequence, and so can capture more relevant information from the sequences. Indeed, we show that our adaptation of the Transformer yields higher classification accuracy than the recent CNN-based method DeepGO. Second, we modify our model to take motifs in the protein sequences found by BLAST as additional input features. Our strategy is different from other ensemble approaches that average the outcomes of BLAST-based and machine learning predictors. Third, we integrate into our Transformer metadata about the protein sequences such as 3D structure and protein-protein interaction (PPI) data. We show that such information can greatly improve the prediction accuracy, especially for rare GO labels.
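Framing GO annotation as multi-label classification means each GO label gets an independent sigmoid decision trained with binary cross-entropy. The minimal sketch below shows that output head on made-up logits; it is the generic multi-label formulation, not the paper's exact Transformer head:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def multilabel_bce(logits, targets):
    """Binary cross-entropy averaged over independent GO labels."""
    p = sigmoid(logits)
    eps = 1e-12  # guard against log(0)
    return -np.mean(targets * np.log(p + eps)
                    + (1 - targets) * np.log(1 - p + eps))

# Hypothetical logits for one sequence over 5 GO labels, and its true labels
logits = np.array([2.1, -1.3, 0.4, -3.0, 1.7])
targets = np.array([1.0, 0.0, 1.0, 0.0, 1.0])

loss = multilabel_bce(logits, targets)
predictions = (sigmoid(logits) > 0.5).astype(int)  # one decision per label
```

Unlike softmax classification, the labels are not mutually exclusive: a protein can (and usually does) carry many GO terms at once.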


2011 ◽  
Vol 42 (3) ◽  
pp. 533-543 ◽  
Author(s):  
N. Fani ◽  
E. B. Tone ◽  
J. Phifer ◽  
S. D. Norrholm ◽  
B. Bradley ◽  
...  

Background
Post-traumatic stress disorder (PTSD) develops in a minority of traumatized individuals. Attention biases to threat and abnormalities in fear learning and extinction are processes likely to play a critical role in the creation and/or maintenance of PTSD symptomatology. However, the relationship between these processes has not been established, particularly in highly traumatized populations; understanding their interaction can help inform neural network models and treatments for PTSD.

Method
Attention biases were measured using a dot probe task modified for use with our population; task stimuli included photographs of angry facial expressions, which are emotionally salient threat signals. A fear-potentiated startle paradigm was employed to measure atypical physiological response during acquisition and extinction phases of fear learning. These measures were administered to a sample of 64 minority (largely African American), highly traumatized individuals with and without PTSD.

Results
Participants with PTSD demonstrated attention biases toward threat; this attentional style was associated with exaggerated startle response during fear learning and early and middle phases of extinction, even after accounting for the effects of trauma exposure.

Conclusions
Our findings indicate that an attentional bias toward threat is associated with abnormalities in ‘fear load’ in PTSD, providing seminal evidence for an interaction between these two processes. Future research combining these behavioral and psychophysiological techniques with neuroimaging will be useful toward addressing how one process may modulate the other and understanding whether these phenomena are manifestations of dysfunction within a shared neural network. Ultimately, this may serve to inform PTSD treatments specifically designed to correct these atypical processes.


Entropy ◽  
2021 ◽  
Vol 23 (8) ◽  
pp. 1042
Author(s):  
Lan Huang ◽  
Jia Zeng ◽  
Shiqi Sun ◽  
Wencong Wang ◽  
Yan Wang ◽  
...  

Deep neural networks achieve excellent performance in many research fields, but many deep neural network models are over-parameterized: the computation of weight matrices often consumes a lot of time and requires plenty of computing resources. In order to solve these problems, a novel block-based division method and a special coarse-grained block pruning strategy are proposed in this paper to simplify and compress the fully connected structure, and the pruned weight matrices with a blocky structure are then stored in the Block Sparse Row (BSR) format to accelerate the calculation of the weight matrices. First, the weight matrices are divided into square sub-blocks based on spatial aggregation. Second, a coarse-grained block pruning procedure is utilized to scale down the model parameters. Finally, the BSR storage format, which is much more friendly to block sparse matrix storage and computation, is employed to store these pruned dense weight blocks to speed up the calculation. In experiments on the MNIST and Fashion-MNIST datasets, the trend of accuracy under different pruning granularities and sparsity levels is explored in order to analyze our method. The experimental results show that our coarse-grained block pruning method can compress the network and reduce the computational cost without greatly degrading the classification accuracy. An experiment on the CIFAR-10 dataset shows that our block pruning strategy combines well with convolutional networks.
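Coarse-grained block pruning as described above can be sketched in a few lines: divide the weight matrix into square sub-blocks, rank the blocks by magnitude, and zero out the weakest fraction. The L2-norm ranking criterion below is an illustrative assumption, not necessarily the paper's exact saliency measure; the surviving blocks could then be handed to a BSR container such as SciPy's `bsr_matrix`:

```python
import numpy as np

def block_prune(W, block, sparsity):
    """Zero out the fraction `sparsity` of square sub-blocks with the
    smallest L2 norms (coarse-grained block pruning)."""
    rows, cols = W.shape
    assert rows % block == 0 and cols % block == 0
    # View the matrix as a grid of (block x block) sub-blocks
    grid = W.reshape(rows // block, block, cols // block, block)
    norms = np.sqrt((grid ** 2).sum(axis=(1, 3)))     # per-block L2 norm
    k = int(sparsity * norms.size)                    # blocks to remove
    threshold = np.sort(norms, axis=None)[k - 1] if k > 0 else -np.inf
    mask = (norms > threshold)[:, None, :, None]      # keep strong blocks
    return (grid * mask).reshape(rows, cols)

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))
W_pruned = block_prune(W, block=2, sparsity=0.5)  # half the blocks zeroed
```

Because whole blocks (rather than scattered individual weights) become zero, the result maps naturally onto block-sparse storage and block-level matrix kernels.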


2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Wei Gu ◽  
Ching-Chun Chang ◽  
Yu Bai ◽  
Yunyuan Fan ◽  
Liang Tao ◽  
...  

With the great achievements of deep learning technology, neural network models have emerged as a new type of intellectual property. The design and training of neural network models require considerable computational resources and time. Watermarking is a potential solution for achieving copyright protection and integrity of neural network models without excessively compromising the models’ accuracy and stability. In this work, we develop a multipurpose watermarking method for securing the copyright and integrity of a steganographic autoencoder referred to as “HiDDeN.” This autoencoder model is used to hide different kinds of watermark messages in digital images. Copyright information is embedded with imperceptibly modified model parameters, and integrity is verified by embedding the hash value generated from the model parameters. Experimental results show that the proposed multipurpose watermarking method can reliably identify copyright ownership and localize tampered parts of the model parameters. Furthermore, the accuracy and robustness of the autoencoder model are perfectly preserved.
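The integrity half of the scheme rests on a digest computed from the model parameters. The sketch below shows such a fingerprint with SHA-256 over hypothetical weight arrays; note that the paper goes further and embeds the hash into the model itself, which is not reproduced here:

```python
import hashlib
import numpy as np

def parameter_hash(params):
    """SHA-256 digest over a list of parameter arrays, usable as an
    integrity fingerprint for a model."""
    h = hashlib.sha256()
    for p in params:
        h.update(np.ascontiguousarray(p).tobytes())
    return h.hexdigest()

rng = np.random.default_rng(1)
weights = [rng.normal(size=(4, 4)), rng.normal(size=(4,))]  # toy parameters

fingerprint = parameter_hash(weights)

# Tampering with any single parameter changes the digest
tampered = [w.copy() for w in weights]
tampered[0][0, 0] += 1e-6
tampered_fingerprint = parameter_hash(tampered)
```

A whole-model digest like this detects tampering but cannot localize it; per-block digests (one per parameter group) would be one way to recover the localization the paper reports.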


2021 ◽  
Vol 23 ◽  
pp. 484-492
Author(s):  
Vasyl Kalinchyk ◽  
Olexandr Meita ◽  
Vitalii Pobigaylo ◽  
Vitalii Kalinchyk ◽  
Danylo Filyanin

This research paper investigates the application of neural network models for forecasting in the energy sector. Results are given for forecasting an enterprise’s weekly energy consumption with a multilayer perceptron model, using different numbers of neurons and different training algorithms. The models are estimated and compared as a function of their parameters.


2020 ◽  
Vol 5 ◽  
pp. 140-147 ◽  
Author(s):  
T.N. Aleksandrova ◽  
E.K. Ushakov ◽  
A.V. Orlova ◽  
...  
