NeuRank: learning to rank with neural networks for drug–target interaction prediction

2021 ◽  
Vol 22 (1) ◽  
Author(s):  
Xiujin Wu ◽  
Wenhua Zeng ◽  
Fan Lin ◽  
Xiuze Zhou

Abstract Background: Experimental verification in the drug discovery process is expensive and time-consuming; therefore, the demand for more efficient and effective identification of drug–target interactions (DTIs) has recently intensified. Results: We treat the prediction of DTIs as a ranking problem and propose a neural network architecture, NeuRank, to address it. We also assume that similar drug compounds are likely to interact with similar target proteins; accordingly, our model incorporates drug and target similarities, which are very effective at improving DTI prediction. We then extend NeuRank from a point-wise model to a pair-wise and, further, a list-wise model. Conclusion: Results from extensive experiments on five public data sets (DrugBank, Enzymes, Ion Channels, G-Protein-Coupled Receptors, and Nuclear Receptors) show that, in identifying DTIs, our models achieve better performance than other state-of-the-art methods.
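To make the point-wise and pair-wise ranking formulations concrete, here is a minimal sketch of how such a model can be set up. It is not the authors' NeuRank implementation: PyTorch, the embedding size, the MLP shape, and the omission of the drug and target similarity terms are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class PointwiseDTIRanker(nn.Module):
    """Generic point-wise ranker for drug-target pairs (illustrative only):
    drug and target indices are embedded, concatenated, and scored by an MLP."""
    def __init__(self, n_drugs, n_targets, dim=64):
        super().__init__()
        self.drug_emb = nn.Embedding(n_drugs, dim)
        self.target_emb = nn.Embedding(n_targets, dim)
        self.mlp = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, drug_idx, target_idx):
        x = torch.cat([self.drug_emb(drug_idx), self.target_emb(target_idx)], dim=-1)
        return torch.sigmoid(self.mlp(x)).squeeze(-1)

def pairwise_loss(model, drug, pos_target, neg_target):
    """Pair-wise objective: an observed pair should score above an unobserved one."""
    pos, neg = model(drug, pos_target), model(drug, neg_target)
    return -torch.log(torch.sigmoid(pos - neg) + 1e-8).mean()
```

A list-wise variant would replace the pair comparison with a loss over a whole ranked list of candidate targets per drug, for example a softmax cross-entropy over their scores.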

2020 ◽  
Author(s):  
Ming Chen ◽  
Xiuze Zhou

Abstract Background: Because it is laborious and expensive to experimentally identify Drug-Target Interactions (DTIs), only a few DTIs have been verified. Computational methods are therefore useful for identifying DTIs in biological studies of drug discovery and development. Results: For DTI prediction, we propose a novel neural network architecture, DAEi, extended from the Denoising AutoEncoder (DAE). We assume that a set of verified DTIs is a corrupted version of the full interaction set and use DAEi to learn latent features from the corrupted DTIs to reconstruct the full input. To better predict DTIs, we also incorporate similarity information into DAEi and adopt a new nonlinear method for its calculation; similarity information is very effective at improving DTI prediction. Conclusion: Results of the extensive experiments we conducted on four real data sets show that our proposed methods are superior to other baseline approaches. Availability: All code in this paper is open source, and our projects are available at: https://github.com/XiuzeZhou/DAEi.
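The core idea, treating the observed interaction row as a corrupted view of the full row, can be sketched as follows. This is not the DAEi code from the linked repository; the layer sizes, the dropout-style corruption, and the use of PyTorch are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class InteractionDAE(nn.Module):
    """Illustrative denoising autoencoder over a drug's interaction vector:
    the observed (sparse) row is treated as a corrupted view of the full row."""
    def __init__(self, n_targets, hidden=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_targets, hidden), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(hidden, n_targets), nn.Sigmoid())

    def forward(self, x, corruption=0.2):
        # Random masking of known interactions plays the role of the corruption step.
        corrupted = nn.functional.dropout(x, p=corruption, training=self.training)
        return self.decoder(self.encoder(corrupted))

# Toy usage: reconstruct full binary interaction rows from their corrupted versions.
model = InteractionDAE(n_targets=500)
rows = torch.randint(0, 2, (32, 500)).float()
loss = nn.functional.binary_cross_entropy(model(rows), rows)
```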


2020 ◽  
Vol 34 (04) ◽  
pp. 6331-6339
Author(s):  
Zhuo Wang ◽  
Wei Zhang ◽  
Ning Liu ◽  
Jianyong Wang

Models with a transparent inner structure and high classification performance are required to reduce potential risk and provide trust for users in domains such as health care, finance, and security. However, existing models struggle to satisfy both properties simultaneously. In this paper, we propose a new hierarchical rule-based model for classification tasks, named Concept Rule Sets (CRS), which has both strong expressive ability and a transparent inner structure. To address the challenge of efficiently learning the non-differentiable CRS model, we propose a novel neural network architecture, the Multilayer Logical Perceptron (MLLP), which is a continuous version of CRS. Using MLLP and our proposed Random Binarization (RB) method, we can search for the discrete CRS solution in continuous space using gradient descent and ensure that the discrete CRS behaves almost the same as the corresponding continuous MLLP. Experiments on 12 public data sets show that CRS outperforms state-of-the-art approaches and that the complexity of the learned CRS is close to that of a simple decision tree.
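A common way to make rule-like conjunctions differentiable, in the spirit of (but not identical to) the MLLP layers and the RB method described above, is sketched below; the exact activation and binarization schedule used in the paper may differ, and PyTorch is an assumed framework choice.

```python
import torch
import torch.nn as nn

class SoftConjunction(nn.Module):
    """One differentiable 'AND' layer: for binary weights w and binary inputs x,
    each output node computes prod_i (1 - w_i * (1 - x_i)), i.e. the conjunction
    of the inputs it selects. Continuous weights give a relaxed version."""
    def __init__(self, n_in, n_out):
        super().__init__()
        self.weight = nn.Parameter(torch.rand(n_out, n_in))

    def forward(self, x):                       # x: (batch, n_in) in [0, 1]
        w = torch.sigmoid(self.weight)          # keep weights in [0, 1]
        return torch.prod(1 - w * (1 - x.unsqueeze(1)), dim=-1)   # (batch, n_out)

def random_binarize(w, p=0.5):
    """Randomly snap a fraction of weights to {0, 1}, illustrating the idea of
    keeping the continuous network close to the discrete rule set it will become."""
    mask = torch.rand_like(w) < p
    return torch.where(mask, (w > 0.5).float(), w)
```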


2019 ◽  
Vol 53 (1) ◽  
pp. 2-19 ◽  
Author(s):  
Erion Çano ◽  
Maurizio Morisio

Purpose: The impressive results of convolutional neural networks in image-related tasks have attracted the attention of researchers in text mining, sentiment analysis, and other text analysis fields. It is, however, difficult to find enough data for feeding such networks, optimize their parameters, and make the right design choices when constructing network architectures. The purpose of this paper is to present the creation steps of two big data sets of song emotions. The authors also explore the use of convolutional and max-pooling neural layers on song lyrics and on product and movie review text data sets. Three variants of a simple and flexible neural network architecture are also compared. Design/methodology/approach: The intention was to spot any important patterns that can serve as guidelines for parameter optimization of similar models. The authors also wanted to identify architecture design choices that lead to high-performing sentiment analysis models. To this end, the authors conducted a series of experiments with neural architectures of various configurations. Findings: The results indicate that parallel convolutions of filter lengths up to 3 are usually enough for capturing relevant text features. Also, the max-pooling region size should be adapted to the length of the text documents to produce the best feature maps. Originality/value: The top results the authors obtained came from feature maps of lengths 6–18. A possible improvement for future sentiment analysis models would be to generate sentiment polarity predictions of documents by aggregating predictions on smaller excerpts of the entire text.
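The parallel-convolution plus max-pooling layout discussed in the findings can be illustrated as follows; the embedding size, filter counts, pooled feature-map length, and PyTorch itself are illustrative assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class ParallelConvText(nn.Module):
    """Parallel 1-D convolutions with small filter lengths (here 1, 2, 3)
    followed by max-pooling to a short feature map, then a linear classifier."""
    def __init__(self, vocab_size, emb_dim=100, n_filters=64,
                 filter_sizes=(1, 2, 3), pool_size=6, n_classes=2):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.convs = nn.ModuleList(
            nn.Conv1d(emb_dim, n_filters, k, padding=k // 2) for k in filter_sizes)
        self.pool = nn.AdaptiveMaxPool1d(pool_size)   # feature maps of a fixed short length
        self.fc = nn.Linear(n_filters * len(filter_sizes) * pool_size, n_classes)

    def forward(self, tokens):                        # tokens: (batch, seq_len)
        x = self.emb(tokens).transpose(1, 2)          # (batch, emb_dim, seq_len)
        feats = [self.pool(torch.relu(conv(x))) for conv in self.convs]
        return self.fc(torch.cat(feats, dim=1).flatten(1))
```

Here an adaptive pooling layer fixes the pooled feature-map length, so the effective pooling region grows with document length, which is one way to realize the observation that the pooling region should be adapted to document length.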


2016 ◽  
Vol 32 (12) ◽  
pp. i18-i27 ◽  
Author(s):  
Qingjun Yuan ◽  
Junning Gao ◽  
Dongliang Wu ◽  
Shihua Zhang ◽  
Hiroshi Mamitsuka ◽  
...  

2021 ◽  
Vol 2021 ◽  
pp. 1-16
Author(s):  
Yihua Ye ◽  
Yuqi Wen ◽  
Zhongnan Zhang ◽  
Song He ◽  
Xiaochen Bo

The prediction of drug-target interactions (DTIs) is a key step in drug repositioning. In recent years, many studies have tried to use matrix factorization to predict DTIs, but they use only the known DTIs and ignore the features of drug and target expression profiles, resulting in limited prediction performance. In this study, we propose a new DTI prediction model named AdvB-DTI. Within this model, the features of drug and target expression profiles are associated with Adversarial Bayesian Personalized Ranking through matrix factorization. First, according to the known drug-target relationships, a set of ternary partial-order relationships is generated. Next, these partial-order relationships are used to train the latent factor matrices of drugs and targets with the Adversarial Bayesian Personalized Ranking method, and the matrix factorization is improved by the features of drug and target expression profiles. Finally, the scores of drug-target pairs are obtained from the inner product of their latent factors, and DTI prediction is performed based on the score ranking. The proposed model effectively exploits the idea of learning to rank to overcome the problem of data sparsity, and perturbation factors are introduced to make the model more robust. Experimental results show that our model achieves better DTI prediction performance.
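The ranking objective at the heart of the method can be written down compactly. The sketch below shows a plain Bayesian Personalized Ranking loss over latent factors, with an optional perturbation term standing in for the adversarial part; the expression-profile features and the exact adversarial training procedure from the paper are omitted, and all shapes are illustrative.

```python
import torch

def bpr_loss(U, V, drug, pos, neg, eps=None):
    """BPR loss: score = inner product of latent factors; an observed
    (drug, pos) pair should outrank an unobserved (drug, neg) pair.
    `eps`, if given, is a pair of perturbation tensors added to the factors."""
    u, vp, vn = U[drug], V[pos], V[neg]
    if eps is not None:
        u, vp, vn = u + eps[0][drug], vp + eps[1][pos], vn + eps[1][neg]
    margin = (u * vp).sum(-1) - (u * vn).sum(-1)
    return -torch.log(torch.sigmoid(margin) + 1e-8).mean()

# Toy usage with random factors and sampled triples (drug, positive, negative).
U = torch.randn(100, 32, requires_grad=True)   # drug latent factors
V = torch.randn(500, 32, requires_grad=True)   # target latent factors
drug = torch.randint(0, 100, (64,))
pos, neg = torch.randint(0, 500, (64,)), torch.randint(0, 500, (64,))
bpr_loss(U, V, drug, pos, neg).backward()
```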


2020 ◽  
Vol 21 (S8) ◽  
Author(s):  
Domenico Amato ◽  
Giosue’ Lo Bosco ◽  
Riccardo Rizzo

Abstract Background: Nucleosomes wrap DNA inside the nucleus of the eukaryotic cell and regulate its transcription phase. Several studies indicate that nucleosome positioning is determined by the combined effects of several factors, including DNA sequence organization. Interestingly, the identification of nucleosomes on a genomic scale has been successfully performed by computational methods using the DNA sequence as input data. Results: In this work, we propose CORENup, a deep learning model for nucleosome identification. CORENup takes a DNA sequence as input, using a one-hot representation, and combines in parallel a fully convolutional neural network and a recurrent layer. These two parallel branches are devoted to capturing both non-periodic and periodic DNA string features, and a dense layer combines them to give the final classification. Conclusions: Results computed on public data sets of different organisms show that CORENup is a state-of-the-art methodology for nucleosome positioning identification based on a deep neural network architecture. The comparisons were carried out using two groups of data sets currently adopted by the best-performing methods, and CORENup showed top performance both in terms of classification metrics and elapsed computation time.
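A minimal version of the parallel convolutional-plus-recurrent layout described above might look like the following; it is a sketch rather than the published CORENup model, and the kernel size, hidden size, use of an LSTM, and PyTorch framework are assumptions.

```python
import torch
import torch.nn as nn

class ParallelConvRNN(nn.Module):
    """One-hot DNA window fed to a convolutional branch and a recurrent branch
    in parallel; their outputs are concatenated and classified by a dense layer."""
    def __init__(self, n_filters=32, hidden=32):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(4, n_filters, kernel_size=9, padding=4), nn.ReLU(),
            nn.AdaptiveMaxPool1d(1))
        self.rnn = nn.LSTM(4, hidden, batch_first=True)
        self.fc = nn.Linear(n_filters + hidden, 1)

    def forward(self, x):                                      # x: (batch, seq_len, 4) one-hot
        conv_feat = self.conv(x.transpose(1, 2)).squeeze(-1)   # local, non-periodic features
        _, (h, _) = self.rnn(x)                                # sequential, periodic features
        combined = torch.cat([conv_feat, h[0]], dim=1)
        return torch.sigmoid(self.fc(combined)).squeeze(-1)    # nucleosome vs. linker score
```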


2019 ◽  
Vol 7 ◽  
pp. 421-436 ◽  
Author(s):  
Ion Madrazo Azpiazu ◽  
Maria Soledad Pera

We present a multiattentive recurrent neural network architecture for automatic multilingual readability assessment. This architecture considers raw words as its main input, but internally captures text structure and informs its word attention process using other syntax- and morphology-related datapoints, known to be of great importance to readability. This is achieved by a multiattentive strategy that allows the neural network to focus on specific parts of a text for predicting its reading level. We conducted an exhaustive evaluation using data sets targeting multiple languages and prediction task types, to compare the proposed model with traditional, state-of-the-art, and other neural network strategies.
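The word-attention step can be pictured with a small pooling module like the one below; it is a generic single-head attention pool written in PyTorch, not the paper's multiattentive architecture, and it omits the syntax- and morphology-related signals that inform attention in the original model.

```python
import torch
import torch.nn as nn

class WordAttentionPool(nn.Module):
    """Score each word state, softmax the scores into attention weights,
    and return the weighted sum as a document representation."""
    def __init__(self, hidden=128):
        super().__init__()
        self.score = nn.Linear(hidden, 1)

    def forward(self, word_states):                  # (batch, seq_len, hidden)
        weights = torch.softmax(self.score(word_states), dim=1)
        return (weights * word_states).sum(dim=1)    # (batch, hidden)
```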


Author(s):  
MICHAEL J. WATTS

A method for extracting Zadeh–Mamdani fuzzy rules from a minimalist constructive neural network model is described. The network contains no embedded fuzzy logic elements, and the rule extraction algorithm requires no modification of the neural network architecture. No modification of the network learning algorithm is required, nor is it necessary to retain any training examples. The algorithm is illustrated on two well-known benchmark data sets and compared with a relevant existing rule extraction algorithm.


2021 ◽  
Vol 2 (2) ◽  
Author(s):  
Seow Wen Jun ◽  
Arif Ahmed Sekh ◽  
Chai Quek ◽  
Dilip K. Prasad

Abstract There is a growing interest in the automatic crafting of neural network architectures, as opposed to expert tuning, to find the best architecture. At the same time, stock trading is considered one of the most dynamic systems, one that depends heavily on the complex trends of the individual company. This paper proposes a novel self-evolving neural network system called the self-evolving Multi-Layer Perceptron (seMLP), which can abstract the data and produce an optimum neural network architecture without expert tuning. seMLP incorporates the human cognitive ability of concept abstraction into the architecture of the neural network. A genetic algorithm (GA) is used to determine the best neural network architecture that is capable of knowledge abstraction of the data. After determining the architecture of the neural network with the minimum width, seMLP prunes the network to remove redundant neurons, thus decreasing the density of the network and achieving conciseness. seMLP is evaluated on three stock market data sets. The optimized models obtained from seMLP are compared and benchmarked against state-of-the-art methods. The results show that seMLP can automatically choose the best-performing models.
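The division of labour, a genetic search over architectures followed by pruning of the chosen network, can be illustrated with a deliberately small sketch. It is not the seMLP algorithm: the scikit-learn MLP, the fitness function (cross-validated accuracy), the mutation scheme, and all sizes are assumptions made for illustration, and the pruning step is left out.

```python
import random
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

def evolve_widths(X, y, generations=5, pop_size=8, max_width=64):
    """Toy genetic search over the hidden-layer widths of an MLP classifier."""
    pop = [tuple(random.randint(2, max_width) for _ in range(random.randint(1, 3)))
           for _ in range(pop_size)]
    for _ in range(generations):
        scores = [cross_val_score(MLPClassifier(hidden_layer_sizes=h, max_iter=300),
                                  X, y, cv=3).mean() for h in pop]
        ranked = [h for _, h in sorted(zip(scores, pop), reverse=True)]
        parents = ranked[: pop_size // 2]
        # refill the population by mutating the surviving half
        pop = parents + [tuple(max(2, w + random.randint(-8, 8))
                               for w in random.choice(parents))
                         for _ in range(pop_size - len(parents))]
    return ranked[0]

# Toy usage on a small public data set (slow but runnable).
from sklearn.datasets import load_iris
X, y = load_iris(return_X_y=True)
print(evolve_widths(X, y, generations=2, pop_size=4))
```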

