A MiRNA Target Prediction Model Based on Distributed Representation Learning and Deep Learning

Author(s):  
Yuzhuo Sun ◽  
Fei Xiong ◽  
Yongke Sun ◽  
Youjie Zhao ◽  
Yong Cao

Abstract: Background: MicroRNAs (miRNAs) are a class of non-coding RNAs that play an essential role in gene regulation by binding to messenger RNAs (mRNAs). Accurate and rapid identification of miRNA target genes helps reveal the mechanisms of transcriptome regulation, which is of great significance for the study of cancer and other diseases. Many bioinformatics methods have been proposed to solve this problem, but previous research has paid little attention to how the base sequences are encoded. Results: In this study, we developed a novel method combining word embedding and deep learning for site-level prediction of human miRNA targets, inspired by the similarity between natural language and biological sequences. First, the word2vec model was used to learn distributed representations of miRNAs and mRNAs. Then, temporal and spatial features were extracted automatically from the data by a stacked bidirectional long short-term memory (BiLSTM) network. We compared the effects of different embedding methods on model accuracy across several deep learning models, and the results show that word2vec improves the accuracy of deep learning models. In addition, we performed a visual analysis of the distributed sequence representations and found hidden similarity relationships between bases. Finally, comparisons with several advanced methods on different datasets show that our proposed method achieves better performance on multiple evaluation metrics. Conclusions: We present a novel method for predicting miRNA target sites that combines word2vec with a BiLSTM model and demonstrate that it realizes automatic feature extraction with higher accuracy. Furthermore, we treat miRNA and mRNA as two languages for the first time and explore their biological significance through visual analysis.
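As a rough illustration of the pipeline described above, the sketch below embeds overlapping k-mers of RNA sequences with gensim's word2vec and feeds the embedded sequences to a stacked BiLSTM classifier in Keras. The k-mer size, sequence length, vector size, and layer widths are illustrative assumptions, not the authors' settings.

    # Hypothetical sketch: word2vec embeddings of k-mer "words" feeding a stacked BiLSTM.
    import numpy as np
    from gensim.models import Word2Vec
    from tensorflow.keras import layers, models

    def kmers(seq, k=3):
        """Split a nucleotide sequence into overlapping k-mer 'words'."""
        return [seq[i:i + k] for i in range(len(seq) - k + 1)]

    # Toy corpus of miRNA/mRNA site sequences (real training data would be much larger).
    corpus = [kmers("AUGGCUACGUUAGC"), kmers("GCUAGCUAGGCAUC")]
    w2v = Word2Vec(sentences=corpus, vector_size=64, window=5, min_count=1, epochs=10)

    def encode(seq, max_len=30):
        """Map a sequence to a fixed-length matrix of k-mer vectors (zero-padded)."""
        vecs = [w2v.wv[km] for km in kmers(seq) if km in w2v.wv]
        vecs = vecs[:max_len] + [np.zeros(64)] * (max_len - len(vecs))
        return np.stack(vecs)

    # Stacked BiLSTM classifier over the embedded sequence.
    model = models.Sequential([
        layers.Input(shape=(30, 64)),
        layers.Bidirectional(layers.LSTM(64, return_sequences=True)),
        layers.Bidirectional(layers.LSTM(32)),
        layers.Dense(1, activation="sigmoid"),  # target site vs. non-site
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])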

2021 ◽  
Vol 11 (13) ◽  
pp. 5853
Author(s):  
Hyesook Son ◽  
Seokyeon Kim ◽  
Hanbyul Yeon ◽  
Yejin Kim ◽  
Yun Jang ◽  
...  

A deep learning model delivers different predictions depending on its input; in particular, the characteristics of the input can shape the output. When predicting data measured by sensors at multiple locations, a deep learning model must be trained on the spatiotemporal characteristics of the data. Additionally, since not all data measured together increase the accuracy of a deep learning model, the correlation between data features must be exploited. However, deep learning output is difficult to interpret in terms of input characteristics, so analyzing how the input affects the predictions is necessary for interpreting deep learning models. In this paper, we propose a visualization system for analyzing deep learning models with air pollution data. The proposed system visualizes predictions according to the input characteristics, which include space-time and data features, and we apply temporal prediction networks, including gated recurrent units (GRU) and long short-term memory (LSTM), as well as a spatiotemporal prediction network (convolutional LSTM), as the deep learning models. We interpret the output according to the input characteristics to show the effectiveness of the system.
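For reference, a minimal Keras sketch of the spatiotemporal option mentioned above (convolutional LSTM) predicting the next pollution map from a window of past sensor-grid frames; the grid size, time window, and feature set are illustrative assumptions.

    # Hypothetical sketch: a ConvLSTM network for spatiotemporal pollution forecasting.
    from tensorflow.keras import layers, models

    T, H, W, F = 12, 16, 16, 3  # 12 past steps on a 16x16 sensor grid, 3 features per cell

    model = models.Sequential([
        layers.Input(shape=(T, H, W, F)),
        layers.ConvLSTM2D(32, kernel_size=3, padding="same", return_sequences=False),
        layers.Conv2D(1, kernel_size=1),  # next-step pollution value for every grid cell
    ])
    model.compile(optimizer="adam", loss="mse")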


Author(s):  
S. Arokiaraj ◽  
Dr. N. Viswanathan

With the advent of the Internet of Things (IoT), human activity (HA) recognition has found more applications in healthcare, both in diagnosis and in clinical processes. These devices must be aware of human movements to better support clinical applications as well as users' daily activities. Aided by machine and deep learning algorithms, HA recognition systems have also improved significantly in recognition accuracy. However, most existing models still need improvement in both accuracy and computational overhead. In this research paper, we propose a BAT-optimized long short-term memory (BAT-LSTM) model for effective recognition of human activities using real-time IoT systems. The data are collected by invasively implanted IoT devices. The proposed BAT-LSTM then extracts temporal features, which are used to classify human activities. Nearly 100,000 samples were collected and used to evaluate the proposed model. The framework is validated on accuracy, precision, recall, specificity, and F1-score and compared with other state-of-the-art deep learning models. The findings show that the proposed model outperforms the other learning models and is well suited to HA recognition.
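The paper's BAT optimization is not detailed here; below is a minimal, heavily simplified sketch of a bat-style search over two LSTM hyperparameters (hidden units and learning rate), assuming Keras and illustrative bounds, window shapes, and class counts. Loudness and pulse-rate updates from the full bat algorithm are omitted.

    # Hypothetical sketch: simplified bat-style search over LSTM hyperparameters.
    import numpy as np
    import tensorflow as tf
    from tensorflow.keras import layers, models

    LOW, HIGH = np.array([16.0, 1e-4]), np.array([128.0, 1e-2])  # (units, learning rate)

    def build_lstm(units, lr, timesteps=128, n_features=9, n_classes=6):
        """LSTM classifier over fixed-length windows of IoT sensor readings."""
        m = models.Sequential([
            layers.Input(shape=(timesteps, n_features)),
            layers.LSTM(int(units)),
            layers.Dense(n_classes, activation="softmax"),
        ])
        m.compile(optimizer=tf.keras.optimizers.Adam(float(lr)),
                  loss="sparse_categorical_crossentropy", metrics=["accuracy"])
        return m

    def bat_search(fitness, n_bats=5, n_iter=10, seed=0):
        """Very simplified bat-style search: each bat drifts toward the best found."""
        rng = np.random.default_rng(seed)
        pop = rng.uniform(LOW, HIGH, size=(n_bats, 2))
        vel = np.zeros_like(pop)
        scores = np.array([fitness(x) for x in pop])
        best_i = int(scores.argmax())
        best, best_score = pop[best_i].copy(), scores[best_i]
        for _ in range(n_iter):
            for i in range(n_bats):
                freq = rng.uniform()              # random pulse frequency in [0, 1)
                vel[i] += (pop[i] - best) * freq  # pull toward the global best
                cand = np.clip(pop[i] + vel[i], LOW, HIGH)
                s = fitness(cand)
                if s > scores[i]:                 # greedy acceptance (loudness/pulse rate omitted)
                    pop[i], scores[i] = cand, s
                if s > best_score:
                    best, best_score = cand.copy(), s
        return best

    # fitness(x) would briefly train build_lstm(*x) and return validation accuracy.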


2021 ◽  
Vol 2021 ◽  
pp. 1-15
Author(s):  
Juhong Namgung ◽  
Siwoon Son ◽  
Yang-Sae Moon

In recent years, cyberattacks using command and control (C&C) servers have increased significantly. To hide their C&C servers, attackers often use a domain generation algorithm (DGA), which automatically generates domain names for the C&C servers. Accordingly, extensive research on DGA domain detection has been conducted. However, existing methods cannot accurately detect continuously generated DGA domains and can easily be evaded by an attacker. Recently, long short-term memory- (LSTM-) based deep learning models have been introduced to detect DGA domains in real time using only domain names, without feature extraction or additional information. In this paper, we propose an efficient DGA domain detection method based on bidirectional LSTM (BiLSTM), which learns bidirectional information as opposed to the unidirectional information learned by LSTM. We further maximize the detection performance with a convolutional neural network (CNN) + BiLSTM ensemble model using an attention mechanism, which allows the model to learn both local and global information in a domain sequence. Experimental results show that existing CNN and LSTM models achieved F1-scores of 0.9384 and 0.9597, respectively, while the proposed BiLSTM and ensemble models achieved higher F1-scores of 0.9618 and 0.9666, respectively. In addition, the ensemble model achieved the best performance for most DGA domain classes, enabling more accurate DGA domain detection than existing models.
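A minimal sketch of the BiLSTM component described above: a character-level binary classifier over raw domain names, with no hand-crafted features. The character vocabulary, maximum length, and layer sizes are illustrative assumptions.

    # Hypothetical sketch: character-level BiLSTM classifier for DGA domain detection.
    from tensorflow.keras import layers, models

    MAX_LEN, VOCAB = 63, 40  # max domain-name length; rough character vocabulary size

    chars = "abcdefghijklmnopqrstuvwxyz0123456789-._"
    char_to_idx = {c: i + 1 for i, c in enumerate(chars)}  # index 0 is reserved for padding

    def encode(domain):
        """Map a domain name to a fixed-length sequence of character indices."""
        ids = [char_to_idx.get(c, 0) for c in domain.lower()][:MAX_LEN]
        return ids + [0] * (MAX_LEN - len(ids))

    model = models.Sequential([
        layers.Input(shape=(MAX_LEN,)),
        layers.Embedding(VOCAB, 32, mask_zero=True),   # learn a vector per character
        layers.Bidirectional(layers.LSTM(64)),         # read the domain in both directions
        layers.Dense(1, activation="sigmoid"),         # P(domain is DGA-generated)
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])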


2021 ◽  
Vol 7 ◽  
pp. e795
Author(s):  
Pooja Vinayak Kamat ◽  
Rekha Sugandhi ◽  
Satish Kumar

Remaining Useful Life (RUL) estimation of rotating machinery based on its degradation data is vital for machine supervisors. Deep learning models are effective and popular methods for forecasting when rotating machinery such as bearings may malfunction and ultimately break down. During healthy functioning of the machinery, however, RUL is ill-defined. To address this issue, this study recommends using anomaly monitoring during both RUL estimator training and operation. Essential time-domain features are extracted from the raw bearing vibration data, and deep learning models are used to detect the onset of the anomaly, which in turn acts as a trigger for data-driven RUL estimation. The study employs an unsupervised clustering approach for anomaly trend analysis and a semi-supervised method for anomaly detection and RUL estimation. The novel combined deep learning-based anomaly-onset-aware RUL estimation framework showed enhanced results on the benchmark PRONOSTIA bearing dataset under non-varying operating conditions. The framework, consisting of autoencoder and long short-term memory variants, achieved an accuracy of over 90% in anomaly detection and RUL prediction. In the future, the framework can be deployed under varying operational conditions using a transfer learning approach.
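A minimal sketch of the anomaly-onset trigger in the spirit described above: a small autoencoder trained on healthy-run vibration features, with high reconstruction error marking the onset that would hand off to the LSTM-based RUL estimator. The feature count and threshold rule are illustrative assumptions.

    # Hypothetical sketch: autoencoder reconstruction error as the anomaly-onset trigger.
    import numpy as np
    from tensorflow.keras import layers, models

    N_FEATURES = 8  # time-domain features per vibration window (e.g., RMS, kurtosis, ...)

    autoencoder = models.Sequential([
        layers.Input(shape=(N_FEATURES,)),
        layers.Dense(4, activation="relu"),  # compress healthy-state features
        layers.Dense(N_FEATURES),            # reconstruct them
    ])
    autoencoder.compile(optimizer="adam", loss="mse")

    # Train on healthy-run windows only, then flag windows the model reconstructs poorly:
    #   autoencoder.fit(healthy, healthy, epochs=50, verbose=0)
    #   err = np.mean((autoencoder.predict(windows) - windows) ** 2, axis=1)
    #   threshold = err[:n_healthy].mean() + 3 * err[:n_healthy].std()  # a common rule of thumb
    #   onset = np.argmax(err > threshold)  # first anomalous window triggers RUL estimation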


Circulation ◽  
2020 ◽  
Vol 142 (Suppl_4) ◽  
Author(s):  
ChienYu Chi ◽  
Yen-Pin Chen ◽  
Adrian Winkler ◽  
Kuan-Chun Fu ◽  
Fie Xu ◽  
...  

Introduction: Predicting rare catastrophic events is challenging due to the lack of targets. Here we employed a multi-task learning method and demonstrated that substantial gains in accuracy and generalizability were achieved by sharing representations between related tasks. Methods: Starting from the Taiwan National Health Insurance Research Database, we selected adults (>20 years) who experienced in-hospital cardiac arrest but not out-of-hospital cardiac arrest over 8 years (2003-2010) and built a dataset using de-identified claims from Emergency Department (ED) visits and hospitalizations. The final dataset had 169,287 patients, randomly split into training (70%), validation (15%), and test (15%) sets. Two outcomes, 30-day readmission and 30-day mortality, were chosen. We constructed the deep learning system in two steps. We first used a taxonomy mapping system, Text2Node, to generate a distributed representation for each concept. We then applied a multilevel hierarchical model based on a long short-term memory (LSTM) architecture. Multi-task models used gradient similarity to prioritize the desired task over auxiliary tasks; single-task models were trained for each desired task. All models share the same architecture and are trained with the same input data. Results: Each model was optimized to maximize AUROC on the validation set, with the final metrics calculated on the held-out test set. We demonstrated that multi-task deep learning models outperform single-task deep learning models on both tasks. While readmission had roughly 30% positives and showed minuscule improvements, the mortality task saw more improvement between models. We hypothesize that this is a result of data imbalance: mortality was only roughly 5% positive, and the auxiliary tasks help the model interpret the data and generalize better. Conclusion: Multi-task deep learning models outperform single-task deep learning models in predicting 30-day readmission and mortality in in-hospital cardiac arrest patients.
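A minimal sketch of a multi-task setup in the spirit described above: one shared LSTM encoder over sequences of concept embeddings with two sigmoid heads for 30-day readmission and mortality. The shapes, widths, and plain joint loss are illustrative assumptions; the gradient-similarity task prioritization mentioned in the abstract is not implemented here.

    # Hypothetical sketch: shared LSTM encoder with two task-specific heads.
    import tensorflow as tf
    from tensorflow.keras import layers, models

    T, D = 50, 128  # up to 50 encounters, each a 128-dim concept embedding (e.g., from Text2Node)

    inputs = tf.keras.Input(shape=(T, D))
    h = layers.LSTM(64)(inputs)  # representation shared across both tasks
    readmission = layers.Dense(1, activation="sigmoid", name="readmission")(h)
    mortality = layers.Dense(1, activation="sigmoid", name="mortality")(h)

    model = models.Model(inputs, [readmission, mortality])
    model.compile(optimizer="adam",
                  loss={"readmission": "binary_crossentropy",
                        "mortality": "binary_crossentropy"},
                  metrics=["AUC"])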


Viruses ◽  
2020 ◽  
Vol 12 (7) ◽  
pp. 769 ◽  
Author(s):  
Ahmed Sedik ◽  
Abdullah M Iliyasu ◽  
Basma Abd El-Rahiem ◽  
Mohammed E. Abdel Samea ◽  
Asmaa Abdel-Raheem ◽  
...  

This generation faces existential threats because of the global assault of the novel coronavirus 2019 (i.e., COVID-19). With more than thirteen million infected and nearly 600,000 fatalities in 188 countries/regions, COVID-19 is the worst calamity since World War II. These misfortunes are traced to various causes, including late detection of latent or asymptomatic carriers, migration, and inadequate isolation of infected people. This makes detection, containment, and mitigation global priorities to contain exposure via quarantine, lockdowns, work/stay-at-home orders, and social distancing focused on "flattening the curve". While medical and healthcare givers are at the frontline in the battle against COVID-19, it is a crusade for all of humanity. Meanwhile, machine and deep learning models have been revolutionary across numerous domains and applications, and their potency has been exploited to birth numerous state-of-the-art technologies utilised in disease detection, diagnosis, and treatment. Despite this potential, machine and, particularly, deep learning models are data sensitive, because their effectiveness depends on the availability and reliability of data. The unavailability of such data hinders the efforts of engineers and computer scientists to fully contribute to the ongoing assault against COVID-19. Faced with a calamity on one side and an absence of reliable data on the other, this study presents two data-augmentation models to enhance the learnability of Convolutional Neural Network (CNN)- and Convolutional Long Short-Term Memory (ConvLSTM)-based deep learning models (DADLMs) and, by doing so, boost the accuracy of COVID-19 detection. Experimental results reveal improvements in detection accuracy, logarithmic loss, and testing time relative to DLMs devoid of such data augmentation. Furthermore, average increases of 4% to 11% in COVID-19 detection accuracy are reported in favour of the proposed data-augmented deep learning models relative to machine learning techniques. Therefore, the proposed algorithm is effective in performing rapid and consistent coronavirus diagnosis, primarily aimed at assisting clinicians in making accurate identifications of the virus.
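The paper's specific augmentation models are not reproduced here; as a generic illustration, the sketch below pairs classical image augmentation with a small CNN for binary COVID-19 image classification in Keras. The augmentation settings, image size, and network are illustrative assumptions, not the paper's DADLMs.

    # Hypothetical sketch: image augmentation feeding a small CNN classifier.
    from tensorflow.keras import layers, models
    from tensorflow.keras.preprocessing.image import ImageDataGenerator

    augment = ImageDataGenerator(rotation_range=15, zoom_range=0.1,
                                 width_shift_range=0.1, height_shift_range=0.1,
                                 horizontal_flip=True, rescale=1.0 / 255)

    cnn = models.Sequential([
        layers.Input(shape=(128, 128, 1)),      # grayscale chest image, resized
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(1, activation="sigmoid"),  # COVID vs. non-COVID
    ])
    cnn.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    # cnn.fit(augment.flow_from_directory("data/train", target_size=(128, 128),
    #                                     color_mode="grayscale", class_mode="binary"))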


2018 ◽  
Vol 7 (3.27) ◽  
pp. 258 ◽  
Author(s):  
Yecheng Yao ◽  
Jungho Yi ◽  
Shengjun Zhai ◽  
Yuwen Lin ◽  
Taekseung Kim ◽  
...  

The decentralization of cryptocurrencies has greatly reduced the level of central control over them, impacting international relations and trade. Further, wide fluctuations in cryptocurrency prices indicate an urgent need for an accurate way to forecast them. This paper proposes a novel method to predict cryptocurrency prices by considering various factors such as market cap, volume, circulating supply, and maximum supply, based on deep learning techniques such as the recurrent neural network (RNN) and long short-term memory (LSTM), which are effective learning models for training on such data, with the LSTM being better at recognizing longer-term associations. The proposed approach is implemented in Python and validated on benchmark datasets. The results verify the applicability of the proposed approach for the accurate prediction of cryptocurrency prices.
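A minimal sketch of the kind of model described above: an LSTM regressor over sliding windows of daily cryptocurrency features predicting the next day's price. The window length, feature set, and layer sizes are illustrative assumptions.

    # Hypothetical sketch: LSTM regression over sliding windows of daily features.
    import numpy as np
    from tensorflow.keras import layers, models

    WINDOW, N_FEATURES = 30, 4  # 30 past days; price, market cap, volume, circulating supply

    def make_windows(series):
        """Turn a (days, features) array into (window, next-day price) training pairs."""
        X = np.stack([series[i:i + WINDOW] for i in range(len(series) - WINDOW)])
        y = series[WINDOW:, 0]  # assume column 0 is the closing price
        return X, y

    model = models.Sequential([
        layers.Input(shape=(WINDOW, N_FEATURES)),
        layers.LSTM(64),
        layers.Dense(1),  # next-day price
    ])
    model.compile(optimizer="adam", loss="mse")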


2021 ◽  
Author(s):  
Qihang Wang ◽  
Feng Liu ◽  
Guihong Wan ◽  
Ying Chen

Abstract: Monitoring the depth of unconsciousness during anesthesia is useful both in clinical settings and in neuroscience investigations of brain mechanisms. Electroencephalography (EEG) has been used as an objective means of characterizing altered arousal and/or cognitive states induced by anesthetics in real time. Different general anesthetics affect cerebral electrical activity in different ways. However, the performance of conventional machine learning models on EEG data is unsatisfactory due to the low signal-to-noise ratio (SNR) of EEG signals, especially in the office-based anesthesia setting. Deep learning models have been used widely in the field of brain-computer interfaces (BCI) for classification and pattern recognition tasks due to their capacity for good generalization and noise handling. Compared with other BCI applications, where deep learning has demonstrated encouraging results, the deep learning approach to classifying brain consciousness states under anesthesia has been much less investigated. In this paper, we propose a new framework based on meta-learning using deep neural networks, named Anes-MetaNet, to classify brain states under anesthetics. Anes-MetaNet is composed of a convolutional neural network (CNN) to extract power spectrum features, a time-sequence model based on long short-term memory (LSTM) networks to capture temporal dependencies, and a meta-learning framework to handle large cross-subject variability. We used a multi-stage training paradigm to improve performance, which is justified by visualizing the high-level feature mapping. Experiments on an office-based anesthesia EEG dataset demonstrate the effectiveness of the proposed Anes-MetaNet in comparison with existing methods.
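A minimal sketch of the CNN-plus-LSTM backbone described above: a per-frame CNN extracts spectral features from each EEG power-spectrum frame, and an LSTM models the frame sequence. The shapes and number of consciousness states are assumptions, and the meta-learning wrapper is omitted.

    # Hypothetical sketch: per-frame CNN feature extractor followed by an LSTM.
    from tensorflow.keras import layers, models

    FRAMES, FREQ_BINS, N_STATES = 20, 64, 3

    model = models.Sequential([
        layers.Input(shape=(FRAMES, FREQ_BINS, 1)),
        layers.TimeDistributed(layers.Conv1D(16, 5, activation="relu")),  # per-frame spectral features
        layers.TimeDistributed(layers.GlobalMaxPooling1D()),
        layers.LSTM(32),                                                  # temporal dependencies across frames
        layers.Dense(N_STATES, activation="softmax"),                     # consciousness state
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])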


Author(s):  
Ali Bou Nassif ◽  
Abdollah Masoud Darya ◽  
Ashraf Elnagar

This work presents a detailed comparison of the performance of deep learning models, such as convolutional neural networks, long short-term memory, gated recurrent units, and their hybrids, with a selection of shallow learning classifiers for sentiment analysis of Arabic reviews. Additionally, the comparison includes state-of-the-art models such as the transformer architecture and the araBERT pre-trained model. The datasets used in this study are multi-dialect Arabic hotel and book review datasets, which are among the largest publicly available datasets for Arabic reviews. The results showed deep learning outperforming shallow learning for binary and multi-label classification, in contrast with the results of similar work reported in the literature. This discrepancy in outcome was caused by dataset size, which we found to be proportional to the performance of the deep learning models. The performance of the deep and shallow learning techniques was analyzed in terms of accuracy and F1-score. The best performing shallow learning technique was Random Forest, followed by Decision Tree and AdaBoost. The deep learning models performed similarly using a default embedding layer, while the transformer model performed best when augmented with araBERT.
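A minimal sketch of fine-tuning an AraBERT checkpoint for binary sentiment classification with the Hugging Face transformers library; the checkpoint name, sequence length, and training settings are assumptions rather than the paper's configuration.

    # Hypothetical sketch: fine-tuning AraBERT for Arabic review sentiment.
    from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                              Trainer, TrainingArguments)

    name = "aubmindlab/bert-base-arabertv02"  # assumed AraBERT checkpoint
    tokenizer = AutoTokenizer.from_pretrained(name)
    model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=2)

    def tokenize(batch):
        """Tokenize raw review text into fixed-length model inputs."""
        return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

    # dataset = ...  # hotel/book reviews with a binary sentiment label
    # trainer = Trainer(model=model,
    #                   args=TrainingArguments(output_dir="arabert-sentiment",
    #                                          num_train_epochs=3,
    #                                          per_device_train_batch_size=16),
    #                   train_dataset=dataset["train"].map(tokenize, batched=True))
    # trainer.train()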

