BO-LSTM: Classifying relations via long short-term memory networks along biomedical ontologies

2018 ◽  
Author(s):  
Andre Lamurias ◽  
Luka A. Clarke ◽  
Francisco M. Couto

Abstract
Recent studies have proposed deep learning techniques, namely recurrent neural networks, to improve biomedical text mining tasks. However, these techniques rarely take advantage of existing domain-specific resources, such as ontologies. In the Life and Health Sciences there is a vast and valuable set of such resources publicly available, which are continuously being updated. Biomedical ontologies are nowadays a mainstream approach to formalizing existing knowledge about entities such as genes, chemicals, phenotypes, and disorders. These resources contain supplementary information that may not yet be encoded in training data, particularly in domains with limited labeled data.
We propose a new model, BO-LSTM, that takes advantage of domain-specific ontologies by representing each entity as the sequence of its ancestors in the ontology. We implemented BO-LSTM as a recurrent neural network with long short-term memory units, using an open biomedical ontology, which in our case study was Chemical Entities of Biological Interest (ChEBI). We assessed the performance of BO-LSTM on detecting and classifying drug-drug interactions in a publicly available corpus from an international challenge, composed of 792 drug descriptions and 233 scientific abstracts. By using the domain-specific ontology in addition to word embeddings and WordNet, BO-LSTM improved the F1-score of both the detection and the classification of drug-drug interactions, particularly in a document set with a limited number of annotations. Our findings demonstrate that, despite the high performance of current deep learning techniques, domain-specific ontologies can still be useful to mitigate the lack of labeled data.
Author summary
A high quantity of biomedical information is only available in documents such as scientific articles and patents. Due to the rate at which new documents are produced, we need automatic methods to extract useful information from them.
Text mining is a subfield of information retrieval that aims at extracting relevant information from text. Scientific literature is a challenge to text mining because of the complexity and specificity of the topics it covers. In recent years, deep learning has obtained promising results in various text mining tasks by exploiting large datasets. Ontologies, on the other hand, provide a detailed and sound representation of a domain and have been developed for diverse biomedical domains. We propose a model that combines deep learning algorithms with biomedical ontologies to identify relations between concepts in text. We demonstrate the potential of this model to extract drug-drug interactions from abstracts and drug descriptions. The model can be applied to other biomedical domains by training a new classifier with an annotated corpus of documents and an ontology related to that domain.
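The core idea of BO-LSTM, representing each entity by the chain of its ancestors in an ontology, can be sketched in a few lines. The "is_a" edges below are a toy, hypothetical fragment, not real ChEBI identifiers; the resulting sequence is what an LSTM layer would consume, one ancestor per time step.

```python
# Toy ontology: child -> parent ("is_a") links. Illustrative names only.
TOY_ONTOLOGY = {
    "aspirin": "benzoic acids",
    "benzoic acids": "aromatic compound",
    "aromatic compound": "chemical entity",
}

def ancestor_sequence(entity, is_a=TOY_ONTOLOGY):
    """Walk is_a links to the root, returning the entity plus its ancestors."""
    path = [entity]
    while path[-1] in is_a:
        path.append(is_a[path[-1]])
    return path

print(ancestor_sequence("aspirin"))
# → ['aspirin', 'benzoic acids', 'aromatic compound', 'chemical entity']
```

Feeding this ancestor sequence to the network, instead of only the surface word, is how supplementary ontology knowledge reaches the model even when labeled data is scarce.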

10.6036/10007 ◽  
2021 ◽  
Vol 96 (5) ◽  
pp. 528-533
Author(s):  
XAVIER LARRIVA NOVO ◽  
MARIO VEGA BARBAS ◽  
VICTOR VILLAGRA ◽  
JULIO BERROCAL

Cybersecurity has stood out in recent years with the aim of protecting information systems. Different methods, techniques, and tools have been used to exploit the vulnerabilities present in these systems. It is therefore essential to develop and improve new technologies, such as intrusion detection systems, that allow possible threats to be detected. However, the use of these technologies requires highly qualified cybersecurity personnel to analyze the results and reduce the large number of false positives they produce. This generates the need to research and develop new high-performance cybersecurity systems that allow these results to be analyzed and resolved efficiently. This research presents the application of machine learning techniques to classify real traffic in order to identify possible attacks. The study was carried out with machine learning tools, applying deep learning algorithms such as the multilayer perceptron and long short-term memory. Additionally, this document presents a comparison between the results obtained by the aforementioned algorithms and by non-deep-learning algorithms such as random forest and decision tree. Finally, the results obtained show that the long short-term memory algorithm provides the best results in terms of precision and logarithmic loss.
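Logarithmic loss, one of the two comparison metrics named above, penalizes confident wrong predictions heavily. A minimal stand-alone implementation (the toy labels and probabilities are illustrative, not the study's traffic data):

```python
import math

def log_loss(y_true, y_prob, eps=1e-15):
    """Binary logarithmic loss: mean negative log-likelihood of the labels."""
    total = 0.0
    for y, p in zip(y_true, y_prob):
        p = min(max(p, eps), 1 - eps)  # clip to avoid log(0)
        total += y * math.log(p) + (1 - y) * math.log(1 - p)
    return -total / len(y_true)

# Confident, correct predictions score near 0; hedged ones score higher.
print(round(log_loss([1, 0, 1], [0.9, 0.1, 0.8]), 4))  # → 0.1446
```

Comparing classifiers on both precision and log loss, as done here, separates models that merely rank traffic correctly from models whose probability estimates are also well calibrated.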


2021 ◽  
Vol 4 (1) ◽  
pp. 121-128
Author(s):  
A Iorliam ◽  
S Agber ◽  
MP Dzungwe ◽  
DK Kwaghtyo ◽  
S Bum

Social media provides opportunities for individuals to communicate anonymously and express hateful feelings and opinions from the comfort of their rooms. This anonymity has become a shield for many individuals or groups who use social media to express deep hatred for other individuals or groups, tribes or races, religions, genders, and belief systems. In this study, a comparative analysis is performed using Long Short-Term Memory and Convolutional Neural Network deep learning techniques for hate speech classification. The analysis demonstrates that the Long Short-Term Memory classifier achieved an accuracy of 92.47%, while the Convolutional Neural Network classifier achieved an accuracy of 92.74%. These results show that deep learning techniques can effectively distinguish hate speech from normal speech.


Author(s):  
Thang

In this research, we propose a method for human-robot interactive intention prediction. The proposed algorithm makes use of the OpenPose library and a long short-term memory (LSTM) deep neural network. The network observes the human posture as a time series and then predicts the human's interactive intention. We train the deep neural network on a dataset we generated. The experimental results show that our proposed method is able to predict human-robot interactive intention, achieving 92% accuracy on the test set.
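The gating mechanism that lets an LSTM summarize a posture time series can be shown with a scalar (one-unit) cell. This is a minimal sketch of the standard LSTM update, not the paper's trained network; the weights are arbitrary illustrative values.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, w):
    """One step of a scalar LSTM cell. w maps each gate name to an
    (input weight, recurrent weight, bias) triple."""
    f = sigmoid(w["f"][0] * x + w["f"][1] * h_prev + w["f"][2])   # forget gate
    i = sigmoid(w["i"][0] * x + w["i"][1] * h_prev + w["i"][2])   # input gate
    g = math.tanh(w["g"][0] * x + w["g"][1] * h_prev + w["g"][2]) # candidate
    o = sigmoid(w["o"][0] * x + w["o"][1] * h_prev + w["o"][2])   # output gate
    c = f * c_prev + i * g   # new cell state: kept memory + gated new input
    h = o * math.tanh(c)     # new hidden state exposed to the next layer
    return h, c

# Feed a toy "posture feature" sequence through the cell; the final hidden
# state summarizes the whole sequence for a downstream intention classifier.
w = {k: (1.0, 0.5, 0.0) for k in "figo"}
h = c = 0.0
for x in [0.2, 0.5, 0.9]:
    h, c = lstm_step(x, h, c, w)
```

In practice each gate is a full weight matrix over a vector input (here, OpenPose keypoints), but the per-step arithmetic is exactly this.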


2021 ◽  
Author(s):  
Usha Devi G ◽  
Priyan M K ◽  
Gokulnath Chandra Babu ◽  
Gayathri Karthick

Abstract
Twitter sentiment analysis is an automated process of analyzing text data to determine the opinion or feeling expressed in public tweets from various fields. In marketing and politics, for example, huge numbers of tweets with hashtags are posted every moment, passing from one user to another over the internet. Sentiment analysis is a challenging task for researchers, mainly because correctly interpreting the context in which certain tweet words appear makes it difficult to evaluate what is truly a negative or positive statement in a huge corpus of tweet data. This problem violates the integrity of the system, and user reliability can be significantly reduced. In this paper, we identify each tweet word and assign a meaning to it. The features combine tweet words, word2vec, and stop words, and are integrated into the deep learning techniques of a Convolutional Neural Network model and Long Short-Term Memory; these algorithms can identify patterns in stop-word counts with their own strategies. The two models are trained and applied to the IMDB dataset, which contains 50,000 movie reviews, and a large amount of Twitter data is processed to predict the sentiment of tweets for classification. With the proposed methodology, samples collected from a real-time environment can be discriminated well, and the efficacy of the system is improved. The deep learning algorithms rate the review tweets and identify movie reviews with testing accuracies of 87.74% and 88.02%.
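The stop-word handling mentioned above can be illustrated with a tiny preprocessing step that filters stop words and records how many were removed, before the remaining tokens would be mapped to word2vec vectors. The stop-word list here is a small hypothetical sample, not the one used in the paper.

```python
# Illustrative stop-word list; real pipelines use a much larger set.
STOP_WORDS = {"the", "is", "a", "of", "and"}

def preprocess(tweet):
    """Lowercase, split, drop stop words; also return the stop-word count,
    since the models above use stop-word patterns as a signal."""
    tokens = tweet.lower().split()
    kept = [t for t in tokens if t not in STOP_WORDS]
    return kept, len(tokens) - len(kept)

print(preprocess("The movie is a masterpiece of suspense"))
# → (['movie', 'masterpiece', 'suspense'], 4)
```

Each kept token would then be looked up in a word2vec embedding matrix and the resulting vector sequence fed to the CNN and LSTM models.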


Author(s):  
Dr. Neeta Verma

One of the most important functions of the human visual system is automatic captioning. Caption generation is one of the more interesting and focused areas of AI, with numerous challenges to overcome. An application that automatically captions the scene a person is in and converts the caption into a clear message would benefit people in a variety of ways. Here, we offer a deep learning model that automatically detects objects or features in images, produces descriptions of the images, and transforms the descriptions into audio for louder readout. The model uses pre-trained CNN and LSTM models to extract objects or features and obtain the captions. In our model, the first task is to detect objects within the image using the pre-trained MobileNet model of a CNN (Convolutional Neural Network); the second is to caption the images based on the detected objects using an LSTM (Long Short-Term Memory) network, and then to convert the caption into speech read out loud to the person using the SpeechSynthesisUtterance interface of the Web Speech API. The interface of the model is developed using NodeJS as a backend for the web page. Caption generation entails a number of complex steps, including selecting the dataset, training the model, validating the model, creating pre-trained models to check the images, detecting the images, and finally generating captions.
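The caption-generation step of such a CNN+LSTM pipeline typically runs a greedy decoding loop: starting from a start token, the model repeatedly picks the most probable next word until an end token. The sketch below is hypothetical; the tiny scoring table stands in for the trained CNN+LSTM, which in reality conditions on the image features.

```python
# Hypothetical next-word scores; a real model computes these from the
# image features and the LSTM hidden state at each step.
NEXT_WORD_SCORES = {
    "<start>": {"a": 0.9, "the": 0.1},
    "a": {"dog": 0.8, "cat": 0.2},
    "dog": {"<end>": 1.0},
}

def greedy_caption(max_len=10):
    """Greedy decoding: always take the highest-scoring next word."""
    words, current = [], "<start>"
    for _ in range(max_len):
        scores = NEXT_WORD_SCORES.get(current, {"<end>": 1.0})
        current = max(scores, key=scores.get)
        if current == "<end>":
            break
        words.append(current)
    return " ".join(words)

print(greedy_caption())  # → "a dog"
```

The finished caption string is what would then be handed to the browser's speech synthesis for readout.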


Teknika ◽  
2021 ◽  
Vol 10 (1) ◽  
pp. 62-67
Author(s):  
Faisal Dharma Adhinata ◽  
Diovianto Putra Rakhmadani

The impact of the pandemic has affected various sectors in Indonesia, especially the economic sector, due to the large-scale social restriction policy imposed to suppress the growth of cases. The details of the growth of Covid-19 in Indonesia are still fluctuating and cannot be fully understood. Researchers have recently worked on predicting Covid-19 cases in various countries, one approach being machine learning techniques to predict daily increases in Covid-19 cases. However, the machine learning techniques used yield MSE values in the thousands; such high values indicate that predictions made with these models still have a high error rate compared with the actual data. In this study, we propose a deep learning approach using the Long Short Term Memory (LSTM) method to build a prediction model for the daily increase in Covid-19 cases. The LSTM model architecture in this study uses an LSTM layer, a Dropout layer, a Dense layer, and a linear activation function. Based on various hyperparameter experiments, using 10 neurons, a batch size of 32, and 50 epochs, the MSE value was 0.0308, the RMSE 0.1758, and the MAE 0.13. These results prove that the deep learning approach produces a far smaller error value than machine learning techniques, even closer to zero.
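The three error metrics reported above are related (RMSE is the square root of MSE, which is consistent with the reported 0.0308 and 0.1758). A minimal stand-alone computation, with illustrative toy case counts rather than the study's data:

```python
import math

def regression_errors(actual, predicted):
    """Return (MSE, RMSE, MAE) for paired series; RMSE = sqrt(MSE)."""
    n = len(actual)
    mse = sum((a - p) ** 2 for a, p in zip(actual, predicted)) / n
    mae = sum(abs(a - p) for a, p in zip(actual, predicted)) / n
    return mse, math.sqrt(mse), mae

# Toy daily-case series (illustrative numbers only).
mse, rmse, mae = regression_errors([10, 12, 15], [11, 12, 13])
```

Because MSE squares each residual, the thousands-scale MSE mentioned for earlier machine learning models corresponds to typical per-day errors in the tens, whereas the LSTM's 0.0308 corresponds to errors well under one unit (on the scaled data).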


Online media for news consumption has doubtful advantages. On the one hand, its low cost, easy access, and rapid dissemination of information lead people to seek out and consume news online. On the other hand, it enables the wide spread of "fake news", i.e., low-quality news with deliberately false information. The broad spread of fake news negatively affects individuals and society. Hence, fake news detection in social media has become an emerging research topic that is drawing attention from various researchers. In the past, many authors proposed the use of text mining procedures and machine learning strategies to examine textual data and help predict the credibility of news. With more computational capacity and the ability to handle large datasets, deep learning models offer better performance than traditional text mining and machine learning methods. In particular, a deep learning model such as an LSTM can identify complex patterns in the data. Long short-term memory is a recurrent neural network (RNN) architecture used to analyze variable-length sequential information. In our proposed framework, we build a fake news detection model based on an LSTM neural network. Openly accessible unstructured news datasets are used to evaluate the performance of the model. The outcome shows the superiority and accuracy of the LSTM model over traditional techniques, specifically a CNN, for fake news detection.


2021 ◽  
Vol 11 (4) ◽  
pp. 41-60
Author(s):  
Sangeetha Rajesh ◽  
Nalini N. J.

The proposed work investigates the impact of Mel Frequency Cepstral Coefficients (MFCC), Chroma DCT Reduced Pitch (CRP), and Chroma Energy Normalized Statistics (CENS) for instrument recognition from monophonic instrumental music clips using the deep learning techniques of Bidirectional Recurrent Neural Networks with Long Short-Term Memory (BRNN-LSTM), stacked autoencoders (SAE), and Convolutional Neural Network - Long Short-Term Memory (CNN-LSTM). Initially, MFCC, CENS, and CRP features are extracted from instrumental music clips collected as a dataset from various online libraries. The deep neural network models are then built by training them with the extracted features. Recognition rates of 94.9%, 96.8%, and 88.6% are achieved using combined MFCC and CENS features, and 90.9%, 92.2%, and 87.5% using combined MFCC and CRP features, with the deep learning models BRNN-LSTM, CNN-LSTM, and SAE, respectively. The experimental results show that combining MFCC features with CENS and CRP features at the score level improves the efficacy of the proposed system.
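Score-level fusion, as used above, combines the per-class scores produced from each feature stream before taking the final decision. A minimal sketch; the scores, class names, and equal weighting are illustrative assumptions, not values from the paper.

```python
def fuse_scores(scores_a, scores_b, weight_a=0.5):
    """Weighted score-level fusion of two classifiers' per-class scores."""
    return [weight_a * a + (1 - weight_a) * b
            for a, b in zip(scores_a, scores_b)]

# Hypothetical per-class scores over three instruments, e.g. violin/flute/piano.
mfcc_scores = [0.6, 0.3, 0.1]   # from the MFCC-based model
cens_scores = [0.5, 0.1, 0.4]   # from the CENS-based model
fused = fuse_scores(mfcc_scores, cens_scores)
predicted = fused.index(max(fused))  # index of the winning class
```

Fusing at the score level (rather than concatenating raw features) lets each feature stream keep its own specialized model, with only the final per-class evidence combined.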


Author(s):  
Claire Brenner ◽  
Jonathan Frame ◽  
Grey Nearing ◽  
Karsten Schulz

Summary
Evaporation is a decisive process in the global water, energy, and carbon cycles. Data on the spatio-temporal dynamics of evaporation are therefore of great importance for climate modeling, for estimating the impacts of the climate crisis, and not least for agriculture.
In this work, we apply two machine and deep learning methods to predict evaporation at daily and half-hourly resolution for sites of the FLUXNET dataset. The long short-term memory network is a recurrent neural network that explicitly accounts for memory effects and analyzes time series of the input variables (analogous to physically based water balance models). This is contrasted with models using XGBoost, a decision tree method that in this case receives only information for the time step to be predicted (analogous to physically based energy balance models). By comparing the two modeling approaches, we investigate to what extent accounting for memory effects benefits the modeling.
The analyses show that both modeling approaches achieve good results and exhibit higher model quality than an evaluated reference dataset. Comparing the two models, the LSTM shows, on average across all 153 investigated sites, better agreement with the observations. However, the quality of the evaporation prediction depends on the vegetation class of the site: warm, dry sites with short vegetation in particular are better represented by the LSTM, whereas in wetlands, for example, XGBoost agrees better with the observations. The relevance of memory effects therefore appears to vary between ecosystems and sites.
The presented results underline the potential of artificial intelligence methods for describing evaporation.

