Scale-dependent impacts of natural and anthropogenic drivers on groundwater level dynamics – analysis of shallow coastal aquifers using deep learning

Author(s):  
Annika Nolte ◽  
Steffen Bender ◽  
Jens Hartmann ◽  
Stefan Baltruschat

Groundwater level dynamics are very sensitive to groundwater withdrawal, but its effects and magnitude – especially in combination with natural fluctuations – often must be estimated because information on all local pumping activities in an area is missing or inaccurate. This study examines the potential of deep learning applications at large spatial scales to estimate the respective contributions of local withdrawal activities and of natural – meteorological and environmental – impacts to groundwater level dynamics. We will use big data elements from a newly constructed global groundwater database in a single long short-term memory (LSTM) model to examine scale-dependent impacts. The data used in the model consist of continuous groundwater level observations and catchment attributes – spatially heterogeneous but temporally static attributes (e.g. topography) and continuous observations of the meteorological forcing (e.g. precipitation) – from several hundred catchments of shallow coastal aquifers on different continents. Our approach is to use only freely accessible data sources with global coverage as catchment attributes. We will test how relationships between groundwater level dynamics and catchment attributes at different scales can improve the interpretability of groundwater level simulations using deep learning techniques.
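
To make the setup concrete, the sketch below shows one way a single LSTM could be trained across many catchments, with static catchment attributes repeated along the time axis and concatenated to the dynamic meteorological forcing at every time step. Shapes, variable names, and hyperparameters are illustrative assumptions, not the authors' configuration.

```python
# Hedged sketch (not the study's code): one LSTM trained over many catchments,
# with static attributes concatenated to the dynamic forcing at each time step.
import numpy as np
import tensorflow as tf

n_catchments, seq_len, n_dyn, n_static = 300, 365, 5, 12   # assumed dimensions

# Placeholder data: dynamic forcing (e.g. precipitation), static attributes, GW levels
dynamic = np.random.rand(n_catchments, seq_len, n_dyn).astype("float32")
static = np.random.rand(n_catchments, n_static).astype("float32")
gw_level = np.random.rand(n_catchments, 1).astype("float32")

# Repeat static attributes along the time axis and concatenate with the forcing
static_rep = np.repeat(static[:, None, :], seq_len, axis=1)
x = np.concatenate([dynamic, static_rep], axis=-1)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(seq_len, n_dyn + n_static)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(1),          # simulated groundwater level
])
model.compile(optimizer="adam", loss="mse")
model.fit(x, gw_level, epochs=2, batch_size=32, verbose=0)
```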

Author(s):  
Lu Gao ◽  
Yao Yu ◽  
Yi Hao Ren ◽  
Pan Lu

Pavement maintenance and rehabilitation (M&R) records are important as they provide documentation that M&R treatment is being performed and completed appropriately. Moreover, the development of pavement performance models relies heavily on the quality of the condition data collected and on the M&R records. However, the history of pavement M&R activities is often missing or unavailable to highway agencies for many reasons. Without accurate M&R records, it is difficult to determine whether a condition change between two consecutive inspections is the result of M&R intervention, deterioration, or measurement error. In this paper, we employed deep-learning networks – a convolutional neural network (CNN) model, a long short-term memory (LSTM) model, and a combined CNN-LSTM model – to automatically detect whether an M&R treatment was applied to a pavement section during a given time period. Unlike the conventional analysis methods followed so far, deep-learning techniques do not require any feature extraction. The maximum accuracy obtained on the test data is 87.5%, using the CNN-LSTM model.
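
As an illustration of the approach, the following hedged sketch builds a small CNN-LSTM binary classifier over windows of pavement condition indicators; the feature set, window length, and architecture are assumptions rather than the paper's exact setup.

```python
# Hedged sketch: CNN-LSTM detector for whether an M&R treatment occurred
# within a pavement-condition time window. All data are synthetic placeholders.
import numpy as np
import tensorflow as tf

n_sections, window, n_features = 1000, 20, 3   # e.g. roughness, rutting, cracking (assumed)
x = np.random.rand(n_sections, window, n_features).astype("float32")
y = np.random.randint(0, 2, size=(n_sections, 1))   # 1 = treatment applied

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(window, n_features)),
    tf.keras.layers.Conv1D(32, kernel_size=3, activation="relu"),  # local condition changes
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.LSTM(32),                                      # temporal context
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x, y, epochs=2, validation_split=0.2, verbose=0)
```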


2018 ◽  
Author(s):  
Andre Lamurias ◽  
Luka A. Clarke ◽  
Francisco M. Couto

Recent studies have proposed deep learning techniques, namely recurrent neural networks, to improve biomedical text mining tasks. However, these techniques rarely take advantage of existing domain-specific resources, such as ontologies. In Life and Health Sciences there is a vast and valuable set of such resources publicly available, which are continuously being updated. Biomedical ontologies are nowadays a mainstream approach to formalize existing knowledge about entities, such as genes, chemicals, phenotypes, and disorders. These resources contain supplementary information that may not yet be encoded in training data, particularly in domains with limited labeled data.

We propose a new model, BO-LSTM, that takes advantage of domain-specific ontologies by representing each entity as the sequence of its ancestors in the ontology. We implemented BO-LSTM as a recurrent neural network with long short-term memory units, using an open biomedical ontology, which in our case study was Chemical Entities of Biological Interest (ChEBI). We assessed the performance of BO-LSTM on detecting and classifying drug-drug interactions in a publicly available corpus from an international challenge, composed of 792 drug descriptions and 233 scientific abstracts. By using the domain-specific ontology in addition to word embeddings and WordNet, BO-LSTM improved the F1-score of both the detection and the classification of drug-drug interactions, particularly in a document set with a limited number of annotations. Our findings demonstrate that, besides the high performance of current deep learning techniques, domain-specific ontologies can still be useful to mitigate the lack of labeled data.

Author summary: A high quantity of biomedical information is only available in documents such as scientific articles and patents. Due to the rate at which new documents are produced, we need automatic methods to extract useful information from them. Text mining is a subfield of information retrieval which aims at extracting relevant information from text. Scientific literature is a challenge to text mining because of the complexity and specificity of the topics approached. In recent years, deep learning has obtained promising results in various text mining tasks by exploring large datasets. On the other hand, ontologies provide a detailed and sound representation of a domain and have been developed for diverse biomedical domains. We propose a model that combines deep learning algorithms with biomedical ontologies to identify relations between concepts in text. We demonstrate the potential of this model to extract drug-drug interactions from abstracts and drug descriptions. This model can be applied to other biomedical domains, using an annotated corpus of documents and an ontology related to that domain to train a new classifier.
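
A minimal sketch of the ancestor-sequence idea is given below: each entity is represented by the sequence of its ontology ancestors, encoded with an LSTM, and the two entity encodings are combined to classify a candidate interaction. The vocabulary sizes, dimensions, and pairing logic are placeholders, not the published BO-LSTM implementation.

```python
# Hedged sketch of the ancestor-sequence representation: toy ChEBI-like ancestor IDs
# are embedded and encoded per entity, then concatenated for relation classification.
import numpy as np
import tensorflow as tf

vocab_size, max_ancestors, n_pairs = 500, 8, 200       # assumed sizes
e1 = np.random.randint(1, vocab_size, size=(n_pairs, max_ancestors))  # ancestor ID sequences
e2 = np.random.randint(1, vocab_size, size=(n_pairs, max_ancestors))
labels = np.random.randint(0, 2, size=(n_pairs, 1))    # interaction / no interaction

def ancestor_encoder():
    inp = tf.keras.layers.Input(shape=(max_ancestors,))
    emb = tf.keras.layers.Embedding(vocab_size, 32, mask_zero=True)(inp)
    return inp, tf.keras.layers.LSTM(32)(emb)

in1, enc1 = ancestor_encoder()
in2, enc2 = ancestor_encoder()
merged = tf.keras.layers.concatenate([enc1, enc2])
out = tf.keras.layers.Dense(1, activation="sigmoid")(merged)

model = tf.keras.Model([in1, in2], out)
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit([e1, e2], labels, epochs=2, verbose=0)
```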


2020 ◽  
Vol 3 (1) ◽  
pp. 445-454
Author(s):  
Celal Buğra Kaya ◽  
Alperen Yılmaz ◽  
Gizem Nur Uzun ◽  
Zeynep Hilal Kilimci

Pattern classification is concerned with the automatic discovery of regularities in a dataset through the use of various learning techniques, thereby assigning objects to a set of categories or classes. This study evaluates deep learning methodologies for the classification of stock patterns. To classify patterns obtained from stock charts, convolutional neural networks (CNNs), recurrent neural networks (RNNs), and long short-term memory networks (LSTMs) are employed. To demonstrate the efficiency of the proposed model in categorizing patterns, a hand-crafted image dataset is constructed from stock charts of the Istanbul Stock Exchange and the NASDAQ Stock Exchange. Experimental results show that convolutional neural networks achieve superior classification success in recognizing patterns compared to the other deep learning methodologies.
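
For illustration, the sketch below trains a small CNN, the best-performing model family reported above, on placeholder chart images; the image size, class count, and network depth are assumptions.

```python
# Hedged sketch: a compact CNN classifying chart images into pattern categories.
# The random arrays stand in for the hand-crafted stock-chart image dataset.
import numpy as np
import tensorflow as tf

n_images, h, w, n_classes = 500, 64, 64, 5
x = np.random.rand(n_images, h, w, 1).astype("float32")   # grayscale chart images
y = np.random.randint(0, n_classes, size=(n_images,))

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(h, w, 1)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(x, y, epochs=2, verbose=0)
```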


2021 ◽  
Vol 2021 ◽  
pp. 1-15
Author(s):  
Kazi Nabiul Alam ◽  
Md Shakib Khan ◽  
Abdur Rab Dhruba ◽  
Mohammad Monirujjaman Khan ◽  
Jehad F. Al-Amri ◽  
...  

The COVID-19 pandemic has had a devastating effect on many people, creating severe anxiety, fear, and complicated feelings or emotions. After the initiation of vaccinations against coronavirus, people’s feelings have become more diverse and complex. Our aim is to understand and unravel their sentiments in this research using deep learning techniques. Social media is currently the best way to express feelings and emotions, and with the help of Twitter, one can have a better idea of what is trending and on people’s minds. Our motivation for this research was to understand the diverse sentiments of people regarding the vaccination process. In this research, the timeline of the collected tweets spans December 21 to July 21. The tweets contained information about the most common vaccines recently available across the world. The sentiments of people regarding vaccines of all sorts were assessed using the natural language processing (NLP) tool Valence Aware Dictionary and sEntiment Reasoner (VADER). Grouping the polarities of the obtained sentiments into three classes (positive, negative, and neutral) helped us visualize the overall scenario; our findings included 33.96% positive, 17.55% negative, and 48.49% neutral responses. In addition, we included an analysis of the timeline of the tweets in this research, as sentiments fluctuated over time. A recurrent neural network- (RNN-) oriented architecture, including long short-term memory (LSTM) and bidirectional LSTM (Bi-LSTM), was used to assess the performance of the predictive models, with LSTM achieving an accuracy of 90.59% and Bi-LSTM achieving 90.83%. Other performance metrics, such as precision, F1-score, and a confusion matrix, were also used to validate our models and findings more effectively. This study improves understanding of the public’s opinion on COVID-19 vaccines and supports the aim of eradicating coronavirus from the world.
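
The labeling step can be sketched with the real vaderSentiment package: the compound score is thresholded into positive, negative, and neutral classes. The ±0.05 cut-offs follow VADER's common convention, and the example tweets are invented.

```python
# Hedged sketch of VADER-based polarity labeling (example tweets are placeholders).
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()

def label_tweet(text: str) -> str:
    score = analyzer.polarity_scores(text)["compound"]
    if score >= 0.05:
        return "positive"
    if score <= -0.05:
        return "negative"
    return "neutral"

tweets = ["Got my vaccine today, feeling relieved!",
          "Worried about the side effects of this shot."]
print([label_tweet(t) for t in tweets])
```

The labeled tweets would then be tokenized and passed to the LSTM or Bi-LSTM classifiers, e.g. via `tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64))`.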


2020 ◽  
Vol 10 (1) ◽  
Author(s):  
Sofia B. Dias ◽  
Sofia J. Hadjileontiadou ◽  
José Diniz ◽  
Leontios J. Hadjileontiadis

The Coronavirus (Covid-19) pandemic has imposed a complete shut-down of face-to-face teaching on universities and schools, forcing a crash course in online learning plans and technology for students and faculty. In the midst of this unprecedented crisis, video conferencing platforms (e.g., Zoom, WebEx, MS Teams) and learning management systems (LMSs), like Moodle, Blackboard and Google Classroom, are being adopted and heavily used as online learning environments (OLEs). However, as such media solely provide the platform for e-interaction, effective methods are needed to predict the learner’s behavior in the OLEs, and these should be available as supportive tools to educators and as metacognitive triggers to learners. Here we show, for the first time, that deep learning techniques can be used to handle LMS users’ interaction data and form a novel predictive model, namely DeepLMS, that can forecast the quality of interaction (QoI) with the LMS. Using Long Short-Term Memory (LSTM) networks, DeepLMS achieves an average testing Root Mean Square Error (RMSE) < 0.009 and an average correlation coefficient between ground truth and predicted QoI values of r ≥ 0.97 (p < 0.05), when tested on QoI data from one database collected before and two collected during the Covid-19 pandemic. DeepLMS’s personalized QoI forecasting scaffolds users’ online learning engagement and provides educators with an evaluation path, in addition to content-related assessment, enriching the overall view of learners’ motivation and participation in the learning process.
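
A hedged sketch of the forecasting and evaluation idea: an LSTM predicts the next QoI value from a window of past values, and testing reports RMSE and the Pearson correlation between predictions and ground truth. The data below are synthetic placeholders, not LMS interaction logs.

```python
# Hedged sketch: windowed LSTM regression with RMSE and Pearson-r evaluation.
import numpy as np
import tensorflow as tf
from scipy.stats import pearsonr

window = 14
series = np.random.rand(2000).astype("float32")          # placeholder QoI time series
x = np.stack([series[i:i + window] for i in range(len(series) - window)])[..., None]
y = series[window:]

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(window, 1)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(x[:1500], y[:1500], epochs=2, verbose=0)

pred = model.predict(x[1500:], verbose=0).ravel()
rmse = float(np.sqrt(np.mean((pred - y[1500:]) ** 2)))
r, p = pearsonr(pred, y[1500:])
print(f"RMSE={rmse:.4f}, r={r:.2f} (p={p:.3f})")
```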


2018 ◽  
Vol 7 (3.27) ◽  
pp. 258 ◽  
Author(s):  
Yecheng Yao ◽  
Jungho Yi ◽  
Shengjun Zhai ◽  
Yuwen Lin ◽  
Taekseung Kim ◽  
...  

The decentralization of cryptocurrencies has greatly reduced the level of central control over them, impacting international relations and trade. Further, wide fluctuations in cryptocurrency prices indicate an urgent need for an accurate way to forecast them. This paper proposes a novel method to predict cryptocurrency price by considering various factors such as market cap, volume, circulating supply, and maximum supply, based on deep learning techniques such as the recurrent neural network (RNN) and long short-term memory (LSTM), which are effective models for learning from training data, with the LSTM being better at recognizing longer-term associations. The proposed approach is implemented in Python and validated on benchmark datasets. The results verify the applicability of the proposed approach for the accurate prediction of cryptocurrency prices.
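
A minimal sketch of such a predictor, assuming a sliding window over the four listed market features feeding a stacked LSTM regressor; the scaling, window length, and layer sizes are illustrative assumptions.

```python
# Hedged sketch: stacked LSTM regression over windows of market features
# (market cap, volume, circulating supply, maximum supply) predicting the next price.
import numpy as np
import tensorflow as tf

n_days, window, n_features = 1500, 30, 4
features = np.random.rand(n_days, n_features).astype("float32")   # placeholder market data
price = np.random.rand(n_days).astype("float32")

x = np.stack([features[i:i + window] for i in range(n_days - window)])
y = price[window:]

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(window, n_features)),
    tf.keras.layers.LSTM(64, return_sequences=True),   # captures longer-term associations
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(x, y, epochs=2, verbose=0)
```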


Computers ◽  
2019 ◽  
Vol 8 (1) ◽  
pp. 4 ◽  
Author(s):  
Jurgita Kapočiūtė-Dzikienė ◽  
Robertas Damaševičius ◽  
Marcin Woźniak

We describe sentiment analysis experiments performed on a Lithuanian Internet comment dataset using traditional machine learning (Naïve Bayes Multinomial, NBM, and Support Vector Machine, SVM) and deep learning (Long Short-Term Memory, LSTM, and Convolutional Neural Network, CNN) approaches. The traditional machine learning techniques were used with features based on lexical, morphological, and character information. The deep learning approaches were applied on top of two types of word embeddings (Word2Vec continuous bag-of-words with negative sampling and FastText). Both traditional and deep learning approaches had to solve the positive/negative/neutral sentiment classification task on balanced and full dataset versions. The best deep learning result (an accuracy of 0.706) was achieved on the full dataset with CNN applied on top of the FastText embeddings, with emoticons replaced and diacritics eliminated. The traditional machine learning approaches demonstrated the best performance (an accuracy of 0.735) on the full dataset with the NBM method, with emoticons replaced, diacritics restored, and lemma unigrams as features. Although the traditional machine learning approaches were superior to the deep learning methods, deep learning demonstrated good results when applied to the small datasets.
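
The best traditional baseline reported above can be sketched with scikit-learn as a Multinomial Naive Bayes classifier over unigram counts; the toy comments and labels below merely stand in for the lemmatized Lithuanian data.

```python
# Hedged sketch: NBM over unigram counts, a stand-in for the lemma-unigram setup.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = ["labai geras produktas", "blogiausia patirtis", "nieko ypatingo"]   # toy comments
labels = ["positive", "negative", "neutral"]

nbm = make_pipeline(CountVectorizer(ngram_range=(1, 1)), MultinomialNB())
nbm.fit(texts, labels)
print(nbm.predict(["geras aptarnavimas"]))
```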


2020 ◽  
Author(s):  
Frederik Kratzert ◽  
Daniel Klotz ◽  
Sepp Hochreiter ◽  
Grey S. Nearing

A deep learning rainfall-runoff model can take multiple meteorological forcing products as inputs and learn to combine them in spatially and temporally dynamic ways. This is demonstrated using Long Short-Term Memory networks (LSTMs) trained over basins in the continental US using the CAMELS data set. Using multiple precipitation products (NLDAS, Maurer, DayMet) in a single LSTM significantly improved simulation accuracy relative to using any individual precipitation product. A sensitivity analysis showed that the LSTM learned to utilize different precipitation products in different ways in different basins, and for simulating different parts of the hydrograph within individual basins.
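
The multi-forcing idea can be sketched as follows: precipitation from several products enters the LSTM as parallel input channels alongside other forcings, so the network can weight products per basin and per time step. Shapes are illustrative; this is not the CAMELS training code.

```python
# Hedged sketch: multiple precipitation products as parallel input channels of one LSTM.
import numpy as np
import tensorflow as tf

n_basins, seq_len = 100, 365
precip_nldas = np.random.rand(n_basins, seq_len, 1).astype("float32")
precip_maurer = np.random.rand(n_basins, seq_len, 1).astype("float32")
precip_daymet = np.random.rand(n_basins, seq_len, 1).astype("float32")
other_forcing = np.random.rand(n_basins, seq_len, 3).astype("float32")   # e.g. temperature, radiation
discharge = np.random.rand(n_basins, 1).astype("float32")

x = np.concatenate([precip_nldas, precip_maurer, precip_daymet, other_forcing], axis=-1)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(seq_len, x.shape[-1])),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(1),    # streamflow at the end of the input sequence
])
model.compile(optimizer="adam", loss="mse")
model.fit(x, discharge, epochs=2, verbose=0)
```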


10.6036/10007 ◽  
2021 ◽  
Vol 96 (5) ◽  
pp. 528-533
Author(s):  
XAVIER LARRIVA NOVO ◽  
MARIO VEGA BARBAS ◽  
VICTOR VILLAGRA ◽  
JULIO BERROCAL

Cybersecurity has stood out in recent years with the aim of protecting information systems. Different methods, techniques and tools have been used to exploit the existing vulnerabilities in these systems. It is therefore essential to develop and improve new technologies, as well as intrusion detection systems that allow possible threats to be detected. However, the use of these technologies requires highly qualified cybersecurity personnel to analyze the results and reduce the large number of false positives that these technologies present. This generates the need to research and develop new high-performance cybersecurity systems that allow efficient analysis and resolution of these results. This research presents the application of machine learning techniques to classify real traffic in order to identify possible attacks. The study has been carried out using machine learning tools, applying deep learning algorithms such as the multi-layer perceptron and long short-term memory. Additionally, this document presents a comparison between the results obtained by applying the aforementioned algorithms and algorithms that are not deep learning, such as random forest and decision tree. Finally, the results obtained are presented, showing that the long short-term memory algorithm provides the best results in terms of precision and logarithmic loss.
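
A hedged sketch of the comparison, using scikit-learn stand-ins: a multi-layer perceptron versus random forest and decision tree, scored with precision and logarithmic loss. The traffic features are synthetic placeholders, and the LSTM branch (which could be built analogously in Keras) is omitted for brevity.

```python
# Hedged sketch: compare an MLP against random forest and decision tree
# on placeholder traffic features, reporting precision and log loss.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import precision_score, log_loss
from sklearn.model_selection import train_test_split

X = np.random.rand(2000, 20)                     # placeholder traffic features
y = np.random.randint(0, 2, size=2000)           # 1 = attack, 0 = benign
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, clf in [("MLP", MLPClassifier(max_iter=300)),
                  ("Random forest", RandomForestClassifier()),
                  ("Decision tree", DecisionTreeClassifier())]:
    clf.fit(X_tr, y_tr)
    proba = clf.predict_proba(X_te)
    print(name,
          "precision:", round(precision_score(y_te, clf.predict(X_te)), 3),
          "log loss:", round(log_loss(y_te, proba), 3))
```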


2021 ◽  
Vol 5 (4) ◽  
pp. 544
Author(s):  
Antonius Angga Kurniawan ◽  
Metty Mustikasari

This research aims to implement deep learning techniques to distinguish fact from fake news in the Indonesian language. The methods used are Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM). The stages of the research consisted of collecting data, labeling data, preprocessing data, word embedding, splitting data, building the CNN and LSTM models, evaluating, testing new input data, and comparing the evaluations of the established CNN and LSTM models. The data were collected from a valid provider of fact and fake news, namely TurnbackHoax.id. A total of 1786 news items were used in this study, comprising 802 fact and 984 fake news items. The results indicate that the CNN and LSTM methods were successfully applied to distinguish fact from fake news in the Indonesian language. CNN achieved test accuracy, precision, and recall values of 0.88, while the LSTM model achieved test accuracy and precision values of 0.84 and a recall of 0.83. In testing the new input data, all of the predictions obtained by the CNN were correct, while the predictions obtained by the LSTM included one error. Based on the evaluation results and the results of testing the new input data, the model produced by the CNN method is better than the model produced by the LSTM method.
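
The two classifiers compared above can be sketched as follows: after tokenization and word embedding, one branch uses a 1-D CNN and the other an LSTM to label a news item as fact or fake. The vocabulary size, sequence length, and data are placeholders, not the TurnbackHoax.id corpus.

```python
# Hedged sketch: embedding + (CNN or LSTM) binary text classifiers on placeholder data.
import numpy as np
import tensorflow as tf

vocab_size, max_len, n_docs = 5000, 100, 1786
x = np.random.randint(1, vocab_size, size=(n_docs, max_len))   # token-ID sequences
y = np.random.randint(0, 2, size=(n_docs, 1))                  # 0 = fact, 1 = fake

def build(branch: str) -> tf.keras.Model:
    model = tf.keras.Sequential([tf.keras.layers.Input(shape=(max_len,)),
                                 tf.keras.layers.Embedding(vocab_size, 64)])
    if branch == "cnn":
        model.add(tf.keras.layers.Conv1D(64, 5, activation="relu"))
        model.add(tf.keras.layers.GlobalMaxPooling1D())
    else:
        model.add(tf.keras.layers.LSTM(64))
    model.add(tf.keras.layers.Dense(1, activation="sigmoid"))
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

for branch in ("cnn", "lstm"):
    build(branch).fit(x, y, epochs=1, validation_split=0.2, verbose=0)
```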

