Overview of Algorithms for Natural Language Processing and Time Series Analyses

Author(s):  
James Feghali ◽  
Adrian E. Jimenez ◽  
Andrew T. Schilling ◽  
Tej D. Azad
Author(s):  
Seonho Kim ◽  
Jungjoon Kim ◽  
Hong-Woo Chun

Interest in research on health and medical information analysis based on artificial intelligence, particularly deep learning, has recently been increasing. Most work in this field has focused on discovering knowledge for predicting and diagnosing disease by revealing relations between diseases and various data features. These features are extracted by analyzing clinical and pathology data, such as electronic health records (EHRs), and academic literature with techniques from data analysis, natural language processing, etc. However, more research and interest are still needed in applying the latest advanced artificial-intelligence-based analysis techniques to bio-signal data, i.e., continuous physiological records such as electroencephalography (EEG) and electrocardiography (ECG). Unlike other data types, bio-signal data take the form of real-valued time series, and applying deep learning to them raises many issues in preprocessing, learning, and analysis that remain to be resolved: feature selection is left implicit, the learned components are black boxes, effective features are difficult to recognize and identify, computational complexity is high, etc. In this paper, to address these issues, we provide an encoding-based Wave2vec time series classifier model that combines signal processing with deep learning-based natural language processing techniques. To demonstrate its advantages, we provide the results of three experiments conducted with the University of California Irvine (UCI) EEG data, a real-world benchmark bio-signal dataset. Through encoding, the bio-signals (in the form of waves), which are real-valued time series, are converted into sequences of symbols, or into sequences of wavelet patterns that are then mapped to symbols; the proposed model vectorizes the symbols by learning these sequences with deep learning-based natural language processing.
A model for each class can then be constructed by learning from the vectorized wavelet patterns and training data, and the resulting models can be used for prediction and diagnosis of disease by classifying new data. By converting real-valued time series into sequences of symbols, the proposed method improves the readability of the data and makes feature selection and the learning process more intuitive. In addition, it makes influential patterns easy to recognize and identify. Furthermore, because the encoding process simplifies the data, it drastically reduces computational complexity without degrading analysis performance, which facilitates the real-time, large-volume data analysis that is essential in the development of real-time diagnosis systems.
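The abstract does not spell out the encoding step, so as a rough, hypothetical sketch of the general idea, turning a real-valued signal into a readable "word" of symbols, one can use windowed means with quantile binning (a simplification; the authors' actual method uses wavelet patterns):

```python
import numpy as np

def encode_signal(signal, alphabet="abcd", window=8):
    """Encode a real-valued signal as a symbol sequence.

    Each window is reduced to its mean (piecewise aggregate
    approximation), and each mean is mapped to a letter by
    quantile binning over the whole signal, so every letter
    is roughly equally likely.
    """
    signal = np.asarray(signal, dtype=float)
    # Truncate so the signal divides evenly into windows.
    n = (len(signal) // window) * window
    means = signal[:n].reshape(-1, window).mean(axis=1)
    # Interior quantiles give equally populated bins.
    edges = np.quantile(means, np.linspace(0, 1, len(alphabet) + 1)[1:-1])
    bins = np.digitize(means, edges)
    return "".join(alphabet[b] for b in bins)

# A 256-sample sine wave becomes a 32-letter "word" over {a, b, c, d},
# which downstream NLP-style models can treat as a token sequence.
t = np.linspace(0, 4 * np.pi, 256)
word = encode_signal(np.sin(t), alphabet="abcd", window=8)
print(word)
```

Once signals are words, a sequence-embedding model in the word2vec family can be trained on them, which is the intuition behind the Wave2vec name.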


2019 ◽  
Author(s):  
Josephine Lukito ◽  
Prathusha K Sarma ◽  
Jordan Foley ◽  
Aman Abhishek

2021 ◽  
Author(s):  
Michael Yi ◽  
Pradeepkumar Ashok ◽  
Dawson Ramos ◽  
Taylor Thetford ◽  
Spencer Bohlander ◽  
...  

Abstract Kick and lost circulation events are large contributors to non-productive time, so early detection of these events is crucial. In the absence of good flow-in and flow-out sensors, pit volume trends offer the best possibility for influx/loss detection, but errors occur because external mud addition to, or removal from, the pits is not monitored or sensed. The goal is to reduce false alarms caused by such mud additions and removals. Data from hundreds of wells in North America show that mud addition and removal produce certain unique pit volume gain/loss trends, and these trends are quite different from the trends of a kick, a lost circulation event, or wellbore breathing. Additionally, drillers enter text memos into the data aggregation system (EDR), and these memos often provide information about pit operations. In this paper, we introduce a method that uses a Bayesian network to aggregate trends detected in time-series data with events identified by natural language processing (NLP) of driller memos, greatly improving the accuracy and robustness of kick and lost circulation detection. The methodology was implemented in software that is currently running on rigs in North America. During the test phase, we applied it to several historical wells with lost circulation events and several historical wells with kick events. We were able to identify and quantify the losses even during connections and mud additions, when pit volume was usually increasing despite continual losses. Also, the real-time, simultaneous analysis of driller memos provides context for pit volume trends and further reduces false alarms. The algorithm is also able to account for pit volume consumed by drilling. Quantifying the losses offers more insight into which lost circulation material to use and into changes in the rate of loss while drilling.
This approach was also very robust in discovering kicks and differentiating them from mud removal and wellbore breathing events. These historical case studies are detailed in this paper. This is the first time that patterns of mud addition and removal detected in time-series data have been combined with NLP of driller memos to reduce false alerts in kick and lost circulation detection. The approach is particularly useful for identifying kick and lost circulation events from pit volume data when good flow-in and flow-out sensors are not available. The paper provides guidance on how real-time sensor data can be combined with textual data to improve the outputs of an advisory system.
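The paper's Bayesian network and NLP pipeline are not specified in the abstract; as an illustrative toy model of the fusion idea, a two-evidence Bayes update can combine a pit-volume gain trend with a keyword scan of a driller memo (all probabilities and keywords below are made-up placeholders, not values from the paper):

```python
# Toy two-evidence Bayes update: fuse a pit-volume trend detector
# with a keyword scan of driller memos. All numbers are illustrative.

PRIOR_KICK = 0.05  # prior probability that a gain trend is a real kick

# Likelihood of each observation under "kick" vs. "benign pit operation".
P_GAIN_TREND = {"kick": 0.90, "benign": 0.40}
P_MUD_MEMO   = {"kick": 0.05, "benign": 0.60}  # memo mentions mud handling

MUD_KEYWORDS = ("add mud", "transfer", "pit to pit", "dump")

def memo_mentions_mud(memo: str) -> bool:
    """Minimal stand-in for the NLP step: keyword spotting."""
    memo = memo.lower()
    return any(k in memo for k in MUD_KEYWORDS)

def p_kick(gain_trend: bool, memo: str) -> float:
    """Posterior P(kick | evidence), assuming the two observations
    are conditionally independent given the hypothesis."""
    mud = memo_mentions_mud(memo)
    prior = {"kick": PRIOR_KICK, "benign": 1 - PRIOR_KICK}
    like = {}
    for h in ("kick", "benign"):
        like[h] = P_GAIN_TREND[h] if gain_trend else 1 - P_GAIN_TREND[h]
        like[h] *= P_MUD_MEMO[h] if mud else 1 - P_MUD_MEMO[h]
    num = like["kick"] * prior["kick"]
    return num / (num + like["benign"] * prior["benign"])

# A gain trend alone raises suspicion; a mud-transfer memo suppresses it.
print(p_kick(True, "drilling ahead"))           # elevated posterior
print(p_kick(True, "transfer mud pit to pit"))  # alarm suppressed
```

A Bayesian network generalizes this sketch to more evidence nodes (connections, wellbore breathing, drilling-consumed volume) with explicit conditional dependencies rather than the naive independence assumed here.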


2020 ◽  
pp. 3-17
Author(s):  
Peter Nabende

Natural Language Processing for under-resourced languages is now a mainstream research area. However, there are few studies on Natural Language Processing applications for many indigenous East African languages. As a contribution to filling this gap, this paper evaluates the application of well-established machine translation methods to one heavily under-resourced indigenous East African language, Lumasaaba. Specifically, we review the most common machine translation methods, both rule-based and data-driven, in the context of Lumasaaba. We then apply a state-of-the-art data-driven machine translation method to learn models for automating translation between Lumasaaba and English from a very limited dataset of parallel sentences. Automatic evaluation results show that a transformer-based Neural Machine Translation model architecture leads to consistently better BLEU scores than the recurrent neural network-based models. Moreover, the automatically generated translations can be comprehended to a reasonable extent and usually correspond to the source language input.
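BLEU, the automatic metric used in the evaluation above, combines modified n-gram precision with a brevity penalty. A self-contained sentence-level sketch follows (whitespace tokenization and no smoothing, which need not match the authors' exact setup):

```python
import math
from collections import Counter

def bleu(candidate: str, reference: str, max_n: int = 4) -> float:
    """Sentence-level BLEU: geometric mean of modified n-gram
    precisions times a brevity penalty. With no smoothing, a
    missing n-gram order drives the score to zero."""
    cand, ref = candidate.split(), reference.split()
    log_precisions = []
    for n in range(1, max_n + 1):
        cand_ngrams = Counter(tuple(cand[i:i + n]) for i in range(len(cand) - n + 1))
        ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
        # Clip candidate counts by reference counts ("modified" precision),
        # so repeating a correct word cannot inflate the score.
        overlap = sum(min(c, ref_ngrams[g]) for g, c in cand_ngrams.items())
        if overlap == 0:
            return 0.0
        log_precisions.append(math.log(overlap / sum(cand_ngrams.values())))
    # Brevity penalty punishes candidates shorter than the reference.
    bp = 1.0 if len(cand) >= len(ref) else math.exp(1 - len(ref) / len(cand))
    return bp * math.exp(sum(log_precisions) / max_n)

ref = "the cat is on the mat"
print(bleu("the cat is on the mat", ref))  # 1.0 for an exact match
print(bleu("the cat sat on a mat", ref))   # 0.0: no 4-gram overlap, no smoothing
```

In low-resource settings like the Lumasaaba experiments, smoothed variants of this metric are typically preferred at the sentence level, since unsmoothed BLEU collapses to zero whenever any n-gram order has no overlap.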

