Efficient End-to-End Asynchronous Time-Series Modeling With Deep Learning to Predict Customer Attrition

Author(s):  
Victor Potapenko ◽  
Malek Adjouadi ◽  
Naphtali Rishe

Modeling time-series data with asynchronous, multi-cardinal, and uneven patterns presents several unique challenges that may impede convergence of supervised machine learning algorithms or significantly increase resource requirements, rendering modeling efforts infeasible in resource-constrained environments. The authors propose two approaches to multi-class classification of asynchronous time-series data. In the first approach, they create a baseline by reducing the time-series data with a statistical approach and training a model based on gradient-boosted trees. In the second approach, they implement a fully convolutional network (FCN) and train it on asynchronous data without any special feature engineering. Evaluation shows that the FCN matches the gradient-boosted baseline in mean F1-score without computationally complex time-series feature engineering. This work has been applied to the prediction of customer attrition at a large retail automotive finance company.
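As a sketch of the FCN idea described above: a fully convolutional classifier applies 1-D convolutions followed by global average pooling, so series of different lengths need no padding or hand-crafted features. The NumPy forward pass below is a minimal illustration; the layer sizes, kernel widths, and four-class output are arbitrary assumptions, not the authors' architecture:

```python
import numpy as np

def conv1d(x, kernels, bias):
    # x: (T, C_in); kernels: (K, C_in, C_out); valid convolution over time + ReLU
    K, _, C_out = kernels.shape
    T = x.shape[0] - K + 1
    out = np.empty((T, C_out))
    for t in range(T):
        out[t] = np.tensordot(x[t:t + K], kernels, axes=([0, 1], [0, 1])) + bias
    return np.maximum(out, 0.0)

def fcn_forward(x, layers, w_out, b_out):
    for kernels, bias in layers:
        x = conv1d(x, kernels, bias)
    pooled = x.mean(axis=0)            # global average pooling over the time axis
    logits = pooled @ w_out + b_out
    e = np.exp(logits - logits.max())
    return e / e.sum()                 # softmax class probabilities

rng = np.random.default_rng(0)
series = rng.normal(size=(128, 3))     # one multivariate series: 128 steps, 3 channels
layers = [(rng.normal(scale=0.1, size=(8, 3, 16)), np.zeros(16)),
          (rng.normal(scale=0.1, size=(5, 16, 32)), np.zeros(32))]
probs = fcn_forward(series, layers, rng.normal(scale=0.1, size=(32, 4)), np.zeros(4))
print(probs.shape, round(float(probs.sum()), 6))
```

Because global average pooling collapses the time axis, the same weights accept series of any length, which is what makes the approach attractive for asynchronous data.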

Author(s):  
Gudipally Chandrashakar

In this article, we used historical time-series data of the gold price up to the present day. In this study of predicting the gold price, we consider a few correlated factors: the silver price, copper price, Standard and Poor's 500 index value, dollar-rupee exchange rate, and Dow Jones Industrial Average value. The prices of each correlated factor and the gold price were taken for dates ranging from January 2008 to February 2021. Several machine learning algorithms were used to analyze the time-series data: Random Forest Regression, Support Vector Regression, Linear Regression, Extra Trees Regression, and Gradient Boosting Regression. The results show that the Extra Trees Regressor predicts the gold price most accurately.
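To make the regression setup concrete, the sketch below fits scikit-learn's ExtraTreesRegressor to synthetic stand-ins for the five correlated factors; the factor ranges and the linear-plus-noise gold price are invented for illustration, not the real 2008-2021 data:

```python
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(42)
n = 500  # synthetic daily records standing in for the 2008-2021 span
X = np.column_stack([
    rng.normal(25, 3, n),        # silver price
    rng.normal(3.5, 0.4, n),     # copper price
    rng.normal(2800, 300, n),    # S&P 500 value
    rng.normal(70, 5, n),        # dollar-rupee rate
    rng.normal(25000, 2000, n),  # Dow Jones value
])
# synthetic gold price: a noisy function of a few of the factors
y = 40 * X[:, 0] + 120 * X[:, 1] + 0.1 * X[:, 2] + rng.normal(0, 20, n)

split = int(0.8 * n)  # keep time order: train on the past, test on the "future"
model = ExtraTreesRegressor(n_estimators=200, random_state=0)
model.fit(X[:split], y[:split])
r2 = r2_score(y[split:], model.predict(X[split:]))
print(round(r2, 3))
```

A chronological split (rather than a random one) matters for price data, since shuffling would leak future information into training.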


2021 ◽  
Vol 13 (3) ◽  
pp. 67
Author(s):  
Eric Hitimana ◽  
Gaurav Bajpai ◽  
Richard Musabe ◽  
Louis Sibomana ◽  
Jayavel Kayalvizhi

Many countries worldwide face challenges in implementing fire-prevention measures for buildings. The most critical issues are the localization, identification, and detection of room occupants. The Internet of Things (IoT) combined with machine learning has been shown to increase the smartness of buildings by providing real-time sensor and actuator data for prediction mechanisms. This paper proposes the implementation of an IoT framework to capture indoor environmental parameters as occupancy multivariate time-series data. The Long Short-Term Memory (LSTM) deep learning algorithm is applied to infer the presence of human beings. An experiment was conducted in an office room using the multivariate time series as predictors in a regression forecasting problem. The results demonstrate that the developed system can acquire, process, and store environmental information. The collected information was applied to the LSTM algorithm and compared with other machine learning algorithms: Support Vector Machine, Naïve Bayes Network, and Multilayer Perceptron Feed-Forward Network. The outcomes of the parametric calibrations demonstrate that LSTM performs best in the context of the proposed application.
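The LSTM's advantage over the compared models comes from its gating mechanism, which carries information across time steps. The minimal NumPy forward pass of a single LSTM cell below illustrates the gates; the sensor count, hidden size, random weights, and toy occupancy readout are illustrative assumptions, not the trained model from the paper:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h, c, W, U, b):
    # gate pre-activations stacked as [input, forget, cell, output], each of size H
    z = W @ x_t + U @ h + b
    H = h.shape[0]
    i, f, g, o = (sigmoid(z[:H]), sigmoid(z[H:2 * H]),
                  np.tanh(z[2 * H:3 * H]), sigmoid(z[3 * H:]))
    c = f * c + i * g          # cell state carries long-term memory
    h = o * np.tanh(c)         # hidden state is the step's output
    return h, c

rng = np.random.default_rng(1)
D, H = 4, 8                    # e.g. 4 indoor sensors, 8 hidden units (assumed sizes)
W = rng.normal(size=(4 * H, D))
U = rng.normal(size=(4 * H, H))
b = np.zeros(4 * H)
h, c = np.zeros(H), np.zeros(H)
for x_t in rng.normal(size=(60, D)):   # one hour of minute-level readings
    h, c = lstm_step(x_t, h, c, W, U, b)
occupancy_score = sigmoid(h.sum())     # toy readout; a real model learns this layer
print(h.shape)
```

In a trained network the weights W, U, b are learned by backpropagation through time; here they are random, so only the mechanics of the recurrence are shown.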


Sensors ◽  
2020 ◽  
Vol 20 (7) ◽  
pp. 1908
Author(s):  
Chao Ma ◽  
Xiaochuan Shi ◽  
Wei Li ◽  
Weiping Zhu

In the past decade, time series data have been generated from various fields at a rapid speed, which offers a huge opportunity for mining valuable knowledge. As a typical task of time series mining, Time Series Classification (TSC) has attracted lots of attention from both researchers and domain experts due to its broad applications ranging from human activity recognition to smart city governance. Specifically, there is an increasing requirement for performing classification tasks on diverse types of time series data in a timely manner without costly hand-crafting feature engineering. Therefore, in this paper, we propose a framework named Edge4TSC that allows time series to be processed in the edge environment, so that the classification results can be instantly returned to the end-users. Meanwhile, to get rid of the costly hand-crafting feature engineering process, deep learning techniques are applied for automatic feature extraction, which shows competitive or even superior performance compared to state-of-the-art TSC solutions. However, because time series presents complex patterns, even deep learning models are not capable of achieving satisfactory classification accuracy, which motivated us to explore new time series representation methods to help classifiers further improve the classification accuracy. In the proposed framework Edge4TSC, by building the binary distribution tree, a new time series representation method was designed for addressing the classification accuracy concern in TSC tasks. By conducting comprehensive experiments on six challenging time series datasets in the edge environment, the potential of the proposed framework for its generalization ability and classification accuracy improvement is firmly validated with a number of helpful insights.
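The binary distribution tree is Edge4TSC's own representation and is not reproduced here. As a generic illustration of how a distribution-based re-encoding of a series can feed a standard classifier, the hypothetical sketch below replaces each series with a histogram of its standardized values and classifies the histograms:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n, T = 300, 200
labels = rng.integers(0, 2, n)
# two synthetic classes with similar mean/variance but different value distributions
series = np.where(labels[:, None] == 1,
                  rng.normal(0, 1, (n, T)),          # Gaussian values
                  rng.uniform(-1.7, 1.7, (n, T)))    # uniform values, similar variance

def distribution_features(s, bins=12):
    # re-encode a series by the histogram of its standardized values
    z = (s - s.mean()) / (s.std() + 1e-9)
    hist, _ = np.histogram(z, bins=bins, range=(-3, 3), density=True)
    return hist

X = np.array([distribution_features(s) for s in series])
acc = cross_val_score(RandomForestClassifier(random_state=0), X, labels, cv=5).mean()
print(round(acc, 3))
```

The point of such representations is that classes indistinguishable by simple summary statistics can still differ in how their values are distributed, which a downstream classifier can exploit.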


2019 ◽  
Vol 11 (7) ◽  
pp. 861 ◽  
Author(s):  
Hao Jiang ◽  
Dan Li ◽  
Wenlong Jing ◽  
Jianhui Xu ◽  
Jianxi Huang ◽  
...  

More than 90% of the sugar production in China comes from sugarcane, which is widely grown in South China. Optical image time series have proven to be efficient for sugarcane mapping. There are, however, two limitations associated with previous research: one is that the critical observations during the sugarcane growing season are limited due to frequent cloudy weather in South China; the other is that the classification method requires imagery time series covering the entire growing season, which reduces the time efficiency. The Sentinel-1A (S1A) synthetic aperture radar (SAR) data, featuring relatively high spatiotemporal resolution, provide an ideal data source for all-weather observations. In this study, we attempted to develop a method for early-season mapping of sugarcane. First, we proposed a framework consisting of two procedures: initial sugarcane mapping using the S1A SAR imagery time series, followed by non-vegetation removal using Sentinel-2 optical imagery. Second, we tested the framework using an incremental classification strategy based on S1A imagery covering the entire 2017–2018 sugarcane season. The study area was in Suixi and Leizhou counties of Zhanjiang city, China. Results indicated that an acceptable accuracy, with a Kappa coefficient above 0.902, can be achieved using time series up to three months before sugarcane harvest. In general, sugarcane mapping utilizing the combination of VH + VV, as well as VH polarization alone, outperformed mapping using VV alone. Although the XGBoost classifier with VH + VV polarization achieved a maximum accuracy slightly lower than the random forest (RF) classifier, XGBoost showed promising performance in that it was more robust to overfitting on noisy VV time series and its computation speed was 7.7 times faster than that of the RF classifier.
The total sugarcane areas in Suixi and Leizhou for the 2017–2018 harvest year estimated by this study were approximately 598.95 km² and 497.65 km², respectively. The relative accuracy of the total sugarcane mapping area was approximately 86.3%.
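The RF-versus-boosting comparison can be sketched as follows. Because the original work uses XGBoost, which may not be installed everywhere, scikit-learn's GradientBoostingClassifier stands in for it here; the per-pixel VH/VV backscatter series and the seasonal class signal are synthetic inventions for illustration, not S1A data:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.metrics import cohen_kappa_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n = 600
months = np.arange(12)                      # 12 synthetic acquisition dates
labels = rng.integers(0, 2, n)              # 1 = sugarcane, 0 = other land cover
# per-pixel backscatter series (dB); sugarcane gets a seasonal bump in VH
vh = rng.normal(-18, 1.5, (n, 12)) + labels[:, None] * 3 * np.sin(np.pi * months / 12)
vv = rng.normal(-11, 2.5, (n, 12))          # noisier band with no class signal here
X = np.hstack([vh, vv])                     # the VH + VV feature stack
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0)

kappas = {}
for name, clf in [("RF", RandomForestClassifier(n_estimators=200, random_state=0)),
                  ("XGB-stand-in", GradientBoostingClassifier(random_state=0))]:
    clf.fit(X_tr, y_tr)
    kappas[name] = cohen_kappa_score(y_te, clf.predict(X_te))
print({k: round(v, 3) for k, v in kappas.items()})
```

Evaluating with the Kappa coefficient, as the study does, corrects the accuracy for chance agreement, which matters when land-cover classes are imbalanced.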


2021 ◽  
Author(s):  
Dhairya Vyas

In terms of machine learning, the majority of data can be grouped into four categories: numerical data, categorical data, time-series data, and text. Different classifiers suit different data properties, and the main learning paradigms are supervised, unsupervised, and reinforcement learning. Each category has suitable classifiers; we have tested almost all common machine learning methods and present a comparative analysis of them.


2019 ◽  
Vol 14 ◽  
pp. 155892501988346 ◽  
Author(s):  
Mine Seçkin ◽  
Ahmet Çağdaş Seçkin ◽  
Aysun Coşkun

Although textile production is heavily automation-based, it is viewed as an untouched area with regard to Industry 4.0. When these developments are integrated into the textile sector, efficiency is expected to increase. When data mining and machine learning studies in the textile sector are examined, a lack of data sharing related to the production process is apparent in enterprises, owing to commercial concerns and confidentiality. In this study, a method is presented for simulating a production process and performing regression on the resulting time-series data with machine learning. The simulation was prepared for the annual production plan, and the corresponding faults were generated based on information received from a textile glove enterprise and its production data. The data set was applied to various machine learning methods within the scope of supervised learning to compare their learning performances. The errors occurring in the production process were created using random parameters in the simulation. To verify the hypothesis that the errors can be forecast, various machine learning algorithms were trained on the data set in the form of time series. The variable representing the number of faulty products could be forecast very successfully. When forecasting the faulty-product parameter, the random forest algorithm demonstrated the highest success. As these error values gave high accuracy even in a simulation that works with uniformly distributed random parameters, highly accurate forecasts can be made in real-life applications as well.
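The supervised framing used here, forecasting the faulty-product count from recent history, can be sketched with lagged features and a random forest. The fault process below is a toy weekly-seasonal signal with uniform noise, not the enterprise's simulated production plan:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(3)
days = 365  # one simulated production year
t = np.arange(days)
# toy fault process: weekly seasonality plus uniformly distributed noise
faults = 20 + 8 * np.sin(2 * np.pi * t / 7) + rng.uniform(-3, 3, days)

# supervised framing: predict today's fault count from the previous 7 days
lags = 7
X = np.column_stack([faults[i:days - lags + i] for i in range(lags)])
y = faults[lags:]
split = int(0.8 * len(y))  # chronological split
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X[:split], y[:split])
mae = mean_absolute_error(y[split:], model.predict(X[split:]))
print(round(mae, 2))
```

Turning the series into (lag window, next value) pairs is what lets a standard regressor act as a forecaster; the window length is a tuning choice, set to one week here to match the toy seasonality.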


2021 ◽  
Vol 3 ◽  
Author(s):  
Peter Goodin ◽  
Andrew J. Gardner ◽  
Nasim Dokani ◽  
Ben Nizette ◽  
Saeed Ahmadizadeh ◽  
...  

Background: Exposure to thousands of head and body impacts during a career in contact and collision sports may contribute to current or later-life issues related to brain health. Wearable technology enables the measurement of impact exposure. Validation of impact detection is required for accurate exposure monitoring. In this study, we present a method for the automatic identification (classification) of head and body impacts using an instrumented mouthguard, video-verified impacts, and machine learning algorithms. Methods: Time series data were collected via the Nexus A9 mouthguard from 60 elite-level men (mean age = 26.33; SD = 3.79) and four women (mean age = 25.50; SD = 5.91) playing Australian Rules Football for eight clubs, participating in 119 games during the 2020 season. Ground-truth labeling of the captures used in this machine learning study was performed through analysis of game footage by two expert video reviewers using SportCode and Catapult Vision. The visual labeling process occurred independently of the mouthguard time series data. True positive captures (captures where the reviewer directly observed contact between the mouthguard wearer and another player, the ball, or the ground) were defined as hits. Spectral and convolutional-kernel-based features were extracted from the time series data. The performance of untuned classification algorithms from scikit-learn, in addition to XGBoost, was assessed to select the best-performing baseline method for tuning. Results: Based on performance, XGBoost was selected as the classifier algorithm for tuning. A total of 13,712 video-verified captures were collected and used to train and validate the classifier. True positive detection ranged from 94.67% in the test set to 100% in the hold-out set. True negatives ranged from 95.65% to 96.83% in the test and hold-out sets, respectively. Discussion and conclusion: This study suggests the potential for high-performing impact classification models to be used for Australian Rules Football and highlights the importance of frequencies <150 Hz for the identification of these impacts.
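The spectral-feature pipeline can be sketched as follows: compute each capture's power spectrum, keep only the bands below 150 Hz (the range the study found important), and train a boosted-tree classifier. scikit-learn's GradientBoostingClassifier stands in for XGBoost, and the decaying-sinusoid impact waveforms are synthetic stand-ins for real mouthguard data:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(5)
fs, n_samp, n = 1000, 256, 400          # assumed 1 kHz sampling, 256-sample windows
t = np.arange(n_samp) / fs
labels = rng.integers(0, 2, n)           # 1 = true impact, 0 = non-impact capture
signals = rng.normal(0, 0.5, (n, n_samp))
for k in np.flatnonzero(labels):
    f = rng.uniform(20, 120)             # impact energy concentrated below 150 Hz
    signals[k] += 5 * np.exp(-30 * t) * np.sin(2 * np.pi * f * t)

# spectral features: per-bin power restricted to frequencies below 150 Hz
spec = np.abs(np.fft.rfft(signals, axis=1)) ** 2
freqs = np.fft.rfftfreq(n_samp, 1 / fs)
X = spec[:, freqs < 150]
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.25, random_state=0)
clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
print(f"TPR {tp / (tp + fn):.2%}, TNR {tn / (tn + fp):.2%}")
```

Reporting true positive and true negative rates separately, as the abstract does, is more informative than raw accuracy when missed impacts and false alarms carry different costs.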


2021 ◽  
Author(s):  
Elham Fijani ◽  
Khabat Khosravi ◽  
Rahim Barzegar ◽  
John Quilty ◽  
Jan Adamowski ◽  
...  

Abstract: Random Tree (RT) and Iterative Classifier Optimizer (ICO) machine learning (ML) algorithms based on the Alternating Model Tree (AMT) regressor, coupled with Bagging (BA) or Additive Regression (AR) hybrid algorithms, were applied to forecast multistep-ahead (up to three months) Lake Superior and Lake Michigan water levels (WL). The partial autocorrelation function (PACF) of each lake's WL time series identified the most important lag times, up to five months in both lakes, as potential inputs. The WL time series data were partitioned into training (1918 to 1988) and testing (1989 to 2018) sets for model building and evaluation, respectively. The developed algorithms were validated through statistical and visual metrics using the testing data. Although both hybrid ensemble algorithms improved the individual ML algorithms' performance, the BA algorithm outperformed the AR algorithm. As a novel model in forecasting problems, the ICO algorithm was shown to have great potential for generating robust multistep lake WL forecasts.
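The lagged-input, multistep-ahead setup can be sketched with scikit-learn. Since AMT and ICO are Weka-specific, a bagged decision tree stands in for the BA hybrid; the monthly water-level series below is synthetic (seasonal plus AR(1) noise), not the 1918-2018 gauge record:

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(11)
months = 1200  # 100 years of monthly levels, a stand-in for the 1918-2018 record
t = np.arange(months)
e = np.zeros(months)
for i in range(1, months):              # AR(1) noise: slow, persistent fluctuations
    e[i] = 0.85 * e[i - 1] + rng.normal(0, 0.05)
wl = 183 + 0.3 * np.sin(2 * np.pi * t / 12) + e   # metres, with annual seasonality

lags, horizon = 5, 3   # five monthly lags in, three-month-ahead target out
rows = months - lags - horizon + 1
X = np.column_stack([wl[i:i + rows] for i in range(lags)])
y = wl[lags + horizon - 1: lags + horizon - 1 + rows]
split = int(0.7 * rows)                 # chronological split, as in the study
model = BaggingRegressor(DecisionTreeRegressor(max_depth=8),
                         n_estimators=50, random_state=0)
model.fit(X[:split], y[:split])
r2 = r2_score(y[split:], model.predict(X[split:]))
print(round(r2, 3))
```

The five-lag window mirrors the PACF finding that lags up to five months carry predictive information; bagging averages many trees fitted on bootstrap resamples, which is what stabilizes the forecasts relative to a single tree.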

