Pattern-based dual learning for point-of-interest (POI) recommendation

2020 ◽  
Vol 120 (10) ◽  
pp. 1901-1921
Author(s):  
Tipajin Thaipisutikul ◽  
Yi-Cheng Chen

Purpose Tourism spot or point-of-interest (POI) recommendation has become a common service in people's daily life. The purpose of this paper is to model users' check-in history in order to predict a set of locations that a user may soon visit. Design/methodology/approach The authors proposed a novel learning-based method, the pattern-based dual learning POI recommendation system, as a solution that considers users' interests and the uniformity of popular POI patterns when making recommendations. Differing from traditional long short-term memory (LSTM), a new users' regularity–POIs' popularity patterns long short-term memory (UP-LSTM) model was developed to concurrently combine the behaviors of a specific user and common users. Findings The authors introduced the concept of dual learning for POI recommendation. Several performance evaluations were conducted on real-life mobility data sets to demonstrate the effectiveness and practicability of the POI recommendations. Metrics such as hit rate, precision, recall and F-measure were used to measure the ranking capability and prediction precision of the proposed model over all baselines. The experimental results indicated that the proposed UP-LSTM model consistently outperformed the state-of-the-art models in all metrics by a large margin. Originality/value This study contributes to the existing literature by incorporating a novel pattern-based technique to analyze how the popularity of POIs affects the next move of a particular user. The authors also propose an effective fusing scheme to boost the prediction performance of the proposed UP-LSTM model. The experimental results and discussions indicate that combining the user's regularity and the POIs' popularity patterns in PDLRec significantly enhances the performance of POI recommendation.
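As a rough illustration of the dual-learning idea described above, the sketch below builds two LSTM branches, one over a user's own check-in sequence and one over a popular-POI pattern sequence, and fuses them before predicting the next POI. The layer sizes, vocabulary size, concatenation-based fusion and Keras implementation are assumptions for illustration, not the authors' exact UP-LSTM architecture.

```python
import numpy as np
from tensorflow.keras import layers, Model

NUM_POIS, SEQ_LEN, EMB = 1000, 10, 32     # assumed sizes, for illustration only

user_seq = layers.Input(shape=(SEQ_LEN,), name="user_checkins")
pop_seq = layers.Input(shape=(SEQ_LEN,), name="popular_poi_pattern")

emb = layers.Embedding(NUM_POIS, EMB)     # shared POI embedding
user_h = layers.LSTM(64)(emb(user_seq))   # branch for the user's own regularity
pop_h = layers.LSTM(64)(emb(pop_seq))     # branch for popular-POI patterns

fused = layers.Concatenate()([user_h, pop_h])                   # simple fusion (assumption)
next_poi = layers.Dense(NUM_POIS, activation="softmax")(fused)  # next-POI distribution

model = Model([user_seq, pop_seq], next_poi)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Dummy integer-encoded sequences just to show the expected shapes.
x_user = np.random.randint(0, NUM_POIS, (256, SEQ_LEN))
x_pop = np.random.randint(0, NUM_POIS, (256, SEQ_LEN))
y = np.random.randint(0, NUM_POIS, (256,))
model.fit([x_user, x_pop], y, epochs=1, verbose=0)
```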

Author(s):  
Dyapa Sravan Reddy ◽  
Lakshmi Prasanna Reddy ◽  
Kandibanda Sai Santhosh ◽  
Virrat Devaser

SEO analysts spend a lot of time finding relevant tags for their articles and, in some cases, are unaware of the content topics. The proposed ML model recommends content-related tags so that content writers and SEO analysts get an overview of the content and spend less time on unfamiliar articles. Machine Learning algorithms have a plethora of applications, and the full extent of their real-life implementations is difficult to estimate. Using algorithms such as One vs Rest (OVR) and Long Short-Term Memory (LSTM), this study analyzes how Machine Learning can be useful for suggesting tags for a topic. Training the model with One vs Rest delivered more accurate results than the alternatives. This study shows how One vs Rest can be used to suggest the tags needed to promote a website; further work is required for suggesting the required keywords.
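For readers unfamiliar with the One vs Rest setup, the sketch below shows a minimal multi-label tag suggester built from TF-IDF features and scikit-learn's OneVsRestClassifier. The tiny corpus, the tag set and the logistic-regression base learner are illustrative assumptions, not the study's actual data or configuration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.multiclass import OneVsRestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.pipeline import make_pipeline

# Toy articles and their tags (illustrative assumptions).
docs = [
    "deep learning for image recognition",
    "search engine optimization and keyword ranking",
    "recurrent networks for text generation",
]
tags = [["machine-learning", "vision"], ["seo"], ["machine-learning", "nlp"]]

# Turn tag lists into a binary indicator matrix for multi-label learning.
mlb = MultiLabelBinarizer()
y = mlb.fit_transform(tags)

# TF-IDF features feeding one binary classifier per tag (One vs Rest).
clf = make_pipeline(TfidfVectorizer(), OneVsRestClassifier(LogisticRegression()))
clf.fit(docs, y)

# Suggest tags for a new, unseen article.
pred = clf.predict(["lstm models for language modelling"])
print(mlb.inverse_transform(pred))
```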


2019 ◽  
Vol 120 (3) ◽  
pp. 425-441 ◽  
Author(s):  
Sonali Shankar ◽  
P. Vigneswara Ilavarasan ◽  
Sushil Punia ◽  
Surya Prakash Singh

Purpose Better forecasting always leads to better management and planning of the operations. The container throughput data are complex and often have multiple seasonality. This makes it difficult to forecast accurately. The purpose of this paper is to forecast container throughput using deep learning methods and benchmark its performance over other traditional time-series methods. Design/methodology/approach In this study, long short-term memory (LSTM) networks are implemented to forecast container throughput. The container throughput data of the Port of Singapore are used for empirical analysis. The forecasting performance of the LSTM model is compared with seven different time-series forecasting methods, namely, autoregressive integrated moving average (ARIMA), simple exponential smoothing, Holt–Winters', error-trend-seasonality, trigonometric regressors (TBATS), neural network (NN) and ARIMA + NN. The relative error matrix is used to analyze the performance of the different models with respect to bias, accuracy and uncertainty. Findings The results showed that LSTM outperformed all other benchmark methods. From a statistical perspective, the Diebold–Mariano test is also conducted to further substantiate the better forecasting performance of LSTM over other counterpart methods. Originality/value First, the proposed study contributes to the literature on container throughput forecasting and adds value to the supply chain theory of forecasting. Second, this study explains the architecture of the deep-learning-based LSTM method and discusses in detail the steps to implement it.
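A minimal sliding-window LSTM forecaster in the spirit of the approach above might look like the sketch below. The synthetic series, window length and network size are illustrative assumptions rather than the Port of Singapore data or the authors' exact configuration.

```python
import numpy as np
from tensorflow.keras import layers, Sequential

# Stand-in throughput series with some seasonality plus noise (assumption).
series = np.sin(np.linspace(0, 40, 400)) + np.random.normal(0, 0.1, 400)
window = 12  # e.g. twelve past observations per training sample

# Build supervised samples: each window of past values predicts the next value.
X = np.array([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]
X = X[..., np.newaxis]  # (samples, timesteps, features)

model = Sequential([
    layers.Input(shape=(window, 1)),
    layers.LSTM(32),
    layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, verbose=0)

# One-step-ahead forecast from the most recent window.
next_value = model.predict(series[-window:].reshape(1, window, 1), verbose=0)
print(float(next_value))
```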


2021 ◽  
Vol 3 (2) ◽  
pp. 3-18
Author(s):  
Partha Mukherjee ◽  
Youakim Badr ◽  
Srushti Karvekar ◽  
Shanmugapriya Viswanathan

The world is currently going through a serious pandemic due to the coronavirus disease (COVID-19). In this study, we investigate the gene structure similarity of coronavirus genomes isolated from COVID-19 patients, Severe Acute Respiratory Syndrome (SARS) patients and bats. We also explore the extent of similarity between their genome structures to find out whether the new coronavirus is similar to either of the other genome structures. Our experimental results show that there is 82.42% similarity between the CoV-2 genome structure and the bat genome structure. Moreover, we have used a bidirectional Gated Recurrent Unit (GRU) model as the deep learning technique, together with an improved variant of Recurrent Neural Networks (the Bidirectional Long Short-Term Memory model), to classify the protein families of these genomes and isolate the prominent protein family accessions. The Gated Recurrent Unit (GRU) model achieves 98% accuracy in labeling protein sequences against the protein families. By comparing the performance of the GRU model with the Bidirectional Long Short-Term Memory (Bi-LSTM) model, we found that the GRU model is 1.6% more accurate than the Bi-LSTM model for our multiclass protein classification problem. Our experimental results could further support medical research in targeting protein family similarity to better understand the coronavirus genomic structure.
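The sketch below shows the general shape of such a sequence classifier: integer-encoded protein sequences passed through an embedding layer and a bidirectional GRU to predict a protein family. The alphabet encoding, sequence length, number of families and all layer sizes are assumptions for illustration only.

```python
import numpy as np
from tensorflow.keras import layers, Sequential

VOCAB = 26         # amino-acid letters mapped to integers (assumed encoding)
MAX_LEN = 100      # padded/truncated sequence length (assumption)
NUM_FAMILIES = 10  # number of protein-family labels (assumption)

model = Sequential([
    layers.Input(shape=(MAX_LEN,)),
    layers.Embedding(VOCAB + 1, 32, mask_zero=True),  # index 0 reserved for padding
    layers.Bidirectional(layers.GRU(64)),             # swap GRU for LSTM to get a Bi-LSTM variant
    layers.Dense(NUM_FAMILIES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# Dummy integer-encoded sequences just to show the expected shapes.
X = np.random.randint(1, VOCAB + 1, (128, MAX_LEN))
y = np.random.randint(0, NUM_FAMILIES, (128,))
model.fit(X, y, epochs=1, verbose=0)
```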


2022 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Satish Kumar ◽  
Tushar Kolekar ◽  
Ketan Kotecha ◽  
Shruti Patil ◽  
Arunkumar Bongale

Purpose Excessive tool wear is responsible for damage or breakage of the tool, workpiece or machining center. Thus, it is crucial to examine tool conditions during the machining process to improve the tool's useful functional life and the surface quality of the final product. AI-based tool wear prediction techniques have proven to be effective in estimating the Remaining Useful Life (RUL) of the cutting tool. However, the models' predictions need improvement in terms of accuracy. Design/methodology/approach This paper presents a methodology that fuses a feature selection technique with state-of-the-art deep learning models. The authors have used NASA milling data sets along with vibration signals for tool wear prediction and performance analysis in 15 different fault scenarios. Multiple steps are used for feature selection and ranking. Different Long Short-Term Memory (LSTM) approaches are used to improve the overall prediction accuracy of the model for tool wear prediction. The LSTM models' performance is evaluated using R-square, Mean Absolute Error (MAE), Root Mean Square Error (RMSE) and Mean Absolute Percentage Error (MAPE) parameters. Findings The R-square accuracy of the hybrid model is consistently high, with low MAE, MAPE and RMSE values. The average R-square scores for the LSTM, Bidirectional, Encoder–Decoder and Hybrid LSTM models are 80.43, 84.74, 94.20 and 97.85%, respectively, and the corresponding average MAPE values are 23.46, 22.200, 9.5739 and 6.2124%. The hybrid model shows high accuracy compared to the remaining LSTM models. Originality/value The low variance, Spearman Correlation Coefficient and Random Forest Regression methods are used to select the most significant feature vectors for training the miscellaneous LSTM model versions and to highlight the best approach. The selected features are passed to different LSTM models, namely Bidirectional, Encoder–Decoder and Hybrid LSTM, for tool wear prediction. The Hybrid LSTM approach shows a significant improvement in tool wear prediction.
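As a rough sketch of the two-stage idea (feature ranking followed by an LSTM regressor), the code below ranks features with a random forest and trains a plain LSTM on the selected ones. The synthetic signals, the number of retained features and the network sizes are illustrative assumptions, not the NASA milling setup or the hybrid architecture evaluated in the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from tensorflow.keras import layers, Sequential

n_samples, n_steps, n_feats = 200, 30, 8
X = np.random.rand(n_samples, n_steps, n_feats)  # stand-in vibration-derived features
y = np.random.rand(n_samples)                    # stand-in tool wear values

# Feature ranking on per-sample averages (one of several selection steps mentioned above).
rf = RandomForestRegressor(n_estimators=50, random_state=0)
rf.fit(X.mean(axis=1), y)
top = np.argsort(rf.feature_importances_)[::-1][:4]  # keep the 4 strongest features (assumption)
X_sel = X[:, :, top]

# Plain LSTM regressor on the selected feature channels.
model = Sequential([
    layers.Input(shape=(n_steps, len(top))),
    layers.LSTM(64),
    layers.Dense(1),  # predicted wear / RUL proxy
])
model.compile(optimizer="adam", loss="mse")
model.fit(X_sel, y, epochs=2, verbose=0)
```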


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Neetika Jain ◽  
Sangeeta Mittal

Purpose A cost-effective way to achieve fuel economy is to reinforce positive driving behaviour. Driving behaviour can be controlled if drivers can be alerted to behaviour that results in poor fuel economy. Fuel consumption must be tracked and monitored instantaneously rather than tracking average fuel economy for the entire trip duration. A single-step application of machine learning (ML) is not sufficient to model both prediction of instantaneous fuel consumption and detection of anomalous fuel economy. The study designs an ML pipeline to track and monitor instantaneous fuel economy and detect anomalies. Design/methodology/approach This research iteratively applies different variations of a two-step ML pipeline to a driving dataset for hatchback cars. The first step addresses the problem of accurate measurement and prediction of fuel economy using time series driving data, and the second step detects abnormal fuel economy in relation to contextual information. A long short-term memory autoencoder learns and uses the most salient features of the time series data to build a regression model. The contextual anomaly is detected by following two approaches, a kernel quantile estimator and a one-class support vector machine. The kernel quantile estimator sets a dynamic threshold for detecting anomalous behaviour; any error beyond the threshold is classified as an anomaly. The one-class support vector machine learns the training error pattern and applies the model to test data for anomaly detection. The two-step ML pipeline is further modified by replacing the long short-term memory autoencoder with a gated recurrent unit autoencoder, and the performance of both models is compared. Speed recommendations and feedback are issued to the driver based on detected anomalies for controlling aggressive behaviour. Findings A composite long short-term memory autoencoder was compared with a gated recurrent unit autoencoder. Both models achieve prediction accuracy within a range of 98%–100% in the first step. Recall and accuracy metrics for anomaly detection using the kernel quantile estimator remain within 98%–100%, whereas the one-class support vector machine approach performs within the range of 99.3%–100%. Research limitations/implications The proposed approach does not consider socio-demographic or physiological information about drivers due to privacy concerns. However, it can be extended to correlate a driver's physiological state, such as fatigue, sleep and stress, with driving behaviour and fuel economy. The anomaly detection approach here is limited to providing feedback to the driver; it could be extended to give contextual feedback to the steering controller or throttle controller. In the future, a controller-based system could be combined with the anomaly detection approach to control the acceleration and braking actions of the driver. Practical implications The suggested approach is helpful in monitoring and reinforcing fuel-economical driving behaviour among fleet drivers across different environmental contexts. It can also be used as a training tool for improving the driving efficiency of new drivers. It keeps drivers positively engaged by issuing relevant warnings for significant contextual anomalies and avoids issuing warnings for minor operational errors. Originality/value This paper contributes to the existing literature by providing an ML pipeline approach to track and monitor instantaneous fuel economy rather than relying on average fuel economy values.
The approach is further extended to detect contextual driving behaviour anomalies and optimise fuel economy. The main contributions of this approach are as follows: (1) a prediction model is applied to fine-grained time series driving data to predict instantaneous fuel consumption; (2) anomalous fuel economy is detected by comparing prediction error against a threshold and analysing error patterns based on contextual information.
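A minimal sketch of the first pipeline variant described above is shown below: an LSTM autoencoder reconstructs windows of driving data and a quantile of the reconstruction error serves as the anomaly threshold. A plain empirical quantile stands in for the kernel quantile estimator, and the synthetic data and layer sizes are illustrative assumptions.

```python
import numpy as np
from tensorflow.keras import layers, Sequential

steps, feats = 20, 3                     # window length and number of driving features (assumed)
X = np.random.rand(500, steps, feats)    # stand-in time-series windows

# LSTM autoencoder: encode each window, then reconstruct it.
autoencoder = Sequential([
    layers.Input(shape=(steps, feats)),
    layers.LSTM(32),                                  # encoder
    layers.RepeatVector(steps),
    layers.LSTM(32, return_sequences=True),           # decoder
    layers.TimeDistributed(layers.Dense(feats)),
])
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(X, X, epochs=3, verbose=0)

# Reconstruction error per window; large errors are flagged as anomalies.
errors = np.mean((autoencoder.predict(X, verbose=0) - X) ** 2, axis=(1, 2))
threshold = np.quantile(errors, 0.99)    # simple stand-in for the dynamic threshold
anomalies = np.where(errors > threshold)[0]
print(f"{len(anomalies)} windows flagged as anomalous")
```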


2020 ◽  
Vol 16 (3) ◽  
pp. 265-280
Author(s):  
Toshiki Tomihira ◽  
Atsushi Otsuka ◽  
Akihiro Yamashita ◽  
Tetsuji Satoh

Purpose Recently, with the penetration of social networking services, the use of emojis, which are standardized in Unicode, has become common. Emojis are effective in expressing emotions in sentences. Sentiment analysis in natural language processing normally requires manually labeling sentences with emotions; by using the emojis in text posted on social media, the authors can predict sentiment without manual labeling. The purpose of this paper is to propose a new model that learns from sentences using emojis as labels, collecting English and Japanese tweets from Twitter as the corpus. The authors verify and compare multiple models based on attention long short-term memory (LSTM), convolutional neural networks (CNN) and Bidirectional Encoder Representations from Transformers (BERT). Design/methodology/approach The authors collected 2,661 kinds of emoji registered as Unicode characters from tweets using the Twitter application programming interface, for a total of 6,149,410 tweets in Japanese. First, the authors visualized a vector space of the emojis produced by Word2Vec and found that emojis and words with similar meanings are adjacent, verifying that emojis can be used for sentiment analysis. Second, tweets containing emojis were fed to the models for learning and testing, with the emoji serving as the label. The authors compared the BERT model with the conventional models [CNN, FastText and attention bidirectional long short-term memory (BiLSTM)] that achieved high scores in previous studies. Findings Visualizing the Word2Vec vector space, the authors found that emojis and words with similar meanings are adjacent, verifying that emojis can be used for sentiment analysis. The authors obtained higher scores with the BERT models than with the conventional models; the experiments therefore demonstrate an improvement over the conventional models in both languages. General emoji prediction is greatly influenced by context, and the score may be lowered by a misunderstanding of meaning. By using BERT, which is based on a bidirectional transformer, the authors can take the context into account. Practical implications Users can find emojis among the candidate outputs when typing a word with an input method editor (IME). Current IMEs consider only the most recently inputted word, whereas this study makes it possible to recommend emojis that consider the context of the whole inputted sentence. Therefore, the research can be used to improve IME performance in the future. Originality/value In this paper, the authors focus on multilingual emoji prediction. This is the first attempt to compare emoji prediction between Japanese and English. It is also the first attempt to use a BERT model based on the transformer for predicting a limited set of emojis, although the transformer is known to be effective for various NLP tasks. The authors found that a bidirectional transformer is suitable for emoji prediction.
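The core task, predicting an emoji label from the surrounding tweet text, can be sketched with a small bidirectional LSTM classifier as below. The tiny corpus, the three-emoji label set and the network configuration are illustrative assumptions and stand in for the conventional baselines rather than the authors' BERT setup or Twitter data.

```python
import numpy as np
from tensorflow.keras import layers, Sequential

# Toy tweets with their emoji removed; the emoji index is the label (assumptions).
texts = np.array([
    "great day at the beach",
    "stuck in traffic again",
    "happy birthday to you",
])
emoji_vocab = ["😄", "😠", "🎉"]
labels = np.array([0, 1, 2])

# Turn text into integer word-id sequences.
vectorize = layers.TextVectorization(output_sequence_length=8)
vectorize.adapt(texts)
X = vectorize(texts)

model = Sequential([
    layers.Input(shape=(8,)),
    layers.Embedding(len(vectorize.get_vocabulary()), 32),
    layers.Bidirectional(layers.LSTM(32)),
    layers.Dense(len(emoji_vocab), activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(X, labels, epochs=10, verbose=0)

# Predict an emoji for an unseen sentence.
query = vectorize(np.array(["what a wonderful morning"]))
print(emoji_vocab[int(model.predict(query, verbose=0).argmax())])
```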


Author(s):  
Sotetsu Suzugamine ◽  
Takeru Aoki ◽  
Keiki Takadama ◽  
Hiroyuki Sato

The cortical learning algorithm (CLA) is a type of time-series data prediction algorithm based on the human neocortex. CLA uses multiple columns to represent an input data value at a timestep, and each column has multiple cells to represent the time-series context of the input data. In the conventional CLA, the numbers of columns and cells are user-defined parameters. These parameters depend on the input data, which can be unknown before learning. To avoid the necessity for setting these parameters beforehand, in this work, we propose a self-structured CLA that dynamically adjusts the numbers of columns and cells according to the input data. The experimental results using the time-series test inputs of a sine wave, combined sine wave, and logistic map data demonstrate that the proposed self-structured algorithm can dynamically adjust the numbers of columns and cells depending on the input data. Moreover, the prediction accuracy is higher than those of the conventional long short-term memory and CLAs with various fixed numbers of columns and cells. Furthermore, the experimental results on a multistep prediction of real-world power consumption show that the proposed self-structured CLA achieves a higher prediction accuracy than the conventional long short-term memory.
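The self-structuring idea, growing the representation only when the current input is not covered by the existing structure, can be illustrated with the heavily simplified sketch below. This is not an implementation of the CLA itself; the distance-based matching rule and threshold are assumptions made purely for illustration.

```python
import numpy as np

inputs = np.sin(np.linspace(0, 6 * np.pi, 300))  # stand-in time-series input
columns = []       # each column stores the centre of the input range it responds to
threshold = 0.05   # maximum distance for an existing column to "cover" an input (assumed)

for x in inputs:
    # Grow the structure only when no existing column matches the current input.
    if not columns or min(abs(x - c) for c in columns) > threshold:
        columns.append(x)

print(f"{len(columns)} columns allocated for this input range")
```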


2017 ◽  
Vol 37 (3) ◽  
pp. 369-378 ◽  
Author(s):  
Du-Xin Liu ◽  
Xinyu Wu ◽  
Wenbin Du ◽  
Can Wang ◽  
Chunjie Chen ◽  
...  

Purpose The purpose of this paper is to model and predict suitable gait trajectories of a lower-limb exoskeleton for the wearer during rehabilitation walking. Lower-limb exoskeletons are widely used to assist walking in the rehabilitation field, and one key problem for exoskeleton control is to model and predict suitable gait trajectories for the wearer. Design/methodology/approach In this paper, the authors propose a Deep Spatial-Temporal Model (DSTM) for generating the knee joint trajectory of a lower-limb exoskeleton, which leverages the Long Short-Term Memory framework to learn the inherent spatial-temporal correlations of gait features. Findings With DSTM, the pathological knee joint trajectory can be predicted based on the subject's other joints. Energy expenditure, assessed by monitoring dynamic heart rate, is adopted to verify the effectiveness of the new recovery gait pattern. The experimental results demonstrate that the subjects expend less energy in the new recovery gait pattern than in others' normal gait patterns, which also means the new recovery gait is more suitable for the subject. Originality/value The Long Short-Term Memory framework is used for the first time to model rehabilitation gait, and the deep spatial-temporal relationships between the joints in gait data can be obtained successfully.
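As a rough illustration of the prediction task, the sketch below trains an LSTM to map time windows of other joint angles (here, synthetic hip and ankle signals) to the knee joint angle. The signals and network sizes are illustrative assumptions, not the authors' Deep Spatial-Temporal Model.

```python
import numpy as np
from tensorflow.keras import layers, Sequential

# Stand-in periodic joint-angle signals (assumptions for illustration).
t = np.linspace(0, 20 * np.pi, 2000)
hip = np.sin(t)
ankle = np.sin(t + 0.5)
knee = 0.6 * np.sin(t + 0.2)   # target joint angle

# Windows of the other joints predict the knee angle at the next timestep.
window = 50
X = np.stack([np.column_stack([hip[i:i + window], ankle[i:i + window]])
              for i in range(len(t) - window)])
y = knee[window:]

model = Sequential([
    layers.Input(shape=(window, 2)),
    layers.LSTM(64),
    layers.Dense(1),   # predicted knee joint angle
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=2, verbose=0)
```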

