Bottleneck Based Gridlock Prediction in an Urban Road Network Using Long Short-Term Memory

Electronics ◽  
2020 ◽  
Vol 9 (9) ◽  
pp. 1412
Author(s):  
Ei Ei Mon ◽  
Hideya Ochiai ◽  
Chaiyachet Saivichit ◽  
Chaodit Aswakul

Traffic bottlenecks in urban road networks are more challenging to investigate and discover than those in freeways or simple arterial networks. A bottleneck indicates congestion evolution and queue formation, which in turn increase travel delay and degrade the urban traffic environment and safety. In urban road networks, sensors must cover a wide area, especially for bottleneck and gridlock analysis, which entails high installation and maintenance costs. The emerging widespread availability of GPS-equipped vehicles significantly helps to overcome the geographic coverage and spacing limitations of traditional fixed-location detector data. Therefore, this study investigated GPS vehicles that passed through the links in a simulated gridlock-looped intersection area. Because sample size estimation is fundamental to any traffic engineering analysis, this study evaluated several sample sizes for analyzing the severely congested state of gridlock. Traffic condition prediction is one of the primary components of intelligent transportation systems. In this study, a Long Short-Term Memory (LSTM) neural network was applied to predict gridlock based on the bottleneck states of intersections in a simulated urban road network. The study used the Chula-Sathorn SUMO Simulator (Chula-SSS) dataset, calibrated against past field traffic data with the Simulation of Urban MObility (SUMO) software. The experiments show that the LSTM provides satisfactory gridlock prediction by capturing temporal dependencies; the reported prediction error reflects long-range time dependencies for the respective sample sizes on the calibrated Chula-SSS dataset. A low sampling rate of GPS trajectories, on the other hand, gives higher RMSE and MAE errors, but with reduced computation time. Analyzing the percentage of simulated GPS data under different random seed numbers suggests the feasibility of gridlock identification and yields satisfactory prediction errors.
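The LSTM unit used in this and several of the following studies maintains a cell state that carries information across long time horizons. A minimal numpy sketch of a single LSTM step is given below; the weight shapes and random inputs are illustrative placeholders, not values from the paper:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step: x is the input, (h_prev, c_prev) the previous
    hidden/cell state. W, U, b stack the four gate parameters row-wise."""
    z = W @ x + U @ h_prev + b        # all four gates in one affine map
    H = h_prev.size
    i = sigmoid(z[0:H])               # input gate
    f = sigmoid(z[H:2 * H])           # forget gate
    o = sigmoid(z[2 * H:3 * H])       # output gate
    g = np.tanh(z[3 * H:4 * H])       # candidate cell update
    c = f * c_prev + i * g            # cell state keeps long-term memory
    h = o * np.tanh(c)                # hidden state exposed to the next layer
    return h, c

# tiny usage example with random weights over a 5-step sequence
rng = np.random.default_rng(0)
H, D = 4, 3
W = rng.normal(size=(4 * H, D))
U = rng.normal(size=(4 * H, H))
b = np.zeros(4 * H)
h, c = np.zeros(H), np.zeros(H)
for x in rng.normal(size=(5, D)):
    h, c = lstm_step(x, h, c, W, U, b)
```

The forget gate `f` is what lets the cell preserve or discard information over long sequences, which is why the abstracts above emphasise long-range temporal dependencies.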

PLoS ONE ◽  
2021 ◽  
Vol 16 (8) ◽  
pp. e0255597
Author(s):  
Abdelrahman Zaroug ◽  
Alessandro Garofolini ◽  
Daniel T. H. Lai ◽  
Kurt Mudie ◽  
Rezaul Begg

The forecasting of lower limb trajectories can improve the operation of assistive devices and minimise the risk of tripping and balance loss. The aim of this work was to examine four Long Short-Term Memory (LSTM) neural network architectures (Vanilla, Stacked, Bidirectional and Autoencoder) in predicting the future trajectories of lower limb kinematics, i.e. Angular Velocity (AV) and Linear Acceleration (LA). Kinematics data of foot, shank and thigh (LA and AV) were collected from 13 male and 3 female participants (28 ± 4 years old, 1.72 ± 0.07 m in height, 66 ± 10 kg in mass) who walked for 10 minutes at preferred walking speed (4.34 ± 0.43 km/h) and at an imposed speed (5 km/h, 15.4% ± 7.6% faster) on a 0% gradient treadmill. The sliding window technique was adopted for training and testing the LSTM models with total kinematics time-series data of 10,500 strides. Results based on leave-one-out cross-validation suggested that the LSTM autoencoder is the best predictor of the lower limb kinematics trajectories (up to 0.1 s ahead). The normalised mean squared error, evaluated on the trajectory prediction at each time step, ranged from 2.82% to 5.31% for the LSTM autoencoder. The ability to predict future lower limb motions may have a wide range of applications, including the design and control of bionics, allowing improved human-machine interfaces and mitigating the risk of falls and balance loss.
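The sliding window technique mentioned above slices a long time series into fixed-length input windows paired with a target a few steps ahead. A minimal sketch (the window length, horizon, and channel count are illustrative, not the paper's exact settings):

```python
import numpy as np

def sliding_windows(series, window, horizon):
    """Split a multivariate time series of shape (T, channels) into input
    windows of length `window` and targets `horizon` steps after each window."""
    X, y = [], []
    for t in range(len(series) - window - horizon + 1):
        X.append(series[t:t + window])          # model input
        y.append(series[t + window + horizon - 1])  # value to predict
    return np.array(X), np.array(y)

# e.g. 100 samples of 6 kinematic channels (3 segments x AV/LA),
# a 20-sample input window, predicting 5 samples ahead
data = np.random.default_rng(1).normal(size=(100, 6))
X, y = sliding_windows(data, window=20, horizon=5)
```

Each row of `X` then becomes one training sequence for the LSTM, with the matching row of `y` as its prediction target.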


Sensors ◽  
2021 ◽  
Vol 21 (21) ◽  
pp. 7333
Author(s):  
Ricardo Petri Silva ◽  
Bruno Bogaz Zarpelão ◽  
Alberto Cano ◽  
Sylvio Barbon Junior

A wide range of applications based on sequential data, named time series, have become increasingly popular in recent years, mainly those based on the Internet of Things (IoT). Several different machine learning algorithms exploit the patterns extracted from sequential data to support multiple tasks. However, these data can suffer from unreliable readings, which lead to low-accuracy models because of the low-quality training sets available. Detecting the change point between highly representative segments is an important ally in finding and treating biased subsequences. By constructing a framework based on the Augmented Dickey-Fuller (ADF) test for data stationarity, two proposals to automatically segment subsequences in a time series were developed. The former proposal, called Change Detector segmentation, relies on change detection methods from data stream mining. The latter, called ADF-based segmentation, is built on a new change detector derived from the ADF test alone. Experiments over real-life IoT databases and benchmarks showed the improvement provided by our proposals for prediction tasks with traditional Autoregressive Integrated Moving Average (ARIMA) and Deep Learning (Long Short-Term Memory and Temporal Convolutional Network) methods. Results obtained by the Long Short-Term Memory predictive model reduced the relative prediction error from 1 to 0.67, compared to time series without segmentation.
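The segmentation idea can be illustrated with a minimal sketch. The paper's detector is derived from the ADF stationarity test (e.g. `statsmodels.tsa.stattools.adfuller` in practice); to keep the example self-contained, a simplified mean-shift detector stands in for it here, and the thresholds and data are illustrative assumptions:

```python
import numpy as np

def segment(series, window=30, threshold=3.0):
    """Cut a series where the mean of the last `window` points drifts away
    from the current segment's mean (a simplified stand-in for an ADF-based
    stationarity test)."""
    boundaries = [0]
    for t in range(window, len(series)):
        seg = series[boundaries[-1]:t - window]
        if len(seg) < window:
            continue  # not enough history in the current segment yet
        recent = series[t - window:t]
        z = abs(recent.mean() - seg.mean()) / (seg.std() + 1e-9)
        if z > threshold:
            boundaries.append(t)  # change point: start a new segment
    boundaries.append(len(series))
    return [series[a:b] for a, b in zip(boundaries, boundaries[1:])]

# a series with a clear level shift at t = 100 is cut into two segments,
# so a model can be trained on the unbiased, stationary part only
rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(0, 1, 100), rng.normal(8, 1, 100)])
parts = segment(x)
```

The point of the framework is exactly this: training a predictor on the most recent homogeneous segment avoids the bias introduced by the earlier regime.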


Author(s):  
Dalila Bouras ◽  
Mohamed Amroune ◽  
Hakim Bendjenna ◽  
Issam Bendib

Objective: One key task of fine-grained opinion mining on product reviews is to extract product aspects and the corresponding opinions expressed by users. Previous work has demonstrated that precise modeling of opinion targets within the surrounding context can improve performance. However, how to effectively and efficiently learn hidden word semantics and better represent targets and their context still needs further study. Recent years have seen a revival of the long short-term memory (LSTM) network, with its effectiveness demonstrated on a wide range of problems. However, LSTM-based approaches are still limited to linear data processing, since they process information sequentially. As a result, they may perform poorly on user-generated texts, such as product reviews, tweets, etc., whose syntactic structure is not precise. To tackle this challenge:

Methods: In this research paper, we propose a constituency-tree long short-term memory neural network-based approach. We compare our model with state-of-the-art baselines on the SemEval 2014 datasets.

Results: Experimental results show that our models achieve competitive performance compared to various supervised LSTM architectures.

Conclusion: Our work contributes to the improvement of state-of-the-art aspect-level opinion mining methods and offers a new approach to support the human decision-making process based on opinion mining results.
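A constituency-tree LSTM composes word representations along the parse tree rather than left to right. A minimal numpy sketch of a Child-Sum Tree-LSTM cell (in the style of Tai et al., 2015) illustrates the idea; the weights, sizes, and inputs are random placeholders, not the paper's model:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def tree_lstm_node(x, children, P):
    """Child-Sum Tree-LSTM cell: combine the (h, c) states of a node's
    children in the parse tree with the node's own input vector x.
    P is a dict of weight matrices and biases (illustrative parameters)."""
    h_sum = sum((h for h, _ in children), np.zeros(P['H']))
    i = sigmoid(P['Wi'] @ x + P['Ui'] @ h_sum + P['bi'])   # input gate
    o = sigmoid(P['Wo'] @ x + P['Uo'] @ h_sum + P['bo'])   # output gate
    u = np.tanh(P['Wu'] @ x + P['Uu'] @ h_sum + P['bu'])   # candidate
    c = i * u
    for h_k, c_k in children:          # one forget gate per child subtree
        f_k = sigmoid(P['Wf'] @ x + P['Uf'] @ h_k + P['bf'])
        c = c + f_k * c_k
    h = o * np.tanh(c)
    return h, c

rng = np.random.default_rng(0)
H, D = 4, 3
P = {'H': H}
for g in 'iofu':
    P['W' + g] = rng.normal(size=(H, D))
    P['U' + g] = rng.normal(size=(H, H))
    P['b' + g] = np.zeros(H)

# two leaves composed into one parent, as in a binary constituency tree
leaf1 = tree_lstm_node(rng.normal(size=D), [], P)
leaf2 = tree_lstm_node(rng.normal(size=D), [], P)
root_h, root_c = tree_lstm_node(rng.normal(size=D), [leaf1, leaf2], P)
```

Because composition follows the syntactic structure, a phrase's representation at the root reflects its constituents directly, rather than only the words that happen to precede it in linear order.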


2021 ◽  
Author(s):  
Claire Brenner ◽  
Jonathan Frame ◽  
Grey Nearing ◽  
Karsten Schulz

Global land-atmosphere energy and carbon fluxes are key drivers of the Earth's climate system. Their assessment over a wide range of climates and biomes is therefore essential (i) for a better understanding and characterization of land-atmosphere exchanges and feedbacks and (ii) for examining the effect of climate change on the global water, energy and carbon cycles.

Large-sample datasets such as the FLUXNET2015 dataset (Pastorello et al., 2020) foster the use of machine learning (ML) techniques as a powerful addition to existing physically-based modelling approaches. Several studies have investigated ML techniques for assessing energy and carbon fluxes, and while across-site variability and the mean seasonal cycle are typically well predicted, predicting deviations from mean seasonal behaviour remains challenging (Tramontana et al., 2016).

In this study we examine the importance of memory effects for predicting energy and carbon fluxes at half-hourly and daily temporal resolutions. To this end, we train a Long Short-Term Memory network (LSTM, Hochreiter and Schmidhuber, 1997), a recurrent neural network with explicit memory that is particularly suited to time series prediction owing to its capability to store information over longer (time) sequences. We train the LSTM on a large number of FLUXNET sites from the FLUXNET2015 dataset using local meteorological forcings and static site attributes derived from remote sensing and reanalysis data.

We evaluate model performance out-of-sample (leaving out individual sites) in a 10-fold cross-validation. Additionally, we compare results from the LSTM with results from another ML technique, XGBoost (Chen and Guestrin, 2016), that does not contain system memory. By analysing the differences in model performance of the two approaches across various biomes, we investigate under which conditions the inclusion of memory might be beneficial for modelling energy and carbon fluxes.

References:

Chen, Tianqi, and Carlos Guestrin. "XGBoost: A scalable tree boosting system." Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. 2016.

Hochreiter, Sepp, and Jürgen Schmidhuber. "Long short-term memory." Neural Computation 9.8 (1997): 1735-1780.

Pastorello, Gilberto, et al. "The FLUXNET2015 dataset and the ONEFlux processing pipeline for eddy covariance data." Scientific Data 7.1 (2020): 1-27.

Tramontana, Gianluca, et al. "Predicting carbon dioxide and energy fluxes across global FLUXNET sites with regression algorithms." Biogeosciences 13.14 (2016): 4291-4313.
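The out-of-sample evaluation described above groups whole sites into folds, so a model is never tested on a site it saw during training. A minimal sketch of such site-level splitting; the site identifiers are hypothetical placeholders, not FLUXNET codes:

```python
def site_folds(sites, k=10):
    """Assign each site to one of k folds by round-robin over sorted names."""
    folds = {i: [] for i in range(k)}
    for idx, site in enumerate(sorted(sites)):
        folds[idx % k].append(site)
    return folds

def cross_validation_splits(sites, k=10):
    """Yield (train_sites, test_sites) pairs: each fold's sites are held out
    entirely, so evaluation is out-of-sample by site."""
    folds = site_folds(sites, k)
    for i in range(k):
        test = set(folds[i])
        train = [s for s in sorted(sites) if s not in test]
        yield train, sorted(test)

# hypothetical site identifiers standing in for FLUXNET site codes
sites = [f"SITE-{n:02d}" for n in range(25)]
splits = list(cross_validation_splits(sites, k=10))
```

Holding out complete sites (rather than random time steps) is what makes the comparison between the LSTM and the memory-free XGBoost a test of spatial generalisation.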


2017 ◽  
Vol 2017 ◽  
pp. 1-9 ◽  
Author(s):  
Haimin Yang ◽  
Zhisong Pan ◽  
Qing Tao

Online time series prediction is a mainstream task in a wide range of fields, from speech analysis and noise cancellation to stock market analysis. However, real-world data often contain many outliers as time series grow longer. If treated as normal points in the prediction process, these outliers can mislead the learned model. To address this issue, in this paper we propose a robust and adaptive online gradient learning method, RoAdam (Robust Adam), for long short-term memory (LSTM) networks predicting time series with outliers. The method adaptively tunes the learning rate of the stochastic gradient algorithm during prediction, which reduces the adverse effect of outliers. It tracks the relative prediction error of the loss function with a weighted average, by modifying Adam, a popular stochastic gradient algorithm for training deep neural networks. In our algorithm, a large relative prediction error corresponds to a small learning rate, and vice versa. Experiments on both synthetic data and real-world time series show that our method achieves better performance than existing LSTM-based methods.
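The mechanism described above can be sketched in a few lines: keep Adam's two moment estimates, add a weighted average `d` of the relative prediction error, and divide the learning rate by `d`. This is an illustrative sketch of the idea, not the paper's exact update; in particular the clipping constants `k` and `K` are assumed, not taken from the paper:

```python
import numpy as np

def roadam_step(param, grad, loss, state, lr=1e-3, b1=0.9, b2=0.999,
                b3=0.999, k=0.1, K=10.0, eps=1e-8):
    """One RoAdam-style update: standard Adam moments plus a weighted
    average d of the relative prediction error. The effective step is
    lr / d, so a large relative error (a likely outlier) shrinks the step."""
    state['t'] += 1
    state['m'] = b1 * state['m'] + (1 - b1) * grad        # first moment
    state['v'] = b2 * state['v'] + (1 - b2) * grad ** 2   # second moment
    m_hat = state['m'] / (1 - b1 ** state['t'])
    v_hat = state['v'] / (1 - b2 ** state['t'])
    r = np.clip(loss / max(state['prev_loss'], eps), k, K)  # relative error
    state['d'] = b3 * state['d'] + (1 - b3) * r
    state['prev_loss'] = loss
    return param - (lr / state['d']) * m_hat / (np.sqrt(v_hat) + eps)

state = {'t': 0, 'm': 0.0, 'v': 0.0, 'd': 1.0, 'prev_loss': 1.0}
p = 1.0
p = roadam_step(p, grad=0.5, loss=1.0, state=state)  # normal point
p = roadam_step(p, grad=0.5, loss=9.0, state=state)  # outlier: d grows, step shrinks
```

Because `d` is a slow-moving average (via `b3`), a single outlier dampens the step without permanently lowering the learning rate.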


2020 ◽  
Author(s):  
Abdolreza Nazemi ◽  
Johannes Jakubik ◽  
Andreas Geyer-Schulz ◽  
Frank J. Fabozzi
