Improving Network Security Based on Trust-Aware Routing Protocols Using Long Short-Term Memory-Queuing Segment-Routing Algorithms

Author(s):  
Muthukumaran V. ◽  
V. Vinoth Kumar ◽  
Rose Bindu Joseph ◽  
Meram Munirathanam ◽  
Balajee Jeyakumar

To defend against every single-link failure in a given segment-routed system, a switch must select a small subset of trust-aware routes, a selection problem this work addresses with deep learning (DL). Even under multiple path failures, repair paths can introduce long-lasting, unnecessary overload; the proposed long short-term memory networks-based queuing routing segmentation (LSTM-QRS) approach reduces traffic delay and adjusts path length while lowering the network bandwidth consumed. Its critical component is a novel traffic repair technique that constructs a repair path in a software-defined network (SDN), using multipath routing to provide additional flexibility in re-routing via the LSTM-QRS algorithm. The technique shortens the repair path and replaces destination-based traffic handling with connection-based fault detection at the router, avoiding congestion on the targeted traffic paths.
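
The abstract leaves the algorithmic details unspecified; as a rough illustration only, the core loop (an LSTM scoring candidate repair paths from recent queuing measurements so an SDN controller can re-route onto the least-loaded one) might look like the sketch below, in which every dimension, feature, and the selection rule are assumptions rather than the authors' design.

```python
# Illustrative PyTorch sketch: an LSTM scores candidate repair paths from a
# short history of per-path queuing measurements, and an SDN controller picks
# the path with the lowest predicted delay. All dimensions, the feature set,
# and the selection rule are assumptions for illustration.
import torch
import torch.nn as nn

class PathDelayLSTM(nn.Module):
    def __init__(self, n_features=3, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)  # predicted queuing delay for the next interval

    def forward(self, x):                 # x: (paths, time_steps, n_features)
        h, _ = self.lstm(x)
        return self.head(h[:, -1]).squeeze(-1)  # one delay estimate per candidate path

model = PathDelayLSTM()
# toy history: 5 candidate repair paths, 20 monitoring intervals, assumed
# features such as (queue length, link utilization, trust score)
history = torch.randn(5, 20, 3)
best_path = int(model(history).argmin())  # re-route onto the least-loaded path
```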

2018 ◽  
Vol 7 (4.15) ◽  
pp. 25 ◽  
Author(s):  
Said Jadid Abdulkadir ◽  
Hitham Alhussian ◽  
Muhammad Nazmi ◽  
Asim A Elsheikh

Forecasting time-series data is imperative, especially when planning must be carried out by modelling uncertain knowledge of future events. Recurrent neural network models have been applied in industry and outperform standard artificial neural networks in forecasting, but they fail at long-term time-series forecasting because of the vanishing gradient problem. This study offers a robust solution for long-term forecasting using a special recurrent neural network architecture, the Long Short-Term Memory (LSTM) model, to overcome the vanishing gradient problem: LSTM is specifically designed to avoid the long-term dependency problem as its default behavior. Empirical analysis is performed using quantitative forecasting metrics and comparison of model performance on the forecasted outputs. The evaluation validates that, in terms of error metrics, the LSTM model provides better forecasts of the Standard & Poor's 500 Index (S&P 500) than other forecasting models.
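
For readers who want a concrete starting point, a minimal sketch of this kind of LSTM forecaster follows; the window length, layer sizes, training loop, and the synthetic stand-in series are illustrative assumptions, not the study's actual configuration.

```python
# Minimal PyTorch sketch of a sliding-window LSTM forecaster of the kind
# described above. The synthetic series stands in for a normalized S&P 500
# closing-price series; all hyperparameters are assumptions.
import numpy as np
import torch
import torch.nn as nn

WINDOW = 30  # assumed look-back length

series = np.sin(np.linspace(0, 20, 500)) + 0.1 * np.random.randn(500)
X = np.stack([series[i:i + WINDOW] for i in range(len(series) - WINDOW)])
y = series[WINDOW:]
X = torch.tensor(X, dtype=torch.float32).unsqueeze(-1)   # (samples, WINDOW, 1)
y = torch.tensor(y, dtype=torch.float32)

class Forecaster(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(1, hidden, batch_first=True)  # gated memory mitigates vanishing gradients
        self.out = nn.Linear(hidden, 1)

    def forward(self, x):
        h, _ = self.lstm(x)
        return self.out(h[:, -1]).squeeze(-1)             # one-step-ahead forecast

model, loss_fn = Forecaster(), nn.MSELoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(50):                                       # short illustrative training loop
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()
```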


Sensors ◽  
2019 ◽  
Vol 19 (4) ◽  
pp. 861 ◽  
Author(s):  
Xiangdong Ran ◽  
Zhiguang Shan ◽  
Yufei Fang ◽  
Chuang Lin

Traffic prediction requires modeling the complex, non-linear spatiotemporal traffic dynamics of a road network. In recent years, Long Short-Term Memory (LSTM) has been applied to traffic prediction with improved performance. Existing LSTM methods for traffic prediction have two drawbacks: they do not exploit the departure time through the links, and their way of modeling long-term dependence in the time series is not direct for traffic prediction. An attention mechanism is implemented by constructing a neural network according to its task and has recently demonstrated success in a wide range of tasks. In this paper, we propose an LSTM-based method with an attention mechanism for travel time prediction. The proposed model is organized as a tree structure: it substitutes a tree structure with attention for the sequential unfolding of a standard LSTM, thereby building the depth of the LSTM and modeling long-term dependence. The attention mechanism operates over the output layer of each LSTM unit, with the departure time used as the attention aspect, which integrates departure time into the model. The model is trained with the AdaGrad method. On datasets provided by Highways England, the experimental results show that the proposed model achieves better accuracy than a standard LSTM and other baseline methods. The case study suggests that departure time is effectively exploited through the attention mechanism.
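
A greatly simplified sketch of attention over LSTM outputs with departure time as the aspect is shown below; the paper's tree-structured depth is omitted, and all layer sizes, names, and the time-slot encoding are illustrative assumptions.

```python
# Simplified PyTorch sketch: attention over per-step LSTM outputs, conditioned
# on a departure-time embedding (the attention "aspect"). The tree structure
# of the original model is omitted; dimensions and names are assumptions.
import torch
import torch.nn as nn

class AttnTravelTime(nn.Module):
    def __init__(self, n_links=1, hidden=64, n_depart_slots=24):
        super().__init__()
        self.lstm = nn.LSTM(n_links, hidden, batch_first=True)
        self.depart_emb = nn.Embedding(n_depart_slots, hidden)  # departure time as the aspect
        self.score = nn.Linear(2 * hidden, 1)
        self.out = nn.Linear(hidden, 1)

    def forward(self, x, depart):              # x: (B, T, n_links), depart: (B,)
        h, _ = self.lstm(x)                    # (B, T, hidden) per-step outputs
        a = self.depart_emb(depart)            # (B, hidden)
        a = a.unsqueeze(1).expand_as(h)        # broadcast the aspect over time steps
        w = torch.softmax(self.score(torch.cat([h, a], -1)).squeeze(-1), dim=1)
        ctx = (w.unsqueeze(-1) * h).sum(1)     # attention-weighted context
        return self.out(ctx).squeeze(-1)       # predicted travel time

model = AttnTravelTime()
opt = torch.optim.Adagrad(model.parameters(), lr=0.01)  # AdaGrad, as in the paper
# toy usage: 8 routes, 24 past time steps, departure hour 0..23
pred = model(torch.randn(8, 24, 1), torch.randint(0, 24, (8,)))
```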


Author(s):  
Tao Gui ◽  
Qi Zhang ◽  
Lujun Zhao ◽  
Yaosong Lin ◽  
Minlong Peng ◽  
...  

In recent years, long short-term memory (LSTM) has been successfully used to model sequential data of variable length. However, LSTM can still experience difficulty in capturing long-term dependencies. In this work, we attempt to alleviate this problem by introducing a dynamic skip connection, which can learn to directly connect two dependent words. Since there is no dependency information in the training data, we propose a novel reinforcement learning-based method to model the dependency relationship and connect dependent words. The proposed model computes the recurrent transition functions based on the skip connections, which provides a dynamic skipping advantage over RNNs that always process entire sentences sequentially. Our experimental results on three natural language processing tasks demonstrate that the proposed method achieves better performance than existing methods. In the number prediction experiment, the proposed model outperformed the LSTM in accuracy by nearly 20%.
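
A highly simplified sketch of the dynamic-skip idea follows; the REINFORCE-style training of the skip policy and the exact recurrent cell of the paper are omitted, and all sizes and names are assumptions for illustration.

```python
# Simplified PyTorch sketch of a dynamic skip connection: at each step a small
# policy network samples how far back to reach for the previous hidden state,
# so the recurrent transition can directly connect distant, dependent words.
# The policy would be trained with REINFORCE (omitted here, since sampling is
# non-differentiable); the cell state is carried sequentially for simplicity.
import torch
import torch.nn as nn

class DynamicSkipLSTM(nn.Module):
    def __init__(self, emb=50, hidden=64, max_skip=5):
        super().__init__()
        self.cell = nn.LSTMCell(emb, hidden)
        self.policy = nn.Linear(emb + hidden, max_skip)  # scores skip distances 1..max_skip

    def forward(self, x):                                # x: (B, T, emb)
        B, T, _ = x.shape
        h = [x.new_zeros(B, self.cell.hidden_size)]      # h[0] is the initial state
        c = x.new_zeros(B, self.cell.hidden_size)
        for t in range(T):
            logits = self.policy(torch.cat([x[:, t], h[-1]], -1))
            k = torch.distributions.Categorical(logits=logits).sample() + 1
            idx = (len(h) - k).clamp(min=0)              # reach back k states (clipped at start)
            h_prev = torch.stack(h)[idx, torch.arange(B)]
            h_t, c = self.cell(x[:, t], (h_prev, c))     # transition uses the skipped-to state
            h.append(h_t)
        return h[-1]                                     # final state for downstream prediction

out = DynamicSkipLSTM()(torch.randn(4, 12, 50))          # toy batch of 12-token sequences
```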


2020 ◽  
Vol 53 (1) ◽  
pp. 648-653
Author(s):  
Keerthi N Pujari ◽  
Srinivas S Miriyala ◽  
Prateek Mittal ◽  
Kishalay Mitra

Author(s):  
Shengli Liao ◽  
Yitong Song ◽  
Benxi Liu ◽  
Zhanwei Liu ◽  
Zhou Fang

Mid- to long-term inflow forecasting plays an important supporting role in reservoir production planning, drought and flood control, comprehensive utilization, and water resource management. Although inflow data show some periodicity and predictability, the inflow sequence is strongly nonlinear owing to the combined influence of climate, underlying surfaces, human activities, and other factors, which makes accurate inflow forecasting difficult. In this study, a new hybrid forecasting framework that uses previous inflows and monthly factors as inputs, and that combines Long Short-Term Memory (LSTM) with the Jonckheere-Terpstra test (J-T test), is developed for mid- to long-term inflow forecasting. First, the J-T test checks whether the set of monthly average inflow sequences exhibits significant differences due to climate, underlying surfaces, human activities, and other factors, which ensures the validity of the framework. Second, LSTM, which is well suited to capturing the nonlinear behavior of a time sequence, is chosen as the framework's algorithm. Finally, because the inflow sequence is periodic, adding monthly factors to the framework supplies additional information and improves forecast accuracy. Xiaowan Hydropower Station on the Lancang River in China is selected as the study area. Six evaluation criteria are used to assess the framework on historical monthly inflow data (January 1954-December 2016), and its performance is compared with that of Back Propagation Neural Network (BPNN) and Support Vector Regression (SVR) models. The results show that the introduction of monthly factors greatly improves forecast accuracy and that the proposed method outperforms the other models.
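
The input construction the abstract describes (previous inflows plus a monthly factor per sample) can be sketched as follows; the lag length and the one-hot month encoding are illustrative assumptions, not the paper's exact design.

```python
# Sketch of the input construction described above: lagged inflows plus a
# monthly factor (here a one-hot month encoding) form each LSTM time step.
# The lag length and the encoding are assumptions for illustration.
import numpy as np

LAGS = 12  # assumed: the previous 12 monthly inflows per sample

def build_samples(inflow, months):
    """inflow: (N,) monthly inflows; months: (N,) month indices 1..12."""
    X, y = [], []
    for t in range(LAGS, len(inflow)):
        month_onehot = np.eye(12)[months[t] - 1]          # monthly factor for the target month
        lags = inflow[t - LAGS:t]                          # previous inflows
        X.append(np.column_stack([lags, np.tile(month_onehot, (LAGS, 1))]))
        y.append(inflow[t])
    return np.array(X), np.array(y)                        # X: (samples, LAGS, 13)

# toy stand-in for the Xiaowan monthly inflow record (Jan 1954 - Dec 2016)
months = np.arange(756) % 12 + 1
inflow = 100 + 50 * np.sin(2 * np.pi * months / 12) + 5 * np.random.randn(756)
X, y = build_samples(inflow, months)   # feed X to an LSTM regressor such as the one sketched earlier
```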


2021 ◽  
Author(s):  
Hayrettin Okut

The long short-term memory neural network (LSTM) is a type of recurrent neural network (RNN). During the training of an RNN, sequential information travels through the network from the input vector to the output neurons, while the error is computed and propagated back through the network to update its parameters. These networks incorporate loops in the hidden layer; the loops allow information to persist so that the hidden state represents the past information held at a given time step. Consequently, the output depends on previous predictions that are already known. However, RNNs have limited capacity to bridge more than a certain number of time steps, mainly because of vanishing gradients: information from earlier steps decays, so the predictions capture only short-term dependencies. As more layers containing activation functions are added to an RNN, the gradient of the loss function approaches zero. LSTM neural networks (LSTM-ANNs) enable learning of long-term dependencies: LSTM introduces a memory unit and a gating mechanism to capture long dependencies in a sequence. LSTM networks can therefore selectively remember or forget information and can learn across thousands of time steps using structures called the cell state and three gates.
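
To make the gates concrete, here is a minimal NumPy sketch of one LSTM cell step; the weight layout is one common convention, not the only one.

```python
# Minimal NumPy sketch of a single LSTM cell step, making the three gates and
# the cell state described above explicit. Real implementations typically
# batch the four matrix products together, as done here via a stacked z.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One time step. W: (4H, D) input weights, U: (4H, H) recurrent weights, b: (4H,)."""
    H = h_prev.shape[0]
    z = W @ x + U @ h_prev + b
    f = sigmoid(z[0:H])            # forget gate: what to erase from the cell state
    i = sigmoid(z[H:2*H])          # input gate: what new information to store
    o = sigmoid(z[2*H:3*H])        # output gate: what to expose as the hidden state
    g = np.tanh(z[3*H:4*H])        # candidate cell content
    c = f * c_prev + i * g         # cell state: the additive path that preserves gradients
    h = o * np.tanh(c)             # hidden state passed to the next step
    return h, c

# tiny usage example
D, H = 3, 4
rng = np.random.default_rng(0)
W, U, b = rng.normal(size=(4*H, D)), rng.normal(size=(4*H, H)), np.zeros(4*H)
h, c = np.zeros(H), np.zeros(H)
for x in rng.normal(size=(10, D)):  # run 10 steps of a toy sequence
    h, c = lstm_step(x, h, c, W, U, b)
```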

