Actor-Critic Wavelet Neural Network based Scheduler Technique for LTE-Advanced

2020
Vol 8 (5)
pp. 4856-4863

This work presents an efficient and intelligent resource-scheduling strategy for the Long Term Evolution-Advanced (LTE-A) downlink transmission using reinforcement learning and a wavelet neural network. Resource scheduling in LTE-A suffers from uncertainty and accuracy problems in large-scale networks. Moreover, in conventional methods the scheduling performance depends solely on a single scheduling algorithm that remains fixed for the entire transmission session. This paper addresses both issues through Actor-Critic reinforcement learning, which selects the best-suited scheduling method from a rule set at every transmission time interval (TTI). The actor network takes the scheduling decision, and the critic network evaluates this decision and adaptively updates the actor network through optimal tuning laws to achieve the desired scheduling performance. A wavelet neural network (WNN) is derived by using a wavelet function as the activation function in place of the sigmoid of a conventional neural network, yielding better learning capability, faster convergence, and more efficient scheduling decisions. The actor and critic networks are built from these WNNs and trained on an LTE parameter dataset. The efficacy of the presented work is evaluated through simulation analysis.
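As a concrete, entirely illustrative sketch of the loop this abstract describes, the snippet below uses a Morlet wavelet as the activation, a linear critic, and a temporal-difference error as the critic's feedback. The rule set, feature size, and tuning laws are invented assumptions for illustration, not the paper's actual design:

```python
import math
import random

def morlet(x):
    # Morlet wavelet used as the activation in place of a sigmoid
    return math.cos(1.75 * x) * math.exp(-x * x / 2.0)

def morlet_grad(x):
    # Derivative of the Morlet wavelet, needed for the actor's tuning law
    return (-1.75 * math.sin(1.75 * x) - x * math.cos(1.75 * x)) * math.exp(-x * x / 2.0)

RULES = ["round_robin", "proportional_fair", "max_throughput"]  # hypothetical rule set

random.seed(0)
W_actor = [[random.gauss(0, 0.1) for _ in range(4)] for _ in RULES]  # 4 channel features
w_critic = [random.gauss(0, 0.1) for _ in range(4)]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def act(state):
    # Actor: wavelet-activated score per scheduling rule; the best rule wins the TTI
    scores = [morlet(dot(w, state)) for w in W_actor]
    return scores.index(max(scores))

def train_step(state, reward, next_state, lr=0.01, gamma=0.9):
    # Critic: linear value estimate whose TD error drives both tuning laws (sketch)
    td = reward + gamma * dot(w_critic, next_state) - dot(w_critic, state)
    for i in range(4):
        w_critic[i] += lr * td * state[i]
    a = act(state)
    z = dot(W_actor[a], state)
    for i in range(4):
        W_actor[a][i] += lr * td * morlet_grad(z) * state[i]
    return td
```

Here `act` plays the actor's per-TTI rule selection and `train_step` the critic-driven adaptation the abstract refers to as the optimal tuning laws.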

2019
Vol 8 (3)
pp. 3063-3070

This paper presents a novel technique for efficient resource scheduling in Long Term Evolution-Advanced downlink transmission using a wavelet neural network. The dynamism and uncertainty in resource scheduling caused by the large scale of the network are handled by the wavelet neural network. The proposed neural-network-based approach is trained to provide the best scheduling rule at every transmission time interval. Owing to its superior estimation capability and better dynamic characteristics than a conventional neural network, the wavelet neural network offers better radio-resource scheduling. The objective of the proposed scheme is to enhance the system throughput, spectral efficiency, and system capacity. Simulation analysis is performed to verify the effectiveness of the theoretical development.
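The "better dynamic characteristics" claim rests on the wavelet activation being oscillatory and localized where the sigmoid saturates. A minimal comparison, assuming the Morlet wavelet commonly used in wavelet neural networks:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def morlet(x):
    # Wavelet activation: oscillatory and localized, unlike the saturating sigmoid
    return math.cos(1.75 * x) * math.exp(-x * x / 2.0)

# Far from the origin the sigmoid saturates (its gradient vanishes) while the
# wavelet response decays to zero, giving a localized, better-conditioned unit.
for x in (0.0, 2.0, 6.0):
    print(f"x={x:4.1f}  sigmoid={sigmoid(x):.4f}  morlet={morlet(x):.4f}")
```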


Author(s):
Chunyi Wu
Gaochao Xu
Yan Ding
Jia Zhao

Large-scale task processing based on cloud computing has become crucial to big-data analysis and disposal in recent years. Most previous work applies conventional methods and architectures designed for general-scale tasks to the processing of massive task sets, and is therefore limited by issues of computing capability, data transmission, and the like. Motivated by this, a fat-tree-structure-based approach called LTDR (Large-scale Tasks processing using Deep network model and Reinforcement learning) is proposed in this work. To explore the optimal task-allocation scheme, a virtual-network-mapping algorithm based on a deep convolutional neural network and Q-learning is presented. After feature extraction, a policy network is designed and implemented to make node-mapping decisions. The link-mapping scheme is obtained by the designed distributed value-function-based reinforcement learning model. Eventually, tasks are allocated onto appropriate physical nodes and processed efficiently. Experimental results show that LTDR can significantly improve the utilization of physical resources and long-term revenue while satisfying task requirements in big data.
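As a toy illustration of the value-learning half of this pipeline, the sketch below maps unit tasks onto a tiny substrate with tabular Q-learning. The substrate size, reward shaping, and episode structure are all invented for illustration; the paper itself uses a deep policy network for node mapping and a distributed value function for link mapping:

```python
import random

random.seed(1)
NODES = 3                  # hypothetical physical substrate
capacity = [4, 2, 2]       # free CPU units per node
Q = {}                     # Q[(free_state, node)] -> estimated long-term revenue

def choose(free, eps=0.2):
    # Epsilon-greedy selection of the node-mapping action
    if random.random() < eps:
        return random.randrange(NODES)
    return max(range(NODES), key=lambda n: Q.get((tuple(free), n), 0.0))

def place(free, node, demand=1):
    # Reward: revenue for an accepted mapping minus a load-imbalance penalty
    if free[node] < demand:
        return -1.0, free                        # rejected, state unchanged
    nxt = list(free)
    nxt[node] -= demand
    return 1.0 - max(nxt) / max(capacity), nxt

def train(episodes=500, alpha=0.1, gamma=0.9):
    for _ in range(episodes):
        free = list(capacity)
        for _task in range(5):                   # five unit tasks per episode
            n = choose(free)
            r, nxt = place(free, n)
            best_next = max(Q.get((tuple(nxt), m), 0.0) for m in range(NODES))
            old = Q.get((tuple(free), n), 0.0)
            Q[(tuple(free), n)] = old + alpha * (r + gamma * best_next - old)
            free = nxt
```

The imbalance penalty stands in for the long-term-revenue objective: placements that crowd one node score worse than balanced ones, so the greedy policy learns to spread tasks.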


Mathematics
2020
Vol 8 (2)
pp. 298
Author(s):
Shenshen Gu
Yue Yang

The Max-cut problem is a well-known combinatorial optimization problem with many real-world applications. However, it has been proven to be NP-hard, so exact algorithms are too time-consuming for large-scale instances. Designing heuristic algorithms is therefore a promising but challenging direction for effectively solving large-scale Max-cut problems. For this reason, we propose a method that combines a pointer network with two deep-learning training strategies (supervised learning and reinforcement learning) to address this challenge. A pointer network is a sequence-to-sequence deep neural network that extracts data features in a purely data-driven way to discover the hidden laws behind the data. Drawing on the characteristics of the Max-cut problem, we designed the input and output mechanisms of the pointer-network model and trained it with both supervised learning and reinforcement learning to evaluate its performance. Experiments illustrate that our model can be applied well to large-scale Max-cut problems, and the results suggest that the method will encourage broader exploration of deep neural networks for large-scale combinatorial optimization problems.
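For reference, the objective such a model is trained against is easy to state. The snippet below scores a candidate partition with the standard cut objective and includes a simple single-flip local search as a classical baseline; the learned pointer network itself is not reproduced here:

```python
def cut_value(edges, partition):
    # Max-cut objective: total weight of edges crossing the two vertex sets
    return sum(w for (u, v, w) in edges if partition[u] != partition[v])

def greedy_improve(edges, n):
    # Classical local-search baseline: flip any vertex whose move increases
    # the cut, and repeat until no single flip helps.
    part = [0] * n
    improved = True
    while improved:
        improved = False
        for v in range(n):
            part[v] ^= 1
            new = cut_value(edges, part)
            part[v] ^= 1
            if new > cut_value(edges, part):
                part[v] ^= 1
                improved = True
    return part

# Tiny demo graph: a 4-cycle with one chord, unit weights
edges = [(0, 1, 1), (1, 2, 1), (2, 3, 1), (3, 0, 1), (0, 2, 1)]
print(greedy_improve(edges, 4))
```

A learned model's output sequence can be decoded into such a 0/1 partition and scored with `cut_value` in exactly the same way.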


2017
Vol 2017
pp. 1-8
Author(s):
Tian Li
Yongqian Li
Baogang Li

Smart grid is a potential infrastructure for supplying electricity to end users in a safe and reliable manner. With the rapid increase in the share of renewable energy and controllable loads, the operational uncertainty of the smart grid has grown quickly in recent years. Accurate forecasting is essential to the safe and economical operation of the smart grid. However, most existing forecasting methods cannot cope with the smart grid because they are unable to adapt to its varying operating conditions. In this paper, reinforcement learning is first exploited to develop an online learning framework for the smart grid. With its capability for multi-time-scale resolution, the wavelet neural network is adopted in this online framework to yield a reinforcement-learning and wavelet-neural-network (RLWNN) based adaptive learning scheme. Simulations on two typical prediction problems in the smart grid, wind-power prediction and load forecasting, validate the effectiveness and scalability of the proposed RLWNN-based learning framework and algorithm.
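A minimal illustration of the multi-resolution idea: one Haar analysis step splits a load window into trend and detail features, and an online least-mean-squares update plays the role of the feedback-driven learning. The paper's RLWNN is of course a richer model; everything below is an invented sketch:

```python
import math

def haar_step(series):
    # One level of Haar analysis: pairwise averages (trend) and differences (detail)
    approx = [(series[i] + series[i + 1]) / 2 for i in range(0, len(series) - 1, 2)]
    detail = [(series[i] - series[i + 1]) / 2 for i in range(0, len(series) - 1, 2)]
    return approx, detail

w = [0.0, 0.0, 0.0]   # weights on [last value, local trend, local detail]

def features(window):
    approx, detail = haar_step(window[-4:])
    return [window[-1], approx[-1], detail[-1]]

def predict(window):
    # Linear predictor over the multi-scale features (stand-in for the WNN)
    return sum(wi * f for wi, f in zip(w, features(window)))

def update(window, target, lr=0.05):
    # Online LMS step: the feedback signal is simply the prediction error
    err = target - predict(window)
    f = features(window)
    for i in range(3):
        w[i] += lr * err * f[i]
    return err
```

Fed a sliding window of load measurements, `update` adapts the weights on every new sample, which is the online-learning behaviour the framework is built around.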


Complexity
2018
Vol 2018
pp. 1-15
Author(s):
Jian Sun
Jie Li

The large scale, time variation, and diversity of physically coupled networked infrastructures, such as the power grid and transportation systems, complicate the design, implementation, and expansion of their controllers. To tackle these challenges, we suggest an online distributed reinforcement-learning control algorithm with a one-layer neural network for each subsystem (agent) to adapt to variations in the networked infrastructure. Each controller includes a critic network and an action network, approximating the strategy utility function and the desired control law, respectively. To avoid large numbers of trials and to improve stability, the training of the action network introduces a supervised learning mechanism into the reduction of the long-term cost. The stability of the control system with the learning algorithm is analyzed, and upper bounds on the tracking error and the neural-network weights are estimated. The effectiveness of the proposed controller is illustrated in simulation; the results also indicate stability under communication delays and disturbances.
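The action-network update described above can be caricatured with a single tanh neuron whose learning signal blends a supervised target with the critic's cost. The weighting `beta`, the features, and the update law are illustrative assumptions only, not the paper's derivation:

```python
import math

w_a = [0.0, 0.0]   # one-layer action network for one agent: u = tanh(w_a . x)

def control(x):
    return math.tanh(w_a[0] * x[0] + w_a[1] * x[1])

def train_step(x, u_sup, cost, lr=0.05, beta=0.5):
    # Blend a supervised teacher signal u_sup (e.g. a known stabilising law)
    # with the critic's long-term cost, reducing trial-and-error as described
    u = control(x)
    z = w_a[0] * x[0] + w_a[1] * x[1]
    g = 1.0 - math.tanh(z) ** 2                 # derivative of tanh
    err = beta * (u_sup - u) - (1.0 - beta) * cost * u
    for i in range(2):
        w_a[i] += lr * err * g * x[i]
    return u
```

With `beta` near 1 the agent imitates the supervisor; lowering it lets the critic's cost signal take over once the policy is roughly stable.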


2014
Vol 584-586
pp. 1933-1938
Author(s):
Hong Yan Wen
Ji Yuan Hu
Yuan Jin Pan
Mei Lin He

Combining the localized analysis capability of the wavelet with the learning capability of the neural network, this article investigates the wavelet-neural-network nonlinear time-series model and the integrated wavelet-neural-network architecture that combines the affine transformation and the rotation transformation. Moreover, based on a wavelet-neural-network algorithm that substitutes the Morlet wavelet function for the sigmoid function, the authors analyze the application of the wavelet neural network to settlement deformation of high-speed rail. The example discussed in this paper shows that the wavelet neural network gives good results in deformation prediction, especially in large, complicated engineering projects.


Electronics
2019
Vol 8 (3)
pp. 292
Author(s):
Md Zahangir Alom
Tarek M. Taha
Chris Yakopcic
Stefan Westberg
Paheding Sidike
...

In recent years, deep learning has garnered tremendous success in a variety of application domains. This new field of machine learning has been growing rapidly and has been applied to most traditional application domains, as well as some new areas that present more opportunities. Different methods have been proposed based on different categories of learning, including supervised, semi-supervised, and unsupervised learning. Experimental results show state-of-the-art performance using deep learning compared to traditional machine-learning approaches in image processing, computer vision, speech recognition, machine translation, art, medical imaging, medical information processing, robotics and control, bioinformatics, natural language processing, cybersecurity, and many other fields. This work presents a brief survey of the advances that have occurred in the area of Deep Learning (DL), starting with the Deep Neural Network (DNN). It goes on to cover the Convolutional Neural Network (CNN), the Recurrent Neural Network (RNN), including Long Short-Term Memory (LSTM) and Gated Recurrent Units (GRU), the Auto-Encoder (AE), the Deep Belief Network (DBN), the Generative Adversarial Network (GAN), and Deep Reinforcement Learning (DRL). Additionally, recent developments such as advanced variants of these DL techniques are discussed. This work considers most of the papers published after 2012, when the modern wave of deep learning began. DL approaches that have been explored and evaluated in different application domains are also included, together with recently developed frameworks, SDKs, and benchmark datasets used for implementing and evaluating deep-learning approaches. Some surveys have been published on DL using neural networks, and there is a survey on Reinforcement Learning (RL); however, those papers have not discussed individual advanced techniques for training large-scale deep-learning models or the recently developed methods of generative models.

