A Reinforcement Learning Technique for Optimizing Downlink Scheduling in an Energy-Limited Vehicular Network

2017 ◽  
Vol 66 (6) ◽  
pp. 4592-4601 ◽  
Author(s):  
Ribal F. Atallah ◽  
Chadi M. Assi ◽  
Jia Yuan Yu
Author(s):  
Jun Long ◽  
Yueyi Luo ◽  
Xiaoyu Zhu ◽  
Entao Luo ◽  
Mingfeng Huang

Abstract: With the development of the Internet of Things (IoT) and mobile edge computing (MEC), more and more sensing devices are being deployed across smart cities. These devices generate various kinds of tasks, which must be sent to the cloud for processing. Usually, sensing devices are not equipped with wireless modules, because doing so is neither economical nor energy efficient. It is therefore a challenging problem to offload tasks from sensing devices. However, many vehicles move around the city and can communicate with sensing devices in an effective, low-cost way. In this paper, we propose a computation offloading scheme that uses mobile vehicles in an IoT-edge-cloud network. Sensing devices generate tasks and transmit them to vehicles; each vehicle then decides whether to compute a task locally, on an MEC server, or at the cloud center. The offloading decision is based on a utility function of energy consumption and transmission delay, and a deep reinforcement learning technique is adopted to make the decisions. Our proposed method makes full use of existing infrastructure to offload tasks from sensing devices, and experimental results show that our solution achieves the maximum reward and reduces delay.
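The offloading rule the abstract describes can be sketched as a cost comparison across execution sites. The following is a minimal illustration, not the paper's actual model: the weights, energy, and delay values are hypothetical, and the vehicle simply picks the site minimizing a weighted sum of the two costs.

```python
# Minimal sketch of a utility-based offloading decision (hypothetical values).
# Each candidate site has an estimated energy cost (J) and delay (s); the
# vehicle picks the site minimizing a weighted sum, mirroring a utility
# function of energy consumption and transmission delay.

def offload_decision(costs, w_energy=0.5, w_delay=0.5):
    """costs: {site: (energy, delay)} -> site with the minimal weighted cost."""
    return min(costs, key=lambda s: w_energy * costs[s][0] + w_delay * costs[s][1])

sites = {
    "local_vehicle": (2.0, 0.8),   # cheap to reach, slower compute
    "mec_server":    (1.2, 0.5),   # nearby edge server
    "cloud_center":  (0.9, 1.5),   # fast compute, long backhaul delay
}
print(offload_decision(sites))  # -> mec_server
```

In the paper this static comparison is replaced by a learned policy: the deep reinforcement learning agent estimates these costs from the network state rather than reading them from a table.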


Author(s):  
Nikhilesh Sharma ◽  
Sen Zhang ◽  
Someshwar Rao Somayajula Venkata ◽  
Filippo Malandra ◽  
Nicholas Mastronarde ◽  
...  

Author(s):  
Ali Fakhry

Applications of Deep Q-Networks (DQNs) appear throughout reinforcement learning, a large subfield of machine learning. Using a classic environment from OpenAI Gym, CarRacing-v0, a 2D car-racing environment, alongside a custom modification of that environment, a DQN was built to solve both the classic and custom environments. The environments were tested with custom CNN architectures and with transfer learning from ResNet18. While DQNs were state of the art years ago, using one for CarRacing-v0 proves somewhat unappealing and less effective than other reinforcement learning techniques. Overall, while the model did train and the agent learned various parts of the environment, reaching the environment's reward threshold with this technique proved difficult, and other approaches would likely be more useful.
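The core of the DQN approach described above is regressing Q-values toward a bootstrapped Bellman target, y = r + γ · max_a Q(s′, a). A framework-free sketch of that target computation follows; the Q-values here stand in for the CNN's outputs and are purely hypothetical, as is the five-action discretization of CarRacing's controls.

```python
# Sketch of the DQN target computation, y = r + gamma * max_a Q(s', a),
# using plain lists in place of the neural network's outputs for clarity.

def dqn_target(reward, next_q_values, gamma=0.99, done=False):
    """Bellman target for one transition; terminal transitions get no bootstrap."""
    if done:
        return reward
    return reward + gamma * max(next_q_values)

# Hypothetical Q-values for five discretized CarRacing-style actions
# (steer left, steer right, accelerate, brake, no-op).
next_q = [0.4, 0.1, 1.0, -0.2, 0.0]
y = dqn_target(reward=1.0, next_q_values=next_q)
print(round(y, 2))  # -> 1.99
```

In a full DQN these targets are computed from a separate target network and the CNN is trained to minimize the squared error between Q(s, a) and y over minibatches sampled from a replay buffer.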


IEEE Access ◽  
2019 ◽  
Vol 7 ◽  
pp. 2782-2798 ◽  
Author(s):  
Lucileide M. D. Da Silva ◽  
Matheus F. Torquato ◽  
Marcelo A. C. Fernandes
