Multi Objective Resource Scheduling in LTE Networks Using Reinforcement Learning

2012 ◽  
Vol 3 (2) ◽  
pp. 39-57 ◽  
Author(s):  
Ioan Sorin Comsa ◽  
Mehmet Aydin ◽  
Sijing Zhang ◽  
Pierre Kuonen ◽  
Jean–Frédéric Wagen

Intelligent packet scheduling is essential for making radio resource usage more efficient in recent high-bit-rate radio access technologies such as Long Term Evolution (LTE). The packet scheduling procedure works with various dispatching rules that exhibit different behaviours. In the literature, a scheduling discipline is typically applied for the entire transmission session, and the scheduler's performance depends strongly on the chosen discipline. The method proposed in this paper shows how a schedule can be produced within the transmission time interval (TTI) sub-frame by using a mixture of dispatching disciplines per TTI instead of a single rule adopted across the whole transmission. The aim is to maximize system throughput while assuring the best user fairness. This requires a policy for mixing the rules and a refinement procedure that selects the best rule at each step. Two such scheduling policies are proposed, with a Q-learning algorithm used to refine them. Simulation results indicate that the proposed methods outperform existing scheduling techniques, maximizing system throughput without harming user fairness.
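The per-TTI rule-mixing idea can be sketched as a small Q-learning loop in which the agent picks a dispatching rule (the action) each TTI based on a coarse fairness state. The rule names, state discretization, reward, and toy channel dynamics below are illustrative assumptions, not the paper's exact formulation:

```python
import random

RULES = ["max_throughput", "proportional_fair", "round_robin"]  # candidate rules (illustrative)
N_STATES = 5          # coarse fairness buckets: 0 = very unfair ... 4 = very fair
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.2

def fairness_state(jain_index):
    """Discretize Jain's fairness index (0..1] into N_STATES buckets."""
    return min(int(jain_index * N_STATES), N_STATES - 1)

def select_rule(q, state, rng):
    """Epsilon-greedy choice of the dispatching rule for this TTI."""
    if rng.random() < EPS:
        return rng.randrange(len(RULES))
    row = q[state]
    return row.index(max(row))

def q_update(q, s, a, reward, s_next):
    """Standard one-step Q-learning update."""
    q[s][a] += ALPHA * (reward + GAMMA * max(q[s_next]) - q[s][a])

def train(n_ttis=2000, seed=1):
    rng = random.Random(seed)
    q = [[0.0] * len(RULES) for _ in range(N_STATES)]
    jain = 0.5
    for _ in range(n_ttis):
        s = fairness_state(jain)
        a = select_rule(q, s, rng)
        # Toy dynamics: throughput-greedy rules raise rate but hurt fairness,
        # round robin does the opposite (purely illustrative).
        if RULES[a] == "max_throughput":
            rate, jain = 1.0, max(0.05, jain - 0.02)
        elif RULES[a] == "round_robin":
            rate, jain = 0.4, min(1.0, jain + 0.03)
        else:
            rate, jain = 0.7, min(1.0, jain + 0.01)
        reward = rate * jain          # throughput weighted by fairness
        q_update(q, s, a, reward, fairness_state(jain))
    return q

q = train()
print(len(q), len(q[0]))  # → 5 3
```

The reward couples throughput and fairness multiplicatively, so neither objective can be maximized by sacrificing the other entirely; the paper's two proposed policies would correspond to different reward shapings.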

2020 ◽  
Vol 2020 ◽  
pp. 1-15
Author(s):  
Chafika Tata ◽  
Nassima Fellag ◽  
Michel Kadoch

The rapid growth in the number of wireless users and the emergence of new multimedia services have motivated the Third Generation Partnership Project (3GPP) to develop new radio access technologies. Carrier aggregation (CA) was therefore introduced in Release 10 of Long Term Evolution (LTE), known as LTE-Advanced (LTE-A), to meet the increasing demands for throughput and bandwidth and to ensure Quality of Service (QoS) for the different bearer classes in LTE networks. However, this solution remains inefficient unless a good resource management scheme is implemented. Several scheduling mechanisms have been proposed in the literature to guarantee the QoS of the different bearer classes in LTE-A and 5G networks; nevertheless, most of them favour high-priority bearers. In this study, a new uplink resource scheduling approach is developed. It aims to ensure service fairness across the traffic classes that allocate bearers over LTE-A and 5G networks, and it raises the number of admitted users by increasing the number of admitted bearers through dynamic management of service priorities. Specifically, the low-priority traffic classes, using low-priority bearers, are favoured during a specific time interval based on the average waiting time of each class. Simulation results show that the QoS parameters of the low-priority classes improve markedly without significantly affecting the QoS of the high-priority ones.
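The dynamic-priority idea, favouring a starved low-priority class once its average waiting time crosses a threshold, can be sketched as follows. The class names, threshold, and boost amount are illustrative assumptions, not the paper's parameters:

```python
class Bearer:
    """A queued uplink bearer with a traffic class and static 3GPP-style priority."""
    def __init__(self, cls, priority, arrival):
        self.cls, self.priority, self.arrival = cls, priority, arrival

def effective_priority(bearer, avg_wait, wait_threshold=5.0, boost=2):
    """Lower number = higher priority. Temporarily boost a class whose
    average waiting time exceeds the threshold (illustrative values)."""
    p = bearer.priority
    if avg_wait.get(bearer.cls, 0.0) > wait_threshold:
        p = max(1, p - boost)
    return p

def schedule_one(queue, avg_wait):
    """Grant the bearer with the best effective priority (FIFO tie-break)."""
    best = min(queue, key=lambda b: (effective_priority(b, avg_wait), b.arrival))
    queue.remove(best)
    return best

# A long-waiting best-effort class temporarily outranks a video bearer.
queue = [Bearer("video", 3, 0), Bearer("best_effort", 4, 1)]
avg_wait = {"best_effort": 10.0}   # best-effort bearers have waited long on average
chosen = schedule_one(queue, avg_wait)
print(chosen.cls)  # → best_effort
```

Because the boost is driven by the measured average waiting time per class, it expires naturally once the low-priority queue drains, so high-priority traffic is only deferred for the bounded interval the paper describes.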


Aerospace ◽  
2021 ◽  
Vol 8 (4) ◽  
pp. 113
Author(s):  
Pedro Andrade ◽  
Catarina Silva ◽  
Bernardete Ribeiro ◽  
Bruno F. Santos

This paper presents a Reinforcement Learning (RL) approach to optimize the long-term scheduling of maintenance for an aircraft fleet. The problem considers fleet status, maintenance capacity, and other maintenance constraints to schedule hangar checks over a specified time horizon. The checks are scheduled within an interval, and the goal is to schedule them as close as possible to their due date. In doing so, the number of checks is reduced and fleet availability increases. A Deep Q-learning algorithm is used to optimize the scheduling policy. The model is validated in a real scenario using maintenance data from 45 aircraft. The maintenance plan generated with our approach is compared with a previous study, which presented a Dynamic Programming (DP) based approach, and with airline estimations for the same period. The results show a reduction in the number of checks scheduled, which indicates the potential of RL for solving this problem. The adaptability of RL is also tested by introducing small disturbances in the initial conditions. After training the model on these simulated scenarios, the results show the robustness of the RL approach and its ability to generate efficient maintenance plans in only a few seconds.
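The scheduling objective, placing each check as late as capacity allows but never past its due date, can be illustrated with a greedy baseline of the kind an RL policy would be trained to beat. The tail numbers, capacity, and horizon are hypothetical, and this is a stand-in heuristic, not the paper's Deep Q-learning agent:

```python
def schedule_checks(due_dates, capacity_per_day, horizon):
    """Greedy baseline: slot each check as close to its due date as
    possible, moving earlier when the hangar is full for that day."""
    load = [0] * horizon          # checks already placed on each day
    plan = {}
    for check, due in sorted(due_dates.items(), key=lambda kv: kv[1]):
        day = min(due, horizon - 1)
        while day >= 0 and load[day] >= capacity_per_day:
            day -= 1              # a check may never run after its due date
        if day < 0:
            raise ValueError(f"no feasible slot for {check}")
        plan[check] = day
        load[day] += 1
    return plan

# Three hypothetical aircraft all due on day 3, one hangar slot per day.
plan = schedule_checks({"AC-1": 3, "AC-2": 3, "AC-3": 3},
                       capacity_per_day=1, horizon=5)
print(sorted(plan.values()))  # → [1, 2, 3]
```

Scheduling later means fewer checks over the horizon (each check resets the interval), which is exactly the reward signal a DQN would receive; the RL agent improves on this baseline by anticipating future capacity conflicts rather than resolving them greedily.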


2020 ◽  
pp. 158-161
Author(s):  
Chandraprabha S ◽  
Pradeepkumar G ◽  
Dineshkumar Ponnusamy ◽  
Saranya M D ◽  
Satheesh Kumar S ◽  
...  

This paper presents an artificial-intelligence-based system for real-time LDR data, with applications in indoor lighting, places where large amounts of heat are produced, agriculture (to increase crop yield), and solar plants (for solar irradiance tracking), as well as forecasting of the LDR readings. The system uses a sensor that measures light intensity by means of an LDR. The data acquired from the sensors are posted to an Adafruit cloud every two seconds using a NodeMCU ESP8266 module, and are also presented on the Adafruit dashboard for observing the sensor variables. A Long Short-Term Memory (LSTM) network is used for the deep learning stage. The LSTM module uses the historical data recorded in the Adafruit cloud, paired with the NodeMCU, to obtain the real-time long-term time series of the sensor variable measured as light intensity. Data are extracted from the cloud for analytics, and the deep learning model is then applied to predict future light-intensity values.


Author(s):  
Mohd Mueen Ul Islam Mattoo ◽  
Huda Adibah Mohd Ramli

<span lang="EN-GB">The allocation of radio resources is one of the most critical functions performed by the Radio Resource Management (RRM) mechanisms in the downlink of Long Term Evolution-Advanced (LTE-Advanced). Packet scheduling concerns itself with allocating these radio resources in an intelligent manner such that system throughput/capacity is maximized whilst the required multimedia Quality of Service (QoS) is met. The majority of previous studies of packet scheduling algorithms for LTE-Advanced did not take the effect of channel impairments into account. However, in the real world, channel impairments cannot be eliminated completely and have a direct impact on packet scheduling performance. As such, this work studies the impact of channel impairments on packet scheduling performance in a practical downlink LTE-Advanced system. The simulation results demonstrate the efficacy of the RM2 scheduling algorithm over other scheduling algorithms in maximizing system capacity, and show that it is more robust to the effects of cellular channel impairments.</span>


Author(s):  
Shafinaz Bt Ismail ◽  
Darmawaty Bt Mohd Ali ◽  
Norsuzila Ya’acob

Scheduling refers to the process of allocating resources to User Equipment based on scheduling algorithms located at the LTE base station. Various algorithms have been proposed for scheduling, which remains an open issue in the Long Term Evolution (LTE) standard. This paper studies and compares the performance of three well-known uplink schedulers: Maximum Throughput (MT), First Maximum Expansion (FME), and Round Robin (RR). The evaluation considers a single cell with interference and three flow types (best effort, video, and VoIP) in a pedestrian environment, using the LTE-SIM network simulator. Performance is evaluated in terms of system throughput, fairness index, delay, and packet loss ratio (PLR). The simulation results show that the RR algorithm consistently achieves the lowest PLR and delivers the highest throughput for video and VoIP flows among these strategies. Thus, RR is the most suitable scheduling algorithm for VoIP and video flows, while MT and FME are appropriate for best-effort flows in LTE networks.
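The Round Robin discipline compared above can be sketched in a few lines: it cycles through active users and grants each an equal share of resource blocks per TTI, ignoring channel quality, which is what makes it fair but throughput-suboptimal for best-effort traffic. The flow names are illustrative:

```python
from collections import deque

class RoundRobinScheduler:
    """Cycle through active users, granting resource blocks (RBs) one at a
    time in turn, regardless of each user's channel quality."""
    def __init__(self, users):
        self.queue = deque(users)

    def allocate(self, n_rbs):
        """Return a {user: rb_count} grant map for one TTI."""
        grants = {}
        if not self.queue:
            return grants
        for _ in range(n_rbs):
            user = self.queue[0]
            self.queue.rotate(-1)      # next user moves to the front
            grants[user] = grants.get(user, 0) + 1
        return grants

sched = RoundRobinScheduler(["voip", "video", "best_effort"])
grants = sched.allocate(6)
print(grants)  # → {'voip': 2, 'video': 2, 'best_effort': 2}
```

An MT scheduler would instead rank users by instantaneous achievable rate each TTI, which explains the paper's finding: RR's equal shares protect delay-sensitive VoIP and video flows, while MT's rate-chasing better serves elastic best-effort traffic.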


Author(s):  
Danyang Zhang ◽  
Junhui Zhao ◽  
Yang Zhang ◽  
Qingmiao Zhang

Considering the intelligent train control problem in long-term evolution for metro systems, a new train-to-train communication-based train control system is proposed, in which cooperative train formation technology is introduced to realize a more flexible train operation mode. To break the limitations of centralized train control, a pre-exploration-based two-stage deep Q-learning algorithm is adopted for the cooperative train formation, one of the first intelligent approaches to urban railway formation control. In addition, a comfort-aware algorithm is given, in which optimization measures are taken to provide a superior passenger experience. The simulation results illustrate that the optimized algorithm yields a smoother jerk curve during the train control process, improving passenger comfort. Furthermore, the proposed algorithm can effectively accomplish the train control task in multi-train tracking scenarios and meet the control requirements of the cooperative formation system.
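The comfort criterion mentioned above is usually expressed through jerk, the rate of change of acceleration: a smoother jerk curve means smaller peak jerk values. A minimal sketch, with a hypothetical jerk limit rather than any value from the paper:

```python
def jerk_profile(accels, dt):
    """Jerk is the time derivative of acceleration (m/s^3), computed here
    by finite differences over acceleration samples spaced dt apart."""
    return [(a2 - a1) / dt for a1, a2 in zip(accels, accels[1:])]

def is_comfortable(accels, dt, jerk_limit=0.9):
    """A ride is deemed comfortable if peak |jerk| stays under the limit
    (the 0.9 m/s^3 threshold is an illustrative assumption)."""
    return max(abs(j) for j in jerk_profile(accels, dt)) <= jerk_limit

abrupt = [0.0, 1.0, 1.0, 0.0]                        # sharp acceleration steps
smooth = [0.0, 0.2, 0.4, 0.5, 0.4, 0.2, 0.0]         # gradual ramp up and down
print(jerk_profile(abrupt, 0.5))      # → [2.0, 0.0, -2.0]
print(is_comfortable(abrupt, 0.5))    # → False
print(is_comfortable(smooth, 0.5))    # → True
```

In an RL formulation this comfort check typically enters the reward as a penalty on peak or squared jerk, which is how the optimized algorithm trades a slightly longer manoeuvre for a smoother curve.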


2009 ◽  
Vol 28 (12) ◽  
pp. 3268-3270
Author(s):  
Chao WANG ◽  
Jing GUO ◽  
Zhen-qiang BAO

Algorithms ◽  
2021 ◽  
Vol 14 (3) ◽  
pp. 80
Author(s):  
Qiuqi Han ◽  
Guangyuan Zheng ◽  
Chen Xu

Device-to-Device (D2D) communications, which enable direct communication between nearby user devices over the licensed spectrum, have been considered a key technique for improving spectral efficiency and system throughput in cellular networks (CNs). However, the limited spectrum resources are not sufficient to support more cellular users (CUs) and D2D users as traffic grows in future wireless networks. Therefore, Long-Term Evolution-Unlicensed (LTE-U) and D2D-Unlicensed (D2D-U) technologies have been proposed to further enhance system capacity by extending CU and D2D communications onto the unlicensed spectrum. In this paper, we consider an LTE network in which the CUs and D2D users are allowed to share the unlicensed spectrum with Wi-Fi users. To maximize the sum rate of all users while guaranteeing each user's quality of service (QoS), we jointly consider user access and resource allocation. To tackle the formulated problem, we propose a matching-iteration-based joint user access and resource allocation algorithm. Simulation results show that the proposed algorithm significantly improves system throughput compared to the benchmark algorithms.
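The flavour of a matching-based allocation can be sketched with a greedy one-to-one assignment of users to unlicensed channels by achievable rate. This is a simplified stand-in for the paper's matching-iteration procedure, and the rate matrix is invented for illustration:

```python
def greedy_match(rates):
    """Greedily match users to channels by descending achievable rate.
    `rates[u][c]` is user u's rate on channel c; each channel serves at
    most one user. Returns the assignment and the resulting sum rate."""
    triples = [(r, u, c)
               for u, row in enumerate(rates)
               for c, r in enumerate(row)]
    triples.sort(reverse=True)                 # best (rate, user, channel) first
    used_u, used_c = set(), set()
    assignment, total = {}, 0.0
    for r, u, c in triples:
        if u not in used_u and c not in used_c:
            assignment[u] = c
            used_u.add(u)
            used_c.add(c)
            total += r
    return assignment, total

# Two users, two unlicensed channels (hypothetical rates in Mbit/s).
rates = [[3.0, 1.0],
         [2.5, 2.0]]
assignment, total = greedy_match(rates)
print(assignment, total)  # → {0: 0, 1: 1} 5.0
```

A full matching-theoretic solution would additionally let users and channels reject pairings that violate QoS or Wi-Fi coexistence constraints and iterate to a stable matching; the greedy pass above captures only the rate-maximizing core.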

