Resource allocation algorithm design of high quality of service based on chaotic neural network in wireless communication technology

2017 ◽  
Vol 22 (S5) ◽  
pp. 11005-11017 ◽  
Author(s):  
Yongfeng Cui ◽  
Zhongyuan Zhao ◽  
Yuankun Ma ◽  
Shi Dong

PLoS ONE ◽  
2019 ◽  
Vol 14 (1) ◽  
pp. e0210310 ◽  
Author(s):  
Maharazu Mamman ◽  
Zurina Mohd Hanapi ◽  
Azizol Abdullah ◽  
Abdullah Muhammed

Sensors ◽  
2019 ◽  
Vol 19 (8) ◽  
pp. 1830 ◽  
Author(s):  
Anum Ali ◽  
Ghalib A. Shah ◽  
Junaid Arshad

Resource allocation for machine-type communication (MTC) devices is one of the key challenges in the 5G network, as it affects both the lifetime of battery-powered devices and the quality of service of their applications. MTC devices are battery-constrained and cannot afford high power consumption for spectrum usage. In this paper, we propose a novel resource allocation algorithm termed the threshold controlled access (TCA) protocol. We propose a novel uplink resource allocation technique in which each device decides on its resource allocation blocks based on its battery status and the power profile of its application, which ultimately yields the required quality of service (QoS). The first phase of the TCA algorithm selects the number of carriers to allocate to a given device so as to extend the lifetime of low-power MTC devices. In the second phase, an efficient solution is implemented by introducing a threshold value, selected through a mapping based on a QoS metric. The threshold improves subcarrier selection for low-power devices, such as small e-health sensors. The algorithm is simulated for the physical layer of the 5G network. Simulation results show that the proposed algorithm is less complex and performs better than existing solutions in the literature.
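The abstract describes TCA only at a high level; the following is a minimal Python sketch of the two-phase idea, assuming a hypothetical battery-aware cap on the carrier count and a hypothetical QoS-class-to-threshold mapping. All names, constants, and the mapping itself are illustrative, not from the paper.

```python
# Hypothetical sketch of the two-phase TCA idea: phase one caps the number
# of subcarriers by battery level, phase two admits only subcarriers whose
# channel gain clears a QoS-derived threshold. All values are assumptions.

def tca_allocate(battery_level, qos_class, channel_gains):
    """Return indices of subcarriers allocated to one MTC device.

    battery_level -- remaining battery fraction in [0, 1]
    qos_class     -- 0 (best effort) .. 2 (strict), mapped to a threshold
    channel_gains -- per-subcarrier gain estimates (linear scale)
    """
    # Phase 1: battery-aware cap on the number of subcarriers (assumed rule).
    max_carriers = max(1, int(battery_level * len(channel_gains) * 0.25))

    # Phase 2: QoS class maps to a minimum acceptable channel gain (assumed).
    threshold = {0: 0.2, 1: 0.5, 2: 0.8}[qos_class]

    # Keep only subcarriers above the threshold, best gains first.
    eligible = [i for i, g in enumerate(channel_gains) if g >= threshold]
    eligible.sort(key=lambda i: channel_gains[i], reverse=True)
    return eligible[:max_carriers]

if __name__ == "__main__":
    gains = [0.9, 0.1, 0.6, 0.85, 0.3, 0.75]
    # A low-battery e-health sensor with a strict QoS class.
    print(tca_allocate(battery_level=0.3, qos_class=2, channel_gains=gains))
```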


Entropy ◽  
2021 ◽  
Vol 23 (8) ◽  
pp. 932
Author(s):  
Kaiwen Xia ◽  
Jing Feng ◽  
Chao Yan ◽  
Chaofan Duan

The fully completed BDS-3 short-message communication system, known as the short-message satellite communication system (SMSCS), will be widely used in areas that conventional communication cannot reach. However, the short-message processing resources of these satellites are relatively scarce. To improve the resource utilization of the satellite system and ensure adequate service quality for short-message terminals, it is necessary to allocate and schedule short-message satellite processing resources in multi-satellite coverage areas. To solve these problems, a short-message satellite resource allocation algorithm based on deep reinforcement learning (DRL-SRA) is proposed. First, drawing on the characteristics of the SMSCS, a multi-objective joint-optimization satellite resource allocation model is established to reduce the path transmission loss of short-message terminals while achieving satellite load balancing and adequate quality of service. Then, the dimensionality of the input data is reduced using a region division strategy and a feature extraction network, and the continuous spatial state is parameterized with a deep reinforcement learning algorithm based on the deep deterministic policy gradient (DDPG) framework. The simulation results show that the proposed algorithm can reduce the path transmission loss of short-message terminals, improve the quality of service, and increase the resource utilization efficiency of the short-message satellite system while maintaining an appropriate satellite load balance.
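The abstract names DDPG as the underlying framework; the sketch below is a generic, self-contained DDPG update step in PyTorch, with the target networks and replay buffer of full DDPG omitted for brevity. The state and action dimensions, network shapes, and sample batch are assumptions for illustration, not the paper's model.

```python
# A minimal DDPG-style actor-critic update in PyTorch, illustrating the
# kind of learning step DRL-SRA builds on. Sizes and data are assumptions.
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM = 16, 4   # e.g. reduced terminal/satellite features

actor = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                      nn.Linear(64, ACTION_DIM), nn.Tanh())
critic = nn.Sequential(nn.Linear(STATE_DIM + ACTION_DIM, 64), nn.ReLU(),
                       nn.Linear(64, 1))
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-4)
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)

def ddpg_step(state, action, reward, next_state, gamma=0.99):
    # Critic: regress Q(s, a) toward r + gamma * Q(s', actor(s')).
    with torch.no_grad():
        next_q = critic(torch.cat([next_state, actor(next_state)], dim=-1))
        target = reward + gamma * next_q
    q = critic(torch.cat([state, action], dim=-1))
    critic_loss = nn.functional.mse_loss(q, target)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

    # Actor: ascend the critic's estimate of Q(s, actor(s)).
    actor_loss = -critic(torch.cat([state, actor(state)], dim=-1)).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()

batch = 8   # one synthetic transition batch, stand-in for a replay buffer
ddpg_step(torch.randn(batch, STATE_DIM),
          torch.randn(batch, ACTION_DIM).tanh(),
          torch.randn(batch, 1),
          torch.randn(batch, STATE_DIM))
```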


Algorithms ◽  
2021 ◽  
Vol 14 (3) ◽  
pp. 80
Author(s):  
Qiuqi Han ◽  
Guangyuan Zheng ◽  
Chen Xu

Device-to-Device (D2D) communications, which enable direct communication between nearby user devices over the licensed spectrum, have been considered a key technique for improving spectral efficiency and system throughput in cellular networks (CNs). However, the limited licensed spectrum is insufficient to support additional cellular users (CUs) and D2D users as traffic grows in future wireless networks. Therefore, Long-Term Evolution-Unlicensed (LTE-U) and D2D-Unlicensed (D2D-U) technologies have been proposed to further enhance system capacity by extending CU and D2D communications onto the unlicensed spectrum. In this paper, we consider an LTE network in which CUs and D2D users are allowed to share the unlicensed spectrum with Wi-Fi users. To maximize the sum rate of all users while guaranteeing each user's quality of service (QoS), we jointly consider user access and resource allocation. To tackle the formulated problem, we propose a matching-iteration-based joint user access and resource allocation algorithm. Simulation results show that the proposed algorithm significantly improves system throughput compared to benchmark algorithms.
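The abstract does not detail the matching iteration itself; below is a minimal deferred-acceptance-style sketch of how users might be matched to unlicensed channels by achievable rate, in the general spirit of matching-based allocation. The rate matrix, per-channel capacity, and all names are illustrative assumptions, not the paper's algorithm.

```python
# Illustrative deferred-acceptance matching between users and unlicensed
# channels: users propose in order of preference, channels keep their best
# users up to a capacity limit. Rates and QoS handling are toy assumptions.

def match_users_to_channels(rates, capacity=1):
    """rates[u][c]: achievable rate of user u on channel c.
    Each channel accepts at most `capacity` users. Returns {user: channel}."""
    n_users, n_channels = len(rates), len(rates[0])
    prefs = {u: sorted(range(n_channels), key=lambda c: -rates[u][c])
             for u in range(n_users)}
    next_pick = [0] * n_users                   # next preference per user
    held = {c: [] for c in range(n_channels)}   # users each channel holds
    free = list(range(n_users))
    while free:
        u = free.pop()
        if next_pick[u] >= n_channels:
            continue                            # user exhausted all channels
        c = prefs[u][next_pick[u]]
        next_pick[u] += 1
        held[c].append(u)
        if len(held[c]) > capacity:             # channel keeps its best users
            held[c].sort(key=lambda v: -rates[v][c])
            free.append(held[c].pop())          # reject the worst proposer
    return {u: c for c, users in held.items() for u in users}

rates = [[3.0, 1.0], [2.5, 2.0], [0.5, 1.5]]    # 3 users, 2 channels
print(match_users_to_channels(rates))           # -> {0: 0, 1: 1}
```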


2021 ◽  
Vol 13 (9) ◽  
pp. 1701
Author(s):  
Leonardo Bagaglini ◽  
Paolo Sanò ◽  
Daniele Casella ◽  
Elsa Cattani ◽  
Giulia Panegrossi

This paper describes the Passive microwave Neural network Precipitation Retrieval algorithm for climate applications (PNPR-CLIM), developed with funding from the Copernicus Climate Change Service (C3S), implemented by ECMWF on behalf of the European Union. The algorithm has been designed and developed to exploit the two cross-track scanning microwave radiometers, AMSU-B and MHS, towards the creation of a long-term (2000–2017) global precipitation climate data record (CDR) for the ECMWF Climate Data Store (CDS). The algorithm has been trained on an observational dataset built from one year of coincident MHS and GPM-CO Dual-frequency Precipitation Radar (DPR) observations. The dataset includes the Fundamental Climate Data Record (FCDR) of AMSU-B and MHS brightness temperatures, provided by the Fidelity and Uncertainty in Climate data records from Earth Observation (FIDUCEO) project, and the DPR-based surface precipitation rate estimates used as reference. Combining high-quality, calibrated, and harmonized long-term input data (the FIDUCEO microwave brightness temperature FCDR) with the ability of neural networks to learn and generalize has made it possible to limit the use of ancillary model-derived environmental variables, thus reducing the influence of model uncertainties that could compromise the accuracy of the estimates. The PNPR-CLIM estimated precipitation distribution is in good agreement with independent DPR-based estimates. A multiscale assessment of the algorithm's performance is presented against high-quality regional ground-based radar products and global precipitation datasets. The regional and global three-year (2015–2017) verification analysis shows that, despite the simplicity of the algorithm in terms of input variables and processing, PNPR-CLIM outperforms NASA GPROF in rainfall detection, while the two are comparable in rainfall quantification. The global analysis reveals weaknesses at higher latitudes and at mid latitudes in winter, mainly linked to the poorer quality of precipitation retrieval in cold and dry conditions.
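As a rough illustration of this class of retrieval, the toy PyTorch model below maps a vector of MHS-like brightness temperatures to a non-negative surface precipitation rate, trained against a radar-derived reference. The channel count, layer sizes, and the single training pair are invented stand-ins, not PNPR-CLIM's architecture or data.

```python
# Toy feedforward retrieval in the spirit of PNPR-CLIM: brightness
# temperatures in, precipitation rate out. All values are stand-ins.
import torch
import torch.nn as nn

N_CHANNELS = 5                       # e.g. the five MHS channels (assumed)

model = nn.Sequential(nn.Linear(N_CHANNELS, 32), nn.ReLU(),
                      nn.Linear(32, 16), nn.ReLU(),
                      nn.Linear(16, 1), nn.Softplus())  # rate >= 0 mm/h
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Stand-in training pair: brightness temperatures (K) -> reference rate.
tb = torch.tensor([[190.0, 245.0, 250.0, 255.0, 260.0]])
rain_ref = torch.tensor([[2.4]])     # mm/h, hypothetical radar match-up

for _ in range(200):                 # fit the toy example
    opt.zero_grad()
    loss = loss_fn(model(tb), rain_ref)
    loss.backward()
    opt.step()
print(model(tb).item())              # retrieved rate for the input vector
```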

