Resource Allocation in Orthogonal Frequency Division Multiple Access-Long Term Evolution: Neural Network

2019, Vol 16 (12), pp. 5026-5031
Author(s): Kethavath Narender, C. Puttamadappa

Orthogonal Frequency Division Multiple Access (OFDMA) is used in high-rate Wireless Communication Systems (WCSs). In such systems, a femtocell is a small in-building Base Station (BS) that consumes little power, covers a short range, and operates at low cost. The short distance between transmitter and receiver in a femtocell provides higher signal quality. Despite these advantages, femtocell networks face significant challenges in interference management. In particular, interference between the macrocell and the femtocell becomes the fundamental issue in the OFDMA-Long Term Evolution (OFDMA-LTE) framework. In this paper, a Neural Network and Hybrid Bee Colony and Cuckoo Search based Resource Allocation (NN-HBCCS-RA) scheme for the OFDMA-LTE framework is presented. The optimal power values are updated to allocate resources to all users in the femtocell and the macrocell. Compared with conventional techniques, the NN-HBCCS method achieves improved Signal to Interference plus Noise Ratio (SINR), spectral efficiency, and throughput.
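
The abstract does not give the NN-HBCCS equations; as a rough illustration of the quantity such a scheme optimizes, the following minimal sketch (hypothetical gains, powers, and noise level, not the paper's model) evaluates the downlink SINR of one femtocell user under cross-tier macrocell interference for a candidate power assignment.

```python
import numpy as np

def sinr_per_user(p_serving, g_serving, p_cross, g_cross, noise_power):
    """SINR of one user: serving-cell received power over cross-tier interference plus noise.

    p_serving, g_serving : transmit power and channel gain of the serving BS (femto or macro)
    p_cross, g_cross     : powers and gains of the interfering (cross-tier) BSs
    """
    interference = np.sum(np.asarray(p_cross) * np.asarray(g_cross))
    return (p_serving * g_serving) / (interference + noise_power)

# Hypothetical example: one femtocell user interfered by the macrocell BS.
sinr = sinr_per_user(p_serving=0.1, g_serving=1e-6,
                     p_cross=[20.0], g_cross=[1e-9], noise_power=1e-10)
print("SINR (dB):", round(10 * np.log10(sinr), 2),
      "spectral efficiency (bit/s/Hz):", round(np.log2(1 + sinr), 2))
```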

2018, Vol 2018, pp. 1-10
Author(s): Tao Wang, Chao Ma, Yanzan Sun, Shunqing Zhang, Yating Wu

This paper studies energy efficiency (EE) maximization for an orthogonal frequency division multiple access (OFDMA) downlink network aided by a relay station (RS) with subcarrier pairing. A highly flexible transmission protocol is considered, where each transmission is executed in two time slots. Every subcarrier in each slot can either be used in direct mode or be paired with a subcarrier in the other slot to operate in relay mode. The resource allocation (RA) in such a network is highly complicated, because it has to determine the operation mode of the subcarriers, the assignment of subcarriers to users, and the power allocation of the base station and the RS. We first propose a mathematical description of the RA strategy. Then, an RA algorithm is derived to find the globally optimal RA that maximizes the EE. Finally, we present extensive numerical results to show the impact of the network's minimum required rate, the number of users, and the relay position on the maximum EE of the network.
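
As a rough, hypothetical illustration of the per-subcarrier mode decision described above (not the paper's globally optimal EE algorithm, which maximizes rate divided by consumed power; this toy only compares the rate term), the sketch below compares the two-slot rate of a subcarrier used in direct mode against a decode-and-forward relay pairing and picks the better option.

```python
import numpy as np

def rate(snr_linear):
    return np.log2(1.0 + snr_linear)

def choose_mode(snr_direct, snr_bs_rs, snr_rs_ue):
    """Compare direct transmission in both slots with a two-hop relay pairing.

    Direct mode: the subcarrier carries a direct BS->UE link in each of the two slots.
    Relay mode : slot 1 carries BS->RS, slot 2 carries RS->UE; the end-to-end rate
                 is limited by the weaker hop (decode-and-forward). SNRs are hypothetical.
    """
    direct_rate = 2 * rate(snr_direct)                  # two slots of direct use
    relay_rate = min(rate(snr_bs_rs), rate(snr_rs_ue))  # one relayed block over two slots
    return ("relay", relay_rate) if relay_rate > direct_rate else ("direct", direct_rate)

print(choose_mode(snr_direct=0.5, snr_bs_rs=20.0, snr_rs_ue=15.0))
```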


Author(s): Geetanjli

Power control in CDMA systems allows numerous users to share the system's resources fairly among each other, leading to increased capacity. With proper power control, the capacity of a CDMA system is large compared with frequency division multiple access (FDMA) and time division multiple access (TDMA). If power control is not achieved, problems such as the near-far effect begin to dominate and consequently reduce the capacity of the CDMA system. For power control in CDMA systems, optimization algorithms such as the genetic algorithm and the particle swarm algorithm can be used to determine a suitable power vector. These power levels are determined at the base station and announced to the mobile units, which adjust their transmit power accordingly. The performance of the algorithms is examined through both analysis and computer simulations, and compared with well-known algorithms from the literature.
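
The abstract names the algorithms but not their formulation; the following minimal particle swarm sketch (hypothetical link gains, SIR target, and penalty weight, not the author's exact fitness function) searches for an uplink power vector that meets per-user SIR targets with low total transmit power.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical uplink model: K users, gains G[i] to the base station, common SIR target gamma.
K, gamma, noise, p_max = 4, 5.0, 1e-3, 1.0
G = rng.uniform(0.1, 1.0, K)

def fitness(p):
    """Total power plus a heavy penalty for each user whose SIR falls below the target."""
    interference = np.array([G @ p - G[i] * p[i] for i in range(K)])
    sir = G * p / (interference + noise)
    return p.sum() + 100.0 * np.sum(np.maximum(gamma - sir, 0.0))

# Minimal particle swarm: particle positions are candidate power vectors in [0, p_max]^K.
n_particles, iters, w, c1, c2 = 30, 200, 0.7, 1.5, 1.5
x = rng.uniform(0, p_max, (n_particles, K))
v = np.zeros_like(x)
pbest, pbest_val = x.copy(), np.array([fitness(p) for p in x])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = np.clip(x + v, 0.0, p_max)
    vals = np.array([fitness(p) for p in x])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = x[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print("best power vector:", np.round(gbest, 4))
```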


2021
Author(s): Shuo Zhang, Shuo Shi, Tianming Feng, Xuemai Gu

Unmanned aerial vehicles (UAVs) have been widely used in communication systems due to their excellent maneuverability and mobility. The ultra-high speed, ultra-low latency, and ultra-high reliability of 5th generation wireless systems (5G) have further promoted the vigorous development of UAVs. Compared with traditional means of communication, a UAV can provide services for ground terminals without time and space constraints, so it is often used as an aerial base station (BS). In emergency communications and rescue in particular, it provides temporary communication signal coverage for disaster areas. When facing large-scale and scattered user coverage tasks, the UAV's trajectory is an important factor affecting its energy consumption and communication performance. In this paper, we consider a UAV emergency communication network where the UAV aims to achieve complete coverage of potential underlay D2D users (DUs). The trajectory planning problem is transformed into the deployment and connection problem of stop points (SPs). Targeting trajectory length and sum throughput, two trajectory planning algorithms based on K-means are proposed. Due to the non-convexity of the sum-throughput optimization, we present a sub-optimal solution using the successive convex approximation (SCA) method. To balance the trade-off between trajectory length and sum throughput, we propose a joint evaluation index that is used as an objective function to further optimize the trajectory. Simulation results show the validity of the proposed algorithms, which outperform the well-known benchmark scheme in terms of trajectory length and sum throughput.
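
As a toy illustration of the stop-point idea (hypothetical user coordinates; not the paper's joint evaluation index or SCA step), the sketch below clusters ground users with K-means to place stop points and orders them with a nearest-neighbour tour as a simple proxy for shortening the trajectory.

```python
import numpy as np

rng = np.random.default_rng(1)
users = rng.uniform(0, 1000, (60, 2))   # hypothetical ground-user positions (metres)
K = 5                                   # number of stop points (SPs)

# Plain K-means: each SP is the centroid of the users it serves.
sps = users[rng.choice(len(users), K, replace=False)].copy()
for _ in range(50):
    labels = np.argmin(np.linalg.norm(users[:, None] - sps[None], axis=2), axis=1)
    for k in range(K):
        if np.any(labels == k):
            sps[k] = users[labels == k].mean(axis=0)

# Nearest-neighbour ordering of the SPs, starting from the first one.
order, remaining = [0], set(range(1, K))
while remaining:
    last = sps[order[-1]]
    nxt = min(remaining, key=lambda k: np.linalg.norm(sps[k] - last))
    order.append(nxt)
    remaining.remove(nxt)

length = sum(np.linalg.norm(sps[order[i + 1]] - sps[order[i]]) for i in range(K - 1))
print("ordered stop points:\n", np.round(sps[order], 1), "\ntrajectory length:", round(length, 1))
```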


Electronics, 2020, Vol 9 (11), pp. 1844
Author(s): Minhoe Kim, Woongsup Lee, Dong-Ho Cho

In this paper, we investigate a deep learning based resource allocation scheme for massive multiple-input-multiple-output (MIMO) communication systems, where a base station (BS) with a large-scale antenna array communicates with a user equipment (UE) using beamforming. In particular, we propose Deep Scanning, in which a near-optimal beamforming vector can be found based on deep Q-learning. Through simulations, we confirm that the optimal beam vector can be found with high probability. We also show that the complexity required to find the optimal beam vector can be reduced significantly in comparison with conventional beam search schemes.
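
The paper's deep Q-learning model is not reproduced here; as a hypothetical point of reference, the sketch below builds a DFT beam codebook and performs the exhaustive beam search whose complexity a learned scanning policy such as Deep Scanning aims to reduce.

```python
import numpy as np

def dft_codebook(n_antennas, n_beams):
    """Columns are unit-norm DFT beamforming vectors over the candidate directions."""
    angles = np.arange(n_beams) / n_beams
    n = np.arange(n_antennas)[:, None]
    return np.exp(2j * np.pi * n * angles[None, :]) / np.sqrt(n_antennas)

rng = np.random.default_rng(2)
N, B = 64, 128                                   # BS antennas, beams in the codebook
h = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)  # toy channel

W = dft_codebook(N, B)
gains = np.abs(h.conj() @ W) ** 2                # beamforming gain of every candidate beam
best = int(np.argmax(gains))                     # exhaustive search evaluates all B beams
print("best beam index:", best, "gain:", round(float(gains[best]), 2))
```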


Author(s): Chunyi Wu, Gaochao Xu, Yan Ding, Jia Zhao

Large-scale task processing based on cloud computing has become crucial to big data analysis and disposal in recent years. Most previous work utilizes conventional methods and architectures designed for general-scale tasks to handle massive numbers of tasks, and is therefore limited by issues of computing capability, data transmission, etc. Based on this argument, a fat-tree structure-based approach called LTDR (Large-scale Tasks processing using Deep network model and Reinforcement learning) is proposed in this work. Aiming at exploring the optimal task allocation scheme, a virtual network mapping algorithm based on a deep convolutional neural network and Q-learning is presented herein. After feature extraction, we design and implement a policy network to make node-mapping decisions. The link mapping scheme can be obtained by the designed distributed value-function-based reinforcement learning model. Eventually, tasks are allocated onto proper physical nodes and processed efficiently. Experimental results show that LTDR can significantly improve the utilization of physical resources and long-term revenue while satisfying task requirements in big data.
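
Below is a minimal sketch of the tabular Q-learning update that value-function-based mapping decisions rely on; the state/action encoding and reward here are hypothetical placeholders, not LTDR's deep network model or its actual mapping environment.

```python
import numpy as np

rng = np.random.default_rng(3)
n_states, n_actions = 10, 4          # hypothetical: states = request features, actions = candidate nodes
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.9, 0.2

def step(state, action):
    """Hypothetical environment: reward favours mapping onto the 'right' node for the request."""
    reward = 1.0 if action == state % n_actions else -0.1
    return rng.integers(n_states), reward

state = rng.integers(n_states)
for _ in range(5000):
    action = rng.integers(n_actions) if rng.random() < epsilon else int(np.argmax(Q[state]))
    next_state, reward = step(state, action)
    # Standard Q-learning target: r + gamma * max_a' Q(s', a')
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
    state = next_state

print("greedy mapping per state:", np.argmax(Q, axis=1))
```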


2018, Vol 2018, pp. 1-14
Author(s): Gábor Fodor

Device-to-device (D2D) communications in cellular spectrum have the potential of increasing the spectral and energy efficiency by taking advantage of the proximity and reuse gains. Although several resource allocation (RA) and power control (PC) schemes have been proposed in the literature, a comparison of the performance of such algorithms as a function of the available channel state information has not been reported. In this paper, we examine which large scale channel gain knowledge is needed by practically viable RA and PC schemes for network assisted D2D communications. To this end, we propose a novel near-optimal and low-complexity RA scheme that can be advantageously used in tandem with the optimal binary power control scheme and compare its performance with three heuristics-based RA schemes that are combined either with the well-known 3GPP Long-Term Evolution open-loop path loss compensating PC or with an iterative utility optimal PC scheme. When channel gain knowledge about the useful as well as interfering (cross) channels is available at the cellular base station, the near-optimal RA scheme, termed Matching, combined with the binary PC scheme is superior. Ultimately, we find that the proposed low-complexity RA + PC tandem that uses some cross-channel gain knowledge provides superior performance.
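
The optimal binary power control referenced above restricts each transmitter to either full power or silence. As a hypothetical illustration (toy gain matrix, a cellular link and a D2D link reusing one resource; not the paper's Matching + PC tandem), the sketch below enumerates the on/off combinations using large-scale gains and keeps the one with the highest sum rate.

```python
import itertools
import numpy as np

def sum_rate(powers, G, noise):
    """Sum rate of L links sharing one resource; G[i, j] is the gain from transmitter j to receiver i."""
    rates = []
    for i, p_i in enumerate(powers):
        interference = sum(G[i, j] * p for j, p in enumerate(powers) if j != i)
        rates.append(np.log2(1.0 + G[i, i] * p_i / (interference + noise)))
    return sum(rates)

# Hypothetical two-link example: a cellular uplink and a D2D pair reusing its resource block.
G = np.array([[1.0e-6, 2.0e-8],
              [5.0e-8, 3.0e-6]])
p_max, noise = 0.2, 1e-10

best = max(itertools.product([0.0, p_max], repeat=2), key=lambda p: sum_rate(p, G, noise))
print("binary power decision:", best, "sum rate:", round(sum_rate(best, G, noise), 2))
```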


Sensors, 2019, Vol 19 (4), pp. 912
Author(s): Minjoong Rim, Chung Kang

One of the key requirements for next-generation wireless or cellular communication systems is to efficiently support a large number of connections for Internet of Things (IoT) applications, and uplink non-orthogonal multiple access (NOMA) schemes can be used for this purpose. In uplink NOMA systems, pilot symbols, as well as data symbols, can be superimposed onto shared resources. The error rate performance can be severely degraded due to channel estimation errors, especially when the number of superimposed packets is large. In this paper, we discuss uplink NOMA schemes with channel estimation errors, assuming that quadrature phase shift keying (QPSK) modulation is used. When pilot signals are superimposed onto the shared resources and a large number of devices perform random access concurrently on a single resource of the base station, the channels might not be accurately estimated even in high-SNR environments. We therefore propose an uplink NOMA scheme that can alleviate the performance degradation due to channel estimation errors.
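
To make the channel-estimation issue concrete, the following hypothetical simulation (random QPSK pilots, flat per-device channels; not the paper's proposed scheme) correlates the received signal against each device's own pilot when K devices superimpose on one shared resource; the cross-correlation with the other devices' pilots leaves a residual estimation error even at high SNR.

```python
import numpy as np

rng = np.random.default_rng(4)
K, L, snr_db, trials = 8, 32, 30.0, 2000   # devices, pilot length, SNR, Monte Carlo runs
noise_var = 10 ** (-snr_db / 10)

mse = 0.0
for _ in range(trials):
    # Unit-norm random QPSK pilots, one per device, superimposed on the shared resource.
    pilots = ((rng.integers(0, 2, (K, L)) * 2 - 1)
              + 1j * (rng.integers(0, 2, (K, L)) * 2 - 1)) / np.sqrt(2 * L)
    h = (rng.standard_normal(K) + 1j * rng.standard_normal(K)) / np.sqrt(2)
    noise = (rng.standard_normal(L) + 1j * rng.standard_normal(L)) * np.sqrt(noise_var / 2)
    y = pilots.T @ h + noise                 # received superposition of all devices' pilots

    h_hat = pilots.conj() @ y                # correlate with each device's own pilot
    mse += np.mean(np.abs(h_hat - h) ** 2)

print("per-device channel estimation MSE at", snr_db, "dB:", round(mse / trials, 4))
```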


Electronics, 2020, Vol 9 (9), pp. 1397
Author(s): Yishi Xue, Bo Xu, Wenchao Xia, Jun Zhang, Hongbo Zhu

Driven by its agile maneuverability and flexible deployment, the unmanned aerial vehicle (UAV) has become a potential enabler of terrestrial networks. In this paper, we consider downlink communications in a UAV-assisted wireless communication network, where a multi-antenna UAV assists the ground base station (GBS) in forwarding signals to multiple user equipments (UEs). The UAV is associated with the GBS through in-band wireless backhaul, which shares the spectrum resource with the access links between the UEs and the UAV. The optimization problem is formulated to maximize the downlink ergodic sum-rate by jointly optimizing the UAV placement, the spectrum resource allocation, and the transmit power matrix of the UAV. The deterministic equivalents of the UE's achievable rate and the backhaul capacity are first derived by utilizing large-dimensional random matrix theory, in which only the slowly varying large-scale channel state information is required. An approximation of the joint optimization problem is then introduced based on the deterministic equivalents. Finally, an algorithm is proposed to obtain the optimal solution of the approximate problem. Simulation results are provided to validate the accuracy of the deterministic equivalents and the effectiveness of the proposed method.
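
The deterministic-equivalent analysis itself is too involved to sketch here; as a loosely related toy example (free-space-like path loss, equal power across links, all values hypothetical and not the paper's model), the code below grid-searches a 2-D UAV position that maximizes a backhaul-limited sum rate, illustrating why placement and in-band backhaul have to be considered jointly.

```python
import numpy as np

rng = np.random.default_rng(5)
ues = rng.uniform(0, 500, (6, 2))        # hypothetical UE positions (metres)
gbs = np.array([0.0, 0.0])               # ground base station at the origin
height, p_tx, noise, bw = 100.0, 1.0, 1e-9, 1.0

def link_rate(a, b, z):
    """Free-space-like rate between two planar points, with a height difference of z metres."""
    d2 = np.sum((a - b) ** 2) + z ** 2
    return bw * np.log2(1.0 + p_tx / (noise * d2))

best_pos, best_rate = None, -1.0
for x in np.linspace(0, 500, 26):
    for y in np.linspace(0, 500, 26):
        uav = np.array([x, y])
        backhaul = link_rate(uav, gbs, height)                    # GBS -> UAV wireless backhaul
        access = sum(link_rate(uav, ue, height) for ue in ues)    # UAV -> UE access links
        served = min(access, backhaul)                            # backhaul caps what can be forwarded
        if served > best_rate:
            best_pos, best_rate = uav, served

print("best UAV position:", best_pos, "backhaul-limited sum rate:", round(best_rate, 2))
```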

