Analysis of Computational Complexity and Processing Time Evaluation of the Protocol Stack in 5G New Radio

Author(s):  
Ya. V. Kryukov,
D. A. Pokamestov,
E. V. Rogozhnikov,
S. A. Novichkov,
...

Radio access networks for 5G New Radio mobile communication systems are currently being actively deployed. Network architectures are evolving rapidly, with a significant part of the network functions executed in the virtualized cloud space of a general-purpose computer. The computing power of that computer must be sufficient to run the network protocols in real time, and to reduce the cost of deploying 5G NR networks, the configuration of each remote computer must be optimally matched to the scale of the particular network. An urgent research direction is therefore the assessment of the execution time of the 5G NR protocol stack on various computer configurations, together with the development of a mathematical model for analyzing the measured data, approximating the dependencies, and making recommendations. In this paper, the authors provide an overview of the main 5G NR network architectures, as well as a description of the methods and tools that can be used to estimate the computational complexity of the 5G NR protocol stack. The final section analyzes the computational complexity of the protocol stack as measured in experiments by colleagues at partner institutions.
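As a hedged illustration of the kind of measurement the paper describes, the sketch below times a stand-in protocol-stack function across payload sizes and fits a linear model for extrapolating to other configurations. The function `ldpc_encode` is a hypothetical placeholder workload, not a real 5G NR implementation.

```python
# Sketch: measure per-call processing time of a protocol-stack stage and fit a
# simple linear model t(n) = a*n + b for extrapolation to larger payloads.
# `ldpc_encode` is a placeholder standing in for a real PHY-layer function.
import time
import numpy as np

def ldpc_encode(payload: np.ndarray) -> np.ndarray:
    # Placeholder workload, NOT an actual LDPC encoder.
    return np.cumsum(payload) % 256

def measure(sizes, runs=100):
    results = []
    for n in sizes:
        payload = np.random.randint(0, 256, size=n)
        start = time.perf_counter()
        for _ in range(runs):
            ldpc_encode(payload)
        results.append((n, (time.perf_counter() - start) / runs))
    return np.array(results)

samples = measure([1024, 4096, 16384, 65536])
a, b = np.polyfit(samples[:, 0], samples[:, 1], 1)  # least-squares fit
print(f"predicted time for n=262144: {a * 262144 + b:.6f} s")
```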

Author(s):  
Stuart McKernan

For many years, quantitative diffraction contrast experiments consisted largely of determining dislocation Burgers vectors by applying the g·b = 0 invisibility criterion to several different 2-beam images. Since the advent of the personal computer revolution, the computing power available for image-processing and image-simulation calculations has become enormous and ubiquitous. Several programs now exist to simulate diffraction contrast images under various approximations. The most common approximations are the use of only 2 beams or a single systematic row to calculate the image contrast, or the use of a column approximation. The growing body of literature comparing experimental and simulated images shows that very close agreement between the two can be obtained, provided the choice of parameters and the assumptions made in performing the calculation are properly dealt with. The simulation of images of defects in materials has therefore, in many cases, become a tractable problem.
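A minimal sketch of the g·b = 0 criterion mentioned above: a dislocation is, to first order, invisible in a 2-beam image when the dot product of the diffraction vector g with the candidate Burgers vector b vanishes. The vectors here are invented examples for illustration.

```python
# g.b = 0 invisibility check: for each candidate Burgers vector b, the defect
# is (to first order) invisible when the dot product with g vanishes.
import numpy as np

g = np.array([2, 0, 0])                      # example diffraction vector (hkl)
candidates = {
    "a/2[110]": np.array([1, 1, 0]) / 2,
    "a/2[011]": np.array([0, 1, 1]) / 2,
    "a/2[101]": np.array([1, 0, 1]) / 2,
}
for label, b in candidates.items():
    visible = not np.isclose(g @ b, 0.0)
    print(f"g={g}, b={label}: g.b={g @ b:+.1f} -> "
          f"{'visible' if visible else 'invisible'}")
```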


2021, Vol. 11 (15), pp. 7007
Author(s):  
Janusz P. Paplinski,
Aleksandr Cariow

This article presents an efficient algorithm for computing a 10-point DFT. The proposed algorithm reduces the number of multiplications at the cost of a slight increase in the number of additions compared with known algorithms. Using a 10-point DFT for harmonic analysis of power systems can improve accuracy and reduce errors caused by spectral leakage. The paper also compares the computational complexity of an L×10^M-point DFT with that of a 2^M-point DFT.
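The proposed algorithm itself is not reproduced here. As a baseline for what it improves on, the sketch below implements the direct 10-point DFT, whose roughly N² = 100 complex multiplications are what structured algorithms trade for extra additions, and verifies it against NumPy's FFT.

```python
# Baseline only (not the authors' algorithm): a direct 10-point DFT in matrix
# form costs about N^2 complex multiplications; verified against numpy's FFT.
import numpy as np

N = 10
x = np.random.randn(N) + 1j * np.random.randn(N)
W = np.exp(-2j * np.pi * np.outer(np.arange(N), np.arange(N)) / N)
X_direct = W @ x                 # naive O(N^2) evaluation
assert np.allclose(X_direct, np.fft.fft(x))
print(X_direct[:3])
```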


Author(s):  
Konstantinos Poularakis,
Leandros Tassiulas

A significant portion of today's network traffic is due to recurring downloads of a few popular contents. It has been observed that replicating these contents in caches installed at network edges, close to users, can drastically reduce network bandwidth usage and improve content access delay. Such caching architectures have gained increasing interest in recent years as a way of dealing with explosive traffic growth, fuelled further by the downward slope in storage prices. In this work, we provide an overview of caching with a particular emphasis on emerging network architectures that enable caching at the radio access network. In this context, novel challenges arise due to the broadcast nature of the wireless medium, which allows simultaneously serving multiple users tuned into a multicast stream, and due to the mobility of users, who may be frequently handed off from one cell tower to another. Existing results indicate that caching at the wireless edge has great potential for removing bottlenecks on the wired backbone networks. Taking the schedule of multicast service and the mobility profiles into consideration is crucial to extracting the maximum benefit in network performance.
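As a hedged illustration of why edge caching pays off, the sketch below simulates a cache holding the most popular items under a Zipf popularity law: a cache covering a small fraction of the catalogue absorbs a large share of requests. Catalogue size, cache size, and the Zipf exponent are arbitrary example values.

```python
# Zipf-popularity cache simulation: a small cache of top-ranked contents
# serves a disproportionately large fraction of requests.
import numpy as np

catalogue, cache_size, alpha, n_requests = 10_000, 100, 0.8, 100_000
ranks = np.arange(1, catalogue + 1)
p = ranks ** -alpha
p /= p.sum()                                 # Zipf request probabilities
requests = np.random.choice(ranks, size=n_requests, p=p)
cached = set(ranks[:cache_size])             # cache the most popular contents
hit_rate = np.mean([r in cached for r in requests])
print(f"cache holds {cache_size / catalogue:.1%} of catalogue, "
      f"hit rate ~ {hit_rate:.1%}")
```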


2012, Vol. 239-240, pp. 1522-1527
Author(s):  
Wen Bo Wu,
Yu Fu Jia,
Hong Xing Sun

The bottleneck assignment (BA) and generalized assignment (GA) problems and their exact solutions are explored in this paper. First, a determinant elimination (DE) method is proposed, based on a discussion of the time and space complexity of the enumeration method for both BA and GA problems. An optimization algorithm for the pre-assignment problem is then discussed, and adjustment and transformation of the cost matrix are adopted to reduce the computational complexity of the DE method. Finally, a synthesis method for both BA and GA problems is presented. Numerical experiments are carried out, and the results indicate that the proposed method is feasible and highly efficient.
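For reference, the sketch below implements the brute-force enumeration baseline whose factorial cost motivates the DE method: a bottleneck assignment minimizes the *maximum* cost over all n! permutations. The cost matrix is an invented example; this is not the paper's method.

```python
# Enumeration baseline for the bottleneck assignment problem: check all n!
# permutations and keep the one minimizing the maximum selected cost.
# Feasible only for small n, which motivates faster exact methods.
from itertools import permutations
import numpy as np

cost = np.array([[4, 2, 8],
                 [4, 3, 7],
                 [3, 1, 6]])
n = cost.shape[0]
best = min(permutations(range(n)),
           key=lambda p: max(cost[i, p[i]] for i in range(n)))
print("assignment:", best,
      "bottleneck:", max(cost[i, best[i]] for i in range(n)))
```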


2021
Author(s):  
Mircea-Adrian Digulescu

It has long been known that cryptographic schemes offering provably unbreakable security exist, namely the One Time Pad (OTP). The OTP, however, comes at the cost of a very long secret key: as long as the plain-text itself. In this paper we propose an encryption scheme which we (boldly) claim offers the same level of security as the OTP, while allowing for much shorter keys, of size polylogarithmic in the computing power available to the adversary. The scheme requires a large sequence of truly random words, of length polynomial in both the plain-text size and the logarithm of the adversary's computing power. We claim that it ensures such an attacker cannot discern the cipher output from random data, except with small probability. We also show how the scheme can be adapted to encrypt several plain-texts in the same cipher output, with almost independent keys, and describe how it can be used in lieu of a One Way Function.
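For context, the sketch below shows the classical OTP that the proposed scheme is measured against (the scheme itself is not reproduced here): encryption is a bitwise XOR with a truly random key as long as the message, and decryption is the same operation.

```python
# Classical One Time Pad: XOR the plain-text with a truly random key of the
# same length. Perfectly secret, but the key is as long as the message.
import secrets

def otp_encrypt(plaintext: bytes, key: bytes) -> bytes:
    assert len(key) == len(plaintext), "OTP key must match plain-text length"
    return bytes(p ^ k for p, k in zip(plaintext, key))

msg = b"attack at dawn"
key = secrets.token_bytes(len(msg))          # truly random, used once
ct = otp_encrypt(msg, key)
assert otp_encrypt(ct, key) == msg           # decryption is the same XOR
print(ct.hex())
```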


2013, Vol. 2013, pp. 1-8
Author(s):  
Konstantinos Koufos,
Riku Jäntti

The key bottleneck for secondary spectrum usage is the aggregate interference caused to primary system receivers by simultaneous secondary transmissions. Existing power allocation algorithms for multiple secondary transmitters in the TV white space either fail to protect the TV service in all cases or allocate extremely low power levels to some of the transmitters. In this paper, we propose a power allocation algorithm that treats the secondary transmitters equally and is able to protect the TV service in all cases. When the number of secondary transmitters is high, the computational complexity of the proposed algorithm becomes high too. We show how the algorithm can be modified to reduce its computational complexity at the cost of negligible performance loss. The modified algorithm would permit a spectrum allocation database to allocate near-optimal transmit power levels to tens of thousands of secondary transmitters in real time. In addition, we describe how the modified algorithm can be applied to allow decentralized power allocation for mobile secondary transmitters. In that case, the proposed algorithm outperforms existing algorithms because it reduces the communication signalling overhead between the mobile secondary transmitters and the spectrum allocation database.
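The authors' exact algorithm is not reproduced here; the sketch below only illustrates the fairness idea under an aggregate-interference constraint, assigning all secondary transmitters one common power level scaled so the total interference at the TV receiver meets a threshold. The path gains and threshold are invented example values.

```python
# Fairness illustration (not the paper's algorithm): a single common power
# level p for all transmitters, chosen so that sum(p * gains) <= I_max.
import numpy as np

rng = np.random.default_rng(1)
n_tx = 1_000
gains = rng.exponential(1e-9, size=n_tx)     # path gains to the TV receiver
I_max = 1e-7                                 # aggregate interference limit
p = I_max / gains.sum()                      # equal power meeting the limit
print(f"common transmit power: {10 * np.log10(p * 1000):.1f} dBm")
```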


Author(s):  
W. Ostrowski,
K. Hanus

One of the popular uses of UAVs in photogrammetry is the preparation of archaeological documentation. A wide range of low-cost (consumer-grade) UAVs, together with the popularity of user-friendly photogrammetric software capable of producing satisfying results, has made it easier to prepare documentation for small archaeological sites. Using solutions of this kind is much more problematic for larger areas, however. The limited possibilities of autonomous flight make it significantly harder to obtain data for areas too large to be covered during a single mission. Moreover, the platforms used are sometimes not equipped with telemetry systems, which makes navigating and guaranteeing a similar quality of data across separate flights difficult. The simplest solution is to use a better UAV, but the cost of such devices often exceeds the financial capabilities of archaeological expeditions.

The aim of this article is to present a methodology for obtaining data over medium-scale areas using only a basic UAV. The proposed methodology assumes a simple multirotor without any flight planning system or telemetry; the platform is navigated solely on the basis of live-view images sent from the camera attached to the UAV. The presented survey was carried out with a simple GoPro camera which, from the perspective of photogrammetric use, was not the optimal configuration due to the fisheye geometry of the lens. Another limitation is the actual operational range of UAVs, which in the case of cheaper systems rarely exceeds 1 kilometre and is in fact often much smaller. The surveyed area must therefore be divided into sub-blocks corresponding to the range of the drone. This is inconvenient, since the blocks must overlap so that they can later be merged during processing, which increases both the length of the required flights and the computing power needed to process the greater number of images (a simple block-planning sketch follows this abstract).

These issues make prospection highly inconvenient, but not impossible. Our paper presents our experiences through two case studies: surveys conducted in Nepal under the aegis of UNESCO, and work carried out as part of a Polish archaeological expedition in Cyprus, both of which demonstrate that the proposed methodology yields satisfying results. The article is an important voice in the ongoing debate between commercial and academic archaeologists about the balance between the required standards of archaeological work and the economic capabilities of archaeological missions.
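A minimal sketch, under assumed example dimensions in metres, of the sub-block planning described above: the survey area is split into overlapping blocks sized to the drone's operational range, so that adjacent blocks share imagery and can be merged during processing.

```python
# Split a survey area into overlapping sub-blocks that fit the drone's range.
# All dimensions are example values in metres.
def plan_blocks(area_w, area_h, block, overlap):
    step = block - overlap
    blocks = []
    y = 0
    while y < area_h:
        x = 0
        while x < area_w:
            blocks.append((x, y, min(x + block, area_w), min(y + block, area_h)))
            x += step
        y += step
    return blocks

for b in plan_blocks(area_w=2500, area_h=1500, block=800, overlap=200):
    print(b)   # (x_min, y_min, x_max, y_max) of each flight block
```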


Author(s):  
O. Gertsiy

The article analyzes the main characteristics of lossy and lossless methods for compressing graphic information (RLE, LZW, Huffman coding, DEFLATE, JBIG, JPEG, JPEG 2000, Lossless JPEG, fractal, and wavelet methods). Effective transmission and storage of images in railway communication systems is now an important task, because large images require large storage resources. The task has become especially pressing in recent years, as the problems of transmitting information over the telecommunication channels of transport infrastructure have grown urgent. There is also a great need for video conferencing, where video data must be compressed effectively: the greater the amount of data, the greater the cost of transmitting the information. The use of image compression methods that reduce file size is therefore the solution to this task. The study highlights the advantages and disadvantages of the compression methods, and a comparative analysis of their basic capabilities is carried out. The relevance of the work lies in the efficient transfer and storage of graphic information, since big data requires large storage resources; its practical significance lies in solving the problem of effectively reducing data size by applying known compression methods.
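As a minimal illustration of the simplest lossless method compared above, the sketch below implements run-length encoding (RLE), which stores runs of identical symbols as (count, symbol) pairs; it compresses well only when the input contains long runs.

```python
# Run-length encoding: represent runs of identical symbols as (count, symbol).
from itertools import groupby

def rle_encode(data: str) -> list[tuple[int, str]]:
    return [(len(list(run)), ch) for ch, run in groupby(data)]

def rle_decode(pairs: list[tuple[int, str]]) -> str:
    return "".join(ch * n for n, ch in pairs)

sample = "AAAABBBCCDAA"
encoded = rle_encode(sample)
assert rle_decode(encoded) == sample
print(encoded)   # [(4, 'A'), (3, 'B'), (2, 'C'), (1, 'D'), (2, 'A')]
```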


2010, Vol. 8, pp. 257-262
Author(s):  
C. Mannweiler,
A. Klein,
J. Schneider,
H. D. Schotten

Abstract. The increasing availability of both static and dynamic context information has steadily been driving the development of context-aware communication systems. Adapting system behavior to the current context of the network, the user, and the terminal can yield significant end-to-end performance improvements. In this paper, we present a concept for using context information, in particular location information and movement prediction, for Heterogeneous Access Management (HAM). In a first step, we outline the functional architecture of a distributed and extensible context management system (CMS) that defines the roles, tasks, and interfaces of all modules within such a system for large-scale context acquisition and dissemination. In a second step, we show how the available context information can be exploited to optimize terminal handover decisions in a multi-RAT (radio access technology) environment. In addition, we describe the method used for predicting terminal location, as well as the objective functions used for evaluating and comparing system performance. Finally, we present preliminary simulation results demonstrating that HAM systems that include current and predicted terminal context information in the handover decision process clearly outperform conventional systems.
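A hedged sketch of a handover decision of the kind described: each available RAT is scored with a weighted objective over current and predicted context, and the best-scoring one is selected. The weights, metrics, and candidate values are invented examples, not the paper's objective functions.

```python
# Context-aware RAT selection: score each candidate with a weighted objective
# over current metrics and a predicted-coverage term, then pick the best.
candidates = {
    "LTE":  {"throughput": 40.0, "latency": 30.0, "predicted_coverage": 0.9},
    "WLAN": {"throughput": 90.0, "latency": 10.0, "predicted_coverage": 0.4},
}
weights = {"throughput": 0.5, "latency": -0.3, "predicted_coverage": 50.0}

def score(ctx: dict) -> float:
    return sum(weights[k] * v for k, v in ctx.items())

best = max(candidates, key=lambda rat: score(candidates[rat]))
print({rat: round(score(ctx), 1) for rat, ctx in candidates.items()}, "->", best)
```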


2019
Author(s):  
Simon Johansson,
Oleksii Ptykhodko,
Josep Arús-Pous,
Ola Engkvist,
Hongming Chen

In recent years, deep learning for de novo molecular generation has become a rapidly growing research area. Recurrent neural networks (RNNs) operating on the SMILES molecular representation are among the most common approaches. A recent study showed that the differentiable neural computer (DNC) can considerably improve over the RNN for modeling sequential data. In the current study, the DNC has been implemented as an extension to REINVENT, an RNN-based model that has already been used successfully for de novo molecular design. The model was benchmarked on its capacity to learn the SMILES language on the GDB-13 and MOSES datasets. The DNC shows improvement on all test cases conducted, at the cost of significantly increased computational time and memory consumption.
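As a rough illustration of the RNN-over-SMILES idea (not REINVENT or the DNC themselves), the sketch below defines a character-level GRU that predicts the next SMILES token and samples from it greedily; the vocabulary and sizes are toy values invented for illustration.

```python
# Toy character-level RNN over SMILES tokens: the GRU predicts the next token;
# sampling from a trained model generates molecules. This model is untrained.
import torch
import torch.nn as nn

vocab = ["^", "$", "C", "N", "O", "(", ")", "=", "1"]   # ^/$ = start/end
stoi = {ch: i for i, ch in enumerate(vocab)}

class SmilesRNN(nn.Module):
    def __init__(self, vocab_size, hidden=64):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, hidden)
        self.gru = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, x, h=None):
        z, h = self.gru(self.emb(x), h)
        return self.out(z), h

model = SmilesRNN(len(vocab))
x = torch.tensor([[stoi["^"]]])              # start token
h, out = None, []
for _ in range(20):                          # greedy sampling loop
    logits, h = model(x, h)
    x = logits[:, -1].argmax(dim=-1, keepdim=True)
    ch = vocab[x.item()]
    if ch == "$":
        break
    out.append(ch)
print("sampled (untrained):", "".join(out))
```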

