Analysis of ultra-reliable and low-latency 5G communication for a factory automation use case

Author(s):  
Osman N. C. Yilmaz ◽  
Y.-P. Eric Wang ◽  
Niklas A. Johansson ◽  
Nadia Brahmi ◽  
Shehzad A. Ashraf ◽  
...  
Author(s):  
Nadia Brahmi ◽  
Osman N. C. Yilmaz ◽  
Ke Wang Helmersson ◽  
Shehzad A. Ashraf ◽  
Johan Torsner

2018 ◽  
Vol 16 ◽  
pp. 59-66
Author(s):  
Paul Arnold ◽  
Dirk von Hugo

Abstract. This paper summarizes expectations and requirements towards future converged communication systems denoted as the 5th Generation (5G). Multiple research and standardization activities worldwide contribute to the definition and specification of an Information and Communication Technology (ICT) that provides business customers and residential users with both existing and upcoming services demanding higher data rates and guaranteed performance in terms of QoS parameters such as low latency and high reliability. Representative use case families are threefold: enhanced Mobile Broadband (eMBB), massive Internet of Things (mIoT), and Critical Communication, i.e. Ultra-Low Latency (ULL)/Ultra-High Reliability (UHR). Deploying and operating a dedicated network for each service or use case separately would raise expenses and service costs to an unduly high level. Instead, the main concept of this new approach is the provision of a commonly shared physical infrastructure that offers resources for transport, processing, and storage of data to several separate logical networks (slices), each individually managed and configured by potentially multiple service providers. Besides a multitude of other initiatives, the EU-funded 5G NORMA project (5G Novel Radio Multiservice adaptive network Architecture) has developed an architecture that enables not only network programmability (configurability in software) but also network slicing and multi-tenancy (allowing independent third parties to offer an end-to-end service tailored to their needs) in a mobile network. Major aspects dealt with here are the selectable, on-demand support of mobility and service-aware QoE/QoS (Quality of Experience/Service) control. Specifically, we report on the outcome of the analysis of design criteria for mobility management schemes and the result of an exemplary application of the modular mobility function to scenarios with variable service requirements (e.g. high terminal speed vs. on-demand mobility or portability of devices). Efficient sharing of scarce frequency resources in new radio systems demands tight coordination of orchestration and assignment (scheduling) of resources for the different network slices according to their capacity and priority (QoS) demands. Dynamicity aspects of changing the algorithms and schemes that manage, configure, and optimize the resources at the radio base stations according to slice-specific Service Level Agreements (SLAs) are investigated. It has been shown that architectural issues in terms of hierarchy (centralized vs. distributed) and layering, i.e. separation of the control (signaling) and (user) data planes, will play an essential role in increasing the elasticity of network infrastructures, which is the focus of applying SDN (Software Defined Networking) and NFV (Network Function Virtualization) to next-generation communication systems. An outlook towards follow-on standardization and open research questions within different SDOs (Standards Developing Organizations) and recently started cooperative projects concludes the contribution.
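
To make the slice-specific SLA scheduling concrete, the following is a minimal sketch of how a base-station scheduler might split a fixed number of physical resource blocks (PRBs) across slices according to guaranteed shares and priorities. The slice names, the SLA fields, and the guarantee-then-priority allocation rule are illustrative assumptions, not the algorithm studied in 5G NORMA.

```python
# Minimal sketch of SLA-driven resource sharing between network slices.
# All names and the allocation rule are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class SliceSLA:
    name: str
    priority: int          # higher value = served first when capacity is left
    guaranteed_prbs: int   # minimum physical resource blocks per TTI
    demand_prbs: int       # current traffic demand in PRBs

def schedule(slices: list[SliceSLA], total_prbs: int) -> dict[str, int]:
    """Grant each slice its guaranteed share first, then distribute the
    remaining capacity in priority order up to each slice's demand."""
    grants = {s.name: min(s.guaranteed_prbs, s.demand_prbs) for s in slices}
    remaining = total_prbs - sum(grants.values())
    for s in sorted(slices, key=lambda s: s.priority, reverse=True):
        extra = min(s.demand_prbs - grants[s.name], max(remaining, 0))
        grants[s.name] += extra
        remaining -= extra
    return grants

if __name__ == "__main__":
    slices = [
        SliceSLA("URLLC", priority=3, guaranteed_prbs=20, demand_prbs=30),
        SliceSLA("eMBB",  priority=2, guaranteed_prbs=40, demand_prbs=80),
        SliceSLA("mIoT",  priority=1, guaranteed_prbs=10, demand_prbs=25),
    ]
    print(schedule(slices, total_prbs=100))
    # -> {'URLLC': 30, 'eMBB': 60, 'mIoT': 10}
```

In this toy version, a URLLC slice keeps its latency-critical guarantee even when an eMBB slice demands more capacity than is available; a real scheduler would additionally react to changing SLAs and channel conditions per TTI.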


Author(s):  
Osama Al-Saadeh ◽  
Kimmo Hiltunen ◽  
Kittipong Kittichokechai ◽  
Alexey Shapin ◽  
Majid Gerami ◽  
...  

Author(s):  
Prakash P ◽  
Darshaun K. G. ◽  
Yaazhlene. P ◽  
Medidhi Venkata Ganesh ◽  
Vasudha B

In cloud computing, all processing of the data collected by a node is done in the central server. This takes considerable time, since the data must be transferred from the node to the central server before it can be processed there, and it is not practical to stream terabytes of data from the node to the cloud and back. To overcome these disadvantages, an extension of cloud computing known as fog computing is introduced. Here, data is processed entirely at the node if it does not require high computing power; otherwise it is processed partially at the node and then transferred to the central server for the remaining computations. This greatly reduces the time involved and is more efficient, as the central server is not overloaded. Fog is particularly useful in geographically dispersed areas where connectivity can be irregular. The ideal use case requires intelligence near the edge, where ultra-low latency is critical, which is exactly what fog computing promises. The concepts of cloud computing and fog computing are explored and their features contrasted to understand which is more efficient and better suited for real-time applications.
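
A minimal sketch of the node-side decision described above: process fully at the node when the compute demand is low, pre-process and offload otherwise. The threshold, the record format, and the send_to_cloud() stub are hypothetical illustrations, not the API of any specific fog platform.

```python
# Sketch of the local-vs-partial processing decision at a fog node.
# COMPUTE_THRESHOLD, the record layout and send_to_cloud() are assumptions.

COMPUTE_THRESHOLD = 1e9  # estimated operations the node can handle itself

def send_to_cloud(payload):
    """Stand-in for the uplink to the central server."""
    print(f"forwarding {len(payload)} pre-processed records to the cloud")

def process_locally(records):
    """Full processing at the fog node, e.g. filtering and aggregation."""
    return [r for r in records if r["value"] is not None]

def preprocess(records):
    """Partial processing: shrink the data before it leaves the node."""
    return [{"id": r["id"], "value": r["value"]}
            for r in records if r["value"] is not None]

def handle(records, estimated_ops):
    if estimated_ops <= COMPUTE_THRESHOLD:
        # Low compute demand: the node finishes the job itself.
        return process_locally(records)
    # High compute demand: pre-process at the node, offload the rest.
    send_to_cloud(preprocess(records))
    return None
```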


Author(s):  
Minal Moharir ◽  
Bharat Rahuldhev Patil

The demerits of cloud computing lie in the velocity, bandwidth, and privacy of data. This chapter focuses on why fog computing presents an effective answer to these shortcomings. It first explains the primary motivation behind the use of fog computing. Fog computing, in essence, extends the services of the cloud towards the edge of the network, i.e., towards the devices nearer to the customer or end user. Doing so offers several advantages, among them scalability, low latency, reduced network traffic, and increased efficiency. The chapter then explains the architecture used to implement a fog network, followed by its applications. Some commercial fog products are also discussed, and a use case for an airport security system is presented.
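
As a rough illustration of the "reduced network traffic" advantage, the sketch below estimates how much uplink volume is saved when a fog node aggregates raw sensor samples before forwarding them to the cloud. All numbers are assumed purely for the example.

```python
# Back-of-the-envelope estimate of uplink savings from edge aggregation.
# Sensor counts, sample rates and window length are illustrative assumptions.

def uplink_bytes(sensors: int, samples_per_s: int, bytes_per_sample: int,
                 aggregation_window_s: float) -> tuple[float, float]:
    """Return (raw, aggregated) bytes per second sent towards the cloud."""
    raw = sensors * samples_per_s * bytes_per_sample
    # The fog node forwards one summary record per sensor per window.
    aggregated = sensors * bytes_per_sample / aggregation_window_s
    return raw, aggregated

raw, agg = uplink_bytes(sensors=500, samples_per_s=100, bytes_per_sample=16,
                        aggregation_window_s=1.0)
print(f"raw: {raw/1e6:.1f} MB/s, aggregated: {agg/1e3:.1f} kB/s")
# raw: 0.8 MB/s, aggregated: 8.0 kB/s -> about two orders of magnitude less
```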


2021 ◽  
Author(s):  
Karthik Krishnegowda ◽  
Elias L. Peter ◽  
Matthias Scheide ◽  
Lara Wimmer ◽  
Rudiger Kays ◽  
...  
