A Cooperative Edge Computing Scheme for Reducing the Cost of Transferring Big Data in 5G Networks

Author(s):  
Bo Yin ◽  
Yu Cheng ◽  
Lin X. Cai ◽  
Xianghui Cao
Author(s):  
А. Прозоров ◽  
Р. Шнырев ◽  
Д. Волков

The cost per unit of profit is steadily rising, and it is time for businesses to consider digital platforms that let them compete successfully for solvent customers: joining an ecosystem, exploiting specialization, and scaling business processes with, in theory, no upper bound. One architecture option for such a platform, connecting clouds, edge computing, and 5G/6G technologies, is the hyperscaler.


Symmetry ◽  
2018 ◽  
Vol 10 (11) ◽  
pp. 594 ◽  
Author(s):  
Tri Nguyen ◽  
Tien-Dung Nguyen ◽  
Van Nguyen ◽  
Xuan-Qui Pham ◽  
Eui-Nam Huh

By bringing computation and storage resources into close proximity to the mobile network edge, mobile edge computing (MEC) is a key enabling technology for satisfying the requirements of Internet of Vehicles (IoV) infotainment applications, e.g., video streaming service (VSA). However, the explosive growth of mobile video traffic poses challenges for video streaming providers (VSPs). One known issue is that a heavy traffic burden on the vehicular network increases the VSP's cost of providing VSA to mobile users (i.e., autonomous vehicles). An efficient scheme for sharing underutilized vehicular resources is therefore a promising way to reduce the cost of serving VSA in the vehicular network. We propose a new VSA model that exploits the lower cost of obtaining data from vehicles and then minimizes the VSP's cost. By using existing data resources on nearby vehicles, our proposal reduces the cost of providing video service to mobile users. Specifically, we formulate the problem as a mixed-integer nonlinear program (MINP) to compute the VSP's total payment. In addition, we introduce an incentive mechanism to encourage users to rent out their resources. Our solution optimizes the VSP's serving cost under quality of service (QoS) requirements. Simulation results demonstrate that, compared with existing schemes, the proposed mechanism achieves up to 21% and 11% cost savings with respect to request arrival rate and vehicle speed, respectively.
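The cost-minimization idea in the abstract can be sketched as a small assignment problem: each video request is served either from the VSP's own backhaul at a fixed unit cost, or from a nearby vehicle that caches the content, at a cheaper incentive price but limited by that vehicle's capacity. The greedy sketch below is an illustration under assumed parameters, not the paper's actual MINP formulation; all names and numbers are hypothetical.

```python
# Hypothetical greedy relaxation of the cost-minimization described in the
# abstract. Each request goes to the cheapest capable vehicle if that beats
# the backhaul cost; otherwise the VSP serves it directly.

def serve_requests(requests, vehicles, backhaul_cost):
    """requests: list of content ids.
    vehicles: dict id -> (incentive_cost, capacity, set_of_cached_contents).
    Returns the total payment made by the VSP."""
    total = 0.0
    capacity = {v: cap for v, (_, cap, _) in vehicles.items()}
    for content in requests:
        # cheapest vehicle that caches this content and has capacity left
        candidates = [
            (cost, v)
            for v, (cost, _, contents) in vehicles.items()
            if content in contents and capacity[v] > 0
        ]
        if candidates and min(candidates)[0] < backhaul_cost:
            cost, v = min(candidates)
            capacity[v] -= 1
            total += cost           # incentive paid to the sharing vehicle
        else:
            total += backhaul_cost  # fall back to the VSP's own delivery
    return total
```

A real formulation would add QoS constraints and vehicle mobility, which is what pushes the problem into mixed-integer nonlinear territory.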


Author(s):  
Luiz Angelo Steffenel ◽  
Manuele Kirsch Pinheiro ◽  
Lucas Vaz Peres ◽  
Damaris Kirsch Pinheiro

The exponential spread of proximity computing devices (smartphones, tablets, nanocomputers, etc.) raises important questions about how to transmit, store, and analyze data in networks that integrate those devices. New approaches such as edge computing aim to delegate part of the work to devices at the “edge” of the network. This article focuses on the use of pervasive grids to implement edge computing and address these challenges, especially strategies to ensure data proximity and context awareness, two factors that affect the performance of big data analyses in distributed systems. The article discusses the limitations of traditional big data computing platforms and introduces the principles of, and challenges in, implementing edge computing over pervasive grids. Finally, using CloudFIT, a distributed computing platform, the authors illustrate the deployment of a real geophysical application on a pervasive network.


2016 ◽  
Author(s):  
Jonathan Mellon

This chapter discusses the use of large quantities of incidentally collected data (ICD) to make inferences about politics. This type of data is sometimes referred to as “big data” but I avoid this term because of its conflicting definitions (Monroe, 2012; Ward & Barker, 2013). ICD is data that was created or collected primarily for a purpose other than analysis. Within this broad definition, this chapter focuses particularly on data generated through user interactions with websites. While ICD has been around for at least half a century, the Internet greatly expanded the availability and reduced the cost of ICD. Examples of ICD include data on Internet searches, social media data, and user data from civic platforms. This chapter briefly explains some sources and uses of ICD and then discusses some of the potential issues of analysis and interpretation that arise when using ICD, including the different approaches to inference that researchers can use.


2019 ◽  
Author(s):  
Soumya Banerjee

Modelling and forecasting port throughput enables stakeholders to make efficient decisions, from managing port development to infrastructure investment, operational restructuring, and tariff policy. Accurate forecasting of port throughput is also critical for long-term resource allocation and short-term strategic planning. In turn, efficient decision-making enhances a port's competitiveness. However, in the era of big data we face the dilemma of having too much information. We pose the question: is more information always better for forecasting? We suggest that more information comes at the cost of more parameters of the forecasting model that must be estimated. We compare multiple forecasting models of varying degrees of complexity and quantify the effect of the amount of data on forecasting accuracy. Our methodology serves as a guideline for practitioners in this field. We also caution that, even in the era of big data, more information may not always be better. Analysts would be well advised to weigh the cost of adding more data; the ultimate decision depends on the problem, the amount of data, and the kind of models being used.
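The abstract's caution that more data is not automatically better can be shown with a toy example: a moving-average forecaster that uses more history (a longer window) lags further behind a trending series, so its one-step-ahead error grows with the window length. The series and window sizes below are illustrative, not the paper's data or models.

```python
# Toy demonstration that a forecaster using more past observations
# (a longer moving-average window) can be *less* accurate on a
# trending series, because the average lags behind the trend.

def moving_average_mae(series, window):
    """One-step-ahead mean absolute error of a moving-average forecast."""
    errors = []
    for t in range(window, len(series)):
        forecast = sum(series[t - window:t]) / window
        errors.append(abs(series[t] - forecast))
    return sum(errors) / len(errors)

throughput = list(range(1, 13))  # steadily growing monthly throughput (toy)
for w in (1, 3, 6):
    print(f"window={w}  MAE={moving_average_mae(throughput, w):.2f}")
```

On this slope-one series the error is exactly (window + 1) / 2, so the longest window, which "uses the most data", forecasts worst, which is the trade-off between information and model fit that the abstract highlights.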
