Study on the SDN-IP-based solution of well-known bottleneck problems in private sector of national R&E network for big data transfer

2017, Vol 30(1), pp. e4365
Author(s): Hyoungwoo Park, Buseung Cho, Il-sun Hwang, Jongsuk Ruth Lee

2020, Vol 22(2), pp. 130-144
Author(s): Aiqin Hou, Chase Qishi Wu, Liudong Zuo, Xiaoyang Zhang, Tao Wang, et al.


Author(s): Benjamin Sliwa, Rick Adam, Christian Wietfeld
Keyword(s): Big Data


2018, Vol 8(11), pp. 2216
Author(s): Jiahui Jin, Qi An, Wei Zhou, Jiakai Tang, Runqun Xiong

Network bandwidth is a scarce resource in big data environments, so data locality is a fundamental problem for data-parallel frameworks such as Hadoop and Spark. This problem is exacerbated in multicore server-based clusters, where multiple tasks running on the same server compete for the server's network bandwidth. Existing approaches address this problem by scheduling computational tasks near their input data, taking into account each server's free time, data placements, and data transfer costs. However, such approaches usually assign identical values to data transfer costs, even though a multicore server's data transfer cost increases with the number of data-remote tasks; as a result, they minimize data-processing time ineffectively. As a solution, we propose DynDL (Dynamic Data Locality), a novel data-locality-aware task-scheduling model that handles dynamic data transfer costs for multicore servers. DynDL offers greater flexibility than existing approaches by using a set of non-decreasing functions to evaluate dynamic data transfer costs. We also propose online and offline algorithms, based on DynDL, that minimize data-processing time and adaptively adjust data locality. Although the scheduling problem underlying DynDL is NP-complete, we prove that the offline algorithm runs in quadratic time and generates optimal results for specific uses of DynDL. Through a series of simulations and real-world executions, we show that our algorithms outperform algorithms that ignore dynamic data transfer costs by 30% in terms of data-processing time. Moreover, they can adaptively adjust data localities based on each server's free time, data placement, and network bandwidth, and can schedule tens of thousands of tasks within seconds or less.
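To make the core idea concrete, the sketch below shows a data-locality-aware greedy placement step in which each server's transfer cost grows with the number of data-remote tasks already assigned to it, modeled by a non-decreasing cost function as the abstract describes. This is a minimal illustration, not the authors' DynDL implementation: all names, the greedy policy, and the linear cost function are assumptions.

```python
# Sketch: greedy task placement with a non-decreasing per-server
# data transfer cost (illustrative; not the DynDL algorithm itself).
from dataclasses import dataclass, field


@dataclass
class Server:
    name: str
    free_slots: int                                  # idle task slots
    local_blocks: set = field(default_factory=set)   # data blocks stored locally
    remote_tasks: int = 0                            # data-remote tasks assigned so far

    def transfer_cost(self) -> float:
        # Non-decreasing cost: each additional remote task adds contention
        # on the server's shared network bandwidth (assumed linear here).
        return 1.0 + 0.5 * self.remote_tasks


def assign(task_block: str, servers: list[Server]) -> Server:
    """Place one task: prefer a server holding its block locally; otherwise
    pick the server whose *current* dynamic transfer cost is lowest."""
    candidates = [s for s in servers if s.free_slots > 0]
    local = [s for s in candidates if task_block in s.local_blocks]
    if local:
        chosen = local[0]                # data-local: no transfer cost
    else:
        chosen = min(candidates, key=lambda s: s.transfer_cost())
        chosen.remote_tasks += 1         # next remote task on it costs more
    chosen.free_slots -= 1
    return chosen


servers = [Server("s1", 2, {"b1"}), Server("s2", 2, {"b2"})]
for block in ["b1", "b2", "b3", "b3"]:
    print(block, "->", assign(block, servers).name)
```

Because the cost function is re-evaluated after every placement, remote tasks spread across servers instead of piling onto one server's network link, which is the behavior a fixed (identical) transfer cost fails to capture.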



2021, pp. 45-64
Author(s): Petra Molnar

Abstract: People on the move are often left out of conversations around technological development and become guinea pigs for testing new surveillance tools before those tools are brought to the wider population. These experiments range from big data predictions about population movements in humanitarian crises, to automated decision-making in immigration and refugee applications, to AI lie detectors at European airports. The Covid-19 pandemic has seen an increase in technological solutions presented as viable ways to stop its spread. Governments' move toward biosurveillance has increased the use of tracking, automated drones, and other technologies that purport to manage migration. However, refugees and people crossing borders are disproportionately targeted, with far-reaching impacts on various human rights. Drawing on interviews with affected communities in Belgium and Greece in 2020, this chapter explores how technological experiments on refugees are often discriminatory, breach privacy, and endanger lives. The lack of regulation of such technological experimentation, combined with a pre-existing opaque decision-making ecosystem, creates a governance gap that leaves room for far-reaching human rights impacts in this time of exception, with private-sector interests setting the agenda. Blanket technological solutions do not address the root causes of displacement, forced migration, and economic inequality, all factors that exacerbate the vulnerabilities communities on the move face in these pandemic times.



Author(s): Rhoda Joseph

This chapter examines the use of big data in the public sector, that is, in government-related activities. Specifically, it considers the use of big data at the country (federal) level. Conceptually, data is processed through a "knowledge pyramid": data is used to generate information, information generates knowledge, and knowledge begets wisdom. Against this theoretical backdrop, the chapter extends the model and proposes that the next stage in the pyramid is vision. Vision describes a future plan for the government agency or business, based on a current survey of the organization's environment. To develop these concepts, the use of big data is examined in three different countries. Both opportunities and challenges are outlined, with recommendations for the future. Although the concepts examined in this chapter are framed within the public sector, they may also be applied to private-sector big data initiatives.



2019, Vol E102.D(8), pp. 1478-1488
Author(s): Eun-Sung JUNG, Si LIU, Rajkumar KETTIMUTHU, Sungwook CHUNG




Author(s): Brian Tierney, Ezra Kissel, Martin Swany, Eric Pouyoul


Data, 2019, Vol 4(4), pp. 136
Author(s): Yuriy Zaporozhets, Artem Ivanov, Yuriy Kondratenko

Based on the principles of multiphysics, multiscale simulation of the phenomena and processes that take place during the electric current treatment of liquid metals, we justify the need for an adjustable and concise geometrical platform for big-database computing of mathematical models and simulations. In this article, such a geometrical platform is developed based on the approximation of boundary contours by arcs, for application of the integral equation method and matrix transformations. This approach enables regular procedures using multidimensional scale matrices for big data transfer and computing. Its efficiency was verified by computer simulation on different model contours that are parts of real contours. The results show that the numerical algorithm built on the presented geometrical platform is highly accurate and has the potential to organize computational processes for the modeling and simulation of electromagnetic, thermal, hydrodynamic, wave, and mechanical fields (with metal melts treated by electric current as a practical case). The efficiency of the developed approach for computing big data matrices and forming equation systems was also demonstrated: both the number of numerical procedures and the time needed to perform them were much smaller than for the finite element method applied to the same model contours.
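The basic building block of such an arc-based contour approximation is fitting a circular arc through consecutive boundary points. The sketch below illustrates that step by computing the circumcircle of three 2D points; it is a minimal illustration under assumed names and sampling, not the paper's implementation.

```python
# Sketch: fit a circular arc through three consecutive boundary points
# via their circumcenter (illustrative; not the paper's code).
import numpy as np


def arc_through(p1, p2, p3):
    """Return (center, radius) of the circle through three 2D points."""
    ax, ay = p1
    bx, by = p2
    cx, cy = p3
    # Twice the signed area of the triangle; zero means collinear points.
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-12:
        raise ValueError("points are collinear; no unique arc")
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    center = np.array([ux, uy])
    return center, float(np.linalg.norm(center - np.asarray(p1)))


# Sample a model contour (here a unit circle) and approximate each
# triple of consecutive points by one arc.
t = np.linspace(0.0, 2.0 * np.pi, 13)[:-1]
pts = np.column_stack([np.cos(t), np.sin(t)])
for i in range(0, len(pts) - 2, 2):
    c, r = arc_through(pts[i], pts[i + 1], pts[i + 2])
    print(f"arc {i // 2}: center={c.round(3)}, radius={r:.3f}")
```

For the circular test contour every fitted arc recovers center (0, 0) and radius 1; on a real contour, each arc's center and radius feed the regular matrix procedures of the integral equation method described above.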


