SDN-based bandwidth scheduling for prioritized data transfer between data centers

2021 ◽  
Author(s):  
Aiqin Hou ◽  
Chase Q. Wu ◽  
Qiang Duan ◽  
Dawei Quan ◽  
Liudong Zuo ◽  
...  
2020 ◽  
Vol 22 (2) ◽  
pp. 130-144
Author(s):  
Aiqin Hou ◽  
Chase Qishi Wu ◽  
Liudong Zuo ◽  
Xiaoyang Zhang ◽  
Tao Wang ◽  
...  

2019 ◽  
Vol 214 ◽  
pp. 07007
Author(s):  
Petr Fedchenkov ◽  
Andrey Shevel ◽  
Sergey Khoruzhnikov ◽  
Oleg Sadov ◽  
Oleg Lazo ◽  
...  

ITMO University (ifmo.ru) is developing a cloud of geographically distributed data centres, i.e. data centres (DC) located hundreds or thousands of kilometres apart. Geographically distributed data centres promise a number of advantages for end users, such as the ability to add further DCs and improved service availability through redundancy and geographical distribution. Services such as data transfer, computing, and data storage are provided to users in the form of virtual objects, including virtual machines, virtual storage, and virtual data transfer links.


2017 ◽  
Vol 85 ◽  
pp. 47-55 ◽  
Author(s):  
Aiqin Hou ◽  
Chase Q. Wu ◽  
Dingyi Fang ◽  
Yongqiang Wang ◽  
Meng Wang

2021 ◽  
Vol 2021 ◽  
pp. 1-17
Author(s):  
Alireza Chamkoori ◽  
Serajdean Katebi

Storing extensive data in cloud environments affects service quality, transmission speed, and access to information, and is becoming a growing challenge. In storage improvement, reducing various costs and shortening storage paths in distributed cloud data centers are among the important issues in cloud computing. In this paper, a particle swarm optimization (PSO) algorithm and a learning automaton (LA) are used to minimize the cost of a data center, which includes communication, data transfer, and storage, and to optimize communication between data centers. To improve storage in distributed data centers, a new model called LAPSO is proposed by combining LA and PSO, in which the LA improves particle control by guiding particle speed and position. In this method, the LA moves each particle in the direction of its best individual and group experiences, so that in multi-peak problems it does not fall into local optima. Experimental results are shown on the dataset of spatial information and cadastre of country lands, which includes 13 data centers. The proposed method improves the optimal position parameters, minimum route cost, distance, data transfer cost, storage cost, data communication cost, load balance, and access performance better than other methods.
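The core idea of LAPSO, as described in the abstract, can be sketched as a PSO velocity update in which a learning automaton's action probability biases each particle toward its personal best or the global best. This is a minimal illustrative sketch, not the authors' implementation; the function name `lapso_step`, the coefficients, and the biasing scheme are assumptions.

```python
import random

def lapso_step(particles, velocities, pbest, gbest, la_probs,
               w=0.7, c1=1.5, c2=1.5):
    """One LAPSO-style iteration (illustrative sketch): a learning
    automaton's action probability la_probs[i] decides whether
    particle i emphasizes its personal best or the global best."""
    for i, x in enumerate(particles):
        # LA chooses which experience to emphasize for this particle
        toward_pbest = random.random() < la_probs[i]
        for d in range(len(x)):
            r1, r2 = random.random(), random.random()
            cognitive = c1 * r1 * (pbest[i][d] - x[d]) * (1.5 if toward_pbest else 0.5)
            social = c2 * r2 * (gbest[d] - x[d]) * (0.5 if toward_pbest else 1.5)
            velocities[i][d] = w * velocities[i][d] + cognitive + social
            x[d] += velocities[i][d]
    return particles, velocities
```

In a full implementation the LA would also update `la_probs` by rewarding the action (personal vs. global emphasis) that improved particle fitness, which is how the method avoids local optima in multi-peak landscapes.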


Author(s):  
Marcelo Amaral ◽  
Jordà Polo ◽  
David Carrera ◽  
Nelson Gonzalez ◽  
Chih-Chieh Yang ◽  
...  

Modern applications demand resources at an unprecedented level. Data centers are therefore required to scale efficiently to cope with such demand. Resource disaggregation has the potential to improve resource efficiency by allowing workloads to be deployed in more flexible ways. The industry is thus shifting towards disaggregated architectures, which enable new ways to structure hardware resources in data centers. However, determining the best-performing resource provisioning is a complicated task: the optimality of resource allocation in a disaggregated data center depends on its topology and on workload collocation. This paper presents DRMaestro, a framework that orchestrates disaggregated resources transparently to applications. DRMaestro uses a novel flow-network model to determine the optimal placement in multiple phases while making best efforts to prevent workload performance interference. We first evaluate the impact of disaggregation with respect to the additional network requirements under higher network load. The results show that for some applications the impact is minimal, while others can suffer up to 80% slowdown in the data transfer phase. We then evaluate DRMaestro via a real prototype on Kubernetes and a trace-driven simulation. The results show that DRMaestro can reduce the total job makespan with a speedup of up to ≈1.20x and decrease QoS violations by up to ≈2.64x compared with another orchestrator that does not support resource disaggregation.
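The placement trade-off the abstract describes can be illustrated with a toy cost model: satisfying a workload's demand locally is cheap, while borrowing disaggregated (remote) resources incurs a network penalty. This greedy sketch is a stand-in to show the trade-off only; it is not DRMaestro's flow-network model, and the names and penalty value are assumptions.

```python
def place_workloads(workloads, nodes, link_penalty=0.2):
    """Greedy sketch: assign each workload (demand in resource units)
    to the node minimizing cost, where the share of demand served by
    remote (disaggregated) resources pays an extra network penalty."""
    placement = {}
    for wl, need in workloads.items():
        best = None
        for node, free in nodes.items():
            local = min(free, need)          # served from local resources
            remote = need - local            # served over the fabric
            cost = need + remote * link_penalty
            if best is None or cost < best[1]:
                best = (node, cost)
        node, _ = best
        nodes[node] = max(0, nodes[node] - need)
        placement[wl] = node
    return placement
```

A flow-network formulation, as DRMaestro uses, would solve all placements jointly (e.g. as min-cost flow) rather than greedily one workload at a time, which is what makes interference-aware, multi-phase placement tractable.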


2020 ◽  
Vol 8 (4) ◽  
pp. 1189-1198 ◽  
Author(s):  
Abdulsalam Yassine ◽  
Ali Asghar Nazari Shirehjini ◽  
Shervin Shirmohammadi

Author(s):  
Jinal Panchal

Storage area network infrastructure has long been a foundation of the industry, but the evolution of technology toward better, faster, and more agile deployment of data center architecture has led to the acceptance of converged systems, which account for complexity, space, power consumption, and other parameters beyond cost. To meet intensive data transfer needs, new hardware and software are being deployed for more powerful storage systems; and to achieve green storage, data centers with rapidly growing application demands are targeted for energy savings and bounded utilization of the resources involved.


2020 ◽  
Vol 10 (21) ◽  
pp. 7586
Author(s):  
Jose E. Lozano-Rizk ◽  
Juan I. Nieto-Hipolito ◽  
Raul Rivera-Rodriguez ◽  
Maria A. Cosio-Leon ◽  
Mabel Vazquez-Briseño ◽  
...  

When Internet of Things (IoT) big data analytics (BDA) requires transferring data streams among software-defined network (SDN)-based distributed data centers, data flow forwarding in the communication network is typically done by an SDN controller using a traditional shortest-path algorithm, or considering only the applications' bandwidth requirements. In BDA, this scheme can degrade performance and lengthen job completion time, because additional metrics are not considered, such as end-to-end delay, jitter, and packet loss rate on the data transfer path. These metrics are quality of service (QoS) parameters of the communication network. This research proposes QoSComm, an SDN strategy that allocates QoS-based data flows for BDA running across distributed data centers to minimize job completion time. QoSComm operates in two phases: (i) based on current communication network conditions, it calculates feasible paths for each data center using a multi-objective optimization method; (ii) it distributes the resulting paths among data centers by configuring their OpenFlow switches (OFS) dynamically. Simulation results show that QoSComm can improve BDA job completion time by an average of 18%.
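The QoS-aware path selection described above can be sketched as a composite score over the metrics the abstract lists (bandwidth, end-to-end delay, jitter, packet loss). A weighted sum is used here as a simple stand-in for the paper's multi-objective optimization method; the weights, metric units, and function names are assumptions.

```python
def score_path(path_metrics, weights=(0.4, 0.2, 0.2, 0.2)):
    """Composite QoS score for a candidate path (weighted-sum
    stand-in for a multi-objective method). Higher bandwidth is
    rewarded; delay, jitter, and loss are penalized."""
    bw, delay, jitter, loss = path_metrics
    w_bw, w_d, w_j, w_l = weights
    return w_bw * bw - w_d * delay - w_j * jitter - w_l * loss

def best_path(candidates):
    """Pick the candidate path with the highest composite QoS score."""
    return max(candidates, key=lambda c: score_path(c[1]))

# (bandwidth Mbps, delay ms, jitter ms, loss %) per candidate path
paths = [
    ("p1", (100.0, 20.0, 2.0, 0.1)),
    ("p2", (80.0, 5.0, 0.5, 0.01)),
]
print(best_path(paths)[0])  # → p1
```

In an SDN deployment, the controller would compute such scores from measured link state and then install the chosen path as flow rules on the OpenFlow switches along it.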

