Snowflake Data Cloud Architecture

2021 ◽  
pp. 25-37
Author(s):  
Frank Bell ◽  
Raj Chirumamilla ◽  
Bhaskar B. Joshi ◽  
Bjorn Lindstrom ◽  
Ruchi Soni ◽  
...  
Author(s):  
Istabraq M. Al-Joboury ◽  
Emad H. Al-Hemiary

Fog Computing is a concept introduced by Cisco to provide the same functionality as Cloud Computing but closer to Things, improving performance by reducing delay and response time. Packet loss may occur on a single Fog server under a large volume of messages from Things because of factors such as limited bandwidth and server queue capacity. In this paper, an Internet of Things based Fog-to-Cloud architecture is proposed to solve the problem of packet loss on the Fog server using load balancing and virtualization. The architecture consists of five layers: Things, gateway, Fog, Cloud, and application. The Fog layer is virtualized into a specified number of Fog servers using Graphical Network Simulator-3 and VirtualBox on a local physical server. A Server Load Balancing router is configured to distribute the heavy traffic with a Weighted Round Robin technique over the Message Queue Telemetry Transport (MQTT) protocol. Then, the maximum messages from the Fog layer are selected and sent to the Cloud layer, and the remaining messages are deleted within one hour using the proposed Data-in-Motion technique for storage, processing, and monitoring of messages. This improves the performance of the Fog layer for storage and processing of messages, halves packet loss, and increases throughput fourfold compared with a single Fog server.
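As an illustration of the load-balancing step described above, the following Python snippet sketches a Weighted Round Robin dispatcher that spreads incoming messages across several Fog servers in proportion to their weights. The server names, weights, and the publish() stub are assumptions made for the example; the paper's actual setup uses a Server Load Balancing router inside a GNS-3/VirtualBox environment, which is not reproduced here.

```python
# Minimal sketch of Weighted Round Robin (WRR) dispatch: messages from Things
# are spread across several virtualized Fog servers in proportion to their
# configured weights. Server names, weights, and the publish() stub are
# illustrative assumptions, not values from the paper.
from itertools import cycle
from typing import Iterator, List, Tuple


def weighted_round_robin(servers: List[Tuple[str, int]]) -> Iterator[str]:
    """Yield server names so that each appears `weight` times per cycle."""
    expanded = [name for name, weight in servers for _ in range(weight)]
    return cycle(expanded)


def publish(server: str, topic: str, payload: bytes) -> None:
    # Placeholder: in the real architecture this would publish the message
    # to the chosen Fog server's MQTT broker (e.g. with an MQTT client library).
    print(f"-> {server}: {topic} ({len(payload)} bytes)")


if __name__ == "__main__":
    # Hypothetical Fog servers; fog1 receives twice the share of fog2 and fog3.
    fog_servers = [("fog1", 2), ("fog2", 1), ("fog3", 1)]
    scheduler = weighted_round_robin(fog_servers)

    for i in range(8):  # eight sample sensor readings
        publish(next(scheduler), "things/sensor", f"reading-{i}".encode())
```

Interleaving each server `weight` times per cycle is the simplest WRR variant; in the proposed architecture the same policy is applied by the load-balancing router at the network level before the MQTT messages reach the Fog servers.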


2021 ◽  
Vol 109 ◽  
pp. 102307
Author(s):  
Anubha Aggarwal ◽  
Neetesh Kumar ◽  
Deo Prakash Vidyarthi ◽  
Rajkumar Buyya

2021 ◽  
Vol 11 (3) ◽  
pp. 923
Author(s):  
Guohua Li ◽  
Joon Woo ◽  
Sang Boem Lim

The complexity of high-performance computing (HPC) workflows is an important issue in the provision of HPC cloud services at most national supercomputing centers. This complexity is especially critical because it affects HPC resource scalability, management efficiency, and convenience of use. To solve this problem while retaining bare-metal-level performance, container-based cloud solutions have been developed. However, various problems still exist, such as the isolation between HPC and cloud environments, security issues, and workload management issues. We propose an architecture that reduces this complexity by using Docker and Singularity, the container platforms most often used in the HPC cloud field. This HPC cloud architecture integrates image management and job management, the two main elements of HPC cloud workflows. To evaluate the serviceability and performance of the proposed architecture, we developed and implemented a platform on an HPC cluster and conducted experiments. The results indicate that the proposed HPC cloud architecture can reduce complexity while providing supercomputing resource scalability, high performance, user convenience, support for various HPC applications, and management efficiency.
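The following Python sketch illustrates, at a very high level, the two workflow elements the architecture integrates: image management (converting a Docker image to a Singularity SIF file) and job management (submitting a batch job that runs the container). The image name, file paths, and the use of Slurm as the scheduler are assumptions for the example, not details taken from the article.

```python
# Illustrative sketch of the two workflow elements the article integrates:
# (1) image management: pulling a Docker image and converting it to a
#     Singularity SIF image, and
# (2) job management: submitting a batch job that runs the container.
# The Slurm scheduler, image name, and file paths are assumptions for the
# example, not details taken from the paper.
import subprocess
from pathlib import Path

DOCKER_IMAGE = "docker://python:3.10-slim"   # hypothetical application image
SIF_IMAGE = Path("app.sif")
JOB_SCRIPT = Path("run_app.sh")


def build_sif_from_docker() -> None:
    """Convert the Docker image into a Singularity image file (SIF)."""
    subprocess.run(
        ["singularity", "build", str(SIF_IMAGE), DOCKER_IMAGE],
        check=True,
    )


def submit_container_job() -> None:
    """Write a minimal batch script and hand it to the scheduler (Slurm assumed)."""
    JOB_SCRIPT.write_text(
        "#!/bin/bash\n"
        "#SBATCH --job-name=hpc-cloud-demo\n"
        "#SBATCH --ntasks=1\n"
        f"singularity exec {SIF_IMAGE} python -c 'print(\"container job ran\")'\n"
    )
    subprocess.run(["sbatch", str(JOB_SCRIPT)], check=True)


if __name__ == "__main__":
    build_sif_from_docker()
    submit_container_job()
```

A platform like the one described would coordinate these two steps on behalf of the user; the sketch simply makes the underlying container and scheduler commands explicit.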


Author(s):  
Rao Krishna Virinchi ◽  
Paulson K Antony ◽  
Shreyaa Saravanan ◽  
V Dhakshain Balaji ◽  
Vaishnavi Suresh Krishnan ◽  
...  

2016 ◽  
Vol 55 ◽  
pp. 266-277 ◽  
Author(s):  
Ahmed Lounis ◽  
Abdelkrim Hadjidj ◽  
Abdelmadjid Bouabdallah ◽  
Yacine Challal
