Design and implementation of an efficient and programmable future internet testbed in Taiwan

2013 ◽  
Vol 10 (2) ◽  
pp. 825-842
Author(s):  
Jen-Wei Hu ◽  
Chu-Sing Yang ◽  
Te-Lung Liu

The Internet has played an important part in the success of information technologies. With growing and changing demands, however, the current Internet faces many limitations. A number of network testbeds have been created to solve specific problems, but traditionally these testbeds lack large-scale coverage and flexibility. It is therefore necessary to design and implement a testbed that supports a wide range of experiments and offers programmable networking. In addition, cloud computing has brought major changes in recent years. Although networking technologies have lagged behind the advances in server virtualization, networking remains an important component for interconnecting virtual machines, and measurement issues arise as the number of virtual machines on the same host grows. We therefore also propose integrating virtual network management functions into our testbed. In this paper, we design and create a Future Internet testbed in Taiwan over the TWAREN Research Network. This testbed evolves into an environment for programmable networking and cloud computing. The paper also presents several finished and ongoing experiments on the testbed covering multiple aspects, including topology discovery, multimedia streaming, and virtual network integration. We will continue to extend our testbed and propose innovative applications for the next-generation Internet.

Author(s):  
Olexander Melnikov ◽  
Konstantin Petrov ◽  
Igor Kobzev ◽  
Viktor Kosenko ◽  
...  

The article considers the development and implementation of cloud services in the work of government agencies. A classification of criteria for choosing cloud service providers is offered, which can serve as a basis for decision making, and the basics of cloud computing technology are analyzed. The COVID-19 pandemic has highlighted the benefits of cloud services for remote work, and government agencies at all levels need to move to cloud infrastructure. The article analyzes the prospects of cloud computing in Ukraine as the basis for developing e-governance, which is necessary for the rapid provision of quality services on a flexible, large-scale, and economical technological base. Moving electronic information interaction to the cloud makes it possible to reach a wide range of users at relatively low cost. Automating processes and transferring them to the cloud environment speeds up service delivery and minimizes the time citizens need to obtain information. The article also lists the risks that exist in the transition to cloud services and the shortcomings that may arise in the process of using them.


Author(s):  
Valentin Tablan ◽  
Ian Roberts ◽  
Hamish Cunningham ◽  
Kalina Bontcheva

Cloud computing is increasingly being regarded as a key enabler of the ‘democratization of science’, because on-demand, highly scalable cloud computing facilities enable researchers anywhere to carry out data-intensive experiments. In the context of natural language processing (NLP), algorithms tend to be complex, which makes their parallelization and deployment on cloud platforms a non-trivial task. This study presents a new, unique, cloud-based platform for large-scale NLP research—GATECloud.net. It enables researchers to carry out data-intensive NLP experiments by harnessing the vast, on-demand compute power of the Amazon cloud. Important infrastructural issues are dealt with by the platform, completely transparently for the researcher: load balancing, efficient data upload and storage, deployment on the virtual machines, security and fault tolerance. We also include a cost–benefit analysis and usage evaluation.


Author(s):  
Shruthi P. ◽  
Nagaraj G. Cholli

Cloud computing is an environment in which several virtual machines (VMs) run concurrently on physical machines. The cloud infrastructure hosts multiple cloud service segments that communicate with each other through interfaces, creating a distributed computing environment. During operation, software systems accumulate errors or garbage that can lead to system failure and other hazardous consequences; this condition is called software aging. Software aging happens because of memory fragmentation, large-scale resource consumption, and accumulation of numerical error. It degrades performance and may result in system failure through premature resource exhaustion. The issue cannot be detected during the software testing phase because of the dynamic nature of operation: the errors that cause software aging do not disturb the software's functionality but affect its response time and environment, so they can be addressed only at run time. To alleviate the impact of software aging, software rejuvenation is used. Rejuvenation reboots the system or restarts the software, which avoids faults and failures: it removes accumulated error conditions, frees up deadlocks, and defragments operating system resources such as memory, thereby preventing future failures due to software aging. As service availability is crucial, software rejuvenation should be carried out on defined schedules without disrupting the service. Software rejuvenation techniques can make software systems more trustworthy, and software designers use this concept to improve the quality and reliability of software. Software aging and rejuvenation have generated a lot of research interest in recent years.
This work reviews some of the research related to the detection of software aging and identifies research gaps.
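The rejuvenation idea discussed above can be made concrete with a minimal sketch. All names and numbers below are hypothetical, not taken from any surveyed paper: a simulated service accumulates "garbage" per request and proactively restarts itself once a threshold is crossed, which is the essence of threshold-based rejuvenation.

```python
# Toy sketch of threshold-based software rejuvenation (hypothetical names).
# A real system would monitor OS-level metrics (resident memory, fragmentation)
# rather than this simulated counter, and restart a process, not reset a field.

class AgingService:
    def __init__(self, leak_per_request=5, rejuvenation_threshold=100):
        self.leaked_kb = 0                       # simulated accumulated garbage
        self.leak_per_request = leak_per_request
        self.threshold = rejuvenation_threshold
        self.rejuvenations = 0

    def handle_request(self):
        self.leaked_kb += self.leak_per_request  # aging: resources accumulate
        if self.leaked_kb >= self.threshold:     # proactive rejuvenation point
            self.rejuvenate()

    def rejuvenate(self):
        self.leaked_kb = 0                       # restart frees the resources
        self.rejuvenations += 1

svc = AgingService()
for _ in range(50):
    svc.handle_request()
```

After 50 requests the service has rejuvenated twice and carries only the garbage accumulated since the last restart, rather than failing from exhaustion.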


2021 ◽  
Author(s):  
Edzer Pebesma ◽  
Patrick Griffiths ◽  
Christian Briese ◽  
Alexander Jacob ◽  
Anze Skerlevaj ◽  
...  

<p>The openEO API allows the analysis of large amounts of Earth Observation data using a high-level abstraction of data and processes. Rather than focusing on the management of virtual machines and millions of imagery files, it allows users to create jobs that take a spatio-temporal section of an image collection (such as Sentinel L2A) and treat it as a data cube. Processes iterate or aggregate over pixels, spatial areas, spectral bands, or time series, while working at arbitrary spatial resolution. This pattern, pioneered by Google Earth Engine™ (GEE), lets the user focus on the science rather than on data management.</p><p>The openEO H2020 project (2017-2020) has developed the API as well as an ecosystem of software around it, including clients (JavaScript, Python, R, QGIS, browser-based), back-ends that translate API calls into existing image analysis or GIS software or services (for Sentinel Hub, WCPS, Open Data Cube, GRASS GIS, GeoTrellis/GeoPySpark, and GEE), as well as a hub that allows querying and searching openEO providers for their capabilities and datasets. The project demonstrated this software in a number of use cases in which identical processing instructions were sent to different implementations, allowing comparison of the returned results.</p><p>A follow-up, ESA-funded project, “openEO Platform”, realizes the API and progresses the software ecosystem into operational services and applications that are accessible to everyone, that involve federated deployment (using the clouds managed by EODC, Terrascope, CreoDIAS and EuroDataCube), that will provide payment models (“pay per compute job”) conceived and implemented following the user community's needs, and that will use the EOSC (European Open Science Cloud) marketplace for dissemination and authentication. A wide range of large-scale case studies will demonstrate the ability of the openEO Platform to scale to large data volumes. The case studies to be addressed include on-demand ARD generation for SAR and multi-spectral data, agricultural demonstrators such as crop type and condition monitoring, forestry services such as near-real-time forest damage assessment and canopy cover mapping, environmental hazard monitoring of floods and air pollution, as well as security applications in terms of vessel detection in the Mediterranean Sea.</p><p>While the landscape of cloud-based EO platforms and services has matured and diversified over the past decade, we believe there are strong advantages for scientists and government agencies to adopt the openEO approach. Beyond the absence of vendor/platform lock-in or EULAs, we mention the abilities to (i) run arbitrary user code (e.g. written in R or Python) close to the data, (ii) carry out scientific computations on an entirely open-source software stack, (iii) integrate different platforms (e.g., different cloud providers offering different datasets), and (iv) help create and extend this software ecosystem. openEO uses the OpenAPI standard, aligns with modern OGC API standards, and uses STAC (SpatioTemporal Asset Catalog) to describe image collections and image tiles.</p>
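The data-cube abstraction described above can be illustrated locally with a minimal, hypothetical sketch. Plain NumPy stands in for an actual openEO back-end, and the fabricated array dimensions are assumptions: a cube with axes (time, band, y, x) is reduced over time (a typical composite) and over bands (an NDVI-like index), which is conceptually what openEO reduction processes do over real Sentinel collections.

```python
import numpy as np

# Hypothetical stand-in for an EO data cube: axes (time, band, y, x).
# An openEO back-end would stream real Sentinel-2 tiles; we fabricate data.
rng = np.random.default_rng(0)
cube = rng.random((6, 4, 32, 32))    # 6 time steps, 4 bands, 32x32 pixels

# Reduce over the time axis with a median: a typical composite step.
composite = np.median(cube, axis=0)  # -> shape (band, y, x)

# Reduce over bands: an NDVI-like normalized difference of two bands.
red, nir = cube[:, 0], cube[:, 3]
ndvi = (nir - red) / (nir + red)     # -> shape (time, y, x), values in [-1, 1]
```

The point of the abstraction is that the user writes only these reductions; tiling, file handling, and parallelization stay on the platform side.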


Author(s):  
Salvatore Distefano ◽  
Antonio Puliafito

Cloud computing is the new consolidated trend in ICT, often considered a panacea for the problems of existing large-scale distributed paradigms such as Grid computing and hierarchical clustering. The Cloud breakthrough is its service-oriented perspective of providing everything “as a service”. Unlike the other large-scale distributed paradigms, it was born in commercial contexts, with the aim of selling the temporarily unexploited computing resources of huge datacenters in order to reduce costs. Since this business model is really attractive and convenient for both providers and consumers, the Cloud paradigm is quickly growing and spreading widely, even in non-commercial contexts. In fact, several Cloud activities, such as Nimbus, Eucalyptus, OpenNebula, and Reservoir, have been undertaken, aiming at specifying open Cloud infrastructure middleware.


2017 ◽  
Vol 10 (13) ◽  
pp. 162
Author(s):  
Amey Rivankar ◽  
Anusooya G

Cloud computing is the latest trend in large-scale distributed computing. It provides diverse services on demand from distributed resources such as servers, software, and databases. One of the challenging problems in cloud data centers is managing the load of different reconfigurable virtual machines. Thus, in the near future of the cloud computing field, providing a mechanism for efficient resource management will be very significant. Many load balancing algorithms have already been implemented and executed to manage resources efficiently and adequately. The objective of this paper is to analyze the shortcomings of existing algorithms and to implement a new algorithm that gives an optimized load balancing result.
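As a hedged sketch of the general problem such algorithms address (this is a textbook greedy baseline, not the paper's optimized algorithm, and all names are illustrative), the snippet below places each incoming task on the currently least-loaded VM:

```python
# Illustrative least-loaded (greedy) placement baseline. Real cloud balancers
# also weigh memory, network, and migration cost; this shows only the shape
# of the problem a load balancing algorithm solves.

def assign_tasks(task_loads, num_vms):
    """Greedily place each task on the currently least-loaded VM."""
    vm_load = [0.0] * num_vms
    placement = []
    for load in task_loads:
        target = min(range(num_vms), key=lambda i: vm_load[i])  # least loaded
        vm_load[target] += load
        placement.append(target)
    return placement, vm_load

placement, vm_load = assign_tasks([5, 3, 8, 2, 7, 4], num_vms=3)
```

A proposed algorithm would be evaluated against exactly this kind of baseline, e.g. by comparing the maximum per-VM load (makespan) it produces.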


2020 ◽  
Vol 17 (9) ◽  
pp. 4156-4161
Author(s):  
Jeny Varghese ◽  
S. Jagannatha

Cloud federation is the interconnection of two or more cloud computing environments in order to share configurable computing components such as networks, servers, and applications that can be dynamically delivered to customers. Virtualization is an integral part of cloud computing, providing manageability and utilization of resources. This paper analyses how the jobs of business applications demand and efficiently use the capacity of the resources provisioned by the VMs, thereby managing the performance of the applications. The in-depth assessment is based on two large-scale, long-term performance traces gathered in a cloud datacenter that hosts company tools running distinct applications, with regard to requested and used resources.
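The requested-versus-used comparison described above can be sketched in a few lines. The trace records and field names here are hypothetical, not the paper's data: given each VM's requested and actually used capacity, a utilization ratio flags over-provisioned VMs.

```python
# Toy trace records: requested vs. actually used CPU cores per VM.
# Field names and values are illustrative; real datacenter traces are richer.
trace = [
    {"vm": "vm-1", "requested": 8,  "used": 2.0},
    {"vm": "vm-2", "requested": 4,  "used": 3.6},
    {"vm": "vm-3", "requested": 16, "used": 4.0},
]

def overprovisioned(records, threshold=0.5):
    """Return VMs whose utilization (used / requested) is below the threshold."""
    return [r["vm"] for r in records if r["used"] / r["requested"] < threshold]

flagged = overprovisioned(trace)
```

VMs flagged this way are candidates for right-sizing, which is how such trace analyses feed back into application performance management.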


2015 ◽  
Vol 2015 ◽  
pp. 1-11 ◽  
Author(s):  
Xiao Song ◽  
Yaofei Ma ◽  
Da Teng

As a maturing and promising technology, cloud computing can benefit large-scale simulations by providing on-demand, anywhere simulation services to users. To enable multitask and multiuser simulation systems with cloud computing, the Cloud Simulation Platform (CSP) was proposed and developed. To use key cloud computing techniques such as virtualization to improve the running efficiency of large-scale military HLA systems, this paper proposes a new type of federate container, the virtual machine (VM), and a dynamic migration algorithm for it that considers both computation and communication cost. Experiments show that the migration scheme effectively improves the running efficiency of an HLA system when the distributed system is not saturated.
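The trade-off the migration algorithm weighs can be illustrated with a minimal, hypothetical cost model (the function, parameters, and numbers below are assumptions for illustration, not the paper's algorithm): migrate a federate's VM only when the compute time saved on the target host outweighs the one-off communication cost of transferring its state.

```python
# Hypothetical migration cost model weighing computation vs. communication.
# All parameters are illustrative; the paper's actual algorithm is richer.

def should_migrate(compute_time_src, compute_time_dst,
                   vm_state_mb, bandwidth_mbps, remaining_steps):
    """True if total expected cost after migrating is lower than staying."""
    migration_cost = vm_state_mb * 8 / bandwidth_mbps      # transfer seconds
    stay_cost = compute_time_src * remaining_steps         # keep running here
    move_cost = migration_cost + compute_time_dst * remaining_steps
    return move_cost < stay_cost

# A lightly loaded target host halves the per-step compute time; with enough
# simulation steps remaining, the transfer cost amortizes and migration wins.
decision = should_migrate(compute_time_src=2.0, compute_time_dst=1.0,
                          vm_state_mb=4096, bandwidth_mbps=1000,
                          remaining_steps=100)
```

With few remaining steps the same call returns False, which matches the intuition that migration only pays off when the saved computation amortizes the communication cost.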


Author(s):  
Dina Mohsen Zoughbi ◽  
Nitul Dutta

Cloud computing is the most important technology at the present time in terms of reducing application costs and making applications more scalable and flexible. Because the cloud is built on virtualization technology, it must secure a large-scale environment with limited security capacity. Malicious activities allow attackers to penetrate virtualization technologies, endangering the infrastructure and enabling access to other virtual machines running on the same vulnerable device. The work proposed in this paper reviews and discusses the attacks and intrusions that allow a malicious virtual machine (VM) to penetrate the hypervisor, especially the techniques malicious virtual machines use to steal more than their allocated quota of physical resources, and the use of side channels to steal data and bypass buffer barriers between virtual machines. The paper is based on a security study of cloud hypervisors and a classification of the vulnerabilities, security issues, and possible solutions to which virtual machines are exposed. We therefore aim to provide researchers, academics, and industry with a better understanding of attacks and defense mechanisms for protecting cloud security, and to work toward a new security architecture for virtualization technology based on the hypervisor to protect and ensure the security of the cloud.


2021 ◽  
Vol 23 (07) ◽  
pp. 352-357
Author(s):  
Gautham S ◽  
Maddula Abhijit ◽  
Prof. Sahana. B ◽  
...  

Cloud computing is a method of storing and manipulating data by utilizing a network of remote servers. It is becoming increasingly popular owing to its large storage capacity, ease of access, and wide range of services. Virtualization entered the picture as cloud computing progressed, and technologies such as virtual machines emerged. However, as customers' computational needs for storage and servers rose, virtual machines were unable to meet those expectations owing to scalability and resource allocation limitations. As a result, containerization came into the picture. Containerization refers to the packaging of software code together with all of its necessary elements, such as frameworks, libraries, and other dependencies, so that they are isolated in their own container. Kubernetes, used as an orchestration tool, implements an ingress controller to route external traffic to deployments running on pods via an ingress resource. This enables effective traffic management among the running applications, avoiding unwanted outages in the production environment.
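The routing behaviour an ingress controller implements can be modelled in a few lines. The sketch below is a toy Python model, not Kubernetes configuration or real controller code, and all hostnames and service names are made up: host plus path-prefix rules are matched against a request and the longest matching prefix wins, which mirrors how ingress rules map traffic to backend services.

```python
# Toy model of ingress-style routing: host + path-prefix rules mapped to
# backend services. A real ingress controller (e.g. NGINX-based) applies
# Ingress resources at the proxy layer; names here are purely illustrative.

RULES = [
    {"host": "shop.example.com", "path": "/api", "service": "api-svc"},
    {"host": "shop.example.com", "path": "/",    "service": "web-svc"},
]

def route(host, path, rules=RULES):
    """Return the backend service for a request; longest path prefix wins."""
    matches = [r for r in rules
               if r["host"] == host and path.startswith(r["path"])]
    if not matches:
        return None                      # a real controller would return 404
    return max(matches, key=lambda r: len(r["path"]))["service"]
```

For example, a request for `/api/items` on `shop.example.com` matches both rules but resolves to `api-svc` via the longer `/api` prefix, while any other path on that host falls through to `web-svc`.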

