Fast Analysis and Prediction in Large Scale Virtual Machines Resource Utilisation

Author(s):  
Abdullahi Abubakar ◽  
Sakil Barbhuiya ◽  
Peter Kilpatrick ◽  
Ngo Vien ◽  
Dimitrios Nikolopoulos
2018 ◽  
Vol 51 (7-8) ◽  
pp. 360-367
Author(s):  
Geng Liang ◽  
Wen Li

Traditionally, routers and other network devices encompass both data and control functions; in most large enterprise networks this makes it difficult to adapt the network infrastructure and operation to the large-scale addition of end systems, virtual machines, and virtual networks in industrial comprehensive automation. A network organizing technique that has recently come to prominence is the Software-Defined Network (SDN). A novel SDN-based industrial control network (SDNICN) is proposed in this paper. An SDNICN includes intelligent network components: switches provide fundamental network interconnection for the whole industrial control network; the network controller handles data transmission, forwarding, and routing control between the different layers; and the Service Management Center (SMC) is responsible for managing the various services used in industrial process control. An SDNICN can not only greatly improve the flexibility and performance of an industrial control network but also meet the intelligence and informatization needs of future industry.
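As a concrete illustration of the control/data-plane separation described above, here is a minimal, hypothetical sketch (not the paper's SDNICN design): a central controller holds the topology view and installs forwarding rules into otherwise simple switches. The topology, rule format, and names are invented for illustration.

```python
# Hypothetical sketch of SDN-style separation: switches hold only flow tables;
# a central controller computes paths and programs every switch on the route.
class Switch:
    def __init__(self, name):
        self.name = name
        self.flow_table = {}            # destination address -> output port

    def install_rule(self, dst, port):
        self.flow_table[dst] = port     # the data plane only stores the rule

    def forward(self, dst):
        return self.flow_table.get(dst, "drop")


class Controller:
    """Control plane: owns the topology view and programs the switches."""
    def __init__(self, switches):
        self.switches = switches

    def provision_path(self, dst, hops):
        # hops: list of (switch name, output port) pairs along the route
        for name, port in hops:
            self.switches[name].install_rule(dst, port)


switches = {n: Switch(n) for n in ("s1", "s2")}
ctrl = Controller(switches)
ctrl.provision_path(dst="plc-7", hops=[("s1", 2), ("s2", 1)])
print(switches["s1"].forward("plc-7"))   # -> 2
```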


Author(s):  
Valentin Tablan ◽  
Ian Roberts ◽  
Hamish Cunningham ◽  
Kalina Bontcheva

Cloud computing is increasingly being regarded as a key enabler of the ‘democratization of science’, because on-demand, highly scalable cloud computing facilities enable researchers anywhere to carry out data-intensive experiments. In the context of natural language processing (NLP), algorithms tend to be complex, which makes their parallelization and deployment on cloud platforms a non-trivial task. This study presents a new, unique, cloud-based platform for large-scale NLP research: GATECloud.net. It enables researchers to carry out data-intensive NLP experiments by harnessing the vast, on-demand compute power of the Amazon cloud. Important infrastructural issues are dealt with by the platform, completely transparently to the researcher: load balancing, efficient data upload and storage, deployment on the virtual machines, security, and fault tolerance. We also include a cost-benefit analysis and usage evaluation.
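For a sense of the document-parallel pattern such platforms automate, here is a generic sketch (this is not the GATECloud.net API): the same annotation pipeline is run over chunks of a corpus concurrently and the results are merged. The pipeline body and corpus are stand-ins.

```python
# Generic document-parallel NLP sketch: one worker per core, same pipeline
# applied to every document. annotate() stands in for a real NLP pipeline.
from concurrent.futures import ProcessPoolExecutor

def annotate(doc: str) -> dict:
    """Stand-in for an expensive NLP pipeline (tokeniser, tagger, ...)."""
    tokens = doc.split()
    return {"doc": doc[:20], "n_tokens": len(tokens)}

corpus = [f"document number {i} with some example text" for i in range(1000)]

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(annotate, corpus, chunksize=50))
    print(len(results), results[0])
```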


Author(s):  
Shruthi P. ◽  
Nagaraj G. Cholli

Cloud computing is an environment in which several virtual machines (VMs) run concurrently on physical machines. The cloud computing infrastructure hosts multiple cloud service segments that communicate with each other through interfaces, creating a distributed computing environment. During operation, software systems accumulate errors or garbage that can lead to system failure and other hazardous consequences; this condition is called software aging. Software aging is caused by memory fragmentation, large-scale resource consumption, and the accumulation of numerical error. It degrades performance and may result in system failure through premature resource exhaustion. The problem cannot be detected during the software testing phase because it arises from the dynamic nature of operation. The errors that cause software aging are of a special type: they do not disturb the software's functionality but affect its response time and environment, so the issue can only be resolved at run time. To alleviate the impact of software aging, software rejuvenation is used. Rejuvenation reboots the system or restarts the software, which removes accumulated error conditions, clears deadlocks, and defragments operating system resources such as memory, thereby averting future failures that would otherwise result from software aging. As service availability is crucial, software rejuvenation should be carried out on defined schedules without disrupting the service. Software rejuvenation techniques can make software systems more trustworthy, and software designers use the concept to improve the quality and reliability of their software. Software aging and rejuvenation has generated a lot of research interest in recent years. This work reviews research related to the detection of software aging and identifies research gaps.
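To make the rejuvenation idea concrete, here is a minimal sketch of one common threshold-based variant (not a method from the surveyed papers): a supervisor watches a worker process's resident memory and restarts it before aging-related exhaustion causes a failure. The threshold, interval, and simulated leak are assumptions; psutil is a third-party library.

```python
# Minimal sketch of threshold-based software rejuvenation: restart a worker
# once its resident memory exceeds a limit, clearing accumulated state.
import multiprocessing
import time

import psutil  # third-party; assumed available

MEMORY_LIMIT_MB = 200    # hypothetical rejuvenation threshold
CHECK_INTERVAL_S = 5     # hypothetical monitoring period


def worker():
    """Stand-in for a long-running service that slowly accumulates state."""
    leak = []
    while True:
        leak.append(bytearray(1024 * 1024))  # simulate a 1 MiB/step leak
        time.sleep(1)


def supervise():
    proc = multiprocessing.Process(target=worker)
    proc.start()
    while True:  # runs as a daemon-style supervisor loop
        rss_mb = psutil.Process(proc.pid).memory_info().rss / (1024 * 1024)
        if rss_mb > MEMORY_LIMIT_MB:
            # Rejuvenate: terminate and relaunch, discarding accumulated state.
            proc.terminate()
            proc.join()
            proc = multiprocessing.Process(target=worker)
            proc.start()
        time.sleep(CHECK_INTERVAL_S)


if __name__ == "__main__":
    supervise()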


2018 ◽  
Vol 33 (3) ◽  
pp. 348-366
Author(s):  
Paul Ezhilchelvan ◽  
Isi Mitrani

A cloud provider hosts virtual machines (VMs) of different types, with different resource requirements. There are bounds on the total amounts of each kind of resource that are available. Requests arrive in batches of different sizes. Under the ‘complete blocking’ policy, a request is accepted only if all the VMs in its batch can be accommodated. The ‘partial blocking’ policy would accept a request if there is room for at least one of the VMs in the batch. Blocked requests are lost, with an associated loss of revenue. The trade-offs between costs and benefits are evaluated by means of appropriate models, for which novel solutions based on fixed-point iterations are proposed. The applicability of those solutions is extended, by means of simplifications, to very large-scale systems. Numerical examples and comparisons with simulations are presented.
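To illustrate the two admission policies, here is a minimal Monte Carlo sketch (it does not reproduce the authors' fixed-point models): batches of VM requests arrive, each VM needs one capacity unit, and we compare the fraction of VMs accepted. Capacity, batch-size distribution, and departure rate are assumptions.

```python
# Compare 'complete blocking' (all-or-nothing batch admission) with
# 'partial blocking' (admit as many VMs of the batch as fit).
import random

CAPACITY = 20       # total VM slots (single resource type assumed)
ARRIVALS = 10_000   # number of batch arrivals to simulate
MAX_BATCH = 5       # batch sizes drawn uniformly from 1..MAX_BATCH
DEPART_PROB = 0.3   # chance each running VM finishes between arrivals


def simulate(policy: str) -> float:
    """Return the fraction of offered VMs accepted under the given policy."""
    running = accepted = offered = 0
    for _ in range(ARRIVALS):
        # Independent departures since the last arrival.
        running -= sum(1 for _ in range(running) if random.random() < DEPART_PROB)
        batch = random.randint(1, MAX_BATCH)
        offered += batch
        free = CAPACITY - running
        if policy == "complete":
            if batch <= free:          # accept only if the whole batch fits
                running += batch
                accepted += batch
        else:
            admitted = min(batch, free)  # partial: take whatever fits
            running += admitted
            accepted += admitted
    return accepted / offered


for policy in ("complete", "partial"):
    print(policy, round(simulate(policy), 3))
```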


2021 ◽  
Author(s):  
Edzer Pebesma ◽  
Patrick Griffiths ◽  
Christian Briese ◽  
Alexander Jacob ◽  
Anze Skerlevaj ◽  
...  

The openEO API allows the analysis of large amounts of Earth Observation data using a high-level abstraction of data and processes. Rather than focusing on the management of virtual machines and millions of imagery files, it allows users to create jobs that take a spatio-temporal section of an image collection (such as Sentinel L2A) and treat it as a data cube. Processes iterate or aggregate over pixels, spatial areas, spectral bands, or time series, while working at arbitrary spatial resolution. This pattern, pioneered by Google Earth Engine™ (GEE), lets the user focus on the science rather than on data management.

The openEO H2020 project (2017-2020) developed the API as well as an ecosystem of software around it, including clients (JavaScript, Python, R, QGIS, browser-based), back-ends that translate API calls into existing image analysis or GIS software or services (for Sentinel Hub, WCPS, Open Data Cube, GRASS GIS, GeoTrellis/GeoPySpark, and GEE), as well as a hub that allows querying and searching openEO providers for their capabilities and datasets. The project demonstrated this software in a number of use cases, where identical processing instructions were sent to different implementations, allowing comparison of the returned results.

A follow-up, ESA-funded project, “openEO Platform”, realizes the API and progresses the software ecosystem into operational services and applications that are accessible to everyone, involve federated deployment (using the clouds managed by EODC, Terrascope, CreoDIAS and EuroDataCube), will provide payment models (“pay per compute job”) conceived and implemented following the user community's needs, and will use the EOSC (European Open Science Cloud) marketplace for dissemination and authentication. A wide range of large-scale case studies will demonstrate the ability of the openEO Platform to scale to large data volumes. The case studies to be addressed include on-demand ARD generation for SAR and multi-spectral data, agricultural demonstrators such as crop type and condition monitoring, forestry services such as near-real-time forest damage assessment and canopy cover mapping, environmental hazard monitoring of floods and air pollution, and security applications such as vessel detection in the Mediterranean Sea.

While the landscape of cloud-based EO platforms and services has matured and diversified over the past decade, we believe there are strong advantages for scientists and government agencies in adopting the openEO approach. Beyond the absence of vendor/platform lock-in and EULAs, we note the abilities to (i) run arbitrary user code (e.g. written in R or Python) close to the data, (ii) carry out scientific computations on an entirely open-source software stack, (iii) integrate different platforms (e.g., different cloud providers offering different datasets), and (iv) help create and extend this software ecosystem. openEO uses the OpenAPI standard, aligns with modern OGC API standards, and uses STAC (the SpatioTemporal Asset Catalog) to describe image collections and image tiles.
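As an illustration of the data-cube abstraction, here is a minimal sketch using the openEO Python client; the endpoint URL, collection id, spatial/temporal extents, and band names are illustrative assumptions, and running it requires an account on the chosen back-end.

```python
# Sketch: compute a per-pixel maximum-NDVI composite without touching any
# individual image file; all heavy lifting runs server-side on the back-end.
import openeo

connection = openeo.connect("https://openeo.cloud").authenticate_oidc()

cube = connection.load_collection(
    "SENTINEL2_L2A",                      # illustrative collection id
    spatial_extent={"west": 11.2, "south": 46.4, "east": 11.4, "north": 46.6},
    temporal_extent=["2021-05-01", "2021-08-31"],
    bands=["B04", "B08"],
)

red = cube.band("B04")
nir = cube.band("B08")
ndvi = (nir - red) / (nir + red)          # band math on the data cube

ndvi.max_time().download("ndvi_max.tiff")  # reduce the time dimension, fetch result
```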


Author(s):  
Aleksandra Kostic-Ljubisavljevic ◽  
Branka Mikavica

All vertically integrated participants in the content provisioning process are influenced by bandwidth requirements. Provisioning self-owned resources to satisfy peak bandwidth demand leads to network underutilization and is cost-inefficient, while under-provisioning leads to the rejection of customers' requests. Vertically integrated providers therefore need to consider cloud migration in order to minimize costs and improve the quality of service and quality of experience of their customers. Cloud providers maintain large-scale data centers to offer storage and computational resources in the form of virtual machine instances, under different pricing plans: reservation, on-demand, and spot pricing. To obtain an optimal integration charging strategy, revenue sharing, cost sharing, or wholesale pricing is frequently applied. The vertically integrated content provider's incentives for cloud migration can induce significant complexity in integration contracts, and consequently improvements in costs and in the requests' rejection rate.
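To see why the choice among the three pricing plans matters for peak-versus-average demand, here is a back-of-the-envelope sketch; all rates and VM counts are hypothetical, and real cloud tariffs and spot-market dynamics differ.

```python
# Illustrative monthly cost comparison of reservation, on-demand, and a
# reserved-baseline-plus-spot-burst mix, for bandwidth-driven VM demand.
HOURS = 24 * 30                 # one month
ON_DEMAND_RATE = 0.10           # $/VM-hour (assumed)
RESERVED_RATE = 0.06            # $/VM-hour, committed for the term (assumed)
SPOT_RATE = 0.03                # $/VM-hour, revocable capacity (assumed)

peak_vms, avg_vms = 100, 40     # hypothetical peak and average demand

on_demand_cost = ON_DEMAND_RATE * avg_vms * HOURS      # pay only for use
reserved_cost = RESERVED_RATE * peak_vms * HOURS       # provision for peak
# Hybrid: reserve the baseline, cover roughly half of the burst hours on spot.
hybrid_cost = (RESERVED_RATE * avg_vms
               + SPOT_RATE * (peak_vms - avg_vms) * 0.5) * HOURS

print(f"on-demand only:               ${on_demand_cost:,.0f}")
print(f"reserved for peak:            ${reserved_cost:,.0f}")
print(f"reserved baseline + spot:     ${hybrid_cost:,.0f}")
```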


Author(s):  
R. Jeyarani ◽  
N. Nagaveni ◽  
R. Vasanth Ram

Cloud computing provides dynamic leasing of server capabilities as a scalable, virtualized service to end users. The work discussed here focuses on the Infrastructure as a Service (IaaS) model, where custom virtual machines (VMs) are launched on appropriate servers available in a data center. The environment is a large-scale, heterogeneous, and dynamic resource pool. Nonlinear variation in the availability of processing elements, memory size, storage capacity, and bandwidth causes resource dynamics, in addition to the sporadic nature of the workload. The major challenge is to map a set of VM instances onto a set of servers from a dynamic resource pool such that the total incremental power drawn by the mapping is minimal and the performance objectives are not compromised. This paper proposes a novel Self-Adaptive Particle Swarm Optimization (SAPSO) algorithm to address the intractable nature of this challenge. The proposed approach promptly detects and efficiently tracks the changing optimum, which represents the target servers for VM placement. The experimental results of SAPSO were compared with Multi-Strategy Ensemble Particle Swarm Optimization (MEPSO), and they show that SAPSO outperforms the latter for power-aware adaptive VM provisioning in a large-scale, heterogeneous, and dynamic cloud environment.
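For readers unfamiliar with PSO-based placement, here is a minimal sketch of plain global-best PSO applied to the power-aware mapping problem; the paper's self-adaptive variant and its change-detection logic are not shown, and all loads, capacities, and power figures are hypothetical.

```python
# Plain PSO for power-aware VM placement: each particle encodes, per VM, a
# candidate server index; fitness is the incremental power the placement draws.
import random

NUM_VMS, NUM_SERVERS = 8, 4
SWARM, ITERS = 20, 100
VM_LOAD = [random.uniform(1.0, 3.0) for _ in range(NUM_VMS)]       # CPU units
CAPACITY = [6.0] * NUM_SERVERS                                     # per server
IDLE_POWER = [random.uniform(50, 90) for _ in range(NUM_SERVERS)]  # W if on
POWER_PER_UNIT = 10.0                                              # W per unit


def fitness(position):
    """Incremental power of a placement; overloaded servers incur a penalty."""
    assign = [min(NUM_SERVERS - 1, max(0, round(x))) for x in position]
    load = [0.0] * NUM_SERVERS
    for vm, srv in enumerate(assign):
        load[srv] += VM_LOAD[vm]
    power = sum(IDLE_POWER[s] for s in range(NUM_SERVERS) if load[s] > 0)
    power += POWER_PER_UNIT * sum(load)
    penalty = sum(max(0.0, load[s] - CAPACITY[s]) for s in range(NUM_SERVERS))
    return power + 1_000.0 * penalty


# Standard global-best PSO update loop.
pos = [[random.uniform(0, NUM_SERVERS - 1) for _ in range(NUM_VMS)]
       for _ in range(SWARM)]
vel = [[0.0] * NUM_VMS for _ in range(SWARM)]
pbest = [p[:] for p in pos]
gbest = min(pbest, key=fitness)
for _ in range(ITERS):
    for i in range(SWARM):
        for d in range(NUM_VMS):
            r1, r2 = random.random(), random.random()
            vel[i][d] = (0.7 * vel[i][d]
                         + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                         + 1.5 * r2 * (gbest[d] - pos[i][d]))
            pos[i][d] = min(NUM_SERVERS - 1.0, max(0.0, pos[i][d] + vel[i][d]))
        if fitness(pos[i]) < fitness(pbest[i]):
            pbest[i] = pos[i][:]
    gbest = min(pbest, key=fitness)

print("best placement:", [round(x) for x in gbest],
      "power:", round(fitness(gbest), 1))
```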


2017 ◽  
Vol 10 (13) ◽  
pp. 162
Author(s):  
Amey Rivankar ◽  
Anusooya G

Cloud computing is the latest trend in large-scale distributed computing. It provides diverse services on demand from distributed resources such as servers, software, and databases. One of the challenging problems in cloud data centers is managing the load across the many reconfigurable virtual machines. Thus, in the near future of the cloud computing field, a mechanism for efficient resource management will be very significant. Many load balancing algorithms have already been implemented and executed to manage resources efficiently and adequately. The objective of this paper is to analyze the shortcomings of existing algorithms and implement a new algorithm that gives an optimized load balancing result.
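As context for the algorithms being surveyed, here is a minimal sketch of one classic baseline policy, least-loaded dispatch, which assigns each incoming task to the VM with the smallest current load; the VM names and task loads are illustrative, and this is not the paper's proposed algorithm.

```python
# Least-loaded dispatch via a min-heap keyed on each VM's current load.
import heapq

def dispatch(tasks, vm_names):
    """Greedily assign each task's load to the currently least-loaded VM."""
    heap = [(0.0, name) for name in vm_names]   # (current load, VM id)
    heapq.heapify(heap)
    placement = []
    for task_load in tasks:
        load, name = heapq.heappop(heap)        # least-loaded VM
        placement.append((name, task_load))
        heapq.heappush(heap, (load + task_load, name))
    return placement

print(dispatch([5, 3, 8, 1, 7], ["vm-a", "vm-b", "vm-c"]))
```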

