A LAYERED FRAMEWORK FOR CONNECTING CLIENT OBJECTIVES AND RESOURCE CAPABILITIES

2006 · Vol 15 (03) · pp. 391-413
Author(s):  
ASIT DAN, KAVITHA RANGANATHAN, CATALIN L. DUMITRESCU, MATEI RIPEANU

In large-scale distributed systems such as Grids, an agreement between a client and a service provider specifies service-level objectives, both as expressions of client requirements and as provider assurances. From an application perspective, these objectives should be expressed in a high-level, service- or application-specific manner rather than requiring clients to detail the necessary resources. Resource providers, on the other hand, expect low-level, resource-specific performance criteria that are uniform across applications and can be easily interpreted and provisioned. This paper presents a framework for service management that bridges this gap between high-level specifications of client performance objectives and existing resource management infrastructures. The paper identifies three levels of abstraction for the resource requirements a service provider needs to manage, namely: detailed specification of raw resources, virtualization of heterogeneous resources as abstract resources, and performance objectives at an application level. It also identifies three key functions for managing service-level agreements, namely: translation of resource requirements across abstraction layers, arbitration in allocating resources to client requests, and aggregation and allocation of resources from multiple lower-level resource managers. One or more of these key functions may be present at each abstraction layer of a service-level manager; layering and the composition of these functions across abstraction layers thus enable modeling of a wide array of management scenarios. The framework we present uses service metadata and/or service performance models to map client requirements to resource capabilities, uses the business value associated with objectives to arbitrate between competing requests, and allocates resources based on previously negotiated agreements. We instantiate this framework for three different scenarios and explain how the architectural principles we introduce are used in the real world.
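As an illustration of the three management functions the abstract names, the following minimal Python sketch models translation, arbitration, and aggregation at a single abstraction layer. All class names, the trivial performance model, and the capacity figures are invented for illustration; they are not taken from the paper.

```python
# Sketch of the three SLA-management functions: translation (high-level
# objective -> abstract resource demands), arbitration (business value
# decides among competing requests), and aggregation (pooling capacity
# reported by lower-level resource managers). Names are illustrative.

from dataclasses import dataclass

@dataclass
class Objective:
    client: str
    target_tps: float      # application-level goal: transactions/second
    business_value: float  # used by the arbiter to rank requests

def translate(obj: Objective) -> dict:
    """Map an application-level objective to abstract resource demands
    via a (here deliberately trivial) service performance model."""
    return {"cpu_units": obj.target_tps / 50.0,   # assumed: 50 tps/unit
            "mem_gb": obj.target_tps / 100.0}

def aggregate(pools: list) -> dict:
    """Combine capacity from multiple lower-level resource managers."""
    total = {"cpu_units": 0.0, "mem_gb": 0.0}
    for p in pools:
        for k in total:
            total[k] += p[k]
    return total

def arbitrate(requests: list, capacity: dict) -> list:
    """Admit requests in descending business value while capacity lasts."""
    admitted, free = [], dict(capacity)
    for obj in sorted(requests, key=lambda o: o.business_value, reverse=True):
        need = translate(obj)
        if all(free[k] >= need[k] for k in need):
            for k in need:
                free[k] -= need[k]
            admitted.append(obj)
    return admitted

if __name__ == "__main__":
    capacity = aggregate([{"cpu_units": 4, "mem_gb": 8},
                          {"cpu_units": 2, "mem_gb": 4}])
    reqs = [Objective("A", 200, 9.0), Objective("B", 150, 5.0)]
    print([o.client for o in arbitrate(reqs, capacity)])  # -> ['A']
```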

Author(s):  
Mostafa Rizk, Amer Baghdadi, Michel Jézéquel

Emerging wireless communication standards, which are deployed in diverse transmission environments, support various modulation schemes. High-order constellations are targeted to achieve high bandwidth efficiency. However, the complexity of the symbol-by-symbol Maximum A Posteriori (MAP) algorithm increases dramatically for these high-order modulation schemes. To reduce hardware complexity, the suboptimal Max-Log-MAP algorithm, the direct transformation of the MAP algorithm into the logarithmic domain, is implemented instead. In the literature, a great deal of research effort has been invested in Max-Log-MAP demapping, and several simplifications have been presented for specific constellations. In addition, the hardware implementations dedicated to Max-Log-MAP demapping vary greatly in design choices, supported flexibility, and performance criteria, making them a challenge to compare. This paper explores the published Max-Log-MAP algorithm simplifications and existing hardware demapper designs and presents an extensive review of the current literature. In-depth comparisons are drawn among the designs, and key performance characteristics are described, namely achieved throughput, hardware resource requirements, and flexibility. This survey should facilitate fair comparisons of future designs and point out opportunities for improving the design of Max-Log-MAP demappers.
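Max-Log-MAP demapping replaces the log-sum-exp of the exact MAP log-likelihood ratio with a maximum, so each bit LLR reduces to a difference of minimum squared Euclidean distances. A minimal Python sketch for Gray-mapped 16-QAM over AWGN follows; the constellation labeling, the normalisation, and the LLR sign convention (positive favours bit = 0) are assumptions, not details of any surveyed design.

```python
# Max-Log-MAP soft demapping sketch for Gray-mapped 16-QAM over AWGN
# with noise variance n0. LLR(b_k) ~ (min over symbols with b_k=1 of
# |y-s|^2  -  min over symbols with b_k=0 of |y-s|^2) / n0.

import math

def gray_16qam():
    """Return {4-bit label tuple: complex symbol} for a unit-average-
    energy Gray-mapped 16-QAM constellation (labels assumed)."""
    levels = {(0, 0): -3, (0, 1): -1, (1, 1): 1, (1, 0): 3}
    norm = math.sqrt(10)  # E[|s|^2] = 10 on the +/-1, +/-3 grid
    return {ib + qb: complex(i, q) / norm
            for ib, i in levels.items()
            for qb, q in levels.items()}

def max_log_map_llrs(y: complex, n0: float, mapping: dict) -> list:
    """Per-bit LLRs using the max-log approximation: the sum over
    symbols collapses to the single nearest symbol per bit hypothesis."""
    nbits = len(next(iter(mapping)))
    llrs = []
    for k in range(nbits):
        d0 = min(abs(y - s) ** 2 for b, s in mapping.items() if b[k] == 0)
        d1 = min(abs(y - s) ** 2 for b, s in mapping.items() if b[k] == 1)
        llrs.append((d1 - d0) / n0)
    return llrs

if __name__ == "__main__":
    m = gray_16qam()
    print([round(l, 2) for l in max_log_map_llrs(0.3 - 0.9j, 0.1, m)])
```

The simplifications surveyed in the paper typically exploit constellation symmetry to avoid the explicit minimum over all 16 symbols; the exhaustive search above is the unsimplified baseline.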


2018 · Vol 51 (7-8) · pp. 360-367
Author(s):  
Geng Liang, Wen Li

Traditionally, routers and other network devices in most large enterprise networks encompass both data and control functions, making it difficult to adapt the network infrastructure and operation to the large-scale addition of end systems, virtual machines, and virtual networks in comprehensive industrial automation. A network organizing technique that has recently come to prominence is the Software-Defined Network (SDN). This paper proposes a novel SDN-based industrial control network (SDNICN) that incorporates intelligent network components. Switches in the SDNICN provide fundamental network interconnection for the whole industrial control network; a network controller handles data transmission, forwarding, and routing control between different layers; and a Service Management Center (SMC) is responsible for managing the various services used in industrial process control. An SDNICN can not only greatly improve the flexibility and performance of an industrial control network but also meet the intelligence and informatization requirements of future industry.
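The control/data-plane split the abstract describes can be made concrete with a toy sketch: switches hold only flow tables, the controller computes and installs routes, and the SMC registers the industrial services that use them. All names and the API below are invented for illustration and do not reproduce the paper's design.

```python
# Toy SDN split: Switch = data plane, Controller = control plane,
# ServiceManagementCenter = service registry. Illustrative only.

class Switch:
    def __init__(self, name):
        self.name = name
        self.flow_table = {}            # match (src, dst) -> out port

    def install_rule(self, match, out_port):
        self.flow_table[match] = out_port

    def forward(self, src, dst):
        # None means "no rule": a real switch would ask the controller.
        return self.flow_table.get((src, dst))

class Controller:
    """Control plane: computes a path and pushes rules to every hop."""
    def __init__(self, topology):
        self.topology = topology        # switch name -> Switch

    def setup_path(self, src, dst, hops):
        for sw_name, out_port in hops:
            self.topology[sw_name].install_rule((src, dst), out_port)

class ServiceManagementCenter:
    """Registers process-control services so they can be prioritised."""
    def __init__(self):
        self.services = {}

    def register(self, name, priority):
        self.services[name] = priority

if __name__ == "__main__":
    s1, s2 = Switch("s1"), Switch("s2")
    ctrl = Controller({"s1": s1, "s2": s2})
    ctrl.setup_path("plc-1", "hmi-1", [("s1", 2), ("s2", 1)])
    print(s1.forward("plc-1", "hmi-1"))  # -> 2
```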


Author(s):  
Ovunc Kocabas, Regina Gyampoh-Vidogah, Tolga Soyata

This chapter describes the concepts and cost models used to determine the cost of providing cloud services to mobile applications under different pricing models. Two recently implemented mobile-cloud applications are studied in terms of both the cost to the cloud operator of providing such services and the cost to the cloud user of operating them. The computing resource requirements of both applications are identified, and worksheets are presented to demonstrate how businesses can estimate the operational cost of implementing such real-time mobile-cloud applications at large scale, as well as how much cloud operators can profit from providing resources for these applications. In addition, the nature of available service-level agreements (SLAs) and the importance of quality-of-service (QoS) specifications within these SLAs are explained and emphasized for mobile-cloud application deployment.
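In the spirit of the chapter's worksheets, a toy cost model might compare what the cloud user pays against what the operator keeps. The sketch below uses placeholder prices, utilisation figures, and cost categories, not the chapter's data.

```python
# Toy cost worksheet: user-side monthly bill under on-demand pricing
# versus operator-side margin. All figures are placeholders.

HOURS_PER_MONTH = 730

def user_monthly_cost(instances, price_per_hour, storage_gb, price_per_gb):
    """What the cloud user pays: compute plus storage."""
    compute = instances * price_per_hour * HOURS_PER_MONTH
    storage = storage_gb * price_per_gb
    return compute + storage

def operator_profit(revenue, amortised_hw_cost, power_cooling, staff):
    """What the operator keeps after its own monthly cost categories."""
    return revenue - (amortised_hw_cost + power_cooling + staff)

if __name__ == "__main__":
    revenue = user_monthly_cost(instances=40, price_per_hour=0.20,
                                storage_gb=500, price_per_gb=0.10)
    print(f"user pays      ${revenue:,.2f}/month")
    print(f"operator keeps ${operator_profit(revenue, 2500, 900, 1200):,.2f}/month")
```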


2020 · Vol 17 (9) · pp. 3904-3906
Author(s):  
Susmita J. A. Nair, T. R. Gopalakrishnan Nair

The increasing demand for computing resources and the popularity of cloud computing have led organizations to establish large-scale data centers. To handle varying workloads, allocating resources to virtual machines (VMs) and placing the VMs in the most suitable physical machines without violating the Service Level Agreement (SLA) remain big challenges for cloud providers. Energy consumption and performance degradation are the prime concerns for data centers providing services under a strict SLA. In this paper we suggest a model for minimizing energy consumption and performance degradation without violating the SLA. The experiments conducted show a reduction in SLA violations of nearly 10%.
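A common heuristic in this line of work, and a plausible reading of such a model (the paper's exact formulation is not reproduced here), is to place each VM on the host whose power draw rises least while keeping utilisation headroom against SLA violations. A minimal sketch, assuming a linear power model and invented thresholds:

```python
# Power-aware best-fit VM placement sketch: pick the host with the
# smallest power increase that still leaves SLA headroom. The linear
# power model and all thresholds below are assumptions.

def host_power(util, p_idle=100.0, p_max=250.0):
    """Linear power model: watts as a function of CPU utilisation."""
    return p_idle + (p_max - p_idle) * util

def place_vm(vm_cpu, hosts, headroom=0.8):
    """Choose the host whose power draw rises least; refuse placements
    that would push utilisation past the SLA headroom threshold."""
    best, best_delta = None, float("inf")
    for h in hosts:
        new_util = h["util"] + vm_cpu / h["cpu"]
        if new_util > headroom:
            continue                      # would risk SLA violations
        delta = host_power(new_util) - host_power(h["util"])
        if delta < best_delta:
            best, best_delta = h, delta
    if best is not None:
        best["util"] += vm_cpu / best["cpu"]
    return best

if __name__ == "__main__":
    hosts = [{"name": "h1", "cpu": 16.0, "util": 0.50},
             {"name": "h2", "cpu": 32.0, "util": 0.10}]
    chosen = place_vm(vm_cpu=4.0, hosts=hosts)
    print(chosen["name"], round(chosen["util"], 3))  # -> h2 0.225
```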


Author(s):  
Oshin Sharma, Hemraj Saini

Cloud computing has revolutionized the working models of the IT industry and increased the demand for cloud resources, which in turn increases the energy consumption of data centers. Virtual machines (VMs) are consolidated dynamically to reduce the number of host machines inside data centers while satisfying customers' requirements and quality of service (QoS). Moreover, every cloud user has a service-level agreement (SLA) that deals with energy and performance trade-offs. Since excessive consolidation and migration may degrade system performance, this paper focuses on the overall performance of the system, rather than energy consumption alone, during the consolidation process, so as to maintain trust between cloud users and providers. In addition, the paper proposes three different heuristics for VM placement based on current and previous resource usage. The proposed heuristics ensure a high level of SLA compliance and better performance on the ESM metric in comparison to previous research.
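To illustrate what "based on current and previous resource usage" can mean, the sketch below scores hosts by an exponentially smoothed utilisation history. The smoothing factor and scoring rule are illustrative assumptions; the paper's three heuristics and its ESM metric are not reproduced here.

```python
# History-aware placement sketch: a host's suitability depends on both
# its current load and its recent load trend. Parameters are invented.

def smoothed_util(history, alpha=0.7):
    """Exponentially weight recent samples: the current reading counts
    most, but a spiky past still raises the estimate."""
    est = history[0]
    for u in history[1:]:
        est = alpha * u + (1 - alpha) * est
    return est

def choose_host(vm_demand, hosts, threshold=0.8):
    """Prefer the host whose *smoothed* utilisation stays lowest after
    accepting the VM, avoiding hosts that history says run hot."""
    best, best_score = None, float("inf")
    for name, history, capacity in hosts:
        projected = smoothed_util(history) + vm_demand / capacity
        if projected < threshold and projected < best_score:
            best, best_score = name, projected
    return best

if __name__ == "__main__":
    hosts = [("h1", [0.30, 0.75, 0.70], 16.0),   # trending hot
             ("h2", [0.50, 0.40, 0.35], 16.0)]   # cooling down
    print(choose_host(vm_demand=2.0, hosts=hosts))  # -> h2
```

Compared with using the instantaneous reading alone, the smoothed score penalises h1's recent spike, which is the intuition behind preferring history-aware placement during consolidation.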


Author(s):  
M. AUGUSTON, P. FRITZSON

PARFORMAN (PARallel FORMal ANnotation language) is a high-level specification language for expressing intended behavior or known types of error conditions when debugging or testing parallel programs. Models of intended or faulty target program behavior can be succinctly specified in PARFORMAN. These models are then compared with the actual behavior, in terms of execution traces of events, in order to localize possible bugs. PARFORMAN can also be used as a general language for expressing computations over target program execution histories. PARFORMAN is based on a precise model of target program behavior. This model, called the H-space (History-space), is formally defined through a set of general axioms about three basic relations which may or may not hold between two arbitrary events: they may be sequentially ordered (SEQ), they may be parallel (PAR), or one of them may be included in another composite event (IN). The general notion of composite events is exploited systematically, which makes more powerful and succinct specifications possible. The notion of an event grammar is introduced to describe allowed event patterns over a certain application domain or language. Auxiliary composite events such as Snapshots are introduced to define the notion of "occurred at the same time" at suitable levels of abstraction. Finally, patterns and aggregate operations on events are introduced to make specifications short and readable. In addition to debugging and testing, PARFORMAN can also be used to specify profiles and performance measurements.
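The three basic relations are easy to make concrete. The toy Python encoding below checks SEQ, PAR, and IN over interval-stamped trace events; the trace representation is invented for illustration, since PARFORMAN itself is a specification language rather than an API.

```python
# Toy encoding of PARFORMAN's three basic event relations: SEQ
# (sequentially ordered), PAR (parallel), IN (contained in a composite
# event). Timestamps and the Event type are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class Event:
    name: str
    start: int                 # logical timestamps from the trace
    end: int
    children: list = field(default_factory=list)   # composite events

def seq(a: Event, b: Event) -> bool:
    """a SEQ b: a finishes before b begins."""
    return a.end < b.start

def par(a: Event, b: Event) -> bool:
    """a PAR b: neither is ordered before the other (overlap)."""
    return not seq(a, b) and not seq(b, a)

def in_(a: Event, b: Event) -> bool:
    """a IN b: a is a constituent of composite event b (transitively)."""
    return a in b.children or any(in_(a, c) for c in b.children)

if __name__ == "__main__":
    send = Event("send", 1, 2)
    recv = Event("recv", 4, 5)
    crunch = Event("crunch", 1, 5)
    session = Event("session", 0, 6, children=[send, recv])
    print(seq(send, recv), par(send, crunch), in_(recv, session))
    # -> True True True
```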


Author(s):  
K. Yoshimura, I. Gaus, K. Kaku, T. Sakaki, A. Deguchi, ...

Large-scale demonstration experiments in underground research laboratories (both on-site and off-site) are currently undertaken by most high-level radioactive waste management organisations. The decision to plan and implement prototype experiments, which may have a life of several decades, has important strategic and budgetary consequences for the organisation. Careful definition of experimental objectives based on the design and safety requirements is critical. Implementation requires the involvement of many parties and needs flexible but consistent management, as additional goals identified for the experiments in the course of implementation might jeopardise the initial primary goals. An international workshop involving European and Japanese implementers (SKB, Posiva, Andra, ONDRAF, NUMO and Nagra) as well as certain research organisations (JAEA, RWMC) identified which experiments are likely to be needed depending on the progress of a disposal programme. Earlier in a programme, large-scale demonstrations are generally performed with the aim of reducing uncertainties identified during safety case development, such as validating thermo-hydraulic-mechanical processes in the engineered barrier system and the target host rock. Feasibility testing of underground construction in a potential host rock at relevant depth might also be required. Later in a programme, i.e., closer to the licence application, large-scale experiments aim largely at demonstrating engineering feasibility and confirming the performance of complete repository components. Ultimately, before licensing repository operation, 1:1-scale commissioning testing will be required. Factors contributing to the successful completion of large-scale demonstration experiments, in terms of planning, defining the objectives, and optimising results, together with the main lessons learned over the last 30 years, are discussed. The need for international coordination in defining the objectives of new large-scale demonstration experiments is addressed. The paper is expected to provide guidance to implementing organisations (especially those in the early stages of their programme) considering participating in and/or conducting their own large-scale experiments in the near future.


Author(s):  
Wei-Chih Huang, William J. Knottenbelt

As the variety of execution environments and application contexts increases exponentially, modern software is often repeatedly refactored to meet ever-changing non-functional requirements. Although programmer effort can be reduced through the use of standardised libraries, adjusting software for scalability, reliability, and performance remains a time-consuming and manual job that requires high levels of expertise. Previous research has proposed three broad classes of techniques to overcome these difficulties in specific application domains: probabilistic techniques, out-of-core storage, and parallelism. However, due to limited cross-pollination of knowledge between domains, the same or very similar techniques have been reinvented again and again, and applying them still requires manual effort. This chapter introduces the vision of self-adaptive, scalable, resource-efficient software that is able to reconfigure itself with little more than programmer-specified Service-Level Objectives and a description of the resource constraints of the current execution environment. The approach is designed to be low-overhead from the programmer's perspective; indeed, a naïve implementation should suffice. To illustrate the vision, the authors have implemented in C++ a prototype library of self-adaptive containers, which dynamically adjust themselves to meet non-functional requirements at run time and automatically deploy mitigating techniques when resource limits are reached. The authors describe the architecture of the library and the functionality of each component, as well as the process of self-adaptation. They explore the potential of the library in the context of a case study, which shows that the library can allow a naïve program to accept large-scale input and become resource-aware with very little programmer overhead.
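The chapter's prototype is a C++ library; purely to illustrate the idea of a container that deploys a mitigating technique when a resource limit is reached, here is a minimal Python sketch using out-of-core spill-over, one of the three technique classes named above. The class name and the entry-count limit standing in for a real Service-Level Objective are invented.

```python
# Minimal self-adaptive container sketch: a dict-like map that spills
# to on-disk storage (out-of-core mitigation) once an in-memory entry
# budget, a stand-in for a programmer-specified SLO, is exceeded.

import os
import shelve
import tempfile

class AdaptiveMap:
    def __init__(self, max_in_memory=1000):
        self.max_in_memory = max_in_memory
        self.mem = {}
        self.disk = None            # created lazily on first overflow

    def _spill(self):
        """Adapt: migrate all entries to disk, trading speed for memory."""
        path = os.path.join(tempfile.mkdtemp(), "overflow")
        self.disk = shelve.open(path)
        for k, v in self.mem.items():
            self.disk[k] = v
        self.mem.clear()

    def __setitem__(self, key, value):
        if self.disk is None and len(self.mem) >= self.max_in_memory:
            self._spill()
        (self.mem if self.disk is None else self.disk)[key] = value

    def __getitem__(self, key):
        if key in self.mem:
            return self.mem[key]
        if self.disk is not None:
            return self.disk[key]
        raise KeyError(key)

if __name__ == "__main__":
    m = AdaptiveMap(max_in_memory=2)
    for i in range(5):
        m[f"k{i}"] = i              # the third insert triggers the spill
    print(m["k0"], m["k4"])         # -> 0 4
```

A production version would monitor actual memory pressure rather than an entry count and keep a hot in-memory cache after spilling; the point here is only the run-time switch between representations.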

