Optimizing resources to mitigate stragglers through virtualization in run time

2021 ◽  
Vol 23 (08) ◽  
pp. 931-935
Author(s):  
Ajay Kumar Bansal ◽  
Manmohan Sharma ◽  
Ashu Gupta ◽  
...  

Modern computing systems are generally enormous in scale, consisting of hundreds to thousands of heterogeneous machine nodes, to meet the rising demand for Cloud services. MapReduce and other parallel computing frameworks are frequently deployed on such clusters to offer consumers dependable and timely services. However, complex features of Cloud workloads, such as multi-dimensional resource requirements, and dynamically changing system settings, such as variable node performance, pose new difficulties for providers in terms of both customer experience and system efficiency. The straggler problem occurs when a small subset of parallelized tasks takes excessively long to execute in contrast to their siblings, resulting in delayed job response and the possibility of late-timing failure. Speculative execution is the state-of-the-art method for straggler mitigation. It has been used in numerous real-world systems with a variety of implementation improvements, but the results of this research demonstrate that it is typically wasteful: according to several data center production trace logs, its failure rate can be as high as 71 percent. Straggler mitigation is a difficult task in its own right: 1) stragglers may have varying degrees of severity in parallel job execution; 2) whether a task should be considered a straggler is highly subjective, depending on various application and system conditions; 3) the efficiency of speculative execution would improve if dynamic node quality could be adequately modeled and predicted; and 4) other sorts of stragglers, such as those caused by data skew, are beyond speculative execution's capabilities.
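The straggler-detection heuristic at the heart of speculative execution can be sketched as follows. This is a hypothetical illustration of the general median-comparison idea, not the actual implementation of any system the abstract studies; the task names, rates, and threshold are invented.

```python
# Hypothetical sketch of median-based straggler detection: a task whose
# progress rate is far below that of its siblings is flagged, and a
# speculative copy of it would then be launched on another node.
from statistics import median

def find_stragglers(progress_rates, threshold=0.5):
    """Flag tasks whose progress rate falls below `threshold` times
    the median rate of all sibling tasks in the job."""
    med = median(progress_rates.values())
    return [task for task, rate in progress_rates.items()
            if rate < threshold * med]

rates = {"t1": 1.0, "t2": 0.9, "t3": 0.2, "t4": 1.1}
print(find_stragglers(rates))  # ['t3'] — far slower than its siblings
```

The subjectivity the abstract points out lives in `threshold`: too low and real stragglers go undetected, too high and healthy tasks trigger wasteful speculative copies.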

2018 ◽  
Vol 51 (3) ◽  
pp. 485-502 ◽  
Author(s):  
Ezequiel Heffes

This review explores certain challenges related to the notion of customary international law. The days when international law academics and practitioners considered the nature of this source a well-settled topic seem long gone. Nowadays international lawmaking processes involve an extraordinary number of interactions, taking place both formally and informally. Such complex features are reflected in an exponential increase in the scholarly study of international legal sources. The legal nature of customary international law, its applicability, and the principles regulating it are addressed in the book under review (Brian D Lepard (ed), Reexamining Customary International Law (Cambridge University Press 2017)) through several topical essays. The chapters offer a comprehensive analysis of these lawmaking processes and the challenges they pose from various perspectives and in various fields, asking, for example: What is customary international law and why is it law? Is it law because it reflects a ‘global legislative’ model? What is the current value of the persistent objector theory? Is the two-element definition of customary international law still applicable? By meticulously addressing these and other inquiries, the book presents novel arguments and represents a stimulating addition to the literature on sources of international law.


Author(s):  
Ovunc Kocabas ◽  
Regina Gyampoh-Vidogah ◽  
Tolga Soyata

This chapter describes the concepts and cost models used for determining the cost of providing cloud services to mobile applications under different pricing models. Two recently implemented mobile-cloud applications are studied in terms of both the cost of providing such services by the cloud operator and the cost of operating them by the cloud user. Computing resource requirements of both applications are identified, and worksheets are presented to demonstrate how businesses can estimate the operational cost of implementing such real-time mobile cloud applications at large scale, as well as how much cloud operators can profit from providing resources for these applications. In addition, the nature of available service level agreements (SLAs) and the importance of quality of service (QoS) specifications within these SLAs are emphasized and explained for mobile cloud application deployment.
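The kind of cost worksheet the chapter describes can be sketched as a simple calculation; all prices, resource figures, and the billing dimensions themselves are illustrative assumptions here, not the chapter's actual numbers or model.

```python
# Minimal, hypothetical cloud cost worksheet: compute, storage, and egress
# charges summed for one month. Prices are invented for illustration.
def monthly_cost(instances, price_per_hour, storage_gb, price_per_gb_month,
                 egress_gb, price_per_egress_gb, hours=730):
    compute = instances * price_per_hour * hours       # instance-hours
    storage = storage_gb * price_per_gb_month          # provisioned storage
    network = egress_gb * price_per_egress_gb          # outbound traffic
    return compute + storage + network

# e.g. 4 instances at $0.10/h, 500 GB at $0.023/GB-month, 200 GB egress at $0.09/GB
cost = monthly_cost(4, 0.10, 500, 0.023, 200, 0.09)
print(round(cost, 2))  # ≈ 321.5 with these illustrative prices
```

Comparing this operating cost against the operator's own provisioning cost for the same resources is what lets the worksheet estimate the operator's profit margin.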


Author(s):  
Shailendra Singh ◽  
Sunita Gond

This is the age of technology, and every day brings news of the growing popularity of the internet and its applications. Cloud computing is an emerging paradigm that is rapidly being adopted by industry, organizations, educational institutions, and others for various applications and purposes. It builds on distributed and parallel computing, which have been in the market for a very long time, but reduces the cost of computing by shifting the focus from personal computing to data center computing. Cloud computing architectures and standards provide a unique way of delivering computation services to cloud users, exposing a simple API (Application Programming Interface) through which users access storage, platform, and hardware on a pay-per-use basis. Services provided by cloud computing are metered like other utility-oriented services, such as electricity, water, and telephone, over a shared network. There are many cloud service providers in the market, such as Google, Microsoft, and Manjrasoft Aneka.


2014 ◽  
Vol 577 ◽  
pp. 860-864
Author(s):  
Liang Liu ◽  
Tian Yu Wo

As cloud computing systems become popular, designing a scalable, highly available and cost-effective data platform has become a research hotspot. This paper proposes such a data platform built from MySQL DBMS blocks. For scalability, a three-level (system, super-cluster, cluster) architecture is applied, allowing the platform to scale to thousands of applications. For availability, asynchronous replication across geographically dispersed super-clusters provides disaster recovery, synchronous replication within a cluster performs failure recovery, and hot-standby or even process-pair mechanisms for controllers enhance fault tolerance. For resource utilization, we design a novel load-balancing strategy that exploits the key property that the throughput requirements of web applications fluctuate over time. Experiments with the NLPIR dataset indicate that the system can scale to a large number of web applications and make good use of the provided resources.
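The intuition behind exploiting fluctuating throughput can be shown with a toy calculation; the workload numbers are invented and this is not the paper's actual placement algorithm. Applications whose peak hours differ can be co-located so that a cluster is sized for their combined peak rather than the sum of their individual peaks.

```python
# Toy illustration: co-locating a "day-peaking" and a "night-peaking"
# application lets the cluster be provisioned for the peak of the SUM
# of their demand curves, not the sum of their peaks.
def combined_peak(workloads):
    """workloads: per-interval demand vectors of equal length;
    returns the peak of their element-wise sum."""
    hourly_sum = [sum(w[h] for w in workloads)
                  for h in range(len(workloads[0]))]
    return max(hourly_sum)

day_app   = [8, 8, 2, 2]   # busy in the first half of the period
night_app = [2, 2, 8, 8]   # busy in the second half
print(combined_peak([day_app, night_app]))  # 10, vs 16 if sized per-app
```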


2014 ◽  
Vol 543-547 ◽  
pp. 2933-2936
Author(s):  
Jun Rong Li ◽  
Wen Bo Zhou ◽  
Li Wen Mu ◽  
Tong Yu Yin ◽  
Yuan Li Feng

Different cloud services need different resources, and using those resources efficiently has become one of the hot research topics in cloud computing. In order to improve resource utilization in the cloud, this paper proposes an automatic cloud service classification method, which uses an artificial neural network to predict the type of a service's resource requirements and classifies services based on the prediction result. In this paper, we conduct classification experiments on three groups of Web services from a Web service site. The experimental results show that the method is effective and can predict the type of resource requirements for Web services automatically.
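The paper trains an artificial neural network for this prediction; as a hedged stand-in, the sketch below uses a simple nearest-centroid rule to illustrate the same classify-by-predicted-resource-type idea. The feature vectors, class names, and centroid values are all invented for illustration.

```python
# Stand-in for the paper's ANN classifier: map a service's measured
# features (here, CPU utilization and I/O rate) to the resource-
# requirement type whose centroid is nearest.
def classify(features, centroids):
    """Return the resource type whose centroid is closest to `features`."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: sq_dist(features, centroids[label]))

centroids = {
    "cpu-intensive": (0.9, 0.2),   # (cpu_util, io_rate)
    "io-intensive":  (0.2, 0.9),
}
print(classify((0.8, 0.3), centroids))  # cpu-intensive
```

A trained network replaces the fixed centroids with learned decision boundaries, but the input/output contract (features in, resource type out) is the same.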


2021 ◽  
Vol 7 (3) ◽  
pp. 73-78
Author(s):  
D. Shchemelinin

Monitoring events and predicting the behavior of a dynamic information system are becoming increasingly important due to the globalization of cloud services and a sharp increase in the volume of processed data. Well-known monitoring systems are used for timely detection and prompt correction of anomalies, but they require new, more effective and proactive forecasting tools. At the CMG-2013 conference, a method for predicting memory leaks in Java applications was presented, which allows IT teams to automatically release resources by safely restarting services when a certain critical threshold value is reached. That solution implements a simple linear mathematical model to describe the historical trend function. In practice, however, the degradation of memory and other computational resources may occur not gradually but very quickly, depending on the workload; therefore, solving the forecasting problem with linear methods alone is not effective enough.
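The linear-trend approach the article builds on can be sketched as follows: fit memory usage over time by least squares and estimate when the critical threshold will be crossed. The sample values and threshold are illustrative, not the article's data; the article's point is precisely that fast, workload-driven degradation breaks this linear assumption.

```python
# Least-squares linear trend: predict the time at which memory usage
# will reach the restart threshold, assuming growth stays linear.
def time_to_threshold(times, usage, threshold):
    n = len(times)
    mt, mu = sum(times) / n, sum(usage) / n
    slope = (sum((t - mt) * (u - mu) for t, u in zip(times, usage))
             / sum((t - mt) ** 2 for t in times))
    intercept = mu - slope * mt
    return (threshold - intercept) / slope   # time when usage == threshold

# memory grows ~10 MB/h from 100 MB; restart threshold at 500 MB
print(time_to_threshold([0, 1, 2, 3], [100, 110, 120, 130], 500))  # 40.0
```

When a workload spike makes the growth super-linear, this estimate is far too optimistic, which motivates the non-linear forecasting the article argues for.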


Author(s):  
Clive Hamilton

Greenhouse gases emitted anywhere affect people everywhere, and they will do so for a very long time. Progress on an international response to climate change has been bedeviled by ethical, political, and economic fractures, highlighting the severe limitations of the Westphalian state system. Non-state actors have played a crucial role in negotiations; some are “internationalist,” whereas others are “globalist.” Climate change is inseparable from capitalism’s insatiable appetite for growth. The rise of China destabilizes previous understandings of the world, including those of global studies and world-systems analysis. There are signs of a new cosmopolitanism, although securitization of the climate threat works against it. The globality of the natural world calls for a rethinking of global studies.


2021 ◽  
Vol 27 (2) ◽  
Author(s):  
Osuolale A. Festus ◽  
Adewale O. Sunday ◽  
Alese K. Boniface

The introduction of computers has been a huge plus to human life in its entirety because it provides both the business and private worlds with an easy and fast means to process, generate and exchange information. However, the proliferation of networked devices and internet services, and the amount of data being generated, is enormous. This poses a major challenge: the procurement cost of high-performing computers and servers capable of processing and housing such big data. This has prompted the migration of organizational and/or institutional data to the cloud for a high level of productivity at low cost. With high demand for cloud services and resources from users who have migrated to the cloud, cloud computing systems have experienced an increase in outages or failures in real-time cloud computing environments, affecting their reliability and availability. This paper proposes and simulates a system comprising four components (the user, task controller, fault detector, and fault tolerance layers) to mitigate the occurrence of faults by combining checkpointing and replication techniques in a cloud simulator (CloudSim).
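The checkpointing half of the combined scheme can be sketched as follows; the function and its parameters are a hypothetical illustration of the general technique, not the paper's simulated components.

```python
# Sketch of checkpoint/restart: task state is saved every few steps, so a
# failed task resumes from its last checkpoint instead of from zero.
def run_task(total_steps, checkpoint_every, fail_at=None, state=None):
    """Returns (completed_steps_or_None_on_failure, last_checkpoint)."""
    step, last_ckpt = state or (0, 0)
    while step < total_steps:
        if fail_at is not None and step == fail_at:
            return None, last_ckpt          # simulate a node failure
        step += 1
        if step % checkpoint_every == 0:
            last_ckpt = step                # persist progress
    return step, last_ckpt

done, ckpt = run_task(10, checkpoint_every=3, fail_at=7)
print(done, ckpt)   # run failed; work after checkpoint 6 is lost
done, _ = run_task(10, checkpoint_every=3, state=(ckpt, ckpt))
print(done)         # resumed from the checkpoint and finished: 10
```

Replication covers the complementary case: a standby copy of the task takes over immediately, at the cost of running redundant resources.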


2019 ◽  
pp. 812-823
Author(s):  
K. Y. B. Williams ◽  
Jimmy A. G. Griffin

Better security and encryption are necessary with regard to all forms of Cloud Computing, Cloud Infrastructure, and Cloud Storage. Areas hit hardest by security breaches include retail/e-commerce, communications, transportation, and banking. Illustrated within this article are ways that companies such as Walmart, Verizon, Wells Fargo, and BMW would be affected by a lapse in security and/or a breach in their Cloud Infrastructure. Issues that can magnify these breaches and data loss are discussed as they relate to Cloud Structure and Cloud Services, based on known vulnerabilities and lack of product testing. The article concludes with why it is necessary to have Public Policies as part of the governing system for Cloud Computing, Cloud Infrastructure, and Cloud Storage.


2012 ◽  
pp. 661-676
Author(s):  
Xiaoxin Wu ◽  
Huan Chen ◽  
Yaoda Liu ◽  
Wenwu Zhu

Energy saving has been studied widely in both the computing and communication research communities. For handheld devices, energy is becoming an increasingly critical issue because many applications running on handhelds today are computation- or communication-intensive and take a long time to finish. Unlike previous work that proposes computing or communication energy solutions alone, this paper proposes a novel energy-saving approach based on mobile collaborative systems, which jointly considers computing and communication energy costs. The authors use streaming video as the investigated application scenario and propose a multi-hop pipelined wireless collaborative system to decode video frames under a maximum inter-frame time requirement. To finish a computing task under such a requirement, the paper proposes a control policy that dynamically adapts processor frequency and communication transmission rate at the collaborating devices. The authors build a mathematical energy model for collaborative computing systems. Results show that the collaborative system helps save energy and that the transmission rate between collaborators is a key parameter for maximizing energy savings. The energy-saving algorithm is implemented on computing devices, and the experimental results show the same trend.
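The joint computing/communication trade-off can be sketched numerically: lowering CPU frequency cuts dynamic power (often modeled as proportional to f³) but stretches execution time, while the transmission rate fixes the communication time and energy. All constants and figures below are illustrative assumptions, not the paper's actual model parameters.

```python
# Hedged sketch of a joint energy model: pick the lowest-energy CPU
# frequency that still meets the inter-frame deadline, given a fixed
# transmission rate between collaborators.
def total_energy(cycles, freq, bits, rate, k=1e-27, tx_power=0.5):
    compute_time = cycles / freq
    compute_energy = k * freq ** 3 * compute_time   # dynamic power * time
    tx_time = bits / rate
    return compute_energy + tx_power * tx_time, compute_time + tx_time

def cheapest_feasible(cycles, bits, rate, freqs, deadline):
    """Lowest-energy frequency among `freqs` that meets the deadline."""
    best = None
    for f in freqs:
        energy, t = total_energy(cycles, f, bits, rate)
        if t <= deadline and (best is None or energy < best[1]):
            best = (f, energy)
    return best

# 1e9 cycles, 1 Mb to send at 1 Mb/s, 3 s deadline: the slowest feasible
# frequency minimizes total energy.
print(cheapest_feasible(1e9, 1e6, 1e6, [0.5e9, 1e9, 2e9], deadline=3.0))
```

This is why the transmission rate is the key parameter: a faster link leaves more of the deadline for computing, letting the processor run slower and save cubic-law energy.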

