Adaptive Cost-Based Task Scheduling in Cloud Environment

2016 ◽  
Vol 2016 ◽  
pp. 1-9
Author(s):  
Mohammed A. S. Mosleh ◽  
G. Radhamani ◽  
Mohamed A. G. Hazber ◽  
Syed Hamid Hasan

Task execution in cloud computing requires obtaining stored data from remote data centers. Though this storage process reduces the memory constraints of the user's computer, the time deadline is a serious concern. In this paper, Adaptive Cost-based Task Scheduling (ACTS) is proposed to provide data access to the virtual machines (VMs) within the deadline without increasing the cost. ACTS considers the data access completion time when selecting a cost-effective path for accessing the data. To allocate data access paths, the data access completion time is computed from the mean and variance of the network service time and the arrival rate of network input/output requests. Task priorities are then assigned based on the data access time. Finally, the costs of the data paths are analyzed and paths are allocated according to task priority: the minimum-cost path is allocated to low-priority tasks, while fast access paths are allocated to high-priority tasks so as to meet the time deadline. Thus, efficient task scheduling can be achieved using ACTS. Experimental results in terms of execution time, computation cost, communication cost, bandwidth, and CPU utilization show that the proposed algorithm outperforms state-of-the-art methods.
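As a rough illustration of the scheduling idea in this abstract, the sketch below estimates the data-access completion time from the mean and variance of the network service time and the request arrival rate (using a Pollaczek-Khinchine M/G/1 waiting-time model as one plausible choice; the paper's exact formula is not reproduced), then gives each task the cheapest path that still meets its deadline. All names and numbers are illustrative.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    deadline: float          # seconds until the data access must complete
    request_size: int        # number of network I/O requests the task issues

@dataclass
class Path:
    name: str
    cost_per_request: float  # monetary cost of one request over this path
    mean_service: float      # mean network service time per request (s)
    var_service: float       # variance of the network service time (s^2)
    arrival_rate: float      # arrival rate of I/O requests on this path (1/s)

def expected_access_time(path: Path, requests: int) -> float:
    """Estimate completion time from the mean and variance of the service
    time and the arrival rate (Pollaczek-Khinchine M/G/1 waiting time)."""
    rho = path.arrival_rate * path.mean_service
    assert rho < 1.0, "path is saturated"
    second_moment = path.var_service + path.mean_service ** 2
    wait = path.arrival_rate * second_moment / (2.0 * (1.0 - rho))
    return requests * (wait + path.mean_service)

def allocate(tasks: list[Task], paths: list[Path]) -> dict[str, str]:
    """Tight-deadline (high-priority) tasks choose first; every task takes
    the cheapest path whose estimated access time meets its deadline."""
    by_cost = sorted(paths, key=lambda p: p.cost_per_request)
    allocation = {}
    for task in sorted(tasks, key=lambda t: t.deadline):
        feasible = [p for p in by_cost
                    if expected_access_time(p, task.request_size) <= task.deadline]
        chosen = feasible[0] if feasible else min(
            paths, key=lambda p: expected_access_time(p, task.request_size))
        allocation[task.name] = chosen.name
    return allocation

tasks = [Task("urgent", 1.0, 100), Task("batch", 10.0, 100)]
paths = [Path("cheap", 0.01, 0.012, 1e-5, 40.0),
         Path("fast", 0.05, 0.004, 1e-6, 40.0)]
print(allocate(tasks, paths))   # {'urgent': 'fast', 'batch': 'cheap'}
```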

2021 ◽  
Vol 2021 ◽  
pp. 1-13
Author(s):  
Redwan A. Al-dilami ◽  
Ammar T. Zahary ◽  
Adnan Z. Al-Saqqaf

Issues of task scheduling in cloud computing centres are becoming more important, and cost is one of the most important parameters used for scheduling tasks. This study investigates the problem of online task scheduling of identified MapReduce jobs on cloud computing infrastructure. The virtualized cloud computing setup is assumed to comprise machines that host multiple identical virtual machines (VMs), which need to be activated in advance and run continuously, and booting a VM requires a constant setup time. A VM that keeps running even though it is no longer used is considered an idle VM. This study aims to distribute the idle cost of the VMs, rather than the cost of setting them up, among tasks in a fair manner. It extends previous studies that addressed the problems arising when distributing the idle cost and the setup cost of VMs among tasks: it classifies the tasks into three groups (long, mid, and short) and distributes the idle cost among the groups and then among the tasks within each group. The main contribution of this paper is the development of a clairvoyant algorithm that addresses important factors such as the delay and the cost incurred while waiting for a VM to be set up (activated). In addition, when the VMs run continually and some of them become idle, the idle cost is distributed among the current tasks in a fair manner. In comparison with previous studies, the results show that the proposed distribution of the idle cost and the setup cost among tasks improves on the distributions achieved in those studies.
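A minimal sketch of the grouped idle-cost distribution described above; the runtime thresholds and the proportional-to-runtime sharing rule are assumptions for illustration, not the paper's exact scheme.

```python
def classify(tasks):
    """Split tasks into long/mid/short groups by runtime in seconds
    (the 300 s and 60 s cutoffs are hypothetical)."""
    groups = {"long": [], "mid": [], "short": []}
    for t in tasks:
        if t["runtime"] >= 300:
            groups["long"].append(t)
        elif t["runtime"] >= 60:
            groups["mid"].append(t)
        else:
            groups["short"].append(t)
    return groups

def distribute_idle_cost(tasks, idle_cost):
    """Share the idle VM cost among groups in proportion to their total
    runtime, then among a group's tasks in proportion to each runtime."""
    groups = classify(tasks)
    total = sum(t["runtime"] for t in tasks) or 1.0
    shares = {}
    for members in groups.values():
        group_runtime = sum(t["runtime"] for t in members)
        group_share = idle_cost * group_runtime / total
        for t in members:
            shares[t["id"]] = group_share * t["runtime"] / (group_runtime or 1.0)
    return shares

tasks = [{"id": 1, "runtime": 400}, {"id": 2, "runtime": 90}, {"id": 3, "runtime": 10}]
print(distribute_idle_cost(tasks, idle_cost=5.0))  # shares sum to 5.0
```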


2021 ◽  
Vol 2021 ◽  
pp. 1-16
Author(s):  
Shuzhen Wan ◽  
Lixin Qi

An important challenge in cloud computing is scheduling tasks to virtual machines so as to meet cost and time demands while maintaining quality of service (QoS). Allocating tasks to cloud resources is a difficult problem due to the uncertainty of consumers' future requirements and the diversity of providers' resources. Previous studies, whether of modeling or of scheduling approaches, no longer offer a satisfactory solution. In this paper, we establish a resource allocation framework and propose a novel task scheduling algorithm, an improved coral reef optimization (ICRO), to deal with this task scheduling problem. In ICRO, the better-offspring and multicrossover strategies increase the convergence speed and improve the quality of solutions. In addition, a novel load-balance-aware mutation enhances the load balance among virtual machines and adjusts the number of resources provided to users. Experimental results show that, compared with other algorithms, ICRO significantly reduces the makespan and cost of the scheduling while maintaining a better load balance in the system.
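The sketch below illustrates two of the named ingredients in a heavily simplified form: a better-offspring rule (keep a child solution only if it improves the makespan) and a load-balance-aware mutation (move a task from the most-loaded VM to the least-loaded one). It is an evolutionary toy, not the paper's coral reef optimizer.

```python
import random

def makespan(assign, task_len, vm_speed):
    """Makespan of an assignment: the finish time of the busiest VM."""
    loads = [0.0] * len(vm_speed)
    for task, vm in enumerate(assign):
        loads[vm] += task_len[task] / vm_speed[vm]
    return max(loads), loads

def balance_mutation(assign, task_len, vm_speed):
    """Move one random task from the most-loaded VM to the least-loaded VM."""
    _, loads = makespan(assign, task_len, vm_speed)
    src, dst = loads.index(max(loads)), loads.index(min(loads))
    candidates = [i for i, vm in enumerate(assign) if vm == src]
    child = assign[:]
    if candidates:
        child[random.choice(candidates)] = dst
    return child

def icro_like(task_len, vm_speed, iters=500, seed=0):
    random.seed(seed)
    best = [random.randrange(len(vm_speed)) for _ in task_len]
    for _ in range(iters):
        child = balance_mutation(best, task_len, vm_speed)
        # "Better-offspring": keep the child only if it does not worsen makespan.
        if makespan(child, task_len, vm_speed)[0] <= makespan(best, task_len, vm_speed)[0]:
            best = child
    return best

print(icro_like([8, 3, 5, 9, 2, 7], [1.0, 2.0]))
```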


2020 ◽  
Vol 54 (6) ◽  
pp. 1775-1791
Author(s):  
Nazila Aghayi ◽  
Samira Salehpour

The concept of cost efficiency has become tremendously popular in data envelopment analysis (DEA), as it serves to assess a decision-making unit (DMU) in terms of producing outputs at minimum cost. A large variety of precise and imprecise models have been put forward to measure cost efficiency for the DMUs that play a role in constructing the production possibility set; yet there is not an extensive literature on cost efficiency (CE) measurement for sample DMUs (SDMUs). To remedy the shortcomings of current models, we introduce a generalized cost efficiency model that is capable of operating in a fuzzy environment, involving different types of fuzzy numbers, while preserving Farrell's decomposition of cost efficiency. Moreover, to the best of our knowledge, the present paper is the first to measure cost efficiency by using vectors. Finally, a worked example is provided to confirm the applicability of the proposed methods.
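For reference, the Farrell decomposition that the proposed model preserves can be stated in the standard crisp DEA form (the paper's fuzzy, vector-valued extension generalizes this):

```latex
% Minimum cost for DMU $o$ over the DEA production possibility set,
% and Farrell's decomposition of cost efficiency (CE) into technical
% efficiency (TE) and allocative efficiency (AE).
\begin{align*}
C_o^{*} = \min_{x,\,\lambda}\; & c_o^{\top} x \\
\text{s.t.}\; & \sum_{j=1}^{n} \lambda_j x_j \le x, \qquad
                \sum_{j=1}^{n} \lambda_j y_j \ge y_o, \qquad \lambda \ge 0,
\end{align*}
\[
\mathrm{CE}_o = \frac{C_o^{*}}{c_o^{\top} x_o}, \qquad
\mathrm{CE}_o = \mathrm{TE}_o \times \mathrm{AE}_o .
\]
```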


Author(s):  
Shailendra Raghuvanshi ◽  
Priyanka Dubey

Load balancing of non-preemptive independent tasks on virtual machines (VMs) is an important aspect of task scheduling in clouds. Whenever certain VMs are overloaded and the remaining VMs are underloaded with tasks to process, the load has to be balanced to achieve optimal machine utilization. In this paper, we propose an algorithm named honey-bee-behavior-inspired load balancing, which aims to achieve a well-balanced load across virtual machines in order to maximize throughput. The proposed algorithm also balances the priorities of tasks on the machines in such a way that the waiting time of tasks in the queue is minimal. We have compared the proposed algorithm with existing load balancing and scheduling algorithms. The experimental results, obtained with the WorkflowSim simulator in Java, show a significant improvement in average execution time and a reduction in the waiting time of tasks in the queue compared with existing algorithms.
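One way to picture the honey-bee analogy: tasks removed from overloaded VMs behave like forager bees that are redirected to underloaded VMs. The sketch below implements that movement with an assumed capacity threshold and a simple priority order; the paper's algorithm is more elaborate.

```python
def rebalance(vm_loads, vm_tasks, capacity):
    """Move tasks off overloaded VMs onto the least-loaded VM, the way
    forager bees are redirected to under-exploited food sources."""
    moves = []
    for vm in range(len(vm_loads)):
        while vm_loads[vm] > capacity and vm_tasks[vm]:
            # The highest-priority task leaves the overloaded VM first.
            vm_tasks[vm].sort(key=lambda t: t["priority"])
            task = vm_tasks[vm].pop(0)
            dest = min((v for v in range(len(vm_loads)) if v != vm),
                       key=lambda v: vm_loads[v])
            vm_tasks[dest].append(task)
            vm_loads[vm] -= task["load"]
            vm_loads[dest] += task["load"]
            moves.append((task["id"], vm, dest))
    return moves

loads = [12.0, 2.0, 3.0]
tasks = [[{"id": 1, "load": 5.0, "priority": 1},
          {"id": 2, "load": 4.0, "priority": 2}], [], []]
print(rebalance(loads, tasks, capacity=8.0))  # [(1, 0, 1)]
```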


2020 ◽  
Vol 26 (3) ◽  
pp. 685-697
Author(s):  
O.V. Shimko

Subject. The study analyzes generally accepted approaches to assessing the value of companies on the basis of financial statement data of ExxonMobil, Chevron, ConocoPhillips, Occidental Petroleum, Devon Energy, Anadarko Petroleum, EOG Resources, Apache, Marathon Oil, Imperial Oil, Suncor Energy, Husky Energy, Canadian Natural Resources, Royal Dutch Shell, Gazprom, Rosneft, LUKOIL, and others, for 1999–2018. Objectives. The aim is to determine the specifics of using the cost, discounted cash flow (DCF), and comparative approaches to assessing the value of share capital of oil and gas companies. Methods. The study employs methods of statistical analysis and generalization of materials of scientific articles and official annual reports on the results of financial and economic activities of the largest public oil and gas corporations. Results. Based on the results of a comprehensive analysis, I identified advantages and disadvantages of standard approaches to assessing the value of oil and gas producers. Conclusions. The paper describes the pros and cons of these approaches. For instance, the cost approach is acceptable for assessing the minimum value of small companies in the industry. The reliability of the DCF approach is undermined by the difficulty of medium-term oil price forecasts, since the price fluctuations inherent in the industry largely determine companies' net profit and free cash flow. The comparative approach makes it possible to quickly determine the range of possible values of a corporation based on transaction data and the current market situation.
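For reference, the DCF approach referred to above values equity as the sum of discounted forecast free cash flows plus a discounted terminal value; in the standard constant-growth form, forecast errors in oil-price-driven cash flows propagate directly into the estimated value:

```latex
% DCF value for forecast horizon $T$, discount rate $r$, and
% long-run growth rate $g$ (Gordon terminal value).
\[
V_0 = \sum_{t=1}^{T} \frac{FCF_t}{(1+r)^{t}}
      + \frac{TV_T}{(1+r)^{T}},
\qquad
TV_T = \frac{FCF_{T+1}}{r - g}.
\]
```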


2021 ◽  
Vol 2 (3) ◽  
pp. 1-24
Author(s):  
Chih-Kai Huang ◽  
Shan-Hsiang Shen

The next-generation 5G cellular networks are designed to support internet of things (IoT) networks; network components and services are virtualized and run either in virtual machines (VMs) or in containers. Moreover, edge clouds, which are closer to end users, are leveraged to reduce end-to-end latency, especially for IoT applications that require short response times. However, computational resources in edge clouds are limited. To minimize overall service latency, it is crucial to determine carefully which services should be provided in edge clouds so that more mobile or IoT devices can be served locally. In this article, we propose a novel service cache framework called S-Cache, which automatically caches popular services in edge clouds. In addition, we design a new cache replacement policy to maximize the cache hit rate. Our evaluation uses real log files from Google to form two datasets. The proposed cache replacement policy is compared with other policies such as greedy-dual-size-frequency (GDSF) and least-frequently-used (LFU). The experimental results show that the cache hit rate improves by 39% on average, and the average latency under our cache replacement policy decreases by 41% and 38% in the two datasets. This indicates that our approach is superior to existing cache policies and is better suited to multi-access edge computing environments. In the implementation, S-Cache relies on OpenStack to clone services to edge clouds and to direct network traffic. We also evaluate the cost of cloning a service to an edge cloud; the cloning cost of various real applications is studied experimentally under the presented framework in different environments.
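For context, the sketch below implements a policy in the GDSF family that the article compares against: cached services are scored by access frequency and size plus an aging clock, and the lowest-scoring entries are evicted first. It is a generic GDSF-style baseline, not S-Cache's own replacement policy.

```python
class GDSFCache:
    def __init__(self, capacity):
        self.capacity = capacity   # total size budget of the edge cache
        self.clock = 0.0           # aging term L in GDSF
        self.items = {}            # key -> (priority, size, frequency)

    def _priority(self, freq, size):
        # GDSF key: L + frequency * cost / size, with unit miss cost assumed.
        return self.clock + freq / size

    def access(self, key, size):
        if key in self.items:
            _, s, f = self.items[key]
            self.items[key] = (self._priority(f + 1, s), s, f + 1)
            return True            # cache hit
        # Evict lowest-priority services until the new one fits (a service
        # larger than the whole cache is inserted anyway in this sketch).
        while self.items and sum(s for _, s, _ in self.items.values()) + size > self.capacity:
            victim = min(self.items, key=lambda k: self.items[k][0])
            self.clock = self.items[victim][0]   # age the clock on eviction
            del self.items[victim]
        self.items[key] = (self._priority(1, size), size, 1)
        return False               # cache miss

cache = GDSFCache(capacity=10)
print([cache.access(k, s) for k, s in [("a", 4), ("b", 4), ("a", 4), ("c", 6)]])
# [False, False, True, False]: "b" is evicted to make room for "c"
```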


2021 ◽  
Vol 18 (3) ◽  
pp. 1-22
Author(s):  
Michael Stokes ◽  
David Whalley ◽  
Soner Onder

While data filter caches (DFCs) have been shown to be effective at reducing data access energy, they have not been adopted in processors due to the associated performance penalty caused by high DFC miss rates. In this article, we present a design that both decreases the DFC miss rate and completely eliminates the DFC performance penalty, even for a level-one data cache (L1 DC) with a single-cycle access time. First, we show that a DFC that lazily fills each word in a DFC line from the L1 DC only when the word is referenced is more energy-efficient than eagerly filling the entire DFC line. For a 512B DFC, we are able to eliminate loads of words into the DFC that are never referenced before being evicted, which occurred for about 75% of the words in 32B lines. Second, we demonstrate that a lazily word-filled DFC line can effectively share and pack data words from multiple L1 DC lines to lower the DFC miss rate. For a 512B DFC, we completely avoid accessing the L1 DC for loads about 23% of the time and avoid a fully associative L1 DC access for loads 50% of the time, where the DFC only requires about 2.5% of the size of the L1 DC. Finally, we present a method that completely eliminates the DFC performance penalty by speculatively performing DFC tag checks early and only accessing DFC data when a hit is guaranteed. For a 512B DFC, we improve data access energy usage for the DTLB and L1 DC by 33% with no performance degradation.
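The lazy word-fill idea can be modeled with a valid bit per word: a word is fetched from the L1 DC only on its first reference, so words that are never referenced are never loaded. The sketch below is a behavioral model with assumed sizes, not the hardware design.

```python
WORDS_PER_LINE = 8   # e.g., a 32B line holding 4B words

class DFCLine:
    def __init__(self, tag):
        self.tag = tag
        self.valid = [False] * WORDS_PER_LINE  # one valid bit per word
        self.words = [0] * WORDS_PER_LINE

    def load_word(self, word_index, l1dc_read):
        """Return the word, filling it lazily from the L1 DC if needed."""
        if not self.valid[word_index]:
            # An eager fill would fetch all 8 words here; since most are
            # evicted unreferenced, only the requested word is filled.
            self.words[word_index] = l1dc_read(self.tag, word_index)
            self.valid[word_index] = True
        return self.words[word_index]

line = DFCLine(tag=0x1a2b)
print(line.load_word(3, lambda tag, i: tag ^ i))  # first access fills from L1 DC
print(line.load_word(3, lambda tag, i: 0))        # second access hits the DFC
```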


Author(s):  
José-Manuel Giménez-Gómez ◽  
Josep E. Peris ◽  
Begoña Subiza

2012 ◽  
Vol 472-475 ◽  
pp. 3273-3276
Author(s):  
Qing Ying Zhang ◽  
Ying Chi ◽  
Yu Liu ◽  
Qian Shi

The main target of supply chain management is to effectively control the inventory of each node enterprise at minimum cost. In this paper, control strategies and methods for inventory based on supply chain management are put forward, which are significant for reducing the cost of the supply chain and improving the overall benefits of the whole chain.
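As one classical instance of the minimum-cost inventory control the paper discusses, the economic order quantity (EOQ) balances ordering and holding costs (a textbook result, not the paper's specific strategy):

```latex
% Total cost for order quantity $Q$ given demand rate $D$, fixed
% ordering cost $K$, and per-unit holding cost $h$, and its minimizer.
\[
TC(Q) = \frac{D}{Q}\,K + \frac{Q}{2}\,h,
\qquad
Q^{*} = \sqrt{\frac{2DK}{h}} .
\]
```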


1979 ◽  
Vol 6 (1) ◽  
pp. 120-128
Author(s):  
Craig J. Miller ◽  
Juarez Accioly

Precast, prestressed segmental box-girder bridges are now accepted as an economical alternative for spans over 150 ft (46 m). Decisions about cross-sectional dimensions made during preliminary design can have a substantial influence on the final cost of the bridge. To help the designer obtain an economical starting point for a final design, a program was written to determine section dimensions and midspan and pier prestressing steel areas to give minimum cost. Since a preliminary design is obtained, the analysis techniques and design criteria have been simplified to reduce computation. The design produced by the program will satisfy AASHTO specification requirements and the recommendations of the PCI Bridge Committee. The optimization algorithm used is the generalized reduced gradient technique. To demonstrate the program capabilities, three example problems are discussed. The results indicate that optimum span-depth ratios are approximately 24 for the cost ratios used. The cost of the optimum design does not seem to be too sensitive to the ratio of concrete cost to prestressing steel cost.
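To make the optimization setup concrete, the toy model below minimizes a combined concrete-plus-steel cost over girder depth and prestressing steel area subject to a crude capacity constraint. SciPy's SLSQP solver stands in for the paper's generalized reduced gradient method, and every coefficient is illustrative rather than an AASHTO value.

```python
from scipy.optimize import minimize

SPAN = 46.0                        # m, roughly the 150 ft threshold above
C_CONC, C_STEEL = 120.0, 2400.0    # illustrative unit costs

def cost(x):
    depth, a_steel = x
    conc_volume = 0.5 * depth * SPAN           # crude box-girder volume model
    return C_CONC * conc_volume + C_STEEL * a_steel * SPAN

def capacity(x):
    depth, a_steel = x
    demand = 0.04 * SPAN ** 2                  # toy midspan moment demand
    return depth * a_steel * 900.0 - demand    # >= 0 when section is adequate

res = minimize(cost, x0=[2.0, 0.05], method="SLSQP",
               bounds=[(SPAN / 30, SPAN / 15), (1e-4, 0.1)],
               constraints=[{"type": "ineq", "fun": capacity}])
print(res.x, res.fun)  # optimum depth implies a span-depth ratio in the 20s
```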

