An Efficient Multi-Core Resource Allocation using the Multi-Level Objective Functions in Cloud Environment

2020 ◽  
Vol 13 (5) ◽  
pp. 957-964
Author(s):  
Siva Rama Krishna ◽  
Mohammed Ali Hussain

Background: In recent years, computational memory and energy conservation have become major problems in the cloud computing environment due to the increase in data size and computing resources. Moreover, different cloud providers offer different cloud services and resources, each supporting only a limited number of user applications. Objective: The main objective of this work is to design and implement a cloud resource allocation and resource scheduling model in the cloud environment. Methods: In the proposed model, a novel cloud-server resource management technique is applied in a real-time cloud environment to minimize cost and time. In this model, different types of cloud resources and their services are scheduled using multi-level objective constraint programming. The proposed cloud server-based resource allocation model uses optimization functions to minimize resource allocation time and cost. Results: Experimental results showed that the proposed model achieves lower resource allocation time and cost than the existing resource allocation models. Conclusion: This cloud service and resource optimization model is efficiently implemented and tested on real-time cloud instances with different types of services and resource sets.
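The abstract above does not give its objective functions, but the idea of scoring candidate resources against multiple objectives (here, allocation time and cost) can be sketched as a weighted combination. The attribute names, weights, and candidate values below are illustrative assumptions, not the paper's model.

```python
# Hypothetical multi-level objective score: combine allocation time and
# cost into one value and pick the resource that minimizes it.
def allocation_score(resource, w_time=0.5, w_cost=0.5):
    # Lower is better; weights trade time against cost.
    return w_time * resource["time"] + w_cost * resource["cost"]

def allocate(resources):
    # Choose the candidate resource with the smallest combined objective.
    return min(resources, key=allocation_score)

candidates = [
    {"id": "vm-1", "time": 4.0, "cost": 0.20},
    {"id": "vm-2", "time": 2.5, "cost": 0.35},
    {"id": "vm-3", "time": 3.0, "cost": 0.25},
]
best = allocate(candidates)
```

With equal weights, `vm-2` wins because its much shorter allocation time outweighs its slightly higher cost; adjusting the weights shifts the trade-off.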

2015 ◽  
Vol 2015 ◽  
pp. 1-8
Author(s):  
Zhe Zhang ◽  
Ying Li

Resource allocation is one of the most important research topics in servers. In the cloud environment, there are massive hardware resources of different kinds, and many kinds of services usually run on virtual machines of the cloud server. In addition, the cloud environment is commercialized, so economic factors should also be considered. To deal with the commercialization and virtualization of the cloud environment, we propose a user-utility-oriented queuing model for task scheduling. Firstly, we model task scheduling in the cloud environment as an M/M/1 queuing system. Secondly, we classify utility into time utility and cost utility and build a linear programming model to maximize the total utility of both. Finally, we propose a utility-oriented algorithm to maximize the total utility. Extensive experiments validate the effectiveness of the proposed model.
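The standard closed-form quantities of the M/M/1 system the abstract builds on can be sketched directly; the arrival rate and service rate below are illustrative values, not figures from the paper.

```python
# Classic M/M/1 steady-state metrics for a single-server queue with
# Poisson arrivals (rate lam) and exponential service (rate mu).
def mm1_metrics(lam, mu):
    assert lam < mu, "system must be stable (lambda < mu)"
    rho = lam / mu       # server utilization
    L = rho / (1 - rho)  # mean number of tasks in the system
    W = 1 / (mu - lam)   # mean time a task spends in the system
    return rho, L, W

rho, L, W = mm1_metrics(lam=4.0, mu=5.0)
```

For these rates the server is 80% utilized, holds 4 tasks on average, and a task spends a mean of 1.0 time unit in the system; a time-utility term in the paper's sense would penalize tasks as W grows.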


2020 ◽  
Vol 26 (8) ◽  
pp. 83-99
Author(s):  
Sarah Haider Abdulredah ◽  
Dheyaa Jasim Kadhim

A Tonido cloud server provides a private cloud storage solution and synchronizes customers and employees with the required cloud services across the enterprise. Generally, users access cloud services over an Internet connection, and they may encounter problems reaching these services due to a weak connection or heavy load, especially with live video streaming applications over the cloud. In this work, flexible and inexpensive access methods are proposed and implemented for real-time applications, enabling users to access cloud services locally and regionally. Practically, to simulate our network connection, we use a Raspberry Pi 3 Model B+ as a wireless LAN (WLAN) router that lets users reach the cloud services through different access approaches, such as wireless and wireline connections. As a case study of a real-time application over the cloud server, live video streaming is performed using an IP webcam and the IVIDEON cloud, where the streaming video can be accessed via the cloud server at any time by different users over the proposed practical connections. Practical experiments showed that access to real-time cloud services via wireline and wireless connections is improved by using the Tonido cloud server's facilities.


Author(s):  
Abdulelah Alwabel ◽  
Robert John Walters ◽  
Gary B. Wills

Cloud computing is a new paradigm that promises to move IT a step further towards utility computing, in which computing services are delivered as a utility service. Traditionally, Cloud employs dedicated resources located in one or more data centres in order to provide services to clients. Desktop Cloud computing is a new type of Cloud computing that aims at providing Cloud capabilities at low or no cost. Desktop Clouds harness non-dedicated and idle resources in order to provide Cloud services. However, the nature of such resources can be problematic because they are prone to failure at any time without prior notice. This research focuses on the resource allocation mechanism in Desktop Clouds. The contributions of this chapter are threefold. Firstly, it defines and explains Desktop Clouds by comparing them with both Traditional Clouds and Desktop Grids. Secondly, it discusses various research issues in Desktop Clouds. Thirdly, it proposes a resource allocation model that is able to handle node failures.


2018 ◽  
Vol 7 (3.12) ◽  
pp. 740
Author(s):  
S Kumaresan ◽  
Sumithra Devi.K.A

In the software technology stack, cloud services provide easy-coupling implementations that enhance data encapsulation across multi-platform data exchanges. This work introduces a High Availability Architecture for the cloud environment that covers load balancing, failover, and high-availability resources. To achieve these features, a framework architecture called the Dynamic High Availability Architecture Framework for SOA Computing is identified, which raises the standard of cloud services with easily adaptable security. Although cloud services support loose coupling and isolation of business logic, when a cloud service provider launches a new web service on the fly, the new service is not notified to the client in a real-time scenario. To overcome this situation, we introduce the Generic Architecture Framework in Cloud Computing (GHAFC), which supports data exchanges between producer and consumer on the fly in real-time scenarios.


2018 ◽  
Vol 2018 ◽  
pp. 1-8 ◽  
Author(s):  
Yanyan Wang ◽  
Baiqing Sun

Efficiency and fairness are two important goals of disaster rescue. However, existing models usually consider only the efficiency or only the fairness of resource allocation. To address this, a multiobjective emergency resource allocation model that balances efficiency and fairness is proposed. The objective of the proposed model is to minimize both the total cost of allocating resources and the total losses caused by insufficient resources. Particle swarm optimization is then applied to solve the model. Finally, a computational example based on the emergency relief resource allocation after the Ya'an earthquake in China verifies the applicability of the proposed model.
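A toy version of the two-term objective (allocation cost plus losses from unmet demand) solved by a bare-bones particle swarm can be sketched as below. The demands, unit costs, penalty, supply, and PSO coefficients are all invented illustrative values, not the paper's data or tuned parameters.

```python
import random

demand = [30.0, 50.0, 20.0]  # resource demand at three sites (assumed)
unit_cost = [1.0, 1.5, 2.0]  # cost per unit delivered to each site (assumed)
penalty = 10.0               # loss per unit of unmet demand (assumed)
supply = 80.0                # total available resources (assumed)

def objective(x):
    # x[i] = units allocated to site i; clamp negatives, scale to supply.
    x = [max(0.0, xi) for xi in x]
    scale = min(1.0, supply / (sum(x) or 1.0))
    x = [xi * scale for xi in x]
    cost = sum(c * xi for c, xi in zip(unit_cost, x))
    shortage = sum(max(0.0, d - xi) for d, xi in zip(demand, x))
    return cost + penalty * shortage  # allocation cost + shortage loss

def pso(n_particles=20, iters=200):
    random.seed(0)  # deterministic for reproducibility
    dim = len(demand)
    pos = [[random.uniform(0, supply) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=objective)
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # inertia + pull toward personal best + pull toward global best
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if objective(pos[i]) < objective(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest, key=objective)
    return gbest

best = pso()
```

Allocating nothing incurs pure shortage loss (here 1000.0), so the swarm's best solution should land far below that baseline; a real model would add the fairness term the abstract describes.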


Author(s):  
Fereshteh Hoseini ◽  
Mostafa Ghobaei Arani ◽  
Alireza Taghizadeh

<p class="Abstract">With the increasing use of cloud services and the number of requests to process tasks with minimum time and cost, resource allocation and scheduling, especially in real-time applications, become more challenging. Resource scheduling is one of the most important scheduling problems in the area of NP-hard problems. In this paper, we propose an efficient algorithm to schedule real-time cloud services while considering resource constraints. The simulation results show that the proposed algorithm shortens the processing time of tasks and decreases the number of canceled tasks.</p>
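One common tactic consistent with the goals above (short processing time, few canceled tasks) is earliest-deadline-first ordering with cancellation of infeasible tasks. This is a generic sketch under that assumption, not the paper's algorithm, and the task data are invented.

```python
# Earliest-deadline-first on a single resource: run tasks in deadline
# order and cancel any task that can no longer finish by its deadline.
def edf_schedule(tasks):
    # tasks: list of (name, processing_time, deadline)
    done, canceled, clock = [], [], 0.0
    for name, proc, deadline in sorted(tasks, key=lambda t: t[2]):
        if clock + proc <= deadline:
            clock += proc       # task fits; advance the clock
            done.append(name)
        else:
            canceled.append(name)  # cannot meet its deadline
    return done, canceled

tasks = [("t1", 2.0, 3.0), ("t2", 3.0, 4.0), ("t3", 1.0, 6.0)]
done, canceled = edf_schedule(tasks)
```

Here `t2` is canceled because after `t1` finishes at time 2 it cannot complete its 3 units of work by its deadline of 4, while `t3` still fits; a cloud scheduler would apply the same test per resource.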


2018 ◽  
Vol 17 ◽  
pp. 03016
Author(s):  
Qilin Li ◽  
Chuanliang Jia ◽  
Jiu Su

On a global scale, the occurrence of different types of emergencies has had a tremendous impact on economies and people's lives, so optimizing emergency human resource allocation is increasingly important. This paper gives full consideration to the control targets of each fire rescue point and the demands of both demand points and potential demand points. We build an emergency human resource allocation model and optimize it through collaborative optimization. Finally, a case analysis is carried out to verify the feasibility of the model. The model better simulates reality and can serve as a reference for government officials in emergency cases.


2020 ◽  
Vol 9 (1) ◽  
pp. 2064-2071

The important goal of cloud computing is to offer a larger data center that satisfies the storage requirements of the customer. The entire data cannot be saved on a single server, so a cloud provider (CP) maintains a cluster of servers to fulfill cloud requests from various real-time applications, and the data is fragmented across multiple servers to maintain availability. Since a customer's data request needs data from various servers, there is a possibility of reaching deadlock. In this paper, an enhanced queuing model is proposed in which cloud requests (CRs) are received in a queue for resource allocation. A session is created for the CR, with the CP allocating resources from the cloud servers. This makes it possible to constrain the number of CRs holding a session with the CP and so avoid resource suppression. The Wait-for-Resource algorithm allocates server resources to a CR within a session without deadlock, which makes it possible to forecast resource requirements before the resource allocation phase of a session. This keeps dynamic resource allocation efficient and free of deadlock. The results obtained evaluate the proposed model and help the CP dynamically choose the number of server nodes necessary to achieve better performance for a real-time application.
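A hedged sketch in the spirit of the abstract's idea: each cloud request declares its full multi-server resource needs up front, and an admission gate grants them atomically only when every server can satisfy its share, so no session ever holds a partial allocation (eliminating hold-and-wait, a necessary condition for deadlock). The class name, server capacities, and request shapes are assumptions, not the paper's algorithm.

```python
import threading

class Allocator:
    def __init__(self, capacity):
        self.free = dict(capacity)          # server name -> free units
        self.lock = threading.Condition()   # guards self.free

    def acquire(self, request):
        # Block until the WHOLE request fits, then grant it atomically,
        # so a session never waits while holding partial resources.
        with self.lock:
            while any(self.free[s] < n for s, n in request.items()):
                self.lock.wait()
            for s, n in request.items():
                self.free[s] -= n

    def release(self, request):
        # Return the resources and wake any sessions waiting to fit.
        with self.lock:
            for s, n in request.items():
                self.free[s] += n
            self.lock.notify_all()

alloc = Allocator({"server-a": 4, "server-b": 2})
alloc.acquire({"server-a": 2, "server-b": 1})  # session gets both shares at once
alloc.release({"server-a": 2, "server-b": 1})
```

Because the all-or-nothing check runs under one lock, two sessions can never each grab half of what the other needs; the cost is that a large request may wait longer for everything to be free at once.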


Author(s):  
Elnaz Peyghaleh ◽  
Tarek Alkhrdaji

Abstract The history of earthquake damage has illustrated the high vulnerability and risks associated with the failure of water transfer and distribution systems. Adequate mitigation plans to reduce such seismic risks are required for sustainable development. The first step in developing a mitigation plan is prioritizing the limited available budget toward the most critical mitigation measures. This paper presents an optimization model that can be utilized for financial resource allocation toward earthquake risk mitigation measures for water pipelines. It presents a framework that decision-makers (authorities, stakeholders, owners, and contractors) can use to structure a budget allocation strategy for seismic risk mitigation measures such as repair, retrofit, and/or replacement of steel and concrete pipelines. A stochastic model is presented to establish optimal mitigation measures by minimizing repair and retrofit costs, post-earthquake replacement costs, and especially large earthquake-induced losses. To account for earthquake-induced losses on pipelines, the indirect loss due to water shortage and business interruption in water-dependent industries is also considered. The model is applied to a pilot area to demonstrate the practical application of the proposed model. A pipeline exposure database, built-environment occupancy types, pipeline vulnerability functions, and regional seismic hazard characteristics are used to calculate probabilistic seismic risk for the pilot area. The Global Earthquake Model's (GEM) OpenQuake software is used to run the seismic risk analyses. Event-based seismic hazard and risk analyses are used to develop hazard curves and maps in terms of peak ground velocity (PGV) for the study area. The results of this study show how seismic losses and mitigation costs vary for pipelines within the study area based on their location and the type of repair.
Performing seismic risk analyses with the proposed model provides a valuable tool for determining the risk associated with a network of pipelines in a region and the costs of repair at an acceptable risk level. It can be used for decision making and to establish the types and budgets of the most critical repairs for a specific region.
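The budget-prioritization step the abstract describes can be illustrated with a simple greedy heuristic: rank candidate mitigation measures by expected loss reduction per unit cost and fund them until the budget runs out. The measure names, costs, loss reductions, and budget below are invented examples, and the paper's stochastic model is more sophisticated than this sketch.

```python
# Greedy budget prioritization: fund the measures with the best
# loss-reduction-per-cost ratio until the budget is exhausted.
measures = [
    {"name": "repair-A",   "cost": 40.0, "loss_reduction": 120.0},
    {"name": "retrofit-B", "cost": 70.0, "loss_reduction": 150.0},
    {"name": "replace-C",  "cost": 90.0, "loss_reduction": 160.0},
]

def prioritize(measures, budget):
    chosen, spent = [], 0.0
    ranked = sorted(measures,
                    key=lambda m: m["loss_reduction"] / m["cost"],
                    reverse=True)
    for m in ranked:
        if spent + m["cost"] <= budget:
            chosen.append(m["name"])
            spent += m["cost"]
    return chosen

selected = prioritize(measures, budget=120.0)  # → ['repair-A', 'retrofit-B']
```

With a budget of 120, the repair (ratio 3.0) and retrofit (ratio ~2.14) are funded while the replacement is deferred; a full optimization would also weigh the probabilistic losses the paper computes with OpenQuake.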

