workload allocation
Recently Published Documents


TOTAL DOCUMENTS: 115 (five years: 41)
H-INDEX: 13 (five years: 3)

2021 ◽ Vol 2021 ◽ pp. 1-13
Author(s): Xinyue Hu ◽ Xiaoke Tang ◽ Yantao Yu ◽ Sihai Qiu ◽ Shiyong Chen

The introduction of mobile edge computing (MEC) into vehicular networks is a promising paradigm for improving vehicular services by offloading computation-intensive tasks to the MEC server. To avoid overloading the MEC server, the vast idle resources of parked vehicles (PVs) can be exploited to relieve the computational burden on the server. However, unbalanced load allocation may increase latency and energy consumption, and existing works address this only by allocating workload between the MEC server and a single parked vehicle. In this paper, a multiple parked vehicle-assisted edge computing (MPVEC) paradigm is first introduced. A joint load-balancing and offloading optimization problem is formulated to minimize the system cost under a delay constraint. To accomplish the offloading tasks, a multiple offloading node selection algorithm is proposed to select several appropriate PVs to collaborate with the MEC server on computing tasks. Furthermore, a workload allocation strategy based on a dynamic game is presented to optimize system performance while jointly considering workload balance among the computing nodes. Numerical results indicate that the offloading strategy in the MPVEC scheme significantly reduces the system cost while achieving load balancing.
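
For illustration only, the following Python sketch shows one way a dynamic-game-style workload split between an MEC server and several parked vehicles could be iterated until the nodes' marginal costs balance; the cost model, node parameters, and step size are assumptions, not the paper's formulation.

    # Minimal sketch (assumed cost model): repeatedly shift load from the node
    # with the highest marginal cost to the one with the lowest, approximating
    # a best-response workload split among the MEC server and parked vehicles.
    def split_workload(total_load, capacities, energy_prices,
                       alpha=0.5, iters=200, step=0.02):
        n = len(capacities)
        # start from a capacity-proportional allocation
        share = [total_load * c / sum(capacities) for c in capacities]

        def marginal_cost(i):
            # cost_i(w) = alpha * w^2 / f_i + (1 - alpha) * e_i * w  (toy model)
            return 2 * alpha * share[i] / capacities[i] + (1 - alpha) * energy_prices[i]

        for _ in range(iters):
            costs = [marginal_cost(i) for i in range(n)]
            hi, lo = costs.index(max(costs)), costs.index(min(costs))
            if hi == lo:
                break
            delta = min(step * total_load, share[hi])
            share[hi] -= delta
            share[lo] += delta
        return share

    # Example: one MEC server plus three parked vehicles (all numbers invented)
    print(split_workload(10.0, [4.0, 1.5, 1.2, 1.0], [0.8, 0.3, 0.3, 0.35]))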


Electronics ◽ 2021 ◽ Vol 10 (15) ◽ pp. 1750
Author(s): Manho Kim ◽ Sung-Ho Kim ◽ Hyuk-Jae Lee ◽ Chae-Eun Rhee

Since the advent of computers, computing performance has steadily increased. Recent technologies are largely built on massive data, and the development of artificial intelligence is accelerating this trend. Accordingly, various studies aim to increase the performance of computation and data access while reducing energy consumption. In-memory computing (IMC) and in-storage computing (ISC) are currently the most actively studied architectures for these challenges. Since IMC performs operations inside memory, it has the potential to overcome the memory bandwidth limit. ISC can reduce energy by using a low-power processor inside storage, avoiding an expensive I/O interface. To integrate the host CPU, IMC, and ISC harmoniously, an appropriate workload allocation that reflects the characteristics of the target application is required. In this paper, energy and processing speed are evaluated according to the workload allocation and system conditions, and a proof-of-concept prototype of the integrated architecture is implemented. Simulation results show that IMC improves performance by 4.4 times and reduces total energy by 4.6 times over the baseline host CPU, and that ISC contributes significantly to energy reduction.
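
As a rough illustration of the kind of allocation decision described above, the sketch below scores a kernel on the host CPU, IMC, and ISC with a toy time/energy cost model; the throughput and energy numbers are invented and not taken from the paper.

    from dataclasses import dataclass

    @dataclass
    class Engine:
        name: str
        throughput: float        # elements processed per second (assumed)
        energy_per_elem: float   # joules per element, incl. data movement (assumed)

    def pick_engine(engines, n_elems, energy_weight=0.5):
        # choose the engine with the lowest weighted time/energy cost
        def cost(e):
            time = n_elems / e.throughput
            energy = n_elems * e.energy_per_elem
            return energy_weight * energy + (1.0 - energy_weight) * time
        return min(engines, key=cost)

    engines = [
        Engine("host_cpu", throughput=1e9, energy_per_elem=2e-9),
        Engine("imc",      throughput=4e9, energy_per_elem=0.5e-9),
        Engine("isc",      throughput=5e8, energy_per_elem=0.3e-9),
    ]
    print("run on:", pick_engine(engines, n_elems=1_000_000_000).name)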


2021 ◽ pp. 1-13
Author(s): T. Renugadevi ◽ K. Geetha

Management of IT services is rapidly adapting to the cloud computing environment because of its optimized service delivery models. Geo-distributed cloud data centers form the backbone infrastructure for cloud service delivery, but their rapidly growing energy consumption is the major problem to be addressed, and cloud providers are keen to find solutions for energy management and carbon emission. In this work, a multi-cloud environment is modeled as geographically distributed data centers, each with location-dependent solar power generation, electricity price, carbon emission, and carbon tax. The energy behaviour of a workload allocation algorithm depends strongly on the nature of the application considered, so task deadlines and brownout information are used to introduce variation in task types. A renewable-energy-aware workload allocation algorithm that adapts to the task nature is proposed, together with a migration policy, and its impact on carbon emission, total energy cost, and brown and renewable power consumption is explored.
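
To make the idea concrete, here is a minimal greedy sketch of renewable-aware placement across geo-distributed data centers; the cost terms (solar budget, electricity price, carbon tax) follow the abstract loosely, and the field names and greedy policy itself are assumptions rather than the authors' algorithm.

    def place_tasks(tasks, centers):
        """tasks:   list of (task_id, demand, allowed_centers) tuples
           centers: dict name -> {"solar": ..., "price": ..., "carbon": ...,
                                  "tax": ..., "cap": ...}   (assumed schema)"""
        placement = {}
        for task_id, demand, allowed in tasks:
            best, best_cost = None, float("inf")
            for name in allowed:                # centers that can meet the deadline
                c = centers[name]
                if c["cap"] < demand:
                    continue
                brown = max(0.0, demand - c["solar"])   # load not covered by solar
                cost = brown * (c["price"] + c["carbon"] * c["tax"])
                if cost < best_cost:
                    best, best_cost = name, cost
            if best is None:
                continue                        # defer; a real policy would queue or migrate
            placement[task_id] = best
            centers[best]["cap"] -= demand
            centers[best]["solar"] = max(0.0, centers[best]["solar"] - demand)
        return placement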


2021 ◽ Vol 2021 ◽ pp. 1-13
Author(s): Zhenquan Qin ◽ Zanping Cheng ◽ Chuan Lin ◽ Zhaoyi Lu ◽ Lei Wang

By deploying edge servers at the network edge, a mobile edge computing network strengthens real-time processing near end devices and relieves the heavy load on the core network. Given the limited computing and storage resources on the edge server side, the workload allocation among edge servers for each Internet of Things (IoT) application affects the response time of that application’s requests. Hence, when the access devices of an edge server are densely deployed, workload allocation becomes a key factor affecting the quality of user experience (QoE). To address this, this paper proposes an edge workload allocation scheme that uses an application prediction (AP) algorithm to minimize response delay; the underlying problem is shown to be NP-hard. First, in the application prediction model, a long short-term memory (LSTM) method is used to predict the tasks of future access devices. Second, based on the prediction results, the edge workload allocation is decomposed into two subproblems, task assignment and resource allocation, which can be solved in linear time using historical execution data. Simulation results show that the proposed AP algorithm effectively reduces device response delay and the average completion time of the task sequence, approaching the theoretically optimal allocation.
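
For the prediction stage only, a small PyTorch sketch of an LSTM that forecasts the next slot's per-application task load from a history window is given below; the layer sizes, window length, and feature layout are illustrative assumptions, not the paper's model.

    import torch
    import torch.nn as nn

    class LoadPredictor(nn.Module):
        def __init__(self, n_apps, hidden=64):
            super().__init__()
            self.lstm = nn.LSTM(input_size=n_apps, hidden_size=hidden,
                                batch_first=True)
            self.head = nn.Linear(hidden, n_apps)

        def forward(self, history):              # history: (batch, window, n_apps)
            out, _ = self.lstm(history)
            return self.head(out[:, -1, :])      # predicted load for the next slot

    model = LoadPredictor(n_apps=8)
    window = torch.rand(32, 24, 8)               # 32 samples, 24-slot history
    print(model(window).shape)                   # torch.Size([32, 8])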


2021
Author(s): Soma Prathibha ◽ B. Latha ◽ V. Vijaykumar

Abstract: Many scientific problems across domains, from modelling the sky as mosaics to genome sequencing in biological applications, are modelled as workflows with large numbers of interconnected tasks. Particle Swarm Optimization (PSO)-based metaheuristics are widely used for such optimization problems because they are simple to implement and can quickly produce optimal or near-optimal solutions through their learning capabilities. Although many works on workflow scheduling are reported in the literature, most focus on reducing makespan alone, and only a few consider energy efficiency. Constraints for dynamic workload allocation are not introduced in existing systems. Moreover, the optimization techniques used in existing systems improve QoS with little scalability in the cloud environment, since they consider only the Infrastructure-as-a-Service model. In this work, a new algorithm is proposed based on a new multi-objective optimization model called F-NSPSO, built on the NSPSO metaheuristic, which allows the user to choose a suitable configuration dynamically. The proposed system achieves an average energy reduction of more than 15% over simple DVFS for all types of workflow applications with different dimensions. Similarly, compared with NSPSO, F-NSPSO achieves an energy reduction of at least 10% for all three types of workflow applications. Compared with the NSPSO algorithm, F-NSPSO improves average makespan by at least 13%, 12%, and 21% for the Montage, CyberShake, and Epigenomics workflow applications, respectively.
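
For readers unfamiliar with the underlying metaheuristic, the sketch below shows the basic particle update loop that NSPSO-style schedulers build on, applied to a stand-in objective; the non-dominated sorting step and the workflow-specific encoding used by F-NSPSO are omitted, and all parameter values are assumptions.

    import random

    def pso(objective, dim, n_particles=30, iters=200,
            w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0):
        # random initial positions, zero velocities
        pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
        vel = [[0.0] * dim for _ in range(n_particles)]
        pbest = [p[:] for p in pos]
        pbest_val = [objective(p) for p in pos]
        g = pbest_val.index(min(pbest_val))
        gbest, gbest_val = pbest[g][:], pbest_val[g]

        for _ in range(iters):
            for i in range(n_particles):
                for d in range(dim):
                    r1, r2 = random.random(), random.random()
                    # classic velocity update: inertia + cognitive + social terms
                    vel[i][d] = (w * vel[i][d]
                                 + c1 * r1 * (pbest[i][d] - pos[i][d])
                                 + c2 * r2 * (gbest[d] - pos[i][d]))
                    pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
                val = objective(pos[i])
                if val < pbest_val[i]:
                    pbest[i], pbest_val[i] = pos[i][:], val
                    if val < gbest_val:
                        gbest, gbest_val = pos[i][:], val
        return gbest, gbest_val

    # toy objective: a sphere function standing in for a makespan/energy cost
    print(pso(lambda x: sum(v * v for v in x), dim=5))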

