Job Scheduling
Recently Published Documents

TOTAL DOCUMENTS: 1558 (FIVE YEARS: 465)
H-INDEX: 41 (FIVE YEARS: 10)

Author(s): Hongwen Xia

When identifying enterprise database information, an operation and maintenance measurement platform based on the bilinear method can produce ringing or overshoot effects, yielding poor data-monitoring performance and low measurement accuracy. To address this problem, this paper proposes a method for building an enterprise-level database operation and maintenance measurement platform based on a bidirectional coupling algorithm. The monitoring platform connects to the monitored database through a remote database link, uses Oracle database job scheduling to collect monitoring indicators from the monitored database, and comprehensively evaluates database memory performance through a Memory Perform Index (MPI). The platform uses a semantic capture layer and a correlation analysis layer to distinguish user behavior and analyze user-experience satisfaction, realizing operational measurement of an enterprise-level database. Experimental results show that the measurement platform built with this method achieves high throughput, low memory occupancy, high measurement accuracy, and a good user experience.
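The MPI described here is a composite metric over monitored memory statistics. A minimal sketch of how such an index might be computed (the metric names and equal weighting are illustrative assumptions, not taken from the paper):

```python
# Hypothetical sketch: combine monitored memory hit ratios into a single
# Memory Perform Index (MPI). Metric names and weights are illustrative.
def memory_perform_index(metrics, weights=None):
    """Weighted average of per-metric ratios, each in [0, 1]."""
    weights = weights or {name: 1.0 for name in metrics}
    total = sum(weights[name] for name in metrics)
    return sum(metrics[name] * weights[name] for name in metrics) / total

sample = {"buffer_cache_hit": 0.95, "library_cache_hit": 0.90, "sort_in_memory": 0.99}
print(round(memory_perform_index(sample), 4))  # -> 0.9467
```

In a real deployment the metric values would be pulled from the monitored database by the scheduled collection job rather than hard-coded.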


Author(s): Dhananjay Thiruvady, Su Nguyen, Fatemeh Shiri, Nayyar Zaidi, Xiaodong Li

2021, Vol 2021, pp. 1-14
Author(s): Mohamed Abd Elaziz, Laith Abualigah, Rehab Ali Ibrahim, Ibrahim Attiya

Internet of Things (IoT) activities are offloaded to fog computing rather than the cloud to boost the quality of service (QoS) needed by many applications. However, the availability of continuous computing resources on fog servers is a constraint for IoT applications, since transmitting the large volumes of data generated by IoT devices creates network traffic and increases computational overhead. Task scheduling is therefore the main problem that must be solved efficiently. This study proposes an energy-aware model using an enhanced arithmetic optimization algorithm (AOA), called AOAM, which addresses the fog-computing job scheduling problem and maximizes users' QoS by minimizing the makespan. In the proposed AOAM, the search ability of the conventional AOA is enhanced with search operators from the marine predators algorithm (MPA) to improve solution diversity and escape local optima. AOAM is validated across several settings, varying the numbers of clients, data centers, hosts, virtual machines, and tasks, using standard evaluation measures including energy consumption and makespan. Compared with other state-of-the-art methods, the results show that AOAM is promising and solves task scheduling effectively.
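The makespan that such a scheduler minimizes is simply the completion time of the busiest machine. A minimal sketch of evaluating it for a candidate task-to-VM assignment (the list-based encoding and the speed model are illustrative assumptions, not the paper's exact fitness function):

```python
# Hypothetical sketch: makespan of a task->VM assignment.
# assignment[i] is the VM index for task i; task_len in MI, vm_speed in MIPS.
def makespan(assignment, task_len, vm_speed):
    """Completion time of the most-loaded VM under the given assignment."""
    load = {}
    for task, vm in enumerate(assignment):
        load[vm] = load.get(vm, 0.0) + task_len[task] / vm_speed[vm]
    return max(load.values())

# Four tasks on two VMs; VM 1 is twice as fast as VM 0.
print(makespan([0, 1, 1, 0], task_len=[4, 2, 6, 3], vm_speed=[1, 2]))  # -> 7.0
```

A metaheuristic like AOAM would evaluate many candidate assignments with a function of this shape and keep the ones with the smallest makespan (and energy).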


Author(s): Lin Yang, Ali Zeynali, Mohammad H. Hajiesmaili, Ramesh K. Sitaraman, Don Towsley

In this paper, we study the online multidimensional knapsack problem (OMdKP), in which a knapsack's capacity is represented in m dimensions, each of which may have a different capacity. Then, n items with scalar profit values and m-dimensional weights arrive online, and the goal is to admit or decline each item upon arrival so that the total profit of admitted items is maximized while the knapsack capacity is respected in every dimension. This is a natural generalization of the classic single-dimensional knapsack problem and has several relevant applications, such as virtual machine allocation, job scheduling, and all-or-nothing flow maximization over a graph. We develop two algorithms for OMdKP that use linear and exponential reservation functions to make online admission decisions. Our competitive analysis shows that the linear and exponential algorithms achieve competitive ratios of O(θα) and O(log(θα)), respectively, where α is the ratio between the aggregate knapsack capacity and the minimum capacity of a single dimension, and θ is the ratio between the maximum and minimum item unit values. We also characterize a lower bound on the competitive ratio of any online algorithm for OMdKP and show that the competitive ratio of our algorithm with the exponential reservation function matches this lower bound up to a constant factor.
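A reservation-function algorithm admits an item only when its unit value clears a threshold that grows with knapsack utilization. A minimal sketch under assumed unit-value bounds Lb and Ub (the exponential form psi(z) = Lb·(Ub/Lb)^z and the use of the dominant weight dimension are illustrative choices, not the paper's exact construction):

```python
# Hypothetical sketch of threshold-based online admission for OMdKP.
# items: list of (profit, weights), weights an m-vector; capacity: m-vector.
def online_mdknapsack(items, capacity, Lb, Ub):
    m = len(capacity)
    used = [0.0] * m
    profit, admitted = 0.0, []
    for i, (p, w) in enumerate(items):
        z = max(used[d] / capacity[d] for d in range(m))   # current utilization
        psi = Lb * (Ub / Lb) ** z                          # exponential threshold
        unit = p / max(w)                                  # value per unit of dominant weight
        if unit >= psi and all(used[d] + w[d] <= capacity[d] for d in range(m)):
            for d in range(m):
                used[d] += w[d]
            profit += p
            admitted.append(i)
    return admitted, profit

# Three arrivals in a 2-dimensional knapsack of capacity (10, 10).
print(online_mdknapsack([(5, [1, 1]), (0.5, [1, 1]), (20, [2, 2])],
                        [10, 10], Lb=1.0, Ub=10.0))  # -> ([0, 2], 25.0)
```

The low-value second item is rejected because the threshold has already risen above its unit value, which is exactly the capacity-reservation behavior the competitive analysis exploits.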


2021
Author(s): Sirivan Chaleunxay, Nikhil Shah

Abstract
Understanding the earth's subsurface is critical to the exploration and production (E&P) industry's need to minimize risk and maximize recovery. Until recently, the industry's service sector had made few advances in data-driven, automated earth-model building from raw exploration seismic data; that has now changed. The industry's leading technique for an unprecedented increase in resolution and accuracy when establishing a view of the interior of the earth is Full Waveform Inversion (FWI). Advanced formulations of FWI are capable of automating subsurface model building using only raw, unprocessed data. Cloud-based FWI accelerates this journey by pairing the most sophisticated waveform-inversion techniques with the largest compute facilities available, giving verifiable accuracy, more automation, and more efficiency. In this paper, we describe transforming cloud-based FWI to natively exploit the public cloud platform's main strengths: flexibility and on-demand scalability. We start from a lift-and-shift of a legacy MPI-based application designed to be run by a traditional on-prem job scheduler. Our specific goals are to (1) utilize a heterogeneous set of compute hardware throughout the lifecycle of a production FWI run without provisioning it for the entire duration, (2) take advantage of cost-efficient spare-capacity compute instances without uptime guarantees, and (3) maintain a single codebase that runs both on on-prem HPC systems and on the cloud. Achieving these goals meant moving the job-scheduling and "embarrassingly parallel" aspects of the communication code away from MPI and onto various cloud-based orchestration systems, as well as finding cloud-based solutions that worked and scaled well for the broadcast/reduction operation.
Placing these systems behind a customized TCP-based stub for MPI library calls lets us run the code as-is in an on-prem HPC environment, while on the cloud we can asynchronously provision and suspend worker instances (potentially with very different hardware configurations) as needed, without the burden of maintaining a static MPI world communicator. With this dynamic cloud-native architecture, we (1) utilize advanced formulations of FWI capable of automating subsurface model building using only raw unprocessed data, (2) extract velocity models from the full recorded wavefield (refractions, reflections, and multiples), and (3) introduce explicit sensitivity to reflection moveout, invisible to conventional FWI, for macro-model updates below the diving-wave zone. This makes it viable to revisit older legacy datasets acquired in complex environments and unlock considerable value where FWI has so far been impossible to apply successfully from a poor starting model.
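The single-codebase goal rests on hiding the communication backend behind a narrow interface, so the solver never calls MPI directly. A minimal sketch of that pattern (class and function names are hypothetical, and only a trivial local backend is shown in place of the real MPI and cloud-orchestration implementations):

```python
# Hypothetical sketch: the solver talks to a narrow communicator interface;
# a factory picks the backend (on-prem MPI stub vs. cloud queue) at startup.
class Communicator:
    def broadcast(self, data):
        raise NotImplementedError
    def reduce(self, values):
        raise NotImplementedError

class LocalCommunicator(Communicator):
    """Stand-in for the on-prem MPI path (single-process, for illustration)."""
    def broadcast(self, data):
        return data                 # every worker would receive the same model
    def reduce(self, values):
        return sum(values)          # e.g. summing per-shot gradient norms

def make_communicator(env):
    # The real system would return a TCP-stubbed cloud backend when
    # env == "cloud"; only the local path is sketched here.
    return LocalCommunicator()

comm = make_communicator("onprem")
print(comm.reduce([1.5, 2.5, 3.0]))  # -> 7.0
```

Because the solver only ever sees the `Communicator` interface, swapping a static MPI world for dynamically provisioned cloud workers requires no change to the inversion code itself.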


Symmetry, 2021, Vol 13 (12), pp. 2270
Author(s): Sina Zangbari Koohi, Nor Asilah Wati Abdul Hamid, Mohamed Othman, Gafurjan Ibragimov

High-performance computing harnesses thousands of processors to deliver far higher performance than a typical desktop computer or workstation, solving large problems in science, engineering, and business. How these machines are scheduled has an important impact on their performance: HPC job scheduling aims to develop an operational strategy that utilizes resources efficiently and avoids delays, and an optimized schedule yields greater efficiency from the parallel machine. Processor and network heterogeneity is a further difficulty for a scheduling algorithm, and user fairness is another; one of the open issues in this field is producing a balanced schedule that enhances both efficiency and user fairness. This paper proposes ROA-CONS, a new job scheduling method. It combines an updated conservative backfilling approach with further optimization by the raccoon optimization algorithm, and it proposes a selection technique that combines optimization of job waiting and response times with user fairness, contributing to a symmetrical schedule that increases user satisfaction and performance. Simulation assesses the effectiveness of the proposed method against other well-known job scheduling algorithms; the results demonstrate that the proposed strategy produces improved schedules that reduce the system's overall job waiting and response times.
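Conservative backfilling gives every queued job a reservation at submission time and lets a later job start early only if it delays no earlier job's reservation. A simplified sketch with exact runtimes on a single cluster (the paper's raccoon-optimised variant and fairness-aware selection are not reproduced here):

```python
# Simplified conservative-backfilling sketch: rigid jobs (n_procs, runtime)
# arrive in order; each is placed at the earliest time with enough free
# processors, without moving any previously placed job.
def backfill_schedule(jobs, procs):
    running = []   # (start, end, n_procs) of already-placed jobs
    starts = []

    def usage(t):
        return sum(k for s, e, k in running if s <= t < e)

    for n, runtime in jobs:
        # Candidate start times: t=0 and every point where a job frees procs.
        for t in sorted({0.0} | {e for _, e, _ in running}):
            # Capacity must hold at t and at every later start inside [t, t+runtime).
            points = [t] + [s for s, _, _ in running if t < s < t + runtime]
            if all(usage(p) + n <= procs for p in points):
                running.append((t, t + runtime, n))
                starts.append(t)
                break
    return starts

# 4-processor cluster: the third job (1 proc, 5 time units) backfills to t=0
# beside the first job, without delaying the second job's start at t=10.
print(backfill_schedule([(3, 10), (3, 10), (1, 5)], procs=4))  # -> [0.0, 10.0, 0.0]
```

The backfilled third job illustrates the efficiency gain the abstract refers to: gaps in the schedule are filled without pushing back any earlier reservation, which is what preserves fairness to waiting jobs.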

