Performance-Aware Scheduling of Parallel Applications on Non-Dedicated Clusters

Electronics ◽  
2019 ◽  
Vol 8 (9) ◽  
pp. 982 ◽  
Author(s):  
Alberto Cascajo ◽  
David E. Singh ◽  
Jesus Carretero

This work presents an HPC framework that provides new strategies for resource management and job scheduling, based on executing different applications on shared compute nodes to maximize platform utilization. The framework includes a scalable monitoring tool that analyzes the utilization of the platform’s compute nodes. We also introduce an extension of CLARISSE, a middleware for data-staging coordination and control on large-scale HPC platforms, that uses the information provided by the monitor in combination with application-level analysis to detect performance degradation in the running applications. This degradation, caused by applications sharing compute nodes and competing for their resources, is avoided by means of dynamic application migration. A description of the architecture, as well as a practical evaluation of the proposal, shows significant improvements of up to 20% in makespan and 10% in energy consumption compared to a non-optimized execution.
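As a rough illustration of this idea (not the authors' CLARISSE extension; the names, fields and threshold below are hypothetical), a monitor could compare each application's observed progress against a baseline and migrate it when co-located jobs degrade it:

    # Hypothetical sketch: degradation-triggered migration on shared nodes.
    DEGRADATION_THRESHOLD = 0.85  # assumed progress ratio below which we migrate

    def check_and_migrate(jobs, nodes):
        """Move a degraded job to the least-loaded node with enough free cores."""
        for job in jobs:
            progress_ratio = job.observed_rate / job.baseline_rate
            if progress_ratio < DEGRADATION_THRESHOLD:
                candidates = [n for n in nodes
                              if n is not job.node and n.free_cores >= job.cores]
                if candidates:
                    target = min(candidates, key=lambda n: n.utilization)
                    migrate(job, target)  # e.g., checkpoint/restart-based migration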

2020 ◽  
Vol 70 (1) ◽  
pp. 60-65 ◽  
Author(s):  
Goran Marković ◽  
Vlada Sokolović

Networks with distributed sensors, e.g., cognitive radio networks or wireless sensor networks, enable large-scale deployments of cooperative automatic modulation classification (AMC). Existing cooperative AMC schemes with centralised fusion offer a considerable performance increase over single-sensor reception. Previous studies generally focused on AMC scenarios in which the multipath channel is assumed to be static during signal reception. In practical mobile environments, however, time-correlated multipath channels occur, which strongly degrade the performance of existing cooperative AMC solutions. In this paper, we propose two novel cooperative AMC schemes with additional intra-sensor fusion and show that they offer significant performance improvements over the existing ones under the given conditions.
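A minimal sketch of the two-stage fusion idea, assuming each sensor splits its observation into sub-blocks over which the channel is roughly static (the function names and the simple log-likelihood summation are illustrative, not the authors' exact scheme):

    import numpy as np

    def intra_sensor_fusion(block_loglik):
        """Fuse per-block log-likelihoods within one sensor.
        block_loglik: array of shape (n_blocks, n_modulations)."""
        return block_loglik.sum(axis=0)

    def centralised_fusion(per_sensor_loglik):
        """Fuse per-sensor log-likelihoods at the fusion centre and decide."""
        total = np.sum(per_sensor_loglik, axis=0)
        return int(np.argmax(total))  # index of the classified modulation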


2021 ◽  
Author(s):  
Dilshad Hassan Sallo ◽  
Gabor Kecskemeti

Discrete Event Simulation (DES) frameworks have gained significant popularity for supporting and evaluating cloud computing environments. They support decision-making for complex scenarios, saving time and effort. The majority of these frameworks lack parallel execution. Despite being a sequential framework, DISSECT-CF introduced significant performance improvements when simulating Infrastructure as a Service (IaaS) clouds. Even with these improvements over state-of-the-art sequential simulators, there are several scenarios (e.g., large-scale Internet of Things or serverless computing systems) that DISSECT-CF cannot simulate in a timely fashion. To remedy this, the paper introduces parallel execution to its most abstract subsystem: the event system. The new event subsystem detects when multiple events occur at the same time instance of the simulation and decides whether to execute them in a parallel or a sequential fashion. This decision is mainly based on the number of independent events and the expected workload of each event. In our evaluation, we focused exclusively on time management scenarios, while ensuring that the behaviour of the events was equivalent to realistic, larger-scale simulation scenarios. This allowed us to understand the effects of parallelism on the whole framework and to show the gains of the new system compared to the old sequential one. With regard to scaling, we observed it to be proportional to the number of cores in the utilised SMP host.
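A sketch of how such a dispatch decision could look (DISSECT-CF itself is a Java framework; the Python below and the thresholds MIN_EVENTS and MIN_WORK are purely illustrative, not its API):

    from concurrent.futures import ThreadPoolExecutor

    MIN_EVENTS = 32      # assumed minimum number of independent events
    MIN_WORK = 1_000     # assumed minimum expected total workload

    def dispatch(events, pool: ThreadPoolExecutor):
        """Fire the events of one time instance, in parallel when it pays off."""
        independent = [e for e in events if not e.depends_on_others]
        work = sum(e.expected_cost for e in independent)
        if len(independent) >= MIN_EVENTS and work >= MIN_WORK:
            list(pool.map(lambda e: e.fire(), independent))   # parallel path
            remaining = [e for e in events if e.depends_on_others]
        else:
            remaining = events                                # sequential path
        for e in remaining:
            e.fire()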


2014 ◽  
Vol 14 (4-5) ◽  
pp. 553-567 ◽  
Author(s):  
TERRANCE SWIFT

Resolution-based Knowledge Representation and Reasoning (KRR) systems, such as Flora-2, Silk or Ergo, can scale to tens or hundreds of millions of facts, while supporting reasoning that includes Hilog, inheritance, defeasibility theories, and equality theories. These systems handle the termination and complexity issues that arise from the use of these features by heavy use of tabled resolution. In fact, such systems table by default all rules defined by users, unless they are simple facts. Performing dynamic updates within such systems is nearly impossible unless the tables themselves can be made to react to changes. Incremental tabling as first implemented in XSB (Saha 2006) partially addressed this problem, but the implementation was limited in scope and not always easy to use. In this paper, we introduce transparent incremental tabling, which at the semantic level supports updates in the 3-valued well-founded semantics while guaranteeing full consistency of all tabled queries. Transparent incremental tabling also offers significant performance improvements over previous implementations, including lazy recomputation and control over the dependency structures used to determine how tables are updated.
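For readers unfamiliar with incremental tabling, a loose conceptual analogue (in Python rather than XSB's tabling engine, with hypothetical names) is a memo table that records which facts each answer depends on and invalidates only the affected entries, recomputing them lazily on the next call:

    tables = {}   # query -> cached answers
    deps = {}     # fact  -> set of queries whose answers used that fact

    def tabled_call(query, solve):
        """Answer a query from the table, computing and recording it if absent."""
        if query not in tables:
            answers, used_facts = solve(query)
            tables[query] = answers
            for fact in used_facts:
                deps.setdefault(fact, set()).add(query)
        return tables[query]

    def update_fact(fact):
        """On assert/retract, invalidate only dependent tables (lazy recompute)."""
        for query in deps.pop(fact, set()):
            tables.pop(query, None)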


2012 ◽  
Vol 2012 ◽  
pp. 1-18 ◽  
Author(s):  
Xiaocheng Liu ◽  
Bin Chen ◽  
Xiaogang Qiu ◽  
Ying Cai ◽  
Kedi Huang

An increasing number of high-performance computing parallel applications leverage the power of the cloud for parallel processing. Scheduling these parallel applications to improve the quality of service is key to successfully hosting them in the cloud. The large scale of the cloud makes parallel job scheduling more complicated, as even the simple parallel job scheduling problem is NP-complete. In this paper, we propose a parallel job scheduling algorithm named MEASY. MEASY adopts migration and consolidation to enhance the most popular EASY scheduling algorithm. Our extensive experiments on well-known workloads show that our algorithm significantly improves the quality of service. For two common parallel job scheduling objectives, our algorithm produces an improvement of up to 41.1% (23.1% on average) in average response time and up to 82.9% (69.3% on average) in average slowdown. Our algorithm is also robust in that it tolerates inaccurate CPU usage estimates and high migration costs. Our approach involves only a trivial modification of EASY and requires no additional techniques; it is practical and effective in the cloud environment.
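For context, the EASY baseline that MEASY extends works roughly as sketched below: start queued jobs in order while they fit, make a reservation for the first job that does not fit, and backfill later jobs only if they cannot delay that reservation (the helper reservation_for and the job fields are hypothetical; migration and consolidation are omitted):

    def easy_backfill(queue, free_cores, now):
        """Simplified EASY backfilling over a FIFO queue of jobs."""
        started = []
        while queue and queue[0].cores <= free_cores:   # start jobs in order
            job = queue.pop(0)
            job.start = now
            free_cores -= job.cores
            started.append(job)
        if queue:                                       # head job must wait
            shadow_time, spare = reservation_for(queue[0], started, free_cores, now)
            for job in list(queue[1:]):                 # try to backfill the rest
                fits_now = job.cores <= free_cores
                harmless = (now + job.runtime_estimate <= shadow_time
                            or job.cores <= spare)
                if fits_now and harmless:
                    queue.remove(job)
                    job.start = now
                    free_cores -= job.cores
                    if now + job.runtime_estimate > shadow_time:
                        spare -= job.cores              # still running at shadow time
                    started.append(job)
        return started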


Author(s):  
Juntao Li ◽  
Ruidan He ◽  
Hai Ye ◽  
Hwee Tou Ng ◽  
Lidong Bing ◽  
...  

Recent research indicates that pretraining cross-lingual language models on large-scale unlabeled texts yields significant performance improvements on various cross-lingual and low-resource tasks. Through training on one hundred languages and terabytes of texts, cross-lingual language models have proven effective in leveraging high-resource languages to enhance low-resource language processing, and they outperform monolingual models. In this paper, we further investigate the cross-lingual and cross-domain (CLCD) setting, in which a pretrained cross-lingual language model needs to adapt to new domains. Specifically, we propose a novel unsupervised feature decomposition method that can automatically extract domain-specific features and domain-invariant features from the entangled pretrained cross-lingual representations, given unlabeled raw texts in the source language. Our proposed model leverages mutual information estimation to decompose the representations computed by a cross-lingual model into domain-invariant and domain-specific parts. Experimental results show that our proposed method achieves significant performance improvements over the state-of-the-art pretrained cross-lingual language model in the CLCD setting.
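One way to picture such a decomposition, under assumptions of our own (a simple bilinear critic standing in for the paper's mutual information estimator, and hypothetical module names), is two projection heads whose outputs are pushed apart by an MI-style penalty:

    import torch.nn as nn

    class Decomposer(nn.Module):
        """Split a pretrained representation into invariant and specific parts."""
        def __init__(self, dim):
            super().__init__()
            self.invariant = nn.Linear(dim, dim // 2)         # domain-invariant head
            self.specific = nn.Linear(dim, dim // 2)          # domain-specific head
            self.critic = nn.Bilinear(dim // 2, dim // 2, 1)  # crude MI critic

        def forward(self, h):
            z_inv, z_spec = self.invariant(h), self.specific(h)
            # Minimising this score during training discourages shared
            # information between the two parts.
            mi_penalty = self.critic(z_inv, z_spec).mean()
            return z_inv, z_spec, mi_penalty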


2001 ◽  
Author(s):  
Bradley Olson ◽  
Leonard Jason ◽  
Joseph R. Ferrari ◽  
Leon Venable ◽  
Bertel F. Williams ◽  
...  

2020 ◽  
Vol 39 (4) ◽  
pp. 5449-5458
Author(s):  
A. Arokiaraj Jovith ◽  
S.V. Kasmir Raja ◽  
A. Razia Sulthana

Interference in Wireless Sensor Networks (WSNs) predominantly affects the performance of the WSN, and energy consumption in WSNs is one of the greatest concerns in the current generation. This work presents an approach for interference measurement and interference mitigation in point-to-point networks. The nodes are distributed in the network, and interference is measured by grouping the nodes into regions of a specific diameter; hence the approach is scalable and extends to large-scale WSNs. Interference is addressed in two stages. In the first stage, interference is mitigated by allocating time slots to the node stations in Time Division Multiple Access (TDMA) fashion. The node area is split into larger regions and smaller regions, and time slots are allocated to the smaller regions in TDMA fashion. A TDMA-based time slot allocation algorithm is proposed in this paper to enable reuse of time slots with minimal interference between smaller regions. In the second stage, a network-density control parameter is introduced to further reduce interference within the smaller node regions. The algorithm is simulated and the system is tested with varying control parameter values. The node-level interference and the energy dissipation at nodes are captured by varying the node density of the network. The results indicate that the proposed approach measures and mitigates interference with minimal energy consumption at the nodes and low transmission overhead.
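The slot-reuse idea can be illustrated with a simple greedy assignment (our own sketch, not the authors' algorithm): regions whose centres are farther apart than an assumed interference range may share a TDMA slot, which amounts to greedy graph colouring:

    import math

    def assign_slots(region_centres, interference_range):
        """Greedy TDMA slot assignment with spatial reuse.
        region_centres: list of (x, y) tuples; returns {region index: slot}."""
        slots = {}
        for i, (xi, yi) in enumerate(region_centres):
            conflicting = {slots[j] for j, (xj, yj) in enumerate(region_centres[:i])
                           if math.hypot(xi - xj, yi - yj) < interference_range}
            slot = 0
            while slot in conflicting:
                slot += 1      # lowest slot not used by any interfering region
            slots[i] = slot
        return slots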


Author(s):  
O. Kravchuk ◽  
V. Symonenkov ◽  
I. Symonenkova ◽  
O. Hryhorev

Today, more than forty countries are engaged in the development of military-purpose robots. A number of unique mobile robots with a wide range of capabilities are already being used by combat and intelligence units of the armed forces of developed countries to gather battlefield intelligence and support tactical groups. The use of the latest information technology in military robotics is being thoroughly investigated, and the creation of highly effective information management systems for land-mobile robotic complexes has entered a new phase associated with distributed information and sensor systems: a transition from separate sensors and devices to modular information subsystems that provide access to various data sources and complex methods of information processing. The purpose of the article is to investigate ways to increase the autonomy of land-mobile robotic complexes used in the non-deterministic conditions of modern combat. The relevance of this research stems from the need to create highly effective information and control systems in prospective robotic means for the Land Forces of Ukraine. Developing the management system of the Armed Forces of Ukraine based on the criteria adopted by EU and NATO member states is one of the main directions for increasing the effectiveness of the use of troops (forces), and involves achieving the principles and standards necessary for Ukraine to become a member of the EU and NATO. Inherent in achieving these criteria will be a reduction of the tasks assigned to combined-arms units and the large-scale use of high-precision weapons and remotely controlled ground robotic devices. According to leading specialists in robotics, automating the information subsystems and components of land-mobile robotic complexes can increase the safety, reliability, fault tolerance and effectiveness of robotic means by standardizing the necessary actions with minimal human intervention, that is, by significantly increasing the autonomy of land-mobile robotic complexes for the needs of the Land Forces of Ukraine.

