Effective Distribution of Tasks in Multiprocessor and Multi-Computers Distributed Homogeneous Systems

2021 ◽  
pp. 211-220
Author(s):  
Serhii Zybin ◽  
Vladimir Khoroshko ◽  
Volodymyr Maksymovych ◽  
Ivan Opirskyy

A promising direction today is the use of a large number of processors to solve resource-intensive tasks. The enormous potential of multiprocessor and multicomputer systems can be fully realized only with effective methods for organizing the distribution of tasks between processors or computers, yet the problem of efficient task distribution in such computing systems remains open. Two key, conflicting factors have a critical impact on system performance: load uniformity and interprocessor or intercomputer interactions. Both must be taken into account simultaneously when distributing tasks in multiprocessor computing systems. Uniform loading plays a key role in achieving high parallel efficiency, especially in systems with a large number of processors or computers. Efficiency here means not only the ability to obtain the result of the computation in a finite number of iterations with the necessary accuracy, but also to obtain it in the shortest possible time. The number of tasks assigned to each processor or computer should therefore be chosen so that the execution time is minimal. This study offers a technique that takes into account the workload of computers and intercomputer interactions and minimizes the execution time of tasks. The proposed technique also allows the comparison of different architectures of computers and computing modules: one parameter characterizes the behavior of different models with a fixed number of computers, while another compares the effectiveness of each computer architecture or computing module when a different number of computers is used. The number of computers can vary at a fixed workload. Mathematically, the method reduces to solving an optimization or feasibility problem.
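As a rough illustration of the kind of assignment problem the technique reduces to, the following sketch (not the authors' method; the task costs, communication matrix, and penalty weight are invented) greedily places tasks so that per-computer load stays balanced while inter-computer communication is penalized:

```python
# Greedy list scheduling with a communication penalty: a minimal sketch,
# assuming known per-task costs and a symmetric task communication matrix.
def distribute(task_costs, comm, n_machines, comm_weight=0.5):
    loads = [0.0] * n_machines          # current load per machine
    placement = {}                      # task index -> machine index
    # Placing longer tasks first improves balance for greedy scheduling.
    for t in sorted(range(len(task_costs)), key=lambda i: -task_costs[i]):
        best_m, best_score = None, float("inf")
        for m in range(n_machines):
            # Penalize communication with already-placed tasks on other machines.
            penalty = sum(comm[t][u] for u, mu in placement.items() if mu != m)
            score = loads[m] + task_costs[t] + comm_weight * penalty
            if score < best_score:
                best_m, best_score = m, score
        placement[t] = best_m
        loads[best_m] += task_costs[t]
    return placement, max(loads)        # assignment and estimated makespan

costs = [4.0, 3.5, 3.0, 2.0, 1.5, 1.0]
comm = [[0, 2, 0, 0, 1, 0],
        [2, 0, 1, 0, 0, 0],
        [0, 1, 0, 2, 0, 0],
        [0, 0, 2, 0, 0, 1],
        [1, 0, 0, 0, 0, 2],
        [0, 0, 0, 1, 2, 0]]
print(distribute(costs, comm, n_machines=2))
```

The paper formulates this trade-off as an optimization or feasibility problem rather than a greedy heuristic; the sketch only shows how load balance and communication pull in opposite directions.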

Electronics ◽  
2020 ◽  
Vol 9 (9) ◽  
pp. 1526 ◽  
Author(s):  
Choongmin Kim ◽  
Jacob A. Abraham ◽  
Woochul Kang ◽  
Jaeyong Chung

Crossbar-based neuromorphic computing, also referred to as processing-in-memory or in-situ analog computing, is a popular alternative to conventional von Neumann computing systems for accelerating neural networks. Because a crossbar provides a fixed number of synapses per neuron, neurons must be decomposed to map networks onto the crossbars. This paper proposes the k-spare decomposition algorithm, which can trade off predictive performance against neuron usage during the mapping. The proposed algorithm performs a two-level hierarchical decomposition. In the first, global decomposition, it decomposes the neural network so that each crossbar has k spare neurons; these neurons are then used to improve the accuracy of the partially mapped network in the subsequent local decomposition. Our experimental results using modern convolutional neural networks show that the proposed method can improve accuracy substantially with only about 10% extra neurons.
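A minimal sketch of the global-decomposition step as described, assuming an invented crossbar capacity and k; the paper's actual mapping algorithm is not reproduced here:

```python
# Pack a layer's neurons onto fixed-size crossbars, reserving k spare
# neurons per crossbar for the later accuracy-recovery (local) step.
def global_decompose(n_neurons, crossbar_size, k):
    usable = crossbar_size - k          # neurons actually mapped per crossbar
    assert usable > 0, "k must be smaller than the crossbar size"
    crossbars = []
    for start in range(0, n_neurons, usable):
        mapped = list(range(start, min(start + usable, n_neurons)))
        crossbars.append({"mapped": mapped, "spares": k})
    return crossbars

# Example: 100 neurons onto crossbars of 32 neurons with 4 spares each.
for i, xb in enumerate(global_decompose(100, crossbar_size=32, k=4)):
    print(i, len(xb["mapped"]), "mapped,", xb["spares"], "spare")
```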


2013 ◽  
Vol 8 (1) ◽  
pp. 25-34 ◽  
Author(s):  
Michał Juszczyk ◽  
Agnieszka Leśniak ◽  
Krzysztof Zima

Conceptual cost estimation is important for construction projects: either underestimating or overestimating the cost of raising a building may lead to the failure of a project. In this paper the authors present the application of a multicriteria comparative analysis (MCA) to select the factors influencing the cost of raising residential buildings. The aim of the analysis is to identify key factors useful for conceptual cost estimation at the early design stage. The key factors are investigated on the basis of elementary information about the function, form and structure of the building, and the primary assumptions about the technological and organizational solutions applied in the construction process. These factors are treated as variables of a model intended to make conceptual cost estimation fast and satisfactorily accurate. The analysis comprised three steps: preliminary research, the choice of a set of potential variables, and the reduction of this set to the final set of variables. Multicriteria comparative analysis is applied to solve the problem. The analysis made it possible to select a group of factors, defined well enough at the conceptual stage of the design process, to be used as the descriptive variables of the model.
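A minimal sketch of the weighted-scoring idea behind such a multicriteria comparative analysis; the criteria, weights, scores, and cutoff below are illustrative inventions, not values from the paper:

```python
# Score candidate cost factors against criteria and keep the best-defined
# ones; all names and numbers here are invented for illustration.
candidates = {
    "usable floor area":   {"availability": 5, "cost_relevance": 5, "objectivity": 4},
    "building volume":     {"availability": 5, "cost_relevance": 4, "objectivity": 4},
    "facade finish class": {"availability": 2, "cost_relevance": 3, "objectivity": 2},
}
weights = {"availability": 0.4, "cost_relevance": 0.4, "objectivity": 0.2}

def mca_score(scores):
    return sum(weights[c] * s for c, s in scores.items())

ranked = sorted(candidates, key=lambda f: mca_score(candidates[f]), reverse=True)
selected = [f for f in ranked if mca_score(candidates[f]) >= 3.5]  # illustrative cutoff
print(selected)
```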


1974 ◽  
Vol 3 (32) ◽  
Author(s):  
L. Phillip Caillouet ◽  
Bruce D. Shriver

This paper offers an introduction to a research effort in fault-tolerant computer architecture organized at the University of Southwestern Louisiana (USL). It is intended as an overview of several topics that have been isolated for study, and as an indication of preliminary undertakings with regard to one particular topic. This first area of concentration involves the systematic design of fault-tolerant computing systems via a multi-level approach. Efforts are also being initiated in the diagnosis of microprogrammable processors via firmware, fault-data management across levels of virtual machines, the development of a methodology for realizing a firmware hardcore on a variety of hosts, and the delineation of a minimal set of resources for the design of a practical host for a multi-level fault-tolerant computing system. The research is being conducted under the auspices of Project Beta at USL.


2016 ◽  
Vol 6 (3) ◽  
pp. 248-254
Author(s):  
Evgeniy Konovalchuk ◽ 
Oleg Konovalov ◽ 
Serbulov ◽ 
...

To this end, the article considers the solution of two problems: the achievement of a conditional minimum and the achievement of an absolute minimum, according to the developed algorithms. The problem of achieving the conditional minimum is reduced to a problem of integer quadratic programming, while the problem of achieving the absolute minimum is reduced to minimizing the utility function within the field of the set restrictions and searching over a fixed number of variables.
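For tiny instances, the integer quadratic programming subproblem mentioned above can be illustrated by exhaustive search; the objective matrix, variable bounds, and constraint below are assumptions for illustration, not the article's model:

```python
# Brute-force integer quadratic programming over a small box: minimize
# x^T Q x subject to a linear constraint; only viable for tiny instances.
from itertools import product

Q = [[2, 1], [1, 3]]                    # invented quadratic objective
BOUND = 5                               # each variable ranges over 0..BOUND

def objective(x):
    n = len(x)
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

def feasible(x):
    return x[0] + x[1] >= 3             # the "conditional" restriction

best = min((x for x in product(range(BOUND + 1), repeat=2) if feasible(x)),
           key=objective)
print(best, objective(best))            # (2, 1) with objective value 15
```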


2013 ◽  
Vol 65 (2) ◽  
pp. 886-902 ◽  
Author(s):  
Hong Jun Choi ◽  
Dong Oh Son ◽  
Seung Gu Kang ◽  
Jong Myon Kim ◽  
Hsien-Hsin Lee ◽  
...  

2019 ◽  
Vol 51 (1) ◽  
pp. 77-95 ◽  
Author(s):  
Amir Ahmadi-Javid ◽  
Seyed Hamed Fateminia ◽  
Hans Georg Gemünden

To improve the effectiveness of project portfolio risk management, a portfolio-wide approach is required. Implementing a proactive strategy, this article presents a method based on mathematical optimization for selecting an appropriate set of a priori local and global responses to risks that threaten a project portfolio. The method considers key factors such as cost, budget, project preference weights, risk-event probabilities, interdependencies among work packages, and both occurrence and impact dependencies among risk events. As the proposed method has features that existing single-project methods lack, it can also be used in project risk management.
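A minimal sketch of the underlying selection problem in its simplest form, ignoring the dependency structure the article models; the response names, costs, risk reductions, and budget are invented:

```python
# Choose a set of risk responses maximizing expected risk reduction under
# a budget, by exhaustive subset enumeration (fine for a handful of options).
from itertools import combinations

responses = {                            # name: (cost, expected risk reduction)
    "local_fix_A": (10, 7.0),
    "local_fix_B": (15, 9.0),
    "global_plan": (25, 18.0),
    "insurance":   (20, 11.0),
}
BUDGET = 45

best, best_value = (), 0.0
names = list(responses)
for r in range(len(names) + 1):
    for subset in combinations(names, r):
        cost = sum(responses[n][0] for n in subset)
        value = sum(responses[n][1] for n in subset)
        if cost <= BUDGET and value > best_value:
            best, best_value = subset, value
print(best, best_value)                  # ('global_plan', 'insurance') 29.0
```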


2010 ◽  
Vol 19 (07) ◽  
pp. 1543-1557
Author(s):  
WEI HU ◽  
TIANZHOU CHEN ◽  
QINGSONG SHI ◽  
SHA LIU

Multithreaded programming has become the dominant paradigm in computer architecture, mainly in the form of multi-core processors. The performance bottleneck of a multithreaded program is its critical path, whose length equals the program's total execution time. As the number of cores within a processor increases, the Network-on-Chip (NoC) has been proposed as a promising approach to inter-core communication. To optimize the performance of a multithreaded program running on an NoC-based multi-core platform, we design and implement a critical-path driven router, which prioritizes inter-thread communication on the critical path when routing packets. The experimental results show that the critical-path driven router reduces the execution time of the test case by 14.8% compared to an ordinary router.
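A minimal sketch of the arbitration idea, assuming packets carry a critical-path flag; the packet format and router internals here are invented, not the paper's design:

```python
# A router output-port arbiter that services packets on the critical path
# first, falling back to FIFO order among packets of equal priority.
import heapq

class CriticalPathArbiter:
    def __init__(self):
        self._queue = []                 # min-heap of (priority, seq, packet)
        self._seq = 0                    # FIFO tie-break counter

    def enqueue(self, packet, on_critical_path):
        priority = 0 if on_critical_path else 1
        heapq.heappush(self._queue, (priority, self._seq, packet))
        self._seq += 1

    def next_packet(self):
        return heapq.heappop(self._queue)[2] if self._queue else None

arb = CriticalPathArbiter()
arb.enqueue("pkt-A", on_critical_path=False)
arb.enqueue("pkt-B", on_critical_path=True)
print(arb.next_packet())                 # pkt-B is routed first
```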


2013 ◽  
Vol 2013 (1) ◽  
pp. 000753-000757
Author(s):  
Thomas A. Wassick

Over the past few years, lead-free solder interconnects have been widely incorporated into electronic products and are increasingly found in high-performance computing systems and their associated power electronics. As power and current levels increase within these products, the overall reliability of a lead-free solder based system can be affected by a growing risk of electromigration (EM) degradation during the product lifetime, especially if the product operates at higher temperatures and very high current densities. This paper provides a high-level technical overview of lead-free electromigration and describes the key factors and issues that can influence the EM performance of lead-free interconnects, especially in the environments in which power electronics are typically found.
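One standard way to quantify the current-density and temperature factors mentioned here is Black's equation, MTTF = A * J^-n * exp(Ea / (kT)); the sketch below uses illustrative constants rather than measured values for any particular solder alloy:

```python
# Relative electromigration lifetime from Black's equation; A, n, and Ea
# below are placeholders, not characterized lead-free solder parameters.
import math

K_BOLTZMANN = 8.617e-5                   # Boltzmann constant, eV/K

def black_mttf(j, temp_k, a=1.0, n=2.0, ea=0.8):
    """Mean time to failure at current density j (A/cm^2) and temperature temp_k (K)."""
    return a * j ** -n * math.exp(ea / (K_BOLTZMANN * temp_k))

# Doubling current density at 100 C cuts relative lifetime ~4x when n = 2.
base = black_mttf(1e4, 373.0)
print(black_mttf(2e4, 373.0) / base)     # 0.25
```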

