multiple processors
Recently Published Documents


TOTAL DOCUMENTS

136
(FIVE YEARS 13)

H-INDEX

19
(FIVE YEARS 2)

2022 ◽  
Vol 27 (3) ◽  
pp. 1-23
Author(s):  
Mari-Liis Oldja ◽  
Jangryul Kim ◽  
Dowhan Jeong ◽  
Soonhoi Ha

Although dataflow models are known to thrive at exploiting task-level parallelism of an application, it is difficult to exploit data parallelism, which is well represented by loop structures, since these structures are not explicitly specified in existing dataflow models. The SDF/L model overcomes this shortcoming by specifying loop structures explicitly in a hierarchical fashion. We introduce a technique for scheduling an application represented by the SDF/L model onto heterogeneous processors. In the proposed method, we explore the mapping of tasks using an evolutionary meta-heuristic and schedule hierarchically in a bottom-up fashion, creating parallel loop schedules at lower levels first and then reusing them when constructing the schedule at a higher level. The efficiency of the proposed scheduling methodology is verified with benchmark examples and randomly generated SDF/L graphs.
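A minimal Python sketch of the mapping step described above, assuming a toy task set, a hypothetical heterogeneous execution-time table, and a plain genetic-style search; precedence constraints and the hierarchical loop scheduling are omitted for brevity, so this illustrates the idea rather than the authors' algorithm.

import random

TASKS = ["src", "loop_body", "filter", "sink"]      # hypothetical task set
PROCS = ["cpu0", "cpu1", "gpu0"]                    # hypothetical processors
# EXEC[task][proc]: assumed heterogeneous execution times (ms)
EXEC = {
    "src":       {"cpu0": 2, "cpu1": 2, "gpu0": 5},
    "loop_body": {"cpu0": 9, "cpu1": 8, "gpu0": 3},
    "filter":    {"cpu0": 6, "cpu1": 5, "gpu0": 2},
    "sink":      {"cpu0": 1, "cpu1": 1, "gpu0": 4},
}

def makespan(mapping):
    """Load of the most loaded processor (precedence ignored for brevity)."""
    load = {p: 0 for p in PROCS}
    for task, proc in mapping.items():
        load[proc] += EXEC[task][proc]
    return max(load.values())

def mutate(mapping):
    """Move one randomly chosen task to a random processor."""
    child = dict(mapping)
    child[random.choice(TASKS)] = random.choice(PROCS)
    return child

def evolve(pop_size=20, generations=50):
    """Evolve task-to-processor mappings; lower makespan means fitter."""
    pop = [{t: random.choice(PROCS) for t in TASKS} for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=makespan)
        survivors = pop[: pop_size // 2]
        pop = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return min(pop, key=makespan)

best = evolve()
print(best, "makespan =", makespan(best))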


2021 ◽  
Vol 11 (16) ◽  
pp. 7465
Author(s):  
Sabeen Masood ◽  
Shoab Ahmed Khan ◽  
Ali Hassan ◽  
Urooj Fatima

Recent years have seen a tremendous increase in the processing requirements of present-day embedded system applications. Embedded systems consist of multiple processing elements (PEs) connected to each other through different types of interfaces. Embedded systems accomplish many complicated tasks in varied settings, which may introduce errors during inter-processor communication. Testing such systems is far more difficult and challenging than testing non-real-time systems. A major part of testing real-time embedded systems involves ensuring accuracy and timing in synchronous inter-process communication. More specifically, the synchronization and inter-processor communication of real-time applications make testing a challenging task, and the demand for higher data rates increases day by day, making the testing of such systems even more complex. This paper presents a novel framework that uses multiple instances of simulators with physical high-speed serial interfaces to emulate any real-time embedded system communication. The framework presents a testing technique that detects all faults related to synchronization of high-speed synchronous serial interfaces in a systematic manner. The novelty of our approach is to simulate communication across multiple processors in a simulation environment for detecting and localizing bugs. We verify this framework using a case study consisting of an embedded software defined radio (SDR) system. The test results show the applicability of our approach in fixing bugs related to synchronization issues that are otherwise very hard to find and fix in very complicated systems such as SDRs.
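As an illustration of the kind of synchronization fault such a framework targets (not the paper's actual framework), the following sketch models a serial link between two simulated processing elements as a byte queue and flags a frame whose sync word is lost; the sync word, frame length, and fault-injection hook are assumptions.

from collections import deque

SYNC_WORD = b"\xAA\x55"
FRAME_LEN = 8            # 2-byte sync word + 6 payload bytes (assumed framing)

def sender(payloads, drop_sync_in=None):
    """Produce framed traffic on the link; optionally corrupt one frame's sync word."""
    link = deque()
    for i, payload in enumerate(payloads):
        sync = b"\x00\x00" if i == drop_sync_in else SYNC_WORD
        link.extend(sync + payload.ljust(FRAME_LEN - len(SYNC_WORD), b"\x00"))
    return link

def receiver(link):
    """Consume frames from the link and report synchronization faults."""
    faults = []
    frame_no = 0
    while len(link) >= FRAME_LEN:
        frame = bytes(link.popleft() for _ in range(FRAME_LEN))
        if not frame.startswith(SYNC_WORD):
            faults.append((frame_no, "sync word lost"))
        frame_no += 1
    return faults

link = sender([b"hello", b"world", b"again"], drop_sync_in=1)
print(receiver(link))    # expected: [(1, 'sync word lost')]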


Radiation ◽  
2021 ◽  
Vol 1 (2) ◽  
pp. 79-94
Author(s):  
Peter K. Rogan ◽  
Eliseos J. Mucaki ◽  
Ben C. Shirley ◽  
Yanxin Li ◽  
Ruth C. Wilkins ◽  
...  

The dicentric chromosome (DC) assay accurately quantifies exposure to radiation; however, manual and semi-automated assignment of DCs has limited its use for a potential large-scale radiation incident. The Automated Dicentric Chromosome Identifier and Dose Estimator (ADCI) software automates unattended DC detection and determines radiation exposures, fulfilling IAEA criteria for triage biodosimetry. This study evaluates the throughput of high-performance ADCI (ADCI-HT) to stratify exposures of populations in 15 simulated population-scale radiation exposures. ADCI-HT streamlines dose estimation using a supercomputer by optimal hierarchical scheduling of DC detection for varying numbers of samples and metaphase cell images in parallel on multiple processors. We evaluated processing times and accuracy of estimated exposures across census-defined populations. Image processing of 1744 samples on 16,384 CPUs required 1 h 11 min 23 s, and radiation dose estimation based on DC frequencies required 32 s. Processing of 40,000 samples at 10 exposures from five laboratories required 25 h and met IAEA criteria (dose estimates were within 0.5 Gy; median = 0.07). Geostatistically interpolated radiation exposure contours of simulated nuclear incidents were defined by samples exposed to clinically relevant exposure levels (1 and 2 Gy). Analysis of all exposed individuals with ADCI-HT required 0.6–7.4 days, depending on the population density of the simulation.
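The parallel dose-estimation workflow can be pictured with the following sketch, which is not ADCI-HT itself: per-sample dicentric counting is distributed over worker processes, and the DC yield is converted to a dose with an assumed linear-quadratic calibration curve Y = c + aD + bD^2; the coefficients and the placeholder sample counts are illustrative only.

from multiprocessing import Pool

# Assumed linear-quadratic calibration coefficients (per-cell DC yield);
# real coefficients are laboratory-specific.
C, A, B = 0.001, 0.03, 0.06

def count_dcs(sample):
    """Placeholder for per-sample image analysis: returns (dicentrics, cells)."""
    dicentrics, cells = sample
    return dicentrics, cells

def dose_from_yield(y):
    """Invert Y = C + A*D + B*D**2 for dose D (Gy), taking the positive root."""
    disc = A * A + 4 * B * (y - C)
    return (-A + disc ** 0.5) / (2 * B) if disc > 0 else 0.0

if __name__ == "__main__":
    # Illustrative placeholder (dicentric, cell) counts for three samples.
    samples = [(12, 500), (60, 500), (150, 500)]
    with Pool(processes=4) as pool:
        results = pool.map(count_dcs, samples)
    for dics, cells in results:
        y = dics / cells
        print(f"yield={y:.3f}  dose~{dose_from_yield(y):.2f} Gy")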


2020 ◽  
Author(s):  
Tyson McCall ◽  
Corinne Ransberger ◽  
Steve Hsiung

2020 ◽  
Author(s):  
Hyemin Han

BayesFactorFMRI is a tool developed with R and Python that allows neuroimaging researchers to conduct Bayesian second-level analysis and Bayesian meta-analysis of fMRI image data with multiprocessing. This tool expedites computationally intensive Bayesian fMRI analysis through multiprocessing. Its GUI allows researchers who are not experts in computer programming to perform Bayesian fMRI analysis feasibly. BayesFactorFMRI is available for download via Zenodo and GitHub. It can be widely reused by neuroimaging researchers who intend to analyse their fMRI data with Bayesian methods, which offer better sensitivity than classical analysis, while improving performance by distributing analysis tasks across multiple processors.
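A rough sketch of how voxel-wise Bayesian computations can be distributed over multiple processors, not BayesFactorFMRI's implementation: it uses the BIC-based Bayes factor approximation BF10 ~ exp((BIC_null - BIC_alt) / 2), and both the choice of approximation and the toy voxel data are assumptions.

import math
from multiprocessing import Pool

def voxel_bayes_factor(values):
    """BIC-approximate BF10 for 'mean differs from zero' at one voxel."""
    n = len(values)
    mean = sum(values) / n
    var_alt = sum((v - mean) ** 2 for v in values) / n   # residual variance under H1
    var_null = sum(v ** 2 for v in values) / n           # residual variance under H0
    bic_alt = n * math.log(var_alt) + 1 * math.log(n)    # one free parameter (the mean)
    bic_null = n * math.log(var_null)                    # no free parameters
    return math.exp((bic_null - bic_alt) / 2)

if __name__ == "__main__":
    # Toy per-voxel contrast values for a handful of voxels (illustrative only).
    voxels = [[0.2, 0.5, 0.1, 0.4], [0.0, -0.1, 0.05, -0.02], [1.1, 0.9, 1.3, 1.0]]
    with Pool(processes=2) as pool:
        bfs = pool.map(voxel_bayes_factor, voxels)
    print([round(bf, 2) for bf in bfs])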


2020 ◽  
Vol 13 (3) ◽  
pp. 370-380
Author(s):  
Shilpa Gupta ◽  
Gobind Lal Pahuja

Background: VLSI technology advancements have resulted in requirements for high computational power, which can be met by implementing multiple processors in parallel. These multiple processors have to communicate with their memory modules through Interconnection Networks (IN). Multistage Interconnection Networks (MIN) are used as INs, as they provide efficient computing at low cost. Objective: The objective of the study is to introduce a new reliable MIN, named the Shuffle Exchange Gamma Interconnection Network Minus (SEGIN-Minus), which provides reliability and fault tolerance with fewer stages. Methods: A MUX at the input terminal and a DEMUX at the output terminal of SEGIN have been employed, with a reduction of one intermediate stage. Fault tolerance has been introduced in the form of disjoint paths formed between each source-destination node pair. Hence, reliability has been improved. Results: Terminal, Broadcast, and Network Reliability have been evaluated using Reliability Block Diagrams for each source-destination node pair. The results show higher reliability values for the newly proposed network. The cost analysis shows that the new SEGIN-Minus is a cheaper network than SEGIN. Conclusion: SEGIN-Minus has better reliability and fault tolerance than the previously proposed SEGIN.
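The effect of disjoint paths on terminal reliability can be illustrated with a small sketch (assumed element reliability and path lengths, not the paper's exact reliability block diagrams): each path is a series of switching elements, and two disjoint paths act in parallel.

def series(reliabilities):
    """Reliability of components in series (all must work)."""
    r = 1.0
    for x in reliabilities:
        r *= x
    return r

def parallel(path_reliabilities):
    """Reliability of disjoint paths in parallel (at least one must work)."""
    q = 1.0
    for r in path_reliabilities:
        q *= (1.0 - r)
    return 1.0 - q

r_se = 0.95                        # assumed reliability of one switching element
path_a = series([r_se] * 3)        # assumed 3-stage primary path
path_b = series([r_se] * 3)        # disjoint backup path of the same length
print("single path:", round(path_a, 4))
print("terminal reliability with 2 disjoint paths:", round(parallel([path_a, path_b]), 4))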


Author(s):  
Ridha Mehalaine ◽  
Fateh Boutekkouk

The objective of this work is to present a new heuristic for solving the problem of fault tolerance in real-time distributed embedded systems. The proposed idea is to model the distributed embedded architecture by drawing inspiration from the renin-angiotensin-aldosterone system (RAAS), a biological system that plays a major role in the pathophysiology of the cardiovascular system in terms of pressure regulation and vascular, cardiac, and nephrological remodeling. The proposed heuristic deals with uncertain information on a set of periodic tasks that run on multiple processors and satisfies certain temporal and energy constraints, from which the scheduling and distribution of these tasks across the different processors are derived. In order to respect the energy constraints, this article proposes introducing energy consumption at the dynamic task scheduling level by using the dynamic voltage scaling (DVS) technique. The authors have found that introducing a detection/prevention mechanism against potential errors in the proposed algorithm is essential for good results.
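A minimal sketch of the DVS idea invoked above, not the authors' heuristic: a periodic task is slowed into its slack by lowering voltage and frequency, and dynamic energy is approximated as E ~ C_eff * V^2 * cycles; the operating points, task parameters, and capacitance value are assumptions.

def dynamic_energy(c_eff, voltage, cycles):
    """Approximate dynamic switching energy for a fixed amount of work."""
    return c_eff * voltage ** 2 * cycles

# Assumed operating points (voltage in volts, frequency in MHz) and task parameters.
HIGH = ("high", 1.2, 1000)
LOW = ("low", 0.9, 600)
cycles = 3_000_000            # work of one periodic task instance
deadline_ms = 6.0
c_eff = 1e-9                  # assumed effective switched capacitance

for name, volt, f_mhz in (HIGH, LOW):
    exec_ms = cycles / (f_mhz * 1e3)          # f_mhz * 1e3 cycles per millisecond
    feasible = exec_ms <= deadline_ms         # deadline still met at this speed?
    energy = dynamic_energy(c_eff, volt, cycles)
    print(f"{name}: {exec_ms:.1f} ms, feasible={feasible}, energy={energy:.2e} J")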


Author(s):  
Shilpa Gupta ◽  
Gobind Lal Pahuja

Background: VLSI technology advancements have resulted in requirements for high computational power, which can be met by implementing multiple processors in parallel. These multiple processors have to communicate with their memory modules through Interconnection Networks (IN). Multistage Interconnection Networks (MIN) are used as INs, as they provide efficient computing at low cost. Objective: The objective of the study is to introduce a new reliable Gamma MIN, named the Modified Gamma Interconnection Network (MGIN), which provides reliability and fault tolerance with fewer stages of switching elements. Method: Switching Elements (SE) of larger size, i.e., 2×3/3×2, have been employed at the input/output stages instead of 1×3/3×1 SEs, with a reduction of one intermediate stage. Fault tolerance has been introduced in the form of disjoint paths formed between each source-destination node pair. Hence, reliability has been improved. Results: Terminal, Broadcast, and Network Reliability have been evaluated using Reliability Block Diagrams for each source-destination node pair. The results show higher reliability values for the newly proposed network. The cost analysis shows that the new MGIN is a cheaper network than other Gamma variants. Conclusion: MGIN has better reliability and fault tolerance than previously proposed Gamma MINs.


2019 ◽  
Vol 8 (06) ◽  
pp. 24697-24768
Author(s):  
Wafaa Ahmad Bazzi

Image processing contributes to many of today's technological advancements. It is widely used by development institutes and serves as a source of important data. One of the main considerations in this process is the length of time spent applying different routines to these images. Since time is a valuable criterion for the efficiency of such systems, a better way of managing these images is needed. Given this situation, the idea of integrating a number of computers to perform image manipulation, i.e., parallel computing, is considered.
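A minimal sketch of the parallel-computing idea described above, with an assumed image size and a simple per-pixel threshold standing in for the image routine: the image is split into row bands that are processed by separate worker processes.

from multiprocessing import Pool

def process_band(band):
    """Apply a per-pixel routine (here, a binary threshold) to one row band."""
    return [[255 if px > 127 else 0 for px in row] for row in band]

if __name__ == "__main__":
    width, height, workers = 512, 512, 4
    # Synthetic grayscale "image" as a 2-D list of pixel values (illustrative only).
    image = [[(x * y) % 256 for x in range(width)] for y in range(height)]
    bands = [image[i::workers] for i in range(workers)]   # interleaved row bands
    with Pool(processes=workers) as pool:
        processed = pool.map(process_band, bands)
    print("bands processed:", len(processed),
          "rows total:", sum(len(b) for b in processed))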

