An Efficient Evolutionary Task Scheduling/Binding Framework for Reconfigurable Systems

2016 ◽  
Vol 2016 ◽  
pp. 1-24
Author(s):  
A. Al-Wattar ◽  
S. Areibi ◽  
G. Grewal

Several embedded application domains for reconfigurable systems, such as image processing, wearable computing, and network processors, combine frequently changing workloads with high performance demands. Time multiplexing of reconfigurable hardware resources raises a number of new issues, ranging from run-time systems to complex programming models, which usually form a reconfigurable operating system (ROS). In this paper, an efficient ROS framework that aids the designer from the early design stages all the way to the actual hardware implementation is proposed and implemented. An efficient reconfigurable platform is implemented along with novel placement/scheduling algorithms. The proposed algorithms reuse hardware tasks to reduce reconfiguration overhead, migrate tasks between software and hardware to utilize resources efficiently, and reduce computation time. A supporting framework for efficiently mapping execution units to task graphs in a run-time reconfigurable system is also designed. The framework utilizes an Island Based Genetic Algorithm flow that optimizes several objectives, including performance, area, and power consumption. The proposed Island Based GA framework achieves on average a 55.2% improvement over a single-GA implementation and an 80.7% improvement over a baseline random allocation and binding approach.
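The island-model idea behind such a framework can be illustrated with a minimal sketch in plain Python. The binding problem, cost table, and all parameters below are invented for illustration (the actual framework optimizes performance, area, and power together): several subpopulations evolve in isolation, and the best individual periodically migrates to a neighbouring island in a ring.

```python
import random

def island_ga(fitness, n_genes, gene_vals, n_islands=4, pop=20,
              gens=50, migrate_every=10, seed=0):
    """Island-model GA: isolated subpopulations with periodic migration
    of each island's best individual to the next island (ring topology)."""
    rng = random.Random(seed)
    islands = [[[rng.choice(gene_vals) for _ in range(n_genes)]
                for _ in range(pop)] for _ in range(n_islands)]
    for g in range(gens):
        for isl in islands:
            isl.sort(key=fitness)
            # elitist truncation selection + uniform crossover + mutation
            parents = isl[:pop // 2]
            children = []
            while len(parents) + len(children) < pop:
                a, b = rng.sample(parents, 2)
                child = [x if rng.random() < 0.5 else y for x, y in zip(a, b)]
                if rng.random() < 0.3:
                    child[rng.randrange(n_genes)] = rng.choice(gene_vals)
                children.append(child)
            isl[:] = parents + children
        if g % migrate_every == 0:
            # migration: best of island i replaces the worst of island i+1
            for i, isl in enumerate(islands):
                nxt = islands[(i + 1) % n_islands]
                nxt.sort(key=fitness)
                nxt[-1] = min(isl, key=fitness)[:]
    return min((min(isl, key=fitness) for isl in islands), key=fitness)

# toy binding problem: assign 6 tasks to 3 hypothetical execution units,
# minimising a made-up per-(task, unit) cost
cost = [[3, 1, 2], [2, 3, 1], [1, 2, 3], [3, 2, 1], [2, 1, 3], [1, 3, 2]]
fit = lambda ind: sum(cost[t][u] for t, u in enumerate(ind))
best = island_ga(fit, n_genes=6, gene_vals=[0, 1, 2])
```

Keeping the islands isolated between migrations preserves diversity, which is why an island model typically outperforms one large single-population GA on multi-objective binding problems.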

Author(s):  
Ahmed Al-Wattar ◽  
Shawki Areibi ◽  
Gary Grewal

Several embedded application domains for reconfigurable systems, such as image processing, wearable computing, and network processors, combine frequently changing workloads with high performance demands. Time multiplexing of reconfigurable hardware resources raises a number of new issues, ranging from run-time systems to complex programming models, which usually form a Reconfigurable hardware Operating System (ROS). The operating system performs online task scheduling and handles resource management. There are many challenges in adaptive computing and dynamic reconfigurable systems. One major understudied challenge is estimating the resources an application requires, in terms of soft cores, Programmable Reconfigurable Regions (PRRs), and the appropriate communication infrastructure, and predicting a near-optimal layout and floor-plan of the reconfigurable logic fabric. Some of these issues are specific to the application being designed, while others are more general and relate to the underlying run-time environment. Static resource allocation for Run-Time Reconfiguration (RTR) often leads to inferior and unacceptable results. In this paper, we present a novel adaptive and dynamic methodology, based on a machine learning approach, for predicting and estimating the necessary resources for an application from past historical information. An important feature of the proposed methodology is that the system is able to learn and generalize and is therefore expected to improve its accuracy over time. The goal of the entire process is to extract useful hidden knowledge from the data: a prediction and estimate of the resources required by an unknown, previously unseen application.
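As a rough illustration of history-based resource estimation (not the paper's actual model), a k-nearest-neighbour sketch over hand-picked application features might look like the following; the feature choices and all numbers are invented:

```python
import math

# hypothetical history: each past application is summarised by a feature
# vector (task count, average task size, task-graph depth) together with
# the number of PRRs it actually needed -- illustrative values only
history = [
    ((4, 120.0, 2), 2),
    ((8, 300.0, 3), 4),
    ((16, 450.0, 5), 6),
    ((6, 200.0, 2), 3),
]

def predict_prrs(features, k=2):
    """k-nearest-neighbour estimate of PRR demand for an unseen application:
    average the PRR counts of the k most similar past applications."""
    nearest = sorted(history, key=lambda h: math.dist(h[0], features))[:k]
    return round(sum(r for _, r in nearest) / k)
```

A learning-based estimator like this improves as more applications are added to the history, which mirrors the paper's point that the system generalizes and gains accuracy over time.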


Author(s):  
Jia Xu

In most embedded, real-time applications, processes need to satisfy various important constraints and dependencies, such as release times, offsets, precedence relations, and exclusion relations. Embedded, real-time systems with high assurance requirements often must execute many different types of processes with such constraints and dependencies. Some of the processes may be periodic and some may be asynchronous. Some may have hard deadlines and some may have soft deadlines. For some of the processes, especially the hard real-time processes, complete knowledge about their characteristics can and must be acquired before run-time. For other processes, prior knowledge of their worst-case computation time and their data requirements may not be available. It is important for many embedded real-time systems to be able to simultaneously satisfy as many important constraints and dependencies as possible, for as many different types of processes as possible. In this paper, we discuss which types of important constraints and dependencies can be satisfied among which types of processes. We also present a method which guarantees that every process, whether periodic or asynchronous, and whether it has a hard or a soft deadline, will be completed before predetermined time limits, provided its characteristics are known before run-time, while simultaneously satisfying many important constraints and dependencies with other processes.
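As a minimal illustration of scheduling under release times, deadlines, and precedence constraints (a simple greedy earliest-deadline list scheduler on one processor, not the paper's method; the process set is hypothetical):

```python
# hypothetical process set: release time, computation time, deadline,
# and the set of predecessors that must complete first
procs = {
    "A": dict(release=0, comp=2, deadline=5,  preds=set()),
    "B": dict(release=0, comp=3, deadline=10, preds={"A"}),
    "C": dict(release=1, comp=1, deadline=4,  preds=set()),
}

def schedule(procs):
    """Among the ready processes (released, all predecessors done), run the
    one with the earliest deadline; return (process, completion time) pairs
    or raise if a deadline is missed."""
    done, t, out = {}, 0, []
    remaining = dict(procs)
    while remaining:
        ready = [n for n, p in remaining.items()
                 if p["preds"] <= done.keys() and p["release"] <= t]
        if not ready:
            t += 1  # idle until the next release or predecessor completion
            continue
        n = min(ready, key=lambda n: remaining[n]["deadline"])
        t += remaining[n]["comp"]
        if t > remaining[n]["deadline"]:
            raise ValueError(f"{n} misses its deadline")
        done[n] = t
        out.append((n, t))
        del remaining[n]
    return out
```

Here A runs first (C is not yet released at time 0), then C pre-empts B's turn by its earlier deadline, and B finishes last; exclusion relations and offsets, which the paper also handles, are omitted from this sketch.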


Energies ◽  
2021 ◽  
Vol 14 (4) ◽  
pp. 1074
Author(s):  
Raul Rotar ◽  
Sorin Liviu Jurj ◽  
Flavius Opritoiu ◽  
Mircea Vladutiu

This paper presents a mathematical approach for determining the reliability of solar tracking systems based on three fault-coverage-aware metrics, which use system error data from hardware, software, and in-circuit testing (ICT) techniques to calculate a solar test factor (STF). Using Euler's number, the solar reliability factor (SRF) is computed to define the robustness and availability of modern, high-performance solar tracking systems. Experimental cases run in the Mathcad software suite and the Python programming environment show that the fault-coverage-aware metrics greatly change the test and reliability factor curves of solar tracking systems, while requiring significantly fewer calculation steps and less computation time.


Author(s):  
Vinay Sriram ◽  
David Kearney

High speed infrared (IR) scene simulation is used extensively in defense and homeland security to test the sensitivity of IR cameras and the accuracy of the IR threat detection and tracking algorithms commonly used in IR missile approach warning systems (MAWS). A typical MAWS requires an input scene rate of over 100 scenes/second. On a Pentium 4 (2.8 GHz) dual-core processor, simulating a single IR scene that accounts for the effects of atmospheric turbulence, refraction, optical blurring, and charge-coupled device (CCD) camera electronic noise typically takes 32 minutes [7]. Thus, in IR scene simulation, the processing power of modern computers is a limiting factor. In this paper we report our research on accelerating IR scene simulation using high performance reconfigurable computing. We constructed a multi-Field Programmable Gate Array (FPGA) hardware acceleration platform and accelerated a key computationally intensive IR algorithm on it, reducing the computation time of IR scene simulation by over 36%. This research serves as a case study in accelerating large-scale defense simulations on a high performance multi-FPGA reconfigurable computer.


2018 ◽  
Vol 7 (12) ◽  
pp. 467 ◽  
Author(s):  
Mengyu Ma ◽  
Ye Wu ◽  
Wenze Luo ◽  
Luo Chen ◽  
Jun Li ◽  
...  

Buffer analysis, a fundamental function in a geographic information system (GIS), identifies the areas within a given distance of the surrounding geographic features. Real-time buffer analysis for large-scale spatial data remains a challenging problem, since the computational cost of conventional data-oriented methods grows rapidly with increasing data volume. In this paper, we introduce HiBuffer, a visualization-oriented model for real-time buffer analysis. An efficient buffer generation method is proposed, which introduces spatial indexes and a corresponding query strategy. Buffer results are organized into a tile-pyramid structure to enable stepless zooming. Moreover, a fully optimized hybrid parallel processing architecture is proposed for real-time buffer analysis of large-scale spatial data. Experiments using real-world datasets show that our approach can reduce computation time by up to several orders of magnitude while preserving superior visualization effects. Additional experiments analyzing the influence of spatial data density, buffer radius, and request rate on HiBuffer performance demonstrate its adaptability and stability. Parallel scalability tests show that HiBuffer achieves high parallel acceleration, and the experimental results verify that HiBuffer is capable of handling 10-million-scale data.
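The index-and-query idea behind a visualization-oriented model can be sketched as follows: instead of generating buffer geometry up front, index the input features in a uniform grid and answer "is this location inside any buffer?" on demand. This toy version handles only point features; HiBuffer's actual indexes, tile pyramid, and parallel architecture are far more sophisticated.

```python
import math
from collections import defaultdict

def build_grid(points, cell):
    """Bucket each input point into a uniform grid of the given cell size."""
    grid = defaultdict(list)
    for x, y in points:
        grid[(int(x // cell), int(y // cell))].append((x, y))
    return grid

def in_buffer(grid, cell, q, radius):
    """Answer a per-location buffer query by checking only the grid cells
    that could contain a point within `radius` of q."""
    qx, qy = q
    r = int(radius // cell) + 1
    cx, cy = int(qx // cell), int(qy // cell)
    for i in range(cx - r, cx + r + 1):
        for j in range(cy - r, cy + r + 1):
            for px, py in grid.get((i, j), ()):
                if math.hypot(px - qx, py - qy) <= radius:
                    return True
    return False

points = [(0.0, 0.0), (10.0, 10.0)]
grid = build_grid(points, 5)
```

Because each query touches only a constant number of cells, rendering a tile costs time proportional to its pixel count rather than to the total data volume, which is the essence of trading data-oriented computation for visualization-oriented queries.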


2004 ◽  
Vol 1 (1) ◽  
pp. 66-68
Author(s):  
Minehisa Imazato ◽  
Hayato Hommura ◽  
Go Sudo ◽  
Kenji Katori ◽  
Koichi Tanaka

The micro-diffusion layer for a direct methanol fuel cell (DMFC) consists of carbon and a hydrophobic resin used as a binder. The function of the micro-diffusion layer on carbon paper is not only to support the catalyst layer and conduct electricity, but also to maintain a stable mixture of gas and liquid. The amount of hydrophobic resin binder in the micro-diffusion layer is therefore a critical parameter. It is normally kept below 50 wt%, but our investigation of this parameter found another high-performance region around 80 wt%.

