Investigating The Efficiency Of The VPR And COFFE Area Models In Predicting The Layout Area Of FPGA Lookup Tables

2021 ◽  
Author(s):  
Mousa Al-Qawasmi

A single tile in a mesh-based FPGA includes both the routing block and the logic block. The area estimate of a tile in an FPGA is used to determine the physical length of an FPGA’s routing segments. An estimate of the physical length of the routing segments is needed in order to accurately assess the performance of a proposed FPGA architecture. The VPR (Versatile Place and Route) and the COFFE (Circuit Optimization for FPGA Exploration) tools are widely used mesh-based FPGA exploration environments. These tools map, place, and route benchmark circuits on FPGA architectures. Subsequently, based on area and delay measurements, the best architectural parameters of an FPGA are decided. The area models of the VPR and COFFE tools take only transistor size as input to estimate the area of a circuit. Realistically, the layout area of a circuit depends on both the transistor size and the number of metal layers that are available to route the circuit. This work measures the effect of the number of metal layers that are available for routing on FPGA layout area through a series of carefully laid out 4-LUTs (4-input Lookup Tables). Based on measured results, a correction factor for the COFFE area equation is determined. The correction factor is a function of both the transistor drive strength and the number of metal layers that are available for routing. Consequently, a new area estimation equation, based on the COFFE area model, is derived. The proposed area equation takes into consideration the effect of both the transistor drive strength and the number of metal layers that are available for routing on layout area. The area prediction error of the proposed area equation is significantly less than the area prediction errors of the VPR and COFFE area models.
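The structure described above, a base area model driven by transistor size alone plus a multiplicative correction for metal-layer availability, can be sketched as follows. Both the base-model coefficients and the form of the correction factor below are illustrative placeholders, not the values fitted in this work:

```python
import math

def base_area(drive_strength: float) -> float:
    """COFFE-style active-area estimate from transistor size alone.
    The coefficients here are placeholders chosen for illustration."""
    x = drive_strength
    return 0.447 + 0.128 * x + 0.391 * math.sqrt(x)

def correction_factor(drive_strength: float, metal_layers: int) -> float:
    """Hypothetical correction factor: wiring-dominated layouts shrink
    as more metal layers become available for routing, and the effect
    weakens as transistors (active area) grow."""
    return 1.0 + 0.8 / metal_layers - 0.02 * drive_strength

def corrected_area(drive_strength: float, metal_layers: int) -> float:
    """Proposed-style estimate: base (transistor-size) model scaled by
    a factor in drive strength and available routing layers."""
    return base_area(drive_strength) * correction_factor(drive_strength, metal_layers)
```

With this shape, two layouts of identical transistor sizing get different area estimates when one has more routing layers available, which is exactly the dependency the VPR and COFFE models omit.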



Author(s):  
Mauro Venturini ◽  
Nicola Puggina

The performance of gas turbines degrades over time and, as a consequence, a decrease in gas turbine performance parameters also occurs, so that they may fall below a given threshold value. Therefore, corrective maintenance actions are required to bring the system back to an acceptable operating condition. In today’s competitive market, the prognosis of the time evolution of system performance is also recommended, in such a manner as to take appropriate action before any serious malfunctioning has occurred and, as a consequence, to improve system reliability and availability. Successful prognostics should be as accurate as possible, because false alarms cause unnecessary maintenance and non-profitable stops. For these reasons, a prognostic methodology, developed by the authors, is applied in this paper to assess its prediction reliability for several degradation scenarios typical of gas turbine performance deterioration. The methodology makes use of the Monte Carlo statistical method to provide, on the basis of the recordings of past behavior, a prediction of future availability, i.e., the probability that the considered machine or component can be found in the operational state at a given time in the future. The analyses carried out in this paper aim to assess the influence of the degradation scenario on methodology prediction reliability, as a function of a user-defined threshold and minimum value allowed for the parameter under consideration. A technique is also presented and discussed, in order to improve methodology prediction reliability by means of a correction factor applied to the time points used for methodology calibration. The results presented in this paper show that, for all the considered degradation scenarios, the prediction error is lower than 4% (in most cases, it is even lower than 2%) if the availability is estimated for the next trend, while it is not higher than 12% if the availability is estimated five trends ahead. The application of a proper correction factor allows the prediction errors after five trends to be reduced to approximately 5%.
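The core idea, estimating availability as the fraction of Monte Carlo runs in which a degrading performance parameter is still above its threshold at a future time, can be sketched as follows. The linear-degradation model, noise term, and parameter names are illustrative assumptions, not the authors' calibrated model:

```python
import random

def simulate_availability(p0, threshold, degradation_rate, noise_sd,
                          horizon, n_runs=10000, seed=0):
    """Monte Carlo availability estimate: for each future time step,
    the fraction of simulated trajectories in which the performance
    parameter remains at or above the user-defined threshold.
    Degradation is modeled as a simple linear drift plus Gaussian
    scatter (an assumption for illustration only)."""
    rng = random.Random(seed)
    above = [0] * horizon
    for _ in range(n_runs):
        for t in range(horizon):
            p = p0 - degradation_rate * (t + 1) + rng.gauss(0.0, noise_sd)
            if p >= threshold:
                above[t] += 1
    return [count / n_runs for count in above]
```

As expected for a monotonically degrading parameter, the estimated availability decays toward zero as the prediction horizon grows, which is why errors are larger five trends ahead than one trend ahead.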


2021 ◽  
Author(s):  
Nafiul Hyder

This work investigates the minimum layout area of multiplexers, a fundamental building block of Field-Programmable Gate Arrays (FPGAs). In particular, we investigate the minimum layout area of 4:1 multiplexers, which are the building blocks of 2-input Look-Up Tables (LUTs) and can be used recursively to build higher-order LUTs and multiplexer-based routing switches. We observe that previous work routes all four data inputs of 4:1 multiplexers on a single metal layer, resulting in a wiring-area-dominated layout. In this work, we explore the various transistor-level placement options for implementing the 4:1 multiplexers while routing multiplexer data inputs through multiple metal layers in order to reduce wiring area. Feasible placement options with their corresponding data input distributions are then routed using an automated maze router, and the routing results are further refined manually. Through this systematic approach, we identify three 4:1 multiplexer layouts that are 30% to 35% smaller than the previously proposed layouts. In particular, the two larger layouts of the three are only 33% to 45% larger than the layout area predicted by the two widely used active area models from previous FPGA architectural studies, and the smallest of the three layouts is 1% to 11% larger than the layout area predicted by these models.
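The relationship described above, a 2-input LUT built from a 4:1 multiplexer whose data inputs are the configuration bits, can be illustrated with a behavioral sketch (logic only, not the transistor-level layouts studied in this work):

```python
def mux4(d0, d1, d2, d3, s1, s0):
    """Behavioral 4:1 multiplexer: two select bits (s1 is the MSB)
    choose one of four data inputs."""
    return (d0, d1, d2, d3)[(s1 << 1) | s0]

def lut2(config, a, b):
    """A 2-input LUT is a 4:1 mux whose data inputs are the four
    configuration (truth-table) bits and whose select lines are the
    logic inputs a and b."""
    return mux4(config[0], config[1], config[2], config[3], a, b)

# XOR truth table as a 2-LUT configuration: f(a, b) = a ^ b
XOR = (0, 1, 1, 0)
```

Because higher-order LUTs are built by feeding 4:1 mux outputs into further 4:1 muxes, any per-mux area saving compounds across the whole LUT and routing fabric.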






2020 ◽  
Vol 43 ◽  
Author(s):  
Kellen Mrkva ◽  
Luca Cian ◽  
Leaf Van Boven

Abstract Gilead et al. present a rich account of abstraction. Though the account describes several elements that influence mental representation, it is also worth delineating how feelings, such as fluency and emotion, influence mental simulation. Additionally, though past experience can sometimes make simulations more accurate and worthwhile (as Gilead et al. suggest), many systematic prediction errors persist despite substantial experience.


Author(s):  
J. T. Woodward ◽  
J. A. N. Zasadzinski

The Scanning Tunneling Microscope (STM) offers exciting new ways of imaging surfaces of biological or organic materials with resolution to the sub-molecular scale. Rigid, conductive surfaces can readily be imaged with the STM at atomic resolution. Unfortunately, organic surfaces are neither sufficiently conductive nor sufficiently rigid to be examined directly with the STM. At present, nonconductive surfaces can be examined in two ways: 1) using the AFM, which measures the deflection of a weak spring as it is dragged across the surface, or 2) coating or replicating non-conductive surfaces with metal layers so as to make them conductive, then imaging with the STM. However, we have found that the conventional freeze-fracture technique, while extremely useful for imaging bulk organic materials with STM, must be modified considerably for optimal use in the STM.


Author(s):  
Roberto Limongi ◽  
Angélica M. Silva

Abstract. The Sternberg short-term memory scanning task has been used to unveil cognitive operations involved in time perception. Participants produce time intervals during the task, and the researcher explores how task performance affects interval production, where time estimation error is the dependent variable of interest. The perspective of predictive behavior regards time estimation error as a temporal prediction error (PE), an independent variable that controls cognition, behavior, and learning. Based on this perspective, we investigated whether temporal PEs affect short-term memory scanning. Participants performed temporal predictions while they maintained information in memory. Model inference revealed that PEs affected memory scanning response time independently of the memory-set size effect. We discuss the results within the context of formal and mechanistic models of short-term memory scanning and predictive coding, a Bayes-based theory of brain function. We hypothesize that our finding could be associated with weak frontostriatal connections and weak striatal activity.


2006 ◽  
Author(s):  
Daniel Lafond ◽  
Yves Lacouture ◽  
Guy Mineau
