VLSI Design Course with Commercial EDA Tools to Meet Industry Demand – From Logic Synthesis to Physical Design

Author(s):  
Siu Hong Loh ◽  
Ing Ming Tan ◽  
Jia Jia Sim
2013 ◽  
Author(s):  
Παύλος Ματθαιάκης

The number of transistors per chip increases by 58% per year, while designer productivity increases by only 21% per year. An ever-larger number of design and verification engineers is therefore required to tape out a chip in the same amount of time. To close this design productivity gap, the abstraction level must be raised so that productivity grows faster than these rates. For instance, productivity increased tenfold in the 1980s, when state-of-the-art design practice moved from stick diagrams to gate-level design, and by another factor of ten in the 1990s with the move to RTL design. Behavioral modeling has more recently extended productivity by a further factor of five. In behavioral modeling, the control is decoupled from the datapath and described separately by HDL structures that correspond to monolithic FSMs, raising the abstraction level from RTL to FSMs. The underlying EDA tools extract, synthesize, and verify these monolithic FSMs with algorithms operating at this higher level of abstraction. For instance, state minimization, once carried out by the engineers themselves, is now performed automatically by the EDA tools, improving the quality of results, design time, and verifiability. Although a monolithic FSM is a sufficiently powerful formalism for describing sequential circuits, it cannot model concurrency without state explosion, and interacting-FSM models have so far lacked the formal rigor to express the synchronizing interactions between different FSMs. The event-based Petri net (PTNet) model can capture both concurrency and choice within a single model, but it lacks a polynomial-time flow to implementation, since current methods of exposing the event state space require a potentially exponential number of states. This work introduces a novel formalism for interacting FSMs, Multiple Synchronized FSMs (MSFSMs): a compact interacting-FSM model that is potentially implementable with any existing monolithic FSM implementation method. MSFSMs efficiently describe concurrent control systems while also serving as an intermediate representation, either for synthesizing specifications described as PTNets with FSM-based flows or for verifying concurrency-related properties of systems described as FSMs with PTNet-based algorithms. The PTNet-to-MSFSM and MSFSM-to-interacting-FSM transformation algorithms are proved to be tractable, yielding efficient PTNet synthesis and interacting-FSM verification flows that exploit MSFSMs without exhibiting state explosion. Furthermore, novel and efficient algorithms introduced at the MSFSM level optimize the control specifications by exploiting inter-FSM communication. Experimental results indicate that PTNets can indeed be transformed into synthesizable FSMs via MSFSMs without state explosion: a large set of concurrent specifications was transformed to MSFSMs in less than one second each, whereas tools that generate the full state space needed days of execution time merely to produce the specification's state graph. The logic synthesis framework developed in this work, Expose, approaches the quality of results of logic synthesis tools that generate the exponentially large state space of the specifications, while approaching the execution time of direct-mapping methodologies.
Concurrent specifications that could previously only be implemented through direct mapping, because the execution time for full state-space exploration is prohibitive, can now be synthesized with Expose. Our results also show that the MSFSM-based heuristic optimization algorithms drastically and predictably improve the area and performance of the implementation, as they benefit from the confluence between MSFSMs and the state space. By assembling a synthesis flow out of these heuristic optimizations, overall area and performance gains of 80% and 35%, respectively, were obtained.
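As a rough intuition for the synchronized-FSM idea, the Python sketch below (illustrative semantics only, not the MSFSM formalism or the Expose tool) runs two small controllers that advance independently on local events but must rendezvous on a shared synchronization event, so their joint behavior is coordinated without building a product state machine:

    # Illustrative sketch of two synchronized FSMs (assumed semantics, not the
    # MSFSM formalism itself). Each machine advances on local events, but a
    # shared synchronization event may only fire when every machine that uses
    # it is ready, so the controllers coordinate without a product FSM.

    class SyncFSM:
        def __init__(self, name, transitions, start):
            self.name = name
            self.transitions = transitions  # {(state, event): next_state}
            self.state = start

        def uses(self, event):
            return any(e == event for (_, e) in self.transitions)

        def ready(self, event):
            return (self.state, event) in self.transitions

        def fire(self, event):
            self.state = self.transitions[(self.state, event)]

    def step(fsms, event, sync_events):
        listeners = [m for m in fsms if m.ready(event)]
        if event in sync_events:
            participants = [m for m in fsms if m.uses(event)]
            if len(listeners) != len(participants):
                return False  # rendezvous blocked: some participant is not ready
        for m in listeners:
            m.fire(event)
        return bool(listeners)

    # Two small controllers that rendezvous on the shared event 'sync'.
    a = SyncFSM("A", {("idle", "req"): "wait", ("wait", "sync"): "idle"}, "idle")
    b = SyncFSM("B", {("busy", "done"): "ready", ("ready", "sync"): "busy"}, "busy")

    for ev in ("req", "done", "sync"):
        step([a, b], ev, sync_events={"sync"})
    print(a.state, b.state)  # both machines completed the rendezvous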


1992 ◽  
Vol 02 (01) ◽  
pp. 1-26 ◽  
Author(s):  
LECH JÓŹWIAK

VLSI circuit design is a “trial and error” process that consists of solving a number of design problems. Optimal state assignment is one of the most important problems in logic synthesis for sequential machines: it consists of choosing a binary representation for the symbolic internal states of a sequential machine so that the resulting logic is optimal for a given objective. The problem belongs to the most complex computational problems in VLSI design; it is NP-hard, and in the strict sense it has never been solved except by exhaustive search, which is impossible for large machines even by computer. A structural heuristic approach uses specific knowledge about the structure of a given problem to reduce the search space to a manageable size while maintaining high-quality solutions. Using the state assignment problem as an example, we determined the importance of the structural heuristic approach in CAD for VLSI and showed how to search for suitable heuristics. We discussed a new heuristic method for state assignment, concentrating on heuristic aspects such as the solution space, the generation procedure and its operators, and the evaluation functions. We provided experimental results showing that the new method is very efficient. Structural heuristic search can be highly efficient; its efficiency is limited more by the capacity of the human brain to think heuristically than by the complexity of the problem itself.
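For intuition only, the sketch below implements a generic adjacency-driven greedy encoding (a textbook-style heuristic, not the structural method described in the paper): states linked by frequent transitions are given codes at small Hamming distance, a common proxy for simpler next-state logic.

    # Illustrative greedy state-assignment heuristic (generic adjacency-driven
    # encoding for intuition only, not the paper's structural heuristic).

    from itertools import product

    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))

    def assign_states(states, edges, bits):
        """edges maps (state, state) pairs to transition frequencies."""
        codes = ["".join(c) for c in product("01", repeat=bits)]

        def weight(s):
            return sum(w for (u, v), w in edges.items() if s in (u, v))

        assignment = {}
        for s in sorted(states, key=weight, reverse=True):  # heavy states first
            best, best_cost = None, None
            for code in codes:
                if code in assignment.values():
                    continue
                cost = 0
                for (u, v), w in edges.items():
                    if s == u and v in assignment:
                        cost += w * hamming(code, assignment[v])
                    elif s == v and u in assignment:
                        cost += w * hamming(code, assignment[u])
                if best_cost is None or cost < best_cost:
                    best, best_cost = code, cost
            assignment[s] = best
        return assignment

    # Tiny 4-state machine: the heavy IDLE-LOAD and LOAD-RUN edges end up
    # with adjacent (Hamming-distance-1) codes.
    states = ["IDLE", "LOAD", "RUN", "DONE"]
    edges = {("IDLE", "LOAD"): 5, ("LOAD", "RUN"): 5,
             ("RUN", "DONE"): 1, ("DONE", "IDLE"): 1}
    print(assign_states(states, edges, bits=2))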


2010 ◽  
Vol 09 (03) ◽  
pp. 201-214 ◽  
Author(s):  
KUNAL DAS ◽  
DEBASHIS DE

Quantum-dot cellular automata (QCA) is an emerging technology in the field of nanotechnology, and reversible logic is emerging as a promising computing paradigm with applications in low-power quantum computing and QCA-based very large scale integration (VLSI) design. In this paper we study conservative logic gates (CLGs) and reversible logic gates (RLGs) and show that they are two intersecting classes of logic families: their intersection consists of the parity-preserving reversible (PPR), or conservative reversible, logic gates (CRLGs). We propose three algorithms to find different k × k RLGs as well as CLGs, and demonstrate the two most promising proposed gates, from different categories. We compare the results with those of the existing Fredkin gate; the results show that logic synthesis using the two proposed gates is a promising step toward the low-power QCA design era. We present a parity-preserving approach to designing all possible CLGs. We also discuss a coupled majority-minority voter (MmV) in a single nanostructure, in which dual outputs are driven simultaneously. This MmV gate is used to implement n-variable symmetric functions and to test the conservative gates, since, as explained, parity must be preserved when the majority and minority outputs match both the input and the output of the CLG.
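As background for the conservative/reversible terminology, the short sketch below checks the well-known 3x3 Fredkin gate (not the gates proposed in the paper) by enumeration: the mapping is a bijection on its inputs (reversible) and preserves the number of 1s (conservative, hence also parity preserving).

    # Illustrative check of the 3x3 Fredkin gate (controlled swap).

    from itertools import product

    def fredkin(c, a, b):
        # When the control c is 1, the two targets swap; otherwise pass through.
        return (c, b, a) if c else (c, a, b)

    inputs = list(product((0, 1), repeat=3))
    outputs = [fredkin(*v) for v in inputs]

    assert len(set(outputs)) == len(inputs)                        # reversible
    assert all(sum(i) == sum(o) for i, o in zip(inputs, outputs))  # conservative
    print("The Fredkin gate is reversible and parity preserving.")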


VLSI Design ◽  
1997 ◽  
Vol 5 (2) ◽  
pp. 111-124 ◽  
Author(s):  
Massoud Pedram ◽  
Narasimha Bhat ◽  
Ernest S. Kuh

Interconnect contributes significantly to the area and speed of today's circuits, and the technological trend toward smaller and faster gates will make its effects even more substantial, so interconnect optimization must be performed during all phases of design. The premise of this paper is that by increasing the interaction between logic synthesis and physical design, circuits with smaller area, shorter interconnection length, and improved performance and routability can be obtained compared with performing the two processes separately. In particular, this paper describes an integrated approach to technology mapping and physical design that finds solutions in both domains of design representation simultaneously and interactively. The two processes are performed in lockstep: technology mapping takes advantage of detailed information about interconnect delays and the layout cost of the various optimization alternatives, while placement is guided by the evolving logic structure and accurate path-based delay traces. Using these techniques, circuits with smaller area and higher performance have been synthesized.
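A toy numerical illustration of the paper's premise (hypothetical delay numbers, not the authors' algorithm): once placement-derived wire delay is added to intrinsic gate delay, the preferred match for a logic cone can change, which is why mapping and placement benefit from being performed in lockstep.

    # Toy example with hypothetical numbers showing interconnect-aware matching.

    def wire_delay(p, q, per_unit=0.8):
        # Simple linear delay model over Manhattan distance between placed pins.
        return per_unit * (abs(p[0] - q[0]) + abs(p[1] - q[1]))

    # Two candidate matches covering the same logic cone: intrinsic gate delay
    # plus the placed pin locations of the nets each match would create.
    candidates = {
        "one complex gate": {"gate_delay": 3.0,
                             "nets": [((0, 0), (6, 5))]},                    # one long net
        "two simple gates": {"gate_delay": 4.0,
                             "nets": [((0, 0), (1, 1)), ((1, 1), (2, 1))]},  # short nets
    }

    for name, c in candidates.items():
        total = c["gate_delay"] + sum(wire_delay(p, q) for p, q in c["nets"])
        print(f"{name}: gate delay {c['gate_delay']}, with wires {total:.1f}")

    # Gate delay alone favors the complex gate (3.0 < 4.0); with placement-based
    # wire delay included, the two simple gates win (6.4 < 11.8).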


1996 ◽  
Vol 07 (02) ◽  
pp. 223-248 ◽  
Author(s):  
GARY YEAP ◽  
ANDREAS WILD

This paper surveys the current status of research and practice in the various disciplines of low-power VLSI development. After briefly discussing the rationale for the contemporary focus on low-power design, it presents the metrics and techniques used to assess the merits of the various solutions proposed for improved energy efficiency. The requirements to be fulfilled by process technologies and device structures are reviewed, as are several promising circuit design styles and ad hoc design techniques. The impact of design automation tools is analyzed with special emphasis on physical design and logic synthesis. Various architectural trade-offs, including power management, parallelism and pipelining, synchronous versus asynchronous architectures, and dataflow transformations, are then reviewed, followed by a brief discussion of the impact of system definition, software, and algorithms on overall power efficiency. Emerging semiconductor technologies and device structures are discussed, and the paper concludes with trends and research topics for the future.
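The first-order metric behind most of the surveyed techniques is the CMOS dynamic switching power, P = alpha * C * Vdd^2 * f (switching activity times load capacitance times supply voltage squared times clock frequency). The small sketch below, with illustrative numbers, shows why supply-voltage scaling dominates the savings:

    # Dynamic switching power of a CMOS node; the numbers are illustrative only.

    def dynamic_power(alpha, c_load, vdd, freq):
        """Switching power: activity * capacitance * Vdd^2 * frequency."""
        return alpha * c_load * vdd ** 2 * freq

    base   = dynamic_power(alpha=0.2, c_load=50e-15, vdd=3.3, freq=100e6)  # ~10.9 uW
    scaled = dynamic_power(alpha=0.2, c_load=50e-15, vdd=1.8, freq=100e6)  # ~3.2 uW
    print(f"3.3 V: {base * 1e6:.1f} uW, 1.8 V: {scaled * 1e6:.1f} uW "
          f"({base / scaled:.1f}x reduction from voltage scaling alone)")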


Author(s):  
DAVID RUBY ◽  
DENNIS KIBLER

One goal of Artificial Intelligence is to develop and understand computational mechanisms for solving difficult real-world problems. Unfortunately, the domains traditionally used in general problem-solving research lack important characteristics of real-world domains, making it difficult to apply the techniques developed. Most classic AI domains require satisfying a set of Boolean constraints, whereas real-world problems require finding a solution that meets a set of Boolean constraints and also performs well on a set of real-valued constraints. In addition, most classic domains are static, while real-world domains change. In this paper we demonstrate that SteppingStone, a general learning problem solver, is capable of solving problems with these characteristics. SteppingStone heuristically decomposes a problem into simpler subproblems and then learns to deal with the interactions that arise between the subproblems. In lieu of an agreed-upon metric for problem difficulty, we choose significant problems that are difficult for both people and programs as good candidates for evaluating progress; consequently, we adopt the domain of logic synthesis from VLSI design to demonstrate SteppingStone's capabilities.

