Verification of Dual Port RAM using SystemVerilog and UVM

Author(s):  
Geethashree

The verification process plays a prominent role in the field of SoC and ASIC design. Among the several available verification methodologies, the Universal Verification Methodology (UVM) is the most advanced and is widely used in industry due to its special features. UVM provides reusable, well-structured verification components built on the SystemVerilog class library. In this work, a dual-port RAM is considered as the Design Under Test (DUT). SystemVerilog and UVM verification environments are developed to verify the DUT. Assertions and covergroup coverage are set up with the goal of achieving 100% coverage in both environments.
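As an illustration of the kind of coverage and assertion constructs such an environment would contain, the sketch below shows a covergroup and a protocol assertion for a hypothetical dual-port RAM interface; the signal names, widths, and collision policy are assumptions for illustration, not taken from the paper.

```systemverilog
// Minimal sketch of coverage and assertion constructs for a
// dual-port RAM interface; signal and parameter names are assumed.
interface dpram_if #(parameter ADDR_W = 8) (input logic clk);
  logic              we_a, we_b;      // per-port write enables
  logic [ADDR_W-1:0] addr_a, addr_b;  // per-port addresses

  // Functional coverage: exercise both ports across the address range
  // and all combinations of read/write activity.
  covergroup dpram_cg @(posedge clk);
    cp_addr_a : coverpoint addr_a { bins lo = {[0:127]}; bins hi = {[128:255]}; }
    cp_addr_b : coverpoint addr_b { bins lo = {[0:127]}; bins hi = {[128:255]}; }
    cp_we     : cross we_a, we_b;
  endgroup

  dpram_cg cg = new();

  // Example assertion: flag simultaneous writes to the same address,
  // a classic dual-port hazard (the actual policy is design-specific).
  property no_write_collision;
    @(posedge clk) (we_a && we_b) |-> (addr_a != addr_b);
  endproperty
  assert property (no_write_collision)
    else $error("write collision on address %0h", addr_a);
endinterface
```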

Author(s):  
Sridevi Chitti ◽  
P. Chandrasekhar ◽  
M. Asharani

This paper presents a standard flow by which an automated, constraint-randomized testbench environment can efficiently verify an SoC for functionality and coverage. Today, in the era of multimillion-gate ASICs, reusable intellectual property (IP), and system-on-chip (SoC) designs, verification consumes about 70% of the design effort. Automation means a machine completes a task autonomously, more quickly and with predictable results, and it requires standard processes with well-defined inputs and outputs. Using this methodology, it is possible to provide a general-purpose automation solution for verification with today's technology, and tools automating various portions of the verification process are being introduced. The case considered here is a communication-based SoC, and the paper discusses the methodology used to verify such an SoC-based environment. The Cadence efficient verification methodology libraries are explored as a solution to this problem, which can be taken as a state-of-the-art approach to verifying SoC environments. The goal of this paper is to emphasize a unique testbench for different SoCs, using efficient verification constructs implemented in SystemVerilog for SoC verification.
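A minimal sketch of the constrained-random stimulus style the abstract refers to is shown below; the transaction fields, address window, and distribution weights are illustrative assumptions, not details from the paper.

```systemverilog
// Minimal constrained-random transaction sketch; field names,
// widths, and constraints are illustrative assumptions.
class bus_txn;
  rand bit [31:0] addr;
  rand bit [31:0] data;
  rand bit        write;

  // Keep addresses inside a hypothetical peripheral window and
  // bias the mix toward writes (7:3).
  constraint c_addr  { addr inside {[32'h4000_0000 : 32'h4000_0FFF]}; }
  constraint c_write { write dist {1 := 7, 0 := 3}; }
endclass

module tb;
  initial begin
    bus_txn t = new();
    repeat (10) begin
      if (!t.randomize()) $fatal(1, "randomization failed");
      $display("txn: %s addr=%h data=%h",
               t.write ? "WR" : "RD", t.addr, t.data);
    end
  end
endmodule
```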


2015 ◽  
Vol 2015 ◽  
pp. 1-10 ◽  
Author(s):  
Sana Shuja ◽  
Sudarshan K. Srinivasan ◽  
Shaista Jabeen ◽  
Dharmakeerthi Nawarathna

Pacemakers are safety-critical devices whose faulty behaviors can cause harm or even death. Often these faulty behaviors are caused by bugs in the programs used for digital control of pacemakers. We present a formal verification methodology that can be used to check the correctness of object code programs implementing the safety-critical control functions of DDD mode pacemakers. Our methodology is based on the theory of Well-Founded Equivalence Bisimulation (WEB) refinement, in which both the formal specification and the implementation are treated as transition systems. We develop a simple and general formal specification for DDD mode pacemakers, and we develop correctness proof obligations that can be applied to validate object code programs used for pacemaker control. Using our methodology, we were able to verify a control program with millions of transitions against the simple specification with only 10 transitions. Our method also found several bugs during the verification process.
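As a sketch of what such a proof obligation looks like, the simplified well-founded stuttering condition below follows the general form found in the refinement literature (the paper's exact formulation may differ): under a refinement map $r$ from implementation states to specification states, every implementation step either matches a specification step or stutters, and a well-founded rank function rules out infinite stuttering.

```latex
% Simplified WEB-style stuttering obligation (general form from
% the refinement literature; the paper's exact formulation may differ).
\[
\forall s, s' \in S_I:\;
  s \xrightarrow{I} s' \;\Longrightarrow\;
  \bigl( r(s) \xrightarrow{S} r(s') \bigr)
  \;\lor\;
  \bigl( r(s) = r(s') \,\land\, \mathit{rank}(s') < \mathit{rank}(s) \bigr)
\]
```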


2014 ◽  
Vol 2014 (1) ◽  
pp. 000668-000672 ◽  
Author(s):  
James Kupferschmidt ◽  
Michael Girardi ◽  
Brent Duncan ◽  
Daren Whitlock

Low Temperature Cofired Ceramic (LTCC) technology can be applied in numerous functions due to a wide variety of benefits, particularly related to flexibility of applications. Controlling the LTCC shrinkage tolerances in the x, y, and z dimensions is critical during manufacturing and avoids an assortment of downstream issues that affect yields. All manufacturers of LTCC tape provide a Certificate of Analysis (COA), which contains the results of the manufacturer's shrinkage testing so that production variation can be established from lot to lot. Data from the COA are generally used as a starting point in the shrinkage predictions for manufacturing purposes; however, these data must be verified prior to initiating an LTCC build. This paper investigates validation of one manufacturer's COA data and explains how shrinkage differences can occur between the COA data and the data collected during the verification process. The tracking of these data is also presented as a means to ensure proper controls are in place, and the type and style of lamination and cofiring are shown to be significant contributors to these differences. Data are then presented in association with characterization before and after relocation of LTCC fabrication equipment. Additionally, the COA data can be incorporated into shrinkage estimates used to set up process parameters, tolerances, and a control plan.
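For context, the shrinkage factors reported on a COA are typically applied as a linear pre-scaling of the green (unfired) artwork; a generic form of this compensation (standard practice, not taken from the paper) is:

```latex
% Generic linear shrinkage compensation: the green (unfired)
% dimension is oversized so the fired part hits the target.
% s_xy is the fractional x-y shrinkage from the COA.
\[
L_{\text{green}} = \frac{L_{\text{fired, target}}}{1 - s_{xy}},
\qquad
s_{xy} = \frac{L_{\text{green}} - L_{\text{fired}}}{L_{\text{green}}}
\]
```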


Author(s):  
Janos Bodi ◽  
Alexander Ponomarev ◽  
Konstantin Mikityuk

In this paper, a reactor core mechanical analysis method is introduced to provide a tool to calculate the reactivity effect of fuel subassembly displacement in the reactor core, an important problem for reactor types such as the Sodium-cooled Fast Reactor (SFR). The presented method relies on two main steps: 1) core deformation calculation with a Computer-Aided Design (CAD) based finite element solver, and 2) static neutronic simulation of the original undeformed and deformed core models with a Monte Carlo code to quantify the reactivity effect. The technique makes it possible to accurately simulate the deformed geometry of the reactor core and to use this deformed shape model directly in the neutronic analysis. The paper includes the verification process conducted to compare the accuracy of the finite element solver against theoretical solutions for the deformation of a hexagonal subassembly; the accuracy of the neutronic calculation has also been demonstrated. Following this, validation work has been performed on the Phenix Sodium-cooled Fast Reactor based on data obtained from a previous end-of-life experimental test set-up. This procedure proved the accuracy of the presented methodology for both the verification and validation cases, giving the capability to assess the reactivity effect of a non-uniform core deformation in an SFR.
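The reactivity effect quantified in the second step is conventionally obtained from the effective multiplication factors of the two static Monte Carlo eigenvalue runs; a standard form (general reactor physics, not specific to this paper) is:

```latex
% Reactivity effect of core deformation from two static
% Monte Carlo eigenvalue calculations (nominal vs. deformed).
\[
\Delta\rho
  = \rho_{\text{def}} - \rho_{\text{nom}}
  = \frac{k_{\text{def}} - 1}{k_{\text{def}}}
  - \frac{k_{\text{nom}} - 1}{k_{\text{nom}}}
  = \frac{1}{k_{\text{nom}}} - \frac{1}{k_{\text{def}}}
\]
```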


Author(s):  
Rolf Baarholm ◽  
Ivar Fylling ◽  
Carl Trygve Stansberg ◽  
Ola Oritsland

Model tests for global design verification of floating production systems in depths beyond 1000-1500 m cannot be made directly at reasonable scales. Truncation of mooring line and riser models, software calibration, and extrapolation and transformation to full depth and full scale are required. Here, the first two of these three items are addressed. The paper emphasizes the important matters to be taken into account: the choice of proper procedures for the set-up and the interpretation, and consistent, well-documented methods, are essential. A case study with a deep-water semisubmersible is presented. In general, good agreement is found between model test results and analytical results from time-domain coupled analysis of the floater system responses.
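Such model tests are conventionally run at Froude scaling; the basic relations for a geometric scale ratio $\lambda$ (standard hydrodynamic model-testing practice, not taken from the paper) are:

```latex
% Froude scaling relations for a geometric scale ratio lambda
% (model : full scale = 1 : lambda), same fluid density assumed.
\[
\frac{L_{\text{full}}}{L_{\text{model}}} = \lambda,
\qquad
\frac{V_{\text{full}}}{V_{\text{model}}} = \sqrt{\lambda},
\qquad
\frac{T_{\text{full}}}{T_{\text{model}}} = \sqrt{\lambda},
\qquad
\frac{F_{\text{full}}}{F_{\text{model}}} = \lambda^{3}
\]
```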


Author(s):  
Tatiana Kelemenová ◽  
Ivana Koláriková ◽  
Ondrej Benedik

Urgency of the research. Several types of displacement sensor are available on the market. The displacement sensor investigated in this work is based on the optical encoder principle. The condition of a sensor changes with use, so it must be checked periodically to confirm that it is within the declared limits. Target setting. The linear displacement sensor is mounted in a comparator stand and its condition is verified using a set of length gauge blocks, which allow length etalons of various dimensions from 0.5 mm to 100 mm to be set up. The systematic errors of the grade "0" gauge block set used are very small compared with the measured dimensions and measured deviations. Actual scientific researches and issues analysis. It is necessary to check the actual status of the sensor; the verification process yields information about the maximum permissible error and about reliability. Uninvestigated parts of general matters defining. The main problem was to identify the condition of the sensor. The probability distribution of the measured values and the uncertainty balance remain uninvestigated, because subsequent research will focus on this area. The research objective. The aim is to obtain the maximum permissible error of the explored sensor, which can be expressed on the basis of the deviations of measurements made on the gauge blocks. Determining the optimal number of measurements is itself a problem, because a low number causes a large measurement uncertainty while a large number causes a high measurement cost. The statement of basic materials. Gauge blocks of grade "0", preferred mainly for calibration and verification purposes, have been used for verification of the investigated sensor. The maximum permissible error has been estimated as a mathematical model for further use. The optimal number of measurements has also been identified from analysis of the standard deviation of measurements made one hundred times on selected dimensions. Conclusions. The investigated sensor meets the manufacturer's maximum permissible error limits with a large margin, so the limits have been tightened to improve the measurement uncertainty. The sensor can be used in dimensional measurement applications, even in industrial conditions.
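The maximum permissible error of a length-measuring instrument is customarily expressed as a length-dependent linear model, and the trade-off governing the optimal number of repeated measurements follows from the standard deviation of the mean; generic forms (illustrative, not the paper's fitted model) are:

```latex
% Generic length-dependent MPE model (constants A, B fitted from
% gauge-block deviations) and the standard deviation of the mean
% used to size the number of repeated measurements n.
\[
\mathrm{MPE}(L) = \pm\left( A + B \cdot L \right),
\qquad
u(\bar{x}) = \frac{s}{\sqrt{n}}
\]
```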


An ASIC implementation of the AMBA APB protocol, together with its verification, is proposed in this work. The design presents the Advanced Peripheral Bus (APB) protocol in its final part. The APB, a low-bandwidth, low-performance bus, is used to interface the peripherals. Hence, a customized ASIC design with a specific reduced feature set, better timing, low power requirement, and less area overhead has been proposed. This design is specifically suited for digital systems that require a serial bus interface for on-board communication. Additionally, the firm IP core of the master controller has been designed for ASIC, which makes the design highly portable to any ASIC chip or SoC design. The entire custom ASIC implementation of the proposed design has been done in the Synopsys tool chain with a 32 nm standard cell library, and the design is verified using the Universal Verification Methodology (UVM).
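To illustrate the kind of protocol checks a UVM environment for APB would carry, the sketch below encodes two well-known APB phase rules as SystemVerilog assertions; the PSEL/PENABLE/PREADY signal names follow the AMBA APB specification, while the interface wrapper itself is an illustrative assumption.

```systemverilog
// APB protocol checks as SVA properties; the interface wrapper is
// illustrative, but the phase rules follow the AMBA APB spec.
interface apb_if (input logic pclk, presetn);
  logic        psel, penable, pready, pwrite;
  logic [31:0] paddr, pwdata, prdata;

  // Rule 1: the ACCESS phase (PENABLE high) must follow exactly
  // one cycle after the SETUP phase (PSEL high, PENABLE low).
  property setup_then_access;
    @(posedge pclk) disable iff (!presetn)
      (psel && !penable) |=> (psel && penable);
  endproperty
  assert property (setup_then_access);

  // Rule 2: address and control must stay stable until the
  // transfer completes (PREADY high in the ACCESS phase).
  property stable_until_ready;
    @(posedge pclk) disable iff (!presetn)
      (psel && penable && !pready) |=> ($stable(paddr) && $stable(pwrite));
  endproperty
  assert property (stable_until_ready);
endinterface
```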


Author(s):  
T. G. Naymik

Three techniques were incorporated for drying clay-rich specimens: air-drying, freeze-drying, and critical point drying. In air-drying, the specimens were set out for several days to dry or were placed in an oven (80°F) for several hours. The freeze-dried specimens were frozen by immersion in liquid nitrogen or in isopentane at near liquid nitrogen temperature and then immediately placed in the freeze-dry vacuum chamber. The critical point specimens were molded in agar immediately after sampling; when the agar had set up, the dehydration series water-alcohol-amyl acetate-CO2 was carried out. The objectives were to compare the fabric plasmas (clays and precipitates), fabric skeletons (quartz grains), and the relationship between them for each drying technique. The three drying methods are not only applicable to the study of treated soils but can be incorporated into all SEM clay soil studies.


Author(s):  
T. Gulik-Krzywicki ◽  
M.J. Costello

Freeze-etching electron microscopy is currently one of the best methods for studying the molecular organization of biological materials. Its application, however, is still limited by our imprecise knowledge of the perturbations of the original organization that may occur during quenching and fracturing of the samples and during replication of the fractured surfaces. Although it is well known that the preservation of the molecular organization of biological materials depends critically on the rate of freezing of the samples, little information is presently available concerning the nature and extent of the freezing-rate-dependent perturbations of the original organization. To obtain this information, we have developed a method based on the comparison of x-ray diffraction patterns of samples before and after freezing, prior to fracturing and replication. Our experimental set-up is shown in Fig. 1. The sample to be quenched is placed on its holder, which is then mounted on a small metal holder (O) fixed on a glass capillary (p), whose position is controlled by a micromanipulator.

