Advances in VLSI testing at MultiGb per second rates

2005 ◽  
Vol 2 (1) ◽  
pp. 43-55
Author(s):  
Dragan Topisirovic

Today's high-performance manufacturing of digital systems requires VLSI testing at speeds of multiple gigabits per second (multi-Gbps). Testing at Gbps rates needs high transfer rates among channels and functional units, and requires readdressing data format and communication within a serial mode. This implies that a physical phenomenon, jitter, is becoming essential to tester operation. This establishes a functional and design shift, which in turn dictates a corresponding shift in test and DFT (Design for Testability) methods. We review various approaches and discuss the tradeoffs in testing actual devices. For industry, the volume-production stage and multigigahertz testing pose economic challenges. A particular solution based on conventional ATE (Automated Test Equipment) resources, discussed here, allows accurate testing of ICs with many channels: this system can test ICs at 2.5 Gbps over 144 channels, with planned extensions that will push test rates beyond 5 Gbps. Yield improvement requires understanding failures and identifying potential sources of yield loss. This text focuses on diagnosing random logic circuits and classifying faults. An interesting scan-based diagnosis flow, which leverages the ATPG (Automatic Test Pattern Generator) patterns originally generated for fault coverage, will be described. This flow provides an adequate link between the design automation tools and the testers, and a correlation between the ATPG patterns and the tester failure reports.
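
As an illustration of this kind of scan-based diagnosis flow (a sketch, not the specific tool chain described above), the following Python snippet ranks stuck-at fault candidates by how well their fault-simulated failure signatures match a tester failure report. All fault names, pin names and data formats here are hypothetical placeholders.

```python
# Per-candidate signatures from ATPG fault simulation:
# candidate -> set of (pattern_id, failing_pin) pairs it would produce.
simulated = {
    "U12/A stuck-at-0": {(3, "out[2]"), (7, "out[5]")},
    "U47/Y stuck-at-1": {(3, "out[2]"), (9, "out[0]")},
}

# Failures actually observed on the ATE for one failing die.
observed = {(3, "out[2]"), (7, "out[5]")}

def score(candidate_failures, observed_failures):
    """Simple match metric: Jaccard similarity between the predicted
    and observed (pattern, pin) failure sets."""
    inter = candidate_failures & observed_failures
    union = candidate_failures | observed_failures
    return len(inter) / len(union) if union else 0.0

# Rank candidates; the best-matching fault heads the diagnosis callout.
ranking = sorted(simulated.items(),
                 key=lambda kv: score(kv[1], observed),
                 reverse=True)
for fault, signature in ranking:
    print(f"{fault}: score={score(signature, observed):.2f}")
```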

Author(s):  
Ranganathan Gopinath ◽  
Ravikumar Venkat Krishnan ◽  
Lua Winson ◽  
Phoa Angeline ◽  
Jin Jie

Abstract Dynamic Photon Emission Microscopy (D-PEM) is an established technique for isolating short and open failures, in which photons emitted by transistors are collected by sensitive infrared detectors while the device under test is electrically exercised by automated test equipment (ATE). Common tests, such as scan, use patterns generated by an Automatic Test Pattern Generator (ATPG) in compressed mode. When these patterns are looped for D-PEM, they produce indeterminate states within cells during the load or unload sequences, making interpretation of emission challenging. Moreover, photons are emitted with lower probability and lower energies in smaller technology nodes such as FinFET. In this paper, we discuss executing scan tests in ways that bring out emission that does not appear in conventional test loops.


2017 ◽  
Vol 2017 ◽  
pp. 1-4
Author(s):  
Vojtech Vigner ◽  
Jaroslav Roztocil

Comparison of high-performance time scales generated by atomic clocks in time and frequency metrology laboratories is usually performed by means of the Common View method. Laboratories are equipped with specialized GNSS receivers which measure the difference between the local time scale and the time scale of a selected satellite. Every receiver generates log files in the CGGTTS data format to record the measured differences. To calculate the time differences recorded by two receivers, it is necessary to obtain these logs from both receivers and process them. This paper deals with automating and speeding up these processes.
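
A minimal sketch of the common-view differencing step, assuming the CGGTTS records have already been parsed into simple tuples; real CGGTTS files carry many more fields, header checksums and specific unit conventions, so this is illustrative only.

```python
from collections import defaultdict

def common_view_diff(records_a, records_b):
    """records_*: iterables of (mjd, sttime, prn, refsys) tuples, one per
    satellite track, with refsys in whatever unit the parser produced.
    Returns {(mjd, sttime): mean clock difference A - B} averaged over
    the satellites both receivers tracked in the same slot."""
    index_b = {(m, t, p): r for (m, t, p, r) in records_b}
    diffs = defaultdict(list)
    for mjd, sttime, prn, refsys in records_a:
        other = index_b.get((mjd, sttime, prn))
        if other is not None:                 # common-view: same satellite,
            diffs[(mjd, sttime)].append(refsys - other)  # same epoch
    return {k: sum(v) / len(v) for k, v in diffs.items()}
```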


2015 ◽  
Vol 21 (6) ◽  
pp. 630-648 ◽  
Author(s):  
Sunil Kumar Tiwari ◽  
Sarang Pande ◽  
Sanat Agrawal ◽  
Santosh M. Bobade

Purpose – The purpose of this paper is to propose and evaluate the selection of materials for the selective laser sintering (SLS) process, which is used for low-volume production in engineering (e.g. lightweight machines, architectural modelling, high-performance applications, manufacture of fuel cells), medicine and many other fields (e.g. art and hobbies), with a keen focus on meeting customer requirements. Design/methodology/approach – The work starts with understanding the optimal process parameters and an appropriate consolidation mechanism to control microstructure, then selects materials satisfying the property requirements of the specific application area, leading to optimization of materials. Findings – Fabricating parts using optimal process parameters, an appropriate consolidation mechanism and materials chosen for the property requirements of the application can improve part characteristics and increase the acceptability, sustainability, life cycle and reliability of SLS-fabricated parts. Originality/value – The newly proposed material selection system based on the property requirements of applications has been proven, especially in cases where non-experts or students need to select SLS materials according to those requirements. Such property-driven material selection may be used by practitioners not only in engineering, medicine and other fields such as art and hobbies, but also by academics who wish to select SLS materials for different applications.
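
A toy Python sketch of such a property-driven selection system; the material table, property names and threshold values below are illustrative placeholders, not data from the paper.

```python
# Hypothetical SLS material table: name -> measured properties.
materials = {
    "PA12":    {"tensile_mpa": 48, "hdt_c": 86,  "biocompatible": True},
    "PA12-GF": {"tensile_mpa": 51, "hdt_c": 134, "biocompatible": False},
    "TPU":     {"tensile_mpa": 9,  "hdt_c": 60,  "biocompatible": True},
}

def select(materials, **requirements):
    """Return materials meeting every requirement: numeric values act as
    minimum thresholds, booleans must match exactly."""
    hits = []
    for name, props in materials.items():
        ok = all(props.get(k) == v if isinstance(v, bool)
                 else props.get(k, float("-inf")) >= v
                 for k, v in requirements.items())
        if ok:
            hits.append(name)
    return hits

# Example: a medical application needing strength and biocompatibility.
print(select(materials, tensile_mpa=40, biocompatible=True))  # ['PA12']
```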


Author(s):  
C. Mureșan ◽  
G. Harja

The performance and efficiency of internal combustion (IC) engines can be greatly improved by a high-performance cooling system. This can be achieved by implementing robust control strategies and also by building the cooling system from high-performance elements. The mechanical actuators can be replaced with electrically controllable elements such as the pump and the thermostat valve, which has a positive influence on the controllability of the system. To develop high-performance control algorithms, it is necessary to have a model that accurately reflects the behavior of the physical system. This paper therefore presents a mathematical modeling approach for the cooling system based on the principles of heat exchangers and the physical phenomena present in them.
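
As a minimal illustration of the kind of energy balance such models build on (a sketch, not the paper's model), the following Python snippet integrates a single lumped coolant thermal mass exchanging heat with a radiator; all parameter values are assumed.

```python
def simulate(q_engine_w=20e3,   # assumed heat input from combustion, W
             ua_rad=400.0,      # assumed radiator conductance UA, W/K
             t_amb=25.0,        # ambient temperature, deg C
             m_coolant=8.0,     # coolant mass, kg
             c_coolant=3600.0,  # coolant specific heat, J/(kg*K)
             dt=0.1, t_end=600.0):
    """Euler integration of m*c*dT/dt = Q_engine - UA*(T - T_amb)."""
    t_cool, trace = t_amb, []
    for _ in range(int(t_end / dt)):
        q_out = ua_rad * (t_cool - t_amb)      # radiator heat rejection
        t_cool += dt * (q_engine_w - q_out) / (m_coolant * c_coolant)
        trace.append(t_cool)
    return trace

trace = simulate()
# Settles near t_amb + Q/UA = 25 + 20000/400 = 75 deg C.
print(f"steady-state coolant temperature ~ {trace[-1]:.1f} deg C")
```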


2019 ◽  
pp. 254-277 ◽  
Author(s):  
Ying Zhang ◽  
Chaopeng Li ◽  
Na Chen ◽  
Shaowen Liu ◽  
Liming Du ◽  
...  

Since large amounts of geospatial data are produced by various sources, geospatial data integration is difficult because of a shortage of semantics. Although standardised data formats and data access protocols, such as Web Feature Service (WFS), give end-users access to heterogeneous data stored in different formats from various sources, integration remains time-consuming and ineffective due to the lack of semantics. To solve this problem, a prototype implementing geospatial data integration is proposed by addressing four problems: geospatial data retrieving, modeling, linking and integrating. We adopt four kinds of geospatial data sources to evaluate the performance of the proposed approach. The experimental results illustrate that the proposed linking method achieves high performance in generating matched candidate record pairs in terms of Reduction Ratio (RR), Pairs Completeness (PC), Pairs Quality (PQ) and F-score. The integration results show that each data source gains substantial Complementary Completeness (CC) and Increased Completeness (IC).
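
The linking metrics named above are standard in record linkage. A small Python sketch, assuming ground-truth matches are known, shows how they are commonly computed; the toy numbers are illustrative.

```python
def linkage_metrics(n_records_a, n_records_b, candidate_pairs, true_matches):
    """candidate_pairs, true_matches: sets of (id_a, id_b) pairs."""
    total_pairs = n_records_a * n_records_b
    found = candidate_pairs & true_matches
    rr = 1 - len(candidate_pairs) / total_pairs        # Reduction Ratio
    pc = len(found) / len(true_matches)                # Pairs Completeness
    pq = len(found) / len(candidate_pairs)             # Pairs Quality
    f = 2 * pc * pq / (pc + pq) if pc + pq else 0.0    # F-score over PC/PQ
    return rr, pc, pq, f

rr, pc, pq, f = linkage_metrics(
    1000, 1000,
    candidate_pairs={(i, i) for i in range(950)},   # blocking kept 950 pairs
    true_matches={(i, i) for i in range(1000)})     # 1000 real matches
print(f"RR={rr:.4f} PC={pc:.2f} PQ={pq:.2f} F={f:.2f}")
```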




Micromachines ◽  
2020 ◽  
Vol 11 (2) ◽  
pp. 205
Author(s):  
Dan Xue ◽  
Jiachou Wang ◽  
Xinxin Li

In this paper, we present a novel thermoresistive gas flow sensor suited to high-yield, low-cost volume production using a front-side microfabrication technology. To improve the thermal isolation, a micro-air-trench was opened between the heater and the thermistors to minimize heat loss from the heater to the silicon substrate. Two types of gas flow sensor were designed with this optimal thermal-insulation configuration and fabricated by a single-wafer, single-side process in (111) wafers; the type A sensor has two thermistors while the type B sensor has four. The chip dimensions of both sensors are as small as 0.7 mm × 0.7 mm, and the sensors achieve a short response time of 1.5 ms. Furthermore, without any amplification, the normalized sensitivities of the type A and type B sensors are 1.9 mV/(SLM)/mW and 3.9 mV/(SLM)/mW for nitrogen gas flow, and the minimum detectable flow rates are estimated at about 0.53 and 0.26 standard cubic centimeters per minute (sccm), respectively.
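
A back-of-envelope Python check of how a minimum detectable flow follows from a normalized sensitivity; the heater power and output noise floor below are assumed values chosen only to illustrate the order of magnitude, not figures from the paper.

```python
s_norm = 1.9e-3      # V/(SLM)/mW, type A normalized sensitivity (from paper)
p_heater_mw = 2.0    # mW heater power (assumed)
v_noise = 2.0e-6     # V output noise floor (assumed)

sensitivity = s_norm * p_heater_mw   # V per SLM at this heater power
mdf_slm = v_noise / sensitivity      # smallest resolvable flow, SLM
# With these assumptions: 2e-6 / 3.8e-3 ~ 5.3e-4 SLM ~ 0.53 sccm.
print(f"minimum detectable flow ~ {mdf_slm * 1000:.2f} sccm")
```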


2019 ◽  
Vol 214 ◽  
pp. 04059
Author(s):  
Marc Paterno ◽  
Jim Kowalkowski ◽  
Saba Sehrish

In their recent measurement of the neutrino oscillation parameters, NOvA uses a sample of approximately 25 million reconstructed spills to search for electron-neutrino appearance events. These events are stored in an n-tuple format, in 250 thousand ROOT files; file sizes range from a few hundred KiB to a few MiB, and the full dataset is approximately 1.4 TiB. These millions of events are reduced to a few tens of events by strict event selection criteria, and are then summarized by a handful of numbers each, which are used in the extraction of the neutrino oscillation parameters. The NOvA event selection code is currently a serial C++ program that reads these n-tuples. The current table data format and organization, and the selection/reduction processing involved, provide an opportunity to explore alternate ways to represent the data and implement the processing. We represent our n-tuple data in the HDF5 format, which is optimized for the HPC environment and allows us to use the machine's high-performance parallel I/O capabilities. We use MPI, numpy and h5py to implement our approach and compare its performance with the existing approach. We study the performance implications of using thousands of small files of different sizes as compared with one large file on HPC resources. This work has been done as part of the SciDAC project "HEP analytics on HPC" in collaboration with the ASCR teams at ANL and LBNL.
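
A minimal sketch of the mpi4py/h5py parallel-read pattern described above; the file name, dataset path and selection cut are placeholders, and h5py must be built against parallel HDF5 for the mpio driver to be available.

```python
import numpy as np
import h5py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Collective open of one large HDF5 file; each rank reads its own slice.
with h5py.File("spills.h5", "r", driver="mpio", comm=comm) as f:
    ds = f["events/energy"]                  # placeholder dataset name
    n = ds.shape[0]
    lo, hi = rank * n // size, (rank + 1) * n // size
    energy = ds[lo:hi]                       # contiguous per-rank chunk

# Toy selection cut standing in for the real event selection criteria.
selected = np.count_nonzero((energy > 1.0) & (energy < 3.0))
total = comm.reduce(selected, op=MPI.SUM, root=0)
if rank == 0:
    print(f"events passing selection: {total}")
```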


2017 ◽  
Vol 33 (15) ◽  
pp. 2251-2257 ◽  
Author(s):  
Xiuwen Zheng ◽  
Stephanie M Gogarten ◽  
Michael Lawrence ◽  
Adrienne Stilp ◽  
Matthew P Conomos ◽  
...  

Author(s):  
Tuyen Truong ◽  
Bernard Pottier ◽  
Hiep Huynh

Long-range radio transmissions open new sensor application fields, in particular for environment monitoring. As an example, the LoRa radio protocol can connect remote sensors at distances as long as ten kilometers in line-of-sight. However, the large area covered also brings several difficulties, such as the placement of sensing devices with regard to the terrain topography, or the variability of communication latency. Sensing the environment also carries constraints related to the relevance of sensing points to a physical phenomenon. Design criteria thus diverge considerably from existing methods, especially in complex terrain. This article describes simulation techniques based on geography analysis to compute long-range radio coverage and radio characteristics in these situations. As radio propagation is just a particular case of a physical phenomenon, it is shown how a unified approach also allows one to characterize the behavior of potential physical risks; the case of heavy rainfall and flooding is investigated. Geography analysis is achieved using segmentation tools to produce cellular systems, which are in turn translated into code for high-performance computation. The paper provides results from practical complex-terrain experiments using LoRa that confirm the accuracy of the simulation, scheduling characteristics for sample networks, and performance tables for simulations on mid-range Graphics Processing Units (GPUs).
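
A toy cellular sketch of the flooding idea on an elevation grid, for illustration only; the paper's cellular systems are generated from geography segmentation and compiled for GPUs, whereas this runs a synchronous update in plain numpy.

```python
import numpy as np

def flood_step(elev, wet, rise=0.5):
    """One synchronous CA update: a dry cell becomes wet when any
    4-neighbor is wet and the water needs to climb at most `rise` meters.
    np.roll wraps at the borders (toroidal grid), kept for brevity."""
    new_wet = wet.copy()
    for shift in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nb_wet = np.roll(wet, shift, axis=(0, 1))
        nb_elev = np.roll(elev, shift, axis=(0, 1))
        new_wet |= nb_wet & (elev <= nb_elev + rise)
    return new_wet

rng = np.random.default_rng(0)
elev = rng.uniform(0.0, 5.0, size=(64, 64))   # synthetic terrain, meters
wet = np.zeros(elev.shape, dtype=bool)
wet[32, 32] = True                            # rainfall source cell
for _ in range(50):                           # iterate the cellular system
    wet = flood_step(elev, wet)
print(f"wet cells after 50 steps: {int(wet.sum())}")
```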

