Assessing the impact of uncertainty in physics-of-failure analysis of microelectronics damage

2012 ◽  
Vol 558 ◽  
pp. 259-264 ◽  
Author(s):  
Mei-Ling Wu
Author(s):  
Adithya Thaduri ◽  
A. K. Verma ◽  
V. Gopika ◽  
Rajesh Gopinath ◽  
Uday Kumar

Reliability prediction using traditional approaches was implemented in the early stages of electronics, but due to advancements in science and technology these models have become outdated. The alternative approach, physics of failure, provides exhaustive information on the underlying failure phenomena, including failure mechanisms, failure modes and failure analysis, and has become prominent because it accounts for the materials, processes and technology of the component. Constant fraction discriminators, which are important components in NFMS, require a study of their failure characteristics, and this paper provides that information using the physics of failure approach. In addition, the physics of failure approach is combined with statistical methods such as design of experiments, accelerated testing and failure distribution models to quantify the time to failure of this electronic component, with radiation and temperature as the stress parameters. SEM analysis of the component is carried out by decapsulating the samples, and the impact of the stress parameters on the device layout is studied.
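As a rough illustration of how accelerated-test results and a failure distribution can be combined to quantify time to failure, the sketch below pairs an Arrhenius temperature acceleration factor with a Weibull life model. The activation energy, temperatures, and Weibull parameters are illustrative assumptions, not values from the paper.

```python
# Minimal sketch (not the authors' model): Arrhenius temperature acceleration
# applied to a Weibull time-to-failure distribution, as commonly done when
# extrapolating accelerated-test data to use conditions.  All numbers are
# assumed for illustration.
import math

K_B = 8.617e-5  # Boltzmann constant, eV/K


def arrhenius_af(ea_ev, t_use_k, t_stress_k):
    """Acceleration factor between a stress temperature and the use temperature."""
    return math.exp((ea_ev / K_B) * (1.0 / t_use_k - 1.0 / t_stress_k))


def weibull_median_ttf(eta_hours, beta):
    """Median time to failure of a Weibull(eta, beta) distribution."""
    return eta_hours * math.log(2.0) ** (1.0 / beta)


# Hypothetical accelerated-test result: characteristic life eta at 125 C stress.
eta_stress, beta = 1.2e3, 1.8                       # hours, shape factor (assumed)
af = arrhenius_af(ea_ev=0.7, t_use_k=328.15, t_stress_k=398.15)
eta_use = eta_stress * af                           # scale characteristic life to 55 C use
print(f"AF = {af:.1f}, median TTF at use conditions ~ {weibull_median_ttf(eta_use, beta):.0f} h")
```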


Author(s):  
Yoav Weizman ◽  
Ezra Baruch

Abstract In recent years, two new techniques were introduced for flip chip debug: the Laser Voltage Probing (LVP) technique and Time Resolved Light Emission Microscopy (TRLEM). Both techniques exploit silicon's relative transparency to wavelengths longer than the band-gap cutoff. This inherent wavelength limitation, together with the shrinking dimensions of modern CMOS devices, limits the capabilities of these tools. It is known that the optical resolution of the LVP and TRLEM techniques is bounded by the diffraction limit, which is ~1 um for both tools using standard optics. This limitation was reduced with the addition of immersion lens optics. Nevertheless, even with this improvement, shrinking transistor geometry is leading to increased acquisition time, and the overlapping effect between adjacent nodes remains a critical issue. The resolution limit is an order of magnitude larger than device feature sizes in the <90 nm era. The scaling down of transistor geometry leads to the inevitable consequence that more than 50% of the transistors in a 90 nm process have widths smaller than 0.4 um. The acquisition time for such nodes becomes unreasonably long. In order to examine nodes in a dense logic circuit, cross talk and convolution effects between neighboring signals also need to be considered. In this paper we will demonstrate the impact that these effects may have on modern design. In order to maintain debug capability for future technologies with the currently available analytical tools, a conceptual modification of the FA process is required. This process should start on the IC design board, where the VLSI designer should be familiar with FA constraints and thus apply features that enable enhanced FA capabilities for the circuit at hand during the electrical or physical design stages. The necessity for reliable failure analysis in real time should dictate that the designer of advanced VLSI blocks incorporates failure analysis constraints among other design rules. The purpose of this research is to supply the scientific basis for the optimal incorporation of design rules for optical probing in the <90 nm gate era. Circuit designers are usually familiar with the nodes in the design which are critical for debug, and the type of measurement (logic or DC level) they require. The designer should enable the measurement of these signals by applying certain circuit and physical constraints. The implementation of these constraints may be done at the cell level, at the block level, or during integration. We will discuss solutions that should be considered in order to mitigate tool limitations and to enable their use for next-generation processes.
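The ~1 um figure quoted above follows from the Rayleigh diffraction criterion for through-silicon probing. The sketch below reproduces that arithmetic; the wavelength, objective NA, and silicon refractive index are illustrative assumptions, not tool specifications.

```python
# Minimal sketch of the diffraction-limit arithmetic behind the ~1 um resolution
# quoted for LVP/TRLEM backside probing.  Parameter values are assumptions.
def rayleigh_resolution(wavelength_um, numerical_aperture):
    """Rayleigh criterion: minimum resolvable separation of two point sources."""
    return 0.61 * wavelength_um / numerical_aperture


wavelength = 1.3   # um, below the silicon absorption edge so the substrate is transparent
na_air = 0.76      # assumed standard backside objective
n_si = 3.5         # silicon refractive index near 1.3 um

print(f"standard optics : {rayleigh_resolution(wavelength, na_air):.2f} um")
# A hemispherical solid immersion lens raises the effective NA by roughly the
# substrate refractive index (more for aplanatic designs, capped at n_si).
print(f"with Si SIL     : {rayleigh_resolution(wavelength, na_air * n_si):.2f} um")
```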


2018 ◽  
Vol 924 ◽  
pp. 621-624 ◽  
Author(s):  
Rahul Radhakrishnan ◽  
Nathanael Cueva ◽  
Tony Witt ◽  
Richard L. Woodin

Silicon carbide JBS diodes are capable, in forward bias, of carrying surge currents significantly higher than their rated current for short periods. In this work, we examine the mechanisms of device failure due to excess surge current by analyzing the variation of failure current with device current and voltage ratings, as well as with the duration of the current surge. Physical failure analysis is carried out to correlate with the electrical failure signature. We also quantify the impact, on surge current capability, of the resistance of the anode ohmic contact to the p-shielding region.
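A study of this kind typically extracts the trend of failure current against surge duration; a minimal, hypothetical sketch of such a power-law fit (shorter pulses tolerating higher current, as in thermally limited failure) is shown below. The data points and the fitted exponent are assumptions, not the authors' measurements.

```python
# Minimal sketch, not the authors' data: log-log power-law fit of failure
# current versus surge duration, I_fail ~ c * t^(-n).  All numbers hypothetical.
import numpy as np

pulse_ms = np.array([1.0, 2.0, 5.0, 10.0])       # surge duration, ms (assumed)
i_fail_a = np.array([210.0, 165.0, 120.0, 95.0])  # failure current, A (assumed)

# Fit log(I_fail) = log(c) + slope * log(t); slope is negative for this trend.
slope, log_c = np.polyfit(np.log(pulse_ms), np.log(i_fail_a), 1)
print(f"exponent n = {-slope:.2f}, failure current at 1 ms ~ {np.exp(log_c):.0f} A")
```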


2016 ◽  
Vol 2016 ◽  
pp. 1-6 ◽  
Author(s):  
Kuldeep Verma ◽  
R. M. Belokar

The demand for higher productivity requires machine tools to operate at an adequate critical speed so that the ball screw system is faster and more accurate. The ball screw is severely affected at higher shaft rotation speeds in computer numerical control (CNC) machining centers. This paper presents an approach to calculate the initial critical speed of the shaft. Critical speed requires significant attention because of its importance in the manufacturing sector. The impact of weight on the critical speed of the shaft assembly has been analyzed through theoretical as well as analytical investigations. Additionally, we evaluated the impact of weight on the deflection of the shafts, along with a failure analysis of the shafts with respect to critical speed. Further, we computed a critical-speed-based factor to enhance the accuracy of CNC machining centers. Finally, analytical estimations were carried out to validate the proposal.
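For orientation, the first critical speed of a uniform shaft treated as a simply supported Euler-Bernoulli beam can be estimated from omega_1 = (pi/L)^2 * sqrt(E*I/(rho*A)). The sketch below applies this textbook formula; the shaft diameter, span, and end conditions are illustrative assumptions, not the paper's case study.

```python
# Minimal sketch of the classical first critical speed of a solid shaft.
# Dimensions and material values are assumed for illustration.
import math


def critical_speed_rpm(d_m, length_m, e_pa=2.1e11, rho=7850.0, end_factor=math.pi):
    """First critical speed (rpm); end_factor = pi for simply supported ends,
    ~4.730 for fixed-fixed ends."""
    area = math.pi * d_m ** 2 / 4.0       # cross-sectional area
    inertia = math.pi * d_m ** 4 / 64.0   # second moment of area
    omega = (end_factor / length_m) ** 2 * math.sqrt(e_pa * inertia / (rho * area))
    return omega * 60.0 / (2.0 * math.pi)


# Hypothetical 40 mm ball screw shaft with 1.2 m between supports
print(f"first critical speed ~ {critical_speed_rpm(0.040, 1.2):.0f} rpm")
```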


2013 ◽  
Vol 387 ◽  
pp. 185-188
Author(s):  
Jian Yu Zhang ◽  
Ming Li ◽  
Li Bin Zhao ◽  
Bin Jun Fei

A progressive damage model (PDM) composed of a 3D FEM, the Hashin and Ye failure criteria, and Chang's degradation rules was established to better understand the failure of a new material system, CCF300/5428, under low-velocity impact. User-defined subroutines were developed and embedded into a general FEA software package to carry out the failure analysis. Numerical simulations provide more information about the failure of composite laminates under low-velocity impact, including the initial damage state, damage propagation and the final failure state. The history of the impact-point displacement and the various damage patterns were studied in detail.
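To indicate the kind of ply-level check such a Hashin-based model performs at each integration point, a minimal sketch of the fiber-failure modes is given below. The strengths and stresses are hypothetical, not CCF300/5428 data, and the matrix-failure and Ye delamination criteria are omitted for brevity.

```python
# Minimal sketch of a Hashin fiber-failure check inside a progressive damage
# model.  Values are illustrative assumptions.
def hashin_fiber(sigma11, tau12, tau13, x_t, x_c, s12):
    """Return (failure_index, mode) for Hashin fiber tension/compression."""
    if sigma11 >= 0.0:
        fi = (sigma11 / x_t) ** 2 + (tau12 ** 2 + tau13 ** 2) / s12 ** 2
        return fi, "fiber tension"
    return (sigma11 / x_c) ** 2, "fiber compression"


# Hypothetical ply stresses (MPa) and strengths (MPa)
fi, mode = hashin_fiber(sigma11=1500.0, tau12=40.0, tau13=0.0,
                        x_t=1800.0, x_c=1200.0, s12=90.0)
print(f"failure index = {fi:.2f} ({mode}); damage initiates when index >= 1")
```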


2005 ◽  
Vol 128 (1) ◽  
pp. 140-147 ◽  
Author(s):  
Jeffrey T. Fong ◽  
James J. Filliben ◽  
Roland deWit ◽  
Richard J. Fields ◽  
Barry Bernstein ◽  
...  

In this paper, we first review the impact of the powerful finite element method (FEM) in structural engineering, and then address the shortcomings of FEM as a tool for risk-based decision making and incomplete-data-based failure analysis. To illustrate the main shortcoming of FEM, i.e., the computational results are point estimates based on “deterministic” models with equations containing mean values of material properties and prescribed loadings, we present the FEM solutions of two classical problems as reference benchmarks: (RB-101) The bending of a thin elastic cantilever beam due to a point load at its free end and (RB-301) the bending of a uniformly loaded square, thin, and elastic plate resting on a grillage consisting of 44 columns of ultimate strengths estimated from 5 tests. Using known solutions of those two classical problems in the literature, we first estimate the absolute errors of the results of four commercially available FEM codes (ABAQUS, ANSYS, LSDYNA, and MPAVE) by comparing the known with the FEM results of two specific parameters, namely, (a) the maximum displacement and (b) the peak stress in a coarse-meshed geometry. We then vary the mesh size and element type for each code to obtain grid convergence and to answer two questions on FEM and failure analysis in general: (Q-1) Given the results of two or more FEM solutions, how do we express uncertainty for each solution and the combined? (Q-2) Given a complex structure with a small number of tests on material properties, how do we simulate a failure scenario and predict time to collapse with confidence bounds? To answer the first question, we propose an easy-to-implement metrology-based approach, where each FEM simulation in a grid-convergence sequence is considered a “numerical experiment,” and a quantitative uncertainty is calculated for each sequence of grid convergence. To answer the second question, we propose a progressively weakening model based on a small number (e.g., 5) of tests on ultimate strength such that the failure of the weakest column of the grillage causes a load redistribution and collapse occurs only when the load redistribution leads to instability. This model satisfies the requirement of a metrology-based approach, where the time to failure is given a quantitative expression of uncertainty. We conclude that in today’s computing environment and with a precomputational “design of numerical experiments,” it is feasible to “quantify” uncertainty in FEM modeling and progressive failure analysis.
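One standard way to attach a quantitative uncertainty to a grid-convergence sequence of "numerical experiments" is Richardson extrapolation with a grid convergence index; the sketch below applies it to hypothetical tip-deflection results for the RB-101 cantilever benchmark (analytic value PL^3/(3EI)). This is a generic illustration under assumed mesh results, not the authors' metrology-based procedure.

```python
# Minimal sketch: observed convergence order, Richardson-extrapolated value, and
# a GCI-style uncertainty band from three mesh refinements.  Deflections assumed.
import math


def richardson(f_coarse, f_medium, f_fine, r=2.0):
    """Observed order p and extrapolated value from three solutions on meshes
    refined by a constant ratio r."""
    p = math.log(abs((f_coarse - f_medium) / (f_medium - f_fine))) / math.log(r)
    f_exact = f_fine + (f_fine - f_medium) / (r ** p - 1.0)
    return p, f_exact


# Hypothetical cantilever tip deflections (mm) on coarse, medium, fine meshes
p, f_star = richardson(9.20, 9.65, 9.80)
gci = 1.25 * abs((9.80 - 9.65) / 9.80) / (2.0 ** p - 1.0)
print(f"observed order ~ {p:.2f}, extrapolated deflection ~ {f_star:.3f} mm")
print(f"grid convergence index ~ {100 * gci:.2f} %")
```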

