Author(s):  
Steve Ferrier ◽  
Kevin D. Martin ◽  
Donald Schulte

Abstract Application of a formal Failure Analysis metaprocess to a stubborn yield loss problem provided a framework that ultimately facilitated a solution. The absence of results from conventional failure analysis techniques such as photon emission microscopy (PEM) and liquid crystal microthermography frustrated early attempts to analyze this low-level supply leakage failure mode. Subsequently, a reorganized analysis team attacked the problem using a specific top-level metaprocess (1,a). Using the metaprocess, the analysts generated a unique step-by-step analysis process in real time. Along the way, this approach encouraged the creative identification of secondary failure effects that provided repeated breakthroughs in the analysis flow. The analysis proceeded steadily toward the failure cause despite its character as a three-way interaction among factors in the IC design, mask generation, and wafer manufacturing processes. The metaprocess also provided the formal structure that, at the conclusion of the analysis, permitted a one-sheet summary of the failure's cause-effect relationships and of the analysis flow leading to discovery of the anomaly. As with every application of this metaprocess, the resulting analysis flow simply represented an effective version of good failure analysis. The formal yet flexible codification of the analysis decision-making process, however, provided several specific benefits, not least of which was the ability to proceed with high confidence that the problem could and would be solved. This paper describes the application of the metaprocess as well as the key measurements and cause-effect relationships in the analysis.


2011 ◽  
Vol 301-303 ◽  
pp. 989-994
Author(s):  
Fei Wang ◽  
Da Wang ◽  
Hai Gang Yang

Scan chain design is a widely used design-for-testability (DFT) technique for improving test and diagnosis quality. However, failures on the scan chain itself account for up to 30% of chip failures. Diagnosing the root causes of scan chain failures quickly is therefore vital to the failure analysis process and to yield improvement. Because the conventional diagnosis process usually assumes a fault-free scan chain, scan chain faults may disable the diagnostic process, leaving a large suspect area for time-consuming physical failure analysis. In this paper, a SAT-based technique is proposed to generate patterns for diagnosing scan chain faults. The proposed method efficiently generates high-quality diagnostic patterns that achieve high diagnosis resolution, and it reduces the computational overhead of proving faults equivalent. Experimental results on ISCAS'89 benchmark circuits show that the proposed method reduces the number of diagnostic patterns while achieving high diagnosis resolution.
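The core SAT idea can be illustrated at toy scale: encode two candidate fault hypotheses as two constrained copies of the same logic, assert that their outputs differ, and ask a SAT solver for a satisfying assignment. A model is a diagnostic pattern that distinguishes the two candidates; an UNSAT result proves the faults equivalent. The sketch below is only an illustration of this principle, not the encoding from the paper; it assumes the third-party python-sat (pysat) package and uses a hand-built CNF for the one-gate example circuit y = (a AND b) OR c.

```python
# Illustrative only: distinguishing-pattern generation / fault-equivalence
# proof via SAT, on the toy circuit y = (a AND b) OR c.
# Assumes the python-sat package: pip install python-sat
from pysat.solvers import Glucose3

A, B, C, Y1, Y2 = 1, 2, 3, 4, 5  # DIMACS-style variable IDs

def distinguishing_pattern(copy1_clauses, copy2_clauses):
    """Return a pattern distinguishing two fault hypotheses, or None if
    the faults are equivalent (the SAT instance is unsatisfiable)."""
    clauses = copy1_clauses + copy2_clauses
    clauses += [[Y1, Y2], [-Y1, -Y2]]          # assert Y1 != Y2
    with Glucose3(bootstrap_with=clauses) as solver:
        if not solver.solve():
            return None
        model = set(solver.get_model())
        return {name: (var in model)
                for name, var in (("a", A), ("b", B), ("c", C))}

# Hypothesis pair 1: a stuck-at-1 vs. b stuck-at-1.
# Copy 1 collapses to y1 = b OR c, copy 2 to y2 = a OR c (Tseitin clauses).
sa1_a = [[-Y1, B, C], [Y1, -B], [Y1, -C]]
sa1_b = [[-Y2, A, C], [Y2, -A], [Y2, -C]]
print(distinguishing_pattern(sa1_a, sa1_b))   # e.g. a=1, b=0, c=0

# Hypothesis pair 2: a stuck-at-0 vs. b stuck-at-0.
# Both copies collapse to y = c, so the instance is UNSAT: equivalent faults.
sa0_a = [[-Y1, C], [Y1, -C]]
sa0_b = [[-Y2, C], [Y2, -C]]
print(distinguishing_pattern(sa0_a, sa0_b))   # None
```

A real scan-chain formulation must additionally model the corrupted shift path of the faulty chain, which is where the paper's contribution lies; the toy above only shows the generic SAT machinery.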


Author(s):  
Markus Grützner

Abstract In the failure analysis (FA) process, a large amount and variety of data are involved: product information, images created by lab equipment, requester and accounting information, etc. Instead of using separate tools for image handling, sample tracking, report creation, and email, workflows can be improved by integrating all these functions in a single IT system. This paper describes the process and result of developing a dedicated FA software tool. The discussion provides details of the specification, main features, and architecture of the system. The system has been rolled out at 15 labs and is used by approximately 500 analysts and several thousand company-internal FA customers. Its flexible architecture made adaptation to the different business processes possible and provides a future-proof solution.


Author(s):  
Shirleen Horley ◽  
Joseph Rascon

Abstract The longer defective units remain in the manufacturing pipeline before they are detected, the more expensive they become. Economic pressures drive the requirement to capture failures and perform root cause analysis further upstream in the product manufacturing cycle. This places greater emphasis on the ability to identify failures and perform value-added analysis to drive product improvements as early as possible. This paper describes the method used to develop a reliable Unified Data Stream (UDS) that feeds the failure analysis process, which in turn provides actionable information to product development teams in the personal computer (PC) environment. It covers the development and implementation of the Unified Data Stream, designed to replace ambiguity and uncertainty with a defect trend and symptom pareto that drives action upstream. The focus is on the output of UDS, which enables the prioritization of product defects that feed the failure analysis system. The paper also touches on the application of the UDS system to different types of PC components. The UDS approach is not limited to PCs; it can be applied to a wide range of products.
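As an illustration of the "symptom pareto" idea, the short Python sketch below aggregates failure records into a ranked symptom list with cumulative percentages. The field names and sample records are invented for the example and are not the UDS schema from the paper.

```python
# Illustrative symptom pareto from hypothetical failure records.
from collections import Counter

def symptom_pareto(failure_records):
    """failure_records: iterable of dicts with at least a 'symptom' key.
    Returns (symptom, count, cumulative_percent) tuples, most frequent first."""
    counts = Counter(rec["symptom"] for rec in failure_records)
    total = sum(counts.values())
    pareto, cumulative = [], 0
    for symptom, count in counts.most_common():
        cumulative += count
        pareto.append((symptom, count, 100.0 * cumulative / total))
    return pareto

# Hypothetical records collected from upstream test stations.
records = [
    {"unit": "PC-001", "symptom": "no_post"},
    {"unit": "PC-002", "symptom": "no_post"},
    {"unit": "PC-003", "symptom": "display_flicker"},
    {"unit": "PC-004", "symptom": "no_post"},
]
for row in symptom_pareto(records):
    print(row)   # e.g. ('no_post', 3, 75.0) then ('display_flicker', 1, 100.0)
```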


Author(s):  
S.H. Lau ◽  
Wenbing Yun ◽  
Sylvia JY Lewis ◽  
Benjamin Stripe ◽  
Janos Kirz ◽  
...  

Abstract We describe a technique for mapping the distribution and concentration of trace elements, capable of achieving sensitivities of 1-10 parts per million within 1 second and at <8 μm resolution. The technique features an innovative, high-flux microstructured x-ray source and a new approach to x-ray optics comprising a high-efficiency twin paraboloidal x-ray mirror lens. The resulting ability to achieve dramatically higher sensitivity and resolution than conventional x-ray fluorescence approaches, at substantially higher throughput, enables powerful compositional mapping for failure analysis, process development, and process monitoring.


Author(s):  
Vikash Kumar ◽  
Devraj Karthikeyan

Abstract Fault localization is a common failure analysis process used to detect the anomaly on a faulty device. Infrared lock-in thermography (LIT) is one such localization technique; it can be applied to packaged chips to identify the heat source resulting from active damage. This paper extends the idea that LIT-based fault localization is not limited to devices within the silicon die: it can also reveal thermal failure indications from other components on the PCB, such as capacitors and FETs on a system-level DC-DC μModule. The case studies presented demonstrate the effectiveness of using LIT in the failure analysis process of a system-level DC-DC μModule regulator.
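For context, the heart of any LIT measurement is lock-in correlation: the device is excited periodically, an IR camera records a stack of frames, and each pixel's time trace is correlated with sine and cosine references at the lock-in frequency to yield amplitude and phase maps. The NumPy sketch below is a minimal illustration of that per-pixel demodulation under simple assumptions (a synthetic frame stack, uniform frame rate, a whole number of excitation periods recorded); it is not taken from the paper.

```python
# Minimal per-pixel lock-in demodulation of an IR frame stack (illustrative).
import numpy as np

def lockin_demodulate(frames, frame_rate_hz, f_lockin_hz):
    """frames: ndarray (n_frames, height, width) of IR intensities.
    Returns (amplitude, phase_deg) images at the lock-in frequency."""
    n = frames.shape[0]
    t = np.arange(n) / frame_rate_hz
    ref_sin = np.sin(2 * np.pi * f_lockin_hz * t)
    ref_cos = np.cos(2 * np.pi * f_lockin_hz * t)
    # Correlate every pixel's time trace with the two quadrature references.
    s = 2.0 / n * np.tensordot(ref_sin, frames, axes=(0, 0))
    c = 2.0 / n * np.tensordot(ref_cos, frames, axes=(0, 0))
    amplitude = np.hypot(s, c)                 # local heating magnitude
    phase_deg = np.degrees(np.arctan2(s, c))   # delay of the thermal response
    return amplitude, phase_deg

# Synthetic example: a 32x32 scene with one weakly heating "defect" pixel.
rate, f_li, n = 100.0, 5.0, 400                # 100 fps camera, 5 Hz lock-in
t = np.arange(n) / rate
frames = np.random.normal(0.0, 0.05, (n, 32, 32))
frames[:, 16, 16] += 0.02 * np.sin(2 * np.pi * f_li * t)    # buried signal
amp, ph = lockin_demodulate(frames, rate, f_li)
print(np.unravel_index(np.argmax(amp), amp.shape))  # locates the defect pixel
```

The phase image is what makes LIT attractive for buried or package-level heat sources, since it is largely insensitive to surface emissivity variations.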


Author(s):  
Zhaofeng Wang

Abstract The present paper studies several failure mechanisms, at both the UBM and the Cu substrate side, behind flip-chip die open contact failures in multi-chip-module plastic BGA-LGA packages. A unique failure analysis process flow has been developed, starting from non-destructive inspection by x-ray and substrate- and die-level C-SAM, followed by bump cross-sectioning and a bump interface integrity test that includes underfill etching, bump pull testing, and/or substrate etching. Four different types of failure mechanism associated with open/intermittent contacts in multi-chip modules have been identified, ranging from device layout design and UBM formation process defects to assembly-related bump-substrate interface delamination. The established FA process has proved to be efficient and accurate, with repeatable results, and has facilitated and accelerated new product qualification for a line of high-power MCM modules.


2012 ◽  
pp. 549-583

Abstract This chapter discusses the basic steps in the failure analysis process. It covers examination procedures, selection and preservation of fracture surfaces, macro- and microfractography, metallographic analysis, mechanical testing, chemical analysis, and simulated service testing.

