An Ensembled Anomaly Detector for Wafer Fault Detection

Sensors ◽  
2021 ◽  
Vol 21 (16) ◽  
pp. 5465
Author(s):  
Giuseppe Furnari ◽  
Francesco Vattiato ◽  
Dario Allegra ◽  
Filippo Luigi Maria Milotta ◽  
Alessandro Orofino ◽  
...  

The production process of a wafer in the semiconductor industry consists of several phases, such as diffusion with its associated defectivity test, parametric test, electrical wafer sort test, assembly with its associated defectivity tests, final test, and burn-in. Among these, the fault detection phase is critical for limiting the number and the impact of anomalies that would otherwise result in yield loss. Understanding and discovering the causes of yield detractors is a complex root-cause analysis procedure. Many parameters are tracked for fault detection, including pressure, voltage, power, and valve status. In the majority of cases, a fault is due to a combination of two or more parameters whose individual values apparently stay within the designed and checked control limits. In this work, we propose an ensembled anomaly detector that combines univariate and multivariate analyses of the tracked fault-detection parameters. The ensemble is based on three proposed and compared balancing strategies. The experiments are conducted on two real datasets gathered in the semiconductor industry and made publicly available. The experimental validation, which also compares our proposal with other traditional anomaly detection techniques, is promising: anomalies are detected with high recall and a low number of false alarms.
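
The combination of univariate and multivariate evidence described above can be illustrated with a short sketch. The snippet below is a minimal, hypothetical example (not the authors' implementation): per-parameter control-limit checks provide the univariate flags, scikit-learn's IsolationForest provides the multivariate score, and a simple weighted vote stands in for the paper's balancing strategies.

```python
# Minimal, hypothetical sketch of an ensembled anomaly detector: univariate
# control-limit checks plus a multivariate Isolation Forest, combined by a
# weighted vote. This is not the authors' implementation; the weighting is a
# stand-in for the three balancing strategies compared in the paper.
import numpy as np
from sklearn.ensemble import IsolationForest

def univariate_flags(X, k=3.0):
    """Flag samples where any tracked parameter leaves its mean +/- k*std band."""
    mu, sigma = X.mean(axis=0), X.std(axis=0) + 1e-12
    return (np.abs((X - mu) / sigma) > k).any(axis=1).astype(float)

def multivariate_scores(X_train, X_test):
    """Multivariate anomaly score in [0, 1] from an Isolation Forest."""
    forest = IsolationForest(n_estimators=200, random_state=0).fit(X_train)
    raw = -forest.score_samples(X_test)            # higher = more anomalous
    return (raw - raw.min()) / (np.ptp(raw) + 1e-12)

def ensemble_detect(X_train, X_test, w_uni=0.5, threshold=0.6):
    """Weighted combination of univariate and multivariate evidence."""
    combined = (w_uni * univariate_flags(X_test)
                + (1.0 - w_uni) * multivariate_scores(X_train, X_test))
    return combined > threshold                    # True = reported anomaly

# Toy usage with synthetic tracked parameters (pressure, voltage, power, ...)
rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 4))
X_test = np.vstack([rng.normal(size=(95, 4)),
                    rng.normal(loc=2.5, size=(5, 4))])   # injected faults
print(ensemble_detect(X_train, X_test).sum(), "samples flagged")
```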

2020 ◽  
Author(s):  
Giacomo Roversi ◽  
Pier Paolo Alberoni ◽  
Anna Fornasiero ◽  
Federico Porcù

Abstract. There is growing interest in emerging opportunistic sensors for precipitation estimation, motivated by the need to describe precipitation structures in detail. In this work, a preliminary assessment of the accuracy of rainfall rates retrieved from Commercial Microwave Links (CMLs) in northern Italy is presented. The CML product, obtained with the publicly available RAINLINK package, is evaluated at different scales (single link, 5 km x 5 km grid, river basin) against the precipitation products operationally used at Arpae-SIMC, the Regional Weather Service of Emilia-Romagna. The results of the 15 min single-link validation against nearby rain gauges show high variability, influenced by the physiography of the area, the precipitation patterns, and some known issues (e.g. the melting layer). However, hourly accumulated, spatially interpolated CML rainfall maps, validated against the established regional gauge-based reference, show performance (R2 of 0.47 and CV of 0.77) very similar to, and sometimes better than, satellite-based and adjusted radar-based gridded precipitation products. This is especially true when basin-scale total precipitation amounts are considered (R2 of 0.85 and CV of 0.63). Taking into account also the delays in data availability (a latency of 0.33 h for CMLs against 1 h for the adjusted radar and 24 h for the quality-controlled rain gauges), CMLs appear to be a valuable data source, in particular from a local operational perspective. A diffuse underestimation is evident at both the grid-box scale (mean error of −0.26) and the basin scale (multiplicative bias of 0.7), while the number of false alarms is generally low and decreases further as coverage increases. Finally, the results show complementary strengths for CMLs and radars, encouraging their joint exploitation.
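
The scores quoted above (R2, CV, mean error, multiplicative bias) can be computed in a few lines. The sketch below assumes the commonly used definitions from CML/gauge validation studies; the paper's exact formulas may differ.

```python
# Sketch of the validation scores quoted above, assuming commonly used
# definitions from CML/gauge validation studies (the paper's exact formulas
# may differ): R2, coefficient of variation, mean error, multiplicative bias.
import numpy as np

def validation_scores(estimate, reference):
    """Compare an estimate (e.g. CML rainfall) against a reference (e.g. gauges)."""
    est = np.asarray(estimate, float)
    ref = np.asarray(reference, float)
    residual = est - ref
    return {
        "R2": np.corrcoef(est, ref)[0, 1] ** 2,   # squared Pearson correlation
        "CV": residual.std() / ref.mean(),        # residual spread / reference mean
        "ME": residual.mean(),                    # additive bias
        "BIAS": est.sum() / ref.sum(),            # multiplicative bias
    }

# Toy usage: hourly grid-box rainfall (mm) versus the gauge-based reference
print(validation_scores([0.8, 2.1, 0.0, 5.3], [1.0, 2.5, 0.1, 6.0]))
```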


Sensors ◽  
2018 ◽  
Vol 18 (8) ◽  
pp. 2448 ◽  
Author(s):  
Amin Ghafouri ◽  
Aron Laszka ◽  
Xenofon Koutsoukos

Detection errors such as false alarms and undetected faults are inevitable in any practical anomaly detection system. These errors can create potentially significant problems in the underlying application. In particular, false alarms can result in performing unnecessary recovery actions, while missed detections can result in failing to perform recovery, which can lead to severe consequences. In this paper, we present an approach for application-aware anomaly detection (AAAD). Our approach takes an existing anomaly detector and configures it to minimize the impact of detection errors. The configuration of the detectors is chosen so that application performance in the presence of detection errors is as close as possible to the performance that could have been obtained if there were no detection errors. We evaluate our approach using a case study of real-time traffic-signal control and show that it significantly outperforms several baseline detectors.
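
The configuration idea can be sketched as a simple threshold sweep: pick the detector setting whose expected application cost, combining false alarms and missed detections weighted by their impact, is lowest. The error-rate model and cost weights below are placeholders, not the paper's traffic-signal case study.

```python
# Toy sketch of application-aware detector configuration: sweep the detection
# threshold and keep the one with the lowest expected application cost. The
# error-rate model and cost weights are placeholders, not the paper's
# traffic-signal case study.
import numpy as np

def expected_cost(threshold, cost_false_alarm=1.0, cost_missed=10.0):
    p_false_alarm = np.exp(-5.0 * threshold)   # placeholder: fewer alarms at high thresholds
    p_missed = threshold ** 2                  # placeholder: more misses at high thresholds
    return cost_false_alarm * p_false_alarm + cost_missed * p_missed

thresholds = np.linspace(0.0, 1.0, 101)
costs = [expected_cost(t) for t in thresholds]
best = thresholds[int(np.argmin(costs))]
print(f"configured threshold: {best:.2f}, expected cost: {min(costs):.3f}")
```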


2005 ◽  
Vol 18 (20) ◽  
pp. 4271-4286 ◽  
Author(s):  
Matthew J. Menne ◽  
Claude N. Williams

Abstract An evaluation of three hypothesis test statistics that are commonly used in the detection of undocumented changepoints is described. The goal of the evaluation was to determine whether the use of multiple tests could improve the detection of undocumented, artificial changepoints in climate series. The use of successive hypothesis testing is compared to optimal approaches, both of which are designed for situations in which multiple undocumented changepoints may be present. In addition, the importance of the form of the composite climate reference series is evaluated, particularly with regard to the impact of undocumented changepoints in the various component series that are used to calculate the composite. In a comparison of single-test changepoint detection skill, the composite reference series formulation is shown to be less important than the choice of the hypothesis test statistic, provided that the composite is calculated from serially complete and homogeneous component series. However, the evaluated composite series are not all equally susceptible to the presence of changepoints in their components, which may be erroneously attributed to the target series. Moreover, a reference formulation that is based on the averaging of the first-difference component series is susceptible to random walks when the composition of the component series changes through time (e.g., values are missing), and its use is, therefore, not recommended. When more than one test is required to reject the null hypothesis of no changepoint, the number of detected changepoints is reduced proportionately less than the number of false alarms in a wide variety of Monte Carlo simulations. Consequently, a consensus of hypothesis tests appears to improve undocumented changepoint detection skill, especially when reference series homogeneity is violated. A consensus of successive hypothesis tests using a semihierarchic splitting algorithm also compares favorably to optimal solutions, even when changepoints are not hierarchic.
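
A compact way to see the "consensus of tests" idea is to require two standard changepoint statistics to agree before declaring an undocumented changepoint, as in the hypothetical sketch below. The statistics used here are a maximum two-sample t statistic and an SNHT-style statistic; the critical values are placeholders, not the Monte Carlo percentiles used in the paper.

```python
# Hypothetical sketch of a consensus test for one undocumented changepoint:
# a maximum two-sample t statistic and an SNHT-style statistic must both
# exceed their critical values at nearly the same position. Critical values
# are placeholders, not the Monte Carlo percentiles used in the paper.
import numpy as np

def max_t_stat(x, margin=5):
    """Largest two-sample t statistic over all candidate split points."""
    best_t, best_k = 0.0, None
    for k in range(margin, len(x) - margin):
        a, b = x[:k], x[k:]
        se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
        t = abs(a.mean() - b.mean()) / se
        if t > best_t:
            best_t, best_k = t, k
    return best_t, best_k

def max_snht_stat(x, margin=5):
    """Largest SNHT statistic T(k) = k*z1^2 + (n-k)*z2^2 on the standardized series."""
    z = (x - x.mean()) / x.std(ddof=1)
    best_T, best_k = 0.0, None
    for k in range(margin, len(z) - margin):
        T = k * z[:k].mean() ** 2 + (len(z) - k) * z[k:].mean() ** 2
        if T > best_T:
            best_T, best_k = T, k
    return best_T, best_k

def consensus_changepoint(x, t_crit=3.5, snht_crit=9.0, tol=3):
    """Declare a changepoint only if both tests reject at nearby positions."""
    t, k1 = max_t_stat(x)
    T, k2 = max_snht_stat(x)
    if t > t_crit and T > snht_crit and abs(k1 - k2) <= tol:
        return k1
    return None

# Toy usage: a 100-value series with an artificial 1.0-sigma step at index 60
rng = np.random.default_rng(1)
series = rng.normal(size=100)
series[60:] += 1.0
print("changepoint at index:", consensus_changepoint(series))
```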


Author(s):  
Thein Gi Kyaw ◽  
Anant Choksuriwong ◽  
Nikom Suvonvorn

Fall detection techniques for helping the elderly have been developed by identifying falling states from simulated falls. However, some real-life falling states remain undetected, which motivated this work on analysing falling states. The aim was to find the differences between activities of daily living and soft falls whose falling states go undetected. To our knowledge, this is the first such analysis based on threshold-based algorithms applied to acceleration data stored in an activity database. The study addresses soft falls in addition to general falls, based on two falling states. Although the false alarm rate rose from 18.5% to 56.5%, sensitivity increased from 52% to 92.5% for general falls and from 56% to 86% for soft falls. Our experimental results show the importance of state occurrence for soft fall detection and will be used to build a learning model for soft fall detection.
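
A minimal sketch of a two-state threshold rule on the acceleration magnitude (a free-fall dip followed within a short window by an impact peak) is shown below; the threshold values, sampling rate, and window length are illustrative assumptions, not the study's settings.

```python
# Minimal sketch of a two-state threshold rule on acceleration magnitude:
# a free-fall dip (state 1) followed within a short window by an impact peak
# (state 2). Thresholds, sampling rate and window length are illustrative
# assumptions, not the study's settings.
import numpy as np

def detect_fall(acc_xyz, fs=50, low_g=0.6, high_g=2.0, window_s=1.0):
    """acc_xyz: (N, 3) accelerometer samples in g. True if a dip below low_g
    is followed by a peak above high_g within window_s seconds."""
    magnitude = np.linalg.norm(np.asarray(acc_xyz, float), axis=1)
    window = int(window_s * fs)
    for i, m in enumerate(magnitude):
        if m < low_g and (magnitude[i:i + window] > high_g).any():
            return True
    return False

# Toy usage: quiet standing (~1 g), a dip to 0.3 g, then a 2.8 g impact
samples = [[0, 0, 1.0]] * 100 + [[0, 0, 0.3]] * 10 + [[0, 0, 2.8]] * 5
print("fall detected:", detect_fall(samples))
```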


Author(s):  
E. Ouerghi ◽  
T. Ehret ◽  
C. de Franchis ◽  
G. Facciolo ◽  
T. Lauvaux ◽  
...  

Abstract. Reducing methane emissions is essential to tackle climate change. Here, we address the problem of detecting large methane leaks using hyperspectral data from the Sentinel-5P satellite. To do so, we exploit the fine spectral sampling of Sentinel-5P data to detect methane absorption features visible in the shortwave infrared (SWIR) wavelength range. Our method involves three separate steps: i) background subtraction, ii) detection of local maxima in the negative logarithmic spectrum of each pixel, and iii) anomaly detection in the background-free image. In the first step, we remove the impact of the albedo using albedo maps and the impact of the atmosphere using a principal component analysis (PCA) over a time series of past observations. In the second step, we count, for each pixel, the number of local maxima that correspond to a subset of local maxima in the methane absorption spectrum. This counting method allows us to set up a statistical a contrario test that controls the false alarm rate of our detections. In the last step, we use an anomaly detector to isolate potential methane plumes and intersect those potential plumes with the detections from the second step. This process strongly reduces the number of false alarms. We validate our method by comparing the detected plumes against a dataset of plumes manually annotated on the Sentinel-5P L2 methane product.
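
The counting step and the a contrario control of false alarms can be sketched as follows; the prominence-based selection of spectral maxima, the matching tolerance, and the uniform null model are simplifying assumptions, not the paper's exact formulation.

```python
# Schematic sketch of the counting step and an a contrario test: count how many
# reference methane absorption maxima are matched by prominent local maxima of
# a pixel's negative log spectrum, then bound the expected number of false
# alarms (NFA) with a binomial tail under a uniform null. The prominence
# selection, tolerance and null model are simplifying assumptions.
import numpy as np
from scipy.signal import find_peaks
from scipy.stats import binom

def nfa(neg_log_spectrum, methane_maxima_idx, n_tests, prominence=0.3, tol=1):
    """NFA = n_tests * P[Binomial(K, p) >= hits]; NFA < 1 means a meaningful detection."""
    spectrum = np.asarray(neg_log_spectrum, float)
    maxima, _ = find_peaks(spectrum, prominence=prominence)
    hits = sum(int(maxima.size > 0 and np.abs(maxima - m).min() <= tol)
               for m in methane_maxima_idx)
    K = len(methane_maxima_idx)
    # Null model: prominent maxima fall uniformly at random over the spectrum
    p = min(1.0, (2 * tol + 1) * max(maxima.size, 1) / spectrum.size)
    return n_tests * binom.sf(hits - 1, K, p)

# Toy usage: a noisy spectrum with bumps at five reference methane positions
rng = np.random.default_rng(2)
spectrum = rng.normal(scale=0.05, size=200)
lines = [20, 55, 90, 130, 170]
for idx in lines:
    spectrum[idx] += 0.5                         # absorption features
print("NFA:", nfa(spectrum, lines, n_tests=10_000))
```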


2014 ◽  
Vol 660 ◽  
pp. 971-975 ◽  
Author(s):  
Mohd Norzaim bin Che Ani ◽  
Siti Aisyah Binti Abdul Hamid

Time study is a process of observation concerned with determining the amount of time required to perform a unit of work, covering internal, external, and machine time elements. Time study was first used in European manufacturing in the 1760s. It is a flexible technique in lean manufacturing, suitable for a wide range of situations. The time study approach enables reducing or minimizing the non-value-added activities in the process cycle time that contribute to bottleneck time. The impact of improving process cycle time for an organization is increased productivity and reduced cost. This paper focuses on a time study of selected processes with bottleneck time and identifies the possible root causes that contribute to the high time required to perform a unit of work.


Processes ◽  
2021 ◽  
Vol 9 (1) ◽  
pp. 166
Author(s):  
Majed Aljunaid ◽  
Yang Tao ◽  
Hongbo Shi

Partial least squares (PLS) and linear regression methods are widely utilized for quality-related fault detection in industrial processes. Standard PLS decomposes the process variables into principal and residual parts. However, as the principal part still contains many components unrelated to quality, many false alarms can arise if these components are not removed. Moreover, although these components do not affect product quality, they have a great impact on process safety and carry information about other faults. Removing and discarding them would therefore reduce the detection rate of faults unrelated to quality. To overcome the drawbacks of standard PLS, a novel method, MI-PLS (mutual information PLS), is proposed in this paper. The proposed MI-PLS algorithm utilizes mutual information to divide the process variables into selected and residual components, and then uses singular value decomposition (SVD) to further decompose the selected part into quality-related and quality-unrelated components, subsequently constructing quality-related monitoring statistics. To ensure that there is no information loss and that the proposed MI-PLS can be used for both quality-related and quality-unrelated fault detection, a principal component analysis (PCA) model is applied to the residual component to obtain its score matrix, which is combined with the quality-unrelated part to obtain the total quality-unrelated monitoring statistics. Finally, the proposed method is applied to a numerical example and the Tennessee Eastman process. The proposed MI-PLS has a lower computational load and more robust performance compared with T-PLS and PCR.
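
A compressed, hypothetical sketch of the variable-partitioning idea is given below: mutual information scores split the process variables into selected and residual blocks, and an SVD of the selected block's cross-covariance with the quality variable isolates quality-related directions. The threshold is illustrative, and the monitoring statistics of the full MI-PLS algorithm are omitted.

```python
# Compressed, hypothetical sketch of the MI-PLS partitioning idea: mutual
# information splits the process variables into selected and residual blocks,
# and an SVD of the selected block's cross-covariance with quality isolates
# the quality-related directions. The threshold is illustrative and the full
# MI-PLS monitoring statistics are omitted.
import numpy as np
from sklearn.feature_selection import mutual_info_regression

def mi_partition(X, y, mi_threshold=0.1):
    """Split columns of X into quality-informative (selected) and residual blocks."""
    mi = mutual_info_regression(X, y, random_state=0)
    selected = mi >= mi_threshold
    return X[:, selected], X[:, ~selected], selected

def quality_related_directions(X_sel, y, n_dirs=1):
    """Directions of the selected block most related to y (SVD of cross-covariance)."""
    Xc = X_sel - X_sel.mean(axis=0)
    yc = (y - y.mean()).reshape(-1, 1)
    U, _, _ = np.linalg.svd(Xc.T @ yc, full_matrices=False)
    return U[:, :n_dirs]

# Toy process: x0 and x1 drive quality, x2 and x3 are quality-unrelated
rng = np.random.default_rng(3)
X = rng.normal(size=(300, 4))
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.1 * rng.normal(size=300)
X_sel, X_res, mask = mi_partition(X, y)
print("selected variables:", np.where(mask)[0])
print("quality-related direction:\n", quality_related_directions(X_sel, y))
```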


Author(s):  
I.F. Lozovskiy

The use of broadband sounding signals in radars, which has become practical in recent years, leads to a significant reduction in the size of range resolution elements and, accordingly, in the size of the window in which the training sample is formed for adapting the detection threshold in constant false alarm rate (CFAR) detection algorithms. In existing radars, such a small window would lead to huge losses. The purpose of this work was to study the most rational options for constructing CFAR detectors in radars with broadband sounding signals. The problem was solved for a Rayleigh distribution of the noise envelope and for a number of non-Rayleigh laws (Weibull and lognormal), whose appearance is associated with a decrease in the number of reflecting elements in the resolution volume. For Rayleigh interference, an algorithm is proposed with multi-channel incoherent accumulation of signal amplitudes in range and normalization to the larger of two estimates of the interference power in adjacent range segments. Its detection threshold adapts not only to the interference power but also to the magnitude of the "power jump" in range, which reduces the number of false alarms during sudden changes in interference power; the increase in the probability of false alarm did not exceed one order of magnitude. This algorithm incurs some additional losses associated with the incoherent accumulation of signals reflected from target elements; these losses can be reduced by increasing the size of the range segments that make up the window. Algorithms for detecting broadband signals against interference with non-Rayleigh envelope distributions (Weibull and lognormal) are also studied; they extend the detection algorithm with a non-linear transformation of the samples into samples with a Rayleigh distribution, so the structure of the detection algorithm remains practically unchanged. Variants of the detectors for narrowband and broadband signals are considered. It was found that, in contrast to algorithms designed for the Rayleigh distribution, these algorithms provide a stable level of false alarms regardless of the values of the non-Rayleigh interference parameters. To reduce losses when the interference amplitudes are in fact Rayleigh distributed, two-channel detectors are used, in which one channel is tuned for Rayleigh-distributed interference and the other for lognormal or Weibull interference. The channels are switched by special distribution-type recognition algorithms. In such detectors, however, there is a certain increase in the probability of false alarms in a rather narrow range of non-Rayleigh interference parameters where their distribution approaches the Rayleigh distribution. It is shown that, when using broadband signals, detection losses in non-Rayleigh noise decrease noticeably owing to the lower detection thresholds enabled by incoherent accumulation of signal amplitudes in range.
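
The greatest-of normalization described above can be sketched as a simple GO-CFAR detector: the cell under test is compared against a threshold scaled by the larger of the two reference-segment power estimates. Window length, guard cells, and the scale factor are placeholders, not the article's design values.

```python
# Minimal sketch of the greatest-of normalization (GO-CFAR): the cell under
# test is compared against a threshold scaled by the LARGER of the lagging and
# leading interference-power estimates, which limits the rise in false alarms
# at abrupt power jumps. Window length, guard cells and the scale factor are
# placeholders, not the article's design values.
import numpy as np

def go_cfar(power, n_ref=16, n_guard=2, alpha=12.0):
    """power: range profile of squared envelope samples. Returns a detection mask."""
    n = len(power)
    detections = np.zeros(n, dtype=bool)
    for i in range(n_ref + n_guard, n - n_ref - n_guard):
        before = power[i - n_guard - n_ref:i - n_guard]         # lagging segment
        after = power[i + n_guard + 1:i + n_guard + 1 + n_ref]  # leading segment
        noise_est = max(before.mean(), after.mean())            # greatest-of estimate
        detections[i] = power[i] > alpha * noise_est
    return detections

# Toy usage: Rayleigh clutter with a sudden power jump and one target echo
rng = np.random.default_rng(4)
power = rng.exponential(scale=1.0, size=300)
power[200:] = rng.exponential(scale=10.0, size=100)   # interference power jump
power[150] += 60.0                                    # target at range bin 150
print("detections at range bins:", np.where(go_cfar(power))[0])
```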

