Evaluating the Probability of Detection Capability of Permanently Installed Sensors Using a Structural Integrity Informed Approach

2021
Vol 40 (3)
Author(s):
Michael Siu Hey Leung
Joseph Corcoran

Abstract
There is a growing interest in using permanently installed sensors to monitor for defects in engineering components; the ability to collect real-time measurements is valuable when evaluating the structural integrity of the monitored component. However, a challenge in evaluating the detection capabilities of a permanently installed sensor arises from its fixed location and finite field of view, combined with the uncertainty in damage location. A probabilistic framework for evaluating the detection capabilities of a permanently installed sensor is thus proposed. By combining spatial maps of sensor sensitivity obtained from model-assisted methods with the probability of defect location obtained from structural mechanics, the expectation of, and confidence in, the probability of detection (POD) can be estimated. The framework is demonstrated with four sensor-component combinations, and the results show its ability to characterise the detection capability of permanently installed sensors and to quantify their performance with metrics such as the $a_{90|95}$ value (the defect size for which there is 95% confidence of obtaining at least 90% POD), which is valuable for structural integrity assessments as a metric for the largest defect that may be present yet undetected. The framework is thus valuable for optimising and qualifying monitoring system designs in real-life engineering applications.
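
The abstract describes combining a spatial sensitivity map with a defect-location probability map to estimate the expected POD and summary metrics such as $a_{90|95}$. The following is a minimal sketch of that idea, assuming a simple linear size-response model, a grid-based location distribution, and illustrative threshold values; it reports only an expected-POD defect size, not the full 95%-confidence $a_{90|95}$ from the paper.

```python
import numpy as np

def expected_pod(defect_size, sensitivity_map, location_prob_map, threshold=1.0):
    """Expected POD for a given defect size: probability-weighted fraction of
    candidate locations where the sensor response exceeds the detection threshold."""
    response = defect_size * sensitivity_map      # assumed linear size-response model
    detected = response >= threshold              # detection wherever the response clears the threshold
    return float(np.sum(detected * location_prob_map))

# Illustrative inputs: a 50x50 sensitivity map and a uniform defect-location distribution.
rng = np.random.default_rng(0)
sensitivity_map = rng.uniform(0.1, 1.0, size=(50, 50))
location_prob_map = np.full((50, 50), 1.0 / 2500.0)

# Scan defect sizes; report the smallest size whose expected POD reaches 90%.
# (A full analysis would also propagate uncertainty to obtain the 95%-confidence a_90|95.)
sizes = np.linspace(0.5, 10.0, 200)
pods = np.array([expected_pod(a, sensitivity_map, location_prob_map) for a in sizes])
reaches_90 = np.nonzero(pods >= 0.90)[0]
if reaches_90.size:
    print(f"approximate a_90 (expected-POD basis): {sizes[reaches_90[0]]:.2f}")
```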

2021
pp. 147592172110388
Author(s):
Michael Siu Hey Leung
Joseph Corcoran

The value of using permanently installed monitoring systems for managing the life of an engineering asset is determined by the confidence in their damage detection capabilities. A framework is proposed that integrates detection data from permanently installed monitoring systems with probabilistic structural integrity assessments. Probability of detection (POD) curves are used in combination with particle filtering methods to recursively update a distribution of postulated defect size given a series of negative results (i.e. no defects detected). The negative monitoring results continuously filter out possible cases of severe damage, which in turn updates the estimated probability of failure. An implementation of the particle filtering method that takes into account the effect of systematic uncertainty in the detection capabilities of a monitoring system is also proposed, addressing the problem of whether negative measurements are simply a consequence of defects occurring outside the sensor's field of view. A simulated example of fatigue crack growth is used to demonstrate the proposed framework. The results demonstrate that permanently installed sensors with low susceptibility to systematic effects may be used to maintain confidence in fitness-for-service while relying on fewer inspections. The framework provides a method for using permanently installed sensors to achieve continuous assessments of fitness-for-service for improved integrity management.
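
As a rough illustration of the particle-filtering idea described above, the sketch below recursively reweights a cloud of postulated crack sizes by the probability of non-detection, 1 - POD(a), after each negative monitoring result. The POD curve, the crack-growth step, and all numerical values are assumptions made for illustration, not the paper's models.

```python
import numpy as np

def pod_curve(a, a50=2.0, slope=1.5):
    # Assumed log-logistic POD curve; the paper's actual curve may differ.
    return 1.0 / (1.0 + np.exp(-slope * (np.log(a) - np.log(a50))))

rng = np.random.default_rng(1)
particles = rng.lognormal(mean=0.0, sigma=0.5, size=10_000)   # prior distribution of crack sizes

for _ in range(5):                                            # five successive negative monitoring results
    particles *= 1.05                                         # simple assumed crack-growth step
    weights = 1.0 - pod_curve(particles)                      # likelihood of "no detection" for each particle
    weights /= weights.sum()
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    particles = particles[idx]                                # bootstrap resampling

print(f"posterior mean crack size after negative results: {particles.mean():.2f}")
```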


1998
Vol 120 (4)
pp. 365-373
Author(s):
F. A. Simonen
M. A. Khaleel

This paper describes probabilistic fracture mechanics calculations that simulate fatigue crack growth, flaw detection, flaw sizing accuracy, and the impacts of flaw acceptance criteria. The numerical implementation of the model is based on a Latin hypercube approach. Calculations have been performed for a range of parameters. For representative values of flaw detection probability, flaw sizing errors, and flaw acceptance criteria, detection capability is the most limiting factor with regard to the ability of in-service inspections to reduce leak probabilities. However, gross sizing errors or significant relaxations of current flaw acceptance standards could negate the benefits of outstanding probability of detection capabilities.
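
A heavily simplified sketch of the sampling structure described above: Latin hypercube samples over initial flaw size and detection probability, a crude constant-ΔK Paris-law growth step, and a leak check for flaws that grow through-wall and are missed at inspection. All parameter values are illustrative assumptions; the paper's model also treats flaw sizing errors and acceptance criteria, which are omitted here.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10_000
# Latin hypercube: one stratified uniform sample per dimension, shuffled independently.
u = (np.argsort(rng.random((2, n)), axis=1) + rng.random((2, n))) / n

a0 = 0.5 + 4.5 * u[0]                 # initial flaw depth, mm (assumed range)
pod = 0.5 + 0.5 * u[1]                # probability that in-service inspection detects the flaw

wall = 20.0                           # wall thickness, mm
C, m, dK = 1e-8, 3.0, 25.0            # assumed Paris-law constants and stress-intensity range
cycles = 1e5

a_final = a0 + C * dK**m * cycles     # simplistic crack growth at constant dK
detected = rng.random(n) < pod        # inspection outcome for each sampled flaw
leaks = (a_final >= wall) & ~detected # leaks: flaws that grow through-wall and were missed

print(f"estimated leak probability: {leaks.mean():.4f}")
```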


Mathematics
2021
Vol 9 (8)
pp. 875
Author(s):
Jesus Cerquides
Mehmet Oğuz Mülâyim
Jerónimo Hernández-González
Amudha Ravi Shankar
Jose Luis Fernandez-Marquez

Over the last decade, hundreds of thousands of volunteers have contributed to science by collecting or analyzing data. This public participation in science, also known as citizen science, has contributed to significant discoveries and led to publications in major scientific journals. However, little attention has been paid to data quality issues. In this work we argue that being able to determine the accuracy of data obtained by crowdsourcing is a fundamental question and we point out that, for many real-life scenarios, mathematical tools and processes for the evaluation of data quality are missing. We propose a probabilistic methodology for the evaluation of the accuracy of labeling data obtained by crowdsourcing in citizen science. The methodology builds on an abstract probabilistic graphical model formalism, which is shown to generalize some already existing label aggregation models. We show how to make practical use of the methodology through a comparison of data obtained from different citizen science communities analyzing the earthquake that took place in Albania in 2019.
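
To make the label-aggregation setting concrete, here is a minimal sketch of a simple "one-coin" annotator-accuracy model fitted with a few EM iterations on synthetic votes. It represents one of the existing aggregation models that the abstract's probabilistic graphical model formalism is said to generalize, not the authors' methodology itself; all data are synthetic.

```python
import numpy as np

# Synthetic crowdsourcing data: 200 items with binary labels, 8 volunteers,
# each volunteer with an unknown per-vote accuracy p_j.
rng = np.random.default_rng(3)
n_items, n_vols = 200, 8
true_labels = rng.integers(0, 2, n_items)
accuracy = rng.uniform(0.6, 0.95, n_vols)
votes = np.where(rng.random((n_items, n_vols)) < accuracy,
                 true_labels[:, None], 1 - true_labels[:, None])

p = np.full(n_vols, 0.7)                       # initial accuracy estimates
for _ in range(20):                            # EM iterations
    # E-step: posterior probability that each item's latent label is 1 (uniform prior)
    log1 = np.log(np.where(votes == 1, p, 1 - p)).sum(axis=1)
    log0 = np.log(np.where(votes == 0, p, 1 - p)).sum(axis=1)
    q = 1.0 / (1.0 + np.exp(log0 - log1))
    # M-step: update each volunteer's accuracy from expected agreement with the latent label
    agree = q[:, None] * (votes == 1) + (1 - q)[:, None] * (votes == 0)
    p = agree.mean(axis=0)

est = (q > 0.5).astype(int)
print(f"aggregated-label accuracy vs. ground truth: {(est == true_labels).mean():.2f}")
```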


2020
Vol 20 (1)
Author(s):
Vincent Vandewalle
Alexandre Caron
Coralie Delettrez
Renaud Périchon
Sylvia Pelayo
...

Abstract
Background: Usability testing of medical devices is mandatory for market access. The testing's goal is to identify usability problems that could cause harm to the user or limit the device's effectiveness. In practice, human factor engineers study participants under actual conditions of use and list the problems encountered. This results in a binary discovery matrix in which each row corresponds to a participant and each column corresponds to a usability problem. One of the main challenges in usability testing is estimating the total number of problems, in order to assess the completeness of the discovery process. Today's margin-based methods fit the column sums to a binomial model of problem detection. However, the discovery matrix actually observed is truncated because of undiscovered problems, which corresponds to fitting the marginal sums without the zeros. Margin-based methods fail to overcome the bias related to truncation of the matrix. The objective of the present study was to develop and test a matrix-based method for estimating the total number of usability problems.
Methods: The matrix-based model was based on the full discovery matrix (including unobserved columns) and not solely on a summary of the data (e.g. the margins). This model also circumvents a drawback of margin-based methods by simultaneously estimating the model's parameters and the total number of problems. Furthermore, the matrix-based method takes account of a heterogeneous probability of detection, which reflects a real-life setting. As suggested in the usability literature, we assumed that the probability of detection had a logit-normal distribution.
Results: We assessed the matrix-based method's performance in a range of settings reflecting real-life usability testing and with heterogeneous probabilities of problem detection. In our simulations, the matrix-based method improved the estimation of the number of problems (in terms of bias, consistency, and coverage probability) in a wide range of settings. We also applied our method to five real datasets from usability testing.
Conclusions: Estimation models (and particularly matrix-based models) are of value in estimating and monitoring the detection process during usability testing. Matrix-based models have a solid mathematical grounding and, with a view to facilitating the decision-making process for both regulators and device manufacturers, should be incorporated into current standards.
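
The sketch below illustrates the discovery-matrix setting described in the abstract: a binary participant-by-problem matrix is simulated with heterogeneous, logit-normal detection probabilities, and the truncation effect (problems never discovered simply drop out of the observed matrix) is made visible. It does not implement the authors' estimator; all values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
n_participants, n_problems = 15, 40
# Heterogeneous per-problem detection probabilities, drawn logit-normal as in the abstract.
p_detect = 1.0 / (1.0 + np.exp(-rng.normal(loc=-1.5, scale=1.0, size=n_problems)))

# Full discovery matrix: rows are participants, columns are usability problems.
full_matrix = (rng.random((n_participants, n_problems)) < p_detect).astype(int)
# Truncated matrix actually recorded: columns never detected by anyone are unobservable.
observed = full_matrix[:, full_matrix.sum(axis=0) > 0]

print(f"true number of problems:      {n_problems}")
print(f"problems actually discovered: {observed.shape[1]}")
# A matrix-based model fits the full matrix (including the unobserved columns) to
# estimate how many problems were never discovered; here we only expose the gap.
```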


2021
Author(s):
Onome Scott-Emuakpor
Tommy George
Brian Ruynon
Andrew Goldin
Casey Holycross
...

2021
Vol 143 (4)
Author(s):
Yinsheng Li
Genshichiro Katsumata
Koichi Masaki
Shotaro Hayashi
Yu Itabashi
...

Abstract
Nowadays, probabilistic fracture mechanics (PFM) is recognized as a promising methodology for structural integrity assessments of aged pressure-boundary components of nuclear power plants, because it can rationally represent the influencing parameters in their inherent probabilistic distributions without excessive conservatism. The PFM analysis code PASCAL (PFM Analysis of Structural Components in Aging Light water reactors) has been developed by the Japan Atomic Energy Agency to evaluate the through-wall cracking frequencies of domestic reactor pressure vessels (RPVs), considering neutron irradiation embrittlement and pressurized thermal shock (PTS) transients. In addition, efforts have been made to strengthen the applicability of PASCAL to structural integrity assessments of domestic RPVs against non-ductile fracture. A series of activities has been performed to verify the applicability of PASCAL. As part of these verification activities, a working group was established, with seven organizations from industry, universities, and institutes voluntarily participating as members. Through one year of activities, the applicability of PASCAL for structural integrity assessments of domestic RPVs was confirmed with high confidence. This paper presents the details of the working group's verification activities, including the verification plan, approaches, and results.
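
As a rough, hedged illustration of the kind of calculation a PFM code such as PASCAL performs, the sketch below estimates a conditional crack-initiation probability for a PTS transient by Monte Carlo, comparing sampled applied stress intensity factors against a scattered, ASME-like lower-bound fracture toughness shifted by an assumed embrittlement correlation. All correlations and values are simplified assumptions made for illustration, not PASCAL's models.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 100_000

fluence = rng.lognormal(mean=0.0, sigma=0.3, size=n)      # normalized neutron fluence (assumed)
rt_ndt_shift = 50.0 * np.sqrt(fluence)                    # assumed embrittlement shift, degC
rt_ndt = 20.0 + rt_ndt_shift                              # shifted reference temperature, degC

temp = 80.0                                                # assumed crack-tip metal temperature, degC
k_applied = rng.normal(loc=55.0, scale=10.0, size=n)       # applied K_I during the transient, MPa*sqrt(m)

# Assumed ASME-like lower-bound K_Ic curve, capped, with lognormal scatter.
k_ic = np.minimum(36.5 + 22.8 * np.exp(0.036 * (temp - rt_ndt)), 220.0)
k_ic *= rng.lognormal(mean=0.0, sigma=0.15, size=n)

p_initiation = np.mean(k_applied > k_ic)
print(f"conditional probability of crack initiation: {p_initiation:.4f}")
```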


Author(s):
N. A. Leggatt
R. J. Dennis
P. J. Bouchard
M. C. Smith

Numerical methods have been established to simulate welding processes. Of particular interest is the ability to predict residual stress fields. These fields are often used in support of structural integrity assessments where, when accurately characterised, they have the potential to offer significantly less conservative residual stress profiles than those found in assessment codes such as API 579, BS 7910 and R6. However, accurate predictions of residual stress profiles that compare favourably with measurements do not necessarily imply an accurate prediction of component distortions. This paper presents a series of results that compare predicted distortions for a variety of specimen mock-ups with measurements. A range of specimen thicknesses is studied, including a 4 mm thick DH-36 ferritic plate containing a single bead, a 4 mm thick DH-36 ferritic plate containing fillet welds, a 25 mm thick 316L austenitic plate containing a groove weld and a 35 mm thick Esshete 1250 austenitic disc containing a concentric ring weld. For each component, distortion measurements have been compared with the predicted distortions, with a number of key features being investigated. These include the influence of 'small' versus 'large' strain deformation theory, the ability to predict distortions using simplified analysis methods such as simultaneous bead deposition, and the influence of specimen thickness on the requirement for particular analysis features. The work provides a useful insight into how existing numerical methods used to predict residual stress fields can be utilised to predict the distortions that occur as a result of the welding fabrication process.

