A Novel Approach to Evaluating Leak Detection CPM System Sensitivity/Reliability Performance Trade-Offs

Author(s): Shyam Chadha, Daniel Hung, Samir Rashid

As defined in American Petroleum Institute Recommended Practice 1130 (API RP 1130), CPM system leak detection performance is evaluated on the basis of four distinct but interrelated metrics: sensitivity, reliability, accuracy and robustness. These performance metrics are captured to evaluate performance, manage risk and prioritize mitigation efforts. Evaluating and quantifying the sensitivity of a CPM system is paramount to ensuring its performance is acceptable relative to a company's risk profile for detecting leaks. The testing methodologies recommended in API RP 1130, including parameter manipulation techniques, software-simulated leak tests and/or removal of test quantities of commodity from the pipeline, are excellent approaches to understanding the leak sensitivity metric. Good reliability (false alarm) performance is critical to ensure that control center operators do not become desensitized through long-term exposure to false alarms. Continuously tracking and analyzing the root causes of leak alarms ensures that the effects of seasonal variations or operational changes on CPM system performance are managed appropriately. The complexity of quantifying this metric includes qualitatively evaluating the relevance of false alarms. The interrelated nature of these performance metrics imposes conflicting requirements and results in inherent trade-offs. Optimizing the trade-off between reliability and sensitivity involves identifying the point at which thresholds must be set to balance the desired sensitivity against an acceptable false alarm rate. This paper presents an approach to illustrating the combined sensitivity/reliability performance for an example pipeline. The paper discusses considerations addressed while determining the methodology, such as stakeholder input, ongoing CPM system enhancements, the sensitivity/reliability trade-off, risk-based capital investment and graphing techniques. It also elaborates on a number of identified benefits of the selected overall methodology.
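The threshold trade-off described above can be visualized with a simple sweep. The following is a minimal, hypothetical sketch (synthetic imbalance data and illustrative variable names, not the authors' methodology) showing how sensitivity and false alarm rate move in opposite directions as an alarm threshold is raised:

```python
# Hypothetical threshold sweep illustrating the sensitivity/reliability trade-off.
# The synthetic "imbalance" signals below are illustrative assumptions, not CPM data.
import numpy as np

rng = np.random.default_rng(0)
no_leak = rng.normal(0.0, 1.0, 10_000)   # flow imbalance with no leak present
leak = rng.normal(3.0, 1.0, 500)         # flow imbalance during simulated leak tests

for threshold in np.linspace(0.5, 5.0, 10):
    sensitivity = np.mean(leak > threshold)          # fraction of simulated leaks alarmed
    false_alarm_rate = np.mean(no_leak > threshold)  # fraction of leak-free samples alarmed
    print(f"threshold={threshold:4.2f}  sensitivity={sensitivity:5.2f}  "
          f"false alarm rate={false_alarm_rate:6.4f}")
```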

Author(s): Chris Dawson, Stuart Inkpen, Chris Nolan, David Bonnell

Many different approaches have been adopted for identifying leaks in pipelines. Leak detection systems, however, generally suffer from a number of difficulties and limitations. For existing and new pipelines, these inevitably force significant trade-offs between detection accuracy, operational range, responsiveness, deployment cost, system reliability, and overall effectiveness. Existing leak detection systems frequently rely on the measurement of secondary effects, such as temperature changes, acoustic signatures or flow differences, to infer the existence of a leak. This paper presents an alternative approach to leak detection, employing electromagnetic measurements of the material in the vicinity of the pipeline, that can potentially overcome some of the difficulties encountered with existing approaches. This sensing technique makes direct measurements of the material near the pipeline, resulting in reliable detection and minimal risk of false alarms. The technology has been used successfully in other industries to make critical measurements of materials under challenging circumstances. A number of prototype sensors were constructed using this technology and tested by an independent research laboratory. The test results show that sensors based on this technique exhibit a strong capability to detect oil and to distinguish oil from water (a key challenge with in-situ sensors).


2018, Vol 33 (6), pp. 1501-1511
Author(s): Harold E. Brooks, James Correia

Tornado warnings are one of the flagship products of the National Weather Service. We update the time series of various performance metrics in order to provide baselines over the 1986–2016 period for lead time, probability of detection, false alarm ratio, and warning duration. We have used metrics (mean lead time for tornadoes warned in advance, fraction of tornadoes warned in advance) that work consistently across the official changes in warning-issuance policy, as well as across points in time when unofficial changes took place. The mean lead time for tornadoes warned in advance was relatively constant from 1986 to 2011, while the fraction of tornadoes warned in advance increased through about 2006 and the false alarm ratio slowly decreased. The largest changes in performance took place in 2012, when the default warning duration decreased and an apparent increased emphasis was placed on reducing false alarms. As a result, the lead time, probability of detection, and false alarm ratio all decreased in 2012. Our analysis is based, in large part, on signal detection theory, which separates the quality of the warning system from the threshold for issuing warnings. Threshold changes lead to trade-offs between false alarms and missed detections. Such changes provide further evidence for shifts in what the warning system as a whole considers important, as well as highlighting the limitations of measuring performance by looking at metrics independently.
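For reference, the metrics tracked here follow the standard contingency-table definitions. The snippet below is an illustrative calculation with made-up counts, not the paper's data:

```python
# Illustrative warning-verification metrics from a made-up contingency count.
hits = 800            # tornadoes warned in advance
misses = 300          # tornadoes that received no advance warning
false_alarms = 1500   # warnings issued with no tornado

pod = hits / (hits + misses)                 # probability of detection
far = false_alarms / (hits + false_alarms)   # false alarm ratio

lead_times_min = [12, 15, 8, 20, 10]         # lead times of warned tornadoes only
mean_lead_time = sum(lead_times_min) / len(lead_times_min)

print(f"POD={pod:.2f}  FAR={far:.2f}  mean lead time={mean_lead_time:.1f} min")
```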


2017, Vol 139 (11), pp. 34-39
Author(s): Vicki Niesen, Melissa Gould

This article explores technological advancements for detecting pipeline leaks. An ideal leak detection system should not only quickly detect both small and large leaks, but also do so reliably, without triggering false alarms. Operations in gas pipelines can differ considerably from those in liquid lines, so the experience gained in one type of line may not be entirely applicable when changing jobs. Fortunately, computer simulators are increasingly sophisticated, enabling operators to become comfortable handling a variety of situations. In December 2015, the American Petroleum Institute released a set of guidelines (RP 1175), written by a representative group of hazardous liquid pipeline operators, that established a framework for leak detection management. The focus of the guidelines is getting pipeline operators to use a risk-based approach in their leak detection programs, with the goal of uncovering leaks quickly and with certainty. The best-case scenario is for leaks not to occur at all, and the industry is making great strides to keep them from happening. The combination of improved technology and risk-based management should enable operators to keep leaks small and contained, and to reduce the impact on the environment as much as possible.


2012, Vol 11 (3), pp. 118-126
Author(s): Olive Emil Wetter, Jürgen Wegge, Klaus Jonas, Klaus-Helmut Schmidt

In most work contexts, several performance goals coexist, and conflicts and trade-offs between them can occur. Our paper is the first to contrast a dual goal for speed and accuracy with a single goal for speed on the same task. The Sternberg paradigm (Experiment 1, n = 57) and the d2 test (Experiment 2, n = 19) were used as performance tasks. In both experiments, speed measures and errors showed that dual as well as single goals increase performance by enhancing memory scanning. However, the single speed goal triggered a speed-accuracy trade-off, favoring speed over accuracy, whereas this was not the case with the dual goal. In difficult trials, dual goals slowed down scanning processes again so that errors could be prevented. This new finding is particularly relevant for security domains, where both aspects have to be managed simultaneously.


2019
Author(s): Anna Katharina Spälti, Mark John Brandt, Marcel Zeelenberg

People often have to make trade-offs. We study three types of trade-offs: 1) "secular trade-offs," where no moral or sacred values are at stake; 2) "taboo trade-offs," where sacred values are pitted against financial gain; and 3) "tragic trade-offs," where sacred values are pitted against other sacred values. Previous research (Critcher et al., 2011; Tetlock et al., 2000) demonstrated that tragic and taboo trade-offs are evaluated not only by their outcomes but also by the time it took to make the choice. We investigate two outstanding questions: 1) whether the effect of decision time differs for evaluations of decisions compared to decision makers, and 2) whether moral contexts are unique in their ability to influence character evaluations through decision process information. In two experiments (total N = 1434) we find that decision time affects character evaluations, but not evaluations of the decision itself. There were no significant differences between tragic trade-offs and secular trade-offs, suggesting that the structure of the decision may be more important in evaluations than the moral context. Additionally, the magnitude of the effect suggests that decision time may be of less practical use than expected. We thus urge a closer examination of the processes underlying decision time and its perception.


2019
Author(s): Kasper Van Mens, Joran Lokkerbol, Richard Janssen, Robert de Lange, Bea Tiemens

BACKGROUND: It remains a challenge to predict which treatment will work for which patient in mental healthcare. OBJECTIVE: In this study we compare machine learning algorithms that predict, during treatment, which patients will not benefit from brief mental health treatment, and we present trade-offs that must be considered before an algorithm can be used in clinical practice. METHODS: Using an anonymized dataset containing routine outcome monitoring data from a mental healthcare organization in the Netherlands (n = 2,655), we applied three machine learning algorithms to predict treatment outcome. The algorithms were internally validated with cross-validation on a training sample (n = 1,860) and externally validated on an unseen test sample (n = 795). RESULTS: The performance of the three algorithms did not differ significantly on the test set. With a default classification cut-off at 0.5 predicted probability, the extreme gradient boosting algorithm showed the highest positive predictive value (PPV) of 0.71 (0.61–0.77), with a sensitivity of 0.35 (0.29–0.41) and an area under the curve of 0.78. A trade-off can be made between PPV and sensitivity by choosing different cut-off probabilities. With a cut-off at 0.63, the PPV increased to 0.87 and the sensitivity dropped to 0.17. With a cut-off at 0.38, the PPV decreased to 0.61 and the sensitivity increased to 0.57. CONCLUSIONS: Machine learning can be used to predict treatment outcomes based on routine monitoring data. This allows practitioners to choose their own trade-off between being selective and more certain versus inclusive and less certain.
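The cut-off trade-off reported above can be reproduced in spirit with any classifier that outputs probabilities. The sketch below uses synthetic data and scikit-learn's gradient boosting as a stand-in; the study's actual data, features, and model configuration are not represented here:

```python
# Minimal sketch of trading PPV against sensitivity by moving the classification
# cut-off; the data and model are placeholders, not the study's pipeline.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import precision_score, recall_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, weights=[0.7, 0.3], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
proba = model.predict_proba(X_test)[:, 1]

for cutoff in (0.38, 0.50, 0.63):
    pred = (proba >= cutoff).astype(int)
    ppv = precision_score(y_test, pred)   # positive predictive value
    sens = recall_score(y_test, pred)     # sensitivity
    print(f"cut-off={cutoff:.2f}  PPV={ppv:.2f}  sensitivity={sens:.2f}")
```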


Author(s): Steven Bernstein

This commentary discusses three challenges for the promising and ambitious research agenda outlined in the volume. First, it interrogates the volume’s attempts to differentiate political communities of legitimation, which may vary widely in composition, power, and relevance across institutions and geographies, with important implications not only for who matters, but also for what gets legitimated, and with what consequences. Second, it examines avenues to overcome possible trade-offs from gains in empirical tractability achieved through the volume’s focus on actor beliefs and strategies. One such trade-off is less attention to evolving norms and cultural factors that may underpin actors’ expectations about what legitimacy requires. Third, it addresses the challenge of theory building that can link legitimacy sources, (de)legitimation practices, audiences, and consequences of legitimacy across different types of institutions.


Sensors, 2021, Vol 21 (5), pp. 1643
Author(s): Ming Liu, Shichao Chen, Fugang Lu, Mengdao Xing, Jingbiao Wei

For target detection in complex scenes of synthetic aperture radar (SAR) images, false alarms in land areas are hard to eliminate, especially those near the coastline. To address this problem, an algorithm based on the fusion of multiscale superpixel segmentations is proposed in this paper. First, the SAR image is partitioned using superpixel segmentation at different scales. For the superpixels at each scale, land-sea segmentation is achieved by judging their statistical properties. Then, the land-sea segmentation result obtained at each scale is combined with the result of a constant false alarm rate (CFAR) detector to eliminate false alarms located in the land areas of the SAR image. Finally, to enhance the robustness of the proposed algorithm, the detection results obtained at the different scales are fused together to produce the final target detection. Experimental results on real SAR images verify the effectiveness of the proposed algorithm.
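As a rough sketch of the pipeline described above, the code below segments a SAR amplitude image into superpixels at several scales, labels superpixels as sea by a simple mean-amplitude test, masks a simplified detector with the sea region, and fuses the per-scale results. The thresholds, the global-threshold stand-in for CFAR, and the conservative AND fusion rule are all assumptions for illustration, not the paper's algorithm:

```python
# Sketch of multiscale superpixel land-sea masking fused with a detection map.
# The CFAR stand-in, thresholds, and fusion rule are illustrative assumptions.
import numpy as np
from skimage.segmentation import slic  # channel_axis requires scikit-image >= 0.19

def sea_mask(image, n_segments, land_mean_threshold):
    """Mark superpixels whose mean amplitude falls below the threshold as sea."""
    labels = slic(image, n_segments=n_segments, compactness=0.1, channel_axis=None)
    mask = np.zeros(image.shape, dtype=bool)
    for lab in np.unique(labels):
        region = labels == lab
        if image[region].mean() < land_mean_threshold:  # bright superpixels treated as land
            mask |= region
    return mask

def detect_targets(image, scales=(100, 400, 1600), land_mean_threshold=0.5):
    # Stand-in for a CFAR detector: a global amplitude threshold.
    detections = image > image.mean() + 3 * image.std()
    # Conservative fusion: keep a detection only if it lies in the sea at every scale.
    for n_segments in scales:
        detections &= sea_mask(image, n_segments, land_mean_threshold)
    return detections
```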


2021, Vol 10 (4), pp. 199
Author(s): Francisco M. Bellas Aláez, Jesus M. Torres Palenzuela, Evangelos Spyrakos, Luis González Vilas

This work presents new prediction models based on recent developments in machine learning methods, such as Random Forest (RF) and AdaBoost, and compares them with more classical approaches, i.e., support vector machines (SVMs) and neural networks (NNs). The models predict Pseudo-nitzschia spp. blooms in the Galician Rias Baixas. This work builds on a previous study by the authors (doi.org/10.1016/j.pocean.2014.03.003) but uses an extended database (from 2002 to 2012) and new algorithms. Our results show that RF and AdaBoost provide better prediction results than SVMs and NNs, with improved performance metrics and a better balance between sensitivity and specificity. The classical machine learning approaches show higher sensitivities, but at the cost of lower specificity and higher percentages of false alarms (lower precision). These results seem to indicate that the newer algorithms (RF and AdaBoost) adapt better to unbalanced datasets. Our models could be operationally implemented to establish a short-term prediction system.
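A minimal sketch of this kind of comparison is shown below, using an imbalanced synthetic dataset and default scikit-learn models as placeholders; the bloom dataset, features, and tuning used in the study are not reproduced here:

```python
# Illustrative comparison of RF, AdaBoost, SVM, and NN on an imbalanced synthetic
# dataset, reporting sensitivity and specificity; all settings are placeholders.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=3000, weights=[0.9, 0.1], random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=1)

models = {
    "RF": RandomForestClassifier(random_state=1),
    "AdaBoost": AdaBoostClassifier(random_state=1),
    "SVM": SVC(),
    "NN": MLPClassifier(max_iter=1000, random_state=1),
}

for name, model in models.items():
    pred = model.fit(X_train, y_train).predict(X_test)
    tn, fp, fn, tp = confusion_matrix(y_test, pred).ravel()
    print(f"{name:8s} sensitivity={tp / (tp + fn):.2f}  specificity={tn / (tn + fp):.2f}")
```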


Author(s): Lisa Best, Kimberley Fung-Loy, Nafiesa Ilahibaks, Sara O. I. Ramirez-Gomez, Erika N. Speelman

Nowadays, tropical forest landscapes are commonly characterized by a multitude of interacting institutions and actors with competing land-use interests. In these settings, indigenous and tribal communities are often marginalized in landscape-level decision making. Inclusive landscape governance inherently integrates diverse knowledge systems, including those of indigenous and tribal communities. Increasingly, geo-information tools are recognized as appropriate tools to integrate diverse interests and legitimize the voices, values, and knowledge of indigenous and tribal communities in landscape governance. In this paper, we present the contribution of the integrated application of three participatory geo-information tools to inclusive landscape governance in the Upper Suriname River Basin in Suriname: (i) Participatory 3-Dimensional Modelling, (ii) the Trade-off! game, and (iii) participatory scenario planning. The participatory 3-dimensional modelling enabled easy participation of community members, documentation of traditional, tacit knowledge and social learning. The Trade-off! game stimulated capacity building and understanding of land-use trade-offs. The participatory scenario planning exercise helped landscape actors to reflect on their own and others’ desired futures while building consensus. Our results emphasize the importance of systematically considering tool attributes and key factors, such as facilitation, for participatory geo-information tools to be optimally used and fit with local contexts. The results also show how combining the tools helped to build momentum and led to diverse yet complementary insights, thereby demonstrating the benefits of integrating multiple tools to address inclusive landscape governance issues.

