Human Factor Modelling in the Pipeline Industry

Author(s):  
Susan Urra ◽  
Jessica Green

Most pipeline leaks and ruptures can be attributed, at least in part, to human factors. Identifying, measuring, and improving areas of potential human factors issues can therefore greatly decrease the risk of pipeline failure. ‘Human factors’ refers to the study of human characteristics and job experience, job and task design, tools and equipment design, and the work environment, all of which can affect pipeline operations and overall system performance. Enbridge Pipelines has developed a risk assessment model that assesses human factors risk along the company’s nationwide liquid pipeline system. The Human Factors Risk Assessment Model generates a risk score for each aspect of the pipeline as well as an overall risk score, highlighting the business areas of highest concern. Implementation of the model included a pilot study to calibrate it. For the pilot, data were collected from the control center, field, and office locations using several methods, including surveys, interviews, and existing databases. The results from the control room operation surveys indicate that the main areas of human error potential in the control room can be mitigated by decreasing the number of manual calculations operators have to complete and by ensuring operators are not taking on extra work that should be completed by other areas. These workload improvements would decrease the chance of an operator having to complete two or more control operations at the same time. Controlling the amount of phone activity that interferes with monitoring and control operations also offers an opportunity to reduce the potential for human error in the control room. Improvements that can be made in the office to reduce human error potential include the development of a human factors standard and improving the critical procedure observation and management of change systems. Measuring, acknowledging, and mitigating human factors risks at Enbridge will yield a decrease in the risk of pipeline failure across the entire liquid pipeline system.
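The abstract does not describe how per-aspect scores are combined into the overall score. A minimal sketch of one plausible weighted aggregation follows; the aspect names, weights, and 1–5 scoring scale are illustrative assumptions, not the actual Enbridge model.

```python
# Hypothetical weighted aggregation of per-aspect human factors risk scores.
# Aspect names, weights, and the 1-5 scale are assumptions for illustration only.

ASPECT_WEIGHTS = {
    "control_room_workload": 0.30,
    "manual_calculations": 0.25,
    "phone_interruptions": 0.15,
    "procedures_and_moc": 0.20,
    "tools_and_environment": 0.10,
}

def overall_risk_score(aspect_scores: dict) -> float:
    """Combine per-aspect scores (1 = low risk, 5 = high risk) into a weighted overall score."""
    return sum(ASPECT_WEIGHTS[name] * score for name, score in aspect_scores.items())

if __name__ == "__main__":
    scores = {
        "control_room_workload": 4.0,
        "manual_calculations": 3.5,
        "phone_interruptions": 2.0,
        "procedures_and_moc": 3.0,
        "tools_and_environment": 1.5,
    }
    print(f"Overall human factors risk score: {overall_risk_score(scores):.2f}")
```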

Author(s):  
Jerico Perez ◽  
David Weir ◽  
Caroline Seguin ◽  
Refaul Ferdous

Until the end of 2012, Enbridge Pipelines employed an in-house developed indexed (relative) risk assessment algorithm to model its liquid pipeline system. Using this model, Enbridge was able to identify risk control or treatment projects (e.g., valve placement) that could mitigate identified high-risk areas. A changing understanding of the threats faced by a liquid pipeline system and their consequences meant that the model changed year over year, making it difficult to demonstrate annual risk reduction using a relative scoring system. As risk management matured within the company, the expectations on the model also evolved and increased significantly. For example, questions were being asked such as “what risk is acceptable and what risk is not acceptable?”, “what is the true risk of failure for a given pipe section, considering the likelihood of all threats applicable to the pipeline?”, and “is enough being done to reduce these risks to acceptable levels?” To this end, starting in 2012 and continuing through the end of 2013, Enbridge Pipelines developed a quantitative mainline risk assessment model. This tool quantifies both threat likelihood and consequence and offers advantages over the indexed risk assessment model in the following areas:
• Models likely worst-case (P90) rupture scenarios
• Enables independent evaluation of threats and consequences in order to understand the risk drivers
• Produces risk assessment results in uniform units for all consequence criteria and in terms of frequencies of failure for likelihood
• Aggregates likelihood and consequence at varying levels of granularity
• Uses the quantified risk appetite of the organization, allowing defined high, medium, and low risk targets to be set
• Quantifies the amount of risk in dollars/year, facilitating cost-benefit analyses of mitigation efforts and risk reduction activities
• Grounds risk assessment results on changes in product volume-out and receptor sensitivity
• Balances complexity and utility by using enough information and data granularity to capture all factors that have a meaningful impact on risk
Development and implementation of the quantitative mainline risk assessment tool has faced a number of challenges and hurdles. This paper provides an overview of the approach used by Enbridge to develop its quantitative mainline risk assessment model and examines the challenges, lessons learned, and successes achieved in its implementation.
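The abstract does not disclose the model's internals. A minimal sketch of the core idea, expressing risk per segment as failure frequency times consequence in dollars/year and classifying it against risk-appetite thresholds, is shown below; the segment data, threat names, and thresholds are assumptions for illustration.

```python
# Sketch: quantitative risk = failure frequency x consequence ($/year) per pipe segment.
# Segment data, threat frequencies, consequences, and thresholds are hypothetical.
from dataclasses import dataclass

HIGH_THRESHOLD = 100_000    # $/year (assumed risk-appetite boundary)
MEDIUM_THRESHOLD = 10_000   # $/year (assumed)

@dataclass
class Segment:
    name: str
    threat_frequencies: dict   # failures/year per threat (corrosion, cracking, ...)
    consequence_dollars: float # estimated consequence of a rupture on this segment ($)

def segment_risk(seg: Segment) -> float:
    """Aggregate likelihood across threats, then multiply by consequence to get $/year."""
    return sum(seg.threat_frequencies.values()) * seg.consequence_dollars

def classify(risk: float) -> str:
    if risk >= HIGH_THRESHOLD:
        return "high"
    if risk >= MEDIUM_THRESHOLD:
        return "medium"
    return "low"

segments = [
    Segment("KP 100-110", {"corrosion": 2e-4, "cracking": 5e-5, "third_party": 1e-4}, 8e7),
    Segment("KP 110-120", {"corrosion": 1e-5, "cracking": 1e-5, "third_party": 2e-5}, 3e7),
]

for seg in segments:
    r = segment_risk(seg)
    print(f"{seg.name}: {r:,.0f} $/year ({classify(r)})")
```

Expressing every segment in the same $/year units is what makes cost-benefit comparisons of mitigation projects straightforward in this kind of model.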


Author(s):  
Mashrura Musharraf ◽  
Faisal Khan ◽  
Brian Veitch ◽  
Scott MacKinnon ◽  
Syed Imtiaz

This paper presents a quantitative approach to human factors risk analysis during emergency conditions on an offshore petroleum facility located in a harsh environment. Due to the lack of human factors data for emergency conditions, most available human factors risk assessment methodologies are based on expert judgment techniques. Expert judgment is a valuable technique; however, it suffers from vagueness, subjectivity, and incompleteness due to a lack of supporting empirical evidence. These weaknesses are often not accounted for in conventional human factors risk assessment. The available approaches also rest on the unrealistic assumption that human performance shaping (HPS) factors and actions are independent. The focus of this paper is to address the handling of uncertainty associated with expert judgments and to account for the dependency among HPS factors and actions. These outcomes are achieved by integrating Bayesian Networks with Fuzzy and Evidence theories to estimate human error probabilities during different phases of an emergency. To test the applicability of the approach, results are compared with those of an analytical approach. The study demonstrates that the proposed approach is effective in assessing human error probability, which in turn improves the reliability and auditability of human factors risk assessment.
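The abstract does not reproduce the model itself. A minimal, self-contained sketch of the general idea follows: triangular fuzzy expert estimates are defuzzified into conditional probabilities for a tiny Bayesian network over two hypothetical performance shaping factors, and the marginal human error probability is computed by enumeration. The factor names, fuzzy estimates, and priors are assumptions, not the paper's values, and evidence theory is not included in this sketch.

```python
# Sketch: fuzzy expert judgments -> crisp conditional probabilities -> human error probability.
# Factor names, fuzzy estimates, and priors are hypothetical; this is not the paper's model.

def defuzzify(tri):
    """Centroid of a triangular fuzzy number (low, mode, high)."""
    low, mode, high = tri
    return (low + mode + high) / 3.0

# Expert estimates (triangular fuzzy numbers) of P(error | high_stress, good_training)
fuzzy_cpt = {
    (True,  False): (0.30, 0.45, 0.60),  # high stress, poor training
    (True,  True):  (0.10, 0.20, 0.30),  # high stress, good training
    (False, False): (0.05, 0.10, 0.20),  # low stress, poor training
    (False, True):  (0.01, 0.03, 0.05),  # low stress, good training
}
cpt = {state: defuzzify(tri) for state, tri in fuzzy_cpt.items()}

# Assumed priors on the performance shaping factors during one emergency phase
p_high_stress = 0.7
p_good_training = 0.6

# Marginal human error probability by enumeration over the two parent factors
p_error = sum(
    cpt[(stress, training)]
    * (p_high_stress if stress else 1 - p_high_stress)
    * (p_good_training if training else 1 - p_good_training)
    for stress in (True, False)
    for training in (True, False)
)
print(f"Estimated human error probability: {p_error:.3f}")
```

Repeating the calculation with phase-specific priors and conditional tables is how such a model yields different error probabilities for the successive phases of an emergency.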


2010 ◽  
Vol 151 (34) ◽  
pp. 1365-1374 ◽  
Author(s):  
Marianna Dávid ◽  
Hajna Losonczy ◽  
Miklós Udvardy ◽  
Zoltán Boda ◽  
György Blaskó ◽  
...  

Hospitalized surgical and medical patients carry a significant risk of venous thromboembolism. Without prophylaxis, depending on the type of surgery, deep vein thrombosis or pulmonary embolism develops in 15–60% of patients undergoing surgical procedures, and the latter remains a leading cause of in-hospital death. Although venous thromboembolism is most often associated with recent surgery or trauma, 50–70% of symptomatic thromboembolic events and 70–80% of fatal pulmonary embolisms occur in non-surgical patients. According to international and Hungarian surveys, the majority of high-risk surgical patients receive the necessary thromboprophylaxis. However, a substantial proportion of at-risk medical patients remain without prophylaxis, despite consensus-based international and national guideline recommendations. Among medical patients, the proportion receiving prophylaxis must be increased, and it must be ensured that patients at risk of thrombosis receive effective prevention. Assessing a patient's thrombosis risk is an important tool for identifying patients threatened by venous thromboembolism; it facilitates the decision to order prophylaxis and improves adherence to guideline recommendations. When a thrombosis risk is identified, prophylaxis should be applied unless contraindicated. The 4th Hungarian antithrombotic guideline, „A thromboemboliák kockázatának csökkentése és kezelése” (“Reducing and managing the risk of thromboembolism”), draws attention to the need for venous thrombosis risk assessment and is the first to include a risk questionnaire for hospitalized medical and surgical patients. We present the risk assessment questionnaires and review the evidence-based data on the risk factors included in them.


Author(s):  
C.K. Lakshminarayan ◽  
S. Pabbisetty ◽  
O. Adams ◽  
F. Pires ◽  
M. Thomas ◽  
...  

This paper deals with the basic concepts of Signature Analysis and the application of statistical models for its implementation. It develops a scheme for computing sample sizes when failures are random. It also introduces statistical models that account for correlations among parts that fail due to the same failure mechanism. The idea of correlation is important because semiconductor chips are processed in batches; any risk assessment model should also account for correlations over time. The statistical models developed provide the sample sizes required for the Failure Analysis lab to state, "We are A% confident that B% of future parts will fail due to the same signature." The paper provides tables and graphs for the evaluation of such a risk assessment. The implementation of Signature Analysis will achieve the dual objective of improved customer satisfaction and reduced cycle time. The paper also highlights its applicability as well as the essential elements that need to be in place for it to be effective. Several examples illustrate how the concept is used by Failure Analysis (FA) Operations and Customer Quality and Reliability Engineering groups.
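The paper's tables for correlated failures are not reproduced in the abstract. A minimal sketch of the simplest case is shown below: assuming independent failures and a zero-exception criterion (all n analyzed failures exhibit the same signature), the smallest n supporting the statement "we are A% confident that at least B% of future failing parts share this signature" follows from requiring B%^n ≤ 1 − A%.

```python
# Sketch: sample size for "A% confident that >= B% of future failing parts share this
# signature", assuming independent failures and that all n analyzed failures match
# the signature (zero exceptions). The paper's correlated-failure models are not
# reproduced here.
import math

def required_sample_size(confidence: float, proportion: float) -> int:
    """Smallest n such that proportion**n <= 1 - confidence."""
    return math.ceil(math.log(1.0 - confidence) / math.log(proportion))

if __name__ == "__main__":
    for conf, prop in [(0.90, 0.80), (0.95, 0.80), (0.90, 0.90), (0.95, 0.90)]:
        n = required_sample_size(conf, prop)
        print(f"{conf:.0%} confident that >= {prop:.0%} share the signature: n = {n}")
```

For example, 11 consecutive matching failure analyses support a 90% confidence claim that at least 80% of future failing parts share the signature under these assumptions; correlation within process batches would increase the required sample size.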


2013 ◽  
Vol 19 (3) ◽  
pp. 521-527 ◽  
Author(s):  
Song YANG ◽  
Shuqin WU ◽  
Ningqiu LI ◽  
Cunbin SHI ◽  
Guocheng DENG ◽  
...  
