Approach for the Evaluation of the Impact of Potential Software Failures in Software-Based Instrumentation and Control (I&C) Equipment in Nuclear Power Plants

Author(s):  
Hervé Mbonjo ◽  
Manuela Jopen ◽  
Birte Ulrich ◽  
Dagmar Sommer

In this paper we present an approach for evaluating the impact of software failures in software-based I&C systems of NPPs. The proposed approach comprises two steps. In the first step, software failure modes are identified on the basis of a review of operating experience gained with software-based I&C systems and equipment. All probable software failures in software-based I&C systems are identified and classified according to, e.g., the affected system, the observed software failure mode, and their actual and potential safety relevance. In the second step, the potential impact of the identified safety-relevant software failure modes on a software-based I&C system is evaluated. The evaluation is performed by means of a failure mode and effects analysis (FMEA) using a generic model of the software-based I&C system, i.e., software failure modes are postulated in the I&C system and their potential safety-relevant impact is analyzed.
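
As an illustration of the second step, a minimal sketch of how such an FMEA worksheet could be organized is given below, assuming a generic I&C model decomposed into three modules and a simple failure-mode taxonomy; the module names and failure modes are illustrative placeholders, not the classification used in the paper.

```python
from dataclasses import dataclass
from itertools import product

# Illustrative module decomposition and failure-mode taxonomy; these names are
# assumptions for this sketch, not the classification used in the paper.
MODULES = ["signal acquisition", "processing unit", "voting/output unit"]
FAILURE_MODES = ["spurious actuation", "failure on demand",
                 "frozen output", "delayed output", "corrupted output"]

@dataclass
class FmeaEntry:
    module: str            # element of the generic I&C model
    failure_mode: str      # postulated software failure mode
    local_effect: str      # effect within the I&C system (filled in by the analyst)
    plant_effect: str      # potential safety-relevant effect at plant level
    safety_relevant: bool

def build_fmea_worksheet():
    """Postulate every (module, failure mode) pair as a row to be assessed."""
    return [FmeaEntry(m, fm, local_effect="tbd", plant_effect="tbd",
                      safety_relevant=False)
            for m, fm in product(MODULES, FAILURE_MODES)]

worksheet = build_fmea_worksheet()
print(f"{len(worksheet)} postulated cases to analyse")  # 3 modules x 5 modes = 15
```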

Author(s):  
Bruce Geddes ◽  
Ray Torok

The Electric Power Research Institute (EPRI) is conducting research in cooperation with the Nuclear Energy Institute (NEI) regarding Operating Experience of digital Instrumentation and Control (I&C) systems in US nuclear power plants. The primary objective of this work is to extract insights from US nuclear power plant Operating Experience (OE) reports that can be applied to improve Diversity and Defense in Depth (D3) evaluations and methods for protecting nuclear plants against I&C related Common Cause Failures (CCF) that could disable safety functions and thereby degrade plant safety. Between 1987 and 2007, over 500 OE events involving digital equipment in US nuclear power plants were reported through various channels. OE reports for 324 of these events were found in databases maintained by the Nuclear Regulatory Commission (NRC) and the Institute of Nuclear Power Operations (INPO). A database was prepared for capturing the characteristics of each of the 324 events in terms of when, where, how, and why the event occurred, what steps were taken to correct the deficiency that caused the event, and what defensive measures could have been employed to prevent recurrence of these events. The database also captures the plant system type, its safety classification, and whether or not the event involved a common cause failure. This work has revealed the following results and insights:

- 82 of the 324 “digital” events did not actually involve a digital failure. Of these 82 non-digital events, 34 might have been prevented by making full use of digital system fault tolerance features.
- 242 of the 324 events did involve failures in digital systems. The leading contributors to the 242 digital failures were hardware failure modes. Software change appears as a corrective action twice as often as it appears as an event root cause. This suggests that software features are being added to avoid recurrence of hardware failures, and that adequately designed software is a strong defensive measure against hardware failure modes, preventing them from propagating into system failures and ultimately plant events. 54 of the 242 digital failures involved a Common Cause Failure (CCF).
- 13 of the 54 CCF events affected safety (1E) systems, and only 2 of those were due to Inadequate Software Design. This finding suggests that software related CCFs on 1E systems are no more prevalent than other CCF mechanisms for which adherence to various regulations and standards is considered to provide adequate protection against CCF.

This research provides an extensive data set that is being used to investigate many different questions related to failure modes, causes, corrective actions, and other event attributes that can be compared and contrasted to reveal useful insights. Specific considerations in this study included comparison of 1E vs. non-1E systems, active vs. potential CCFs, and possible defensive measures to prevent these events. This paper documents the dominant attributes of the evaluated events and the associated insights that can be used to improve methods for protecting against digital I&C related CCFs, applying a test of reasonable assurance.
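
The event characterization described above lends itself to a simple tabular representation. The sketch below shows one possible way to encode such OE records and reproduce headline tallies of the kind quoted in the abstract; the field names are assumptions chosen to mirror the attributes listed above, not the actual EPRI/NEI database schema.

```python
from dataclasses import dataclass
from collections import Counter

@dataclass
class OeEvent:
    year: int
    system_type: str
    safety_class: str        # "1E" or "non-1E"
    digital_failure: bool    # did the event involve a failure of digital equipment?
    root_cause: str
    corrective_action: str
    ccf: bool                # active or potential common cause failure

def summarize(events):
    """Produce the kind of headline tallies quoted in the abstract."""
    digital = [e for e in events if e.digital_failure]
    ccf = [e for e in digital if e.ccf]
    return {
        "total": len(events),
        "non_digital": len(events) - len(digital),
        "digital": len(digital),
        "digital_ccf": len(ccf),
        "ccf_on_1E": sum(e.safety_class == "1E" for e in ccf),
        "root_causes": Counter(e.root_cause for e in digital),
    }

# Usage (with a populated list of OeEvent records): print(summarize(my_events))
```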


Author(s):  
Robert Arians ◽  
Simone Arnold ◽  
Christian Mueller ◽  
Claudia Quester ◽  
Dagmar Sommer

The reliability of the auxiliary power supply of a nuclear power plant (NPP) is of high importance for safe operation. The loss of the electrical power supply is one of the major contributors to the calculated core damage frequency in probabilistic safety assessments. Among others, the events in Forsmark in 2006 [1] and 2012 [2] as well as in Byron in 2012 [3] illustrate that disturbances in the external power grid can propagate into the NPP and affect safety-important electrical equipment. Therefore, grid reliability contributes considerably to the reliability of the auxiliary power supply. In the research work presented in this paper, international operating experience has been evaluated with respect to events involving disturbances in the external grid, in order to identify those types of grid disturbances that may affect the safe operation of NPPs. The identified events have then been categorized within a newly developed classification scheme to determine those with the highest relevance. Based on this scheme, representative scenarios of grid disturbances have been developed. The investigation of the impact of the developed scenarios on the electrical equipment of NPPs will be performed using a grid analysis, planning and optimization tool that also allows dynamic simulations of electrical grids [4]. For this purpose, a generalized auxiliary power supply of a pressurized water reactor was modeled based on German NPPs of the Konvoi type. In this paper, an overview of the developed scenarios of grid disturbances and the current status of the simulation of the auxiliary power supply of NPPs is presented.
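
A minimal sketch of how such a classification scheme might be encoded is shown below; the disturbance categories and the screening rule are illustrative assumptions, not the scheme developed in the paper.

```python
# Coarse classification of reported grid-disturbance events. The categories and
# the relevance rule below are illustrative placeholders, not the paper's scheme.
DISTURBANCE_TYPES = [
    "loss of offsite power",
    "voltage dip / undervoltage",
    "overvoltage / surge",
    "frequency deviation",
    "phase failure / asymmetry",
]

def classify(event_type, duration_s, reached_onsite_equipment):
    """Assign a coarse relevance class to a reported grid disturbance."""
    if event_type not in DISTURBANCE_TYPES:
        return "out of scope"
    if reached_onsite_equipment:
        return "high relevance (propagated into the plant)"
    return "medium relevance" if duration_s > 1.0 else "low relevance"

print(classify("phase failure / asymmetry", 30.0, reached_onsite_equipment=True))
```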


Author(s):  
Steven R. Doctor ◽  
Michael T. Anderson

A major thrust in the past 20 years has been to upgrade nondestructive examinations (NDE) for use in inservice inspection (ISI) programs to more effectively manage degradation at operating nuclear power plants. Risk-informed ISI (RI-ISI) is one of the outcomes of this work, and this approach relies heavily on the reliability of NDE, when properly applied, to detect sources of expected degradation. There have been a number of improvements in the reliability of NDE, specifically in ultrasonic testing (UT), through training of examiners and improved equipment and procedure development. However, the most significant improvements in UT were derived from moving from prescriptive requirements to performance-based requirements. Even with these substantial improvements, NDE contains significant uncertainties, and RI-ISI programs need to address and accommodate this factor. As part of the work that PNNL is conducting for the U.S. Nuclear Regulatory Commission, Office of Nuclear Regulatory Research, we are examining the impact of these uncertainties on the effectiveness of RI-ISI programs. One of the primary objectives of inservice inspection, including an RI-ISI program, is to manage potential degradation that may occur but that has not been foreseen through previous operating experience. However, RI-ISI programs in the U.S. are primarily based on history, looking back at past failures in the operating fleet. Therefore, RI-ISI may not adequately manage degradation events that are yet to occur, such as those that may have a long incubation (initiation) time but a potentially fast growth rate. For this reason, RI-ISI will always be reactive to such failure events. Successful ISI needs to determine what NDE is required, when and how frequently it needs to be applied, how effective the NDE must be, and where the NDE needs to be applied. Both flaw detection and accurate characterization need to be addressed. This paper will examine the reliability and uncertainties of NDE, and how these may impact RI-ISI.
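
NDE reliability is commonly summarized by a probability-of-detection (POD) curve. The sketch below evaluates a lognormal POD model of the kind often fitted to UT data; the model form and parameter values are assumptions for illustration, not results from this work.

```python
import math

def pod_lognormal(a, mu, sigma):
    """Probability of detection for flaw size a, assuming a lognormal POD model:
    POD(a) = Phi((ln a - mu) / sigma). Model form and parameters are
    illustrative assumptions, not values from the paper."""
    z = (math.log(a) - mu) / sigma
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Hypothetical UT performance: 50% detection at 5 mm flaw depth, spread sigma = 0.6.
mu, sigma = math.log(5.0), 0.6
for a in (2.0, 5.0, 10.0, 20.0):
    print(f"flaw depth {a:4.1f} mm -> POD = {pod_lognormal(a, mu, sigma):.2f}")
```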


2019 ◽  
Vol 7 (2B) ◽  
Author(s):  
Vanderley Vasconcelos ◽  
Wellington Antonio Soares ◽  
Raissa Oliveira Marques ◽  
Silvério Ferreira Silva Jr ◽  
Amanda Laureano Raso

Non-destructive inspection (NDI) is one of the key elements in ensuring the quality of engineering systems and their safe use. This inspection is a very complex task, during which the inspectors have to rely on their sensory, perceptual, cognitive, and motor skills. It requires high vigilance, since it is often carried out on large components, over long periods of time, in hostile environments, and under workplace restrictions. A successful NDI requires careful planning, the choice of appropriate NDI methods and inspection procedures, as well as qualified and trained inspection personnel. A failure of NDI to detect critical defects in safety-related components of nuclear power plants, for instance, may lead to catastrophic consequences for workers, the public, and the environment. Therefore, ensuring that NDI is reliable and capable of detecting all critical defects is of utmost importance. Despite the increased use of automation in NDI, human inspectors, and thus human factors, still play an important role in NDI reliability. Human reliability is the probability that humans conduct specific tasks with satisfactory performance. Many techniques are suitable for modeling and analyzing human reliability in NDI of nuclear power plant components, such as FMEA (Failure Modes and Effects Analysis) and THERP (Technique for Human Error Rate Prediction). An example is presented in which qualitative and quantitative assessments with these two techniques are used to improve a typical NDI of pipe segments of a core cooling system of a nuclear power plant by acting on human factors issues.
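
A minimal sketch of a THERP-style quantification for such an inspection task is shown below; the subtask breakdown, nominal human error probabilities (HEPs), and performance shaping factors (PSFs) are hypothetical placeholders, not values from the THERP handbook or from the example in the paper.

```python
# THERP-style quantification of an NDI task broken into subtasks.
# The nominal HEPs and PSF multipliers below are hypothetical placeholders.
SUBTASKS = {
    "set up and calibrate equipment": (1e-3, 2.0),   # (nominal HEP, PSF)
    "follow scanning procedure":      (3e-3, 1.5),
    "interpret indication":           (1e-2, 3.0),   # stress, workplace restrictions
    "record and report result":       (1e-3, 1.0),
}

def task_failure_probability(subtasks):
    """P(at least one unrecovered error), assuming independent subtasks whose
    nominal HEPs are scaled by their PSFs."""
    p_success = 1.0
    for name, (hep, psf) in subtasks.items():
        p_success *= 1.0 - min(hep * psf, 1.0)
    return 1.0 - p_success

print(f"overall HEP for the inspection task = {task_failure_probability(SUBTASKS):.3f}")
```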


Author(s):  
Eugene Babeshko ◽  
Ievgenii Bakhmach ◽  
Vyacheslav Kharchenko ◽  
Eugene Ruchkov ◽  
Oleksandr Siora

Operating reliability assessment of instrumentation and control systems (I&Cs) is always one of the most important activities, especially in critical domains such as nuclear power plants (NPPs). The intensive use of relatively new technologies such as field programmable gate arrays (FPGAs) in I&C systems, which appear both in upgrades and in newly built NPPs, makes the development and validation of advanced operating reliability assessment methods that consider specific technology features very topical. Increased integration densities make the reliability of integrated circuits the most crucial point in modern NPP I&Cs. Moreover, FPGAs differ in some significant ways from other integrated circuits: they are shipped as blanks and depend heavily on the design configured into them. Furthermore, an FPGA design may be changed during a planned NPP outage for various reasons. Considering all possible failure modes of FPGA-based NPP I&C at the design stage is quite a challenging task. Therefore, operating reliability assessment is one of the most suitable ways to perform a comprehensive analysis of FPGA-based NPP I&Cs. This paper summarizes our experience with operating reliability analysis of FPGA-based NPP I&Cs.
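
One generic way to exploit operating experience is to estimate a failure rate with a confidence bound from accumulated operating hours. The sketch below uses the standard Poisson/chi-square estimator; the fleet data are hypothetical, and the paper's actual assessment method may differ, for instance by accounting for FPGA design changes between outages.

```python
from scipy.stats import chi2

def failure_rate_estimate(failures, hours, confidence=0.95):
    """Point estimate and one-sided upper confidence bound for a constant
    failure rate, based on observed operating experience (Poisson/chi-square
    estimator). A generic sketch, not the paper's specific method."""
    point = failures / hours
    upper = chi2.ppf(confidence, 2 * (failures + 1)) / (2.0 * hours)
    return point, upper

# Hypothetical fleet data: 2 module failures in 1.2e6 accumulated device-hours.
lam, lam95 = failure_rate_estimate(2, 1.2e6)
print(f"lambda = {lam:.2e} /h, 95% upper bound = {lam95:.2e} /h")
```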


2021 ◽  
Vol 2083 (2) ◽  
pp. 022020
Author(s):  
Jiahuan Yu ◽  
Xiaofeng Zhang

Abstract With the development of the nuclear energy industry and the increasing demand for environmental protection, the impact of nuclear power plant radiation on the environment has gradually entered the public view. This article reviews the radiation environmental management systems for nuclear power plants in several countries. Taking the domestic and foreign management of radioactive effluent discharge from nuclear power plants as a starting point, it analyses and compares the laws and standards related to radioactive effluents from nuclear power plants in France, the United States, China, and South Korea. On this basis, improvements to the management of the radioactive effluent discharge system of Chinese nuclear power plants are discussed.


2021 ◽  
Vol 2021 (11) ◽  
Author(s):  
◽  
Angel Abusleme ◽  
Thomas Adam ◽  
Shakeel Ahmad ◽  
Rizwan Ahmed ◽  
...  

Abstract JUNO is a massive liquid scintillator detector with a primary scientific goal of determining the neutrino mass ordering by studying the oscillated anti-neutrino flux coming from two nuclear power plants at 53 km distance. The expected signal anti-neutrino interaction rate is only 60 counts per day (cpd); therefore, careful control of the background sources due to radioactivity is critical. In particular, natural radioactivity present in all materials and in the environment represents a serious issue that could impair the sensitivity of the experiment if appropriate countermeasures were not foreseen. In this paper we discuss the background reduction strategies undertaken by the JUNO collaboration to minimize the impact of natural radioactivity. We describe our efforts for an optimized experimental design, careful material screening, accurate detector production handling, and constant control of the expected results through a meticulous Monte Carlo simulation program. We show that all these actions should allow us to keep the background count rate safely below the target value of 10 Hz (i.e. ∼1 cpd accidental background) in the default fiducial volume, above an energy threshold of 0.7 MeV.
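
To see how a singles rate of order 10 Hz can translate into roughly 1 cpd of accidental background, the sketch below applies the standard accidental-coincidence estimate for a prompt-delayed (IBD-like) selection; all rates, window lengths, and cut factors are hypothetical placeholders, not JUNO analysis values.

```python
# Order-of-magnitude sketch of the accidental-coincidence background for a
# prompt + delayed selection. All inputs below are hypothetical placeholders.
SECONDS_PER_DAY = 86400.0

def accidental_rate_cpd(r_prompt_hz, r_delayed_hz, window_s, vertex_cut_factor):
    """R_acc = R_prompt * R_delayed * dT * f_vertex: the standard estimate for
    uncorrelated pairs surviving a time-coincidence and vertex-distance cut."""
    return r_prompt_hz * r_delayed_hz * window_s * vertex_cut_factor * SECONDS_PER_DAY

# Hypothetical inputs: 10 Hz of prompt-like singles above threshold, ~1 Hz
# falling in the delayed-energy window, a 1 ms coincidence window, and a
# vertex-distance cut keeping ~1e-3 of random pairs.
print(f"{accidental_rate_cpd(10.0, 1.0, 1.0e-3, 1.0e-3):.1f} accidentals per day")
```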

