Machine-Learning Methods in Prognosis of Ageing Phenomena in Nuclear Power Plant Components

2021 ◽  
pp. 11-21
Author(s):  
Miki Sirola ◽  
John Einar Hulsund

In the Long-Term Degradation Management (LTDM) project we approach component-ageing problems with data-analysis methods, building on a literature review of related work. We have used several data sources: water-chemistry data from the Halden reactor, simulator data from the HAMBO simulator, and data from a local coffee machine instrumented with sensors. K-means clustering is used for cluster analysis of nuclear power plant data, and a method for detecting trends in selected clusters is developed. Prognosis models are developed and tested; our analysis uses ARIMA models and gamma processes, focusing on tasks such as classification and time-series prediction. The methodologies are tested in experiments, and the practical applications are implemented in the Python 3 programming language using the Jupyter Notebook tool. Rising failure rates and drifts from normal operating states can be the first symptoms of an approaching fault; the difficulty is finding data sources with enough transients and events to create prognostic models. We demonstrate prognosis models that use machine-learning methods, or closely related methods, to predict possible developing ageing features in nuclear power plant data.
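As a loose illustration of the clustering step described in this abstract, below is a minimal, dependency-free k-means sketch in Python 3. The data points, cluster count, and iteration budget are hypothetical stand-ins for the plant sensor data used in the project, not the authors' actual pipeline.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal k-means for small 2-D sensor datasets (illustrative only)."""
    random.seed(seed)
    centroids = random.sample(points, k)
    for _ in range(iters):
        # assign each point to its nearest centroid (squared Euclidean distance)
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda i: (p[0] - centroids[i][0]) ** 2
                                + (p[1] - centroids[i][1]) ** 2)
            clusters[j].append(p)
        # recompute each centroid as the mean of its cluster (keep old one if empty)
        for i, c in enumerate(clusters):
            if c:
                centroids[i] = (sum(p[0] for p in c) / len(c),
                                sum(p[1] for p in c) / len(c))
    return centroids, clusters

# two well-separated hypothetical operating states
data = [(0.1, 0.2), (0.2, 0.1), (0.0, 0.0),
        (5.0, 5.1), (5.2, 4.9), (4.8, 5.0)]
centroids, clusters = kmeans(data, k=2)
```

In a setting like the one described, a drift of points away from an established cluster centroid over time would be the kind of trend the project's trend-detection method looks for.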


Author(s):  
A. Petrov ◽  
K. Danilovskiy ◽  
K. Sukhorukova ◽  
A. Leonenko ◽  
...  


Author(s):  
Ronald Boring ◽  
Thomas Ulrich ◽  
Roger Lew ◽  
Martin Rasmussen Skogstad

The authors have recently developed a microworld, a simplified process-control simulator, to simulate a nuclear power plant. The microworld provides an environment that can be readily manipulated to gather data from a range of participants, from students to fully qualified operators. Because the microworld represents a simplified domain, it allows more precise experimental control than the complex and confounding environment afforded by a full-scope simulator. In this paper, we discuss collecting human reliability data from a microworld. We review the generalizability of human-error data from the microworld relative to other data sources, such as full-scope simulator studies, and weigh the advantages and disadvantages of microworld simulator studies for supporting human reliability data collection needs.



2017 ◽  
Author(s):  
Donald D. Lucas ◽  
Matthew D. Simpson ◽  
Philip Cameron-Smith ◽  
Ronald L. Baskett

Abstract. Probability distribution functions (PDFs) of model inputs that affect the transport and dispersion of a trace gas released from a coastal California nuclear power plant are quantified using ensemble simulations, machine learning algorithms, and Bayesian inversion. The PDFs are constrained by observations of tracer concentrations and account for uncertainty in meteorology, transport, diffusion, and emissions. Meteorological uncertainty is calculated using an ensemble of simulations of the Weather Research and Forecasting (WRF) model that samples five categories of model inputs (initialization time, boundary layer physics, land surface model, nudging options, and reanalysis data). The WRF output is used to drive tens of thousands of FLEXPART dispersion simulations that sample a uniform distribution of six emissions inputs. Machine learning algorithms are trained on the ensemble data and used to quantify the sources of ensemble variability and to infer, via inverse modeling, the values of the 11 model inputs most consistent with tracer measurements. We find a substantial ensemble spread in tracer concentrations (factors of 10 to 10^3), most of which is due to changing emissions inputs (about 80 %), though the cumulative effects of meteorological variations are not negligible. The performance of the inverse method is verified using synthetic observations generated from arbitrarily selected simulations. When applied to measurements from a controlled tracer release experiment, the most likely inversion results are within about 200 meters of the known release location, within 5 and 50 minutes of the release start and duration times, respectively, and within 22 % of the release amount. The inversion also estimates the probability that different combinations of WRF inputs match the tracer observations.
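The ensemble-to-surrogate-to-inversion loop this abstract describes can be caricatured in a few lines of stdlib Python. The linear forward model, the single emissions input, and the grid search are hypothetical simplifications of the WRF/FLEXPART machinery, kept only to show the shape of the workflow.

```python
# forward model stand-in: tracer concentration as a function of one emissions input
def forward(release_rate):
    return 3.0 * release_rate + 1.0

# 1. build an "ensemble": sample the input uniformly and record the output
ensemble = [(x / 10.0, forward(x / 10.0)) for x in range(0, 101)]

# 2. fit a cheap surrogate (ordinary least-squares line) to the ensemble runs
n = len(ensemble)
sx = sum(x for x, _ in ensemble); sy = sum(y for _, y in ensemble)
sxx = sum(x * x for x, _ in ensemble); sxy = sum(x * y for x, y in ensemble)
a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
b = (sy - a * sx) / n

# 3. invert: pick the input whose surrogate prediction best matches the observation
observed = forward(4.2)  # synthetic observation from a known "true" input
best = min((x for x, _ in ensemble), key=lambda x: abs((a * x + b) - observed))
```

The real study replaces step 2 with machine-learning surrogates over 11 inputs and step 3 with Bayesian inversion, but the idea of recovering inputs by matching surrogate output to observations is the same.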



2014 ◽  
Vol 61 ◽  
pp. 377-380 ◽  
Author(s):  
Peng Tan ◽  
Ji Xia ◽  
Cheng Zhang ◽  
Qingyan Fang ◽  
Gang Chen


Nafta-Gaz ◽  
2019 ◽  
Vol 75 (2) ◽  
pp. 111-117
Author(s):  
Andrzej Paliński

The paper presents contemporary trends in artificial intelligence and machine learning, including artificial neural networks, decision trees, and fuzzy logic systems, among others. Computational intelligence methods are part of the field of research on artificial intelligence. Selected methods of computational intelligence were used to build medium-term monthly forecasts of natural gas demand for Poland. The accuracy of forecasts obtained with an artificial neural network and a decision tree was compared against classical linear regression on historical data from a ten-year period. The explanatory variables were gas consumption in other EU countries, average monthly temperature, industrial production, wages in the economy, and the price of natural gas. Forecasting was carried out in five stages that differed in the choice of training and test samples, the use of data preprocessing, and the elimination of some variables. For raw data and a random training set, linear regression achieved the highest accuracy; for preprocessed data and a random training set, the decision tree was the most accurate. A forecast fitted on the first eight years and tested on the last two was most accurate with regression, though only slightly better than the decision tree or neural network, regardless of data normalization and elimination of collinear variables. The machine learning methods showed good accuracy for monthly gas consumption forecasts but nevertheless fell slightly short of classical linear regression, owing to the narrow set of explanatory variables. Machine learning methods should become more effective as the amount of data grows and the set of potential explanatory variables is expanded. In a sea of data, machine learning methods can build prognostic models more efficiently, without the analyst's laborious involvement in data preparation and multi-stage analysis. They will also allow the prognostic models to be updated frequently, even after each addition of new data to the database.
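To make the decision-tree idea concrete, here is a toy one-split regression tree (a "stump") fitted to hypothetical monthly temperature/consumption pairs. The numbers are invented, and the real study used a much richer set of explanatory variables (EU consumption, industrial production, wages, gas price), so this is only a sketch of the mechanism.

```python
# hypothetical monthly data: (average temperature in degC, gas consumption)
history = [(-2, 980), (1, 910), (6, 700), (12, 520), (17, 400), (20, 360),
           (21, 350), (19, 380), (14, 480), (8, 640), (3, 840), (-1, 950)]

def best_stump(data):
    """One-split regression tree: choose the threshold minimizing squared error."""
    best = None
    for t in sorted({x for x, _ in data})[1:]:
        left = [y for x, y in data if x < t]
        right = [y for x, y in data if x >= t]
        ml, mr = sum(left) / len(left), sum(right) / len(right)
        sse = sum((y - ml) ** 2 for y in left) + sum((y - mr) ** 2 for y in right)
        if best is None or sse < best[0]:
            best = (sse, t, ml, mr)
    return best[1], best[2], best[3]

threshold, low_mean, high_mean = best_stump(history)

def forecast(temp):
    # colder months fall in the high-consumption leaf
    return low_mean if temp < threshold else high_mean
```

A full decision tree recurses this split on each leaf; linear regression, by contrast, would fit one global slope to the same data, which is why the two methods can trade places in accuracy depending on preprocessing.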



2021 ◽  
Vol 247 ◽  
pp. 02039
Author(s):  
LI Zeguang ◽  
Jun Sun ◽  
Chunlin Wei ◽  
Zhe Sui ◽  
Xiaoye Qian

With the increasing need for accurate simulation, a 3-D diffusion reactor-physics module has been implemented in the HTGR engineering simulator to give better neutron-dynamics results than the point-kinetics model used in previous nuclear power plant simulators. To meet the simulator's real-time requirement, the cross-sections used by the 3-D diffusion module must be calculated very efficiently. Normally, each cross-section in a simulator is represented as a polynomial function of several variables of interest, with coefficients obtained by multivariate regression over a large scattered database generated by previous calculations. Since the polynomial is explicit and prepared in advance, the cross-sections can be computed quickly enough in the running simulator and achieve acceptable accuracy, especially in LWR simulations. However, some of the relevant variables in an HTGR span wide ranges, and their relationships are non-linear and complex, so it is very hard for a polynomial to remain accurate over the full range. In this paper, a cross-section generating method for an HTGR simulator is proposed, based on machine learning methods, specifically deep neural networks and tree regression. The method first uses deep neural networks to capture the non-linear relationships between variables and then uses tree regression to achieve accurate cross-section results over the full range; the parameters of both are learned automatically from the scattered database generated by VSOP. In numerical tests, the proposed method produces more accurate cross-sections, with a calculation time acceptable to the simulator.
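The polynomial baseline this abstract improves on looks roughly like the sketch below: coefficients fitted offline, then a cheap explicit evaluation at runtime. The functional form and every coefficient here are hypothetical, invented purely to illustrate why the approach is fast (and why it struggles when the true dependence is strongly non-linear).

```python
import math

# hypothetical fitted coefficients for one absorption cross-section:
# sigma(Tf, Bu) ~ c0 + c1*sqrt(Tf) + c2*Bu + c3*sqrt(Tf)*Bu
COEFFS = (1.20e-2, -3.0e-5, 4.0e-6, -1.0e-8)

def sigma_a(fuel_temp_K, burnup_MWd_per_t):
    """Runtime evaluation of a precomputed cross-section polynomial."""
    c0, c1, c2, c3 = COEFFS
    s = math.sqrt(fuel_temp_K)
    return c0 + c1 * s + c2 * burnup_MWd_per_t + c3 * s * burnup_MWd_per_t
```

Evaluating a fixed polynomial like this is trivially real-time; the paper's contribution is replacing it, for wide-range HTGR conditions, with a neural-network-plus-tree-regression model trained on the same kind of scattered database.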





2015 ◽  
Vol 137 (6) ◽  
Author(s):  
Annalisa Perasso ◽  
Cristina Campi ◽  
Cristian Toraci ◽  
Francesco Benvenuto ◽  
Michele Piana ◽  
...  

This paper describes a classification method for automatic fault detection in nuclear power plant (NPP) data. The method takes as input time series associated with specific parameters and classifies the signals using a clustering algorithm based on possibilistic C-means (PCM). The approach is applied to time series recorded in a CANDU® power plant and is validated by comparison with results from a classification method based on principal component analysis (PCA).
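The typicality value at the heart of possibilistic C-means has a closed form, u_ij = 1 / (1 + (d_ij² / η_i)^(1/(m−1))), where d_ij is the distance of sample j to center i and η_i is a per-cluster bandwidth. A minimal 1-D sketch, with hypothetical centers and bandwidths (not the paper's actual clusters):

```python
def pcm_typicality(point, centers, eta, m=2.0):
    """Typicality of a 1-D sample with respect to each PCM cluster center."""
    out = []
    for c, e in zip(centers, eta):
        d2 = (point - c) ** 2
        out.append(1.0 / (1.0 + (d2 / e) ** (1.0 / (m - 1.0))))
    return out

# a sample sitting on the first of two centers
t = pcm_typicality(0.0, centers=[0.0, 10.0], eta=[1.0, 1.0])
```

Unlike fuzzy C-means memberships, these typicalities need not sum to 1 across clusters, which is what makes PCM robust to outliers and suitable for flagging anomalous signals.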


