Machine Learning-Based Classification and Regression Approach for Sustainable Disaster Management: The Case Study of APR1400 in Korea

2021 ◽  
Vol 13 (17) ◽  
pp. 9712
Author(s):  
Ahmed Abd El-Hameed ◽  
Juyoul Kim

During nuclear accidents, decision-makers need to handle considerable data in order to take appropriate protective actions that protect people and the environment from radioactive material release. In such scenarios, machine learning can be an essential tool in facilitating the protective action decisions made by decision-makers. By feeding machine-learning software with big data to analyze, nuclear accident behavior can be identified, and the types and concentrations of released radioactive materials can be predicted, thus helping in early warning and in protecting people and the environment. In this study, based on the ground deposition concentration of radioactive materials at different off-site distances in the emergency planning zone (EPZ), we proposed classification and regression models for three severe accidents. The objective of the classification model is to recognize the transient situation type so that appropriate actions can be taken, while the objective of the regression model is to estimate the concentrations of the released radioactive materials. We used the Personal Computer Transient Analyzer (PCTRAN) Advanced Power Reactor (APR) 1400 to simulate three severe accident scenarios and to generate the source term released to the environment. Additionally, the Radiological Consequence Analysis Program (RCAP) was used to assess the off-site consequences of nuclear power plant accidents and to estimate the ground deposition concentrations of radionuclides. The ground deposition concentrations at different distances were then used as input data for classification and regression tree (CART) models to obtain an accident pattern and to establish a prediction model. Results showed that the ground deposition concentration at a near distance from the nuclear power plant is the more informative parameter for predicting the concentration of radioactive material release, while the ground deposition concentration at a far distance is the more informative parameter for identifying accident types.
In the regression model, the R-squared values for the training and test data were 0.995 and 0.994, respectively, showing a strong linear relationship between the predicted and actual concentrations of radioactive material release. The mean absolute percentage error was 26.9% for the training data and 28.1% for the test data. In the classification model, the prediction accuracy for scenario 1 was 99.8% and 98.9%, for scenario 2 was 98.4% and 91.6%, and for scenario 3 was 98.6% and 94.7% on the training and test data, respectively.
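The CART workflow the abstract describes can be sketched as follows. This is an illustrative sketch only: the distances, the synthetic deposition data, and the accident labels below are invented stand-ins, not the PCTRAN/RCAP outputs used in the study.

```python
# Sketch: CART-style classification (accident type) and regression (release
# concentration) from ground-deposition features at several distances.
# All data here are synthetic placeholders for the PCTRAN/RCAP outputs.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, DecisionTreeRegressor
from sklearn.metrics import r2_score, accuracy_score

rng = np.random.default_rng(0)
n = 600
scenario = rng.integers(1, 4, size=n)            # accident scenarios 1-3
release = rng.uniform(1e3, 1e6, size=n)          # hypothetical source term (Bq)
distances = np.array([2.0, 5.0, 10.0, 20.0])     # hypothetical distances (km)
# Deposition falls off with distance; the far-field shape differs by scenario.
X = np.column_stack([
    release * np.exp(-d / (3.0 + scenario)) * rng.lognormal(0, 0.1, n)
    for d in distances
])

Xtr, Xte, ytr_c, yte_c, ytr_r, yte_r = train_test_split(
    X, scenario, release, test_size=0.2, random_state=0)

clf = DecisionTreeClassifier(max_depth=5, random_state=0).fit(Xtr, ytr_c)
reg = DecisionTreeRegressor(max_depth=8, random_state=0).fit(Xtr, ytr_r)

print("classification accuracy:", accuracy_score(yte_c, clf.predict(Xte)))
print("regression R^2:", r2_score(yte_r, reg.predict(Xte)))
print("classifier feature importance by distance (km):",
      dict(zip(distances, clf.feature_importances_.round(2))))
```

Inspecting `feature_importances_` for both trees is one way to reproduce the paper's near-distance vs. far-distance observation on real data.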

2018 ◽  
pp. 66-70
Author(s):  
O. V. Taran ◽  
O. G. Sandul

Nuclear energy use is progressively becoming part of the life of every modern person, who increasingly encounters radioactive materials in medical institutions and in industry. Half of all electricity generated in Ukraine is produced by nuclear power plants. The peculiarities of nuclear energy use give rise to corresponding rules for people dealing with radioactive materials. The article analyzes the provisions of the Criminal Code of Ukraine that establish liability for acts related to the illegal handling of radioactive materials: violation of nuclear and radiation safety rules, violation of radiation safety requirements, threats to steal radioactive materials, illicit manufacture of a nuclear explosive device, theft or seizure of radioactive materials, and attacks on means of transporting radioactive materials. The grounds for and peculiarities of bringing persons to criminal liability are reviewed, and the range of persons who can be prosecuted is defined. The conditions and grounds for exemption from criminal liability in the absence of criminal intent to use radioactive material are also considered. It is demonstrated that the Criminal Code of Ukraine, by prohibiting certain actions involving the illegal handling of radioactive materials, ensures the protection of the most important social relations and social benefits.


2020 ◽  
Vol 38 (15_suppl) ◽  
pp. e14069-e14069
Author(s):  
Oguz Akbilgic ◽  
Ibrahim Karabayir ◽  
Hakan Gunturkun ◽  
Joseph F Pierre ◽  
Ashley C Rashe ◽  
...  

e14069 Background: There is growing interest in the links between cancer and the gut microbiome; however, the effect of chemotherapy on the gut microbiome remains unknown. We studied 1) whether machine learning can accurately classify subjects with cancer vs. healthy controls and 2) whether this classification model is affected by chemotherapy exposure status. Methods: We used American Gut Project data to build an extreme gradient boosting (XGBoost) model distinguishing subjects with cancer from healthy controls using simple demographics and published microbiome data. We then further explored the selected features for cancer subjects based on chemotherapy exposure. Results: The cohort included 7,685 subjects, of whom 561 had cancer; 52.5% were female, 87.3% were White, and the average age was 44.7 years (SD 17.7). The binary outcome variable represented cancer status. Among the 561 subjects with cancer, 94 had been treated with chemotherapy agents before microbiome sampling. The predictors comprised four demographic variables (sex, race, age, BMI) and 1,812 operational taxonomic units (OTUs), each found in at least 2 subjects via RNA sequencing. We randomly split the data into an 80% training set and a 20% hidden test set. We then built an XGBoost model with 5-fold cross-validation using only the training data, yielding an AUC (with 95% CI) of 0.79 (0.77, 0.80), and obtained almost the same AUC on the hidden test data. Based on feature importance analysis, we identified the most important features (age, BMI, and 12 OTUs: 4C0d-2, Brachyspirae, Methanosphaera, Geodermatophilaceae, Bifidobacteriaceae, Slackia, Staphylococcus, Acidaminoccus, Devosia, Proteus) and rebuilt a model using only these features, obtaining an AUC of 0.80 (0.77, 0.83) on the hidden test data. The average predicted probabilities for controls, cancer patients exposed to chemotherapy, and cancer patients not exposed were 0.071 (0.070, 0.073), 0.125 (0.110, 0.140), and 0.156 (0.148, 0.164), respectively.
There was no statistically significant difference in the levels of these 12 OTUs between cancer subjects treated with and without chemotherapy. Conclusions: Machine learning achieved moderately high accuracy in identifying patients' cancer status based on the microbiome. Despite the literature on microbiome-chemotherapy interaction, the levels of the 12 OTUs used in our model were not significantly different for cancer patients with or without chemotherapy exposure. Testing this model on other large population databases is needed for broader validation.
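The split-then-cross-validate workflow above can be sketched as below. This is a hedged illustration: scikit-learn's gradient boosting stands in for XGBoost, and the demographic and OTU features are randomly generated placeholders, not American Gut Project data.

```python
# Sketch of the abstract's workflow: 80/20 split with a hidden test set,
# 5-fold cross-validation on the training portion, AUC on the hidden set.
# All features and outcomes below are synthetic stand-ins.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n, n_otus = 2000, 50                       # far fewer OTUs than the 1,812 used
demo = rng.normal(size=(n, 4))             # stand-ins for sex, race, age, BMI
otus = rng.poisson(2, size=(n, n_otus))    # stand-in OTU abundance counts
X = np.hstack([demo, otus])
# Outcome weakly depends on a few features, mimicking a detectable signal.
logit = 0.8 * demo[:, 2] + 0.5 * (otus[:, 0] > 3) - 2.5
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.2, random_state=1)
model = GradientBoostingClassifier(random_state=1)
cv_auc = cross_val_score(model, Xtr, ytr, cv=5, scoring="roc_auc").mean()
model.fit(Xtr, ytr)
test_auc = roc_auc_score(yte, model.predict_proba(Xte)[:, 1])
print(f"5-fold CV AUC: {cv_auc:.2f}, hidden-test AUC: {test_auc:.2f}")
```

Comparing `cv_auc` against `test_auc` is the check the abstract reports when it notes "almost the same AUC" on the hidden test data.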


2016 ◽  
Vol 6 (4) ◽  
pp. 40-48
Author(s):  
Kim Long Pham ◽  
Hao Quang Nguyen ◽  
Duy Hien Pham ◽  
Xuan Anh Do ◽  
Duc Thang Duong ◽  
...  

FLEXPART is a Lagrangian transport and dispersion model suitable for simulating a large range of atmospheric transport processes. FLEXPART has been researched and applied to the simulation of the long-range dispersion of radioactive materials, and it is applicable to the problem of how radioactive materials released from nuclear power plants would affect Vietnam. This report presents simulations of radioactive dispersion from assumed accidents at the Fangchenggang and Changjiang nuclear power plants in China using FLEXPART with meteorological data from the National Centers for Environmental Prediction (NCEP). The simulation and analysis results showed the good applicability of FLEXPART for long-range radioactive material dispersion. The preliminary results show that the impact of the radioactive material dispersion on Vietnam varies with the well-known monsoon characteristics of the country: winter is the period when the dominant northeast winds carry the radioactive dispersion most strongly toward Vietnam, with a sphere of influence extending from the Northeast (Quang Ninh) to the North Central region (Da Nang).


2021 ◽  
Vol 25 (5) ◽  
pp. 1291-1322
Author(s):  
Sandeep Kumar Singla ◽  
Rahul Dev Garg ◽  
Om Prakash Dubey

Recent technological enhancements in the field of information technology and statistical techniques have enabled sophisticated and reliable analysis based on machine learning methods. A number of machine learning tools may be exploited for classification and regression problems. These tools and techniques can be effectively used for highly data-intensive operations such as agricultural and meteorological applications, bioinformatics, and stock market analysis based on daily market prices. Tree-based machine learning methods, namely Decision Tree (C5.0), Classification and Regression Tree (CART), Gradient Boosting Machine (GBM), and Random Forest (RF), have been investigated in the proposed work. The work demonstrates that temporal variations in spectral data, together with the computational efficiency of machine learning methods, may be effectively used for discriminating types of sugarcane. The discrimination is treated as a binary classification problem: segregating ratoon from plantation sugarcane. Variable importance selection based on Mean Decrease in Accuracy (MDA) and Mean Decrease in Gini (MDG) was used to create an appropriate dataset for the classification. The RF-based binary classification model performed best across all possible combinations of input images, and feature selection based on the MDA and MDG measures of RF also proved important for dimensionality reduction. The RF model performed best with 97% accuracy, whereas the GBM method performed worst. Binary classification based on remotely sensed data can thus be effectively handled using the random forest method.
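A minimal sketch of the ratoon-vs-plantation discrimination described above is given below, on synthetic spectral features. The band names and data are invented for illustration; in scikit-learn terms, Gini-based `feature_importances_` corresponds to MDG, and permutation importance corresponds to MDA.

```python
# Sketch: RF binary classification with the two importance measures the
# abstract names (MDG = Gini importance, MDA = permutation importance).
# Bands and samples are synthetic stand-ins for the remote-sensing data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n = 800
bands = ["green", "red", "nir", "swir"]          # hypothetical spectral bands
y = rng.integers(0, 2, size=n)                   # 0 = plantation, 1 = ratoon
X = rng.normal(size=(n, len(bands)))
X[:, 2] += 1.2 * y                               # NIR separates the classes

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, random_state=7)
rf = RandomForestClassifier(n_estimators=200, random_state=7).fit(Xtr, ytr)

mdg = dict(zip(bands, rf.feature_importances_.round(3)))          # MDG
mda = permutation_importance(rf, Xte, yte, n_repeats=10, random_state=7)
print("accuracy:", rf.score(Xte, yte))
print("MDG (Gini):", mdg)
print("MDA (permutation):", dict(zip(bands, mda.importances_mean.round(3))))
```

Ranking the bands by both measures and keeping only the top ones mirrors the dimensionality-reduction step the abstract attributes to MDA/MDG selection.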


1990 ◽  
Vol 22 (5) ◽  
pp. 203-210 ◽  
Author(s):  
D. Rank ◽  
F. J. Maringer ◽  
W. Papesch ◽  
V. Rajner

Water, sediment, and fish samples were collected during the 1988 Danube excursion, within a coordinated sampling program of the Radiology Working Group of the “Internationale Arbeitsgemeinschaft Donauforschung” (K. Hübel, Munich; I. Kurcz, Budapest; D. Rank, Vienna). The H-3 content of the river water and the radioactivity of the bottom sediments were measured at the BVFA Arsenal, Vienna. The determined H-3 content of the Danube water corresponds with the long-term trend in the H-3 content of the hydrosphere; the values lie in the range of 3 Bq/kg downstream from Belgrade and about 4 Bq/kg upstream from Belgrade. Only in the waste water plume of the Kozloduj nuclear power station was a slightly elevated H-3 value of 6 Bq/kg determined. The artificial radionuclide content of the sediments was found, at the time of the Danube field excursion, to be almost exclusively due to the radioactive material released following the reactor accident at Chernobyl in April 1986 (mainly Cs-137 and Cs-134). As a consequence of the air currents and precipitation conditions prevailing at the time of the accident, the bottom sediments in the lower course of the Danube were less contaminated than those in the upper course. The fine sediments in the upper course of the Danube were found to contain over 3000 Bq/kg of Cs-137.


Energies ◽  
2021 ◽  
Vol 14 (7) ◽  
pp. 1809
Author(s):  
Mohammed El Amine Senoussaoui ◽  
Mostefa Brahami ◽  
Issouf Fofana

Machine learning is widely used in many engineering applications, including the condition assessment of power transformers. Most statistics attribute the main cause of transformer failure to insulation degradation. Thus, a new, simple, and effective machine-learning approach is proposed to monitor the condition of transformer oils based on several aging indicators. The proposed approach was used to compare the performance of two machine-learning classifiers: the J48 decision tree and random forest. The service-aged transformer oils were classified into four groups: oils that can be maintained in service, oils that should be reconditioned or filtered, oils that should be reclaimed, and oils that must be discarded. Of the two algorithms, random forest exhibited the better performance and high accuracy with only a small amount of data. The good performance was achieved not only through the proposed algorithm but also through the data preprocessing approach. Before feeding the classification model, the available data were transformed using the simple k-means method. The obtained data were then filtered through correlation-based feature selection (CfsSubset). The resulting features were retransformed by principal component analysis and passed through the CfsSubset filter again. This transformation and filtration of the data improved the classification performance of the adopted algorithms, especially random forest. Another advantage of the proposed method is the reduction in the amount of data required for the condition assessment of transformer oils, which is valuable for transformer condition monitoring.
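The preprocessing chain described above (cluster-based transformation, correlation-based feature selection, then PCA) can be sketched with scikit-learn stand-ins. Note the assumptions: `SelectKBest` replaces Weka's CfsSubset evaluator, random forest replaces J48 for brevity, and the oil-aging indicator data are randomly generated placeholders.

```python
# Sketch of the paper's preprocessing pipeline on synthetic oil data:
# k-means transformation -> feature selection -> PCA -> classifier.
# SelectKBest is only a stand-in for Weka's CfsSubset filter.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(3)
n = 200
X = rng.normal(size=(n, 8))                  # stand-in aging indicators
y = rng.integers(0, 4, size=n)               # 4 oil-condition classes
X[:, 0] += y                                 # one indicator tracks the class

# Step 1: replace raw values with distances to k-means cluster centres.
km = KMeans(n_clusters=4, n_init=10, random_state=3).fit(X)
X_t = km.transform(X)                        # distance-to-centroid features

# Steps 2-4: filter, retransform with PCA, classify.
pipe = make_pipeline(
    SelectKBest(f_classif, k=3),             # stand-in for CfsSubset filtering
    PCA(n_components=2),                     # retransform the kept features
    RandomForestClassifier(n_estimators=100, random_state=3),
)
pipe.fit(X_t, y)
print("training accuracy:", pipe.score(X_t, y))
```

A fair evaluation would of course score a held-out split rather than the training set; the sketch only shows how the stages chain together.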


Algorithms ◽  
2021 ◽  
Vol 14 (6) ◽  
pp. 187
Author(s):  
Aaron Barbosa ◽  
Elijah Pelofske ◽  
Georg Hahn ◽  
Hristo N. Djidjev

Quantum annealers, such as the device built by D-Wave Systems, Inc., offer a way to compute solutions of NP-hard problems that can be expressed in Ising or quadratic unconstrained binary optimization (QUBO) form. Although such solutions are typically of very high quality, problem instances are usually not solved to optimality due to imperfections of the current generation of quantum annealers. In this contribution, we aim to understand some of the factors contributing to the hardness of a problem instance and to use machine learning models to predict the accuracy of the D-Wave 2000Q annealer for solving specific problems. We focus on the maximum clique problem, a classic NP-hard problem with important applications in network analysis, bioinformatics, and computational chemistry. By training a machine learning classification model on basic problem characteristics, such as the number of edges in the graph, and annealing parameters, such as the D-Wave chain strength, we are able to rank certain features in order of their contribution to the solution hardness, and we present a simple decision tree that predicts whether a problem will be solvable to optimality with the D-Wave 2000Q. We extend these results by training a machine learning regression model that predicts the clique size found by the D-Wave device.
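The classification side of this approach can be sketched as a decision tree over instance features. Everything below is a synthetic illustration: the feature set loosely follows the abstract (graph size, edges, density, chain strength), but the labeling rule is a hypothetical stand-in, not D-Wave 2000Q measurement data.

```python
# Sketch: a shallow decision tree predicting whether a max-clique instance
# is "solved to optimality" from simple graph and annealing features.
# Labels follow an invented rule standing in for real annealer outcomes.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(5)
n = 500
nodes = rng.integers(10, 60, size=n)
density = rng.uniform(0.1, 0.9, size=n)
edges = (density * nodes * (nodes - 1) / 2).astype(int)
chain_strength = rng.uniform(0.5, 2.0, size=n)   # annealing parameter
X = np.column_stack([nodes, edges, density, chain_strength])
# Hypothetical rule: small, sparse instances tend to be solved optimally.
y = ((nodes < 40) & (density < 0.6)).astype(int)

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.2, random_state=5)
tree = DecisionTreeClassifier(max_depth=3, random_state=5).fit(Xtr, ytr)
print("test accuracy:", tree.score(Xte, yte))
feature_names = ["nodes", "edges", "density", "chain_strength"]
print(export_text(tree, feature_names=feature_names))
```

The printed `export_text` rules are the "simple decision tree" artifact the abstract refers to; swapping `DecisionTreeRegressor` in for the classifier gives the companion model that predicts the clique size found.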

