Machine Learning of the Reactor Core Loading Pattern Critical Parameters

2008 ◽  
Vol 2008 ◽  
pp. 1-6
Author(s):  
Krešimir Trontl ◽  
Dubravko Pevec ◽  
Tomislav Šmuc

The usual approach to loading pattern optimization involves a high degree of engineering judgment, a set of heuristic rules, an optimization algorithm, and a computer code used for evaluating proposed loading patterns. The speed of the optimization process is highly dependent on the computer code used for the evaluation. In this paper, we investigate the applicability of a machine learning model which could be used for fast loading pattern evaluation. We employ a recently introduced machine learning technique, support vector regression (SVR), which is a data-driven, kernel-based, nonlinear modeling paradigm, in which model parameters are automatically determined by solving a quadratic optimization problem. The main objective of the work reported in this paper was to evaluate the possibility of applying the SVR method to reactor core loading pattern modeling. We illustrate the performance of the solution and discuss its applicability, that is, complexity, speed, and accuracy.
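The kernel-based SVR surrogate described above can be sketched with a generic regressor on synthetic data; this is an illustrative example only, not the authors' reactor-physics model, and it assumes scikit-learn is available. The two input descriptors and the smooth target function are stand-ins for loading-pattern features and a core parameter.

```python
import numpy as np
from sklearn.svm import SVR

# Synthetic stand-in for a loading-pattern evaluation target: a smooth
# core parameter as a function of two pattern descriptors (illustrative only).
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(300, 2))
y = np.sin(np.pi * X[:, 0]) * np.cos(np.pi * X[:, 1])

# RBF-kernel SVR: the support-vector weights come from a quadratic program
# solved internally; C and epsilon are the user-chosen knobs.
model = SVR(kernel="rbf", C=10.0, epsilon=0.01, gamma="scale")
model.fit(X[:250], y[:250])

pred = model.predict(X[250:])
rmse = float(np.sqrt(np.mean((pred - y[250:]) ** 2)))
print(f"hold-out RMSE: {rmse:.3f}")
```

Once trained, such a surrogate evaluates a candidate pattern in microseconds, which is the speed advantage the paper targets.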

2020 ◽  
Author(s):  
Murad Megjhani ◽  
Kalijah Terilli ◽  
Ayham Alkhachroum ◽  
David J. Roh ◽  
Sachin Agarwal ◽  
...  

Abstract
Objective: To develop a machine learning based tool, using routine vital signs, to assess delayed cerebral ischemia (DCI) risk over time.
Methods: In this retrospective analysis, physiologic data for 540 consecutive acute subarachnoid hemorrhage patients were collected and annotated as part of a prospective observational cohort study between May 2006 and December 2014. Patients were excluded if (i) no physiologic data were available, (ii) they expired prior to the DCI onset window (< post-bleed day 3), or (iii) early angiographic vasospasm was detected on the admitting angiogram. DCI was prospectively labeled by consensus of treating physicians. Occurrence of DCI was classified using various machine learning approaches, including logistic regression, random forest, support vector machine (linear and kernel), and an ensemble classifier, trained on vitals and subject-characteristic features. Hourly risk scores were generated as the posterior probability at time t. We performed five-fold nested cross-validation to tune the model parameters and to report the accuracy. All classifiers were evaluated for good discrimination using the area under the receiver operating characteristic curve (AU-ROC) and confusion matrices.
Results: Of 310 patients included in our final analysis, 101 (32.6%) developed DCI. We achieved maximal classification of 0.81 [0.75-0.82] AU-ROC. We also predicted 74.7% of all DCI events 12 hours before typical clinical detection, with a ratio of 3 true alerts for every 2 false alerts.
Conclusion: A data-driven machine learning based detection tool offered hourly assessments of DCI risk and incorporated new physiologic information over time.
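The hourly risk score described above, a classifier's posterior probability at time t, can be illustrated with a minimal logistic-regression scorer in plain NumPy. The features and labels below are synthetic stand-ins for the study's vital-sign inputs, not its data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic hourly feature vectors (stand-ins for vital-sign summaries);
# class 1 is a stand-in for hours preceding DCI.
X = rng.normal(size=(400, 3))
w_true = np.array([1.5, -2.0, 0.5])
y = (X @ w_true + rng.normal(scale=0.5, size=400) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Plain gradient descent on the logistic loss.
w, b = np.zeros(3), 0.0
for _ in range(2000):
    p = sigmoid(X @ w + b)
    w -= 0.5 * (X.T @ (p - y) / len(y))
    b -= 0.5 * float(np.mean(p - y))

# The posterior probability for a new hourly observation is the risk score.
x_t = np.array([1.0, -1.0, 0.0])
risk_t = float(sigmoid(x_t @ w + b))
acc = float(np.mean((sigmoid(X @ w + b) > 0.5) == y))
print(f"training accuracy {acc:.2f}, risk score at time t: {risk_t:.2f}")
```

In the study this score is recomputed each hour as new physiology arrives, which is what turns a static classifier into a monitoring tool.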


2021 ◽  
Vol 21 (8) ◽  
pp. 2379-2405
Author(s):  
Luigi Cesarini ◽  
Rui Figueiredo ◽  
Beatrice Monteleone ◽  
Mario L. V. Martina

Abstract. Weather index insurance is an innovative tool in risk transfer for disasters induced by natural hazards. This paper proposes a methodology that uses machine learning algorithms for the identification of extreme flood and drought events, aimed at reducing the basis risk connected to this kind of insurance mechanism. The model types selected for this study were the neural network and the support vector machine, widely adopted for classification problems, which were built exploring thousands of possible configurations based on the combination of different model parameters. The models were developed and tested in the Dominican Republic context, based on data from multiple sources covering a time period between 2000 and 2019. Using rainfall and soil moisture data, the machine learning algorithms provided a strong improvement when compared to logistic regression models, used as a baseline for both hazards. Furthermore, increasing the amount of information provided during the training of the models proved to be beneficial to performance, increasing classification accuracy and confirming the ability of these algorithms to exploit big data and their potential for application within index insurance products.
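The exploration of thousands of model configurations against a logistic-regression baseline can be sketched as a small grid search; the data, parameter ranges, and models below are illustrative stand-ins (scikit-learn assumed available), not the paper's actual setup.

```python
import itertools
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in for rainfall/soil-moisture features labelled by
# extreme-event occurrence (1 = event, 0 = no event).
X, y = make_classification(n_samples=300, n_features=6, n_informative=4,
                           random_state=0)

# Logistic regression as the baseline, scored by 5-fold cross-validation.
baseline = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()

# Explore combinations of SVM kernel and regularisation strength.
best_score, best_cfg = 0.0, None
for kernel, C in itertools.product(["linear", "rbf"], [0.1, 1.0, 10.0]):
    score = cross_val_score(SVC(kernel=kernel, C=C), X, y, cv=5).mean()
    if score > best_score:
        best_score, best_cfg = score, (kernel, C)

print(f"baseline {baseline:.2f}, best SVM {best_score:.2f} with {best_cfg}")
```

In the paper the grid spans far more axes (architecture, inputs, thresholds), but the selection logic is the same: score every configuration and keep the best against the baseline.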


2021 ◽  
Author(s):  
Luigi Cesarini ◽  
Rui Figueiredo ◽  
Beatrice Monteleone ◽  
Mario Martina

A steady increase in the frequency and severity of extreme climate events has been observed in recent years, causing losses amounting to billions of dollars. Floods and droughts are responsible for almost half of those losses, severely affecting people's livelihoods in the form of damaged property, goods and even loss of life. Weather index insurance is an innovative tool in risk transfer for disasters induced by natural hazards. In this type of insurance, payouts are triggered when an index calculated from one or multiple environmental variables exceeds a predefined threshold. Thus, contrary to traditional insurance, it does not require costly and time-consuming post-event loss assessments. Its ease of application makes it an ideal solution for developing countries, where fast payouts in light of a catastrophic event would guarantee the survival of an economic sector, for example, providing the monetary resources necessary for farmers to sustain a prolonged period of extreme temperatures. The main obstacle to a wider application of this type of insurance mechanism stems from the so-called basis risk, which arises when a loss event takes place but a payout is not issued, or vice versa.

This study proposes and tests the application of machine learning algorithms for the identification of extreme flood and drought events in the context of weather index insurance, with the aim of reducing basis risk. Neural networks and support vector machines, widely adopted for classification problems, are employed exploring thousands of possible configurations based on the combination of different model parameters. The models were developed and tested in the Dominican Republic context, leveraging datasets from multiple sources with low latency, covering a time period between 2000 and 2019. Using rainfall (GSMaP, CMORPH, CHIRPS, CCS, PERSIANN and IMERG) and soil moisture (ERA5) data, the machine learning algorithms provided a strong improvement when compared to logistic regression models, used as a baseline for both hazards. Furthermore, increasing the amount of information provided during model training proved to be beneficial to performance, improving classification accuracy and confirming the ability of these algorithms to exploit big data. Results highlight the potential of machine learning for application within index insurance products.


Author(s):  
Suhui Li ◽  
Wenkai Qian ◽  
Haoyang Liu ◽  
Min Zhu ◽  
Christos N. Markides

Abstract
For advanced lean-premixed gas turbine combustors that have high inlet air temperatures, autoignition may occur during the fuel/air mixing process, which can cause flame-holding inside the premixing device and burn the hardware. An experimental study was performed using a setup that mimics the fuel/air mixing process of lean-premixed combustors. In the present experiment, preheated air was injected into a quartz tube, and a fuel jet was injected concentrically into the hot turbulent air coflow. The quartz tube allows for direct observation of the autoignition behavior, which develops when the fuel and air mix as they flow inside the tube. This paper presents a study combining machine learning methods and physical analysis that is aimed at predicting autoignition in such flows. A model for the prediction of autoignition of a fuel jet in a flow configuration referred to as a 'confined turbulent hot coflow' (CTHC) is developed using machine learning techniques based on binary logistic regression and support vector machines. Key factors that impact the autoignition phenomenon are identified by analyzing the underlying physics and are used to form the feature vector of the model. The model is trained using data from experiments and is validated by an additional, randomly selected set of data. The results show that the model predicts the autoignition event with satisfactory accuracy and quick turnaround. The trained model parameters in turn provide insights into the quantitative contribution of different factors that impact the autoignition event. Thus, the machine-learning-based method can form an alternative to CFD modeling in some cases.
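The idea of reading factor contributions off trained model parameters can be sketched with a logistic-regression autoignition classifier. The physics-motivated features and the generating rule below are hypothetical stand-ins (coflow temperature and residence time promoting ignition, jet velocity suppressing it), not the paper's measured data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

# Hypothetical physics-motivated features: coflow temperature T (K), residence
# time tau, jet velocity U (all illustrative scales and an illustrative rule).
n = 500
T = rng.uniform(900, 1100, n)
tau = rng.uniform(0, 1, n)
U = rng.uniform(0, 1, n)
logit = 0.05 * (T - 1000) + 3.0 * tau - 3.0 * U
y = (logit + rng.normal(scale=0.5, size=n) > 0).astype(int)  # 1 = autoignition

# Roughly standardised feature vector, as the model's inputs.
X = np.column_stack([(T - 1000) / 100, tau, U])
clf = LogisticRegression(max_iter=1000).fit(X, y)

# Signed coefficients quantify each factor's contribution to ignition odds.
for name, coef in zip(["temperature", "residence time", "jet velocity"],
                      clf.coef_[0]):
    print(f"{name:>15s}: {coef:+.2f}")
print(f"training accuracy: {clf.score(X, y):.2f}")
```

The sign pattern of the learned coefficients is what gives the interpretability the abstract highlights: positive weights raise ignition probability, negative weights suppress it.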


2019 ◽  
Author(s):  
Md Sultan Mahmud ◽  
Faruk Ahmed ◽  
Rakib Al-Fahad ◽  
Kazi Ashraf Moinuddin ◽  
Mohammed Yeasin ◽  
...  

Abstract
Speech comprehension in noisy environments depends on complex interactions between sensory and cognitive systems. In older adults, such interactions may be affected, especially in those individuals who have more severe age-related hearing loss. Using a data-driven approach, we assessed the temporal (when in time) and spatial (where in the brain) characteristics of the cortex's speech-evoked response that distinguish older adults with or without mild hearing loss. We used a source montage to model scalp-recorded activity during a phoneme discrimination task conducted under clear and noise-degraded conditions. We applied machine learning analyses (stability selection and control) to choose features of the speech-evoked response that are consistent over a range of model parameters, and support vector machine (SVM) classification to investigate the time course and brain regions that segregate groups and speech clarity. Whole-brain data analysis revealed a classification accuracy of 82.03% [area under the curve (AUC) = 81.18%; F1-score = 82.00%], distinguishing groups within ∼50 ms after speech onset (i.e., as early as the P1 wave). We observed lower accuracy of 78.39% [AUC = 78.74%; F1-score = 79.00%] and delayed classification performance when the speech tokens were embedded in noise, with group segregation at 60 ms. Separate analyses using left (LH) and right hemisphere (RH) regions showed that LH speech activity was better at distinguishing hearing groups than activity measured over the RH. Moreover, stability selection analysis identified 13 brain regions (among 1428 total spatiotemporal features from 68 regions) where source activity segregated groups with >80% accuracy (clear speech), whereas 15 regions were critical for noise-degraded speech to achieve a comparable level of group segregation (76% accuracy). Our results identify two core neural networks associated with complex speech perception in older adults and confirm that a larger number of neural regions, particularly in the RH and frontal lobe, are active when processing degraded speech information.
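Stability selection, the feature-screening step used above, can be sketched as repeatedly refitting a sparse model on random subsamples and keeping only features that survive the penalty consistently. The synthetic data (two informative features among many) and the penalty strength are illustrative assumptions, not the study's EEG features.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)

# Synthetic stand-in: many candidate spatiotemporal features, of which only
# the first two actually carry group information.
n, d = 300, 20
X = rng.normal(size=(n, d))
y = (X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)

# Stability selection sketch: fit an L1-penalised model on random halves of
# the data and count how often each feature's weight stays non-zero.
n_rounds = 50
counts = np.zeros(d)
for _ in range(n_rounds):
    idx = rng.choice(n, size=n // 2, replace=False)
    clf = LogisticRegression(penalty="l1", C=0.1, solver="liblinear")
    clf.fit(X[idx], y[idx])
    counts += (np.abs(clf.coef_[0]) > 1e-8)

# Keep features selected in more than 80% of rounds.
freq = counts / n_rounds
stable = np.where(freq > 0.8)[0]
print("stable features:", stable)
```

Features that reappear across subsamples are robust to the choice of model parameters, which is the property the authors exploit before the SVM classification stage.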


Author(s):  
Bingyan Jia ◽  
Danlin Hou ◽  
Liangzhu (Leon) Wang ◽  
Ibrahim Galal Hassan

Abstract
Building energy models (BEM) are developed for understanding a building's energy performance. A meta-model of the whole-building energy analysis is often used for BEM calibration and energy prediction. The literature review shows that studies focused on the development of room-level meta-models are missing. This study aims to address this research gap through a case study of a residential building with 138 apartments in Doha, Qatar. Five parameters, including cooling setpoint, number of occupants, lighting power density, equipment power density, and interior solar reflectance, are selected as input parameters to create ninety-six different scenarios. Three machine-learning models are used as meta-models to generalize the relationship between cooling energy and the model parameters: Multiple Linear Regression, Support Vector Regression, and Artificial Neural Networks. The three meta-models' prediction accuracies are evaluated by the Normalized Mean Bias Error (NMBE), the Coefficient of Variation of the Root Mean Squared Error (CV(RMSE)), and R-square (R²). The results show that the ANN model performs best. A new generic BEM is then established to validate the meta-model. The results indicate that the proposed meta-model is accurate and efficient in predicting the cooling energy in summer and transitional months for a building with a similar floor configuration.
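The three accuracy metrics used above have standard closed forms, sketched here in plain NumPy on illustrative numbers (the cooling-energy values are made up; sign conventions for NMBE vary, this uses measured minus predicted).

```python
import numpy as np

def nmbe(y, yhat):
    """Normalized Mean Bias Error (%), measured-minus-predicted convention."""
    return 100.0 * np.sum(y - yhat) / (len(y) * np.mean(y))

def cv_rmse(y, yhat):
    """Coefficient of Variation of the RMSE (%)."""
    return 100.0 * np.sqrt(np.mean((y - yhat) ** 2)) / np.mean(y)

def r2(y, yhat):
    """Coefficient of determination (R-square)."""
    ss_res = np.sum((y - yhat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot

# Illustrative monthly cooling-energy series (kWh) vs meta-model predictions.
y = np.array([100.0, 120.0, 140.0, 160.0])
yhat = np.array([102.0, 118.0, 143.0, 158.0])
print(f"NMBE {nmbe(y, yhat):.2f}%  CV(RMSE) {cv_rmse(y, yhat):.2f}%  "
      f"R2 {r2(y, yhat):.3f}")
```

NMBE captures systematic bias (over- vs under-prediction), CV(RMSE) captures scatter, and R² captures explained variance; a meta-model can score well on one and poorly on another, which is why all three are reported.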


2021 ◽  
Author(s):  
Gilbert Hinge ◽  
Ashutosh Sharma

Droughts are considered one of the most catastrophic natural disasters, affecting humans and their surroundings at a larger spatial scale than other disasters. Rajasthan, one of India's semiarid states, is drought-prone and has experienced many drought events in the past. In this study, we evaluated different preprocessing and Machine Learning (ML) approaches for drought prediction in Rajasthan for lead times of up to 6 months. The Standardized Precipitation Index (SPI) was used as the drought quantifying measure to identify drought events. SPI was calculated for 3-, 6-, and 12-month timescales over the last 115 years using monthly rainfall data at 119 grid stations. ML techniques, namely Artificial Neural Network (ANN), Support Vector Regression (SVR), and Linear Regression (LR), were evaluated for their accuracy in drought forecasting over different lead times. Furthermore, two data preprocessing methods, namely the Wavelet Packet Transform (WPT) and the Discrete Wavelet Transform (DWT), were also used to enhance the aforementioned ML models' predictability. At the outset, the preprocessed SPI data from both methods were used as inputs for LR, SVR, and ANN to form hybrid models. The hybrid models' drought predictability for different lead times was evaluated and compared with that of the standalone ML models. The forecasting performance of all the models at all 119 grid points was assessed with three statistical indices: Mean Absolute Error (MAE), Root Mean Square Error (RMSE), and Nash-Sutcliffe Efficiency (NSE). RMSE was used to select the optimal model parameters, such as the number of hidden neurons and the number of inputs in the ANN, and the level of decomposition and mother wavelet in the wavelet analysis. Based on these measures, the coupled models showed better forecasting performance than the standalone ML models.

The coupled WPT-ANN model shows superior predictability at most grid points compared with the other coupled models and the standalone models. All models' performance improved as the timescale increased from 3 to 12 months for all lead times; however, performance decreased as the lead time increased. These findings indicate the necessity of preprocessing the data before applying any machine learning technique. The hybrid models' prediction performance also shows that they can be used for drought early warning systems in the state.
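The wavelet preprocessing step can be sketched with a single Haar DWT level implemented from scratch (avoiding a wavelet-library dependency): the series splits into an approximation (trend) and a detail (fluctuation) sub-series, which then feed the ML model as separate inputs. The SPI values below are illustrative, and the study's choice of mother wavelet and decomposition level was tuned by RMSE, not fixed to Haar.

```python
import numpy as np

def haar_dwt(x):
    """One level of a Haar discrete wavelet transform.

    Returns (approximation, detail) sub-series; x must have even length.
    """
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)  # low-pass: smoothed trend
    d = (x[0::2] - x[1::2]) / np.sqrt(2)  # high-pass: local fluctuations
    return a, d

def haar_idwt(a, d):
    """Invert one Haar level, recovering the original series exactly."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

# An SPI-like monthly series (illustrative values only).
spi = np.array([0.3, -0.1, -1.2, -1.6, 0.4, 0.8, 1.1, -0.5])
approx, detail = haar_dwt(spi)
# The sub-series (and further decompositions of approx) become model inputs.
print("approx:", np.round(approx, 3), "detail:", np.round(detail, 3))
```

Because the transform is invertible, no information is lost; the model simply sees the slow and fast components separately, which is what improves the hybrid models' skill.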


Author(s):  
Biruk A. Gebre ◽  
Kishore Pochiraju

Ball-driven mobility platforms have shown that spherical wheels can enable substantial freedom of mobility for ground vehicles. Accurate and robust actuation of spherical wheels for high-acceleration maneuvers and graded terrains can, however, be challenging. In this paper, a novel design for a magnetically coupled ball drive is presented. The proposed design utilizes an internal support structure and magnetic coupling to eliminate the need for an external claw-like support structure. A model of the proposed design is developed and used to evaluate the slip/no-slip operational window. Due to the high-dimensional nature of the model, the design space is sampled using randomly generated design instances and the data is used to train a support vector machine classifier. Principal component analysis and feature importance detection are used to identify critical parameters that control the slip behavior and the feasible (no-slip) design space. The classification shows an increase in the feasible design space with the addition of, and increase in, the magnetic coupling force. Based on the results of the machine learning algorithm, FEA design tools and experimental testing are used to design a spherical magnetic coupler array configuration that can realize the desired magnetic coupling force for the ball drive.
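The sample-classify-analyze loop above can be sketched on a toy design space; the three parameters and the no-slip rule below are hypothetical stand-ins for the paper's model, used only to show the workflow (scikit-learn assumed available).

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC

rng = np.random.default_rng(4)

# Randomly sampled design instances: (coupling force, ball mass, commanded
# acceleration) -- an illustrative 3-parameter slice of the design space.
n = 500
F = rng.uniform(0, 10, n)
m = rng.uniform(1, 5, n)
a = rng.uniform(0, 5, n)
X = np.column_stack([F, m, a])

# Hypothetical no-slip condition: traction from the magnetic coupling must
# exceed the force demanded by the commanded acceleration.
no_slip = (0.6 * F > m * a).astype(int)

# SVM classifier learns the slip/no-slip boundary from the samples.
clf = SVC(kernel="rbf", C=10.0).fit(X, no_slip)
print(f"training accuracy: {clf.score(X, no_slip):.2f}")

# PCA on the feasible (no-slip) designs shows the main directions of
# variation within the feasible region.
pca = PCA(n_components=2).fit(X[no_slip == 1])
print("explained variance ratio:", np.round(pca.explained_variance_ratio_, 2))
```

Raising the coupling force term in the rule enlarges the feasible class, which is the qualitative effect the paper's classification detects.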


2021 ◽  
Author(s):  
Shuaizhou Hu ◽  
Xinyao Zhang ◽  
Hao-yu Liao ◽  
Xiao Liang ◽  
Minghui Zheng ◽  
...  

Abstract
Remanufacturing sites often receive products with different brands, models, conditions, and quality levels. Proper sorting and classification of the waste stream is a primary step in efficiently recovering and handling used products. Correct classification is particularly crucial in future electronic waste (e-waste) management sites equipped with Artificial Intelligence (AI) and robotic technologies. Robots should be enabled with proper algorithms to recognize and classify products with different features and prepare them for assembly and disassembly tasks. In this study, two categories of techniques, Machine Learning (ML) and Deep Learning (DL), are used to classify consumer electronics. The ML models include Naïve Bayes with Bernoulli, Gaussian, and Multinomial distributions, and Support Vector Machine (SVM) algorithms with four kernels: Linear, Radial Basis Function (RBF), Polynomial, and Sigmoid. The DL models include VGG-16, GoogLeNet, Inception-v3, Inception-v4, and ResNet-50. The above-mentioned models are used to classify three laptop brands: Apple, HP, and ThinkPad. First, the Edge Histogram Descriptor (EHD) and Scale Invariant Feature Transform (SIFT) are used to extract features as inputs to the ML models for classification. The DL models use laptop images directly, without separate feature extraction. The trained models are slightly overfitting due to the limited dataset and the complexity of the model parameters. Despite slight overfitting, the models can identify each brand. The findings show that the DL models outperform the ML models. Among the DL models, GoogLeNet has the highest performance in identifying the laptop brands.
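The EHD-style feature extraction feeding the ML models can be sketched as a histogram of gradient orientations; this is a simplified stand-in for the MPEG-7 Edge Histogram Descriptor, not the study's exact implementation.

```python
import numpy as np

def edge_histogram(img, n_bins=4):
    """Simplified edge-histogram feature: orientation histogram of gradients.

    A stand-in for the MPEG-7 EHD; returns a normalised n_bins-vector
    usable as an ML feature vector.
    """
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)  # orientation folded into [0, pi)
    hist, _ = np.histogram(ang, bins=np.linspace(0, np.pi, n_bins + 1),
                           weights=mag)
    total = hist.sum()
    return hist / total if total > 0 else hist

# Toy "image": a horizontal ramp puts all gradient energy at orientation 0,
# so the descriptor concentrates in the first bin.
img = np.tile(np.arange(16.0), (16, 1))
feat = edge_histogram(img)
print(np.round(feat, 2))
```

Each image thus collapses to a short fixed-length vector, which is what lets classic models like Naïve Bayes and SVM work on image data, while the DL models consume the pixels directly.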


2020 ◽  
Author(s):  
Luigi Cesarini ◽  
Rui Figueiredo ◽  
Beatrice Monteleone ◽  
Mario Lloyd Virgilio Martina

Abstract. Weather index insurance is an innovative tool in risk transfer for disasters induced by natural hazards. This paper proposes a methodology that uses machine learning algorithms for the identification of extreme flood and drought events, aimed at reducing the basis risk connected to this kind of insurance mechanism. The model types selected for this study were the neural network and the support vector machine, widely adopted for classification problems, which were built exploring thousands of possible configurations based on the combination of different model parameters. The models were developed and tested in the Dominican Republic context, based on data from multiple sources covering a time period between 2000 and 2019. Using rainfall and soil moisture data, the machine learning algorithms provided a strong improvement when compared to logistic regression models, used as a baseline for both hazards. Furthermore, increasing the amount of information provided during the training of the models proved to be beneficial to performance, increasing classification accuracy and confirming the ability of these algorithms to exploit big data and their potential for application within index insurance products.

