Estimating the Distributed Generation Unit Sizing and Its Effects on the Distribution System by Using Machine Learning Methods

Author(s):  
Mikail Purlu ◽  
Belgin Emre Turkay

Many approaches to the planning and operation of power systems, such as network reconfiguration and distributed generation (DG), have been proposed to overcome the challenges caused by the increase in electricity consumption. Rapid developments in renewable energy technologies, together with their positive effects on the grid, their contribution to reducing environmental pollution, and other advantages, have made DG resources an important issue; however, improper DG allocation may damage the network. Many studies have applied analytical and heuristic methods based on load flow to optimal DG integration into the network. In this paper, a novel estimation-based method is proposed to determine the size of a DG unit and its effects on the network, avoiding the demanding and time-consuming load flow techniques. Machine learning algorithms, such as Linear Regression, Artificial Neural Network, Support Vector Regression, K-Nearest Neighbor, and Decision Tree, have been used for the estimations and have been applied to well-known test systems, such as the IEEE 12-bus, 33-bus, and 69-bus distribution systems. The accuracy of the proposed estimation methods has been verified with R-squared and mean absolute percentage error. Results show that the proposed DG allocation method is effective, applicable, and flexible.
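The core idea of replacing load flow with regression can be sketched as follows. This is a minimal illustration, not the authors' implementation: the features (DG size, bus index, feeder load) and the analytic loss surrogate standing in for load-flow results are invented for the example, and no scaling or tuning is attempted.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_absolute_percentage_error

rng = np.random.default_rng(0)

# Synthetic stand-in for load-flow results: features are DG size (MW),
# bus index, and feeder loading; target is the resulting power loss (kW).
X = rng.uniform([0.1, 1, 0.5], [2.0, 33, 1.5], size=(500, 3))
loss = 120 - 60 * X[:, 0] + 18 * X[:, 0] ** 2 + 0.4 * X[:, 1] + 25 * X[:, 2]
y = loss + rng.normal(0, 1.0, size=500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "LR": LinearRegression(),
    "KNN": KNeighborsRegressor(n_neighbors=5),
    "DT": DecisionTreeRegressor(random_state=0),
    "SVR": SVR(C=100.0),
}
scores = {}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    scores[name] = (r2_score(y_te, pred),
                    mean_absolute_percentage_error(y_te, pred))
for name, (r2, mape) in scores.items():
    print(f"{name}: R2={r2:.3f}, MAPE={mape:.3%}")
```

In practice the training targets would come from load-flow solutions of the test systems, computed once offline; the fitted estimator then answers sizing queries without re-running load flow.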

2019 ◽  
Vol 11 (22) ◽  
pp. 2605 ◽  
Author(s):  
Wang ◽  
Chen ◽  
Wang ◽  
Li

Salt-affected soil is a prominent ecological and environmental problem in dry farming areas throughout the world. China alone has nearly 9.9 × 10^5 km2 of salt-affected land. The identification, monitoring, and utilization of salinized soil have become important research topics for promoting sustainable development. In this paper, using field-measured spectral data and soil salinity data, and after analysis and transformation of the spectra, five machine learning models are compared: random forest regression (RFR), support vector regression (SVR), gradient-boosted regression tree (GBRT), multilayer perceptron regression (MLPR), and least angle regression (Lars). Each model is evaluated on four aspects of its performance in estimating soil salinity: handling of collinearity, handling of data noise, stability, and accuracy. The results demonstrate that among the five models, RFR deals best with collinearity, RFR and MLPR deal best with data noise, and SVR is the most stable. The Lars model has the highest accuracy, with a determination coefficient (R2) of 0.87, a ratio of performance to deviation (RPD) of 2.67, a root mean square error (RMSE) of 0.18, and a mean absolute percentage error (MAPE) of 0.11. A comprehensive comparison of the five models then shows that RFR has the best overall performance; hence, this method is the most suitable for estimating soil salinity from hyperspectral data. This study can serve as a reference for the selection of regression methods in subsequent studies on estimating soil salinity using hyperspectral data.
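Of the metrics quoted above, RPD is the least standard: it is the standard deviation of the observed values divided by the RMSE of the predictions, so values above about 2 indicate a usable calibration. A small sketch of the computation (the toy numbers are invented for illustration):

```python
import numpy as np

def rpd(y_true, y_pred):
    """Ratio of performance to deviation: SD of observations / RMSE."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    return np.std(y_true, ddof=1) / rmse

# Toy check: predictions offset from the truth by a constant 0.1.
y_true = np.array([0.2, 0.5, 0.8, 1.1, 1.4])
y_pred = y_true + 0.1
print(round(rpd(y_true, y_pred), 2))  # → 4.74
```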


Materials ◽  
2019 ◽  
Vol 12 (9) ◽  
pp. 1475 ◽  
Author(s):  
Safwan Altarazi ◽  
Rula Allaf ◽  
Firas Alhindawi

In this study, machine learning algorithms (MLAs) were employed to predict and classify the tensile strength of polymeric films of different compositions as a function of processing conditions. Two film production techniques were investigated, namely compression molding and extrusion-blow molding. Multi-factor experiments were designed with the corresponding processing parameters, tensile tests were conducted on the samples, and the tensile strength was recorded. Predictive and classification models were developed from nine MLAs. Performance analysis demonstrated the superior predictive ability of the support vector machine (SVM) algorithm, which achieved a coefficient of determination of 96% and a mean absolute percentage error of 4% for the extrusion-blow molded films. The classification performance of the MLAs was also evaluated, with several algorithms exhibiting excellent performance.
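The regression side of such a study can be sketched with support vector regression on process parameters. The parameters (temperature, pressure, blend ratio) and the response surface below are hypothetical stand-ins for the paper's designed experiments, not its data:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR
from sklearn.metrics import mean_absolute_percentage_error, r2_score

rng = np.random.default_rng(6)

# Hypothetical processing conditions and tensile-strength response (MPa).
n = 300
temp = rng.uniform(150, 220, n)
pressure = rng.uniform(5, 25, n)
ratio = rng.uniform(0, 1, n)
strength = (20 + 0.08 * temp + 0.5 * pressure
            - 12 * (ratio - 0.5) ** 2 + rng.normal(0, 0.8, n))

X = np.column_stack([temp, pressure, ratio])
X_tr, X_te, y_tr, y_te = train_test_split(X, strength, random_state=0)

# Scaling matters for SVR, hence the pipeline.
svr = make_pipeline(StandardScaler(), SVR(C=50.0)).fit(X_tr, y_tr)
pred = svr.predict(X_te)
r2 = r2_score(y_te, pred)
mape = mean_absolute_percentage_error(y_te, pred)
print(f"R2={r2:.2f}, MAPE={mape:.2%}")
```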


PLoS ONE ◽  
2021 ◽  
Vol 16 (6) ◽  
pp. e0252436
Author(s):  
Peyman Razmi ◽  
Mahdi Ghaemi Asl ◽  
Giorgio Canarella ◽  
Afsaneh Sadat Emami

This paper contributes to the literature on topology identification (TI) in distribution networks and, in particular, on change detection in switching devices' status. The lack of measurements in distribution networks compared to transmission networks is a notable challenge. In this paper, we propose an approach to TI of distribution systems based on supervised machine learning (SML) algorithms. This methodology analyzes the feeder's voltage profile without requiring sensors or any other extraneous measurement device. We show that machine learning algorithms can track the voltage profile's behavior in each feeder, detect the status of switching devices, identify the distribution system's topologies, reveal the kind of loads connected to or disconnected from the system, and estimate their values. Results are demonstrated on an implementation of the ANSI case study.
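The underlying idea, a classifier reading the switch status off the feeder voltage profile, can be illustrated on simulated data. The ten-bus feeder, the sag coefficients, and the use of a random forest are all assumptions made for this sketch; they do not reproduce the paper's network or its SML pipeline:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Simulated 10-bus feeder: a closed tie switch flattens the voltage
# profile, an open switch lets it sag toward the feeder end (p.u.).
n = 400
switch_closed = rng.integers(0, 2, size=n)          # 0 = open, 1 = closed
bus = np.arange(10)
sag = np.where(switch_closed[:, None] == 1, 0.002, 0.006)
profiles = 1.0 - sag * bus + rng.normal(0, 0.001, size=(n, 10))

X_tr, X_te, y_tr, y_te = train_test_split(profiles, switch_closed,
                                          random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print(f"switch-status detection accuracy: {acc:.2f}")
```

The design point is that the classifier's features are bus voltages only, mirroring the claim that no extra measurement devices are needed.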


Geotechnics ◽  
2021 ◽  
Vol 1 (2) ◽  
pp. 534-557
Author(s):  
Sivapalan Gajan

The objective of this study is to develop data-driven predictive models for the seismic energy dissipation of rocking shallow foundations during earthquake loading, using multiple machine learning (ML) algorithms and experimental data from a rocking foundations database. Three nonlinear, nonparametric ML algorithms are considered: k-nearest neighbors regression (KNN), support vector regression (SVR), and decision tree regression (DTR). The input features to the ML algorithms include the critical contact area ratio, slenderness ratio, and rocking coefficient of the rocking system, and the peak ground acceleration and Arias intensity of the earthquake motion. A randomly split pair of training and testing datasets is used for initial evaluation of the models and for hyperparameter tuning. A repeated k-fold cross-validation technique is used to further evaluate the performance of the ML models in terms of bias and variance, using the mean absolute percentage error. It is found that all three ML models perform better than a multivariate linear regression model, and that the KNN and SVR models consistently outperform the DTR model. On average, the accuracy of the KNN model is about 16% higher than that of the SVR model, while the variance of the SVR model is about 27% smaller than that of the KNN model, making both excellent candidates for modeling the problem considered.
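The repeated k-fold evaluation of bias and variance via MAPE can be sketched directly with scikit-learn. The five input features and the target function below are synthetic placeholders for the rocking-foundations database, which is not public in this form:

```python
import numpy as np
from sklearn.model_selection import RepeatedKFold, cross_val_score
from sklearn.neighbors import KNeighborsRegressor
from sklearn.svm import SVR

rng = np.random.default_rng(2)

# Synthetic stand-in for the five features (area ratio, slenderness,
# rocking coefficient, PGA, Arias intensity) and energy dissipation.
X = rng.uniform(0, 1, size=(200, 5))
y = 1.0 + 2.0 * X[:, 0] + 1.5 * X[:, 3] * X[:, 4] + rng.normal(0, 0.05, 200)

# 5 folds repeated 10 times: the mean MAPE reflects bias, its spread
# across the 50 fits reflects variance.
cv = RepeatedKFold(n_splits=5, n_repeats=10, random_state=0)
for name, model in [("KNN", KNeighborsRegressor()), ("SVR", SVR())]:
    scores = -cross_val_score(model, X, y, cv=cv,
                              scoring="neg_mean_absolute_percentage_error")
    print(f"{name}: mean MAPE={scores.mean():.3f}, var={scores.var():.5f}")
```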


2020 ◽  
Vol 12 (2) ◽  
pp. 84-99
Author(s):  
Li-Pang Chen

In this paper, we investigate the analysis and prediction of time-dependent data. We focus on four different stocks selected from the Yahoo Finance historical database. To build models and predict future stock prices, we consider three machine learning techniques: Long Short-Term Memory (LSTM), Convolutional Neural Networks (CNN), and Support Vector Regression (SVR). By treating the close price, open price, daily low, daily high, adjusted close price, and volume of trades as predictors in the machine learning methods, we show that the prediction accuracy is improved.
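The common pattern behind all three techniques, turning a price series into lagged supervised-learning examples, can be sketched with SVR. The series below is synthetic (trend plus oscillation), not Yahoo Finance data, and a five-step lag window is an arbitrary choice for the example:

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(3)

# Synthetic price series standing in for historical stock data.
t = np.arange(300)
price = 100 + 0.05 * t + 2 * np.sin(t / 10) + rng.normal(0, 0.3, 300)

# Treat the previous 5 closes as predictors of the next close.
lags = 5
X = np.array([price[i - lags:i] for i in range(lags, len(price))])
y = price[lags:]

split = 250  # train on the first part of the series, test on the rest
model = SVR(C=100.0).fit(X[:split], y[:split])
pred = model.predict(X[split:])
mape = np.mean(np.abs((y[split:] - pred) / y[split:]))
print(f"one-step-ahead MAPE: {mape:.3%}")
```

An LSTM or CNN would consume the same lag windows as sequences rather than flat feature vectors; the train/test split along time is what all variants share.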


Author(s):  
Anantvir Singh Romana

Accurate diagnostic detection of disease in a patient is critical and may alter the subsequent treatment and increase the chances of survival. Machine learning techniques have been instrumental in disease detection and are currently used in various classification problems due to their accurate prediction performance. Different techniques provide different accuracies, and it is therefore imperative to use the method that provides the best results. This research provides a comparative analysis of Support Vector Machine, Naïve Bayes, J48 Decision Tree, and neural network classifiers on breast cancer and diabetes datasets.
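A comparison of this kind is easy to reproduce on scikit-learn's bundled Wisconsin breast cancer dataset (which may or may not be the exact dataset used here). J48 is Weka's C4.5 implementation; scikit-learn's CART decision tree stands in for it below, so the numbers are only indicative:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

classifiers = {
    "SVM": make_pipeline(StandardScaler(), SVC()),
    "Naive Bayes": GaussianNB(),
    "Decision Tree (CART, stand-in for J48)":
        DecisionTreeClassifier(random_state=0),
    "Neural Net": make_pipeline(StandardScaler(),
                                MLPClassifier(max_iter=1000, random_state=0)),
}
results = {}
for name, clf in classifiers.items():
    # 5-fold cross-validated accuracy for each classifier.
    results[name] = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: {results[name]:.3f}")
```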


2021 ◽  
Vol 186 (Supplement_1) ◽  
pp. 445-451
Author(s):  
Yifei Sun ◽  
Navid Rashedi ◽  
Vikrant Vaze ◽  
Parikshit Shah ◽  
Ryan Halter ◽  
...  

ABSTRACT Introduction Early prediction of the acute hypotensive episode (AHE) in critically ill patients has the potential to improve outcomes. In this study, we apply different machine learning algorithms to the MIMIC III Physionet dataset, containing more than 60,000 real-world intensive care unit records, to test commonly used machine learning technologies and compare their performances. Materials and Methods Five classification methods, including K-nearest neighbor, logistic regression, support vector machine, random forest, and a deep learning method called long short-term memory, are applied to predict an AHE 30 minutes in advance. An analysis comparing model performance when including versus excluding invasive features was conducted. To further study the pattern of the underlying mean arterial pressure (MAP), we apply linear regression to predict the continuous MAP values over the next 60 minutes. Results Support vector machine yields the best performance in terms of recall (84%). Including the invasive features in the classification improves the performance significantly, with both recall and precision increasing by more than 20 percentage points. We were able to predict the MAP 60 minutes in the future with a root mean square error (a frequently used measure of the difference between predicted and observed values) of 10 mmHg. After converting continuous MAP predictions into binary AHE predictions, we achieve 91% recall and 68% precision. In addition to predicting AHE, the MAP predictions provide clinically useful information regarding the timing and severity of the AHE occurrence. Conclusion We were able to predict AHE 30 minutes in advance with precision and recall above 80% on this large real-world dataset. The predictions of the regression model provide a more fine-grained, interpretable signal to practitioners. Model performance is improved by the inclusion of invasive features in predicting AHE, compared to predictions based only on the available, restricted set of noninvasive technologies. This demonstrates the importance of exploring more noninvasive technologies for AHE prediction.
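The thresholding step that converts continuous MAP forecasts into binary AHE alarms, and its recall/precision scoring, can be sketched on toy numbers. The 60 mmHg threshold and the ten-sample trajectories below are illustrative assumptions, not MIMIC data or the paper's exact AHE definition:

```python
import numpy as np
from sklearn.metrics import precision_score, recall_score

AHE_THRESHOLD = 60.0  # mmHg; MAP below this flags a hypotensive episode

# Toy observed and predicted MAP trajectories (mmHg).
map_true = np.array([75, 70, 64, 58, 55, 62, 68, 54, 51, 73])
map_pred = np.array([78, 71, 59, 56, 57, 65, 63, 61, 49, 70])

# Threshold both series to obtain binary AHE labels and alarms.
ahe_true = map_true < AHE_THRESHOLD
ahe_pred = map_pred < AHE_THRESHOLD

print("recall:", recall_score(ahe_true, ahe_pred))      # → 0.75
print("precision:", precision_score(ahe_true, ahe_pred))  # → 0.75
```

Note how one near-miss (58 predicted as 61) costs recall while one false alarm (64 predicted as 59) costs precision, which is why the regression view carries more clinical information than the binary labels alone.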


2021 ◽  
pp. 1-17
Author(s):  
Ahmed Al-Tarawneh ◽  
Ja’afer Al-Saraireh

Twitter is one of the most popular platforms used to share and post ideas. Hackers and anonymous attackers use these platforms maliciously, and their behavior can be used to predict the risk of future attacks by gathering and classifying hackers' tweets using machine-learning techniques. Previous approaches for detecting infected tweets are based on human effort or text analysis and are thus limited in capturing the hidden meaning between tweet lines. The main aim of this research paper is to enhance the efficiency of hacker detection on the Twitter platform using the complex-networks technique with adapted machine learning algorithms. This work presents a methodology that collects, from a hackers' community on Twitter, a list of users together with the followers who share posts with similar interests. The list is built from a set of suggested keywords commonly used by hackers in their tweets. A complex network is then generated for all users to find relations among them in terms of network centrality, closeness, and betweenness. After extracting these values, a dataset of the most influential users in the hacker community is assembled. Subsequently, tweets belonging to users in the extracted dataset are gathered and classified into positive and negative classes, and the output of this process is fed into a machine learning stage applying different algorithms. This research builds and investigates an accurate dataset containing real users who belong to a hackers' community. Correctly classified instances were measured for accuracy using the average values of the K-nearest neighbor, Naive Bayes, Random Tree, and support vector machine techniques, demonstrating about 90% and 88% accuracy for cross-validation and percentage split, respectively. Consequently, the proposed network cyber Twitter model is able to detect hackers and determine whether tweets pose a risk to institutions and individuals, providing early warning of possible attacks.
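The network-analysis step, ranking users by a centrality measure computed on the follower graph, can be sketched in pure Python with BFS-based closeness centrality. The six-account graph is invented for illustration; a real pipeline would compute closeness and betweenness on the harvested Twitter graph with a library such as NetworkX:

```python
from collections import deque

# Toy undirected follower graph among six hypothetical accounts.
edges = [("a", "b"), ("a", "c"), ("b", "c"), ("c", "d"), ("d", "e"), ("d", "f")]
nodes = sorted({u for e in edges for u in e})
adj = {v: set() for v in nodes}
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)

def closeness(node):
    """Closeness centrality: (n - 1) / sum of shortest-path distances."""
    dist = {node: 0}
    queue = deque([node])
    while queue:                     # breadth-first search from `node`
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return (len(nodes) - 1) / sum(dist.values())

ranking = sorted(nodes, key=closeness, reverse=True)
print("most central account:", ranking[0])  # → c
```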


Author(s):  
Anik Das ◽  
Mohamed M. Ahmed

Accurate lane-change prediction information in real time is essential to safely operate Autonomous Vehicles (AVs) on the roadways, especially at the early stage of AV deployment, when AVs will interact with human-driven vehicles. This study proposed reliable lane-change prediction models considering features from vehicle kinematics, machine vision, the driver, and roadway geometric characteristics, using the trajectory-level SHRP2 Naturalistic Driving Study and Roadway Information Database. Several machine learning algorithms were trained, validated, tested, and comparatively analyzed, including Classification And Regression Trees (CART), Random Forest (RF), eXtreme Gradient Boosting (XGBoost), Adaptive Boosting (AdaBoost), Support Vector Machine (SVM), K-Nearest Neighbor (KNN), and Naïve Bayes (NB), based on six different sets of features. In each feature set, relevant features were extracted through a wrapper-based algorithm named Boruta. The results showed that the XGBoost model outperformed all other models, achieving the highest overall prediction accuracy (97%) and F1-score (95.5%) when considering all features. An even higher overall prediction accuracy of 97.3% and F1-score of 95.9% were observed for the XGBoost model based on vehicle kinematics features alone. Moreover, XGBoost was the only model that achieved a reliable and balanced prediction performance across all six feature sets. Furthermore, a simplified XGBoost model was developed for each feature set with the practical implementation of the model in mind. The proposed prediction model could help in trajectory planning for AVs and could be used to develop more reliable advanced driver assistance systems (ADAS) in a cooperative connected and automated vehicle environment.
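The select-then-boost workflow can be sketched with scikit-learn. Two stand-ins are used deliberately: Boruta is approximated by a crude mean-importance threshold on random-forest importances (the real Boruta algorithm compares against shadow features), and XGBoost is replaced by scikit-learn's GradientBoostingClassifier. The synthetic features imitate a case where only the first three "kinematics" columns carry signal:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

rng = np.random.default_rng(4)

# Synthetic data: columns 0-2 are informative, the rest are noise.
X = rng.normal(size=(600, 10))
logit = 2.0 * X[:, 0] - 1.5 * X[:, 1] + X[:, 2]
y = (logit + rng.normal(0, 0.5, 600) > 0).astype(int)

# Crude stand-in for Boruta: keep features whose random-forest
# importance exceeds the mean importance.
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
keep = rf.feature_importances_ > rf.feature_importances_.mean()
print("selected feature indices:", np.flatnonzero(keep))

# Gradient boosting (stand-in for XGBoost) on the selected features.
X_tr, X_te, y_tr, y_te = train_test_split(X[:, keep], y, random_state=0)
gb = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
f1 = f1_score(y_te, gb.predict(X_te))
print("F1:", round(f1, 3))
```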


Sensors ◽  
2021 ◽  
Vol 21 (14) ◽  
pp. 4655
Author(s):  
Dariusz Czerwinski ◽  
Jakub Gęca ◽  
Krzysztof Kolano

In this article, the authors propose two models for BLDC motor winding temperature estimation using machine learning methods. For the purposes of the research, measurements were made over 160 h of motor operation and then preprocessed. Linear regression, ElasticNet, stochastic gradient descent regression, support vector machines, decision trees, and AdaBoost were used for predictive modeling. The models' ability to generalize was achieved by hyperparameter tuning with the use of cross-validation. The conducted research led to promising winding temperature estimation accuracy. In the case of sensorless temperature prediction (model 1), the mean absolute percentage error (MAPE) was below 4.5% and the coefficient of determination R2 was above 0.909. Extending the model with a temperature measurement on the casing (model 2) reduced the error to about 1% and increased R2 to 0.990. The results obtained for the first proposed model show that overheating protection of the motor can be ensured without direct temperature measurement. In addition, the introduction of a simple casing temperature measurement system allows estimation with accuracy suitable for compensating the temperature-related changes in the motor output torque.
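The two-model comparison with cross-validated hyperparameter tuning can be sketched as follows. The data generator (winding temperature driven by current and speed, with a noisy casing-temperature proxy) is invented for the example and only mimics the qualitative setup, not the authors' 160 h of measurements:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import mean_absolute_percentage_error

rng = np.random.default_rng(5)

# Synthetic BLDC-like data: winding temperature (°C) driven by phase
# current and speed; casing temperature is a noisy proxy of winding.
n = 1000
current = rng.uniform(1, 10, n)
speed = rng.uniform(500, 3000, n)
winding = 25 + 4.0 * current + 0.004 * speed + rng.normal(0, 2.0, n)
casing = 0.7 * winding + rng.normal(0, 0.5, n)

X1 = np.column_stack([current, speed])           # model 1: sensorless
X2 = np.column_stack([current, speed, casing])   # model 2: + casing sensor

# Hyperparameter tuning via 5-fold cross-validated grid search.
grid = {"max_depth": [4, 6, 8], "min_samples_leaf": [2, 5, 10]}
mapes = {}
for name, X in [("model 1", X1), ("model 2", X2)]:
    X_tr, X_te, y_tr, y_te = train_test_split(X, winding, random_state=0)
    search = GridSearchCV(DecisionTreeRegressor(random_state=0), grid, cv=5)
    search.fit(X_tr, y_tr)
    mapes[name] = mean_absolute_percentage_error(y_te, search.predict(X_te))
    print(f"{name}: MAPE={mapes[name]:.2%}")
```

As in the article, the extra casing-temperature input should cut the estimation error substantially, since it tracks the winding temperature far more closely than current and speed alone.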

