Research on Power System Relay Protection Method Based on Machine Learning Algorithm

2019 ◽  
Vol 136 ◽  
pp. 02012
Author(s):  
Jingying Fang ◽  
Xiangyun Zhang

With the development of the power industry, power systems increasingly incorporate distributed energy resources at high penetration levels. However, existing relay protection schemes are difficult to apply effectively in this type of power system. To address this problem, this paper applies machine learning algorithms to power system relay protection. First, the structure of a power system with high-penetration distributed energy is analyzed, and the challenges facing current relay protection algorithms are described in detail. Then, artificial intelligence algorithms are introduced, with a focus on machine learning and its applications in power systems. Finally, relay protection based on machine learning is studied in depth, and a specific implementation method and workflow are designed. The machine learning approach studied in this paper contributes to technical development in the field of power system relay protection.
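
The abstract does not specify which model the paper uses; purely as a minimal sketch of the general idea, the snippet below trains a generic supervised classifier to separate fault from normal operating conditions. The feature names and synthetic data are illustrative assumptions, not the paper's method:

```python
# Illustrative sketch only: a generic supervised fault classifier for
# relay protection. Features and data are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Hypothetical features per sample: [RMS current, RMS voltage,
# zero-sequence current, frequency deviation]
X_normal = rng.normal([1.0, 1.0, 0.0, 0.0], 0.05, size=(500, 4))
X_fault = rng.normal([4.0, 0.6, 0.8, 0.3], 0.50, size=(500, 4))
X = np.vstack([X_normal, X_fault])
y = np.array([0] * 500 + [1] * 500)  # 0 = normal, 1 = fault

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# A trip decision could be issued when the predicted class is "fault".
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```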

2021 ◽  
Vol 2083 (4) ◽  
pp. 042086
Author(s):  
Yuqi Qin

Abstract Machine learning algorithms are the core of artificial intelligence and the fundamental way to make computers intelligent; they are applied across all fields of artificial intelligence. Aiming at the problems existing algorithms face in the discrete manufacturing industry, this paper proposes a new 0-1 coding method to optimize the learning algorithm and, finally, proposes an "IG type learning only from the best" learning algorithm.
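
The abstract gives no details of the 0-1 coding or the "learning only from the best" rule; purely as an illustrative sketch, the snippet below shows a generic elitist search over 0-1 encoded candidate solutions, where each candidate is nudged toward the best one found so far. The objective function, probabilities, and update rule are all assumptions for demonstration:

```python
# Illustrative sketch only: 0-1 (binary) encoding with an elitist
# "learn from the best" update. Not the paper's actual algorithm.
import random

random.seed(0)
N_BITS, POP, ITERS = 20, 30, 100

def fitness(bits):
    # Placeholder objective: maximize the number of 1s (OneMax).
    return sum(bits)

population = [[random.randint(0, 1) for _ in range(N_BITS)]
              for _ in range(POP)]

for _ in range(ITERS):
    best = max(population, key=fitness)[:]  # snapshot of the elite
    for ind in population:
        for j in range(N_BITS):
            if random.random() < 0.5:
                ind[j] = best[j]          # learn the elite's bit
            elif random.random() < 0.05:
                ind[j] = 1 - ind[j]       # occasional random mutation

print("best fitness:", fitness(max(population, key=fitness)))
```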


Author(s):  
Ladly Patel ◽  
Kumar Abhishek Gaurav

In today's world, a huge amount of data is available. This data is analyzed to extract information, which is then used to train a machine learning algorithm. Machine learning is a subfield of artificial intelligence in which machines are trained on data and then predict results. Machine learning is used in healthcare, image processing, marketing, and other fields. Its aim is to reduce the programmer's work on complex coding and to decrease human interaction with systems: the machine learns from past data and then predicts the desired output. This chapter describes machine learning in brief, covering different machine learning algorithms with examples and machine learning frameworks such as TensorFlow and Keras. The limitations of machine learning and its various applications are discussed. The chapter also describes how to identify features in machine learning data.
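
As a minimal illustration of the kind of framework usage the chapter covers, the sketch below defines and trains a tiny Keras binary classifier; the architecture and synthetic data are arbitrary choices for demonstration, not taken from the chapter:

```python
# Minimal Keras sketch: a tiny binary classifier on synthetic data.
import numpy as np
from tensorflow import keras

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))          # 200 samples, 4 features
y = (X.sum(axis=1) > 0).astype(int)    # synthetic binary labels

model = keras.Sequential([
    keras.layers.Dense(8, activation="relu", input_shape=(4,)),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=16, verbose=0)

print("training accuracy:", model.evaluate(X, y, verbose=0)[1])
```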


2020 ◽  
Vol 41 (7) ◽  
pp. 826-830 ◽  
Author(s):  
Arni S. R. Srinivasa Rao ◽  
Jose A. Vazquez

Abstract We propose the use of a machine learning algorithm to identify possible COVID-19 cases more quickly using a mobile phone-based web survey. This method could reduce the spread of the virus in susceptible populations under quarantine.
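
The abstract does not state which algorithm is used; as a purely hypothetical sketch, the snippet below scores web-survey responses with a simple classifier. The survey fields, labels, and model choice are all invented for illustration:

```python
# Hypothetical sketch: triaging survey responses with a simple
# classifier. Survey fields, data, and model choice are assumptions.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)

# Invented survey features: [fever, cough, loss_of_smell, known_contact]
X = rng.integers(0, 2, size=(300, 4))
# Synthetic labels loosely tied to symptoms, for demonstration only.
y = ((X[:, 0] & X[:, 1]) | X[:, 2]).astype(int)

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

new_response = np.array([[1, 1, 0, 1]])  # one incoming survey answer
print("flag for follow-up:", bool(clf.predict(new_response)[0]))
```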


2020 ◽  
pp. practneurol-2020-002688
Author(s):  
Stephen D Auger ◽  
Benjamin M Jacobs ◽  
Ruth Dobson ◽  
Charles R Marshall ◽  
Alastair J Noyce

Modern clinical practice requires the integration and interpretation of ever-expanding volumes of clinical data. There is, therefore, an imperative to develop efficient ways to process and understand these large amounts of data. Neurologists work to understand the function of biological neural networks, but artificial neural networks and other forms of machine learning algorithm are likely to be increasingly encountered in clinical practice. As their use increases, clinicians will need to understand the basic principles and common types of algorithm. We aim to provide a coherent introduction to this jargon-heavy subject and equip neurologists with the tools to understand, critically appraise and apply insights from this burgeoning field.
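
As a hedged illustration of the "basic principles" such a primer covers, the sketch below computes the output of a single artificial neuron, a weighted sum of inputs passed through a nonlinearity; the weights and inputs are arbitrary example values:

```python
# Illustrative sketch: the basic computation of one artificial neuron,
# a weighted sum of inputs passed through a nonlinear activation.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

inputs = np.array([0.5, -1.2, 0.3])   # e.g. normalized clinical features
weights = np.array([0.8, 0.4, -0.6])  # learned during training
bias = 0.1

activation = sigmoid(np.dot(weights, inputs) + bias)
print("neuron output:", activation)   # a value between 0 and 1
```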


Machine learning is a branch of Artificial Intelligence that is gaining importance in the 21st century; with increasing processing speeds and the miniaturization of sensors, applications of Artificial Intelligence and cognitive technologies are growing rapidly. An array of ultrasonic sensors (HC-SR04) is placed in different directions, collecting data over a particular interval of time during a particular day. The acquired sensor values are subjected to pre-processing, data analytics, and visualization. The prepared data is then split into test and training sets. A prediction model is designed using logistic regression and linear regression, and the two are compared on accuracy, F1 score, and precision.
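
As a minimal sketch of the evaluation described, assuming binary labels on the sensor readings, the snippet below fits a logistic regression after a train/test split and reports accuracy, F1 score, and precision. The data is synthetic and the labeling rule is an invented stand-in for the actual sensor data:

```python
# Minimal sketch: train/test split, logistic regression, and the
# metrics named in the abstract. Data is synthetic, not sensor data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, f1_score, precision_score

rng = np.random.default_rng(0)

# Stand-in for preprocessed ultrasonic distance readings (cm) from
# four sensors; label 1 might mean "obstacle detected".
X = rng.uniform(2, 400, size=(400, 4))
y = (X.min(axis=1) < 100).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
y_pred = model.predict(X_test)

print("accuracy :", accuracy_score(y_test, y_pred))
print("F1 score :", f1_score(y_test, y_pred))
print("precision:", precision_score(y_test, y_pred))
```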

