ELMAN-RECURRENT NEURAL NETWORK FOR LOAD SHEDDING OPTIMIZATION

SINERGI ◽  
2020 ◽  
Vol 24 (1) ◽  
pp. 29
Author(s):  
Widi Aribowo

Load shedding plays a key part in avoiding power system outages. Frequency and voltage instability can split a power system into sub-systems and lead to an outage as well as severe damage to the system's utilities. In recent years, neural networks have been very successful in several signal processing and control applications. Recurrent neural networks are capable of handling complex and non-linear problems. This paper presents an algorithm for load shedding using an Elman Recurrent Neural Network (RNN). Elman proposed a partially recurrent network in which the feed-forward connections are modifiable and the recurrent connections are fixed. The method is implemented in MATLAB and its performance is tested on a 6-bus system. The results are compared with a Genetic Algorithm (GA), a hybrid combining a Genetic Algorithm with a Feed-Forward Neural Network, and a plain RNN. The proposed method is capable of determining the required amount of load to shed and is more efficient than the other methods.
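As a rough illustration of the kind of model the paper describes, the sketch below builds an Elman-style recurrent network that maps a short window of system measurements (assumed here to be frequency and voltage readings on a 6-bus system) to per-bus load-shedding amounts. The layer sizes, input layout, and Keras implementation are assumptions for illustration, not the paper's MATLAB configuration; note also that SimpleRNN trains its recurrent weights, whereas Elman's original formulation keeps the recurrent (context) connections fixed.

```python
# Hedged sketch: Elman-style RNN mapping system measurements to load-shedding amounts.
# Feature layout and layer sizes are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

N_BUSES, N_FEATURES, WINDOW = 6, 12, 10   # 6 buses x (frequency, voltage) = 12 inputs

model = models.Sequential([
    layers.Input(shape=(WINDOW, N_FEATURES)),
    layers.SimpleRNN(24, activation="tanh"),   # Elman-type recurrent layer
    layers.Dense(N_BUSES, activation="relu"),  # non-negative amount of load to shed per bus
])
model.compile(optimizer="adam", loss="mse")
model.summary()
```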

2002 ◽  
Vol 12 (01) ◽  
pp. 31-43 ◽  
Author(s):  
GARY YEN ◽  
HAIMING LU

In this paper, we propose a genetic algorithm based design procedure for a multi-layer feed-forward neural network. A hierarchical genetic algorithm is used to evolve both the neural network's topology and its weighting parameters. Compared with traditional genetic algorithm based designs for neural networks, the hierarchical approach addresses several deficiencies, including the feasibility check highlighted in the literature. A multi-objective cost function is used to optimize the performance and topology of the evolved neural network simultaneously. In the prediction of the Mackey–Glass chaotic time series, the networks designed by the proposed approach prove to be competitive with, or even superior to, multi-layer Perceptron networks and radial-basis function networks trained with traditional learning algorithms. Based upon the chosen cost function, a linear weight combination decision-making approach is applied to derive an approximated Pareto-optimal solution set. Designing a set of neural networks can therefore be considered as solving a two-objective optimization problem.
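A toy sketch of the general idea, under an encoding of our own rather than the authors': a hierarchical genome carries high-level bits that switch hidden units on or off (topology) and low-level genes holding the weight values, and candidates are ranked by a linear weighted combination of prediction error and network size.

```python
# Toy hierarchical genome: (binary topology mask, flat weight vector),
# scored by a linear weighted combination of error and size. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
MAX_HIDDEN = 8
N_W = 3 * MAX_HIDDEN + 1   # input weights, biases, output weights, output bias

def predict(mask, w, x):
    W1, b1 = w[:MAX_HIDDEN], w[MAX_HIDDEN:2 * MAX_HIDDEN]
    W2, b2 = w[2 * MAX_HIDDEN:3 * MAX_HIDDEN], w[-1]
    h = np.tanh(np.outer(x, W1) + b1) * mask   # inactive hidden units contribute nothing
    return h @ W2 + b2

def fitness(mask, w, x, y, alpha=0.05):
    err = np.mean((predict(mask, w, x) - y) ** 2)
    size = mask.sum() / MAX_HIDDEN             # topology cost in [0, 1]
    return err + alpha * size                  # linear weight combination of two objectives

x = np.linspace(-np.pi, np.pi, 64)
y = np.sin(x)                                  # stand-in for a time-series target
pop = [(rng.integers(0, 2, MAX_HIDDEN), rng.normal(0, 1, N_W)) for _ in range(40)]
for gen in range(200):                         # simple (mu + lambda)-style evolution
    pop.sort(key=lambda g: fitness(*g, x, y))
    parents = pop[:10]
    children = []
    for m, w in parents:
        m2 = (m ^ (rng.random(MAX_HIDDEN) < 0.05)).astype(int)   # flip topology bits
        w2 = w + rng.normal(0, 0.1, N_W)                          # perturb weights
        children.append((m2, w2))
    pop = parents + children * 3
best = min(pop, key=lambda g: fitness(*g, x, y))
print("active hidden units:", int(best[0].sum()))
```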


2017 ◽  
Vol 13 (2) ◽  
Author(s):  
Anish K Mathew ◽  
M V Pavan Kumar

Abstract: A feed-forward neural network (FNN) and a layered recurrent neural network (LRNN) are used to design two composition estimators for tight product-purity control of an ideal, quaternary, hypothetical, kinetically controlled reactive distillation (RD) column. The output variables of the considered control structure, i.e., the compositions, are estimated using selected tray temperatures as inputs to the estimators. The performance of the estimators in controlling the column under servo changes, regulatory changes, feed impurity disturbances, and catalyst deactivation is studied. The estimator-based control is found to be effective for on-spec product purity control. A one-to-one relation is observed between the number of tray temperature measurements and the accuracy of the estimation. Overall, the performance of the LRNN is found to be superior to that of the FNN for the throughput manipulations tested with the larger number of inputs to the estimator.
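The sketch below outlines the two estimator families in Keras under assumed dimensions (five tray-temperature inputs, two product compositions, a 20-step history for the recurrent variant); the actual inputs, layer sizes, and training data in the study differ.

```python
# Hedged sketch: two soft sensors mapping tray temperatures to product compositions.
import tensorflow as tf
from tensorflow.keras import layers, models

N_TRAYS, N_COMPS, WINDOW = 5, 2, 20

# FNN estimator: current temperatures -> compositions
fnn = models.Sequential([
    layers.Input(shape=(N_TRAYS,)),
    layers.Dense(16, activation="tanh"),
    layers.Dense(N_COMPS, activation="sigmoid"),   # mole fractions in [0, 1]
])

# Recurrent estimator: a short temperature history -> compositions,
# letting the hidden state carry process dynamics.
lrnn = models.Sequential([
    layers.Input(shape=(WINDOW, N_TRAYS)),
    layers.SimpleRNN(16, activation="tanh"),
    layers.Dense(N_COMPS, activation="sigmoid"),
])

for m in (fnn, lrnn):
    m.compile(optimizer="adam", loss="mse")
```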


2000 ◽  
Vol 176 ◽  
pp. 135-136
Author(s):  
Toshiki Aikawa

Abstract: Some pulsating post-AGB stars have been observed with an Automatic Photometry Telescope (APT), and a considerable amount of precise photometric data has been accumulated for these stars. The datasets, however, are still sparse, which is a problem for applying nonlinear time-series methods, for instance, modeling attractors with artificial neural networks (NN). We propose optimizing the data interpolation with a genetic algorithm (GA) and a hybrid system that combines the GA with a NN. We apply this system to the Mackey–Glass equation and attempt an analysis of the photometric data of post-AGB variables.
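A minimal sketch of the hybrid idea under our own simplifying assumptions: a Mackey–Glass series is generated, sparsely sampled, and a candidate gap-filling is scored by how well a simple one-step delay-embedding predictor (standing in for the paper's neural network) accounts for the genuinely observed points; a GA would evolve the interpolated values to minimize this score.

```python
# Illustrative only: Mackey-Glass generation, sparse sampling, and a fitness
# function a GA could use to judge candidate interpolations.
import numpy as np

def mackey_glass(n=2000, tau=17, beta=0.2, gamma=0.1, dt=1.0, x0=1.2):
    # Euler integration of the Mackey-Glass delay differential equation.
    x = np.full(n + tau, x0)
    for t in range(tau, n + tau - 1):
        x[t + 1] = x[t] + dt * (beta * x[t - tau] / (1 + x[t - tau] ** 10) - gamma * x[t])
    return x[tau:]

series = mackey_glass()
rng = np.random.default_rng(1)
sparse_idx = np.sort(rng.choice(len(series), 200, replace=False))   # sparse observations

def interpolation_fitness(candidate, idx=sparse_idx, d=4):
    """Lower is better: fit a one-step delay-embedding predictor on the candidate
    filling and judge it only at the genuinely observed points (a neural predictor
    would replace the least-squares fit used here to keep the sketch short)."""
    X = np.column_stack([candidate[i:len(candidate) - d + i] for i in range(d)])
    y = candidate[d:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    obs = idx[idx >= d] - d
    return float(np.mean((X[obs] @ coef - y[obs]) ** 2))

# A GA would evolve the values between observations; plain linear interpolation
# serves as the baseline candidate here.
baseline = np.interp(np.arange(len(series)), sparse_idx, series[sparse_idx])
print(interpolation_fitness(baseline))
```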


2004 ◽  
Vol 213 ◽  
pp. 483-486
Author(s):  
David Brodrick ◽  
Douglas Taylor ◽  
Joachim Diederich

A recurrent neural network was trained to detect the time-frequency domain signature of narrowband radio signals against a background of astronomical noise. The objective was to investigate the use of recurrent networks for signal detection in the Search for Extra-Terrestrial Intelligence, though the problem is closely analogous to the detection of some classes of Radio Frequency Interference in radio astronomy.
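One way such a detector could be set up, with purely illustrative channel counts and layer sizes: each time step feeds one spectrum (a column of the time-frequency plane) into a recurrent stack that outputs the probability that a narrowband signal is present.

```python
# Hedged sketch: recurrent detector over time-frequency data (sizes are assumptions).
import tensorflow as tf
from tensorflow.keras import layers, models

N_CHANNELS, N_STEPS = 128, 64   # frequency channels x time steps per spectrogram block

detector = models.Sequential([
    layers.Input(shape=(N_STEPS, N_CHANNELS)),
    layers.SimpleRNN(32, return_sequences=True),
    layers.SimpleRNN(32),
    layers.Dense(1, activation="sigmoid"),   # P(narrowband signal present)
])
detector.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```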


2019 ◽  
Author(s):  
Stefan L. Frank ◽  
John Hoeks

Recurrent neural network (RNN) models of sentence processing have recently displayed a remarkable ability to learn aspects of structure comprehension, as evidenced by their ability to account for reading times on sentences with local syntactic ambiguities (i.e., garden-path effects). Here, we investigate whether these models can also simulate the effect of the semantic appropriateness of the ambiguity's readings. RNN-based estimates of surprisal at the disambiguating verb of sentences with an NP/S-coordination ambiguity (as in `The wizard guards the king and the princess protects ...') show the same pattern as human reading times on the same sentences: surprisal is higher on ambiguous structures than on their disambiguated counterparts, and this effect is weaker, but not absent, in cases of poor thematic fit between the verb and its potential object (`The teacher baked the cake and the baker made ...'). These results show that an RNN is able to simultaneously learn about structural and semantic relations between words and suggest that garden-path phenomena may be more closely related to word predictability than traditionally assumed.
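The quantity compared with reading times is word surprisal, -log2 P(w_t | w_1..w_{t-1}), read off the language model's next-word distribution. The sketch below shows how that is computed from a small (untrained) RNN language model; the vocabulary, sizes, and token ids are placeholders, not the authors' model or corpus.

```python
# Hedged sketch: computing per-word surprisal from a toy RNN language model.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

VOCAB, EMB, HID = 5000, 64, 128

lm = models.Sequential([
    layers.Input(shape=(None,)),
    layers.Embedding(VOCAB, EMB),
    layers.SimpleRNN(HID, return_sequences=True),
    layers.Dense(VOCAB, activation="softmax"),   # next-word distribution at each step
])

def surprisal(token_ids):
    """Surprisal (in bits) of each token given its left context; after training,
    the value at the disambiguating verb is what gets compared to reading times."""
    x = np.array([token_ids[:-1]])               # contexts
    probs = lm.predict(x, verbose=0)[0]          # shape: (len - 1, VOCAB)
    p_next = probs[np.arange(len(token_ids) - 1), token_ids[1:]]
    return -np.log2(p_next)

print(surprisal([12, 7, 431, 9, 88]))            # toy id sequence
```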


Inventions ◽  
2021 ◽  
Vol 6 (4) ◽  
pp. 70
Author(s):  
Elena Solovyeva ◽  
Ali Abdullah

In this paper, the structure of a separable convolutional neural network consisting of an embedding layer, separable convolutional layers, a convolutional layer, and global average pooling is presented for binary and multiclass text classification. The advantage of the proposed structure is the absence of multiple fully connected layers, which are typically used to increase classification accuracy but raise the computational cost. The combination of low-cost separable convolutional layers with a convolutional layer is proposed to achieve high accuracy while reducing the complexity of the neural classifiers. The advantages are demonstrated on binary and multiclass classification of written texts by means of the proposed networks with sigmoid and Softmax activation functions in the convolutional layer. For both binary and multiclass classification, the accuracy obtained by the separable convolutional neural networks is higher than that of several investigated types of recurrent neural networks and fully connected networks.
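One way to realize the described stack in Keras, with illustrative vocabulary size, sequence length, and filter counts (the authors' exact configuration may differ): separable convolutions followed by one ordinary convolution whose sigmoid or Softmax activation produces per-position class scores, which global average pooling then reduces, with no stack of fully connected layers.

```python
# Hedged sketch of the separable-convolution text classifier (multiclass case;
# for binary classification use a single filter with sigmoid activation).
import tensorflow as tf
from tensorflow.keras import layers, models

VOCAB, MAXLEN, N_CLASSES = 20000, 200, 4

clf = models.Sequential([
    layers.Input(shape=(MAXLEN,)),
    layers.Embedding(VOCAB, 64),
    layers.SeparableConv1D(64, 5, activation="relu", padding="same"),
    layers.SeparableConv1D(64, 5, activation="relu", padding="same"),
    layers.Conv1D(N_CLASSES, 3, activation="softmax", padding="same"),  # per-position class scores
    layers.GlobalAveragePooling1D(),   # averages the scores; no dense stack follows
])
clf.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
```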


2020 ◽  
Vol 49 (4) ◽  
pp. 482-494
Author(s):  
Jurgita Kapočiūtė-Dzikienė ◽  
Senait Gebremichael Tesfagergish

Deep Neural Networks (DNNs) have proven especially successful in the area of Natural Language Processing (NLP) and Part-Of-Speech (POS) tagging, the process of mapping words to their corresponding POS labels depending on the context. Despite recent developments in language technologies, low-resourced languages such as the East African language Tigrinya have received too little attention. We investigate the effectiveness of Deep Learning (DL) solutions for the low-resourced Tigrinya language of the Northern Ethiopic branch. We selected Tigrinya as the testbed example and tested state-of-the-art DL approaches, seeking to build the most accurate POS tagger. We evaluated DNN classifiers (Feed-Forward Neural Network – FFNN, Long Short-Term Memory – LSTM, Bidirectional LSTM, and Convolutional Neural Network – CNN) on top of neural word2vec word embeddings with a small training corpus known as the Nagaoka Tigrinya Corpus. To determine the best DNN classifier type, architecture, and hyper-parameter set, both manual and automatic hyper-parameter tuning was performed. The BiLSTM method proved to be the most suitable for our task: it achieved the highest accuracy, equal to 92%, which is 65% above the random baseline.
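A hedged sketch of the best-performing configuration described, with placeholder dimensions (embedding size, tagset size, sentence length) rather than the tuned hyper-parameters from the paper: pre-trained word2vec vectors feed a bidirectional LSTM that emits one POS label per token.

```python
# Hedged sketch: BiLSTM POS tagger over precomputed word2vec embeddings.
import tensorflow as tf
from tensorflow.keras import layers, models

MAXLEN, EMB_DIM, N_TAGS = 50, 100, 20   # tokens per sentence, word2vec dim, POS tags

tagger = models.Sequential([
    layers.Input(shape=(MAXLEN, EMB_DIM)),            # precomputed word2vec vectors
    layers.Bidirectional(layers.LSTM(64, return_sequences=True)),
    layers.TimeDistributed(layers.Dense(N_TAGS, activation="softmax")),  # one tag per token
])
tagger.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
tagger.summary()
```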

