Composite load modeling by spatial-temporal deep attention network based on wide-area monitoring systems

2021 ◽  
pp. 1-12
Author(s):  
Omid Izadi Ghafarokhi ◽  
Mazda Moattari ◽  
Ahmad Forouzantabar

With the development of the wide-area monitoring system (WAMS), power system operators are able to obtain accurate and fast estimates of time-varying load parameters. This study proposes a spatial-temporal deep attention network that captures the dynamic and static patterns of electrical load consumption by modeling the complicated and non-stationary interdependencies between time sequences. The designed attention-based deep network employs a long short-term memory (LSTM) component, arranged as an encoder-decoder recurrent neural network, to learn temporal features in the time and frequency domains. Furthermore, to learn spatial features, a convolutional neural network (CNN) based attention mechanism is developed. In addition, a loss function based on the pseudo-Huber concept is developed to enhance the robustness of the proposed network under noisy conditions and to improve training performance. Simulation results on the IEEE 68-bus system demonstrate the effectiveness and superiority of the proposed network in comparison with several previously presented and state-of-the-art methods.
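
The robustness claim rests on the pseudo-Huber loss, which is quadratic for small residuals and nearly linear for large ones, so occasional noisy WAMS measurements do not dominate the gradient. A minimal PyTorch sketch follows; the smoothing parameter `delta` and the mean reduction are illustrative assumptions, not values taken from the paper.

```python
import torch

def pseudo_huber_loss(pred: torch.Tensor, target: torch.Tensor, delta: float = 1.0) -> torch.Tensor:
    """Pseudo-Huber loss: behaves like 0.5*r**2 for small residuals r and like
    delta*|r| for large ones, which damps the influence of noisy samples."""
    residual = pred - target
    return (delta ** 2 * (torch.sqrt(1.0 + (residual / delta) ** 2) - 1.0)).mean()
```

A smaller `delta` makes the loss switch to its linear regime sooner, trading some fitting accuracy on clean data for stronger noise rejection.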

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Soheila Gheisari ◽  
Sahar Shariflou ◽  
Jack Phu ◽  
Paul J. Kennedy ◽  
Ashish Agar ◽  
...  

Glaucoma, a leading cause of blindness, is a multifaceted disease with several pathophysiological features that manifest in single fundus images (e.g., optic nerve cupping) as well as in fundus videos (e.g., vascular pulsatility index). Current convolutional neural networks (CNNs) developed to detect glaucoma are all based on spatial features embedded in an image. We developed a combined CNN and recurrent neural network (RNN) that extracts not only the spatial features in a fundus image but also the temporal features embedded in a fundus video (i.e., sequential images). A total of 1810 fundus images and 295 fundus videos were used to train a CNN and a combined CNN and Long Short-Term Memory RNN. The combined CNN/RNN model reached an average F-measure of 96.2% in separating glaucoma from healthy eyes. In contrast, the base CNN model reached an average F-measure of only 79.2%. This proof-of-concept study demonstrates that extracting spatial and temporal features from fundus videos with a combined CNN and RNN can markedly enhance the accuracy of glaucoma detection.
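
The abstract describes the common pattern of running a CNN over each frame and feeding the resulting feature sequence to an LSTM. The sketch below illustrates that pattern in PyTorch; the layer widths, pooling choices, and two-class head are assumptions for illustration, not the authors' published architecture.

```python
import torch
import torch.nn as nn

class FundusVideoClassifier(nn.Module):
    """CNN applied per frame for spatial features, LSTM over the frame sequence
    for temporal features (illustrative sizes, not the paper's exact model)."""

    def __init__(self, feature_dim: int = 128, hidden_dim: int = 64, num_classes: int = 2):
        super().__init__()
        self.cnn = nn.Sequential(                       # spatial feature extractor
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, feature_dim),
        )
        self.rnn = nn.LSTM(feature_dim, hidden_dim, batch_first=True)  # temporal model
        self.head = nn.Linear(hidden_dim, num_classes)

    def forward(self, video: torch.Tensor) -> torch.Tensor:
        # video: (batch, frames, 3, H, W)
        b, t = video.shape[:2]
        feats = self.cnn(video.flatten(0, 1)).view(b, t, -1)  # per-frame features
        _, (h_n, _) = self.rnn(feats)          # last hidden state summarizes the video
        return self.head(h_n[-1])              # (batch, num_classes)
```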


2021 ◽  
Vol 11 (3) ◽  
pp. 1327
Author(s):  
Rui Zhang ◽  
Zhendong Yin ◽  
Zhilu Wu ◽  
Siyang Zhou

Automatic Modulation Classification (AMC) is of paramount importance in wireless communication systems. Existing methods usually adopt a single category of neural network or stack different categories of networks in series, and rarely extract different types of features simultaneously in a proper way. At the output layer, the softmax function is typically applied for classification to expand the inter-class distance. In this paper, we propose a hybrid parallel network for the AMC problem. Our proposed method designs a hybrid parallel structure that utilizes a Convolutional Neural Network (CNN) and a Gated Recurrent Unit (GRU) to extract spatial and temporal features, respectively. Instead of superimposing these two categories of features directly, three different attention mechanisms are applied to assign weights to the different feature types. Finally, a cosine-similarity-based classifier, the Additive Margin softmax function, which expands the inter-class distance and compresses the intra-class distance simultaneously, is adopted for the output. Simulation results demonstrate that the proposed method achieves remarkable performance on an open-access dataset.
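
The Additive Margin (AM) softmax mentioned at the output replaces the inner-product logits of a standard softmax classifier with cosine similarities, subtracts a fixed margin from the target-class similarity, and rescales before cross-entropy. A minimal sketch is given below; the scale `s = 30` and margin `m = 0.35` are commonly used defaults, not values reported in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AMSoftmaxLoss(nn.Module):
    """Additive Margin softmax: cosine-similarity logits with a margin m removed
    from the target class, scaled by s, then standard cross-entropy."""

    def __init__(self, feature_dim: int, num_classes: int, s: float = 30.0, m: float = 0.35):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(num_classes, feature_dim))
        self.s, self.m = s, m

    def forward(self, features: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        # Cosine similarity between L2-normalized features and class weight vectors
        cosine = F.linear(F.normalize(features), F.normalize(self.weight))
        margin = F.one_hot(labels, cosine.size(1)).float() * self.m
        logits = self.s * (cosine - margin)   # widens inter-class gaps, tightens intra-class spread
        return F.cross_entropy(logits, labels)
```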


2016 ◽  
Vol 49 (27) ◽  
pp. 85-90 ◽  
Author(s):  
Alexandru Nechifor ◽  
Mihaela Albu ◽  
Richard Hair ◽  
Vladimir Terzija

2021 ◽  
Author(s):  
Paolo Castello ◽  
Carlo Muscas ◽  
Paolo Attilio Pegoraro ◽  
Sara Sulis ◽  
Giorgio Maria Giannuzzi ◽  
...  
