Pre-Earthquake Ionospheric Perturbation Identification Using CSES Data via Transfer Learning

2021 ◽  
Vol 9 ◽  
Author(s):  
Pan Xiong ◽  
Cheng Long ◽  
Huiyu Zhou ◽  
Roberto Battiston ◽  
Angelo De Santis ◽  
...  

During the lithospheric buildup to an earthquake, complex physical changes occur within the earthquake hypocenter. Data pertaining to changes in the ionosphere may be obtained by satellites, and the analysis of data anomalies can help identify earthquake precursors. In this paper, we present a deep-learning model, SeqNetQuake, that uses data from the first China Seismo-Electromagnetic Satellite (CSES) to identify ionospheric perturbations prior to earthquakes. SeqNetQuake achieves the best performance [F-measure (F1) = 0.6792 and Matthews correlation coefficient (MCC) = 0.427] when directly trained on the CSES dataset with a spatial window centered on the earthquake epicenter with the Dobrovolsky radius and an input sequence length of 20 consecutive nighttime observations. We further explore a transfer-learning approach, which initially trains the model with the larger Detection of Electro-Magnetic Emissions Transmitted from Earthquake Regions (DEMETER) dataset, and then tunes the model with the CSES dataset. The transfer-learning performance is substantially higher than that of direct learning, yielding a 12% improvement in the F1 score and a 29% improvement in the MCC value. Moreover, we compare SeqNetQuake with five other benchmark classifiers on an independent test set, on which SeqNetQuake shows a 64.2% improvement in MCC and approximately a 24.5% improvement in F1 score over the second-best model, a convolutional neural network. SeqNetQuake thus achieves a significant improvement in identifying pre-earthquake ionospheric perturbations and improves the performance of earthquake prediction using CSES data.
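The pretrain-then-tune recipe described above can be sketched as follows. This is an illustrative outline only, not the SeqNetQuake architecture: a small GRU classifier on synthetic data, with hypothetical feature dimensions and training settings.

```python
# Sketch of the transfer-learning recipe: pretrain on a large source
# dataset (DEMETER's role), then fine-tune on a smaller target dataset
# (CSES's role). All data and dimensions here are synthetic placeholders.
import torch
import torch.nn as nn

torch.manual_seed(0)

SEQ_LEN = 20   # 20 consecutive observations, as in the best model above
N_FEATS = 8    # hypothetical number of ionospheric features per observation

class SeqClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.GRU(N_FEATS, 16, batch_first=True)  # sequence feature extractor
        self.head = nn.Linear(16, 2)                          # perturbation vs. background

    def forward(self, x):
        _, h = self.encoder(x)
        return self.head(h[-1])

def train(model, x, y, epochs, lr):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.cross_entropy(model(x), y)
        loss.backward()
        opt.step()
    return loss.item()

# 1) pretrain on the larger (here synthetic) source-domain dataset
model = SeqClassifier()
x_src = torch.randn(64, SEQ_LEN, N_FEATS)
y_src = torch.randint(0, 2, (64,))
train(model, x_src, y_src, epochs=5, lr=1e-2)

# 2) fine-tune the same weights on the smaller target-domain dataset,
#    with a lower learning rate so pretrained features are only adjusted
x_tgt = torch.randn(16, SEQ_LEN, N_FEATS)
y_tgt = torch.randint(0, 2, (16,))
final_loss = train(model, x_tgt, y_tgt, epochs=3, lr=1e-3)
```

The key point is that the fine-tuning stage reuses the pretrained encoder weights rather than reinitializing them.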

2021 ◽  
Author(s):  
Muhammad Sajid

Abstract: Machine learning is proving successful in all fields of life, including medicine, automotive, planning, and engineering. In the world of geoscience, ML has shown impressive results in seismic fault interpretation, advanced seismic attribute analysis, facies classification, and the extraction of geobodies such as channels, carbonates, and salt. One of the challenges faced in geoscience is the availability of labeled data, the preparation of which is one of the most time-consuming requirements of supervised deep learning. In this paper, an advanced learning approach is proposed for geoscience in which the machine observes the seismic interpretation activity and learns simultaneously as the interpretation progresses. Initial testing showed that with the proposed method, combined with transfer learning, machine learning performance is highly effective, and the machine accurately predicts features requiring only minor post-prediction filtering to be accepted as the optimal interpretation.
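One way such "learn while the interpreter works" behavior can be realized is incremental (online) learning. The sketch below is not the authors' implementation; it simply shows a classifier being updated after each batch of newly labeled picks via scikit-learn's `partial_fit`, on synthetic attribute vectors.

```python
# Minimal sketch of learning alongside interpretation: after each
# interpretation session yields a few labeled points, the model is
# incrementally updated rather than retrained from scratch.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
clf = SGDClassifier(random_state=0)
classes = np.array([0, 1])  # e.g. fault / no-fault at a seismic sample

# simulate successive interpretation sessions, each yielding a few labels
for session in range(5):
    X = rng.normal(size=(20, 4))             # attribute vectors at picked points
    y = (X[:, 0] + X[:, 1] > 0).astype(int)  # stand-in for the interpreter's labels
    clf.partial_fit(X, y, classes=classes)   # model improves as interpretation progresses

# the partially trained model can then propose picks for review
X_new = rng.normal(size=(10, 4))
preds = clf.predict(X_new)
```

The interpreter would review `preds` (the "minor post-prediction filtering" step), and accepted picks feed the next `partial_fit` call.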


Information ◽  
2020 ◽  
Vol 11 (2) ◽  
pp. 68
Author(s):  
Liquan Zhao ◽  
Yan Liu

The transfer learning method is used to extend an existing model to more difficult scenarios, thereby accelerating the training process and improving learning performance. The conditional adversarial domain adaptation method proposed in 2018 is a particular type of transfer learning: it uses a domain discriminator to identify which domain the extracted features belong to, where the features are obtained from a feature extraction network. The stability of the domain discriminator directly affects the classification accuracy. Here, we propose a new algorithm to improve predictive accuracy. First, we introduce a Lipschitz constraint into the domain adaptation; if this constraint is satisfied, the method is stable. Second, we analyze how to make the gradient satisfy the constraint, thereby deriving a modified gradient via the spectral regularization method. The modified gradient is then used to update the parameter matrix. The proposed method is compared with the ResNet-50, deep adaptation network, domain adversarial neural network, joint adaptation network, and conditional domain adversarial network methods on the Office-31, ImageCLEF-DA, and Office-Home datasets. The simulations demonstrate that the proposed method achieves better accuracy than the other methods.
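The Lipschitz-constraint idea can be illustrated with spectral normalization, which rescales each discriminator weight matrix by an estimate of its largest singular value. PyTorch provides this as `torch.nn.utils.spectral_norm`; the layer sizes and data below are illustrative, and this sketch is not the paper's exact algorithm.

```python
# A 1-Lipschitz-constrained domain discriminator via spectral normalization:
# each linear layer's weight is divided by its top singular value, estimated
# by power iteration that is refined on every forward pass.
import torch
import torch.nn as nn

torch.manual_seed(0)

discriminator = nn.Sequential(
    nn.utils.spectral_norm(nn.Linear(256, 64)),  # weight rescaled by its top singular value
    nn.ReLU(),
    nn.utils.spectral_norm(nn.Linear(64, 1)),    # outputs a domain logit
)

features = torch.randn(8, 256)                   # stand-in for extracted features
with torch.no_grad():                            # warm up the power-iteration estimate
    for _ in range(25):
        discriminator(features)
domain_logits = discriminator(features)

# the effective first-layer weight now has approximately unit spectral norm
sigma = torch.linalg.matrix_norm(discriminator[0].weight, ord=2).item()
```

Because each layer is (approximately) 1-Lipschitz and ReLU is 1-Lipschitz, the whole discriminator satisfies the Lipschitz condition, which is the stability property the abstract appeals to.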


Author(s):  
Patricia O'Byrne ◽  
Patrick Jackman ◽  
Damon Berry ◽  
Hector-Hugo Franco-Pena ◽  
Michael French ◽  
...  

2017 ◽  
Author(s):  
Kavya Vaddadi ◽  
Naveen Sivadasan ◽  
Kshitij Tayal ◽  
Rajgopal Srinivasan

Abstract: Genomic variations in a reference collection are naturally represented as genome variation graphs. Such graphs encode common subsequences as vertices, and the variations are captured using additional vertices and directed edges. The resulting graphs are directed graphs, possibly with cycles. Existing algorithms for aligning sequences on such graphs use partial order alignment (POA) techniques that work on directed acyclic graphs (DAGs). For this, acyclic extensions of the input graphs are first constructed through expensive loop unrolling steps (DAGification). Such graph extensions can also blow up considerably in size; in the worst case, the blow-up factor is proportional to the input sequence length. We provide a novel alignment algorithm, V-ALIGN, that aligns the input sequence directly on the input graph, avoiding these expensive DAGification steps. V-ALIGN is based on a novel dynamic programming formulation that allows gapped alignment directly on the input graph and supports affine and linear gaps. We also propose refinements to V-ALIGN for better performance in practice; with these, the time to fill the DP table depends linearly on the sizes of the sequence, the graph, and its feedback vertex set. We perform experiments to compare against POA-based alignment. For aligning short sequences, standard approaches restrict the expensive gapped alignment to small filtered subgraphs having high ‘similarity’ to the input sequence. In such cases, the performance of V-ALIGN for gapped alignment on the filtered subgraph depends on the subgraph sizes.
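The flavor of the dynamic programming formulation can be sketched for the simpler acyclic case with linear gaps. This is not V-ALIGN itself: the feedback-vertex-set machinery that handles cycles and the affine-gap support are omitted, and the scoring parameters are arbitrary.

```python
# Gapped alignment of a sequence against an acyclic sequence graph.
# Each vertex carries one character; preds[v] lists v's predecessors,
# and vertices are given in topological order. Linear gap penalties only.
def align(seq, labels, preds, match=2, mismatch=-1, gap=-2):
    """Best score of globally aligning `seq` along any source-to-sink path."""
    n = len(seq)
    base = lambda j: gap * j          # virtual start vertex: j leading insertions
    score = []                        # score[v][j]: best for paths ending at v vs seq[:j]
    for v, ch in enumerate(labels):
        prev = [score[p] for p in preds[v]]
        row = [0] * (n + 1)
        for j in range(n + 1):
            # deletion: skip vertex v (gap in the sequence)
            cands = [r[j] + gap for r in prev] or [base(j) + gap]
            if j > 0:
                s = match if ch == seq[j - 1] else mismatch
                # match/mismatch: consume seq[j-1] at vertex v
                cands += [r[j - 1] + s for r in prev] or [base(j - 1) + s]
                # insertion: consume seq[j-1] with a gap in the graph
                cands.append(row[j - 1] + gap)
            row[j] = max(cands)
        score.append(row)
    has_succ = {p for ps in preds for p in ps}
    sinks = [v for v in range(len(labels)) if v not in has_succ]
    return max(score[v][n] for v in sinks)
```

For example, aligning "AGT" to a four-vertex graph A→{C,G}→T follows the A–G–T branch. On a cyclic graph this table cannot be filled in one topological pass, which is exactly the gap the feedback-vertex-set formulation closes.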


2021 ◽  
Vol 13 (1) ◽  
Author(s):  
Michael Franco-Garcia ◽  
Alex Benasutti ◽  
Larry Pearlstein ◽  
Mohammed Alabsi

Intelligent fault diagnosis utilizing deep learning algorithms has been widely investigated recently. Although previous results demonstrated excellent performance, the features learned by Deep Neural Networks (DNNs) remain part of a large black box. Consequently, a lack of understanding of the underlying physical meanings embedded within the features can lead to poor performance when models are applied to different but related datasets, i.e., transfer learning applications. This study investigates the transfer learning performance of a Convolutional Neural Network (CNN) under 4 different operating conditions. Utilizing the Case Western Reserve University (CWRU) bearing dataset, the CNN is trained to classify 12 classes. Each class represents a unique fault scenario with varying severity, e.g., inner race faults of 0.007” and 0.014” diameter. Initially, zero-load data is used for model training, and the model is tuned until a testing accuracy above 99% is obtained. Model performance is then evaluated by feeding in vibration data collected when the load is varied to 1, 2, and 3 HP. Initial results indicate that the classification accuracy degrades substantially under changed load. Hence, this paper visualizes the convolution kernels in the time and frequency domains and investigates the influence of changing loads on fault characteristics, the network's classification mechanism, and activation strength.
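The experimental setup above can be mimicked in miniature with synthetic signals. This is not the CWRU data or the paper's network; it trains a small 1-D CNN on one condition, then evaluates it after a frequency shift that stands in for a load change.

```python
# Toy version of the cross-load evaluation: train a 1-D CNN on vibration
# windows from a "zero load" condition, then test it on a shifted condition.
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_data(n, shift):
    """Synthetic 'vibration' windows; `shift` mimics a change in load."""
    t = torch.linspace(0, 1, 256)
    y = torch.randint(0, 3, (n,))                      # 3 stand-in fault classes
    freqs = torch.tensor([5.0, 11.0, 23.0])[y] + shift # class-specific fault tones
    x = torch.sin(2 * torch.pi * freqs[:, None] * t) + 0.3 * torch.randn(n, 256)
    return x.unsqueeze(1), y                           # shape (n, 1, 256)

cnn = nn.Sequential(
    nn.Conv1d(1, 8, 9, padding=4), nn.ReLU(), nn.AdaptiveAvgPool1d(16),
    nn.Flatten(), nn.Linear(8 * 16, 3),
)
opt = torch.optim.Adam(cnn.parameters(), lr=1e-2)
x0, y0 = make_data(128, shift=0.0)                     # "zero load" training set
for _ in range(30):
    opt.zero_grad()
    nn.functional.cross_entropy(cnn(x0), y0).backward()
    opt.step()

def accuracy(shift):
    x, y = make_data(128, shift)
    with torch.no_grad():
        return (cnn(x).argmax(1) == y).float().mean().item()

acc_same = accuracy(0.0)     # same condition as training
acc_shifted = accuracy(3.0)  # changed "load": the abstract's degradation scenario
```

Comparing `acc_same` with `acc_shifted` exposes the same failure mode the paper studies: kernels tuned to one condition's fault signatures transfer poorly when those signatures shift.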


Author(s):  
Tongliang Liu ◽  
Qiang Yang ◽  
Dacheng Tao

Transfer learning transfers knowledge across domains to improve the learning performance. Since feature structures generally represent the common knowledge across different domains, they can be transferred successfully even though the labeling functions across domains differ arbitrarily. However, theoretical justification for this success has remained elusive. In this paper, motivated by self-taught learning, we regard a set of bases as a feature structure of a domain if the bases can (approximately) reconstruct any observation in this domain. We propose a general analysis scheme to theoretically justify that if the source and target domains share similar feature structures, the source domain feature structure is transferable to the target domain, regardless of the change of the labeling functions across domains. The transferred structure is interpreted to function as a regularization matrix which benefits the learning process of the target domain task. We prove that such transfer enables the corresponding learning algorithms to be uniformly stable. Specifically, we illustrate the existence of feature structure transfer in two well-known transfer learning settings: domain adaptation and learning to learn.
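A toy numerical instance of the "transferred structure as a regularization matrix" interpretation. The data is synthetic and the particular choice of R (penalizing weight components outside the span of the transferred bases) is one illustrative construction, not the paper's definition.

```python
# Ridge-style target-domain learning regularized by source-domain bases:
# solve w = argmin ||Xw - y||^2 + lam * ||Rw||^2, where R penalizes the
# component of w orthogonal to span(B) for transferred bases B.
import numpy as np

rng = np.random.default_rng(0)

# "source" bases: directions that (approximately) reconstruct observations
B = rng.normal(size=(3, 10))                 # 3 bases in a 10-dim feature space
P = B.T @ np.linalg.pinv(B @ B.T) @ B        # projector onto span(B)
R = np.eye(10) - P                           # regularization matrix from the transfer

# small target-domain task whose true weights lie in the shared structure
X = rng.normal(size=(20, 10))
w_true = B.T @ rng.normal(size=3)
y = X @ w_true + 0.01 * rng.normal(size=20)

def fit(lam):
    # closed-form solution of the regularized least-squares problem
    return np.linalg.solve(X.T @ X + lam * R.T @ R, X.T @ y)

w_reg = fit(10.0)
err = np.linalg.norm(w_reg - w_true) / np.linalg.norm(w_true)
```

The regularizer leaves directions inside span(B) unpenalized and shrinks the rest, which is one concrete way a transferred feature structure can stabilize the target-domain learner regardless of its labeling function.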


2021 ◽  
Vol 13 (1) ◽  
pp. 3
Author(s):  
Jorge Silvestre ◽  
Miguel de Santiago ◽  
Anibal Bregon ◽  
Miguel A. Martínez-Prieto ◽  
Pedro C. Álvarez-Esteban

Predictable operations are the basis of efficient air traffic management. In this context, accurately estimating the arrival time at the destination airport is fundamental for making tactical decisions about an optimal schedule of landing and take-off operations. In this paper, we evaluate different deep learning models based on LSTM architectures for predicting the estimated time of arrival of commercial flights, mainly using surveillance data from the OpenSky Network. We observed that the number of previous flight states used to make the prediction has a great influence on the accuracy of the estimation, independently of the architecture. The best model, with an input sequence length of 50, reported an MAE of 3.33 min and an RMSE of 5.42 min on the test set, with MAE values of 5.67 min and 2.13 min at 90 and 15 min before the end of the flight, respectively.
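A minimal sketch of an LSTM-based ETA regressor of the kind evaluated above. The feature set, hidden size, and data are all placeholders; only the input sequence length of 50 is taken from the text.

```python
# LSTM regression over a window of recent flight states: the last hidden
# state summarizes the window and a linear head maps it to a scalar ETA.
import torch
import torch.nn as nn

torch.manual_seed(0)

SEQ_LEN = 50   # number of previous flight states, per the best model above
N_FEATS = 6    # hypothetical: latitude, longitude, altitude, speed, heading, climb rate

class ETAModel(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(N_FEATS, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)      # minutes until arrival

    def forward(self, x):
        _, (h, _) = self.lstm(x)             # h: final hidden state per sequence
        return self.out(h[-1]).squeeze(-1)

model = ETAModel()
x = torch.randn(4, SEQ_LEN, N_FEATS)         # batch of 4 state windows
eta_minutes = model(x)

# training would minimize an L1 objective, e.g.
# nn.functional.l1_loss(model(x), true_eta), matching the MAE metric reported
```

Reported metrics like MAE at 90 and 15 min before arrival correspond to evaluating such a model on windows taken at those points in the flight.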

