Low-dimensional Convolutional Neural Network for Solar Flares GOES Time-series Classification

2022 ◽  
Vol 258 (1) ◽  
pp. 12
Author(s):  
Vlad Landa ◽  
Yuval Reuveni

Space weather phenomena such as solar flares have massive destructive power when they reach a certain magnitude. Here, we explore a deep-learning approach to building a solar flare-forecasting model, while examining its limitations and feature-extraction ability based on the available Geostationary Operational Environmental Satellite (GOES) X-ray time-series data. We present a multilayer 1D convolutional neural network that forecasts the occurrence probability of M- and X-class flares at 1, 3, 6, 12, 24, 48, 72, and 96 hr time frames. The forecasting models were trained and evaluated in two different scenarios, (1) random selection and (2) chronological selection, which were then compared in terms of common score metrics. Additionally, we compared our results to state-of-the-art flare-forecasting models. The results indicate that (1) when X-ray time-series data are used alone, the suggested model achieves higher scores for X-class flares and scores similar to previous studies for M-class flares; (2) the two scenarios yield opposite results for the X- and M-class flares; and (3) the suggested model, combined solely with X-ray time series, fails to distinguish between M- and X-class solar flare events. Furthermore, the scores achieved solely from X-ray time-series measurements indicate that substantial information regarding solar activity and physical processes is encapsulated in the data, and that augmenting the model with additional data sets, both spatial and temporal, may lead to better predictions while providing a more comprehensive physical interpretation of solar activity. All source codes are available at https://github.com/vladlanda.
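
As a concrete illustration of the kind of model the abstract describes, the sketch below shows a multilayer 1D CNN in PyTorch that maps a window of GOES X-ray flux samples to a flare-occurrence probability. This is not the authors' released code (see their GitHub link above); the window length, channel counts, and kernel sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class FlareCNN1D(nn.Module):
    """Toy multilayer 1D CNN over a single-channel X-ray flux window."""
    def __init__(self, window_len=720):          # e.g. 12 h of 1-minute samples (assumed)
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(64, 1)              # single logit: flare / no flare

    def forward(self, x):                         # x: (batch, 1, window_len)
        z = self.features(x).squeeze(-1)
        return torch.sigmoid(self.head(z))        # probability of an M/X-class event

model = FlareCNN1D()
probs = model(torch.randn(8, 1, 720))             # dummy batch of X-ray flux windows
```

One natural setup, consistent with the abstract, is to train a separate model of this form per forecast horizon (1-96 hr), with the binary label indicating whether an M/X-class flare occurs within that horizon.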

In recent years, deep learning has become one of the primary choices for handling large amounts of data. With its deeper hidden layers, it surpasses classical methods for outlier detection in wireless sensor networks. The convolutional neural network (CNN) is a biologically inspired computational model and one of the most popular deep-learning approaches; it comprises neurons that self-optimize through learning. Electroencephalography (EEG) is a tool for investigating brain function, and the EEG signal is produced as time-series data. In this paper, we propose a technique that processes the time-series data generated by the sensor nodes, stored in a large dataset, into discrete one-second frames, which are then projected onto 2D map images. A CNN is then trained to classify these frames. The results show improved detection accuracy and are encouraging.
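
A minimal sketch of the framing step the abstract describes: multichannel sensor/EEG time series are cut into discrete one-second frames and each frame is projected onto a 2D map image for a standard 2D CNN to classify. The sampling rate, channel count, and min-max normalization are illustrative assumptions, not the paper's exact pipeline.

```python
import numpy as np

def to_frames(signal, fs=256):
    """signal: (n_channels, n_samples) -> (n_frames, n_channels, fs) one-second frames."""
    n_ch, n_samp = signal.shape
    n_frames = n_samp // fs
    frames = signal[:, :n_frames * fs].reshape(n_ch, n_frames, fs)
    return np.transpose(frames, (1, 0, 2))

def to_image(frame):
    """Min-max normalize a (channels x samples) frame into a single-channel 2D image."""
    lo, hi = frame.min(), frame.max()
    return (frame - lo) / (hi - lo + 1e-8)

record = np.random.randn(32, 10 * 256)            # 32-channel, 10-second dummy record
images = np.stack([to_image(f) for f in to_frames(record)])   # (10, 32, 256) CNN inputs
```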


Electronics ◽  
2021 ◽  
Vol 10 (15) ◽  
pp. 1758
Author(s):  
Shangyi Yang ◽  
Chao Sun ◽  
Youngok Kim

Indoor localization schemes have significant potential for use in location-based services in areas such as smart factories, mixed reality, and indoor navigation. In particular, received signal strength (RSS)-based fingerprinting is used widely, given its simplicity and low hardware requirements. However, most studies tend to focus on estimating the 2D position of the target. Moreover, it is known that the fingerprinting scheme is computationally costly, and its positioning accuracy is readily affected by random fluctuations in the RSS values caused by fading and the multipath effect. We propose an indoor 3D localization scheme based on both fingerprinting and a 1D convolutional neural network (CNN). Instead of using the conventional fingerprint matching method, we transform the 3D positioning problem into a classification problem and use the 1D CNN model with the RSS time-series data from Bluetooth low-energy beacons for classification. By using the 1D CNN with the time-series data from multiple beacons, the inherent drawback of RSS-based fingerprinting, namely, its susceptibility to noise and randomness, is overcome, resulting in enhanced positioning accuracy. To evaluate the proposed scheme, we developed a 3D positioning system and performed comprehensive tests, whose results confirmed that the scheme significantly outperforms the conventional common spatial pattern classification algorithm.
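
A minimal sketch of the classification framing described above, with assumed beacon count, sequence length, and number of reference cells: a 1D CNN reads a short RSS time series from several BLE beacons (one input channel per beacon) and outputs class logits over the 3D reference points.

```python
import torch
import torch.nn as nn

class RssCNN1D(nn.Module):
    """Toy 1D CNN: beacon RSS sequences -> reference-cell logits."""
    def __init__(self, n_beacons=8, n_cells=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_beacons, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(64, n_cells),               # one class per 3D reference point
        )

    def forward(self, x):                         # x: (batch, n_beacons, time_steps)
        return self.net(x)                        # argmax over logits -> predicted cell

logits = RssCNN1D()(torch.randn(4, 8, 20))        # 4 samples, 8 beacons, 20 RSS readings
```

Feeding a short window of consecutive RSS readings per beacon, rather than a single snapshot, is what lets the convolution average out the fading-induced fluctuations the abstract mentions.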


Sensors ◽  
2019 ◽  
Vol 19 (9) ◽  
pp. 1960 ◽  
Author(s):  
Lu Han ◽  
Chongchong Yu ◽  
Kaitai Xiao ◽  
Xia Zhao

This paper proposes a new method of mixed gas identification based on a convolutional neural network for time series classification. In view of the superiority of convolutional neural networks in the field of computer vision, we applied the concept to the classification of five mixed-gas time series collected by an array of eight MOX gas sensors. Existing convolutional neural networks are mostly used for processing visual data; they are rarely applied to gas data classification and face substantial limitations there. Therefore, we propose mapping the time-series data into analogous image-like matrix data. Then, five kinds of convolutional neural networks—VGG-16, VGG-19, ResNet18, ResNet34 and ResNet50—were used to classify and compare five kinds of mixed gases. After tuning the parameters of the networks, the final gas recognition rate reaches 96.67%. The experimental results show that the method can classify the gas data quickly and effectively, and that it effectively combines gas time-series data with classical convolutional neural networks, which provides a new idea for the identification of mixed gases.
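
A minimal sketch of the mapping idea: an 8-sensor MOX time series is stretched into an image-like matrix and classified with an off-the-shelf ResNet18 from torchvision. The bilinear resize and channel repetition used to reach the 3x224x224 input are illustrative assumptions; the paper's exact analogous-image construction may differ.

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

def series_to_image(x):
    """x: (8, T) sensor readings -> (3, 224, 224) pseudo-image."""
    x = (x - x.min()) / (x.max() - x.min() + 1e-8)            # min-max normalize
    img = F.interpolate(x[None, None], size=(224, 224),        # stretch to image size
                        mode="bilinear", align_corners=False)[0]
    return img.repeat(3, 1, 1)                                  # duplicate into RGB channels

model = resnet18(num_classes=5)                                 # five mixed-gas classes
batch = torch.stack([series_to_image(torch.randn(8, 300)) for _ in range(4)])
logits = model(batch)                                           # (4, 5) class scores
```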


2021 ◽  
Vol 19 (2) ◽  
pp. 1195-1212
Author(s):  
Xiaoguang Liu ◽  
Meng Chen ◽  
Tie Liang ◽  
Cunguang Lou ◽  
...  

Gait recognition is an emerging biometric technology that can be used to protect the privacy of wearable device owners. To improve the performance of existing wearable-device-based gait recognition methods, reduce the model's memory footprint, and increase its robustness, a new identification method based on multimodal fusion of gait cycle data is proposed. In addition, to preserve the time dependence and correlation of the data, we convert the time-series data into two-dimensional images using the Gramian angular field (GAF) algorithm. To address the problem of high model complexity in existing methods, we propose a lightweight double-channel depthwise separable convolutional neural network (DC-DSCNN) model for gait recognition on wearable devices. Specifically, the time-series data of gait cycles and the GAF images are first fed to the upper and lower channels of the DC-DSCNN model. The gait features are then extracted with a three-layer depthwise separable convolutional neural network (DSCNN) module. Next, the extracted features are passed to a softmax classifier to perform gait recognition. To evaluate the performance of the proposed method, a gait dataset of 24 subjects was collected. Experimental results show that the recognition accuracy of the DC-DSCNN algorithm is 99.58% and the memory usage of the model is only 972 KB, which verifies that the proposed method can enable gait recognition for wearable devices with lower power consumption and higher real-time performance.
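
Two ingredients the abstract names can be sketched compactly: the Gramian angular summation field (GASF) that turns a gait-cycle series into a 2D image, and a depthwise separable convolution block of the kind used in the DSCNN module. The series length and block layout are illustrative assumptions, not the DC-DSCNN architecture itself.

```python
import numpy as np
import torch
import torch.nn as nn

def gasf(x):
    """x: (T,) 1D series -> (T, T) Gramian angular summation field image."""
    x = 2 * (x - x.min()) / (x.max() - x.min() + 1e-8) - 1      # rescale to [-1, 1]
    phi = np.arccos(np.clip(x, -1.0, 1.0))                       # polar-coordinate angles
    return np.cos(phi[:, None] + phi[None, :])                   # pairwise angular sums

class DepthwiseSeparableConv(nn.Module):
    """Depthwise (per-channel) convolution followed by a 1x1 pointwise convolution."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, padding=1, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):
        return torch.relu(self.pointwise(self.depthwise(x)))

img = torch.tensor(gasf(np.random.randn(64)), dtype=torch.float32)[None, None]  # (1,1,64,64)
feat = DepthwiseSeparableConv(1, 8)(img)                                         # (1,8,64,64)
```

The depthwise separable factorization is what keeps the parameter count, and hence the model's memory footprint, small enough for wearable deployment.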


2020 ◽  
Vol 34 (04) ◽  
pp. 6845-6852 ◽  
Author(s):  
Xuchao Zhang ◽  
Yifeng Gao ◽  
Jessica Lin ◽  
Chang-Tien Lu

With the advance of sensor technologies, the Multivariate Time Series Classification (MTSC) problem, perhaps one of the most essential problems in the time series data mining domain, has continuously received a significant amount of attention in recent decades. Traditional time series classification approaches based on Bag-of-Patterns or Time Series Shapelets have difficulty dealing with the huge number of feature candidates generated in high-dimensional multivariate data, yet show promising performance even when the training set is small. In contrast, deep learning based methods can learn low-dimensional features efficiently but suffer from a shortage of labelled data. In this paper, we propose a novel MTSC model with an attentional prototype network that leverages the strengths of both traditional and deep learning based approaches. Specifically, we design a random group permutation method combined with multi-layer convolutional networks to learn low-dimensional features from multivariate time series data. To handle the issue of limited training labels, we propose a novel attentional prototype network that trains the feature representation based on its distance to class prototypes when data labels are scarce. In addition, we extend our model to a semi-supervised setting by utilizing the unlabeled data. Extensive experiments on 18 datasets from the public UEA multivariate time series archive, against eight state-of-the-art baseline methods, demonstrate the effectiveness of the proposed model.
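
A minimal sketch of the prototype-based classification idea the model builds on: a small convolutional encoder embeds each multivariate series, embeddings are averaged per class into prototypes from the few labeled examples, and a query is scored by its distance to each prototype. The attention weighting and random group permutation described in the paper are omitted; the encoder shape and dimensions are assumptions.

```python
import torch
import torch.nn as nn

encoder = nn.Sequential(                        # toy encoder for 6-variable series
    nn.Conv1d(6, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
)

def prototypes(support_x, support_y, n_classes):
    """Mean embedding per class from a small labeled support set."""
    z = encoder(support_x)                      # (n_support, 32)
    return torch.stack([z[support_y == c].mean(0) for c in range(n_classes)])

def classify(query_x, protos):
    zq = encoder(query_x)                       # (n_query, 32)
    dists = torch.cdist(zq, protos)             # Euclidean distance to each prototype
    return (-dists).softmax(dim=1)              # closer prototype -> higher probability

support_y = torch.arange(20) % 4                # 20 labeled series across 4 classes
protos = prototypes(torch.randn(20, 6, 50), support_y, n_classes=4)
probs = classify(torch.randn(5, 6, 50), protos)
```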


2021 ◽  
Vol 2021 ◽  
pp. 1-9
Author(s):  
Jianmin Zhou ◽  
Sen Gao ◽  
Jiahui Li ◽  
Wenhao Xiong

To extract the time-series characteristics of the original bearing signals and predict the remaining useful life (RUL) more effectively, a parallel multichannel recurrent convolutional neural network (PMCRCNN) is proposed for RUL prediction. Firstly, time-domain, frequency-domain, and time-frequency-domain features are extracted from the original signal. Then, the PMCRCNN model is constructed: the front of the model is a parallel multichannel convolution unit that learns and integrates global and local features from the time-series data, and the back is a recurrent convolution layer that models the temporal dependence among the different degradation features. Normalized life values are used as labels to train the prediction model. Finally, the RUL is predicted by the trained network. The proposed method is verified by full-life bearing tests. A comparison with existing convolutional neural network (CNN) and recurrent convolutional neural network (RCNN) prognostic models shows that the proposed PMCRCNN is effective and superior in improving the accuracy of RUL prediction.
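
A minimal sketch, not the authors' PMCRCNN, of the overall pattern the abstract describes: parallel convolution branches with different kernel sizes read a sequence of extracted degradation features, their outputs are concatenated, a recurrent layer models the temporal dependence, and a regression head outputs a normalized RUL value. Here a GRU stands in for the recurrent-convolution layer, and the feature dimensions are assumptions.

```python
import torch
import torch.nn as nn

class RulNet(nn.Module):
    """Toy parallel-branch CNN + recurrent layer for normalized RUL regression."""
    def __init__(self, n_features=14):
        super().__init__()
        # parallel branches with different receptive fields over the feature sequence
        self.branches = nn.ModuleList([
            nn.Conv1d(n_features, 16, kernel_size=k, padding=k // 2) for k in (3, 5, 7)
        ])
        self.rnn = nn.GRU(input_size=48, hidden_size=32, batch_first=True)
        self.head = nn.Linear(32, 1)

    def forward(self, x):                        # x: (batch, n_features, seq_len)
        z = torch.cat([torch.relu(b(x)) for b in self.branches], dim=1)   # (B, 48, L)
        out, _ = self.rnn(z.transpose(1, 2))     # temporal modelling over the sequence
        return torch.sigmoid(self.head(out[:, -1]))   # normalized RUL in [0, 1]

rul = RulNet()(torch.randn(2, 14, 100))          # two dummy feature sequences
```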

