Outlier Detection Using Convolutional Neural Network for Wireless Sensor Network

Over recent years, deep learning has become one of the primary choices for handling huge amounts of data. With its deeper hidden layers, it surpasses classical methods for outlier detection in wireless sensor networks. The convolutional neural network (CNN), a biologically inspired computational model, is one of the most popular deep learning approaches; it comprises neurons that self-optimize through learning. Electroencephalography (EEG) is a tool for investigating brain function, and EEG signals are produced as time-series data. In this paper, we propose a state-of-the-art technique that processes the time-series data generated by the sensor nodes, stored in a large dataset, into discrete one-second frames and projects these frames onto 2D map images. A convolutional neural network is then trained to classify these frames. The results show improved detection accuracy and are encouraging.
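
The abstract above is the only description of the pipeline, so the following is a minimal sketch of that idea under stated assumptions: a sampled signal is sliced into one-second frames, each frame is reshaped into a small 2D map, and a compact CNN classifies each frame as normal or outlier. The sampling rate, image size, and class count are illustrative assumptions, not values from the paper.

```python
# Hypothetical sketch: slice sensor readings into one-second frames,
# reshape each frame into a 2D map, and classify with a small CNN.
# SAMPLING_RATE, IMG_SIDE, and the class count are assumptions.
import numpy as np
import torch
import torch.nn as nn

SAMPLING_RATE = 256          # samples per second (assumed)
IMG_SIDE = 16                # 16 x 16 = 256 samples per one-second frame

def frames_to_images(signal: np.ndarray) -> torch.Tensor:
    """Split a 1D signal into one-second frames and reshape each into a 2D map."""
    n_frames = len(signal) // SAMPLING_RATE
    frames = signal[: n_frames * SAMPLING_RATE].reshape(n_frames, IMG_SIDE, IMG_SIDE)
    return torch.from_numpy(frames).float().unsqueeze(1)   # (N, 1, H, W)

class FrameCNN(nn.Module):
    """Minimal 2D CNN classifying each frame as normal (0) or outlier (1)."""
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 4 * 4, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# Usage: logits = FrameCNN()(frames_to_images(np.random.randn(10 * SAMPLING_RATE)))
```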

Sensors ◽  
2017 ◽  
Vol 17 (6) ◽  
pp. 1221 ◽  
Author(s):  
Siddhartha Bhandari ◽  
Neil Bergmann ◽  
Raja Jurdak ◽  
Branislav Kusy

2018 ◽  
Vol 7 (11) ◽  
pp. 418 ◽  
Author(s):  
Tian Jiang ◽  
Xiangnan Liu ◽  
Ling Wu

Accurate and timely information about rice planting areas is essential for crop yield estimation, global climate change studies, and agricultural resource management. In this study, we present a novel pixel-level classification approach that uses a convolutional neural network (CNN) model to extract features from the enhanced vegetation index (EVI) time-series curve for classification. The goal is to explore the practicability of deep learning techniques for rice recognition in complex landscape regions, where rice is easily confused with its surroundings, using mid-resolution remote sensing images. A transfer learning strategy is used to fine-tune a pre-trained CNN model and obtain the temporal features of the EVI curve. A support vector machine (SVM), a traditional machine learning approach, is also implemented in the experiment. Finally, we evaluate the accuracy of the two models. Results show that our model performs better than the SVM, with overall accuracies of 93.60% and 91.05%, respectively. Therefore, this technique is appropriate for estimating rice planting areas in southern China on the basis of a pre-trained CNN model using time-series data, and more opportunities and potential can be found for crop classification with remote sensing and deep learning techniques in future studies.
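
As a rough illustration of the setup described above, the sketch below fine-tunes a small 1D CNN on per-pixel EVI curves (freezing a "pre-trained" backbone and re-training only the head) and sets up an SVM baseline with scikit-learn. The architecture, sequence length (EVINet, SEQ_LEN), and the fine-tuning split are assumptions, not the authors' model.

```python
# Illustrative sketch only: a 1D CNN feature extractor fine-tuned on EVI
# time-series curves, with an SVM baseline for comparison.
import torch
import torch.nn as nn
from sklearn.svm import SVC

SEQ_LEN = 23  # e.g., one EVI value per 16-day composite over a year (assumed)

class EVINet(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        # "Pre-trained" backbone; in practice its weights would be loaded.
        self.backbone = nn.Sequential(
            nn.Conv1d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(32, n_classes)   # re-trained during fine-tuning

    def forward(self, x):                      # x: (N, 1, SEQ_LEN)
        return self.head(self.backbone(x).squeeze(-1))

def fine_tune_setup(model: EVINet):
    """Freeze the backbone so only the classification head is updated."""
    for p in model.backbone.parameters():
        p.requires_grad = False
    return torch.optim.Adam(model.head.parameters(), lr=1e-3)

# SVM baseline on the raw EVI curves, as used for comparison in the study.
svm_baseline = SVC(kernel="rbf", C=1.0)
# svm_baseline.fit(X_train, y_train); svm_baseline.score(X_test, y_test)
```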


Electronics ◽  
2021 ◽  
Vol 10 (15) ◽  
pp. 1758
Author(s):  
Shangyi Yang ◽  
Chao Sun ◽  
Youngok Kim

Indoor localization schemes have significant potential for use in location-based services in areas such as smart factories, mixed reality, and indoor navigation. In particular, received signal strength (RSS)-based fingerprinting is used widely, given its simplicity and low hardware requirements. However, most studies tend to focus on estimating the 2D position of the target. Moreover, it is known that the fingerprinting scheme is computationally costly, and its positioning accuracy is readily affected by random fluctuations in the RSS values caused by fading and the multipath effect. We propose an indoor 3D localization scheme based on both fingerprinting and a 1D convolutional neural network (CNN). Instead of using the conventional fingerprint matching method, we transform the 3D positioning problem into a classification problem and use the 1D CNN model with the RSS time-series data from Bluetooth low-energy beacons for classification. By using the 1D CNN with the time-series data from multiple beacons, the inherent drawback of RSS-based fingerprinting, namely, its susceptibility to noise and randomness, is overcome, resulting in enhanced positioning accuracy. To evaluate the proposed scheme, we developed a 3D positioning system and performed comprehensive tests, whose results confirmed that the scheme significantly outperforms the conventional common spatial pattern classification algorithm.
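
To make the classification framing concrete, here is a minimal sketch under assumptions not given in the abstract: RSS time-series windows from several BLE beacons form the input channels of a 1D CNN, and each output class corresponds to one discrete 3D reference point. Beacon count, window length, and grid size (N_BEACONS, WINDOW, N_POINTS) are placeholders.

```python
# Sketch of 3D fingerprinting as classification with a 1D CNN over RSS windows.
# All sizes below are illustrative assumptions.
import torch
import torch.nn as nn

N_BEACONS = 8       # RSS channels, one per BLE beacon (assumed)
WINDOW = 20         # RSS samples per beacon in one input window (assumed)
N_POINTS = 64       # discrete 3D reference points, i.e., classes (assumed)

class RSSLocalizer(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(N_BEACONS, 32, 3, padding=1), nn.ReLU(),
            nn.Conv1d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(64, N_POINTS),
        )

    def forward(self, rss):    # rss: (batch, N_BEACONS, WINDOW)
        return self.net(rss)   # logits over 3D reference points

# predicted_point = RSSLocalizer()(torch.randn(1, N_BEACONS, WINDOW)).argmax(dim=1)
```

Averaging over a time window of RSS samples, rather than matching a single fingerprint vector, is what lets the network smooth out fading and multipath fluctuations.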


2022 ◽  
Vol 258 (1) ◽  
pp. 12
Author(s):  
Vlad Landa ◽  
Yuval Reuveni

Abstract Space weather phenomena such as solar flares have massive destructive power when they reach a certain magnitude. Here, we explore a deep-learning approach to building a solar flare-forecasting model, while examining its limitations and feature-extraction ability based on the available Geostationary Operational Environmental Satellite (GOES) X-ray time-series data. We present a multilayer 1D convolutional neural network to forecast the occurrence probability of M- and X-class solar flare events at 1, 3, 6, 12, 24, 48, 72, and 96 hr time frames. The forecasting models were trained and evaluated in two different scenarios, (1) random selection and (2) chronological selection, which were then compared in terms of common score metrics. Additionally, we compared our results to state-of-the-art flare-forecasting models. The results indicate that (1) when X-ray time-series data are used alone, the suggested model achieves higher scores for X-class flares and scores similar to previous studies for M-class flares; (2) the two scenarios yield opposite results for the X- and M-class flares; and (3) the suggested model, using X-ray time series alone, fails to distinguish between M- and X-class solar flare events. Furthermore, the scores achieved solely from X-ray time-series measurements indicate that substantial information regarding solar activity and physical processes is encapsulated in the data, and that augmenting it with additional data sets, both spatial and temporal, may lead to better predictions while providing a more comprehensive physical interpretation of solar activity. All source codes are available at https://github.com/vladlanda.
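
The authors' released code is at the GitHub link above; the block below is only a minimal sketch of the idea as stated in the abstract: a multilayer 1D CNN maps a window of GOES X-ray flux to a flare occurrence probability for one forecast horizon, with one model per horizon (1 to 96 hr). The window length and layer widths are assumptions.

```python
# Minimal sketch, not the authors' implementation: a multilayer 1D CNN that
# outputs P(flare above class threshold) for a single forecast horizon.
import torch
import torch.nn as nn

WINDOW = 24 * 60    # e.g., one day of minute-cadence X-ray flux (assumed)

class FlareForecaster(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, 5, padding=2), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(16, 32, 5, padding=2), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(32, 64, 5, padding=2), nn.ReLU(), nn.AdaptiveAvgPool1d(1),
            nn.Flatten(), nn.Linear(64, 1),
        )

    def forward(self, xray):                   # xray: (batch, 1, WINDOW)
        return torch.sigmoid(self.net(xray))   # flare occurrence probability

# prob = FlareForecaster()(torch.randn(1, 1, WINDOW))
```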


PLoS ONE ◽  
2018 ◽  
Vol 13 (5) ◽  
pp. e0196251 ◽  
Author(s):  
Jungmo Ahn ◽  
JaeYeon Park ◽  
Donghwan Park ◽  
Jeongyeup Paek ◽  
JeongGil Ko

2021 ◽  
Vol 19 (2) ◽  
pp. 1195-1212
Author(s):  
Xiaoguang Liu ◽  
Meng Chen ◽  
Tie Liang ◽  
Cunguang Lou ◽  
...  

Gait recognition is an emerging biometric technology that can be used to protect the privacy of wearable device owners. To improve the performance of existing wearable-device-based gait recognition methods, reduce the model's memory footprint, and increase its robustness, a new identification method based on multimodal fusion of gait cycle data is proposed. In addition, to preserve the time dependence and correlation of the data, the time-series data are converted into two-dimensional images using the Gramian angular field (GAF) algorithm. To address the high model complexity of existing methods, we propose a lightweight double-channel depthwise separable convolutional neural network (DC-DSCNN) model for gait recognition on wearable devices. Specifically, the time-series data of the gait cycles and the GAF images are first fed to the upper and lower channels of the DC-DSCNN model. The gait features are then extracted with a three-layer depthwise separable convolutional neural network (DSCNN) module. Next, the extracted features are passed to a softmax classifier to perform gait recognition. To evaluate the performance of the proposed method, a gait dataset of 24 subjects was collected. Experimental results show that the recognition accuracy of the DC-DSCNN algorithm is 99.58% and that the memory usage of the model is only 972 KB, which verifies that the proposed method can enable gait recognition on wearable devices with lower power consumption and higher real-time performance.
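
For reference, here are minimal sketches of two components named above, under illustrative assumptions rather than the paper's exact settings: (1) the summation-type Gramian angular field transform that turns a gait-cycle series into a 2D image, and (2) a depthwise separable convolution block of the kind a DSCNN stacks. Series length and channel counts are placeholders.

```python
# (1) GAF: rescale to [-1, 1], map to polar angles, take cos(phi_i + phi_j).
# (2) Depthwise separable conv: per-channel 3x3 conv followed by 1x1 pointwise conv.
# Shapes and channel counts are assumptions.
import numpy as np
import torch
import torch.nn as nn

def gramian_angular_field(x: np.ndarray) -> np.ndarray:
    """Summation-type GAF image of a 1D series."""
    x_scaled = 2.0 * (x - x.min()) / (x.max() - x.min()) - 1.0
    phi = np.arccos(np.clip(x_scaled, -1.0, 1.0))
    return np.cos(phi[:, None] + phi[None, :])

class DepthwiseSeparableConv2d(nn.Module):
    """Depthwise conv (one filter per channel) followed by a 1x1 pointwise conv."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, padding=1, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1)

    def forward(self, x):
        return torch.relu(self.pointwise(self.depthwise(x)))

# Example: a 128-sample gait cycle -> 128 x 128 GAF image -> one DSCNN block
img = torch.from_numpy(gramian_angular_field(np.random.randn(128))).float()
features = DepthwiseSeparableConv2d(1, 16)(img[None, None])   # (1, 16, 128, 128)
```

The depthwise/pointwise split is what keeps the parameter count and memory footprint low compared with a standard convolution of the same input and output shape.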

