Vector Auto-Regressive Deep Neural Network: A Data-Driven Deep Learning-Based Directed Functional Connectivity Estimation Toolbox

2021 ◽  
Vol 15 ◽  
Author(s):  
Takuto Okuno ◽  
Alexander Woodward

An important goal in neuroscience is to elucidate the causal relationships between the brain’s different regions. This can help reveal the brain’s functional circuitry and diagnose lesions. Currently there is a lack of approaches to functional connectome estimation that leverage the state of the art in deep learning architectures and training methodologies. Therefore, we propose a new framework based on a vector auto-regressive deep neural network (VARDNN) architecture. Our approach consists of a set of nodes, each with a deep neural network structure. These nodes can be mapped to any spatial sub-division based on the data to be analyzed, such as anatomical brain regions from which representative neural signals can be obtained. VARDNN learns to reproduce experimental time series data using modern deep learning training techniques. Based on this, we developed two novel directed functional connectivity (dFC) measures, namely VARDNN-DI and VARDNN-GC. We evaluated our measures against a number of existing functional connectome estimation measures, such as partial correlation and multivariate Granger causality combined with large-dimensionality counter-measure techniques. Our measures outperformed them across various types of ground truth data, especially as the number of nodes increased. We applied VARDNN to fMRI data to compare the dFC between 41 healthy control and 32 Alzheimer’s disease subjects. Our VARDNN-DI measure detected lesioned regions consistent with previous studies and separated the two groups well in a subject-wise evaluation framework. In summary, the VARDNN framework has powerful capabilities for whole-brain dFC estimation. We have implemented VARDNN as an open-source toolbox that can be freely downloaded by researchers who wish to carry out functional connectome analysis on their own data.
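
The released toolbox is the authoritative implementation; as an illustration only, the following PyTorch sketch shows the general idea of a nonlinear vector auto-regression with one small network per node and a Granger-causality-style ablation score. The class names, layer sizes, and ablation scheme here are assumptions, not the VARDNN-DI/VARDNN-GC definitions.

```python
import torch
import torch.nn as nn

class VARNodeNet(nn.Module):
    """One small network per node: predicts the node's next sample from
    p lagged samples of all N nodes (a nonlinear vector auto-regression)."""
    def __init__(self, n_nodes: int, p_lags: int, hidden: int = 64):
        super().__init__()
        self.n_nodes, self.p_lags = n_nodes, p_lags
        self.net = nn.Sequential(
            nn.Linear(n_nodes * p_lags, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, lagged):                 # lagged: (batch, n_nodes * p_lags)
        return self.net(lagged).squeeze(-1)    # predicted next sample of this node


def ablation_score(model: VARNodeNet, lagged, target, source: int) -> float:
    """Granger-causality-style score: log ratio of the prediction error with the
    source node's lags zeroed out to the full-model error (illustrative only)."""
    with torch.no_grad():
        full_err = torch.mean((model(lagged) - target) ** 2)
        ablated = lagged.reshape(-1, model.n_nodes, model.p_lags).clone()
        ablated[:, source, :] = 0.0
        abl_err = torch.mean(
            (model(ablated.reshape(lagged.shape[0], -1)) - target) ** 2)
    return torch.log(abl_err / full_err).item()
```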

In recent years, deep learning has become one of the primary choices for handling huge amounts of data. With its deeper hidden layers, it surpasses classical methods for outlier detection in wireless sensor networks. The Convolutional Neural Network (CNN) is a biologically inspired computational model and one of the most popular deep learning approaches. It comprises neurons that self-optimize through learning. Electroencephalography (EEG) is a tool used to investigate brain function, and the EEG signal is output as time-series data. In this paper, we propose a technique that processes the time-series data generated by the sensor nodes, stored in a large dataset, into discrete one-second frames, and projects these frames onto 2D map images. A convolutional neural network (CNN) is then trained to classify these frames. The results show improved detection accuracy and are encouraging.
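
As a rough sketch of the pipeline described above (one-second framing followed by CNN classification), the following PyTorch code is illustrative; the frame layout, layer sizes, and class count are assumptions rather than the authors' configuration.

```python
import numpy as np
import torch
import torch.nn as nn

def one_second_frames(eeg: np.ndarray, fs: int) -> np.ndarray:
    """Cut a (channels, samples) EEG record into non-overlapping one-second
    frames and stack each as a single-channel 2-D map of shape (channels, fs)."""
    n_frames = eeg.shape[1] // fs
    frames = eeg[:, : n_frames * fs].reshape(eeg.shape[0], n_frames, fs)
    return np.transpose(frames, (1, 0, 2))[:, None, :, :]   # (frames, 1, channels, fs)

class FrameCNN(nn.Module):
    """Small CNN that classifies each one-second frame as normal or outlier."""
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.classifier = nn.Linear(32 * 4 * 4, n_classes)

    def forward(self, x):                        # x: (batch, 1, channels, fs)
        return self.classifier(self.features(x).flatten(1))
```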


2019 ◽  
Vol 9 (7) ◽  
pp. 1487 ◽  
Author(s):  
Fei Mei ◽  
Qingliang Wu ◽  
Tian Shi ◽  
Jixiang Lu ◽  
Yi Pan ◽  
...  

Recently, a large number of distributed photovoltaic (PV) power generation units have been connected to the power grid, which has increased the fluctuation of the net load. Therefore, load forecasting has become more difficult. Considering the characteristics of the net load, an ultrashort-term forecasting model based on phase space reconstruction and a deep neural network (DNN) is proposed, which can be divided into two steps. First, the phase space reconstruction of the net load time series data is performed using the C-C method. Second, the reconstructed data are fitted by the DNN to obtain the predicted value of the net load. The performance of this model is verified using real data. The accuracy is high in forecasting the net load under a high PV penetration rate and different weather conditions.
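
A minimal sketch of the two steps, using plain delay embedding as a stand-in for the C-C phase-space reconstruction followed by a DNN fit; the embedding dimension, delay, and network sizes below are illustrative assumptions.

```python
import numpy as np
import torch
import torch.nn as nn

def delay_embed(series: np.ndarray, dim: int, tau: int):
    """Phase-space reconstruction by delay embedding: row t holds
    [x(t), x(t+tau), ..., x(t+(dim-1)*tau)] and the target is the next sample.
    (The paper selects dim and tau with the C-C method; they are fixed here.)"""
    n = len(series) - (dim - 1) * tau - 1
    X = np.stack([series[i * tau : i * tau + n] for i in range(dim)], axis=1)
    y = series[(dim - 1) * tau + 1 : (dim - 1) * tau + 1 + n]
    return X.astype(np.float32), y.astype(np.float32)

# Fit the reconstructed vectors with a plain feed-forward DNN (MSE regression).
dim, tau = 5, 3                                   # placeholders for C-C estimates
X, y = delay_embed(np.sin(np.linspace(0, 60, 2000)), dim, tau)
model = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(),
                      nn.Linear(64, 64), nn.ReLU(),
                      nn.Linear(64, 1))
pred = model(torch.from_numpy(X)).squeeze(-1)     # trained with an MSE loss in practice
```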


Author(s):  
Osama A. Osman ◽  
Hesham Rakha

Distracted driving (i.e., engaging in secondary tasks) is an epidemic that threatens the lives of thousands every year. Data collected from vehicular sensor technologies and through connectivity provide comprehensive information that, if used to detect driver engagement in secondary tasks, could save thousands of lives and millions of dollars. This study investigates the possibility of achieving this goal using promising deep learning tools. Specifically, two deep neural network models (a multilayer perceptron neural network model and a long short-term memory network [LSTMN] model) were developed to identify three secondary tasks: cellphone calling, cellphone texting, and conversation with adjacent passengers. The Second Strategic Highway Research Program Naturalistic Driving Study (SHRP 2 NDS) time series data, collected using vehicle sensor technology, were used to train and test the models. The results show excellent performance for the developed models, with a slight edge for the LSTMN model, and overall classification accuracies ranging between 95% and 96%. Specifically, the models are able to identify the different types of secondary tasks with high accuracies of 100% for calling, 96%–97% for texting, 90%–91% for conversation, and 95%–96% for normal driving. Based on this performance, the developed models improve on the results of a previous model developed by the authors to classify the same three secondary tasks, which had an accuracy of 82%. The models are promising for use in in-vehicle driving assistance technology to report engagement in unlawful tasks or to alert drivers to take over control in level 1 and 2 automated vehicles.
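
As a hedged sketch of the sequence-based variant only (not the authors' SHRP 2 NDS pipeline), a compact LSTM classifier over windows of vehicle kinematic channels might look like the following; the feature count, hidden size, and window handling are assumptions.

```python
import torch
import torch.nn as nn

class SecondaryTaskLSTM(nn.Module):
    """LSTM classifier over a window of vehicle sensor channels (e.g., speed,
    pedal position, steering angle). Four classes: normal driving, calling,
    texting, passenger conversation. Sizes here are illustrative."""
    def __init__(self, n_features: int, n_classes: int = 4, hidden: int = 128):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                 # x: (batch, time, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])   # classify from the final time step
```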


Symmetry ◽  
2020 ◽  
Vol 12 (9) ◽  
pp. 1465
Author(s):  
Taikyeong Jeong

When applying a large-scale database that holds behavioral intelligence training data to deep neural networks, the classification accuracy of the artificial intelligence algorithm needs to reflect the behavioral characteristics of the individual. When a change in behavior is recognized, that is, when a feedback model based on a data connection model is applied, the time series data are analyzed by extracting feature vectors and interpolating data in a deep neural network, overcoming the limitations of existing statistical analysis. The results of the first feedback model are used as inputs to the deep neural network and, in turn, as the input values of the second feedback model; interpolating the behavioral intelligence data, that is, context awareness and lifelog data including physical activities, applies the most appropriate conditions. The results of this study show that this method effectively improves the accuracy of the artificial intelligence results. In the experiment, after extracting the feature vectors of a deep neural network and restoring the missing values, the classification accuracy was verified to improve by about 20% on average. In addition, by adding behavioral intelligence data to the time series data, a new data connection model, the Deep Neural Network Feedback Model, was proposed, and it was verified that the classification accuracy improves by about 8 to 9% on average. Based on the hypothesis, the F(X′) = X model was applied to classify the training and test data sets thoroughly, presenting a symmetrical balance between the data connection model and the context-aware data. In addition, behavioral activity data were extrapolated from context-aware and forecasting perspectives to support the experimental results.
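
The abstract does not give implementation details, so the sketch below only illustrates the two ideas it describes, restoring missing values in the time series and feeding the first model's output, together with the behavioral (context-aware/lifelog) features, into a second feedback stage; all names and sizes are hypothetical.

```python
import numpy as np
import torch
import torch.nn as nn

def interpolate_missing(x: np.ndarray) -> np.ndarray:
    """Fill NaNs in each time-series channel by linear interpolation
    (a simple stand-in for the missing-value restoration step)."""
    x = x.copy()
    for c in range(x.shape[1]):
        col = x[:, c]
        nan = np.isnan(col)
        col[nan] = np.interp(np.flatnonzero(nan), np.flatnonzero(~nan), col[~nan])
    return x

class FeedbackDNN(nn.Module):
    """Two-stage feedback: the first network's output is concatenated with the
    behavioral (context-aware/lifelog) features and fed to a second network."""
    def __init__(self, n_features: int, n_behavior: int, n_classes: int):
        super().__init__()
        self.stage1 = nn.Sequential(nn.Linear(n_features, 64), nn.ReLU(),
                                    nn.Linear(64, n_classes))
        self.stage2 = nn.Sequential(nn.Linear(n_classes + n_behavior, 64), nn.ReLU(),
                                    nn.Linear(64, n_classes))

    def forward(self, x, behavior):
        return self.stage2(torch.cat([self.stage1(x), behavior], dim=1))
```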


2020 ◽  
Vol 9 (1) ◽  
pp. 2726-2733

EEG is an extensively used technique for diagnosing epilepsy. The research objective is to examine the frequency variations found in epileptic EEG signals. The EEG dataset was acquired from the online database of Bonn University (BU). Then, a Butterworth type-II filter was applied to remove unwanted artifacts from the acquired EEG signals. Further, the Multivariate Variational Mode Decomposition (MVMD) methodology was applied to decompose the denoised EEG signals. The signal decomposition helps in finding the information required to model the complex time series data. Then, fifteen entropy, linear, and statistical features were extracted from the decomposed signals. In addition, an ant colony optimization technique was proposed for optimizing the extracted features. The optimized feature vectors were classified by a Deep Neural Network (DNN) under two scenarios: (seizure and healthy) and (interictal, ictal, and normal). The accuracy attained by the ant colony optimization with the deep neural network on the BU EEG dataset is 98.12%, compared with existing models.
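
A compressed sketch of the back end of this pipeline (feature vector to DNN classifier); the MVMD decomposition and the ant-colony feature selection are not reproduced here, and the five statistics per mode merely stand in for the fifteen features used in the paper.

```python
import numpy as np
import torch
import torch.nn as nn
from scipy.stats import skew, kurtosis

def basic_features(modes) -> np.ndarray:
    """Statistical features per decomposed mode (stand-ins for the paper's
    fifteen entropy/linear/statistical features; MVMD itself is not shown)."""
    feats = []
    for m in modes:                       # modes: list of 1-D arrays
        feats += [m.mean(), m.std(), skew(m), kurtosis(m),
                  np.log(np.var(m) + 1e-12)]
    return np.asarray(feats, dtype=np.float32)

class SeizureDNN(nn.Module):
    """Fully connected classifier over the selected feature vector
    (ant-colony feature selection would reduce n_features upstream)."""
    def __init__(self, n_features: int, n_classes: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_features, 128), nn.ReLU(),
                                 nn.Linear(128, 64), nn.ReLU(),
                                 nn.Linear(64, n_classes))

    def forward(self, x):
        return self.net(x)
```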


Computers ◽  
2020 ◽  
Vol 9 (4) ◽  
pp. 99
Author(s):  
Sultan Daud Khan ◽  
Louai Alarabi ◽  
Saleh Basalamah

COVID-19 caused the largest economic recession in history by placing more than one third of the world’s population in lockdown. The prolonged restrictions on economic and business activities caused huge economic turmoil that significantly affected the financial markets. To ease the growing pressure on the economy, scientists proposed intermittent lockdowns, commonly known as “smart lockdowns”. Under a smart lockdown, areas that contain infected clusters of population, namely hotspots, are placed under lockdown, while economic activities are allowed to operate in uninfected areas. In this study, we propose a novel deep learning prediction framework for the accurate prediction of hotspots. We exploit the benefits of two deep learning models, i.e., the Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM), and propose a hybrid framework that has the ability to extract multi time-scale features from the convolutional layers of the CNN. The multi time-scale features are then concatenated and provided as input to a two-layer LSTM model. The LSTM model identifies short-, medium-, and long-term dependencies by learning the representation of the time-series data. We perform a series of experiments and compare the proposed framework with other state-of-the-art statistical and machine learning based prediction models. The experimental results demonstrate that the proposed framework outperforms existing methods by a clear margin.
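
A hedged sketch of such a hybrid: parallel 1-D convolutions with different kernel sizes supply multi time-scale features to a two-layer LSTM that regresses the next value of the case-count series. The kernel sizes, channel counts, and output are assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

class HotspotCNNLSTM(nn.Module):
    """Hybrid predictor: 1-D convolutions with different kernel sizes extract
    multi time-scale features from a case-count window; the concatenated
    feature sequence is fed to a 2-layer LSTM (sizes are illustrative)."""
    def __init__(self, n_features: int = 1, hidden: int = 64):
        super().__init__()
        self.scales = nn.ModuleList([
            nn.Conv1d(n_features, 16, kernel_size=k, padding=k // 2)
            for k in (3, 5, 7)
        ])
        self.lstm = nn.LSTM(16 * 3, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, 1)     # next-step infection count

    def forward(self, x):                    # x: (batch, time, n_features)
        z = x.transpose(1, 2)                # -> (batch, n_features, time)
        feats = torch.cat([torch.relu(conv(z)) for conv in self.scales], dim=1)
        out, _ = self.lstm(feats.transpose(1, 2))
        return self.head(out[:, -1, :])
```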


Sensors ◽  
2020 ◽  
Vol 20 (24) ◽  
pp. 7109
Author(s):  
Chengying Zhao ◽  
Xianzhen Huang ◽  
Yuxiong Li ◽  
Muhammad Yousaf Iqbal

In recent years, prognostics and health management (PHM) has played an important role in industrial engineering. Efficient remaining useful life (RUL) prediction can ensure the development of maintenance strategies and reduce industrial losses. Recently, data-driven deep learning RUL prediction methods have attracted increasing attention. The convolutional neural network (CNN) is a kind of deep neural network widely used in RUL prediction and shows great potential for this application. A CNN extracts features from time-series data as spatial features; processing features this way, without considering the time dimension, limits the prediction accuracy of the model. In contrast, the commonly used long short-term memory (LSTM) network considers the temporal order of the data but, compared with the CNN, lacks spatial feature extraction capability. This paper proposes a double-channel hybrid prediction model based on a CNN and a bidirectional LSTM network to avoid these drawbacks. A sliding time window is used for data preprocessing, and an improved piece-wise linear function is used for model validation. The prediction model is evaluated using the C-MAPSS dataset provided by NASA. The results show that the proposed model has better prediction performance than other state-of-the-art models.
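
A minimal sketch of the double-channel idea under assumed sizes: a 1-D CNN branch for spatial features and a bidirectional LSTM branch for temporal dependence over the same sliding window, fused for RUL regression, together with a common piece-wise linear labeling rule used with C-MAPSS. Neither is the paper's exact configuration.

```python
import torch
import torch.nn as nn

class DualChannelRUL(nn.Module):
    """Double-channel sketch: a 1-D CNN branch extracts spatial features and a
    bidirectional LSTM branch captures temporal dependence from the same
    sliding window of sensor readings; the two are fused for RUL regression."""
    def __init__(self, n_sensors: int, hidden: int = 64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(n_sensors, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.bilstm = nn.LSTM(n_sensors, hidden, batch_first=True,
                              bidirectional=True)
        self.head = nn.Linear(32 + 2 * hidden, 1)

    def forward(self, x):                                # x: (batch, window, n_sensors)
        c = self.cnn(x.transpose(1, 2)).squeeze(-1)      # (batch, 32)
        out, _ = self.bilstm(x)
        t = out[:, -1, :]                                # (batch, 2 * hidden)
        return self.head(torch.cat([c, t], dim=1))       # predicted RUL

def piecewise_rul(cycles_to_failure: int, cap: int = 125) -> int:
    """Piece-wise linear RUL target: clip the early-life label at a cap."""
    return min(cycles_to_failure, cap)
```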


IEEE Access ◽  
2019 ◽  
Vol 7 ◽  
pp. 131248-131255 ◽  
Author(s):  
Jordan Yeomans ◽  
Simon Thwaites ◽  
William S. P. Robertson ◽  
David Booth ◽  
Brian Ng ◽  
...  

2021 ◽  
Vol 2021 ◽  
pp. 1-13
Author(s):  
Yao Li

Faults occurring in a production line can cause many losses. Predicting fault events before they occur or identifying their causes can effectively reduce such losses. A modern production line can provide enough data to address the problem. However, for complex industrial processes, the problem becomes very difficult with traditional methods. In this paper, we propose a new approach based on a deep learning (DL) algorithm to solve it. First, we regard the process data as a spatial sequence ordered by the production process, which is different from traditional time series data. Second, we improve the long short-term memory (LSTM) neural network in an encoder-decoder model to adapt it to the branch structure corresponding to the spatial sequence. Meanwhile, an attention mechanism (AM) algorithm is used for fault detection and cause identification. Third, instead of traditional binary classification, the output is defined as a sequence of fault types. The proposed approach has two advantages. On the one hand, treating the data as a spatial sequence rather than a time sequence can overcome multidimensional problems and improve prediction accuracy. On the other hand, in the trained neural network, the weight vectors generated by the AM algorithm represent the correlation between faults and the input data. This correlation can help engineers identify the cause of faults. The proposed approach is compared with several well-developed fault diagnosis methods on the Tennessee Eastman process. Experimental results show that the approach has higher prediction accuracy, and the weight vectors accurately label the factors that cause faults.
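
As a hedged sketch of the attention part only (not the paper's branch-structured encoder-decoder), the model below encodes the process measurements as an ordered sequence, attends over the encoder states for each output position, and emits a sequence of fault-type logits whose attention weights can be inspected for cause identification.

```python
import torch
import torch.nn as nn

class FaultSeq2Seq(nn.Module):
    """Encoder-attention sketch: an LSTM encodes the process measurements as a
    spatial sequence (ordered by production step), dot-product attention over
    the encoder states produces a context for each output position, and a
    linear head emits a sequence of fault-type logits. The attention weights
    indicate which steps the model associates with each fault."""
    def __init__(self, n_measurements: int, n_fault_types: int,
                 out_len: int, hidden: int = 64):
        super().__init__()
        self.encoder = nn.LSTM(n_measurements, hidden, batch_first=True)
        self.queries = nn.Parameter(torch.randn(out_len, hidden))
        self.classifier = nn.Linear(hidden, n_fault_types)

    def forward(self, x):                      # x: (batch, n_steps, n_measurements)
        enc, _ = self.encoder(x)               # (batch, n_steps, hidden)
        scores = torch.einsum("oh,bth->bot", self.queries, enc)
        attn = torch.softmax(scores, dim=-1)   # (batch, out_len, n_steps)
        context = torch.bmm(attn, enc)         # (batch, out_len, hidden)
        return self.classifier(context), attn  # logits per position + weights
```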


Author(s):  
Crina Deac ◽  
Gicu Călin Deac ◽  
Radu Constantin Parpală ◽  
Cicerone Laurentiu Popa ◽  
...  

Identifying the “health state” of equipment is the domain of condition monitoring. The paper proposes a study of two models, a DNN (Deep Neural Network) and a CNN (Convolutional Neural Network), over an existing dataset provided by Case Western Reserve University for analyzing vibrations in fault diagnosis. After the models are trained on the windowed dataset using an optimal learning rate to minimize the cost function, and are tested by computing the loss, accuracy, and precision of the results, the weights are saved and the models can be tested on other real data. The trained models recognize raw time series data collected by micro-electro-mechanical accelerometer sensors and detect anomalies based on previous time series entries.
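
A brief sketch of the windowing step and a 1-D CNN alternative over a raw vibration signal; the window length, stride, and layer sizes are illustrative assumptions, not the paper's settings.

```python
import numpy as np
import torch
import torch.nn as nn

def window_signal(signal: np.ndarray, win: int = 1024, step: int = 512) -> np.ndarray:
    """Slice a raw accelerometer time series into overlapping windows,
    matching the windowed-dataset preparation described above."""
    starts = range(0, len(signal) - win + 1, step)
    return np.stack([signal[s:s + win] for s in starts]).astype(np.float32)

class VibrationCNN(nn.Module):
    """1-D CNN over raw vibration windows (layer sizes are illustrative);
    a plain DNN would instead feed the flattened window to dense layers."""
    def __init__(self, n_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=64, stride=8), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(4),
        )
        self.classifier = nn.Linear(32 * 4, n_classes)

    def forward(self, x):                  # x: (batch, 1, win)
        return self.classifier(self.features(x).flatten(1))
```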

