Time-resolved correspondences between deep neural network layers and EEG measurements in object processing

2020 ◽  
Vol 172 ◽  
pp. 27-45 ◽  
Author(s):  
Nathan C.L. Kong ◽  
Blair Kaneshiro ◽  
Daniel L.K. Yamins ◽  
Anthony M. Norcia
Author(s):  
Mohammad Khalid Pandit ◽  
Roohie Naaz Mir ◽  
Mohammad Ahsan Chishti

Background: Deep neural networks have become the state-of-the-art technology for real-world classification tasks due to their ability to learn better feature representations at each layer. However, the added accuracy associated with deeper layers comes at a substantial cost in computation, energy, and latency. Objective: Implementing such architectures on resource-constrained IoT devices is prohibitive due to their computational and memory requirements; these constraints are particularly severe in the IoT domain. In this paper, we propose the Adaptive Deep Neural Network (ADNN), which is split across the hierarchical compute layers, i.e., edge, fog, and cloud, with each split having one or more exit locations. Methods: At each exit location, a data sample adaptively chooses either to exit the network (based on a confidence criterion) or to be fed into deeper layers housed at subsequent compute layers. The design of ADNN yields fast and energy-efficient decision making (inference), and all exit points are jointly optimized so that the overall loss is minimized. Results: Experiments on the MNIST dataset show that 41.9% of samples exit at the edge location (correctly classified) and 49.7% of samples exit at the fog layer. Similar results are obtained on the Fashion-MNIST dataset, with only 19.4% of samples requiring the entire network. With this architecture, most data samples are processed and classified locally while maintaining classification accuracy and keeping in check the communication, energy, and latency requirements of time-sensitive IoT applications. Conclusion: We investigated distributing the layers of a deep neural network across edge, fog, and cloud computing devices, wherein data samples adaptively choose exit points to classify themselves based on a confidence criterion (threshold).
The results show that the majority of data samples are classified within the user's private network (edge, fog), while only a few samples require all layers of the ADNN for classification.
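The confidence-based early-exit mechanism described above can be sketched in a few lines. The sketch below is illustrative, not the authors' implementation: the tier names, toy backbones, two-class exit heads, and the 0.9 threshold are all assumptions. A sample traverses the edge, fog, and cloud partitions in order and leaves at the first exit whose softmax confidence clears the threshold; the deepest exit always classifies.

```python
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

def adaptive_inference(x, stages, threshold=0.9):
    """Pass x through the tier partitions in order; return (tier, class)
    from the first exit whose softmax confidence clears the threshold."""
    for tier, backbone, exit_head in stages:
        x = backbone(x)                  # features computed at this tier
        probs = softmax(exit_head(x))    # local exit classifier
        conf = max(probs)
        if conf >= threshold:
            return tier, probs.index(conf)
    # Fell through every branch: the deepest (cloud) exit decides.
    return tier, probs.index(conf)

# Hypothetical two-class pipeline: each tier refines a scalar feature.
stages = [
    ("edge",  lambda x: x * 2, lambda x: [x, 1.0]),
    ("fog",   lambda x: x + 3, lambda x: [x, 2.0]),
    ("cloud", lambda x: x - 1, lambda x: [x, 4.0]),
]

print(adaptive_inference(5.0, stages))  # confident already at the edge exit
print(adaptive_inference(0.1, stages))  # low confidence; descends to the cloud
```

Lowering the threshold trades accuracy for more local exits, which is the knob that controls the communication, energy, and latency budget in the abstract's framing.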


2021 ◽  
Vol 27 (S1) ◽  
pp. 262-264
Author(s):  
Joshua Vincent ◽  
Sreyas Mohan ◽  
Ramon Manzorro ◽  
Binh Tang ◽  
Dev Sheth ◽  
...  

2021 ◽  
Author(s):  
Majid Ashouri

Rapidly evolving Internet of Things (IoT) systems demand addressing new requirements. In particular, IoT systems must be deployed efficiently to meet quality requirements such as latency, energy consumption, privacy, and bandwidth utilization. The increasing availability of computational resources close to the edge has prompted the idea of using them for distributed computing and storage, known as edge computing. Edge computing can complement cloud computing to facilitate the deployment of IoT systems and improve their quality. However, deciding where to deploy the various application components is not a straightforward task, and IoT system designers need support in making this decision. To support designers, this thesis focuses on system qualities and makes three main contributions. First, by reviewing the literature, we identified the most relevant and most used qualities and metrics. Second, to analyse how computer simulation can be used as a supporting tool, we investigated edge computing simulators, and in particular the metrics they provide for modeling and analyzing IoT systems in edge computing. Finally, we introduced a method to represent how multiple qualities can be considered in the decision. In particular, we considered distributing deep neural network layers as a use case and ranked the deployment options by measuring the relevant metrics via simulation.
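The final contribution, ranking deployment options against multiple qualities, can be illustrated with a minimal weighted-sum sketch. This assumes simulation has already produced per-option metric values; the option names, metric values, and weights below are hypothetical, not taken from the thesis.

```python
def rank_deployments(options, weights):
    """Rank deployment options by a weighted sum of min-max normalized
    metrics. All metrics here are lower-is-better, so the lowest
    aggregate score ranks first."""
    def normalize(metric):
        values = [opt[metric] for opt in options.values()]
        lo, hi = min(values), max(values)
        span = (hi - lo) or 1.0  # avoid division by zero on ties
        return {name: (opt[metric] - lo) / span for name, opt in options.items()}

    normalized = {m: normalize(m) for m in weights}
    scores = {
        name: sum(weights[m] * normalized[m][name] for m in weights)
        for name in options
    }
    return sorted(scores, key=scores.get)

# Hypothetical simulated metrics for placing DNN layers at each tier.
options = {
    "all-cloud":  {"latency_ms": 120, "energy_mj": 40, "bandwidth_kb": 800},
    "edge-split": {"latency_ms": 35,  "energy_mj": 55, "bandwidth_kb": 90},
    "all-edge":   {"latency_ms": 60,  "energy_mj": 90, "bandwidth_kb": 10},
}
weights = {"latency_ms": 0.5, "energy_mj": 0.2, "bandwidth_kb": 0.3}

print(rank_deployments(options, weights))
```

Changing the weights re-ranks the options, which is the essence of letting a designer express which quality matters most for a given IoT application.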


2020 ◽  
Vol 41 (Supplement_2) ◽  
Author(s):  
J.M Gregoire ◽  
C Gilon ◽  
S Carlier ◽  
H Bersini

Abstract Background The identification of patients still in sinus rhythm who will present an atrial fibrillation (AF) episode one month later is possible using machine learning (ML) techniques. However, these new ML algorithms do not provide any relevant information about the underlying pathophysiology. Purpose To compare the predictive performance for forecasting AF between a machine learning algorithm and other parameters whose pathophysiological mechanisms are known to play a role in triggering arrhythmias, i.e. the count of premature beats (PB) and heart rate variability (HRV) parameters. Material and methods We conducted a retrospective study from an outpatient clinic. 10484 Holter ECG recordings were screened, and 250 analysable AF onsets were labelled. We developed a deep neural network model composed of convolutional neural network layers and bidirectional gated recurrent units as recurrent neural network layers, trained to forecast paroxysmal AF episodes from RR interval variations. This model works like a black box. For comparison purposes, we used a random forest (RF) model to obtain forecast results using HRV parameters with and without PB; this model allows evaluation of the relevance of the HRV parameters and the PB count used for the forecast. We calculated the area under the receiver operating characteristic curve for the different time windows, counted in RR intervals before the AF onset. Results As shown in the table, the forecasting value of the deep neural network model was not superior to that of the random forest algorithm. The prediction value of both decreased when analyzing RR intervals further away from the onset of AF. Conclusions These results suggest that HRV plays a predominant role in triggering AF episodes and that premature beats add only minor information. Moreover, the closer the window to AF onset, the better the accuracy, regardless of the method used.
Such detection algorithms, once implemented in pacemakers, might prove useful for preventing AF onset by changing the pacing sequence while patients are still in sinus rhythm; however, this remains to be demonstrated. Funding Acknowledgement Type of funding source: None
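The HRV parameters fed to the random forest are standard time-domain metrics computed over RR-interval series. A minimal sketch of two common ones (SDNN and RMSSD) plus a crude premature-beat count is shown below; the 0.8 prematurity ratio and the toy RR series are assumptions for illustration, not the study's actual feature set or detection rule.

```python
import math

def hrv_features(rr_ms):
    """Time-domain HRV metrics from RR intervals (ms): mean RR,
    SDNN (overall variability, sample std of RR), and RMSSD
    (beat-to-beat variability, RMS of successive differences)."""
    n = len(rr_ms)
    mean_rr = sum(rr_ms) / n
    sdnn = math.sqrt(sum((r - mean_rr) ** 2 for r in rr_ms) / (n - 1))
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    return {"mean_rr": mean_rr, "sdnn": sdnn, "rmssd": rmssd}

def count_premature_beats(rr_ms, ratio=0.8):
    """Crude premature-beat proxy: flag any interval shorter than
    `ratio` times the preceding one."""
    return sum(1 for a, b in zip(rr_ms, rr_ms[1:]) if b < ratio * a)

# Toy series: steady rhythm, one premature beat, then a compensatory pause.
rr = [800, 810, 790, 805, 500, 950, 800]
features = hrv_features(rr)
print(round(features["sdnn"], 1), round(features["rmssd"], 1),
      count_premature_beats(rr))
```

Features like these, windowed over the RR intervals preceding a candidate onset, are the kind of interpretable inputs the RF comparison uses, in contrast with the black-box deep model.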


2019 ◽  
Vol 140 ◽  
pp. 167-174 ◽  
Author(s):  
Wei Zhao ◽  
Bin Han ◽  
Yong Yang ◽  
Mark Buyyounouski ◽  
Steven L. Hancock ◽  
...  
