Diagnosis of hearing deficiency using EEG based AEP signals: CWT and improved-VGG16 pipeline

2021 · Vol 7 · pp. e638
Author(s): Md Nahidul Islam, Norizam Sulaiman, Fahmid Al Farid, Jia Uddin, Salem A. Alyami, ...

Hearing deficiency is the world's most common sensory impairment and impedes human communication and learning. Early and precise hearing diagnosis using electroencephalogram (EEG) is regarded as the optimal strategy to deal with this issue. Among a wide range of EEG control signals, the most relevant modality for hearing loss diagnosis is the auditory evoked potential (AEP), which is produced in the brain's cortex in response to an auditory stimulus. This study aims to develop a robust intelligent auditory sensation system utilizing a pre-trained deep learning framework by analyzing and evaluating the functional reliability of hearing based on the AEP response. First, the raw AEP data are transformed into time-frequency images through wavelet transformation. Then, lower-level features are extracted using a pre-trained network. Here, an improved-VGG16 architecture has been designed by removing some convolutional layers and adding new layers to the fully connected block. Subsequently, the higher levels of the neural network architecture are fine-tuned using the labelled time-frequency images. Finally, the proposed method's performance has been validated on a reputable, publicly available AEP dataset recorded from sixteen subjects while they heard specific auditory stimuli in the left or right ear. The proposed method outperforms state-of-the-art studies by improving the classification accuracy to 96.87% (from 57.375%), which indicates that the proposed improved-VGG16 architecture can reliably handle AEP responses in early hearing loss diagnosis.
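The transfer-learning pipeline described here can be sketched in a few lines of Keras. The truncation point (block4_pool), the size of the new fully connected block, and the input shape are assumptions for illustration; the abstract does not specify them.

```python
# Sketch: truncated VGG16 backbone with a new fully connected head,
# fine-tuned on CWT time-frequency images of AEP signals.
import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                   input_shape=(224, 224, 3))
# "Improved" variant: keep only the lower convolutional blocks (assumed
# cut point) and freeze them so pre-trained low-level features are reused.
truncated = models.Model(base.input, base.get_layer("block4_pool").output)
truncated.trainable = False

model = models.Sequential([
    truncated,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),   # new fully connected block
    layers.Dropout(0.5),
    layers.Dense(2, activation="softmax"),  # left-ear vs. right-ear stimulus
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

Fine-tuning then proceeds in the usual way: train the new head first, and optionally unfreeze the top of the backbone at a low learning rate.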

SLEEP · 2020 · Vol 43 (Supplement_1) · pp. A24-A26
Author(s): J Hammarlund, R Anafi

Abstract

Introduction: We recently used unsupervised machine learning to order genome-scale data along a circadian cycle. CYCLOPS (Anafi et al., PNAS 2017) encodes high-dimensional genomic data onto an ellipse and offers the potential to identify circadian patterns in large datasets. This approach requires many samples from a wide range of circadian phases. Individual datasets often lack sufficient samples. Composite expression repositories vastly increase the available data. However, these agglomerated datasets also introduce technical (e.g., processing site) and biological (e.g., age or disease) confounders that may hamper circadian ordering.

Methods: Using the Flux machine learning library, we expanded the CYCLOPS network. We incorporated additional encoding and decoding layers that model the influence of labeled confounding variables. These layers feed into a fully connected autoencoder with a circular bottleneck, encoding the estimated phase of each sample. The expanded network simultaneously estimates the influence of confounding variables along with circadian phase. We compared the performance of the original and expanded networks using both real and simulated expression data. In a first test, we used time-labeled data from a single center describing human cortical samples obtained at autopsy. To generate a second, idealized processing center, we introduced gene-specific biases in expression along with a bias in sample collection time. In a second test, we combined human lung biopsy data from two medical centers.

Results: The performance of the original CYCLOPS network degraded with the introduction of increasing non-circadian confounds. The expanded network assessed circadian phase more accurately over a wider range of confounding influences.

Conclusion: Adding labeled confounding variables to the network architecture improves circadian data ordering. The expanded network should facilitate the application of CYCLOPS to multi-center data and expand the data available for circadian analysis.

Support: This work was supported by the National Cancer Institute (1R01CA227485-01).
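A minimal PyTorch sketch of the expanded-network idea (the original is written against the Flux library in Julia). The single linear encoder/decoder and the additive confounder term are assumptions; the abstract does not give the exact layer structure.

```python
# Sketch: autoencoder with a circular bottleneck plus a linear term for
# labeled confounders, so phase and confounder effects are fit jointly.
import torch
import torch.nn as nn

class CyclopsLike(nn.Module):
    def __init__(self, n_genes, n_confounders):
        super().__init__()
        self.encode = nn.Linear(n_genes, 2)
        self.decode = nn.Linear(2, n_genes)
        # Additional layer modeling additive effects of labeled
        # confounders (e.g. processing site, age, disease status).
        self.confound = nn.Linear(n_confounders, n_genes, bias=False)

    def forward(self, x, c):
        z = self.encode(x)
        z = z / z.norm(dim=1, keepdim=True)     # circular bottleneck
        phase = torch.atan2(z[:, 1], z[:, 0])   # estimated circadian phase
        return self.decode(z) + self.confound(c), phase
```

Training minimizes reconstruction error, so the network must explain each sample with a point on the circle plus whatever the labeled confounders can absorb.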


Author(s): Asma Karama, Olivier Bernard, Jean-Luc Gouzé

We propose a general methodology for developing a hybrid neural model for a wide range of biotechnological processes. The hybrid neural modelling approach combines the flexibility of a neural network representation of unknown process kinetics with a global, mass-balance-based process description. The hybrid model is built in such a way that its trajectories keep their physical and biological meaning (mass balance, positivity of the concentrations, boundedness, saturation or inhibition of kinetics) even far from the identification data conditions. We examine the constraints (a priori knowledge) that must be satisfied by the model and that provide additional conditions to be imposed on the neural network. We illustrate our approach with various biotechnological processes, showing how to select the appropriate neural network architecture. The method is detailed for modelling an anaerobic wastewater treatment bioreactor using experimental data.
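A minimal sketch of the hybrid principle, assuming a single-substrate chemostat-style mass balance; the balance equations, parameter values, and tiny kinetics network below are illustrative stand-ins, not the paper's model.

```python
# Sketch: known mass-balance structure with a neural network only for
# the unknown growth kinetics mu(S).
import numpy as np

def nn_kinetics(S, w):
    """One-neuron sigmoid network for mu(S); the bounded, non-negative
    output (for w[2] >= 0) preserves saturation and biological meaning."""
    return w[2] / (1.0 + np.exp(-(w[0] * S + w[1])))

def hybrid_rhs(t, y, w, D=0.1, S_in=5.0, Y=0.5):
    X, S = max(y[0], 0.0), max(y[1], 0.0)   # enforce positivity
    mu = nn_kinetics(S, w)
    dX = mu * X - D * X                     # biomass mass balance
    dS = D * (S_in - S) - mu * X / Y        # substrate mass balance
    return [dX, dS]

# e.g. scipy.integrate.solve_ivp(hybrid_rhs, (0, 100), [0.1, 5.0],
#                                args=(np.array([1.0, -2.0, 0.3]),))
```

Because the balances are hard-coded and mu(S) is bounded and non-negative by construction, trajectories remain physically meaningful even away from the identification data.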


2021
Author(s): Xing Hu, Ling Liang, Xiaobing Chen, Lei Deng, Yu Ji, ...

As deep neural networks (DNNs) continue their reach into a wide range of application domains, the neural network architecture of DNN models becomes an increasingly sensitive subject, due both to intellectual property protection and to the risk of adversarial attacks. Observing the large gap between architecture exploration and model-integrity studies, this paper first presents a formulated schema of model leakage risks. Then, we propose DeepSniffer, a learning-based model extraction framework that obtains complete model architecture information without any prior knowledge of the victim model. It is robust to architectural and system noise introduced by the complex memory hierarchy and diverse run-time system optimizations. Taking GPU platforms as a showcase, DeepSniffer performs model extraction by learning both the architecture-level execution features of kernels and the inter-layer temporal association information introduced by common DNN design practice. We demonstrate that DeepSniffer works experimentally on an off-the-shelf Nvidia GPU platform running a variety of DNN models. The extracted models directly assist in crafting adversarial inputs. The DeepSniffer project has been released at https://github.com/xinghu7788/DeepSniffer.
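The learning step might look roughly like the PyTorch sketch below; the feature set, the layer-type vocabulary size, and the bidirectional LSTM are assumptions, and the actual implementation lives in the linked repository.

```python
# Sketch: map a sequence of per-kernel runtime features observed on the
# GPU (e.g. latency, read volume, write volume, input/output ratio) to
# a per-kernel layer-type prediction.
import torch
import torch.nn as nn

class KernelToLayer(nn.Module):
    def __init__(self, n_features=4, n_layer_types=8, hidden=64):
        super().__init__()
        self.rnn = nn.LSTM(n_features, hidden, batch_first=True,
                           bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_layer_types)

    def forward(self, kernel_feats):        # (batch, n_kernels, n_features)
        out, _ = self.rnn(kernel_feats)
        return self.head(out)               # per-kernel layer-type logits
```

The bidirectional recurrence is one way to exploit the inter-layer temporal associations the abstract mentions: the identity of one kernel constrains its neighbors.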


2020 · Vol 2020 (10) · pp. 54-62
Author(s): Oleksii VASYLIEV

The problem of applying neural networks to calculate the ratings used in banking when deciding whether to grant loans to borrowers is considered. The task is to determine the borrower's rating function from a set of statistical data on the effectiveness of loans provided by the bank. When constructing a regression model to calculate the rating function, its general form must be known in advance; the task then reduces to calculating the parameters that enter the expression for the rating function. In contrast, when neural networks are used there is no need to specify the general form of the rating function. Instead, a particular neural network architecture is chosen and its parameters are calculated from the statistical data. Importantly, the same neural network architecture can be used to process different sets of statistical data. The disadvantages of using neural networks include the need to calculate a large number of parameters, and the lack of a universal algorithm for determining the optimal neural network architecture. As an example of using neural networks to determine a borrower's rating, a model system is considered in which the borrower's rating is given by a known non-analytical rating function. A neural network with two inner layers, containing three and two neurons respectively with a sigmoid activation function, is used for the modelling. It is shown that the neural network restores the borrower's rating function with quite acceptable accuracy.
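The model system described above is small enough to write out directly. A minimal NumPy sketch, with illustrative shapes (the weights would be fitted to the bank's statistical data):

```python
# Sketch: rating network with two inner layers of three and two sigmoid
# neurons, mapping a borrower feature vector x to a scalar rating.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def rating_net(x, params):
    W1, b1, W2, b2, W3, b3 = params   # (3,d),(3,),(2,3),(2,),(1,2),(1,)
    h1 = sigmoid(W1 @ x + b1)         # first inner layer: 3 neurons
    h2 = sigmoid(W2 @ h1 + b2)        # second inner layer: 2 neurons
    return (W3 @ h2 + b3)[0]          # scalar borrower rating
```

Counting parameters for d input features gives 3d + 3 + 6 + 2 + 2 + 1 weights and biases, which illustrates why even tiny networks need a fair amount of data to fit.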


This book explores the value for literary studies of relevance theory, an inferential approach to communication in which the expression and recognition of intentions plays a major role. Drawing on a wide range of examples from lyric poetry and the novel, nine of the ten chapters are written by literary specialists and use relevance theory both as an overall framework and as a resource for detailed analysis. The final chapter, written by the co-founder of relevance theory, reviews the issues addressed by the volume and explores their implications for cognitive theories of how communicative acts are interpreted in context. Originally designed to explain how people understand each other in everyday face-to-face exchanges, relevance theory—described in an early review by a literary scholar as ‘the makings of a radically new theory of communication, the first since Aristotle’s’—sheds light on the whole spectrum of human modes of communication, including literature in the broadest sense. Reading Beyond the Code is unique in using relevance theory as a prime resource for literary study, and is also the first to apply the model to a range of phenomena widely seen as supporting an ‘embodied’ conception of cognition and language where sensorimotor processes play a key role. This broadened perspective serves to enhance the value for literary studies of the central claim of relevance theory: that the ‘code model’ is fundamentally inadequate to account for human communication, and in particular for the modes of communication that are proper to literature.


2021 · Vol 11 (1)
Author(s): Tuan D. Pham

Automated analysis of physiological time series is utilized for many clinical applications in medicine and the life sciences. Long short-term memory (LSTM) is a deep recurrent neural network architecture used for the classification of time-series data. Here, time–frequency and time–space properties of time series are introduced as a robust tool for LSTM processing of long sequential data in physiology. Based on classification results obtained from two databases of sensor-induced physiological signals, the proposed approach has the potential for (1) achieving very high classification accuracy, (2) saving tremendous time in data learning, and (3) being cost-effective and user-comfortable for clinical trials by reducing the number of wearable sensors needed for data recording.
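A minimal PyTorch sketch of feeding time-frequency features to an LSTM classifier; the feature dimension, number of classes, and hidden size are assumptions.

```python
# Sketch: classify a physiological recording from its spectrogram,
# treating each time step's frequency bins as the LSTM input vector.
import torch
import torch.nn as nn

class PhysioLSTM(nn.Module):
    def __init__(self, n_freq_bins=64, n_classes=2, hidden=128):
        super().__init__()
        self.lstm = nn.LSTM(n_freq_bins, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, spectrogram):      # (batch, time_steps, n_freq_bins)
        _, (h, _) = self.lstm(spectrogram)
        return self.fc(h[-1])            # classify from final hidden state
```

Pre-computing time-frequency features shortens the sequence the LSTM must traverse, which is one plausible source of the training-time savings reported.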


Entropy · 2021 · Vol 23 (1) · pp. 119
Author(s): Tao Wang, Changhua Lu, Yining Sun, Mei Yang, Chun Liu, ...

Early detection of arrhythmia and effective treatment can prevent deaths caused by cardiovascular disease (CVD). In clinical practice, the diagnosis is made by checking the electrocardiogram (ECG) beat by beat, which is usually time-consuming and laborious. In this paper, we propose an automatic ECG classification method based on the Continuous Wavelet Transform (CWT) and a Convolutional Neural Network (CNN). CWT is used to decompose ECG signals into different time-frequency components, and the CNN extracts features from the 2D scalogram composed of those components. Since the interval between surrounding R peaks (the RR interval) is also useful for diagnosing arrhythmia, four RR-interval features are extracted and combined with the CNN features as input to a fully connected layer for ECG classification. Tested on the MIT-BIH arrhythmia database, our method achieves an overall performance of 70.75%, 67.47%, 68.76%, and 98.74% for positive predictive value, sensitivity, F1-score, and accuracy, respectively. Compared with existing methods, the overall F1-score of our method is increased by 4.75–16.85%. Because our method is simple and highly accurate, it can potentially be used as a clinical auxiliary diagnostic tool.
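The fusion step can be sketched as follows; the CNN depth, feature widths, and five-class output are assumptions (the abstract does not give the architecture).

```python
# Sketch: concatenate CNN features from the CWT scalogram with four
# RR-interval features before the final fully connected classifier.
import torch
import torch.nn as nn

class EcgCwtCnn(nn.Module):
    def __init__(self, n_classes=5):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.fc = nn.Linear(32 + 4, n_classes)  # CNN features + 4 RR features

    def forward(self, scalogram, rr_feats):     # (B,1,H,W) and (B,4)
        return self.fc(torch.cat([self.cnn(scalogram), rr_feats], dim=1))
```

The scalogram itself can be produced beforehand with a CWT library such as pywt.cwt, one image per heartbeat segment.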


Micromachines · 2021 · Vol 12 (3) · pp. 284
Author(s): Yihsiang Chiu, Chen Wang, Dan Gong, Nan Li, Shenglin Ma, ...

This paper presents a high-accuracy complementary metal oxide semiconductor (CMOS) driven ultrasonic ranging system based on air-coupled aluminum nitride (AlN) piezoelectric micromachined ultrasonic transducers (PMUTs) using time of flight (TOF). The mode shape and the time-frequency characteristics of the PMUTs are simulated and analyzed. Two PMUTs, with frequencies of 97 kHz and 96 kHz, are applied: one transmits and the other receives the ultrasonic waves. A time-to-digital converter (TDC) circuit, which correlates the clock frequency with the sound velocity, is used for range finding via the TOF calculated from system clock cycles. An application-specific integrated circuit (ASIC) chip is designed and fabricated in a 0.18 μm CMOS process to acquire data from the PMUT. Compared to the state of the art, the developed ranging system features a wide range and high accuracy, measuring a range of 50 cm with an average error of 0.63 mm. The AlN-based PMUT is a promising candidate for an integrated portable ranging system.
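The TOF arithmetic reduces to a few lines; the clock frequency and the temperature model for the speed of sound below are assumptions for illustration.

```python
# Sketch: convert a TDC cycle count into a range estimate.
def tof_distance_m(clock_cycles, f_clock_hz=1e6, temp_c=20.0):
    c_sound = 331.3 + 0.606 * temp_c     # speed of sound in air, m/s
    tof_s = clock_cycles / f_clock_hz    # cycle count -> seconds
    return c_sound * tof_s / 2.0         # halve the round-trip path

# e.g. ~2913 cycles at 1 MHz and 20 °C corresponds to roughly 0.50 m
```

Tying the TDC clock to the sound velocity, as the abstract describes, folds this conversion into the counter itself rather than leaving it as a post-processing step.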


2021 · Vol 11 (1)
Author(s): Brett H. Hokr, Joel N. Bixler

Dynamic, in vivo measurement of the optical properties of biological tissues remains an elusive and critically important problem. Here we develop a technique for inverting a Monte Carlo simulation to extract tissue optical properties from the statistical moments of the spatio-temporal response of the tissue by training a 5-layer fully connected neural network. We demonstrate the accuracy of the method across a very wide parameter space on a single homogeneous-layer tissue model and show that the method is insensitive to the parameter selection of the neural network model itself. Finally, we propose an experimental setup capable of measuring the required information in real time in an in vivo environment and demonstrate proof-of-concept experimental results.
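A minimal sketch of such an inverse network; the number of input moments, hidden width, and the two output properties (e.g. absorption and reduced scattering coefficients) are assumptions.

```python
# Sketch: 5-layer fully connected network mapping statistical moments of
# the simulated spatio-temporal response to tissue optical properties.
import torch
import torch.nn as nn

def make_inverse_mc_net(n_moments=10, n_props=2, width=128):
    return nn.Sequential(
        nn.Linear(n_moments, width), nn.ReLU(),
        nn.Linear(width, width), nn.ReLU(),
        nn.Linear(width, width), nn.ReLU(),
        nn.Linear(width, width), nn.ReLU(),
        nn.Linear(width, n_props),
    )
```

Training pairs come from forward Monte Carlo runs: simulate tissues with known properties, compute the response moments, and regress back to the properties.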

