SpeeChin

Author(s):  
Ruidong Zhang ◽  
Mingyang Chen ◽  
Benjamin Steeper ◽  
Yaxuan Li ◽  
Zihan Yan ◽  
...  

This paper presents SpeeChin, a smart necklace that can recognize 54 English and 44 Chinese silent speech commands. A customized infrared (IR) imaging system mounted on the necklace captures images of the neck and face from under the chin. These images are first pre-processed and then fed to an end-to-end deep convolutional recurrent neural network (CRNN) model to infer the different silent speech commands. A user study with 20 participants (10 per language) showed that SpeeChin could recognize the 54 English and 44 Chinese silent speech commands with average cross-session accuracies of 90.5% and 91.6%, respectively. To further investigate the potential of SpeeChin in recognizing other silent speech commands, we conducted another study in which 10 participants distinguished between 72 one-syllable nonwords. Based on the results of these user studies, we discuss the challenges and opportunities of deploying SpeeChin in real-world applications.
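
A minimal sketch of such a convolutional-recurrent pipeline (PyTorch; the layer sizes, frame count, and the SilentSpeechCRNN name are illustrative assumptions, not the authors' implementation):

```python
import torch
import torch.nn as nn

class SilentSpeechCRNN(nn.Module):
    """Toy CRNN: a small CNN encodes each IR frame, a GRU models the frame
    sequence, and a linear head scores the silent-speech command classes."""
    def __init__(self, num_classes=54, hidden=128):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                      # -> (B*T, 32, 1, 1)
        )
        self.rnn = nn.GRU(32, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, frames):                            # frames: (B, T, 1, H, W)
        b, t = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1)).flatten(1) # per-frame features (B*T, 32)
        _, h = self.rnn(feats.view(b, t, -1))             # last hidden state (1, B, hidden)
        return self.head(h[-1])                           # command logits (B, num_classes)

logits = SilentSpeechCRNN()(torch.randn(2, 20, 1, 64, 64))  # 2 clips of 20 IR frames
```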

Author(s):  
Tuochao Chen ◽  
Yaxuan Li ◽  
Songyun Tao ◽  
Hyunchul Lim ◽  
Mose Sakashita ◽  
...  

Facial expressions are highly informative for computers in understanding and interpreting a person's mental and physical activities. However, continuously tracking facial expressions, especially when the user is in motion, is challenging. This paper presents NeckFace, a wearable sensing technology that can continuously track full facial expressions using a neck-piece embedded with infrared (IR) cameras. A customized deep learning pipeline called NeckNet, based on ResNet34, is developed to process the captured IR images of the chin and face and output 52 parameters representing the facial expressions. We demonstrated NeckFace on two common neck-mounted form factors, a necklace and a neckband (e.g., neck-mounted headphones), and evaluated it in a user study with 13 participants. The study results showed that NeckFace worked well when the participants were sitting, walking, or after remounting the device. We discuss the challenges and opportunities of using NeckFace in real-world applications.
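
A minimal sketch of this kind of pipeline (PyTorch/torchvision; the 3-channel input, 224x224 resolution, and the plain MSE regression target are assumptions for illustration, not details taken from the paper):

```python
import torch
import torch.nn as nn
from torchvision.models import resnet34

# ResNet-34 backbone with its classifier replaced by a 52-dim regression head,
# one value per facial-expression parameter.
backbone = resnet34(weights=None)
backbone.fc = nn.Linear(backbone.fc.in_features, 52)

ir_image = torch.randn(1, 3, 224, 224)        # a pre-processed chin/face IR frame
expression_params = backbone(ir_image)        # (1, 52) predicted expression parameters
target = torch.zeros(1, 52)                   # placeholder ground-truth parameters
loss = nn.functional.mse_loss(expression_params, target)
```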


2021 ◽  
Vol 21 (S2) ◽  
Author(s):  
Mengnan Ma ◽  
Yinlin Cheng ◽  
Xiaoyan Wei ◽  
Ziyi Chen ◽  
Yi Zhou

Background: Epilepsy is a disease of the nervous system that affects a large population worldwide. Traditional diagnosis relies mostly on professional neurologists reading the electroencephalogram (EEG), which is time-consuming, inefficient, and subjective. In recent years, automatic diagnosis of epilepsy from EEG by deep learning has attracted growing attention, but the potential of deep neural networks for seizure detection has not been fully developed. Methods: In this article, we replace the traditional convolutional neural network (CNN) in a residual network architecture with a one-dimensional convolutional neural network (1-D CNN). Moreover, we combine the independently recurrent neural network (IndRNN) with this CNN to form a new residual convolutional recurrent neural network (RCNN). Our model achieves automatic diagnosis of epilepsy from EEG. First, the important features of the EEG are learned by the residual 1-D CNN architecture; then the relationships between sequences are learned by the recurrent neural network; finally, the model outputs the classification results. Results: On the small-sample datasets from Bonn University, our method was superior to the baseline methods, achieving 100% classification accuracy and 100% classification specificity. On noisy real-world data, our method also exhibited strong performance. Conclusion: The proposed model can quickly and accurately identify the different periods of EEG under both ideal and real-world conditions, and can provide automatic detection capabilities for clinical epilepsy EEG screening. We hope it contributes positively to the prediction of epileptic seizures from EEG.
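
A minimal sketch of the general architecture, assuming a small 1-D residual block feeding an IndRNN-style cell (h_t = relu(W x_t + u * h_{t-1})); channel counts, kernel sizes, and class names are illustrative, not the paper's configuration:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResBlock1d(nn.Module):
    """Residual block with 1-D convolutions over the raw EEG signal."""
    def __init__(self, ch):
        super().__init__()
        self.conv1 = nn.Conv1d(ch, ch, 7, padding=3)
        self.conv2 = nn.Conv1d(ch, ch, 7, padding=3)
    def forward(self, x):
        return F.relu(x + self.conv2(F.relu(self.conv1(x))))

class IndRNNCell(nn.Module):
    """Independently recurrent cell: each hidden unit has its own scalar
    recurrent weight instead of a full hidden-to-hidden matrix."""
    def __init__(self, in_dim, hidden):
        super().__init__()
        self.w = nn.Linear(in_dim, hidden)
        self.u = nn.Parameter(torch.rand(hidden))
    def forward(self, x_seq):                        # (B, T, in_dim)
        h = torch.zeros(x_seq.size(0), self.u.numel(), device=x_seq.device)
        for t in range(x_seq.size(1)):
            h = F.relu(self.w(x_seq[:, t]) + self.u * h)
        return h                                     # last hidden state (B, hidden)

class EEGSeizureNet(nn.Module):
    def __init__(self, ch=16, hidden=64, n_classes=2):
        super().__init__()
        self.stem = nn.Conv1d(1, ch, 7, padding=3)
        self.res = ResBlock1d(ch)
        self.rnn = IndRNNCell(ch, hidden)
        self.head = nn.Linear(hidden, n_classes)
    def forward(self, eeg):                          # (B, 1, T) single-channel EEG
        feats = self.res(F.relu(self.stem(eeg)))     # (B, ch, T) local features
        return self.head(self.rnn(feats.transpose(1, 2)))

logits = EEGSeizureNet()(torch.randn(4, 1, 512))     # 4 EEG segments of 512 samples
```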


2021 ◽  
Author(s):  
Laila Rasmy ◽  
Masayuki Nigo ◽  
Bijun Sai Kannadath ◽  
Ziqian Xie ◽  
Bingyu Mao ◽  
...  

Background: Predicting outcomes of COVID-19 patients at an early stage is critical for optimized clinical care and resource management, especially during a pandemic. Although multiple machine learning models have been proposed to address this issue, their need for extensive data pre-processing and feature engineering means they have not been validated or implemented outside of the original study site. Methods: In this study, we propose CovRNN, a set of recurrent neural network (RNN)-based models that predict COVID-19 patients' outcomes from the electronic health record (EHR) data available on admission, without the need for specific feature selection or missing-data imputation. CovRNN is designed to predict three outcomes: in-hospital mortality, need for mechanical ventilation, and long length of stay (LOS > 7 days). Predictions are made as time-to-event risk scores (survival prediction) and all-time risk scores (binary prediction). Our models were trained and validated on heterogeneous, de-identified data from 247,960 COVID-19 patients across 87 healthcare systems, derived from the Cerner® Real-World Dataset (CRWD). External validation was performed on three test sets (approximately 53,000 patients). Further, the transferability of CovRNN was validated on data from 36,140 de-identified patients derived from the Optum® de-identified COVID-19 Electronic Health Record v. 1015 dataset (2007–2020). Findings: CovRNN shows higher performance than traditional models. It achieved an area under the receiver operating characteristic curve (AUROC) of 93% for mortality and mechanical ventilation predictions on the CRWD test set (vs. 91.5% and 90% for the light gradient boosting machine (LGBM) and logistic regression (LR), respectively) and 86.5% for prediction of LOS > 7 days (vs. 81.7% and 80% for LGBM and LR, respectively). For survival prediction, CovRNN achieved a C-index of 86% for mortality and 92.6% for mechanical ventilation. External validation confirmed AUROCs in similar ranges. Interpretation: Trained on a large heterogeneous real-world dataset, our CovRNN model showed high prediction accuracy, good calibration, and transferability through consistently good performance on multiple external datasets. Our results demonstrate the feasibility of a COVID-19 predictive model that delivers high accuracy without the need for complex feature engineering.
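
A minimal sketch of an RNN over embedded EHR codes with multi-outcome heads (PyTorch; the flat code sequence, vocabulary size, and sigmoid binary heads are simplifying assumptions here, whereas the real CovRNN also handles visit structure and time-to-event outputs):

```python
import torch
import torch.nn as nn

class CovRNNSketch(nn.Module):
    """Toy RNN over sequences of embedded EHR codes (diagnoses, meds, labs)
    with one sigmoid head per outcome: mortality, ventilation, long LOS."""
    def __init__(self, vocab_size=10000, emb=128, hidden=256, n_outcomes=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb, padding_idx=0)
        self.rnn = nn.GRU(emb, hidden, batch_first=True)
        self.heads = nn.Linear(hidden, n_outcomes)

    def forward(self, codes):                     # codes: (B, T) integer EHR code ids
        _, h = self.rnn(self.embed(codes))        # final hidden state (1, B, hidden)
        return torch.sigmoid(self.heads(h[-1]))   # (B, 3) all-time risk scores

risks = CovRNNSketch()(torch.randint(1, 10000, (2, 50)))  # 2 patients, 50 codes each
```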


2010 ◽  
Vol 2010 ◽  
pp. 1-9 ◽  
Author(s):  
Kenta Goto ◽  
Katsunari Shibata

To develop a robot that behaves flexibly in the real world, it is essential that it learn the various necessary functions autonomously, without receiving significant information from a human in advance. Among such functions, this paper focuses on learning "prediction", which has recently attracted attention from the viewpoint of autonomous learning. The authors point out that it is important to acquire through learning not only how to predict future information, but also how to purposively extract the prediction target from sensor signals. It is suggested that, through reinforcement learning using a recurrent neural network, both emerge purposively and simultaneously, without testing individually whether or not each piece of information is predictable. In a task where an agent receives a reward when it catches a moving object that may become invisible, the agent learned to detect the necessary information about the object's velocity before it disappeared, to relay that information among some hidden neurons, and finally to catch the object at an appropriate position and time, accounting for bounces off a wall after the object became invisible.
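
A minimal sketch of the structural idea, namely a recurrent agent whose hidden state must carry the prediction target once the observation disappears; the dimensions and the greedy policy are assumptions, and the reinforcement-learning update itself is omitted:

```python
import torch
import torch.nn as nn

class RecurrentAgent(nn.Module):
    """Toy recurrent agent: the GRU hidden state has to retain information
    about the moving object so that actions remain sensible after the
    observation is blanked out (object becomes invisible)."""
    def __init__(self, obs_dim=4, hidden=32, n_actions=3):
        super().__init__()
        self.rnn = nn.GRUCell(obs_dim, hidden)
        self.policy = nn.Linear(hidden, n_actions)

    def act(self, obs, h):
        h = self.rnn(obs, h)                      # update memory from current sensor input
        return self.policy(h).argmax(-1), h       # greedy action and new hidden state

agent = RecurrentAgent()
h = torch.zeros(1, 32)
for step in range(10):
    visible = step < 5                            # object disappears after step 5
    obs = torch.randn(1, 4) if visible else torch.zeros(1, 4)
    action, h = agent.act(obs, h)                 # hidden state bridges the invisible phase
```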


2019 ◽  
Vol 9 (15) ◽  
pp. 3041 ◽  
Author(s):  
Qianting Li ◽  
Yong Xu

Multivariate time series are often accompanied by missing values, especially clinical time series, which usually contain more than 80% missing data, and the missing rates of different variables vary widely. However, few studies address these differences in missing rates and extract univariate missing patterns separately before mixing them in the model training procedure. In this paper, we propose a novel recurrent neural network called variable-sensitive GRU (VS-GRU), which uses each variable's missing rate as an additional input and learns the features of different variables separately, reducing the harmful impact of variables with high missing rates. Experiments show that VS-GRU outperforms the state-of-the-art method on two real-world clinical datasets (MIMIC-III, PhysioNet).
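
A minimal sketch of the underlying idea, assuming the recurrent layer simply receives zero-filled values, a per-step observation mask, and each variable's overall missing rate as extra inputs; the real VS-GRU modifies the GRU gates per variable, which is not reproduced here:

```python
import torch
import torch.nn as nn

class MissingRateGRU(nn.Module):
    """Toy variant of the VS-GRU idea: the GRU sees the zero-filled values,
    a per-step observation mask, and each variable's overall missing rate,
    so it can learn to down-weight variables that are rarely observed."""
    def __init__(self, n_vars=10, hidden=64, n_classes=2):
        super().__init__()
        self.rnn = nn.GRU(3 * n_vars, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, values, mask):                          # both (B, T, n_vars)
        rate = 1.0 - mask.mean(dim=1, keepdim=True)           # per-variable missing rate (B, 1, n_vars)
        rate = rate.expand_as(values)                         # broadcast to every time step
        x = torch.cat([values * mask, mask, rate], dim=-1)    # (B, T, 3 * n_vars)
        _, h = self.rnn(x)
        return self.head(h[-1])                               # classification logits

vals = torch.randn(8, 48, 10)                                 # 8 stays, 48 hours, 10 variables
mask = (torch.rand(8, 48, 10) > 0.8).float()                  # 1 = observed; roughly 80% missing
logits = MissingRateGRU()(vals, mask)
```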


2020 ◽  
Vol 34 (01) ◽  
pp. 83-90
Author(s):  
Qing Guo ◽  
Zhu Sun ◽  
Jie Zhang ◽  
Yin-Leng Theng

Most existing studies on next location recommendation propose to model the sequential regularity of check-in sequences, but suffer from the severe data sparsity issue where most locations have fewer than five following locations. To this end, we propose an Attentional Recurrent Neural Network (ARNN) to jointly model both the sequential regularity and transition regularities of similar locations (neighbors). In particular, we first design a meta-path based random walk over a novel knowledge graph to discover location neighbors based on heterogeneous factors. A recurrent neural network is then adopted to model the sequential regularity by capturing various contexts that govern user mobility. Meanwhile, the transition regularities of the discovered neighbors are integrated via the attention mechanism, which seamlessly cooperates with the sequential regularity as a unified recurrent framework. Experimental results on multiple real-world datasets demonstrate that ARNN outperforms state-of-the-art methods.
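
A minimal sketch of the overall structure, assuming neighbor locations have already been discovered (e.g., by meta-path random walks over the knowledge graph) and are passed in as indices; the embedding sizes and the single attention layer are illustrative, not the paper's configuration:

```python
import torch
import torch.nn as nn

class NextLocationARNN(nn.Module):
    """Toy version of the ARNN idea: a GRU encodes the check-in sequence, and
    attention over the embeddings of pre-computed neighbor locations augments
    the final state before scoring candidate next locations."""
    def __init__(self, n_locs=1000, emb=64, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(n_locs, emb)
        self.rnn = nn.GRU(emb, hidden, batch_first=True)
        self.attn = nn.Linear(hidden + emb, 1)
        self.out = nn.Linear(hidden + emb, n_locs)

    def forward(self, checkins, neighbors):                   # (B, T), (B, K)
        _, h = self.rnn(self.embed(checkins))                 # sequence summary (1, B, hidden)
        h = h[-1]                                             # (B, hidden)
        nb = self.embed(neighbors)                            # neighbor embeddings (B, K, emb)
        scores = self.attn(torch.cat([h.unsqueeze(1).expand(-1, nb.size(1), -1), nb], -1))
        ctx = (torch.softmax(scores, dim=1) * nb).sum(1)      # attention-weighted neighbor context
        return self.out(torch.cat([h, ctx], -1))              # scores over candidate next locations

logits = NextLocationARNN()(torch.randint(0, 1000, (4, 15)), torch.randint(0, 1000, (4, 8)))
```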

