Prediction of Human Full-Body Movements with Motion Optimization and Recurrent Neural Networks

Author(s):  
Philipp Kratzer ◽  
Marc Toussaint ◽  
Jim Mainprice

SLEEP ◽  
2020 ◽  
Vol 43 (9) ◽  
Author(s):  
Pedro Fonseca ◽  
Merel M van Gilst ◽  
Mustafa Radha ◽  
Marco Ross ◽  
Arnaud Moreau ◽  
...  

Abstract
Study Objectives: To validate a previously developed sleep staging algorithm using heart rate variability (HRV) and body movements in an independent broad cohort of unselected sleep disordered patients.
Methods: We applied a previously designed algorithm for automatic sleep staging using long short-term memory recurrent neural networks to model sleep architecture. The classifier uses 132 HRV features computed from electrocardiography and activity counts from accelerometry. We retrained our algorithm using two public datasets containing both healthy sleepers and sleep disordered patients. We then tested the performance of the algorithm on an independent hold-out validation set of sleep recordings covering a wide range of sleep disorders, collected in a tertiary sleep medicine center.
Results: The classifier achieved substantial agreement on four-class sleep staging (wake/N1–N2/N3/rapid eye movement [REM]), with an average κ of 0.60 and an accuracy of 75.9%. The performance of the sleep staging algorithm was significantly higher in insomnia patients (κ = 0.62, accuracy = 77.3%); only in REM parasomnias was it significantly lower (κ = 0.47, accuracy = 70.5%). For two-class wake/sleep classification, the classifier achieved a κ of 0.65, with a sensitivity (to wake) of 72.9% and a specificity of 94.0%.
Conclusions: This study shows that the combination of HRV, body movements, and a state-of-the-art deep neural network can reach substantial agreement in automatic sleep staging compared with polysomnography, even in patients suffering from a multitude of sleep disorders. The physiological signals required can be obtained in various ways, including non-obtrusive wrist-worn sensors, opening up new avenues for clinical diagnostics.
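
For readers less familiar with this class of model, the following is a minimal sketch, in PyTorch, of an LSTM sequence classifier over per-epoch feature vectors of the kind the abstract describes (132 HRV features plus an activity count per 30-second epoch, four output stages). The layer sizes, bidirectionality, and exact input layout are illustrative assumptions, not the authors' configuration.

```python
# Minimal sketch of an LSTM sleep stager over per-epoch feature vectors.
# Assumptions (not from the paper): layer sizes, bidirectionality, and the
# exact input layout (132 HRV features + 1 activity count per 30-s epoch).
import torch
import torch.nn as nn

N_HRV_FEATURES = 132      # HRV features per epoch (from the abstract)
N_ACTIVITY_FEATURES = 1   # accelerometry activity count per epoch (assumed layout)
N_CLASSES = 4             # wake / N1-N2 / N3 / REM


class SleepStager(nn.Module):
    def __init__(self, hidden_size: int = 64, num_layers: int = 2):
        super().__init__()
        self.lstm = nn.LSTM(
            input_size=N_HRV_FEATURES + N_ACTIVITY_FEATURES,
            hidden_size=hidden_size,
            num_layers=num_layers,
            batch_first=True,
            bidirectional=True,
        )
        self.head = nn.Linear(2 * hidden_size, N_CLASSES)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_epochs, 133) -> per-epoch class logits (batch, n_epochs, 4)
        out, _ = self.lstm(x)
        return self.head(out)


if __name__ == "__main__":
    model = SleepStager()
    night = torch.randn(1, 960, N_HRV_FEATURES + N_ACTIVITY_FEATURES)  # ~8 h of 30-s epochs
    logits = model(night)
    stages = logits.argmax(dim=-1)  # predicted stage per epoch
    print(stages.shape)  # torch.Size([1, 960])
```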


2020 ◽  
Author(s):  
Dean Sumner ◽  
Jiazhen He ◽  
Amol Thakkar ◽  
Ola Engkvist ◽  
Esben Jannik Bjerrum

SMILES randomization, a form of data augmentation, has previously been shown to increase the performance of deep learning models compared to non-augmented baselines. Here, we propose a novel data augmentation method, "Levenshtein augmentation," which considers local SMILES sub-sequence similarity between reactants and their respective products when creating training pairs. The performance of Levenshtein augmentation was tested using two state-of-the-art models: a transformer and a sequence-to-sequence recurrent neural network with attention. Levenshtein augmentation demonstrated increased performance over both non-augmented data and conventional SMILES-randomization-augmented data when used for training the baseline models. Furthermore, Levenshtein augmentation seemingly results in what we define as "attentional gain": an enhancement of the underlying network's ability to recognize molecular motifs.
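
To make the pairing idea concrete, below is a short illustrative sketch in Python of one plausible way to build such training pairs: given pre-generated randomized SMILES enumerations of a reactant set and its product, select the pair with the smallest Levenshtein (edit) distance so that the two strings share long common sub-sequences. The helper names, the example SMILES, and the exact selection criterion are assumptions for illustration, not the authors' published procedure.

```python
# Illustrative sketch of a Levenshtein-based pairing step. Assumes randomized
# SMILES enumerations for reactants and product are already available as lists
# of strings (e.g. produced by a randomized SMILES writer). This is one
# plausible reading of the abstract, not the authors' exact procedure.
from itertools import product as cartesian_product


def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(
                prev[j] + 1,               # deletion
                curr[j - 1] + 1,           # insertion
                prev[j - 1] + (ca != cb),  # substitution or match
            ))
        prev = curr
    return prev[-1]


def levenshtein_pair(reactant_smiles: list[str], product_smiles: list[str]) -> tuple[str, str]:
    """Pick the reactant/product SMILES pair with minimal edit distance,
    so the resulting training pair shares long common sub-sequences."""
    return min(
        cartesian_product(reactant_smiles, product_smiles),
        key=lambda pair: levenshtein(*pair),
    )


if __name__ == "__main__":
    # Hypothetical randomized SMILES enumerations of the same reaction.
    reactants = ["CCO.CC(=O)O", "OCC.OC(C)=O"]
    products = ["CCOC(C)=O", "O=C(C)OCC"]
    print(levenshtein_pair(reactants, products))
```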


Author(s):  
Faisal Ladhak ◽  
Ankur Gandhe ◽  
Markus Dreyer ◽  
Lambert Mathias ◽  
Ariya Rastrow ◽  
...  
