Sliced Wasserstein Distance for Neural Style Transfer

Author(s):  
Jie Li ◽  
Dan Xu ◽  
Shaowen Yao
2019 ◽  
Author(s):  
Utsav Krishnan ◽  
Akshal Sharma ◽  
Pratik Chattopadhyay

SLEEP ◽  
2021 ◽  
Vol 44 (Supplement_2) ◽  
pp. A111-A112
Author(s):  
Austin Vandegriffe ◽  
V A Samaranayake ◽  
Matthew Thimgan

Abstract

Introduction: Technological innovations have broadened the type and amount of activity data that can be captured in the home under normal living conditions. Yet converting naturalistic activity patterns into sleep and wakefulness states has remained a challenge. Despite the successes of current algorithms, they do not meet all actigraphy needs. We have developed a novel statistical approach to determining sleep and wakefulness times, the Wasserstein Algorithm for Classifying Sleep and Wakefulness (WACSAW), and validated it in a small cohort of healthy participants.

Methods: WACSAW comprises five functional routines: 1) conversion of the triaxial movement data into a univariate time series; 2) construction of a Wasserstein weighted sum (WSS) time series by measuring the Wasserstein distance between equidistant distributions of movement data before and after the time point of interest (sketched in the code below); 3) segmentation of the time series by identifying changepoints based on the behavior of the WSS series; 4) merging of segments deemed similar by the Levene test; 5) comparison of segments by optimal transport methodology to determine their difference from a flat, invariant distribution at zero. The resulting histogram can be used to determine sleep and wakefulness parameters around a threshold set for each individual based on histogram properties. To validate the algorithm, participants wore the GENEActiv and a commercial-grade actigraphy watch for 48 hours. The accuracy of WACSAW was compared to a detailed activity log and benchmarked against the output of the commercial wrist actigraph.

Results: WACSAW performed with an average accuracy, sensitivity, and specificity of >95% compared to detailed activity logs in 10 healthy-sleeping individuals of mixed sexes and ages. We then compared WACSAW's performance against a common wrist-worn, commercial sleep monitor. WACSAW outperformed the commercial-grade system for each participant relative to the activity logs, and the variability between subjects was cut substantially.

Conclusion: WACSAW demonstrates good results in a small test cohort. In addition, WACSAW is 1) open-source, 2) individually adaptive, 3) indicative of individual reliability, 4) based on the activity data stream, and 5) requires little human intervention. WACSAW warrants validation against polysomnography and in patients with sleep disorders to determine its overall effectiveness.
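As a concrete illustration of step 2, the following is a minimal Python sketch of computing a WSS-style series with SciPy. The function name, the window parameter w, and the unweighted form are assumptions for illustration, not WACSAW's actual implementation.

# Minimal sketch of a WSS-style series (step 2), assuming `movement` is the
# univariate activity series from step 1 and `w` is an assumed window width.
# Names and the unweighted form are illustrative, not WACSAW's actual code.
import numpy as np
from scipy.stats import wasserstein_distance

def wss_series(movement, w):
    """Wasserstein distance between the empirical distributions of the
    w samples before and the w samples after each time point."""
    n = len(movement)
    wss = np.full(n, np.nan)  # undefined near the boundaries
    for t in range(w, n - w):
        before = movement[t - w:t]
        after = movement[t:t + w]
        wss[t] = wasserstein_distance(before, after)
    return wss

# Example: a spike in the series marks a candidate changepoint (step 3).
activity = np.concatenate([np.random.exponential(0.1, 500),   # quiescent
                           np.random.exponential(2.0, 500)])  # active
print(np.nanargmax(wss_series(activity, w=60)))  # near the transition at 500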


Author(s):  
Xide Xia ◽  
Tianfan Xue ◽  
Wei-Sheng Lai ◽  
Zheng Sun ◽  
Abby Chang ◽  
...  

Author(s):  
Yingying Deng ◽  
Fan Tang ◽  
Weiming Dong ◽  
Wen Sun ◽  
Feiyue Huang ◽  
...  

2021 ◽  
pp. 1-12
Author(s):  
Mukul Kumar ◽  
Nipun Katyal ◽  
Nersisson Ruban ◽  
Elena Lyakso ◽  
A. Mary Mekala ◽  
...  

Over the years, differentiating emotions in oral communication has played an important role in emotion-based studies, and various algorithms have been proposed to classify kinds of emotion. There is, however, no measure of the fidelity of the emotion under consideration, primarily because most readily available annotated datasets are produced by actors rather than recorded in real-world scenarios. The predicted emotion therefore lacks an important aspect called authenticity: whether the emotion is actual or stimulated. In this research work, we have developed a transfer-learning and style-transfer based hybrid convolutional neural network algorithm to classify both the emotion and the fidelity of the emotion. The model is trained on features extracted from a dataset that contains stimulated as well as actual utterances. We compared the developed algorithm with conventional machine learning and deep learning techniques using metrics such as accuracy, precision, recall, and F1 score; the developed model performs much better than the conventional models. The research aims to dive deeper into human emotion and build a model that understands it as humans do, achieving precision, recall, and F1 scores of 0.994, 0.996, and 0.995 for speech authenticity and 0.992, 0.989, and 0.99 for speech emotion classification, respectively.
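For illustration, here is a minimal PyTorch sketch of a shared-backbone, two-head classifier of the kind the abstract describes (one head for emotion, one for authenticity). The backbone choice, input format, class counts, and all names are assumptions for illustration, not the authors' architecture.

# Minimal sketch of a transfer-learning model with two output heads, one for
# emotion class and one for authenticity (actual vs. stimulated speech).
# Backbone, input format, and class counts are assumptions for illustration.
import torch
import torch.nn as nn
from torchvision import models

class EmotionAuthenticityNet(nn.Module):
    def __init__(self, num_emotions=7):  # emotion count assumed
        super().__init__()
        # ImageNet-pretrained backbone, reused as a fixed feature extractor.
        self.backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        feat_dim = self.backbone.fc.in_features
        self.backbone.fc = nn.Identity()
        for p in self.backbone.parameters():
            p.requires_grad = False  # transfer learning: train only the heads
        self.emotion_head = nn.Linear(feat_dim, num_emotions)
        self.authenticity_head = nn.Linear(feat_dim, 2)  # actual vs. stimulated

    def forward(self, x):
        feats = self.backbone(x)
        return self.emotion_head(feats), self.authenticity_head(feats)

# Example: one log-mel spectrogram rendered as a 3-channel 224x224 "image".
model = EmotionAuthenticityNet()
emotion_logits, authenticity_logits = model(torch.randn(1, 3, 224, 224))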

