Unsupervised Feature Learning
Recently Published Documents


TOTAL DOCUMENTS

217
(FIVE YEARS 72)

H-INDEX

27
(FIVE YEARS 5)

2021 ◽  
Vol 16 (6) ◽  
Author(s):  
Yongzhong Li ◽  
Jiawei Xi ◽  
Casey Ka Wun Leung ◽  
Tan Li ◽  
Wing Yim Tam ◽  
...  

PLoS ONE ◽  
2021 ◽  
Vol 16 (12) ◽  
pp. e0260612
Author(s):  
Jong-Hwan Jang ◽  
Tae Young Kim ◽  
Hong-Seok Lim ◽  
Dukyong Yoon

Most existing electrocardiogram (ECG) feature extraction methods rely on rule-based approaches, and it is difficult to manually define all ECG features. We propose an unsupervised feature learning method using a convolutional variational autoencoder (CVAE) that can extract ECG features from unlabeled data. We used 596,000 ECG samples from 1,278 patients, archived in biosignal databases from intensive care units, to train the CVAE. Three external datasets were used for feature validation with two approaches. First, we explored the features without any additional training: clustering, latent-space exploration, and anomaly detection were conducted, and we confirmed that the CVAE features reflected the various types of ECG rhythms. Second, for transfer learning, we applied the CVAE features as input data to new tasks and used the CVAE weights to initialize different models for the classification of 12 types of arrhythmias. Using CVAE features alone, the F1-score for arrhythmia classification with extreme gradient boosting was 0.86. The F1-score of the model whose weights were initialized from the CVAE encoder was 5% better than that obtained with random initialization. Unsupervised feature learning with a CVAE can extract the characteristics of various types of ECGs and can serve as an alternative to rule-based feature extraction for ECGs.
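The abstract does not spell out the CVAE architecture, but the core of any VAE-based feature extractor is the reparameterization step that turns the encoder's predicted mean and log-variance into a sampled latent vector. A minimal NumPy sketch of that step (all shapes, values, and names here are illustrative assumptions, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var, rng):
    """Sample z = mu + sigma * eps with eps ~ N(0, I) (the reparameterization trick)."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

# Pretend the CVAE encoder mapped a batch of 4 ECG windows
# to 8-dimensional latent statistics (hypothetical shapes).
mu = rng.standard_normal((4, 8))
log_var = np.full((4, 8), -2.0)  # small variance: samples stay near the mean

z = reparameterize(mu, log_var, rng)
print(z.shape)  # (4, 8)
```

At inference time the encoder mean `mu` is commonly used directly as the feature vector (e.g., as input to clustering or a gradient-boosting classifier), since it is deterministic.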


Author(s):  
Nayeeb Rashid ◽  
Md Adnan Faisal Hossain ◽  
Mohammad Ali ◽  
Mumtahina Islam Sukanya ◽  
Tanvir Mahmud ◽  
...  

2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Qing Ye ◽  
Changhua Liu

The traditional diagnostic framework consists of three parts: data acquisition, feature generation, and fault classification. However, manual feature extraction relies on signal processing technologies that depend heavily on subjectivity and prior knowledge, which affects both effectiveness and efficiency. To tackle these problems, an unsupervised deep feature learning model based on a parallel convolutional autoencoder (PCAE) is proposed and applied in the feature-generation stage of the diagnostic framework. Firstly, raw vibration signals are normalized and segmented into a sample set with a sliding window. Secondly, deep features are extracted from the reshaped raw sample set and from the time-frequency spectrogram, respectively, by two parallel unsupervised feature learning branches based on convolutional autoencoders (CAE). During training, dropout regularization and batch normalization are used to prevent overfitting. Finally, the extracted representative features are fed into a classification model based on a deep neural network (DNN) with softmax. The effectiveness of the proposed approach is evaluated on fault diagnosis of an automobile main reducer. The contrastive analysis demonstrates that a diagnostic framework based on parallel unsupervised feature learning and a deep classification structure can effectively enhance robustness and improve the identification accuracy of operating conditions by nearly 8%.
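The first preprocessing step described above (normalization plus sliding-window segmentation of the raw vibration signal) can be sketched as follows; the window length and step size are illustrative assumptions, not values from the paper:

```python
import numpy as np

def segment_signal(signal, window, step):
    """Split a 1-D signal into overlapping windows via a sliding window."""
    n = (len(signal) - window) // step + 1
    return np.stack([signal[i * step : i * step + window] for i in range(n)])

def zscore(x):
    """Normalize each window to zero mean and unit variance."""
    return (x - x.mean(axis=1, keepdims=True)) / (x.std(axis=1, keepdims=True) + 1e-8)

# Stand-in for a raw vibration signal (the paper's data is not available here).
raw = np.sin(np.linspace(0, 20 * np.pi, 1000))
samples = zscore(segment_signal(raw, window=256, step=128))
print(samples.shape)  # (6, 256)
```

Each row of `samples` would then be reshaped (or transformed into a spectrogram) before being passed to the respective CAE branch.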


Author(s):  
Zonghua Liu ◽  
Thangavel Thevar ◽  
Tomoko Takahashi ◽  
Nicholas Burns ◽  
Takaki Yamada ◽  
...  

Electronics ◽  
2021 ◽  
Vol 10 (17) ◽  
pp. 2086
Author(s):  
Yangwei Ying ◽  
Yuanwu Tu ◽  
Hong Zhou

Speech signals contain abundant information on personal emotions, which plays an important part in the representation of human potential characteristics and expressions. However, the scarcity of emotional speech data hinders the development of speech emotion recognition (SER) and limits improvements in recognition accuracy. Currently, the most effective approach is to use unsupervised feature learning techniques to extract speech features from available speech data and to build emotion classifiers on these features. In this paper, we implemented autoencoders, specifically a denoising autoencoder (DAE) and an adversarial autoencoder (AAE), to extract features from LibriSpeech for model pre-training, and then conducted classification experiments on the Interactive Emotional Dyadic Motion Capture (IEMOCAP) dataset. Considering the imbalanced data distribution in IEMOCAP, we developed a novel data augmentation approach that optimizes the overlap shift between consecutive segments, and we redesigned the data division. The best classification accuracy reached 78.67% weighted accuracy (WA) and 76.89% unweighted accuracy (UA) with the AAE. Compared with the state-of-the-art results known to us (76.18% WA and 76.36% UA with a supervised learning method), we achieved a slight advantage. This suggests that unsupervised learning benefits the development of SER and provides a new way to mitigate the problem of data scarcity.
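The WA/UA distinction reported above matters precisely because IEMOCAP is class-imbalanced: WA is the plain fraction of correct predictions, while UA averages per-class recall so that rare classes count equally. A small sketch of both metrics on hypothetical toy labels (not the paper's data):

```python
import numpy as np

def weighted_accuracy(y_true, y_pred):
    """WA: overall fraction of correct predictions (dominated by frequent classes)."""
    return float(np.mean(y_true == y_pred))

def unweighted_accuracy(y_true, y_pred):
    """UA: per-class recall averaged over classes, ignoring class frequencies."""
    classes = np.unique(y_true)
    recalls = [np.mean(y_pred[y_true == c] == c) for c in classes]
    return float(np.mean(recalls))

# Imbalanced toy labels: class 0 dominates, so WA and UA diverge.
y_true = np.array([0] * 8 + [1] * 2)
y_pred = np.array([0] * 8 + [0, 1])
print(weighted_accuracy(y_true, y_pred))    # 0.9
print(unweighted_accuracy(y_true, y_pred))  # 0.75
```

With a perfectly balanced test set the two metrics coincide, which is why papers on imbalanced corpora report both.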


2021 ◽  
Vol 1994 (1) ◽  
pp. 012010
Author(s):  
Shixuan An ◽  
Ruicheng Lu ◽  
Tianyi Zhang
