Sparse Filtering
Recently Published Documents

TOTAL DOCUMENTS: 61 (five years: 37)
H-INDEX: 8 (five years: 3)
2021 ◽  
pp. 095745652110557
Author(s):  
Lifeng Chan ◽  
Chun Cheng

Detecting mechanical faults in rotating machinery in a timely manner plays a key role in avoiding accidents. With the advent of the big data era, intelligent fault diagnosis methods based on machine learning models have become promising tools. To improve feature learning ability, an unsupervised sparse feature learning method called variant sparse filtering is developed. Then, a fault diagnosis method combining variant sparse filtering with a back-propagation algorithm is presented. The back-propagation algorithm can further optimize the weight matrix of variant sparse filtering using labeled data. Finally, the developed diagnosis method is validated with rolling bearing and planetary gearbox experiments. The experimental results indicate that the developed method achieves high accuracy and good stability in rotating machinery fault diagnosis.
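
For reference, the core of standard sparse filtering, on which the variant described above builds, can be written in a few lines. The sketch below is a minimal illustration: the soft-absolute activation, layer sizes, and optimizer are assumptions rather than the authors' settings, and the supervised back-propagation fine-tuning step with labeled data is not shown.

```python
# Minimal sketch of the standard sparse filtering objective; the activation,
# dimensions, and optimizer are assumptions, not the paper's exact settings.
import torch

def sparse_filtering_loss(W, X, eps=1e-8):
    """X: (n_samples, n_inputs) signal segments, W: (n_inputs, n_features)."""
    F = torch.sqrt((X @ W) ** 2 + eps)            # soft absolute-value activation
    F = F / (F.norm(dim=0, keepdim=True) + eps)   # normalize each feature over samples
    F = F / (F.norm(dim=1, keepdim=True) + eps)   # normalize each sample over features
    return F.sum()                                # l1 sparsity penalty (F is non-negative)

# Unsupervised learning of the weight matrix by gradient descent.
X = torch.randn(256, 400)                         # e.g. 256 vibration-signal segments
W = torch.randn(400, 100, requires_grad=True)
optimizer = torch.optim.Adam([W], lr=1e-3)
for _ in range(200):
    optimizer.zero_grad()
    loss = sparse_filtering_loss(W, X)
    loss.backward()
    optimizer.step()
```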


2021 ◽  
Author(s):  
Sihao Lu ◽  
Mark Steadman ◽  
Grace W. Y. Ang ◽  
Andrei S. Kozlov

A central question in sensory neuroscience is how neurons represent complex natural stimuli. This process involves multiple steps of feature extraction to obtain a condensed, categorical representation useful for classification and behavior. It has previously been shown that central auditory neurons in the starling have composite receptive fields composed of multiple features when probed with conspecific songs. Whether this property is an idiosyncratic characteristic of songbirds, a group of highly specialized vocal learners, or a generic characteristic of central auditory systems in different animals is, however, unknown. To address this question, we have recorded responses from auditory cortical neurons in mice, and characterized their receptive fields using mouse ultrasonic vocalizations (USVs) as a natural and ethologically relevant stimulus and pitch-shifted starling songs as a natural but ethologically irrelevant control stimulus. We have found that auditory cortical neurons in the mouse display composite receptive fields with multiple excitatory and inhibitory subunits. Moreover, this was the case with both the conspecific and the heterospecific vocalizations. We then trained the sparse filtering algorithm on both classes of natural stimuli to obtain statistically optimal features, and compared the natural and artificial features using UMAP, a dimensionality-reduction algorithm previously used to analyze mouse USVs and birdsongs. We have found that the receptive-field features obtained with the mouse USVs and those obtained with the pitch-shifted starling songs clustered together, as did the sparse-filtering features. However, the natural and artificial receptive-field features clustered mostly separately. These results indicate that composite receptive fields are likely a generic property of central auditory systems in different classes of animals. They further suggest that the quadratic receptive-field features of the mouse auditory cortical neurons are natural-stimulus invariant.
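
As a rough illustration of the comparison described above, the sketch below embeds two feature sets jointly with UMAP and labels them so their overlap or separation can be inspected. The array shapes and UMAP parameters are assumptions; the real features come from recorded neurons and the trained sparse filtering model, not random data.

```python
# Hedged sketch of a joint UMAP embedding of two feature sets (shapes and
# parameters are illustrative stand-ins for the study's derived features).
import numpy as np
import umap

rf_features = np.random.randn(200, 1024)   # stand-in for receptive-field features
sf_features = np.random.randn(200, 1024)   # stand-in for sparse-filtering features

reducer = umap.UMAP(n_neighbors=15, min_dist=0.1, random_state=0)
embedding = reducer.fit_transform(np.vstack([rf_features, sf_features]))

labels = np.array([0] * len(rf_features) + [1] * len(sf_features))
# Plotting embedding[:, 0] vs. embedding[:, 1] colored by `labels` shows whether
# the two feature classes mix in the embedding or form separate clusters.
```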


Author(s):  
Yan Zhenhao ◽  
Zhang Zongzhen ◽  
Tian Zhiyuan ◽  
Wang Jinrui ◽  
Bao Huaiqian ◽  
...  

2021 ◽  
Author(s):  
Benjamin B Bartelle ◽  
Mohammad Abbasi ◽  
Connor Sanderford ◽  
Narendian Raghu

We have developed representation learning methods specifically to address the constraints and advantages of complex spatial data. Sparse filtering (SFt) uses principles of sparsity and mutual information to build representations from both global and local features using a minimal list of samples. Critically, the samples that comprise each representation are listed and ranked by informativeness. We used the Allen Mouse Brain Atlas gene expression data for prototyping and established performance metrics based on representation accuracy relative to labeled anatomy. SFt, implemented with the PyTorch machine learning libraries for Python, returned the most accurate reconstruction of the anatomical ground truth of any method tested. SFt-generated gene lists could be further compressed, retaining 95% of informativeness with only 580 genes. Finally, we built classifiers capable of parsing anatomy with >95% accuracy using only 10 derived genes. Sparse learning is a powerful but underexplored means to derive biologically meaningful representations from complex datasets and a quantitative basis for compressed sensing of classifiable phenomena. SFt should be considered as an alternative to PCA or manifold learning for any high-dimensional dataset and as a basis for future spatial learning algorithms.
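
A hedged sketch of the final classification step described above is shown below: genes are ranked by their contribution to learned sparse-filtering weights and a small classifier is fit on the top ten. The ranking criterion, the logistic-regression classifier, and all array shapes here are illustrative assumptions, not the authors' exact pipeline or data.

```python
# Hedged sketch: rank genes by their weight in a learned sparse-filtering basis,
# then classify anatomy from only the top-ranked genes. All data are stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

n_voxels, n_genes = 5000, 4000
X = np.random.rand(n_voxels, n_genes)          # stand-in expression energy per voxel and gene
y = np.random.randint(0, 12, size=n_voxels)    # stand-in anatomical region labels
W = np.random.randn(n_genes, 64)               # stand-in for learned SFt weights

# Rank genes by how strongly they contribute to the learned features (assumed criterion).
gene_scores = np.abs(W).sum(axis=1)
top10 = np.argsort(gene_scores)[::-1][:10]

X_train, X_test, y_train, y_test = train_test_split(X[:, top10], y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("accuracy on held-out voxels:", clf.score(X_test, y_test))
```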


Entropy ◽  
2021 ◽  
Vol 23 (8) ◽  
pp. 1075
Author(s):  
Huaiqian Bao ◽  
Zhaoting Shi ◽  
Jinrui Wang ◽  
Zongzhen Zhang ◽  
Guowei Zhang

Fault diagnosis of mechanical equipment is mainly based on the contact measurement and analysis of vibration signals. In some special working conditions, non-contact fault diagnosis methods based on acoustic signal measurement can compensate for the limitations of contact testing. However, their engineering application value is greatly restricted by the low signal-to-noise ratio (SNR) of acoustic signals. To address this deficiency, a novel fault diagnosis method based on generalized matrix norm sparse filtering (GMNSF) is proposed in this paper. Specifically, the generalized matrix norm is introduced into sparse filtering to seek the optimal sparse feature distribution and overcome the low SNR of acoustic signals. Firstly, the collected acoustic signals are randomly overlapped to form a sample fragment data set. Then, three constraints are imposed on the multi-period data set by the GMNSF model to extract the sparse features in the samples. Finally, softmax is used as the classifier to categorize different fault types. The diagnostic performance of the proposed method is verified on bearing and planetary gear datasets. Results show that the GMNSF model has better feature extraction performance and anti-noise ability than traditional methods.
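
Two of the data-handling ideas in this abstract, randomly overlapped signal fragments and a generalized matrix-norm sparsity measure, can be sketched as follows. The norm orders, fragment length, and feature dimension are assumptions, and the three specific constraints of the GMNSF model are not reproduced here.

```python
# Hedged sketch: random overlapping fragments from an acoustic signal plus a
# generalized l_p/l_q-style sparsity measure (p=1, q=2 recovers the standard
# sparse filtering objective). Values are assumptions, not the paper's settings.
import numpy as np

def random_overlapping_segments(signal, length=1024, n_segments=200, seed=0):
    """Draw random, overlapping fragments from a 1-D acoustic signal."""
    rng = np.random.default_rng(seed)
    starts = rng.integers(0, len(signal) - length, size=n_segments)
    return np.stack([signal[s:s + length] for s in starts])

def generalized_norm_sparsity(F, p=1.0, q=2.0, eps=1e-8):
    """Normalize features and samples with an l_q norm, then apply an l_p penalty."""
    F = np.abs(F) + eps
    F = F / np.linalg.norm(F, ord=q, axis=0, keepdims=True)   # per-feature normalization
    F = F / np.linalg.norm(F, ord=q, axis=1, keepdims=True)   # per-sample normalization
    return (F ** p).sum()

segments = random_overlapping_segments(np.random.randn(100_000))  # fragment data set
W = np.random.randn(segments.shape[1], 128)                       # feature weights
print(generalized_norm_sparsity(segments @ W))
```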


2021 ◽  
Author(s):  
Shanshan Ji ◽  
Baokun Han ◽  
Zongzhen Zhang ◽  
Jinrui Wang ◽  
Bo Lu ◽  
...  
