sparse representations
Recently Published Documents


TOTAL DOCUMENTS

534
(FIVE YEARS 74)

H-INDEX

45
(FIVE YEARS 4)

2021 ◽  
pp. 107958
Author(s):  
Ting Liu ◽  
Hongzhong Tang ◽  
Dongbo Zhang ◽  
Shuying Zeng ◽  
Biao Luo ◽  
...  

Author(s):  
Tao Qian

Sparse (fast) representations of deterministic signals have been well studied. Among other types there exists one called adaptive Fourier decomposition (AFD) for functions in analytic Hardy spaces. Through the Hardy space decomposition of the $L^2$ space, the AFD algorithm also gives rise to sparse representations of signals of finite energy. To deal with multivariate signals, the general Hilbert space context comes into play. The multivariate counterpart of AFD in general Hilbert spaces with a dictionary has been named pre-orthogonal AFD (POAFD). In the present study we generalize AFD and POAFD to random analytic signals by formulating stochastic analytic Hardy spaces and stochastic Hilbert spaces. To analyze random analytic signals we work with two models, both called stochastic AFD (SAFD). The two models are made, respectively, for (i) signals expressible as the sum of a deterministic signal and an error term (SAFDI); and (ii) signals from different sources obeying a certain distribution law (SAFDII). In the latter part of the paper we drop the analyticity assumption and generalize SAFDI and SAFDII to what we call stochastic Hilbert spaces with a dictionary. The generalized methods are named stochastic pre-orthogonal adaptive Fourier decompositions (SPOAFDI and SPOAFDII). Like AFDs and POAFDs for deterministic signals, the developed stochastic POAFD algorithms offer powerful tools to approximate, and thus analyze, random signals.
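The greedy, maximal-selection idea underlying AFD and POAFD can be illustrated with a plain matching-pursuit sketch over a finite dictionary. This is not the authors' algorithm (AFD works with Takenaka–Malmquist-type systems in Hardy/Hilbert spaces, and POAFD adds a pre-orthogonalization step); all names and sizes below are hypothetical stand-ins.

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms):
    """Greedy sparse approximation: repeatedly pick the dictionary atom
    most correlated with the residual and subtract its contribution.
    POAFD applies an analogous maximal-selection principle at each step."""
    residual = signal.astype(float).copy()
    indices, coeffs = [], []
    for _ in range(n_atoms):
        inner = dictionary.T @ residual       # correlations with all atoms
        k = int(np.argmax(np.abs(inner)))     # maximal-selection step
        indices.append(k)
        coeffs.append(inner[k])               # atoms assumed unit-norm
        residual = residual - inner[k] * dictionary[:, k]
    return indices, coeffs, residual

# toy example: unit-norm random dictionary, signal built from two atoms
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)
x = 2.0 * D[:, 3] - 1.5 * D[:, 40]
idx, c, r = matching_pursuit(x, D, n_atoms=8)
```

The residual norm shrinks at every step, so a few greedy selections already yield a sparse approximation of the signal; the stochastic variants in the abstract carry the same selection principle over to random signals.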


2021 ◽  
Author(s):  
Alexander G. Ororbia

In this article, we propose a novel form of unsupervised learning that we call continual competitive memory (CCM), as well as a simple framework to unify related neural models that operate under the principles of competition. The resulting neural system, which takes inspiration from adaptive resonance theory, is shown to offer a simple yet effective approach for combating catastrophic forgetting in continual classification problems. We compare our approach to several other forms of competitive learning and find that: (1) competitive learning, in general, offers a promising pathway towards acquiring sparse representations that reduce neural cross-talk; and (2) our proposed variant, the CCM, designed with task streams in mind, is needed to prevent old information from being overwritten. CCM yields promising results on continual learning benchmarks including Split MNIST and Split NotMNIST.
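The winner-take-all competition that this line of work builds on can be sketched in a few lines: only the unit closest to the input updates, so different units specialize on different regions of the data. This is an illustrative sketch of plain competitive learning, not the CCM model itself; all names are hypothetical.

```python
import numpy as np

def competitive_update(weights, x, lr=0.1):
    """One winner-take-all step: the unit whose weight vector is closest
    to the input moves toward it; all other units stay put, which
    encourages sparse, low-cross-talk codes."""
    dists = np.linalg.norm(weights - x, axis=1)
    winner = int(np.argmin(dists))
    weights[winner] += lr * (x - weights[winner])
    return winner

# toy data: two well-separated clusters in 2-D, four competing units
rng = np.random.default_rng(1)
W = rng.standard_normal((4, 2))
data = np.vstack([rng.normal([3.0, 3.0], 0.2, (50, 2)),
                  rng.normal([-3.0, -3.0], 0.2, (50, 2))])
for _ in range(20):                       # a few passes over shuffled data
    for x in rng.permutation(data):
        competitive_update(W, x)
```

After training, distinct units settle near the two cluster centers. The catastrophic-forgetting issue the abstract targets arises when later tasks pull those specialized units away from what they learned earlier, which is what the task-stream-aware CCM variant is designed to prevent.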


2021 ◽  
pp. 108185
Author(s):  
Thanh T. Nguyen ◽  
Charles Soussen ◽  
Jérôme Idier ◽  
El-Hadi Djermoune

2021 ◽  
Author(s):  
Blaž Škrlj ◽  
Matej Martinc ◽  
Nada Lavrač ◽  
Senja Pollak

Learning from texts has been widely adopted throughout industry and science. While state-of-the-art neural language models have shown very promising results for text classification, they are expensive to (pre-)train, require large amounts of data, and require tuning of hundreds of millions or more parameters. This paper explores how automatically evolved text representations can serve as a basis for an explainable, low-resource class of models with competitive performance that are subject to automated hyperparameter tuning. We present autoBOT (automatic Bags-Of-Tokens), an autoML approach suitable for low-resource learning scenarios, where both the hardware and the amount of data available for training are limited. The proposed approach consists of an evolutionary algorithm that jointly optimizes various sparse representations of a given text (including word, subword, POS tag, keyword-based, knowledge graph-based and relational features) and two types of document embeddings (non-sparse representations). The key idea of autoBOT is that, instead of evolving at the learner level, evolution is conducted at the representation level. The proposed method offers competitive classification performance on fourteen real-world classification tasks when compared against a competitive autoML approach that evolves ensemble models, as well as state-of-the-art neural language models such as BERT and RoBERTa. Moreover, the approach is explainable, as the importance of the parts of the input space is part of the final solution yielded by the proposed optimization procedure, offering potential for meta-transfer learning.
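The key idea of evolving at the representation level rather than the learner level can be caricatured with a tiny evolutionary loop over per-feature weights, scored by a fixed, cheap classifier. This is a hedged sketch, not the autoBOT implementation: the nearest-centroid fitness, the mutation scheme, and all names below are hypothetical stand-ins for the paper's much richer feature spaces and tuning procedure.

```python
import numpy as np

rng = np.random.default_rng(2)

def fitness(weights, X, y):
    """Score a representation weighting: nearest-centroid accuracy on
    features scaled by the evolved weights (the learner stays fixed;
    only the representation evolves)."""
    Xw = X * weights
    centroids = np.array([Xw[y == c].mean(axis=0) for c in np.unique(y)])
    pred = np.argmin(((Xw[:, None, :] - centroids) ** 2).sum(-1), axis=1)
    return (pred == y).mean()

def evolve(X, y, n_feats, pop=20, gens=30, sigma=0.3):
    """Evolve per-feature weights: keep the fitter half, mutate it."""
    population = rng.random((pop, n_feats))
    for _ in range(gens):
        scores = np.array([fitness(w, X, y) for w in population])
        elite = population[np.argsort(scores)[-pop // 2:]]
        children = np.clip(
            elite + sigma * rng.standard_normal(elite.shape), 0, None)
        population = np.vstack([elite, children])
    scores = np.array([fitness(w, X, y) for w in population])
    return population[int(np.argmax(scores))]

# toy data: only the first 2 of 10 features carry the label signal
X = rng.standard_normal((200, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
best = evolve(X, y, n_feats=10)
```

The surviving weight vectors concentrate mass on informative features, and, as the abstract notes for autoBOT, those learned importances double as an explanation of which parts of the input space matter.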


Geophysics ◽  
2021 ◽  
pp. 1-129
Author(s):  
Lingqian Wang ◽  
Hui Zhou ◽  
Wenling Liu ◽  
Bo Yu ◽  
Sheng Zhang

Seismic acoustic impedance inversion plays an important role in subsurface quantitative interpretation. Due to the band-limited property of the seismic record and the discretization of the continuous elastic parameters with a limited sampling interval, the inverse problem suffers from serious ill-posedness. Various regularization methods have been introduced into seismic inversion to make the inversion results comply with pre-specified characteristics. However, conventional seismic inversion methods can only reflect fixed distribution characteristics and do not account for the discretization challenge. We propose a new post-stack seismic impedance inversion method with upsampling and adaptive regularization. The adaptive regularization is constructed with two dictionaries, trained on the true model and on the upsampled model-based inversion result, to capture the features of high- and low-resolution details, and a sparsity-based statistical model is proposed to relate their sparse representations. The high-resolution components can be recovered from the prediction model and the low-resolution sparse representations, and the parameters of the statistical prediction model can be obtained effectively with conventional optimization algorithms. The synthetic and field data tests show that model-based inversion depends on the sampling interval, and that the proposed method can reveal more thin layers and enhance the lateral continuity of the strata compared with conventional inversion methods. Moreover, the inverted impedance variance of the proposed method matches borehole observations well. The tests demonstrate that the interpolated model-based inversion result, combined with the sparsity-based prediction model, can effectively improve the resolution and accuracy of the inversion results.
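The coupling between low- and high-resolution sparse representations that such prediction models exploit can be sketched with a pair of dictionaries whose atoms share one sparse code: code the observed low-resolution trace over the low-resolution dictionary, then synthesize the high-resolution trace from the same code over the high-resolution dictionary. This is an illustrative toy in the spirit of sparsity-based super-resolution, not the proposed inversion method; all sizes, names, and the orthonormal low-resolution dictionary are hypothetical simplifications.

```python
import numpy as np

rng = np.random.default_rng(3)

# Coupled dictionaries: each high-resolution atom is its low-resolution
# counterpart stacked with extra "detail" samples. The key assumption is
# that low- and high-resolution versions of a trace share one sparse code.
Q, _ = np.linalg.qr(rng.standard_normal((32, 32)))  # orthonormal low-res atoms
detail = rng.standard_normal((32, 32))              # high-res detail rows
D_lo = Q
D_hi = np.vstack([Q, detail])

def omp(y, D, n_atoms):
    """Orthogonal matching pursuit: greedily select atoms, then refit
    the coefficients on the selected support by least squares."""
    residual, support = y.astype(float), []
    for _ in range(n_atoms):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    return support, coef

# a 3-sparse high-resolution trace; only its low-resolution part is observed
x_hi = D_hi[:, [2, 10, 25]] @ np.array([1.0, -2.0, 0.5])
y_lo = x_hi[:32]                        # observed low-resolution samples

support, coef = omp(y_lo, D_lo, n_atoms=3)
x_rec = D_hi[:, support] @ coef         # high-res trace from the low-res code
```

Because the two dictionaries are trained (here, constructed) jointly, the sparse code inferred from the band-limited observation carries over to the high-resolution dictionary, which is the mechanism by which thin layers below the original sampling interval can be restored.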

