sparsity structure
Recently Published Documents

TOTAL DOCUMENTS: 25 (FIVE YEARS: 14)
H-INDEX: 5 (FIVE YEARS: 1)

2021, Vol 53 (4), pp. 1115-1148
Author(s): Nicolas Meyer, Olivier Wintenberger

Abstract: Regular variation provides a convenient theoretical framework for studying large events. In the multivariate setting, the spectral measure characterizes the dependence structure of the extremes. This measure gathers information on the localization of extreme events and often has sparse support, since severe events do not occur simultaneously in all directions. However, it is defined through weak convergence, which does not provide a natural way to capture this sparsity structure. In this paper, we introduce the notion of sparse regular variation, which makes it possible to better learn the dependence structure of extreme events. This concept is based on the Euclidean projection onto the simplex, for which efficient algorithms are known. We prove that under mild assumptions sparse regular variation and regular variation are equivalent notions, and we establish several results for sparsely regularly varying random vectors.
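
As an illustration of the key computational ingredient, here is a short, hedged sketch (not taken from the paper) of the Euclidean projection onto the simplex, using the standard sort-and-threshold algorithm; the function name and the example vector are made up.

```python
# Illustrative sketch only: Euclidean projection onto the probability simplex,
# computed by the standard sort-and-threshold algorithm. Names and data are hypothetical.
import numpy as np

def project_onto_simplex(v, radius=1.0):
    """Project v onto {x : x >= 0, sum(x) = radius} in the Euclidean norm."""
    u = np.sort(v)[::-1]                                   # coordinates in decreasing order
    css = np.cumsum(u) - radius                            # shifted cumulative sums
    rho = np.nonzero(u - css / np.arange(1, len(v) + 1) > 0)[0][-1]
    theta = css[rho] / (rho + 1.0)                         # optimal shift
    return np.maximum(v - theta, 0.0)                      # many coordinates become exactly zero

# A vector whose large values concentrate in two directions projects to a sparse point.
print(project_onto_simplex(np.array([5.0, 0.3, 0.1, 4.2])))   # -> [0.9, 0.0, 0.0, 0.1]
```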


Author(s): Dewei Zhang, Yin Liu, Sam Davanloo Tajbakhsh

In many statistical learning problems, it is desired that the optimal solution conform to an a priori known sparsity structure represented by a directed acyclic graph. Inducing such structures by means of convex regularizers requires nonsmooth penalty functions that exploit group overlapping. Our study focuses on evaluating the proximal operator of the latent overlapping group lasso developed by Jacob et al. in 2009. We implemented an alternating direction method of multipliers with a sharing scheme to solve large-scale instances of the underlying optimization problem efficiently. In the absence of strong convexity, global linear convergence of the algorithm is established using the error bound theory. More specifically, the paper contributes to establishing primal and dual error bounds when the nonsmooth component in the objective function does not have a polyhedral epigraph. We also investigate the effect of the graph structure on the speed of convergence of the algorithm. Detailed numerical simulation studies over different graph structures supporting the proposed algorithm, as well as two applications in learning, are provided. Summary of Contribution: The paper proposes a computationally efficient optimization algorithm to evaluate the proximal operator of a nonsmooth hierarchical sparsity-inducing regularizer and establishes its convergence properties. The computationally intensive subproblem of the proposed algorithm can be fully parallelized, which allows solving large-scale instances of the underlying problem. Comprehensive numerical simulation studies benchmarking the proposed algorithm against five other methods on the speed of convergence to optimality are provided. Furthermore, the performance of the algorithm is demonstrated on two statistical learning applications related to topic modeling and breast cancer classification. The code, along with the simulation studies and benchmarks, is available on the corresponding author’s GitHub website for evaluation and future use.
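
As a hedged illustration (not the authors' ADMM implementation), the sketch below shows the block soft-thresholding map that the per-group subproblems of a sharing-scheme ADMM for the latent overlapping group lasso typically reduce to; the group names, overlap pattern, and penalty level are made up.

```python
# Illustrative sketch only: group-wise (block) soft-thresholding on latent copies of
# overlapping groups. This is a building block, not the paper's full algorithm.
import numpy as np

def group_soft_threshold(v, tau):
    """Proximal map of tau * ||.||_2 applied to one group's coefficient block."""
    norm = np.linalg.norm(v)
    if norm <= tau:
        return np.zeros_like(v)
    return (1.0 - tau / norm) * v

# The latent overlapping group lasso gives every group its own latent copy of the shared
# coordinates; each copy is shrunk independently (this map), and an ADMM consensus/sharing
# step ties the copies back together.
rng = np.random.default_rng(0)
groups = {"root": np.array([0, 1, 2, 3]), "child": np.array([2, 3])}   # overlapping groups
z = rng.normal(size=4)
latent = {name: group_soft_threshold(z[idx], tau=0.5) for name, idx in groups.items()}
print(latent)
```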


Entropy, 2021, Vol 23 (10), pp. 1249
Author(s): Jinwon Heo, Jangsun Baek

Along with advances in technology, matrix data, such as medical/industrial images, have emerged in many practical fields. These data usually have high dimensions and are not easy to cluster due to their intrinsically correlated structure among rows and columns. Most approaches convert matrix data to multidimensional vectors and apply conventional clustering methods to them, and thus suffer from an extreme high-dimensionality problem as well as a lack of interpretability of the correlated structure among row/column variables. Recently, a regularized model was proposed for clustering matrix-valued data by imposing a sparsity structure on the mean signal of each cluster. We extend that approach by further regularizing the covariance to cope better with the curse of dimensionality for large images. A penalized matrix normal mixture model with lasso-type penalty terms on both the mean and covariance matrices is proposed, and an expectation-maximization algorithm is developed to estimate the parameters. The proposed method achieves both parsimonious modeling and reflection of the proper conditional correlation structure. The estimators are consistent, and their limiting distributions are derived. We applied the proposed method to simulated data as well as real datasets and measured its clustering performance with the clustering accuracy (ACC) and the adjusted Rand index (ARI). The experimental results show that the proposed method achieved higher ACC and ARI than conventional methods.
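
A minimal, hypothetical sketch (under assumed notation, not the authors' EM code) of the kind of lasso-type update involved: the unpenalized M-step mean of a cluster is shrunk element-wise by soft-thresholding, yielding a sparse mean signal. All sizes and the penalty level are made up.

```python
# Illustrative sketch only: lasso-style shrinkage of a weighted cluster mean for
# matrix-valued data. Responsibilities and dimensions are hypothetical.
import numpy as np

def soft_threshold(M, lam):
    """Element-wise soft-thresholding, the proximal map of a lasso penalty."""
    return np.sign(M) * np.maximum(np.abs(M) - lam, 0.0)

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 8, 8))            # 50 matrix-valued observations (e.g., 8x8 images)
resp = rng.uniform(size=50)                # E-step responsibilities for one cluster (made up)
mean_unpen = np.tensordot(resp, X, axes=1) / resp.sum()   # weighted (unpenalized) cluster mean
mean_sparse = soft_threshold(mean_unpen, lam=0.1)          # lasso-type shrinkage of the mean
print(f"{(mean_sparse == 0).mean():.0%} of mean entries set exactly to zero")
```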


Author(s): Zhenzhen Li, Xiujuan Zhang, Chengui Xiao, Da Chen, Shushi Huang, ...

To overcome the low efficiency of conventional confocal Raman spectroscopy, many efforts have been devoted to parallelizing Raman excitation and acquisition, in which the scattering from multiple foci is projected onto different locations on a spectrometer’s CCD, along its vertical dimension, its horizontal dimension, or both. While the latter projection scheme relieves the limitation imposed by the number of CCD rows, the spectra of multiple foci are recorded in one spectral channel, resulting in spectral overlapping. Here, we developed a method within a compressive sensing framework to demultiplex the superimposed spectra of multiple cells during their dynamic processes. Unlike previous methods, which ignore the information shared between the spectra of the cells recorded at different times, the proposed method uses the prior that a cell’s spectra acquired at different times have the same sparsity structure in their principal components. Rather than independently demultiplexing the mixed spectra at individual time intervals, the method demultiplexes the whole spectral sequence acquired continuously during the dynamic process. By penalizing the sparsity combined from all time intervals, the collaborative optimization of the inversion problem gave more accurate recovery results. The performance of the method was substantiated with a 1D Raman tweezers array, which monitored the germination of multiple bacterial spores. The method can be extended to the monitoring of many living cells randomly scattered on a coverslip and has the potential to improve throughput by a few orders of magnitude.
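
An illustrative sketch (not the authors' reconstruction pipeline) of the joint-sparsity idea: a row-wise l2,1 soft-thresholding step that forces each principal-component coefficient to be active or inactive jointly across all time points; the dimensions and penalty level are made up.

```python
# Illustrative sketch only: joint (row-wise) shrinkage of a coefficient matrix whose rows
# are PC coefficients and whose columns are acquisition times, enforcing a shared sparsity
# pattern across the whole dynamic sequence.
import numpy as np

def row_group_threshold(C, tau):
    """Jointly shrink each row (one PC coefficient over all time points) toward zero."""
    norms = np.linalg.norm(C, axis=1, keepdims=True)
    scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
    return scale * C

rng = np.random.default_rng(1)
coeffs = rng.normal(size=(30, 12))         # 30 PC coefficients x 12 acquisition times
coeffs[5:] *= 0.05                         # only the first few components are strong in any frame
sparse_coeffs = row_group_threshold(coeffs, tau=1.0)
active = np.count_nonzero(np.linalg.norm(sparse_coeffs, axis=1))
print(active, "components remain active across the whole sequence")
```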


2021, Vol 12 (3), pp. 140-165
Author(s): Mahdi Khosravy, Thales Wulfert Cabral, Max Mateus Luiz, Neeraj Gupta, Ruben Gonzalez Crespo

Compressive sensing can reconstruct a signal/image from compressive measurements acquired with far fewer samples than the minimum required by the Nyquist sampling theorem. Random acquisition is widely suggested and used for compressive sensing. In random acquisition, randomness is deployed for compressive sampling of the signal/image, exploiting its sparsity structure. This article is a comprehensive review of random acquisition techniques in compressive sensing: it surveys the literature to date, collects the main methods, and briefly describes how each of them applies randomness to compressive sensing. These techniques are reviewed under the main categories of (1) random demodulator, (2) random convolution, (3) modulated wideband converter model, (4) compressive multiplexer diagram, (5) random equivalent sampling, (6) random modulation pre-integration, (7) quadrature analog-to-information converter, and (8) randomly triggered modulated-wideband compressive sensing (RT-MWCS).
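
A toy sketch (not from the article) of the common thread behind these acquisition schemes: a sparse signal is sensed through a random measurement operator with far fewer samples than its length, then recovered by a simple iterative soft-thresholding (ISTA) loop. All sizes and the regularization level are arbitrary.

```python
# Illustrative sketch only: random Gaussian compressive sensing of a sparse signal
# followed by ISTA recovery. Dimensions and tuning constants are made up.
import numpy as np

rng = np.random.default_rng(2)
n, m, k = 256, 64, 5                         # signal length, measurements, sparsity level
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)
A = rng.normal(size=(m, n)) / np.sqrt(m)     # random Gaussian acquisition matrix
y = A @ x                                    # compressive measurements (m << n)

x_hat = np.zeros(n)
step = 1.0 / np.linalg.norm(A, 2) ** 2       # step size from the spectral norm of A
lam = 0.01
for _ in range(500):                         # ISTA: gradient step followed by soft-thresholding
    z = x_hat - step * A.T @ (A @ x_hat - y)
    x_hat = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
print("relative recovery error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```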


2021, Vol 17 (3), pp. 349-356
Author(s): Iman Al Fajri, Hendra Mesra, Jeffry Kusuma

This paper presents a derivation of a fourth-order Runge-Kutta method with six stages suitable for parallel implementation. The parallel model is developed based on the sparsity structure of the fourth-order Runge-Kutta method, with the computation divided across three processors. Comparing the parallel computation model with the sequential model in terms of accuracy shows that the sequential model is better. However, in general, the parallel method approaches the analytic solution as the number of iterations increases. In terms of execution time, the parallel method has an advantage over the sequential method.
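
For reference, a minimal sketch of one classical four-stage RK4 step (the paper's six-stage, three-processor variant is not reproduced here); the test problem y' = -y is made up.

```python
# Reference sketch only: classical four-stage, fourth-order Runge-Kutta step.
import numpy as np

def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h * k1 / 2)
    k3 = f(t + h / 2, y + h * k2 / 2)
    k4 = f(t + h, y + h * k3)
    return y + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

# Test problem y' = -y with exact solution exp(-t).
y, t, h = 1.0, 0.0, 0.1
for _ in range(10):
    y = rk4_step(lambda t, y: -y, t, y, h)
    t += h
print(y, "vs exact", np.exp(-1.0))
```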


2021, Vol 47 (3)
Author(s): Timon S. Gutleb

Abstract: We present a sparse spectral method for nonlinear integro-differential Volterra equations based on the Volterra operator’s banded sparsity structure when acting on specific Jacobi polynomial bases. The method is not restricted to convolution-type kernels of the form K(x, y) = K(x − y) but instead works for general kernels at competitive speeds and with exponential convergence. We provide various numerical experiments based on an open-source implementation for problems with and without known analytic solutions and comparisons with other methods.
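
For contrast, a hedged baseline sketch (not the paper's sparse spectral method, and for a simpler problem class): a trapezoidal-rule solver for a linear second-kind Volterra integral equation u(x) = g(x) + int_0^x K(x, y) u(y) dy with a general kernel K(x, y); the grid, helper name, and test problem are made up.

```python
# Baseline sketch only: time-marching trapezoidal rule for a second-kind Volterra
# integral equation with a general (non-convolution) kernel, on a uniform grid.
import numpy as np

def volterra_trapezoid(g, K, x):
    """Solve u(x) = g(x) + int_0^x K(x, y) u(y) dy on a uniform grid x."""
    n, h = len(x), x[1] - x[0]
    u = np.empty(n)
    u[0] = g(x[0])
    for i in range(1, n):
        # trapezoid weights: h/2 at both endpoints, h at interior nodes
        s = 0.5 * K(x[i], x[0]) * u[0] + np.dot(K(x[i], x[1:i]), u[1:i])
        u[i] = (g(x[i]) + h * s) / (1.0 - 0.5 * h * K(x[i], x[i]))
    return u

# Test problem with known solution u(x) = exp(x): u(x) = 1 + int_0^x u(y) dy.
x = np.linspace(0.0, 1.0, 201)
u = volterra_trapezoid(lambda t: 1.0, lambda s, y: np.ones_like(y), x)
print("error at x = 1:", abs(u[-1] - np.e))
```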


2021, Vol 31 (3)
Author(s): Joris Bierkens, Sebastiano Grazzi, Frank van der Meulen, Moritz Schauer

Abstract: We introduce the use of the Zig-Zag sampler for the problem of sampling conditional diffusion processes (diffusion bridges). The Zig-Zag sampler is a rejection-free sampling scheme based on a non-reversible continuous piecewise deterministic Markov process. Similar to the Lévy–Ciesielski construction of a Brownian motion, we expand the diffusion path in a truncated Faber–Schauder basis. The coefficients within the basis are sampled using a Zig-Zag sampler. A key innovation is the use of the fully local algorithm for the Zig-Zag sampler, which makes it possible to exploit the sparsity structure implied by the dependency graph of the coefficients and by the subsampling technique, thereby reducing the complexity of the algorithm. We illustrate the performance of the proposed methods in a number of examples.
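
A hypothetical one-dimensional illustration (not the authors' diffusion-bridge code): a basic Zig-Zag sampler targeting a standard Gaussian, where the velocity flips at events of rate max(0, v·x) and that rate can be inverted in closed form; the number of events and the seed are made up.

```python
# Illustrative sketch only: 1-D Zig-Zag sampler for a standard Gaussian target,
# with exact event-time simulation and continuous-time ergodic averages.
import numpy as np

rng = np.random.default_rng(3)
x, v = 0.0, 1.0                       # position and velocity (+1 or -1)
t_total, m1, m2 = 0.0, 0.0, 0.0
for _ in range(20000):
    a = v * x
    # For U(x) = x^2 / 2 the switching rate along the segment is max(0, a + t);
    # invert its time integral exactly against a unit exponential.
    tau = -a + np.sqrt(max(a, 0.0) ** 2 + 2.0 * rng.exponential())
    # Accumulate exact time integrals of x and x^2 along the linear segment.
    m1 += x * tau + v * tau**2 / 2.0
    m2 += x**2 * tau + x * v * tau**2 + tau**3 / 3.0
    t_total += tau
    x, v = x + v * tau, -v            # deterministic move, then flip the velocity
print("ergodic mean and variance (targets 0 and 1):",
      m1 / t_total, m2 / t_total - (m1 / t_total) ** 2)
```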


2020, pp. 1-29
Author(s): Le Chang, Yanlin Shi

Abstract: This paper investigates a high-dimensional vector-autoregressive (VAR) model in mortality modeling and forecasting. We propose an extension of the sparse VAR (SVAR) model fitted on the log-mortality improvements, which we name “spatially penalized smoothed VAR” (SSVAR). By adaptively penalizing the coefficients based on the distances between ages, SSVAR not only allows a flexible data-driven sparsity structure of the coefficient matrix but simultaneously ensures interpretable coefficients, including cohort effects. Moreover, by incorporating the smoothness penalties, divergence in forecast mortality rates of neighboring ages is largely reduced, compared with the existing SVAR model. A novel estimation approach that uses the accelerated proximal gradient algorithm is proposed to solve SSVAR efficiently. Similarly, we propose estimating the precision matrix of the residuals using a spatially penalized graphical Lasso to further study the dependency structure of the residuals. Using the UK and France population data, we demonstrate that the SSVAR model consistently outperforms the famous Lee–Carter, Hyndman–Ullah, and two VAR-type models in forecasting accuracy. Finally, we discuss the extension of the SSVAR model to multi-population mortality forecasting with an illustrative example that demonstrates its superiority in forecasting over existing approaches.
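
An illustrative sketch (not the authors' SSVAR estimator or data): a proximal-gradient (ISTA-style) loop for a VAR(1) coefficient matrix with element-wise lasso weights that grow with the age distance |i − j|, mimicking the spatial-penalization idea; the data, weights, and tuning constants are all made up.

```python
# Illustrative sketch only: distance-weighted lasso estimation of a VAR(1) coefficient
# matrix by proximal gradient. Everything here is synthetic and hypothetical.
import numpy as np

rng = np.random.default_rng(4)
T, p = 200, 10                                   # time points and number of ages (made up)
Y = rng.normal(size=(T, p))                      # stand-in for log-mortality improvements
X, Z = Y[:-1], Y[1:]                             # lagged predictors and current responses

ages = np.arange(p)
W = 0.05 * (1.0 + np.abs(ages[:, None] - ages[None, :]))   # penalty grows with age distance
A = np.zeros((p, p))                             # VAR(1) coefficient matrix
step = 1.0 / np.linalg.norm(X.T @ X, 2)          # step size from the Gram matrix spectral norm
for _ in range(300):                             # proximal-gradient (ISTA-style) iterations
    A -= step * X.T @ (X @ A - Z)                # gradient step on the least-squares loss
    A = np.sign(A) * np.maximum(np.abs(A) - step * W, 0.0)   # weighted soft-thresholding
print("nonzero coefficients:", np.count_nonzero(A), "of", p * p)
```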

