sparsity constraints
Recently Published Documents

TOTAL DOCUMENTS: 226 (five years: 52)
H-INDEX: 31 (five years: 4)
2021 · Vol 13 (24) · pp. 5126
Author(s): Xiaobin Wu, Hongsong Qu, Liangliang Zheng, Tan Gao, Ziyu Zhang

Stripe noise is a common artifact that considerably degrades image quality, so stripe noise removal (destriping) is an important step in image processing. Because existing destriping models introduce varying degrees of ripple effects, this paper proposes a new model for removing vertical stripes based on total variation (TV) regularization together with global low-rank and directional sparsity constraints. TV regularization preserves image details, while the global low-rank and directional sparsity terms constrain the stripe noise, so that the directional and structural characteristics of stripe noise are fully exploited for a better removal effect. Moreover, we design an alternating minimization scheme to obtain the optimal solution. Experiments on simulated and real data show that the proposed model is robust and outperforms existing competitive destriping models, both subjectively and objectively.
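As a rough illustration of the directional structure this model exploits (vertical stripes are nearly constant along each column), the toy sketch below removes stripes by estimating a per-column offset. `destripe_columns` is a hypothetical helper, not the paper's TV plus low-rank formulation, which additionally preserves scene detail through the TV term.

```python
def destripe_columns(img):
    """Crude vertical-stripe removal: model the stripe as a per-column
    offset (constant along each column), estimate it as the column
    median minus the global median of those medians, and subtract it.
    img is a list of rows; works only when the clean scene has no
    strong systematic column structure of its own."""
    rows, cols = len(img), len(img[0])

    def median(xs):
        s = sorted(xs)
        n = len(s)
        return s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2

    col_medians = [median([img[r][c] for r in range(rows)]) for c in range(cols)]
    global_median = median(col_medians)
    offsets = [m - global_median for m in col_medians]
    return [[img[r][c] - offsets[c] for c in range(cols)] for r in range(rows)]
```

On a flat scene with a single bright stripe column, the offset estimate recovers the clean image exactly; real destriping must, as the paper argues, balance stripe suppression against detail preservation.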


2021 · Vol 2021 · pp. 1-14
Author(s): Shuaiyang Zhang, Wenshen Hua, Gang Li, Jie Liu, Fuyu Huang, et al.

Sparse unmixing has attracted widespread attention from researchers, and many effective unmixing algorithms have been proposed in recent years. However, most algorithms improve unmixing accuracy at the cost of heavy computation: higher accuracy typically means higher computational complexity. To address this problem, we propose a novel double regression-based sparse unmixing model (DRSUM), which obtains better unmixing results with lower computational complexity. DRSUM decomposes the complex objective function into two simple formulas and completes the unmixing process through two sparse regressions, with the result of the first sparse regression added as a constraint to the second. DRSUM is an open model: different constraints can be added to improve the unmixing accuracy, and appropriate preprocessing can further improve the results. Under this model, a specific algorithm called double regression-based sparse unmixing via K-means (DRSUM_K-means) is proposed. An improved K-means clustering algorithm is first used for preprocessing, and then single-sparsity and joint-sparsity constraints (using the ℓ2,0 norm to control the sparsity) are imposed on the first and second sparse regressions, respectively. To meet the sparsity requirement, we introduce the row-hard-threshold function to solve the ℓ2,0 norm directly. DRSUM_K-means can then be solved efficiently under the alternating direction method of multipliers (ADMM) framework. Experiments on simulated and real data demonstrate the effectiveness of DRSUM_K-means.
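The row-hard-threshold operation mentioned above has a simple closed form: keep the k rows with the largest ℓ2 norms and zero the rest, which enforces the ℓ2,0 (row-sparsity) constraint exactly. A minimal sketch (the function name and list-of-lists representation are illustrative, not from the paper):

```python
import math

def row_hard_threshold(X, k):
    """Keep the k rows of X with the largest l2 norms and zero out the
    rest -- the projection step for an l_{2,0} (row-sparsity) constraint,
    as used for joint sparsity in unmixing.  X is a list of rows."""
    norms = [math.sqrt(sum(v * v for v in row)) for row in X]
    keep = set(sorted(range(len(X)), key=lambda i: norms[i], reverse=True)[:k])
    return [row[:] if i in keep else [0.0] * len(row) for i, row in enumerate(X)]
```

Because the projection is exact rather than a convex relaxation, it slots directly into an ADMM iteration as the subproblem solve for the row-sparse variable.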


2021 · Vol 15
Author(s): Ahana Gangopadhyay, Shantanu Chakrabartty

Growth-transform (GT) neurons and their population models allow for independent control over the spiking statistics and the transient population dynamics while optimizing a physically plausible distributed energy functional involving continuous-valued neural variables. In this paper we describe a backpropagation-less learning approach to train a network of spiking GT neurons by enforcing sparsity constraints on the overall network spiking activity. The key features of the model and the proposed learning framework are: (a) spike responses are generated as a result of constraint violation and hence can be viewed as Lagrangian parameters; (b) the optimal parameters for a given task can be learned using neurally relevant local learning rules and in an online manner; (c) the network optimizes itself to encode the solution with as few spikes as possible (sparsity); (d) the network optimizes itself to operate at a solution with the maximum dynamic range and away from saturation; and (e) the framework is flexible enough to incorporate additional structural and connectivity constraints on the network. As a result, the proposed formulation is attractive for designing neuromorphic tinyML systems that are constrained in energy, resources, and network structure. In this paper, we show how the approach could be used for unsupervised and supervised learning such that minimizing a training error is equivalent to minimizing the overall spiking activity across the network. We then build on this framework to implement three different multi-layer spiking network architectures with progressively increasing flexibility in training and consequently, sparsity. We demonstrate the applicability of the proposed algorithm for resource-efficient learning using a publicly available machine olfaction dataset with unique challenges like sensor drift and a wide range of stimulus concentrations. 
In all of these case studies, we show that a GT network trained with the proposed learning approach minimizes network-level spiking activity while achieving classification accuracies comparable to standard approaches on the same dataset.


2021
Author(s): Sudhir Kumar, Sudip Sharma

We introduce a supervised machine learning approach with sparsity constraints for phylogenomics, referred to as evolutionary sparse learning (ESL). ESL builds models with genomic loci (such as genes, proteins, genomic segments, and positions) as parameters. Using the Least Absolute Shrinkage and Selection Operator (LASSO), ESL selects only the most important genomic loci to explain a given phylogenetic hypothesis or the presence/absence of a trait. ESL does not directly model conventional parameters such as substitution rates between nucleotides, rate variation among positions, or phylogeny branch lengths. Instead, it directly employs the concordance of variation across sequences in an alignment with the evolutionary hypothesis of interest. ESL provides a natural way to combine different molecular and non-molecular data types and to incorporate biological and functional annotations of genomic loci directly in model building. We propose positional, gene, function, and hypothesis sparsity scores, illustrate their use through an example, and suggest several applications of ESL. The ESL framework has the potential to drive the development of a new class of computational methods that complement traditional approaches in evolutionary genomics. ESL's fast computation times and small memory footprint will also help democratize big data analytics and improve scientific rigor in phylogenomics.
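LASSO's role in ESL, driving the weights of uninformative loci exactly to zero, can be sketched with a tiny coordinate-descent implementation. This is hypothetical illustrative code, not the authors' software; the columns of X merely stand in for genomic loci.

```python
def soft_threshold(x, t):
    """Scalar soft-thresholding, the proximal operator of the L1 penalty."""
    if x > t:
        return x - t
    if x < -t:
        return x + t
    return 0.0

def lasso_cd(X, y, lam, n_iter=200):
    """Toy coordinate-descent LASSO for (1/2n)*||y - Xw||^2 + lam*||w||_1.
    Features with little explanatory power get weights of exactly zero --
    the mechanism by which ESL selects a small set of genomic loci."""
    n, p = len(X), len(X[0])
    w = [0.0] * p
    for _ in range(n_iter):
        for j in range(p):
            # residual with feature j's contribution removed
            r = [y[i] - sum(X[i][k] * w[k] for k in range(p) if k != j)
                 for i in range(n)]
            rho = sum(X[i][j] * r[i] for i in range(n)) / n
            z = sum(X[i][j] ** 2 for i in range(n)) / n
            w[j] = soft_threshold(rho, lam) / z if z > 0 else 0.0
    return w
```

On data where only the first column predicts y, the second weight lands at exactly zero, which is what makes the selected-locus lists interpretable.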


Sensors · 2021 · Vol 21 (12) · pp. 4116
Author(s): Xiaozhen Ren, Yuying Jiang

Terahertz time-domain spectroscopy imaging systems suffer from long image acquisition times and massive data processing, while simply reducing the sampling rate degrades reconstruction quality. To solve this issue, a novel terahertz imaging model, the dual sparsity constraints terahertz image reconstruction model (DSC-THz), is proposed in this paper. DSC-THz fuses sparsity constraints on the terahertz image in both the wavelet and gradient domains into the reconstruction model. Unlike the conventional wavelet transform, we introduce a non-linear exponentiation transform into the shift-invariant wavelet coefficients, which amplifies the significant coefficients and suppresses the small ones. Simultaneously, the sparsity of the terahertz image in the gradient domain is used to enhance the sparsity of the reconstruction, which has an edge-preserving property. The split Bregman iteration scheme is used to tackle the optimization problem: by separation of variables, the problem is decomposed into subproblems that are solved individually. Compared with conventional single-sparsity-constraint terahertz image reconstruction models, experiments verify that the proposed approach achieves higher reconstruction quality at low sampling rates.
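The abstract does not give the exact form of the non-linear exponentiation transform, but one plausible sketch of the amplify-large/suppress-small behaviour is a signed power law on magnitude-normalised coefficients. The function below is illustrative only; the paper's transform may differ.

```python
def exponentiation_transform(coeffs, p=2.0):
    """One plausible non-linear exponentiation transform for wavelet
    coefficients: normalise by the largest magnitude, raise to a power
    p > 1 (which shrinks small coefficients far more than large ones),
    then restore sign and scale.  Illustrative sketch, not the paper's
    exact formula."""
    m = max(abs(c) for c in coeffs)
    if m == 0:
        return list(coeffs)
    sign = lambda c: (c > 0) - (c < 0)
    return [sign(c) * m * (abs(c) / m) ** p for c in coeffs]
```

With p = 2 a coefficient at 10% of the maximum drops to 1% of it, sharpening the gap between significant structure and noise before thresholding.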


2021
Author(s): Chunyan Chu, Shengying Liu, Zhentao Liu, Chenyu Hu, Yuejin Zhao, et al.

PLoS ONE · 2021 · Vol 16 (4) · pp. e0249624
Author(s): C. B. Scott, Eric Mjolsness

We define a new family of similarity and distance measures on graphs and explore their theoretical properties in comparison to conventional distance metrics. These measures are defined by the solution(s) to an optimization problem that attempts to find a map minimizing the discrepancy between two graph Laplacian exponential matrices under norm-preserving and sparsity constraints. Variants of the distance metric are introduced that consider such optimized maps under sparsity constraints as well as fixed time-scaling between the two Laplacians. The objective function of this optimization is multimodal and has discontinuous slope, and is hence difficult for univariate optimizers to solve. We demonstrate a novel procedure for efficiently calculating these optima for two of our distance measure variants. We present numerical experiments demonstrating that (a) upper bounds of our distance metrics can be used to distinguish between lineages of related graphs; (b) our procedure finds the required optima faster, by as much as a factor of 10³; and (c) the upper bounds satisfy the triangle inequality exactly under some assumptions and approximately under others. We also derive an upper bound for the distance between two graph products in terms of the distance between the two pairs of factors. Additionally, we present several possible applications, including the construction of infinite "graph limits" by means of Cauchy sequences of graphs related to one another by our distance measure.
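The simplest member of this family, with the map fixed to the identity (the kind of trivial choice that yields an upper bound on the optimized distance), is just the Frobenius distance between the two heat kernels exp(-tL). A small self-contained sketch, with illustrative function names; the paper additionally optimizes over norm-preserving, sparse maps:

```python
import math

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def expm(A, terms=30):
    """Matrix exponential via truncated Taylor series -- adequate for the
    tiny, well-scaled Laplacians used in this sketch."""
    n = len(A)
    result = [[float(i == j) for j in range(n)] for i in range(n)]
    term = [[float(i == j) for j in range(n)] for i in range(n)]
    for k in range(1, terms):
        term = [[v / k for v in row] for row in mat_mul(term, A)]
        result = [[result[i][j] + term[i][j] for j in range(n)]
                  for i in range(n)]
    return result

def laplacian(adj):
    n = len(adj)
    return [[sum(adj[i]) if i == j else -adj[i][j] for j in range(n)]
            for i in range(n)]

def heat_kernel_distance(adj1, adj2, t=1.0):
    """Frobenius distance between the heat kernels exp(-t*L) of two graphs,
    i.e. the identity-map upper bound on the optimized distance."""
    n = len(adj1)
    E1 = expm([[-t * v for v in row] for row in laplacian(adj1)])
    E2 = expm([[-t * v for v in row] for row in laplacian(adj2)])
    return math.sqrt(sum((E1[i][j] - E2[i][j]) ** 2
                         for i in range(n) for j in range(n)))
```

The distance between a graph and itself is zero, while structurally different graphs on the same vertex set (a path versus a triangle, say) come out clearly separated.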


Author(s): Viraj Shah, Chinmay Hegde

We consider the problem of reconstructing a signal from under-determined modulo observations (or measurements). This observation model is inspired by a relatively new imaging mechanism called modulo imaging, which can be used to extend the dynamic range of imaging systems; variations of this model have also been studied under the category of phase unwrapping. Signal reconstruction in the under-determined regime with modulo observations is a challenging ill-posed problem, and existing reconstruction methods cannot be used directly. In this paper, we propose a novel approach to solving the signal recovery problem under sparsity constraints for the special case of modulo folding limited to two periods. We show that, given a sufficient number of measurements, our algorithm perfectly recovers the underlying signal. We also provide experiments validating our approach on toy signal and image data and demonstrate its promising performance.
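The forward model behind these modulo observations is simple to state: each linear measurement is folded back into [0, R) and the integer wrap count is lost. A minimal sketch of the measurement operator (names are illustrative; recovering x, i.e. inferring the lost wraps jointly with the sparse signal, is the hard part the paper addresses):

```python
def modulo_measure(A, x, R=1.0):
    """Modulo-imaging forward model: each linear measurement <a_i, x> is
    folded into [0, R), discarding how many times it wrapped.  A is a
    list of measurement rows, x the signal vector."""
    return [sum(a * v for a, v in zip(row, x)) % R for row in A]
```

Note that the paper's two-period special case corresponds to measurements that wrap at most once, which is what makes exact recovery tractable under sparsity constraints.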


2021 · Vol 18 (2) · pp. 304-316
Author(s): Di Wu, Yanghua Wang, Jingjie Cao, Nuno V da Silva, Gang Yao

Least-squares reverse-time migration (RTM) works with an inverse operation, rather than the adjoint operation used in conventional RTM, and thus produces an image with higher resolution and more balanced amplitude than the conventional RTM image. However, least-squares RTM introduces two side effects: sidelobes around reflectors and high-wavenumber migration artifacts. These side effects are caused mainly by the limited bandwidth of the seismic data, the limited coverage of the receiver arrays, and the inaccuracy of the modeling kernel. To mitigate these side effects and further boost resolution, we employed two sparsity constraints in the least-squares inversion, namely the Cauchy and L1-norm constraints. For the Cauchy-constrained least-squares RTM, we used a preconditioned nonlinear conjugate-gradient method; for the L1-norm constrained least-squares RTM, we modified the iterative soft thresholding method. With these solution methods, the Cauchy-constrained least-squares RTM converged faster than the L1-norm constrained one. Application examples with synthetic data and laboratory modeling data demonstrate that the constrained least-squares RTM methods can mitigate the side effects and improve image resolution.
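The L1-constrained branch rests on iterative soft thresholding. A generic sketch on a small dense matrix follows; `ista` and `soft` are illustrative names, the matrix G merely stands in for the migration/demigration operators (never formed explicitly in practice), and the paper's modified scheme differs in its preconditioning.

```python
def soft(x, t):
    """Soft-thresholding, the proximal operator of the L1 norm."""
    return x - t if x > t else x + t if x < -t else 0.0

def ista(G, d, lam, step=1.0, n_iter=200):
    """Iterative soft thresholding for min_m 0.5*||G m - d||^2 + lam*||m||_1.
    Each iteration takes a gradient step on the data misfit, then shrinks
    the model, promoting a sparse, sidelobe-suppressed image."""
    n, p = len(G), len(G[0])
    m = [0.0] * p
    for _ in range(n_iter):
        r = [sum(G[i][j] * m[j] for j in range(p)) - d[i] for i in range(n)]
        grad = [sum(G[i][j] * r[i] for i in range(n)) for j in range(p)]
        m = [soft(mj - step * g, step * lam) for mj, g in zip(m, grad)]
    return m
```

With G as the identity this reduces to plain soft-thresholding of the data, which makes the shrinkage behaviour (small entries driven exactly to zero, large entries biased toward zero by lam) easy to inspect.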

