Sparse Approximations
Recently Published Documents

TOTAL DOCUMENTS: 70 (five years: 3)
H-INDEX: 14 (five years: 0)

2021 ◽ Vol 11 ◽ Author(s): Sokratis Makrogiannis, Keni Zheng, Chelsea Harris

The most common form of cancer among women in both developed and developing countries is breast cancer. Early detection and diagnosis of this disease is significant because it may reduce the number of deaths caused by breast cancer and improve the quality of life of those affected. Computer-aided detection (CADe) and computer-aided diagnosis (CADx) methods have shown promise in recent years for aiding human experts in image reading and analysis and for improving the accuracy and reproducibility of pathology results. One significant application of CADe and CADx is breast cancer screening using mammograms. In image processing and machine learning research, sparse analysis methods have produced relevant results for representing and recognizing imaging patterns. However, applying sparse analysis techniques to the biomedical field is challenging, as the objects of interest may be obscured by contrast limitations or background tissues, and their appearance may change because of anatomical variability. We introduce methods for label-specific and label-consistent dictionary learning to improve the separation of benign from malignant breast masses in mammograms. We integrated these approaches into our Spatially Localized Ensemble Sparse Analysis (SLESA) methodology. We performed 10- and 30-fold cross-validation (CV) experiments on multiple mammography datasets to measure the classification performance of our methodology and compared it to deep learning models and conventional sparse representation. Results from these experiments show the potential of this methodology for separating malignant from benign masses as part of a breast cancer screening workflow.
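
A minimal sketch of the general idea of dictionary-based sparse classification (in the style of sparse-representation classification, not the authors' SLESA pipeline): code a patch against a per-class dictionary and assign the label whose dictionary reconstructs it best. Patch size, sparsity level, and the random stand-in data are assumptions for illustration.

```python
# Hedged sketch of sparse-representation classification with per-class
# dictionaries. In practice the dictionaries would be learned (e.g. by
# label-specific dictionary learning) from benign / malignant training patches.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
n_pixels, n_atoms, sparsity = 256, 64, 10          # e.g. 16x16 patches (assumed)

# Stand-in dictionaries with unit-norm atoms (columns).
dictionaries = {
    "benign":    rng.standard_normal((n_pixels, n_atoms)),
    "malignant": rng.standard_normal((n_pixels, n_atoms)),
}
for D in dictionaries.values():
    D /= np.linalg.norm(D, axis=0)

def classify(patch: np.ndarray) -> str:
    """Return the label whose dictionary gives the smallest reconstruction residual."""
    residuals = {}
    for label, D in dictionaries.items():
        omp = OrthogonalMatchingPursuit(n_nonzero_coefs=sparsity, fit_intercept=False)
        omp.fit(D, patch)                           # sparse code: patch ~ D @ coef
        residuals[label] = np.linalg.norm(patch - D @ omp.coef_)
    return min(residuals, key=residuals.get)

print(classify(rng.standard_normal(n_pixels)))
```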


Author(s): Héctor Andrade-Loarca, Gitta Kutyniok, Ozan Öktem

Semantic edge detection has recently gained a lot of attention as an image-processing task, mainly because of its wide range of real-world applications. This is based on the fact that edges in images contain most of the semantic information. Semantic edge detection involves two tasks, namely pure edge detection and edge classification. These two tasks are fundamentally distinct in terms of the level of abstraction each requires. This fact is known as the distracted supervision paradox and limits the possible performance of a supervised model in semantic edge detection. In this work, we present a novel hybrid method that combines the model-based concept of shearlets, which provide provably optimal sparse approximations of a model class of images, with the data-driven method of a suitably designed convolutional neural network. We show that it avoids the distracted supervision paradox and achieves high performance in semantic edge detection. In addition, our approach requires significantly fewer parameters than a pure data-driven approach. Finally, we present several applications such as tomographic reconstruction and show that our approach significantly outperforms former methods, thereby also indicating the value of such hybrid methods for biomedical imaging.
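
A rough sketch of the hybrid pattern described above, under heavy simplification: a fixed, model-based multiscale directional transform handles edge detection, and a small learned CNN head handles only edge classification. Real shearlet filters are replaced here by simple derivative filters, and the architecture, scales, and class count are assumptions, not the authors' design.

```python
# Hedged sketch: model-based (non-learned) edge features + small learned classifier.
import torch
import torch.nn as nn
import torch.nn.functional as F

def directional_features(img: torch.Tensor, scales=(1, 2, 4)) -> torch.Tensor:
    """Fixed multiscale gradient magnitudes, shape (1, len(scales), H, W)."""
    feats = []
    kx = torch.tensor([[[[-1., 0., 1.]]]])          # horizontal derivative filter
    ky = kx.transpose(2, 3)                         # vertical derivative filter
    for s in scales:
        blurred = F.avg_pool2d(img, kernel_size=s, stride=1, padding=s // 2)
        blurred = blurred[..., : img.shape[-2], : img.shape[-1]]
        gx = F.conv2d(blurred, kx, padding=(0, 1))
        gy = F.conv2d(blurred, ky, padding=(1, 0))
        feats.append(torch.sqrt(gx ** 2 + gy ** 2))
    return torch.cat(feats, dim=1)

class EdgeClassifierHead(nn.Module):
    """Tiny CNN that assigns per-pixel semantic labels from the fixed features."""
    def __init__(self, in_ch: int, n_classes: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, n_classes, 1),            # per-pixel class logits
        )
    def forward(self, x):
        return self.net(x)

img = torch.rand(1, 1, 64, 64)
feats = directional_features(img)                   # model-based stage
logits = EdgeClassifierHead(feats.shape[1], n_classes=5)(feats)  # learned stage
print(logits.shape)                                 # (1, 5, 64, 64)
```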


Author(s): Kersten Schuster, Philip Trettner, Leif Kobbelt

We present a numerical optimization method to find highly efficient (sparse) approximations for convolutional image filters. Using a modified parallel tempering approach, we solve a constrained optimization that maximizes approximation quality while strictly staying within a user-prescribed performance budget. The results are multi-pass filters where each pass computes a weighted sum of bilinearly interpolated sparse image samples, exploiting hardware acceleration on the GPU. We systematically decompose the target filter into a series of sparse convolutions, trying to find good trade-offs between approximation quality and performance. Since our sparse filters are linear and translation-invariant, they do not exhibit the aliasing and temporal coherence issues that often appear in filters working on image pyramids. We show several applications, ranging from simple Gaussian or box blurs to the emulation of sophisticated Bokeh effects with user-provided masks. Our filters achieve high performance as well as high quality, often providing significant speed-up at acceptable quality even for separable filters. The optimized filters can be baked into shaders and used as a drop-in replacement for filtering tasks in image processing or rendering pipelines.
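
To make the core idea concrete, here is a much-simplified stand-in for the paper's optimization: approximate a dense 2D kernel with a small number of sparse taps. A greedy residual-driven pick replaces the parallel-tempering search, and tap positions are integer (no bilinearly interpolated subpixel samples); kernel size, sigma, and tap count are illustrative.

```python
# Hedged sketch: sparse-tap approximation of a dense convolution kernel.
import numpy as np

def gaussian_kernel(size=15, sigma=3.0):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def sparse_approximation(target: np.ndarray, n_taps: int):
    """Greedily place taps at the largest residual entries of the target kernel."""
    taps, residual = [], target.copy()
    for _ in range(n_taps):
        taps.append(np.unravel_index(np.abs(residual).argmax(), target.shape))
        # With delta taps the best weight at each chosen position is simply the
        # target value there (the least-squares fit is trivial in this setting).
        weights = np.array([target[t] for t in taps])
        approx = np.zeros_like(target)
        for t, w in zip(taps, weights):
            approx[t] = w
        residual = target - approx
    return taps, weights, np.linalg.norm(residual) / np.linalg.norm(target)

taps, weights, rel_err = sparse_approximation(gaussian_kernel(), n_taps=12)
print(f"{len(taps)} taps, relative L2 error {rel_err:.3f}")
```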


2018 ◽ Vol 885 ◽ pp. 18-31 ◽ Author(s): Paul Gardner, Timothy J. Rogers, Charles Lord, Rob J. Barthorpe

Efficient surrogate modelling of computer models (herein defined as simulators) becomes increasingly important as more complex simulators and non-deterministic methods, such as Monte Carlo simulations, are utilised. This is especially true in large multidimensional design spaces. In order for these technologies to be feasible in an early design stage context, the surrogate model (or emulator) must create an accurate prediction of the simulator in the proposed design space. Gaussian Processes (GPs) are a powerful non-parametric Bayesian approach that can be used as emulators. The probabilistic framework means that predictive distributions are inferred, providing an understanding of the uncertainty introduced by replacing the simulator with an emulator, known as code uncertainty. An issue with GPs is that they have a computational complexity of O(N³) (where N is the number of data points), which can be reduced to O(NM²) by using various sparse approximations calculated from a subset of inducing points (where M is the number of inducing points). This paper explores the use of sparse Gaussian process emulators as a computationally efficient method for creating surrogate models of structural dynamics simulators. Discussions on the performance of these methods are presented along with comments regarding key applications to the early design stage.
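
A minimal sketch of one classical O(NM²) sparse GP approximation, the subset-of-regressors (SoR) predictor built on M inducing points; this is illustrative of the family of approximations mentioned above, not necessarily the exact scheme used in the paper, and the kernel, noise level, and stand-in "simulator" data are assumptions.

```python
# Hedged sketch: subset-of-regressors sparse GP regression with inducing points.
import numpy as np

def rbf(a, b, lengthscale=0.5, variance=1.0):
    d2 = (a[:, None] - b[None, :]) ** 2             # 1-D inputs for brevity
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

rng = np.random.default_rng(1)
N, M, noise = 2000, 30, 0.1
X = np.sort(rng.uniform(0, 10, N))                  # simulator inputs (stand-in)
y = np.sin(X) + noise * rng.standard_normal(N)      # simulator outputs (stand-in)
Z = np.linspace(0, 10, M)                           # M inducing points

K_uu = rbf(Z, Z) + 1e-8 * np.eye(M)                 # M x M
K_uf = rbf(Z, X)                                    # M x N
# Forming K_uf @ K_uf.T costs O(N M^2); the inverse is only M x M.
Sigma = np.linalg.inv(K_uu + K_uf @ K_uf.T / noise**2)

X_star = np.linspace(0, 10, 5)
mean = rbf(X_star, Z) @ Sigma @ K_uf @ y / noise**2            # predictive mean
var = np.einsum("ij,jk,ik->i", rbf(X_star, Z), Sigma, rbf(X_star, Z))  # SoR variance
print(np.round(mean, 3), np.round(var, 4))
```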


2017 ◽ Vol 141 ◽ pp. 96-108 ◽ Author(s): Vladimir Katkovnik, Mykola Ponomarenko, Karen Egiazarian

2017 ◽ Vol 45 (1) ◽ pp. 194-216 ◽ Author(s): Lassi Roininen, Sari Lasanen, Mikko Orispää, Simo Särkkä

2017 ◽ Vol 50 (1) ◽ pp. 14010-14015 ◽ Author(s): Christopher J. Quinn, Ali Pinar, Jing Gao, Lu Su

2017 ◽ Vol 15 (03) ◽ pp. 433-455 ◽ Author(s): Zheng-Chu Guo, Dao-Hong Xiang, Xin Guo, Ding-Xuan Zhou

Spectral algorithms form a general framework that unifies many regularization schemes in learning theory. In this paper, we propose and analyze a class of thresholded spectral algorithms that are designed based on empirical features. Soft thresholding is adopted to achieve sparse approximations. Our analysis shows that, without a sparsity assumption on the regression function, the output functions of thresholded spectral algorithms are represented by empirical features with satisfactory sparsity, and the convergence rates are comparable to those of the classical spectral algorithms in the literature.
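
A small sketch of the mechanism: expand the estimator in empirical kernel eigenfeatures, apply a spectral filter (Tikhonov is used here as one common choice), then soft-threshold the coefficients to obtain a sparse expansion. The kernel, regularization parameter, and threshold below are illustrative assumptions, not values from the paper.

```python
# Hedged sketch of a thresholded spectral algorithm on empirical features.
import numpy as np

def rbf(a, b, gamma=2.0):
    return np.exp(-gamma * (a[:, None] - b[None, :]) ** 2)

rng = np.random.default_rng(2)
n, lam, tau = 200, 1e-2, 2e-2                       # samples, regularization, threshold
X = rng.uniform(-1, 1, n)
y = np.sign(X) * X**2 + 0.05 * rng.standard_normal(n)

K = rbf(X, X)
eigvals, eigvecs = np.linalg.eigh(K / n)            # empirical eigenpairs
coeffs = (eigvecs.T @ y / n) / (eigvals + lam)      # Tikhonov-filtered coefficients
sparse_c = np.sign(coeffs) * np.maximum(np.abs(coeffs) - tau, 0.0)  # soft threshold

def predict(x_new):
    """Sparse expansion sum_k c_k psi_k(x), with psi_k(x) = sum_i v_{k,i} K(x, x_i)."""
    return rbf(x_new, X) @ eigvecs @ sparse_c

print("nonzero coefficients:", np.count_nonzero(sparse_c), "of", n)
print(np.round(predict(np.array([-0.5, 0.0, 0.5])), 3))
```

With tau = 0 this reduces to the classical (unthresholded) Tikhonov spectral algorithm; the thresholding only zeroes out small coefficients in the empirical-feature expansion.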

