Dictionary learning enhancement framework: Learning a non-linear mapping model to enhance discriminative dictionary learning methods

2019 ◽  
Vol 357 ◽  
pp. 135-150 ◽  
Author(s):  
Arash Abdi ◽  
Mohammad Rahmati ◽  
Mohammad M. Ebadzadeh

2018 ◽
Vol 12 (4) ◽  
pp. 241-259
Author(s):  
Avik Chakraborti ◽  
Nilanjan Datta ◽  
Mridul Nandi

Abstract: A block is an n-bit string, and a (possibly keyed) block-function is a non-linear mapping that maps one block to another, e.g., a block cipher. In this paper, we consider various symmetric-key primitives with $\ell$-block inputs and raise the following question: what is the minimum number of block-function invocations required for a mode to be secure? We begin with encryption modes that generate $\ell'$ output blocks and show that at least $\ell + \ell' - 1$ block-function invocations are necessary to achieve PRF security. In the presence of a nonce, the requirement reduces to $\ell'$ block-functions only. If $\ell = \ell'$, then to achieve SPRP security the mode requires at least $2\ell$ block-function invocations. We next consider length-preserving online encryption modes that process $r$-block chunks and show that, to achieve online PRP security, each chunk needs at least $2r - 1$ block-functions and at least $2r\ell - 1$ block-functions are needed overall for $\ell$ chunks. Moreover, we show that online SPRP security can be achieved if each chunk contains at least $2r$ non-linear block-functions. We next analyze affine MAC modes and show that an integrity-secure affine MAC mode requires at least $\ell$ block-function invocations to process an $\ell$-block message. Finally, we consider affine-mode authenticated encryption and show that, to achieve INT-RUP security or integrity security under a nonce-misuse scenario, either (i) the number of non-linear block-functions required to generate the ciphertext must exceed $\ell$, or (ii) the number of extra non-linear block-functions required to generate the tag must depend on $\ell$.
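For quick reference, the lower bounds stated in this abstract can be tabulated as follows; this is only a restatement of the abstract's claims in its own notation, not additional results.

\begin{tabular}{ll}
\hline
Mode and security goal & Block-function invocations \\
\hline
Encryption ($\ell$ blocks in, $\ell'$ blocks out), PRF & at least $\ell + \ell' - 1$ \\
Nonce-based encryption, PRF & at least $\ell'$ \\
Length-preserving encryption ($\ell = \ell'$), SPRP & at least $2\ell$ \\
Online encryption, online PRP & at least $2r - 1$ per $r$-block chunk; $2r\ell - 1$ for $\ell$ chunks \\
Online encryption, online SPRP & at least $2r$ per chunk (sufficient) \\
Affine MAC, integrity & at least $\ell$ for an $\ell$-block message \\
\hline
\end{tabular}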


Author(s):  
Diana Mateus ◽  
Christian Wachinger ◽  
Selen Atasoy ◽  
Loren Schwarz ◽  
Nassir Navab

Computer-aided diagnosis is often confronted with processing and analyzing high-dimensional data. One way to deal with such data is dimensionality reduction. This chapter focuses on manifold learning methods for creating low-dimensional data representations adapted to a given application. Starting from pairwise non-linear relations between neighboring data points, manifold learning algorithms first approximate the low-dimensional manifold on which the data lie with a graph; they then find a non-linear map that embeds this graph into a low-dimensional space. Since the explicit pairwise relations and the neighborhood system can be designed according to the application, manifold learning methods are very flexible and allow easy incorporation of domain knowledge. The authors describe different assumptions and design elements that are crucial to building successful low-dimensional data representations with manifold learning for a variety of applications. In particular, they discuss examples in visualization, clustering, classification, registration, and human-motion modeling.
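As a concrete illustration of the graph-then-embed recipe described in this abstract, below is a minimal Laplacian-eigenmaps-style sketch. The choice of this particular method, the heat-kernel neighborhood weights, and the parameters n_neighbors, sigma, and n_components are illustrative assumptions made here, not choices taken from the chapter.

import numpy as np
from scipy.linalg import eigh
from scipy.spatial.distance import cdist

def laplacian_eigenmaps(X, n_neighbors=10, sigma=1.0, n_components=2):
    """Embed rows of X (n_samples, n_features) into n_components dimensions."""
    # Step 1: approximate the manifold with a graph built from pairwise relations.
    D = cdist(X, X)                                   # pairwise Euclidean distances
    W = np.exp(-D ** 2 / (2.0 * sigma ** 2))          # heat-kernel affinities
    far = np.argsort(D, axis=1)[:, n_neighbors + 1:]  # all but self + k nearest
    for i in range(X.shape[0]):
        W[i, far[i]] = 0.0                            # keep only k-nearest-neighbor edges
    W = np.maximum(W, W.T)                            # symmetrize the graph
    np.fill_diagonal(W, 0.0)                          # no self-loops
    # Step 2: embed the graph via the generalized eigenproblem L y = lambda Deg y.
    deg = W.sum(axis=1)
    L = np.diag(deg) - W                              # unnormalized graph Laplacian
    _, vecs = eigh(L, np.diag(deg))
    # Drop the trivial constant eigenvector; the next ones give the coordinates.
    return vecs[:, 1:n_components + 1]

# usage: Y = laplacian_eigenmaps(X) gives 2-D coordinates, e.g., for visualization or clustering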


2019 ◽  
Vol 11 (7) ◽  
pp. 769 ◽  
Author(s):  
Huiping Lin ◽  
Hang Chen ◽  
Hongmiao Wang ◽  
Junjun Yin ◽  
Jian Yang

Ship detection with polarimetric synthetic aperture radar (PolSAR) has received increasing attention because of its wide use in maritime applications. However, extracting discriminative features for ship detection remains a challenging problem. In this paper, we propose a novel ship detection method for PolSAR images via task-driven discriminative dictionary learning (TDDDL). We assume that ship and clutter information are sparsely coded under two separate dictionaries. Contextual information is incorporated by imposing superpixel-level joint sparsity constraints. To amplify the discrimination between ship and clutter, we impose incoherence constraints between the two sub-dictionaries in the feature-coding objective. The discriminative dictionary is trained jointly with a linear classifier in a task-driven dictionary learning (TDDL) framework. Based on the learnt dictionary and classifier, we extract discriminative features by sparse coding and obtain robust detection results through binary classification. Unlike previous methods, our ship detection cue is obtained through active learning strategies rather than artificially designed rules, and it is thus more adaptive, effective, and robust. Experiments performed on synthetic images and two RADARSAT-2 images demonstrate that our method outperforms the comparative methods. In addition, the proposed method yields better shape-preserving ability and lower computational cost.
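The following minimal sketch illustrates only the underlying two-dictionary assumption, i.e., coding patches under separate ship and clutter dictionaries and comparing reconstruction errors. It is not the TDDDL method of the paper: it omits the superpixel-level joint sparsity, the incoherence constraint, and the joint training with a linear classifier, and all function names and parameter values are assumptions made for illustration.

import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning, sparse_encode

def train_dictionary(patches, n_atoms=64, alpha=1.0):
    """Learn one sub-dictionary from an (n_samples, n_features) matrix of patches."""
    dl = MiniBatchDictionaryLearning(n_components=n_atoms, alpha=alpha, random_state=0)
    dl.fit(patches)
    return dl.components_                                  # (n_atoms, n_features)

def detect_ships(patches, D_ship, D_clutter, alpha=1.0):
    """Label each patch as ship (1) or clutter (0) by sparse-coding residual."""
    def residual(D):
        codes = sparse_encode(patches, D, alpha=alpha)      # lasso sparse coding
        return np.linalg.norm(patches - codes @ D, axis=1)  # reconstruction error
    return (residual(D_ship) < residual(D_clutter)).astype(int)

# usage, with feature vectors extracted per pixel or superpixel from PolSAR data:
# D_ship    = train_dictionary(ship_training_patches)
# D_clutter = train_dictionary(clutter_training_patches)
# labels    = detect_ships(test_patches, D_ship, D_clutter)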


The Analyst ◽  
1994 ◽  
Vol 119 (5) ◽  
pp. 971 ◽  
Author(s):  
Boris Treiger ◽  
Igor Bondarenko ◽  
Piet Van Espen ◽  
René Van Grieken ◽  
Fred Adams
