Representation and coding of signal geometry

2017 · Vol. 6 (4) · pp. 349-388
Author(s): Petros T. Boufounos, Shantanu Rane, Hassan Mansour

Abstract: Approaches to signal representation and coding theory have traditionally focused on how best to represent signals using parsimonious representations that incur the lowest possible distortion. Classical examples include linear and nonlinear approximations, sparse representations, and rate-distortion theory. Very often, however, the goal of processing is to extract specific information from the signal, and the distortion should be measured on the extracted information. The corresponding representation should, therefore, represent that information as parsimoniously as possible, without necessarily accurately representing the signal itself. In this article, we examine the problem of encoding signals such that sufficient information is preserved about their pairwise distances and their inner products. To that end, we consider randomized embeddings as an encoding mechanism and provide a framework to analyze their performance. We also demonstrate that it is possible to design the embedding such that it represents different ranges of distances with different precision. These embeddings also allow the computation of kernel inner products with control over their inner-product-preserving properties. Our results provide a broad framework to design and analyze embeddings and generalize existing results in this area, such as random Fourier kernels and universal embeddings.
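To make the encoding idea concrete, here is a minimal sketch of a Gaussian random projection that approximately preserves pairwise distances, together with random Fourier features whose inner products approximate a Gaussian kernel. The dimensions, bandwidth, and test points are illustrative assumptions, not the paper's construction.

```python
# A minimal sketch (assumed values, not the authors' framework): a Gaussian random
# projection as a distance-preserving embedding, and random Fourier features (RFF)
# approximating a Gaussian (RBF) kernel inner product.
import numpy as np

rng = np.random.default_rng(0)
d, m = 512, 64                       # ambient dimension, embedding dimension (assumed)

# Johnson-Lindenstrauss-style embedding: pairwise distances are approximately
# preserved, with distortion shrinking as m grows.
A = rng.normal(size=(m, d)) / np.sqrt(m)
embed = lambda x: A @ x

# Random Fourier features: <z(x), z(y)> approximates exp(-||x - y||^2 / (2 s^2)).
s = 1.0                                           # kernel bandwidth (assumed)
W = rng.normal(scale=1.0 / s, size=(m, d))
b = rng.uniform(0.0, 2.0 * np.pi, size=m)
rff = lambda x: np.sqrt(2.0 / m) * np.cos(W @ x + b)

x = rng.normal(size=d) / np.sqrt(d)
y = x + 0.3 * rng.normal(size=d) / np.sqrt(d)     # a nearby point (assumed)

# Embedded distance tracks the original distance.
print(np.linalg.norm(x - y), np.linalg.norm(embed(x) - embed(y)))
# RFF inner product tracks the exact Gaussian kernel value.
print(np.exp(-np.linalg.norm(x - y) ** 2 / (2 * s ** 2)), rff(x) @ rff(y))
```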

2015 · Vol. 13 (1)
Author(s): Augustyn Markiewicz, Simo Puntanen

Abstract: For an n × m real matrix A, the matrix A⊥ is defined as a matrix whose columns span the orthocomplement of the column space of A, where orthogonality is defined with respect to the standard inner product ⟨x, y⟩ = x'y. In this paper we collect together various properties of the ⊥ operation and its applications in linear statistical models. Results covering more general inner products are also considered. We also provide a rather extensive list of references.
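As an illustration of the ⊥ operation (an assumed numerical sketch, not taken from the paper), one orthonormal choice of A⊥ can be computed from the full singular value decomposition of A: the left singular vectors beyond the numerical rank span the orthocomplement of the column space.

```python
# A minimal sketch (assumed, not from the paper): compute A_perp, a matrix whose
# columns form an orthonormal basis of the orthocomplement of col(A), via the SVD.
import numpy as np

def orthocomplement(A, tol=1e-10):
    U, s, _ = np.linalg.svd(A, full_matrices=True)
    r = int(np.sum(s > tol * (s[0] if s.size else 1.0)))   # numerical rank of A
    return U[:, r:]                                         # n x (n - r), orthonormal columns

A = np.array([[1.0, 2.0],
              [0.0, 1.0],
              [1.0, 3.0]])           # 3 x 2 example with full column rank (illustrative)
A_perp = orthocomplement(A)

print(np.allclose(A.T @ A_perp, 0.0))                                # A' A_perp = 0
print(np.allclose(A_perp.T @ A_perp, np.eye(A_perp.shape[1])))       # orthonormal columns
```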


2021 · pp. 30-35
Author(s): Vadim Gribunin, Andrey Timonov
Purpose of the article: to optimize the choice of information security tools in a multi-level automated system, taking into account higher levels, the quality indicators of the tools, and the overall financial budget, and to demonstrate how these problems map onto known problems from communication theory. Research methods: optimal selection of information security tools based on risk analysis and the Lagrange multiplier method; optimal budget allocation based on the water-filling algorithm; optimal placement of information security tools in a multi-level automated system based on bisection search. Obtained result: the article establishes analogies between several problems of communication theory and the optimal choice of information security tools. The well-known problem of optimally selecting information security tools is solved using rate-distortion theory, and the problem of optimally allocating the purchase budget is solved by analogy with the problem of distributing transmitter power. For the first time, the problem of optimally placing information security tools in a multi-level automated system is solved by analogy with distributing a total bit budget among quantizers.
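For reference, here is a minimal sketch of the classical water-filling allocation by bisection that the abstract draws its analogy from, in its communication-theory form (distributing a total transmitter power over parallel channels); the noise levels and budget are illustrative assumptions, not the authors' security model, where the budget plays the role of total power.

```python
# A minimal water-filling sketch via bisection (illustrative values): allocate a
# total budget P over channels with noise levels n_i so that p_i = max(0, mu - n_i)
# and sum(p_i) = P, where mu is the "water level".
import numpy as np

def waterfill(noise, total, iters=100):
    noise = np.asarray(noise, dtype=float)
    lo, hi = noise.min(), noise.max() + total        # mu is bracketed in [lo, hi]
    for _ in range(iters):                           # bisection on the water level
        mu = 0.5 * (lo + hi)
        if np.maximum(0.0, mu - noise).sum() > total:
            hi = mu
        else:
            lo = mu
    return np.maximum(0.0, 0.5 * (lo + hi) - noise)

noise = [0.5, 1.0, 2.0, 4.0]        # per-channel noise levels (assumed)
p = waterfill(noise, total=3.0)     # total power / budget (assumed)
print(p, p.sum())                   # allocations favour the least noisy channels
```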


1997 · Vol. 20 (2) · pp. 219-224
Author(s): Shih-Sen Chang, Yu-Qing Chen, Byung Soo Lee

The purpose of this paper is to introduce the concept of semi-inner products in locally convex spaces and to establish some of their basic properties.
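For context, the classical (Lumer–Giles) semi-inner product on a vector space, which the locally convex setting of the paper generalizes, can be stated as follows; this is the standard normed-space definition and not necessarily the authors' exact formulation.

```latex
% Classical (Lumer-Giles) semi-inner product on a vector space X over K:
% a map [.,.] : X x X -> K satisfying, for all x, y, z in X and lambda in K,
\begin{align*}
  &[x + y, z] = [x, z] + [y, z], \qquad [\lambda x, y] = \lambda [x, y],
    && \text{(linearity in the first argument)}\\
  &[x, x] > 0 \quad \text{for } x \neq 0,
    && \text{(positivity)}\\
  &\lvert [x, y] \rvert^{2} \le [x, x]\,[y, y].
    && \text{(Cauchy--Schwarz-type inequality)}
\end{align*}
% Then \|x\| := [x, x]^{1/2} defines a norm on X, and every normed space admits
% at least one semi-inner product compatible with its norm.
```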


2020 · Vol. 2 (1)
Author(s): Kunal Kathuria, Aakrosh Ratan, Michael McConnell, Stefan Bekiranov

Abstract: Motivated by the problem of classifying individuals with a disease versus controls using a functional genomic attribute as input, we present relatively efficient, general-purpose inner product–based kernel classifiers that classify a test sample as normal or diseased. We encode each training sample as a string of 1s (presence) and 0s (absence) representing the attribute's existence across ordered physical blocks of the subdivided genome. Binary-valued features allow highly efficient data encoding in the computational basis for classifiers relying on binary operations. Given that a natural distance between binary strings is the Hamming distance, which shares properties with bit-string inner products, our two classifiers apply different inner product measures for classification. The active inner product (AIP) is a direct dot product–based classifier, whereas the symmetric inner product (SIP) classifies by scoring correspondingly matching genomic attributes. SIP is a strongly Hamming distance–based classifier generally applicable to binary attribute-matching problems, whereas AIP has general applications as a simple dot product–based classifier. The classifiers implement an inner product between test and training vectors of dimension $N = 2^n$ using n Fredkin gates, while the training sets are respectively entangled with the class-label qubit, without the use of an ancilla. Moreover, each training class can be composed of an arbitrary number m of samples that can be classically summed into one input string, effectively executing all test–train inner products simultaneously. Thus, our circuits require the same number of qubits for any number of training samples and are $O(\log N)$ in gate complexity after the states are prepared. Our classifiers were implemented on ibmqx2 (IBM-Q-team 2019b) and ibmq_16_melbourne (IBM-Q-team 2019a). The latter allowed encoding of 64 training features across the genome.
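As an assumed classical illustration of the two scores (not the authors' quantum circuits), the AIP and SIP of binary attribute strings can be computed directly; classification assigns the test sample to the class with the higher score.

```python
# A minimal classical sketch (assumed interpretation, not the quantum implementation):
# the two inner-product scores described in the abstract, on binary attribute strings.
import numpy as np

def active_inner_product(test, train):
    """AIP: plain dot product, counts positions where both strings have a 1."""
    return int(np.dot(test, train))

def symmetric_inner_product(test, train):
    """SIP: counts matching positions (both 1 or both 0), i.e. N minus the Hamming distance."""
    return int(np.sum(np.asarray(test) == np.asarray(train)))

# Toy presence/absence strings over N = 8 genomic blocks (illustrative).
test    = np.array([1, 0, 1, 1, 0, 0, 1, 0])
disease = np.array([1, 0, 1, 0, 0, 1, 1, 0])
normal  = np.array([0, 1, 0, 1, 1, 0, 0, 1])

for label, train in [("disease", disease), ("normal", normal)]:
    print(label, active_inner_product(test, train), symmetric_inner_product(test, train))
```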

