A Priority-Based Weighted Inner Products Matching Coarsening Algorithm on Multilevel Hypergraph Partitioning

2013 ◽  
Vol 717 ◽  
pp. 466-474
Author(s):  
Yao Yuan Zeng ◽  
Wen Tao Zhao ◽  
Zheng Hua Wang

Multilevel hypergraph partitioning is a significant and extensively researched problem in combinatorial optimization. Nevertheless, the coarsening phase, the primary component of multilevel hypergraph partitioning, has not yet attracted sufficient attention, and the performance of existing coarsening algorithms remains unsatisfactory. In this paper, we present a new coarsening algorithm based on the multilevel framework that reduces the number of vertices more rapidly. The main contributions are a matching mechanism based on weighted inner products and two priority rules for ordering vertices. Finally, experimental results on most of the ISPD98 benchmark suite indicate the effectiveness of our coarsening algorithm compared with combinations of different sorting algorithms and with hMETIS.
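As a rough sketch of the matching idea described above (an illustration, not the authors' implementation), each vertex can be represented by its weighted hyperedge-incidence vector, so the weighted inner product of two vertices is the total weight of the hyperedges they share; vertices are then visited in a priority order and paired greedily. The data layout (`vertex_edges`, `edge_pins`) and the single `priority` hook standing in for the paper's two rules are assumptions of this sketch.

```python
# Illustrative coarsening sketch: match each vertex with the unmatched neighbour
# whose shared hyperedges carry the largest total weight.
def weighted_inner_product(v, u, vertex_edges, edge_weight):
    """Total weight of hyperedges incident to both v and u."""
    return sum(edge_weight[e] for e in vertex_edges[v] & vertex_edges[u])

def coarsen_one_level(vertices, vertex_edges, edge_pins, edge_weight, priority):
    """Greedy pairing; returns a map from fine vertex to coarse vertex id."""
    match, next_coarse = {}, 0
    for v in sorted(vertices, key=priority):            # visit order = priority rule
        if v in match:
            continue
        candidates = {u for e in vertex_edges[v] for u in edge_pins[e]
                      if u != v and u not in match}
        if candidates:
            best = max(candidates, key=lambda u: weighted_inner_product(
                v, u, vertex_edges, edge_weight))
            match[best] = next_coarse                    # contract the matched pair
        match[v] = next_coarse                           # isolated vertices map alone
        next_coarse += 1
    return match
```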

2013 ◽  
Vol 753-755 ◽  
pp. 2908-2911
Author(s):  
Yao Yuan Zeng ◽  
Wen Tao Zhao ◽  
Zheng Hua Wang

Multilevel hypergraph partitioning is a significant and extensively researched problem in combinatorial optimization. In this paper, we present a multilevel hypergraph partitioning algorithm based on a simulated annealing approach for global optimization. Experiments on a benchmark suite of several unstructured meshes show that, for 2-, 4-, 8-, 16- and 32-way partitioning, although more running time is required, the partitions produced by our algorithm are on average 14% and at most 22% better than those produced by the partitioning software hMETIS in terms of the SOED metric.
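A minimal sketch of the simulated-annealing refinement loop implied above (a simplification, not the paper's algorithm): worsening moves are accepted with a Boltzmann probability that shrinks as the temperature cools. The `cut_delta` callback, which would report the change in a cut metric such as SOED for a proposed move, is an assumed interface.

```python
import math, random

def anneal_partition(parts, cut_delta, vertices, k, t0=1.0, cooling=0.95, steps=1000):
    """Toy refinement: 'parts' maps vertex -> block in 0..k-1; 'cut_delta(v, b)'
    returns the change in the cut metric if v moved to block b."""
    t = t0
    for _ in range(steps):
        v = random.choice(vertices)
        target = random.randrange(k)
        if target == parts[v]:
            continue
        delta = cut_delta(v, target)
        # accept improving moves always, worsening moves with Boltzmann probability
        if delta <= 0 or random.random() < math.exp(-delta / t):
            parts[v] = target
        t *= cooling                                   # geometric cooling schedule
    return parts
```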


2015 ◽  
Vol 13 (1) ◽  
Author(s):  
Augustyn Markiewicz ◽  
Simo Puntanen

Abstract For an n × m real matrix A, the matrix A⊥ is defined as a matrix spanning the orthocomplement of the column space of A, when orthogonality is defined with respect to the standard inner product ⟨x, y⟩ = x′y. In this paper we collect together various properties of the ⊥ operation and its applications in linear statistical models. Results covering more general inner products are also considered. We also provide a rather extensive list of references.
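A small numerical illustration of the ⊥ operation (not taken from the paper): for the standard inner product, A⊥ can be taken as any matrix whose columns span the null space of A′, and for the inner product ⟨x, y⟩_V = x′Vy with V positive definite, orthogonality to the column space of A means A′Vy = 0, so a V-orthocomplement spans the null space of A′V.

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])            # 3 x 2, rank 2

A_perp = null_space(A.T)              # standard inner product: columns span N(A')
print(np.allclose(A.T @ A_perp, 0))   # True: C(A_perp) is orthogonal to C(A)

V = np.diag([1.0, 2.0, 3.0])          # a positive-definite weight matrix
A_perp_V = null_space(A.T @ V)        # V-inner-product orthocomplement
print(np.allclose(A.T @ V @ A_perp_V, 0))   # True: <a, y>_V = 0 for every column
```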


Author(s):  
Jun Zhou ◽  
Longfei Li ◽  
Ziqi Liu ◽  
Chaochao Chen

Recently, the Factorization Machine (FM) has become increasingly popular for recommendation systems due to its effectiveness in finding informative interactions between features. Usually, the weights for the interactions are learned as a low-rank weight matrix, formulated as an inner product of two low-rank matrices. This low-rank structure helps improve the generalization ability of the Factorization Machine. However, to choose the rank properly, the algorithm usually has to be run many times with different ranks, which is clearly inefficient for large-scale datasets. To alleviate this issue, we propose an Adaptive Boosting framework for Factorization Machines (AdaFM), which can adaptively search for a proper rank for each dataset without re-training. Instead of using a fixed rank for FM, the proposed algorithm gradually increases its rank according to its performance until the performance stops improving. Extensive experiments are conducted to validate the proposed method on multiple large-scale datasets. The experimental results demonstrate that the proposed method can be more effective than state-of-the-art Factorization Machines.
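The rank-growing idea can be pictured with a short loop (a schematic simplification, not AdaFM itself): keep fitting one more low-rank component while a held-out score improves, and stop once it does not. `train_component` and `evaluate` are placeholder hooks assumed for this sketch.

```python
def grow_rank(train_component, evaluate, max_rank=64, tol=1e-4):
    """Add rank-1 components until the validation score stops improving."""
    components, best_score = [], float("-inf")
    for rank in range(1, max_rank + 1):
        components.append(train_component(components))   # fit one more factor
        score = evaluate(components)                      # e.g. validation AUC
        if score <= best_score + tol:                     # performance stopped growing
            components.pop()
            break
        best_score = score
    return components, best_score
```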


1997 ◽  
Vol 20 (2) ◽  
pp. 219-224
Author(s):  
Shih-Sen Chang ◽  
Yu-Qing Chen ◽  
Byung Soo Lee

The purpose of this paper is to introduce the concept of semi-inner products in locally convex spaces and to give some basic properties.
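For context, the classical Lumer–Giles axioms on a normed space read as follows (recalled here as standard background; the paper's contribution is the extension of such structures to locally convex spaces).

```latex
% A semi-inner product [.,.] : X x X -> K in the sense of Lumer--Giles satisfies
\begin{align*}
  [x + y, z] &= [x, z] + [y, z], \qquad [\lambda x, y] = \lambda [x, y],\\
  [x, x]     &> 0 \quad \text{for } x \neq 0,\\
  |[x, y]|^2 &\le [x, x]\,[y, y],
\end{align*}
% so that \|x\| = [x, x]^{1/2} defines a norm on X.
```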


2020 ◽  
Vol 2 (1) ◽  
Author(s):  
Kunal Kathuria ◽  
Aakrosh Ratan ◽  
Michael McConnell ◽  
Stefan Bekiranov

Abstract Motivated by the problem of classifying individuals with a disease versus controls using a functional genomic attribute as input, we present relatively efficient, general-purpose, inner product–based kernel classifiers to classify a test sample as normal or diseased. We encode each training sample as a string of 1s (presence) and 0s (absence) representing the attribute's existence across ordered physical blocks of the subdivided genome. Having binary-valued features allows for highly efficient data encoding in the computational basis for classifiers relying on binary operations. Given that a natural distance between binary strings is the Hamming distance, which shares properties with bit-string inner products, our two classifiers apply different inner product measures for classification. The active inner product (AIP) is a direct dot product–based classifier, whereas the symmetric inner product (SIP) classifies by scoring correspondingly matching genomic attributes. SIP is a strongly Hamming distance–based classifier generally applicable to binary attribute-matching problems, whereas AIP has general applications as a simple dot product–based classifier. The classifiers implement an inner product between $N = 2^n$-dimensional test and train vectors using n Fredkin gates while the training sets are respectively entangled with the class-label qubit, without use of an ancilla. Moreover, each training class can be composed of an arbitrary number m of samples that can be classically summed into one input string to effectively execute all test–train inner products simultaneously. Thus, our circuits require the same number of qubits for any number of training samples and are $O(\log N)$ in gate complexity after the states are prepared. Our classifiers were implemented on ibmqx2 (IBM-Q-team 2019b) and ibmq_16_melbourne (IBM-Q-team 2019a). The latter allowed encoding of 64 training features across the genome.
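A purely classical analogue of the two scores (an illustration of the scoring logic only, not the quantum circuits): the active score counts positions where the test string and a class template are both 1, while the symmetric score also rewards matching 0s, so for a single training string it equals N minus the Hamming distance. Summing each class's training strings into one template vector mirrors the classical summation mentioned above; the toy class labels are assumptions of this sketch.

```python
import numpy as np

def active_inner_product(test, template):
    """Count of positions where the test string and the class template are both 'on'."""
    return int(test @ template)

def symmetric_inner_product(test, template, m):
    """Matches on 1s plus matches on 0s; with m = 1 this is N minus the Hamming distance."""
    return int(test @ template + (1 - test) @ (m - template))

# toy usage: two classes, templates are column sums of each class's binary training matrix
train = {"disease": np.array([[1, 1, 0, 1], [1, 0, 0, 1]]),
         "control": np.array([[0, 0, 1, 0], [0, 1, 1, 0]])}
templates = {c: X.sum(axis=0) for c, X in train.items()}
test = np.array([1, 0, 0, 1])
pred = max(templates, key=lambda c: symmetric_inner_product(test, templates[c],
                                                            len(train[c])))
print(pred)   # "disease" for this toy example
```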


Author(s):  
Katherine Jones-Smith

Dyson analysed the low-energy excitations of a ferromagnet using a Hamiltonian that was non-Hermitian with respect to the standard inner product. This allowed for a facile rendering of these excitations (known as spin waves) as weakly interacting bosonic quasi-particles. More than 50 years later, we have the full denouement of the non-Hermitian quantum mechanics formalism at our disposal when considering Dyson’s work, both technically and contextually. Here, we recast Dyson’s work on ferromagnets explicitly in terms of two inner products, with respect to which the Hamiltonian is always self-adjoint, if not manifestly ‘Hermitian’. Then we extend his scheme to doped anti-ferromagnets described by the t – J model, with hopes of shedding light on the physics of high-temperature superconductivity.
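The general mechanism behind "self-adjoint with respect to a different inner product" can be stated compactly (standard background, not specific to Dyson's ferromagnet or the t–J model): if a positive-definite operator η intertwines H and its adjoint, then H is self-adjoint in the η-weighted inner product.

```latex
\begin{align*}
  \langle x, y \rangle_\eta &= \langle x, \eta\, y \rangle, \qquad \eta > 0,\\
  H^\dagger \eta = \eta H
  &\;\;\Longrightarrow\;\;
  \langle H x, y \rangle_\eta = \langle x, H y \rangle_\eta .
\end{align*}
```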


2019 ◽  
Vol 17 (05) ◽  
pp. 1950043
Author(s):  
Panchi Li ◽  
Jiahui Guo ◽  
Bing Wang ◽  
Mengqi Hao

In this paper, we propose a quantum circuit for calculating the squared sum of the inner product of quantum states. The circuit is designed with multi-qubit controlled-swap gates, in which each control qubit is initialized to $|0\rangle$ and the control qubits are placed in an equal superposition state by passing through Hadamard gates. Then, according to the control rules, each basis state in the superposition controls the corresponding pair of quantum states to swap. Finally, Hadamard gates are applied to the control qubits again, and the squared sum of the inner product of many pairs of quantum states can be obtained simultaneously by measuring only one control qubit. We investigate the application of this method to quantum image matching on a classical computer, and the experimental results verify the correctness of the proposed method.
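The construction is reminiscent of the swap test; for a single pair of states, the standard identity is P(0) = (1 + |⟨ψ|φ⟩|²)/2, sketched below as a statevector computation (a simplification for one pair, not the paper's multi-pair circuit).

```python
import numpy as np

def swap_test_p0(psi, phi):
    """Probability of reading 0 on the control qubit for normalized states psi, phi."""
    overlap_sq = abs(np.vdot(psi, phi)) ** 2
    return 0.5 * (1.0 + overlap_sq)

psi = np.array([1.0, 0.0])                       # |0>
phi = np.array([1.0, 1.0]) / np.sqrt(2.0)        # |+>
p0 = swap_test_p0(psi, phi)
print(2 * p0 - 1)                                # ~0.5 = |<0|+>|^2, recovered from P(0)
```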


2018 ◽  
Vol 16 (08) ◽  
pp. 1840007 ◽  
Author(s):  
Nicolas Melo De Oliveira ◽  
Ricardo Martins De Abreu Silva ◽  
Wilson Rosa De Oliveira

Representing a given problem as a QUBO (quadratic unconstrained binary optimization) problem makes it possible to run it in a quantum computational environment (generic or specific). The well-known problem of looking for similar functions in biological structures, especially proteins, is of great interest in the field of Bioinformatics. We give a QUBO formulation for the CMO protein problem. Experimental results validate this approach as an alternative to classical combinatorial optimization methods. For these experiments, we use the qbsolv tool.
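For readers unfamiliar with the QUBO form, a toy instance looks like the following (a generic illustration, not the paper's CMO formulation and not the qbsolv interface): the objective is x′Qx minimized over binary vectors x, with linear terms on the diagonal of Q and pairwise couplings off the diagonal.

```python
import itertools
import numpy as np

Q = np.array([[-1.0,  2.0],
              [ 0.0, -1.0]])          # diagonal = linear terms, off-diagonal = coupling

def qubo_energy(x, Q):
    x = np.asarray(x, dtype=float)
    return float(x @ Q @ x)

best = min(itertools.product([0, 1], repeat=Q.shape[0]), key=lambda x: qubo_energy(x, Q))
print(best, qubo_energy(best, Q))     # (0, 1) with energy -1.0 (tied with (1, 0))
```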


2007 ◽  
Vol 15 (2) ◽  
pp. 169-198 ◽  
Author(s):  
Dong-Il Seo ◽  
Byung-Ro Moon

In optimization problems, the contribution of a variable to fitness often depends on the states of other variables. This phenomenon is referred to as epistasis or linkage. In this paper, we show that a new theory of epistasis can be established on the basis of Shannon's information theory. From this, we derive a new epistasis measure called entropic epistasis and some theoretical results. We also provide experimental results verifying the measure and showing how it can be used for designing efficient evolutionary algorithms.
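One simple information-theoretic way to see "interaction beyond individual contributions" (a toy estimate in the same spirit, not necessarily the paper's exact definition of entropic epistasis) is the gain I(X,Y;F) − I(X;F) − I(Y;F), computed here with plug-in entropies on an XOR-like fitness where only the pair is informative.

```python
from collections import Counter
import math

def entropy(samples):
    n = len(samples)
    return -sum(c / n * math.log2(c / n) for c in Counter(samples).values())

def mutual_information(xs, fs):
    return entropy(xs) + entropy(fs) - entropy(list(zip(xs, fs)))

def interaction(xs, ys, fs):
    joint = list(zip(xs, ys))
    return (mutual_information(joint, fs)
            - mutual_information(xs, fs) - mutual_information(ys, fs))

# XOR-like fitness: neither variable alone is informative, but the pair is
xs = [0, 0, 1, 1]; ys = [0, 1, 0, 1]; fs = [x ^ y for x, y in zip(xs, ys)]
print(interaction(xs, ys, fs))   # 1.0 bit of purely pairwise (epistatic) information
```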


2017 ◽  
Vol 6 (4) ◽  
pp. 349-388 ◽  
Author(s):  
Petros T Boufounos ◽  
Shantanu Rane ◽  
Hassan Mansour

Abstract Approaches to signal representation and coding theory have traditionally focused on how to best represent signals using parsimonious representations that incur the lowest possible distortion. Classical examples include linear and nonlinear approximations, sparse representations and rate-distortion theory. Very often, however, the goal of processing is to extract specific information from the signal, and the distortion should be measured on the extracted information. The corresponding representation should, therefore, represent that information as parsimoniously as possible, without necessarily accurately representing the signal itself. In this article, we examine the problem of encoding signals such that sufficient information is preserved about their pairwise distances and their inner products. To that end, we consider randomized embeddings as an encoding mechanism and provide a framework to analyze their performance. We also demonstrate that it is possible to design the embedding such that it represents different ranges of distances with different precision. These embeddings also allow the computation of kernel inner products with control over their inner product-preserving properties. Our results provide a broad framework to design and analyze embeddings and generalize existing results in this area, such as random Fourier kernels and universal embeddings.
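A standard Johnson–Lindenstrauss-style random projection illustrates the basic encoding mechanism (a generic example, not the paper's distance-selective or universal embeddings): a Gaussian matrix maps signals to a lower dimension while approximately preserving pairwise distances and, after normalization, inner products.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, d = 1000, 64, 20                    # ambient dim, embedding dim, number of signals
A = rng.standard_normal((m, n)) / np.sqrt(m)   # random encoding matrix
X = rng.standard_normal((n, d))                # signals as columns

Y = A @ X                                 # encoded signals
orig = np.linalg.norm(X[:, 0] - X[:, 1])
enc = np.linalg.norm(Y[:, 0] - Y[:, 1])
print(orig, enc)                          # encoded distance concentrates near the original
```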

