embedded graph
Recently Published Documents

TOTAL DOCUMENTS: 36 (FIVE YEARS: 10)
H-INDEX: 7 (FIVE YEARS: 3)

Symmetry ◽  
2021 ◽  
Vol 13 (8) ◽  
pp. 1475
Author(s):  
Metrose Metsidik

Eulerian and bipartite graphs are dual concepts in graph theory. It is well known that a plane graph is Eulerian if and only if its geometric dual is bipartite. In this paper, we generalize this result to embedded graphs and to partial duals of cellularly embedded graphs, and characterize the Eulerian and even-face partial duals of a cellularly embedded graph by means of half-edge orientations of its medial graph.
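The classical plane-graph statement can be checked directly on a small example. The sketch below (plain Python, not from the paper) verifies it for the octahedron, whose geometric dual is the cube: the octahedron is Eulerian, so its dual must be bipartite.

```python
from collections import deque

def is_eulerian(adj):
    """A connected graph is Eulerian iff every vertex has even degree."""
    return all(len(nbrs) % 2 == 0 for nbrs in adj.values())

def is_bipartite(adj):
    """Check 2-colorability with a BFS over each component."""
    color = {}
    for start in adj:
        if start in color:
            continue
        color[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in color:
                    color[v] = 1 - color[u]
                    queue.append(v)
                elif color[v] == color[u]:
                    return False
    return True

# Octahedron drawn in the plane: every vertex has degree 4, so it is Eulerian.
octahedron = {
    0: [1, 2, 3, 4], 1: [0, 2, 4, 5], 2: [0, 1, 3, 5],
    3: [0, 2, 4, 5], 4: [0, 1, 3, 5], 5: [1, 2, 3, 4],
}
# Its geometric dual is the cube graph, which is bipartite.
cube = {
    0: [1, 3, 4], 1: [0, 2, 5], 2: [1, 3, 6], 3: [0, 2, 7],
    4: [0, 5, 7], 5: [1, 4, 6], 6: [2, 5, 7], 7: [3, 4, 6],
}

print(is_eulerian(octahedron), is_bipartite(cube))  # True True
```

The converse pair also holds here: the cube has odd-degree vertices (not Eulerian), and the octahedron contains triangles (not bipartite).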


Author(s):  
Yuzhao Chen ◽  
Yatao Bian ◽  
Xi Xiao ◽  
Yu Rong ◽  
Tingyang Xu ◽  
...  

Recently, the teacher-student knowledge distillation framework has demonstrated its potential in training Graph Neural Networks (GNNs). However, due to the difficulty of training over-parameterized GNN models, one may not easily obtain a satisfactory teacher model for distillation. Furthermore, the inefficient training process of teacher-student knowledge distillation also impedes its application to GNN models. In this paper, we propose the first teacher-free knowledge distillation method for GNNs, termed GNN Self-Distillation (GNN-SD), which serves as a drop-in replacement for the standard training process. The method is built upon the proposed neighborhood discrepancy rate (NDR), which quantifies the non-smoothness of the embedded graph in an efficient way. Based on this metric, we propose the adaptive discrepancy retaining (ADR) regularizer to empower the transferability of knowledge that maintains high neighborhood discrepancy across GNN layers. We also summarize a generic GNN-SD framework that could be exploited to induce other distillation strategies. Experiments further prove the effectiveness and generalization of our approach, as it brings: 1) state-of-the-art GNN distillation performance with less training cost, and 2) consistent and considerable performance enhancement for various popular backbones.
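The paper defines NDR precisely; as a rough illustration, a discrepancy of this flavor can be computed as one minus the cosine similarity between each node's embedding and the mean of its neighbors' embeddings. The function name, the mean aggregation, and the toy graph below are assumptions for illustration, not the paper's exact metric.

```python
import numpy as np

def neighborhood_discrepancy(h, adj):
    """Per-node discrepancy between a node's embedding and the mean of its
    neighbors' embeddings, measured as 1 - cosine similarity.
    h: (num_nodes, dim) layer embeddings; adj: dict of neighbor lists."""
    scores = []
    for u, nbrs in adj.items():
        agg = h[nbrs].mean(axis=0)
        cos = h[u] @ agg / (np.linalg.norm(h[u]) * np.linalg.norm(agg) + 1e-12)
        scores.append(1.0 - cos)
    return np.array(scores)

# Toy 4-node path graph with 2-d embeddings.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
h_smooth = np.array([[1.0, 0.0], [1.0, 0.1], [1.0, 0.2], [1.0, 0.3]])
h_rough  = np.array([[1.0, 0.0], [-1.0, 0.0], [1.0, 0.0], [-1.0, 0.0]])

print(neighborhood_discrepancy(h_smooth, adj).mean())  # near 0: smooth embedding
print(neighborhood_discrepancy(h_rough, adj).mean())   # near 2: highly non-smooth
```

A regularizer in the spirit of ADR would then penalize the change in this score across consecutive GNN layers rather than the score itself.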


2021 ◽  
Vol 383 (1) ◽  
pp. 345-400
Author(s):  
Alexander Spies

Abstract: We define Poisson-geometric analogues of Kitaev’s lattice models. They are obtained from a Kitaev model on an embedded graph $\Gamma$ by replacing its Hopf algebraic data with Poisson data for a Poisson-Lie group $G$. Each edge is assigned a copy of the Heisenberg double $\mathcal{H}(G)$. Each vertex (face) of $\Gamma$ defines a Poisson action of $G$ (of $G^*$) on the product of these Heisenberg doubles. The actions for a vertex and adjacent face form a Poisson action of the double Poisson-Lie group $D(G)$. We define Poisson counterparts of vertex and face operators and relate them via the Poisson bracket to the vector fields generating the actions of $D(G)$. We construct an isomorphism of Poisson $D(G)$-spaces between this Poisson-geometrical Kitaev model and Fock and Rosly’s Poisson structure for the graph $\Gamma$ and the Poisson-Lie group $D(G)$. This decouples the latter and represents it as a product of Heisenberg doubles. It also relates the Poisson-geometrical Kitaev model to the symplectic structure on the moduli space of flat $D(G)$-bundles on an oriented surface with boundary constructed from $\Gamma$.


2021 ◽  
pp. 2150016
Author(s):  
Catherine Meusburger ◽  
Derek K. Wise

We generalize gauge theory on a graph so that the gauge group becomes a finite-dimensional ribbon Hopf algebra, the graph becomes a ribbon graph, and gauge-theoretic concepts such as connections, gauge transformations and observables are replaced by linearized analogs. Starting from physical considerations, we derive an axiomatic definition of Hopf algebra gauge theory, including locality conditions under which the theory for a general ribbon graph can be assembled from local data in the neighborhood of each vertex. For a vertex neighborhood with $n$ incoming edge ends, the algebra of non-commutative ‘functions’ of connections is dual to a two-sided twist deformation of the $n$-fold tensor power of the gauge Hopf algebra. We show these algebras assemble to give an algebra of functions and a gauge-invariant subalgebra of ‘observables’ that coincide with those obtained in the combinatorial quantization of Chern–Simons theory, thus providing an axiomatic derivation of the latter. We then discuss holonomy in a Hopf algebra gauge theory and show that for semisimple Hopf algebras this gives, for each path in the embedded graph, a map from connections into the gauge Hopf algebra, depending functorially on the path. Curvatures (holonomies around the faces canonically associated to the ribbon graph) then correspond to central elements of the algebra of observables, and define a set of commuting projectors onto the subalgebra of observables on flat connections. The algebras of observables for all connections or for flat connections are topological invariants, depending only on the topology of, respectively, the punctured or closed surface canonically obtained by gluing annuli or discs along edges of the ribbon graph.


2021 ◽  
Vol 0 (0) ◽  
pp. 0
Author(s):  
Yossi Bokor ◽  
Katharine Turner ◽  
Christopher Williams

In this paper, we consider the simplest class of stratified spaces: linearly embedded graphs. We present an algorithm that learns the abstract structure of an embedded graph and models the specific embedding from a point cloud sampled from it. We use tools and inspiration from computational geometry, algebraic topology, and topological data analysis, and prove the correctness of the identified abstract structure under assumptions on the embedding. The algorithm is implemented in the Julia package Skyler, which we used for the numerical simulations in this paper.
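The paper's algorithm rests on topological data analysis; one ingredient any such pipeline needs, telling points sampled near a graph vertex apart from points on the interior of an edge, can be sketched with local PCA. The example below is a simplified illustration under our own assumptions (an L-shaped graph, a hand-picked radius), not the method implemented in Skyler.

```python
import numpy as np

def local_structure_scores(points, radius):
    """For each sample point, the ratio of the second to the first singular
    value of its centered neighborhood: near 0 on the interior of a straight
    edge, larger near a vertex where edges meet at an angle."""
    scores = []
    for p in points:
        nbhd = points[np.linalg.norm(points - p, axis=1) < radius]
        centered = nbhd - nbhd.mean(axis=0)
        s = np.linalg.svd(centered, compute_uv=False)
        scores.append(s[1] / s[0] if len(s) > 1 and s[0] > 1e-12 else 0.0)
    return np.array(scores)

# Point cloud sampled from an L-shaped embedded graph: two segments
# meeting at the origin.
t = np.linspace(0.0, 1.0, 30)
arm1 = np.stack([t, np.zeros_like(t)], axis=1)   # along the x-axis
arm2 = np.stack([np.zeros_like(t), t], axis=1)   # along the y-axis
cloud = np.concatenate([arm1, arm2])

scores = local_structure_scores(cloud, radius=0.15)
corner = scores[np.linalg.norm(cloud, axis=1) < 0.05]   # near the vertex
interior = scores[np.abs(cloud[:, 0] - 0.5) < 0.05]     # mid-edge points
print(corner.mean() > interior.mean())  # True: vertex neighborhoods are 2-d
```

Thresholding such a score gives candidate vertex clusters; the edges of the abstract graph can then be read off from which clusters the remaining "1-dimensional" points connect.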


Author(s):  
Shu Tian ◽  
Lihong Kang ◽  
Xiangwei Xing ◽  
Jing Tian ◽  
Chunzhuo Fan ◽  
...  

2019 ◽  
Vol 59 (9) ◽  
pp. 3981-3988 ◽  
Author(s):  
Jaechang Lim ◽  
Seongok Ryu ◽  
Kyubyong Park ◽  
Yo Joong Choe ◽  
Jiyeon Ham ◽  
...  

Author(s):  
Chun Wang ◽  
Shirui Pan ◽  
Ruiqi Hu ◽  
Guodong Long ◽  
Jing Jiang ◽  
...  

Graph clustering is a fundamental task that discovers communities or groups in networks. Recent studies have mostly focused on developing deep learning approaches to learn a compact graph embedding, upon which classic clustering methods such as k-means or spectral clustering are applied. These two-step frameworks are difficult to manipulate and usually lead to suboptimal performance, mainly because the graph embedding is not goal-directed, i.e., not designed for the specific clustering task. In this paper, we propose a goal-directed deep learning approach, Deep Attentional Embedded Graph Clustering (DAEGC for short). Our method focuses on attributed graphs, to fully exploit both the structural and attribute information they carry. By employing an attention network to capture the importance of neighboring nodes to a target node, our DAEGC algorithm encodes the topological structure and node content of a graph into a compact representation, on which an inner product decoder is trained to reconstruct the graph structure. Furthermore, soft labels from the graph embedding itself are generated to supervise a self-training graph clustering process, which iteratively refines the clustering results. The self-training process is jointly learned and optimized with the graph embedding in a unified framework, so that the two components mutually benefit each other. Experimental comparisons with state-of-the-art algorithms demonstrate the superiority of our method.
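The self-training step is reminiscent of the DEC-style target distribution: soft cluster assignments are sharpened and then used as their own supervision signal. The sketch below shows this generic construction with a Student's t kernel; the kernel choice and all names are assumptions for illustration, and DAEGC's exact formulation is in the paper.

```python
import numpy as np

def soft_assignments(z, centers):
    """Student's t kernel (one degree of freedom) between embeddings and
    cluster centers, row-normalized: q[i, k] ~ P(node i in cluster k)."""
    d2 = ((z[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    q = 1.0 / (1.0 + d2)
    return q / q.sum(axis=1, keepdims=True)

def target_distribution(q):
    """Sharpen q by squaring and renormalizing by cluster frequency,
    emphasizing high-confidence assignments."""
    w = q ** 2 / q.sum(axis=0)
    return w / w.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
# Two well-separated blobs of 5 embeddings each, and matching centers.
z = np.vstack([rng.normal(0, 0.1, (5, 2)), rng.normal(3, 0.1, (5, 2))])
centers = np.array([[0.0, 0.0], [3.0, 3.0]])

q = soft_assignments(z, centers)
p = target_distribution(q)
# The target is sharper than the soft assignment for confident nodes.
print(p[0, 0] > q[0, 0])
```

Training then minimizes the KL divergence from p to q jointly with the reconstruction loss, so the clustering objective shapes the embedding rather than being bolted on afterwards.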


Electronics ◽  
2019 ◽  
Vol 8 (2) ◽  
pp. 219 ◽  
Author(s):  
Sumet Mehta ◽  
Bi-Sheng Zhan ◽  
Xiang-Jun Shen

Neighborhood preserving embedding (NPE) is a classical and very promising dimensionality reduction (DR) technique based on a linear graph, which preserves the local neighborhood relations of the data points. However, NPE uses the K nearest neighbor (KNN) criterion for constructing the adjacency graph, which makes it sensitive to the neighborhood size. In this article, we propose a novel DR method called weighted neighborhood preserving ensemble embedding (WNPEE). Unlike NPE, the proposed WNPEE constructs an ensemble of adjacency graphs with varying numbers of nearest neighbors. With this graph ensemble, WNPEE obtains the low-dimensional projections by pursuing an optimal embedded graph in a joint optimization manner. WNPEE can be applied in many machine learning fields, such as object recognition, data classification, signal processing, text categorization, and various deep learning tasks. Extensive experiments on four face databases, Olivetti Research Laboratory (ORL), Georgia Tech, Carnegie Mellon University Pose and Illumination Images (CMU PIE), and Yale, demonstrate that WNPEE achieves a competitive and better recognition rate than NPE and other comparative DR methods. Additionally, the proposed WNPEE is much less sensitive to the neighborhood size parameter than the traditional NPE method, while preserving more of the local manifold structure of the high-dimensional data.
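The graph-ensemble idea can be sketched as a weighted sum of KNN adjacency graphs over several neighborhood sizes. WNPEE learns the combination weights jointly with the projection; the uniform weights and brute-force distance computation below are purely illustrative assumptions.

```python
import numpy as np

def knn_adjacency(X, k):
    """Symmetric 0/1 adjacency connecting each point to its k nearest neighbors."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)          # exclude self-loops
    idx = np.argsort(d, axis=1)[:, :k]   # k nearest neighbors per row
    A = np.zeros_like(d)
    rows = np.repeat(np.arange(len(X)), k)
    A[rows, idx.ravel()] = 1.0
    return np.maximum(A, A.T)            # symmetrize

def ensemble_graph(X, ks, weights=None):
    """Weighted combination of KNN graphs over several neighborhood sizes;
    uniform weights stand in for the jointly optimized ones."""
    if weights is None:
        weights = np.full(len(ks), 1.0 / len(ks))
    return sum(w * knn_adjacency(X, k) for w, k in zip(weights, ks))

X = np.random.default_rng(1).normal(size=(20, 3))
W = ensemble_graph(X, ks=[3, 5, 7])
# Entries near 1 mark edges that are present at every neighborhood size.
print(W.shape, float(W.max()) <= 1.0)
```

Because edges found at small k persist at larger k, the combined weight of an edge reflects how robust it is to the neighborhood-size choice, which is exactly the sensitivity NPE suffers from.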

