NEXT-BEST-VIEW METHOD BASED ON CONSECUTIVE EVALUATION OF TOPOLOGICAL RELATIONS

Author(s):  
K. O. Dierenbach ◽  
M. Weinmann ◽  
B. Jutzi

This work describes an iterative algorithm for estimating optimal viewpoints, so-called next-best-views (NBVs). The goal is to incrementally construct a topological network of the scene during the consecutive acquisition of several views. Our approach is a hybrid of a surface-based and a volumetric approach with a continuous model space. Hence, a new scan taken from an optimal position should either cover as much as possible of the unknown object surface in a single scan, or densify the existing data and close possible gaps. The essential structural information of the scene is recovered from the point density with the Growing Neural Gas (GNG) algorithm. From the resulting graph representation of topological relations, the point density at each network node is estimated by approximating the volume of the corresponding Voronoi cell. The NBV Finder then selects the network node with the lowest point density as the NBV. Our NBV method is self-terminating: it stops when all regions reach a predefined minimum point density or when the GNG error no longer changes. For evaluation, we use a Buddha statue with a rather simple surface geometry but still some concave parts, and the Stanford Dragon with a more complex object surface containing occluded and concave parts. We demonstrate that our NBV method outperforms a “naive random” approach relying on uniformly distributed sensor positions in terms of efficiency, i.e., our proposed method reaches a desired minimum point density up to 20% faster with fewer scans.
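
For readers who want to prototype the selection step, the following minimal Python sketch (not the authors' implementation) picks the GNG node with the lowest approximate point density as the NBV candidate. The inputs `node_pos`, `edges`, and `points`, as well as the sphere-based stand-in for the Voronoi-cell volume, are illustrative assumptions.

```python
# Minimal sketch, assuming GNG node positions, the GNG edge list, and the current
# point cloud are available; the Voronoi-cell volume is crudely approximated.
import numpy as np
from scipy.spatial import cKDTree

def next_best_view_node(node_pos, edges, points):
    # Assign every scanned point to its nearest GNG node (a Voronoi partition
    # of the point cloud with respect to the network nodes).
    _, owner = cKDTree(node_pos).query(points)
    counts = np.bincount(owner, minlength=len(node_pos))

    # Stand-in for the Voronoi-cell volume: a sphere whose radius is half the
    # mean length of the GNG edges incident to the node.
    radius = np.zeros(len(node_pos))
    degree = np.zeros(len(node_pos))
    for i, j in edges:
        d = np.linalg.norm(node_pos[i] - node_pos[j])
        radius[i] += d; radius[j] += d
        degree[i] += 1; degree[j] += 1
    radius = 0.5 * radius / np.maximum(degree, 1)
    volume = (4.0 / 3.0) * np.pi * np.maximum(radius, 1e-9) ** 3

    density = counts / volume
    return int(np.argmin(density)), density   # lowest-density node = NBV candidate
```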


2018 ◽  
Vol 77 (11) ◽  
pp. 945-956 ◽  
Author(s):  
N. N. Kolchigin ◽  
M. N. Legenkiy ◽  
A. A. Maslovskiy ◽  
A. Demchenko ◽
S. Vinnichenko ◽  
...  

Cancers ◽  
2021 ◽  
Vol 13 (9) ◽  
pp. 2111
Author(s):  
Bo-Wei Zhao ◽  
Zhu-Hong You ◽  
Lun Hu ◽  
Zhen-Hao Guo ◽  
Lei Wang ◽  
...  

Identification of drug-target interactions (DTIs) is a significant step in the drug discovery or repositioning process. Compared with time-consuming and labor-intensive in vivo experimental methods, computational models can provide high-quality DTI candidates quickly. In this study, we propose a novel method called LGDTI to predict DTIs based on large-scale graph representation learning. LGDTI can capture both the local and the global structural information of the graph. Specifically, the first-order neighbor information of nodes is aggregated by a graph convolutional network (GCN), while the high-order neighbor information of nodes is learned by the graph embedding method DeepWalk. Finally, the two kinds of features are fed into a random forest classifier to train and predict potential DTIs. The results show that our method obtained an area under the receiver operating characteristic curve (AUROC) of 0.9455 and an area under the precision-recall curve (AUPR) of 0.9491 under 5-fold cross-validation. Moreover, we compare the presented method with some existing state-of-the-art methods. These results imply that LGDTI can efficiently and robustly capture undiscovered DTIs. The proposed model is expected to bring new inspiration and provide novel perspectives to relevant researchers.
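
A rough Python sketch of the pipeline described above might look as follows. It is not the authors' code: a single symmetric-normalized propagation step stands in for a trained GCN, and `global_emb` is assumed to come from a DeepWalk-style embedding computed elsewhere.

```python
# Hedged sketch of the LGDTI idea: combine first-order (GCN-style) and global
# (DeepWalk-style) node features, then train a random forest on node pairs.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def gcn_layer(adj, feats):
    """One symmetric-normalised propagation step: D^-1/2 (A + I) D^-1/2 X."""
    a_hat = adj + np.eye(adj.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(axis=1))
    return (a_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]) @ feats

# Assumed inputs: `adj` (n x n) drug-target adjacency, `feats` (n x f) node
# attributes, `global_emb` (n x d) global embeddings, `pairs` (m x 2) node-index
# pairs, and `labels` (m,) known interaction labels.
def train_lgdti_like(adj, feats, global_emb, pairs, labels):
    local = gcn_layer(adj, feats)                  # first-order neighbourhood info
    node_repr = np.hstack([local, global_emb])     # local + global structure
    x = np.hstack([node_repr[pairs[:, 0]], node_repr[pairs[:, 1]]])
    clf = RandomForestClassifier(n_estimators=300, n_jobs=-1)
    clf.fit(x, labels)
    return clf
```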


2020 ◽  
Vol 34 (04) ◽  
pp. 4132-4139
Author(s):  
Huiting Hong ◽  
Hantao Guo ◽  
Yucheng Lin ◽  
Xiaoqing Yang ◽  
Zang Li ◽  
...  

In this paper, we focus on graph representation learning for heterogeneous information networks (HINs), in which various types of vertices are connected by various types of relations. Most existing methods for HINs revise homogeneous graph embedding models via meta-paths to learn a low-dimensional vector space for the HIN. In this paper, we propose a novel Heterogeneous Graph Structural Attention Neural Network (HetSANN) to directly encode the structural information of an HIN without meta-paths and achieve more informative representations. With this method, domain experts are no longer needed to design meta-path schemes, and the heterogeneous information can be processed automatically by our proposed model. Specifically, we implicitly represent heterogeneous information in two ways: 1) we model the transformation between heterogeneous vertices through a projection into low-dimensional entity spaces; 2) we then apply a graph neural network to aggregate multi-relational information of the projected neighborhood by means of an attention mechanism. We also present three extensions of HetSANN, i.e., voices-sharing product attention for the pairwise relationships in the HIN, a cycle-consistency loss to retain the transformation between heterogeneous entity spaces, and multi-task learning with full use of information. The experiments conducted on three public datasets demonstrate that our proposed models achieve significant and consistent improvements compared to state-of-the-art solutions.
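
The two ingredients named above, type-aware projection and attention-based aggregation, can be illustrated with the simplified numpy sketch below. The per-type projection matrices `W`, attention vectors `a`, and LeakyReLU-style scoring are assumptions standing in for trained HetSANN layers, not the released implementation.

```python
# Illustrative sketch: project each neighbour into the target vertex's entity
# space, then aggregate the projected neighbours with attention weights.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def hetsann_like_aggregate(h, node_type, neighbors, W, a):
    """h: (n, d) node states; node_type: (n,) type ids; neighbors: dict node -> list
    of neighbour ids; W[(src_type, dst_type)]: (d, d); a[dst_type]: (2d,)."""
    out = np.zeros_like(h)
    for v, nbrs in neighbors.items():
        if not nbrs:
            continue
        t_v = node_type[v]
        # 1) type-aware projection of every neighbour into v's entity space
        proj = np.stack([W[(node_type[u], t_v)] @ h[u] for u in nbrs])
        # 2) attention-weighted aggregation of the projected neighbourhood
        scores = np.array([np.concatenate([h[v], p]) @ a[t_v] for p in proj])
        alpha = softmax(np.maximum(scores, 0.2 * scores))   # LeakyReLU-style scoring
        out[v] = alpha @ proj
    return out
```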


Perception ◽  
1994 ◽  
Vol 23 (5) ◽  
pp. 505-515 ◽  
Author(s):  
Emanuel Leeuwenberg ◽  
Peter Van der Helm ◽  
Rob Van Lier

Two models of object perception are compared: recognition by components (RBC), proposed by Biederman, and structural information theory (SIT), initially proposed by Leeuwenberg. According to RBC a complex object is decomposed into predefined elementary objects, called geons. According to SIT, the decomposition is guided by regularities in the object. It is assumed that the simplest of all possible interpretations of any object is perceptually preferred. The comparison deals with two aspects of the models. One is the representation of simple objects—various definitions of object axes are considered. It is shown that the more these definitions account for object regularity and thus the more they agree with SIT, the better the object representations predict object classification. Another topic concerns assumptions underlying the models: the identification of geons is mediated by cues which are supposed to be invariant under varying viewpoints of objects. It is argued that such cues are not based on this invariance but on the regularity of actual objects. The latter conclusion is in line with SIT. An advantage of RBC, however, is that it deals with the perceptual process from stimulus to interpretation, whereas SIT merely concerns the outcome of the process, not the process itself.


2020 ◽  
Vol 34 (03) ◽  
pp. 2991-2999 ◽  
Author(s):  
Xiao Shen ◽  
Quanyu Dai ◽  
Fu-lai Chung ◽  
Wei Lu ◽  
Kup-Sze Choi

In this paper, the task of cross-network node classification, which leverages the abundant labeled nodes of a source network to help classify unlabeled nodes in a target network, is studied. Existing domain adaptation algorithms generally fail to model the network structural information, while current network embedding models mainly focus on single-network applications; thus, neither can be directly applied to the cross-network node classification problem. This motivates us to propose an adversarial cross-network deep network embedding (ACDNE) model that integrates adversarial domain adaptation with deep network embedding so as to learn network-invariant node representations that also preserve the network structural information well. In ACDNE, the deep network embedding module utilizes two feature extractors to jointly preserve attributed affinity and topological proximities between nodes. In addition, a node classifier is incorporated to make node representations label-discriminative. Moreover, an adversarial domain adaptation technique is employed to make node representations network-invariant. Extensive experimental results demonstrate that the proposed ACDNE model achieves state-of-the-art performance in cross-network node classification.
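
Adversarial domain adaptation of the kind described here is commonly realized with a gradient-reversal layer; the PyTorch sketch below is illustrative only (the abstract does not specify ACDNE's exact mechanism), and the module names are hypothetical.

```python
# Minimal sketch of adversarial, network-invariant embedding learning: a
# gradient-reversal layer pushes the embedder toward features that fool a
# source/target discriminator, while a node classifier keeps them discriminative.
import torch
from torch import nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_out):
        return -ctx.lam * grad_out, None   # flip gradients flowing to the embedder

class CrossNetworkModel(nn.Module):
    def __init__(self, in_dim, emb_dim, n_classes):
        super().__init__()
        self.embedder = nn.Sequential(nn.Linear(in_dim, emb_dim), nn.ReLU(),
                                      nn.Linear(emb_dim, emb_dim))
        self.node_clf = nn.Linear(emb_dim, n_classes)   # label-discriminative head
        self.domain_clf = nn.Linear(emb_dim, 2)         # source vs. target network

    def forward(self, x, lam=1.0):
        z = self.embedder(x)
        return self.node_clf(z), self.domain_clf(GradReverse.apply(z, lam))

# Training idea: cross-entropy on labelled source nodes for the node head, plus
# cross-entropy on network membership (source/target) for the domain head.
```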


2021 ◽  
Author(s):  
Yingheng Wang ◽  
Yaosen Min ◽  
Erzhuo Shao ◽  
Ji Wu

Learning generalizable, transferable, and robust representations for molecule data has always been a challenge. The recent success of contrastive learning (CL) for self-supervised graph representation learning provides a novel perspective for learning molecule representations. The most prevailing graph CL framework is to maximize the agreement of representations across different augmented graph views. However, existing graph CL frameworks usually adopt stochastic augmentations or schemes following pre-defined rules on the input graph to obtain different graph views at various scales (e.g., node, edge, and subgraph), which may destroy topological semantemes and domain priors in molecule data, leading to suboptimal performance. Therefore, designing parameterized, learnable, and explainable augmentations is necessary for molecular graph contrastive learning. A well-designed parameterized augmentation scheme can preserve chemically meaningful structural information and intrinsically essential attributes of molecule graphs, which helps to learn representations that are insensitive to perturbations on unimportant atoms and bonds. In this paper, we propose a novel Molecular Graph Contrastive Learning with Parameterized Explainable Augmentations, MolCLE for brevity, that self-adaptively incorporates chemically significant information from both the topological and the semantic aspects of molecular graphs. Specifically, we apply deep neural networks to parameterize the augmentation process for both the molecular graph topology and the atom attributes, to highlight contributive molecular substructures and recognize underlying chemical semantemes. Comprehensive experiments on a variety of real-world datasets demonstrate that our proposed method consistently outperforms the compared baselines, which verifies the effectiveness of the proposed framework. In particular, our self-supervised MolCLE model surpasses many supervised counterparts, while using only hundreds of thousands of parameters to achieve results comparable to the state-of-the-art baseline, which has tens of millions of parameters. We also provide detailed case studies to validate the explainability of augmented graph views.

CCS CONCEPTS: • Mathematics of computing → Graph algorithms; • Applied computing → Bioinformatics; • Computing methodologies → Neural networks; Unsupervised learning.
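
To make the idea of a parameterized augmentation concrete, the following PyTorch sketch (not the paper's code) learns per-edge keep probabilities from atom features and pairs the two augmented views with a standard NT-Xent contrastive loss. It covers only the topology side; the scorer architecture and temperature are assumptions.

```python
# Hedged sketch: a learnable edge-drop augmentation plus the NT-Xent objective
# that pulls the two augmented views of the same molecule together.
import torch
from torch import nn
import torch.nn.functional as F

class LearnableEdgeDrop(nn.Module):
    """Scores each bond from its endpoint atom features; low-scoring edges are
    dropped more often, so chemically relevant structure tends to be kept."""
    def __init__(self, feat_dim):
        super().__init__()
        self.scorer = nn.Sequential(nn.Linear(2 * feat_dim, 16), nn.ReLU(),
                                    nn.Linear(16, 1))

    def forward(self, node_feats, edge_index):
        src, dst = edge_index                        # (2, E) atom indices
        keep_logit = self.scorer(torch.cat([node_feats[src], node_feats[dst]], dim=-1))
        keep_prob = torch.sigmoid(keep_logit).squeeze(-1)
        # Hard Bernoulli sampling is not differentiable; end-to-end training
        # would use a Gumbel/concrete relaxation instead.
        mask = torch.bernoulli(keep_prob).bool()
        return edge_index[:, mask], keep_prob

def nt_xent(z1, z2, temperature=0.5):
    """Contrastive loss between two views of the same batch of molecule graphs."""
    z = F.normalize(torch.cat([z1, z2]), dim=1)
    sim = z @ z.t() / temperature
    n = z1.size(0)
    sim.fill_diagonal_(float('-inf'))                # a sample is not its own negative
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)
```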


2020 ◽  
Author(s):  
Mikel Joaristi

Unsupervised graph representation learning methods learn a numerical representation of the nodes in a graph. The generated representations encode meaningful information about the nodes' properties, making them a powerful tool for tasks in many areas of study, such as the social sciences, biology, or communication networks. These methods are particularly interesting because they facilitate the direct use of standard machine learning models on graphs. Graph representation learning methods can be divided into two main categories depending on the information they encode: methods preserving the nodes' connectivity information, and methods preserving the nodes' structural information. Connectivity-based methods focus on encoding relationships between nodes, with neighboring nodes being closer together in the resulting latent space. Structure-based methods, on the other hand, generate a latent space where nodes serving a similar structural function in the network are encoded close to each other, regardless of whether they are connected or even close to each other in the graph. While many works focus on preserving nodes' connectivity information, only a few study the problem of encoding nodes' structure, especially in an unsupervised way. In this dissertation, we demonstrate that properly encoding nodes' structural information is fundamental for many real-world applications, as it can be leveraged to successfully solve many tasks where connectivity-based methods fail. One concrete example is presented first: the task of detecting malicious entities in a real-world financial network. We show that connectivity information is not enough to solve this problem and that leveraging structural information provides considerable performance improvements. This example pinpoints the need for further research in the area of structural graph representation learning, together with the limitations of the previous state-of-the-art. We use the acquired knowledge as a starting point and inspiration for the research and development of three independent unsupervised structural graph representation learning methods: Structural Iterative Representation learning approach for Graph Nodes (SIR-GN), Structural Iterative Lexicographic Autoencoded Node Representation (SILA), and Sparse Structural Node Representation (SparseStruct). We show how each of our methods tackles specific limitations of the previous state-of-the-art in structural graph representation learning, such as scalability, representation meaning, and the lack of a formal proof guaranteeing the preservation of structural properties. We provide an extensive experimental section in which we compare our three proposed methods to the current state-of-the-art in both connectivity-based and structure-based representation learning. Finally, we look at extensions of the basic structural graph representation learning problem: we study the problem of temporal structural graph representation and also provide a method for representation explainability.
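
The connectivity-versus-structure distinction can be illustrated with a toy Python example (not taken from the dissertation): iteratively aggregating degree statistics gives two nodes near-identical vectors whenever their local structure matches, even if they sit in different connected components.

```python
# Toy structural descriptor: start from node degree and repeatedly summarise the
# neighbourhood's current descriptors. Structurally equivalent nodes end up with
# the same vector regardless of where they are in the graph.
import numpy as np

def structural_descriptor(adj, iterations=2):
    """adj: dict node -> set of neighbours. Returns (matrix, node order)."""
    nodes = sorted(adj)
    feat = {v: np.array([float(len(adj[v]))]) for v in nodes}   # start from degree
    for _ in range(iterations):
        new = {}
        for v in nodes:
            nbr = (np.stack([feat[u] for u in adj[v]])
                   if adj[v] else np.zeros((1, feat[v].size)))
            # summarise the neighbourhood by mean/max/sum of current descriptors
            new[v] = np.concatenate([feat[v], nbr.mean(0), nbr.max(0), nbr.sum(0)])
        feat = new
    return np.stack([feat[v] for v in nodes]), nodes

# Two disconnected triangles: every vertex receives the same structural vector,
# whereas a connectivity-based embedding would separate the two components.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}, 3: {4, 5}, 4: {3, 5}, 5: {3, 4}}
vectors, order = structural_descriptor(adj)
```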


Sensors ◽  
2019 ◽  
Vol 19 (6) ◽  
pp. 1466 ◽  
Author(s):  
Berit Schmitz ◽  
Christoph Holst ◽  
Tomislav Medic ◽  
Derek Lichti ◽  
Heiner Kuhlmann

As laser scanning technology has improved considerably in recent years, terrestrial laser scanners (TLS) have become popular devices for surveying tasks with high accuracy demands, such as deformation analyses. For this reason, finding a stochastic model for TLS measurements is very important in order to obtain statistically reliable results. The measurement accuracy of laser scanners—especially of their rangefinders—strongly depends on the scanning conditions, such as the scan configuration, the object surface geometry and the object reflectivity. This study demonstrates a way to determine the intensity-dependent range precision of 3D points for terrestrial laser scanners that measure in 3D mode by using the range residuals, in laser beam direction, of a best-fit plane. This method does not require special targets or surfaces aligned perpendicular to the scanner, which allows a much quicker and easier determination of the stochastic properties of the rangefinder. Furthermore, the different intensity types—raw and scaled intensities—are investigated, since some manufacturers only provide scaled intensities. It is demonstrated that the intensity function can be derived from raw intensity values as reported in the literature, and likewise—within a restricted measurement volume—from scaled intensity values if the raw intensities are not available.
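
A simplified numpy sketch of this evaluation idea—range residuals of a best-fit plane, binned by intensity—is given below. The scanner is assumed to sit at the coordinate origin, and the quantile binning is an illustrative choice, not the authors' processing chain.

```python
# Sketch: fit a plane to a scanned patch, convert the orthogonal residuals into
# range residuals along the beam direction, and summarise their spread per
# intensity bin to obtain an intensity-dependent range precision curve.
import numpy as np

def intensity_dependent_range_precision(points, intensity, n_bins=20):
    """points: (N, 3) coordinates with the scanner at the origin; intensity: (N,)."""
    centroid = points.mean(axis=0)
    # Best-fit plane normal = right singular vector of the smallest singular value.
    _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
    normal = vt[-1]
    plane_dist = (points - centroid) @ normal             # orthogonal residuals
    beam_dir = points / np.linalg.norm(points, axis=1, keepdims=True)
    # Range residual = plane residual rescaled onto the laser beam direction
    # (degenerates when the beam is nearly parallel to the plane).
    range_res = plane_dist / (beam_dir @ normal)

    bins = np.quantile(intensity, np.linspace(0, 1, n_bins + 1))
    idx = np.clip(np.digitize(intensity, bins) - 1, 0, n_bins - 1)
    sigma = np.array([range_res[idx == b].std() if np.any(idx == b) else np.nan
                      for b in range(n_bins)])
    centers = 0.5 * (bins[:-1] + bins[1:])
    return centers, sigma                                 # precision over intensity
```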


2021 ◽  
Vol 15 (6) ◽  
pp. 1-39
Author(s):  
Mikel Joaristi ◽  
Edoardo Serra

Graph representation learning methods have attracted an increasing amount of attention in recent years. These methods focus on learning a numerical representation of the nodes in a graph. Learning these representations is a powerful instrument for tasks such as graph mining, visualization, and hashing. They are of particular interest because they facilitate the direct use of standard machine learning models on graphs. Graph representation learning methods can be divided into two main categories: methods preserving the connectivity information of the nodes and methods preserving the nodes' structural information. Connectivity-based methods focus on encoding relationships between nodes, with connected nodes being closer together in the resulting latent space. Methods preserving structure, in contrast, generate a latent space where nodes serving a similar structural function in the network are encoded close to each other, regardless of whether they are connected or even close to each other in the graph. While many works focus on preserving node connectivity, only a few focus on preserving nodes' structure. Properly encoding nodes' structural information is fundamental for many real-world applications, as it has been demonstrated that this information can be leveraged to successfully solve many tasks where connectivity-based methods usually fail. A typical example is the task of node classification, i.e., the assignment or prediction of a particular label for a node. Current limitations of structural representation methods are their scalability, the meaning of their representations, and the lack of a formal proof guaranteeing the preservation of structural properties. We propose a new graph representation learning method, called Structural Iterative Representation learning approach for Graph Nodes (SIR-GN). In this work, we propose two variations (SIR-GN: GMM and SIR-GN: K-Means) and show how our best variation, SIR-GN: K-Means, (1) theoretically guarantees the preservation of graph structural similarities, (2) provides a clear meaning for its representation and a way to interpret it with a specifically designed attribution procedure, and (3) is scalable and fast to compute. In addition, our experiments show that SIR-GN: K-Means is often better than, or in the worst case comparable to, the existing structural graph representation learning methods in the literature. We also empirically show its superior scalability and computational performance compared to other existing approaches.
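
One plausible, heavily simplified reading of the SIR-GN: K-Means iteration is sketched below (not the authors' implementation): cluster the current node descriptions with k-means, re-describe each node by how its neighbours distribute over those clusters, and repeat.

```python
# Hedged sketch of an iterative, clustering-based structural representation:
# nodes with similar neighbourhood structure converge to similar description
# vectors, independently of their position in the graph.
import numpy as np
from sklearn.cluster import KMeans

def sir_gn_like(adj, n_clusters=5, iterations=3, seed=0):
    """adj: dict node -> iterable of neighbours. Returns (matrix, node order)."""
    nodes = sorted(adj)
    index = {v: i for i, v in enumerate(nodes)}
    desc = np.array([[float(len(adj[v]))] for v in nodes])   # start from degree
    for _ in range(iterations):
        labels = KMeans(n_clusters=n_clusters, n_init=10,
                        random_state=seed).fit_predict(desc)
        new_desc = np.zeros((len(nodes), n_clusters))
        for v in nodes:
            for u in adj[v]:                                  # count neighbours per cluster
                new_desc[index[v], labels[index[u]]] += 1.0
        desc = new_desc
    return desc, nodes
```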

