Towards Preprocessing Guidelines for Neural Network Embedding of Customer Behavior in Digital Retail

Author(s):  
Douglas Cirqueira ◽  
Markus Helfert ◽  
Marija Bezbradica
Author(s):  
Yang Fang ◽  
Xiang Zhao ◽  
Zhen Tan

Network Embedding (NE) is an important method for learning representations of a network in a low-dimensional space. Conventional NE models focus on capturing the structural and semantic information of vertices while neglecting such information for edges. In this work, we propose a novel NE model named BimoNet to capture both the structural and semantic information of edges. BimoNet is composed of two parts: a bi-mode embedding part and a deep neural network part. In the bi-mode embedding part, the first mode, named add-mode, is used to express the entity-shared features of edges, while the second mode, named subtract-mode, is employed to represent the entity-specific features of edges. These features reflect the semantic information. In the deep neural network part, we first regard the edges of a network as nodes and the vertices as links, which does not change the overall structure of the whole network. We then take the nodes' adjacency matrix as the input of the deep neural network, since it yields similar representations for nodes with similar structure. By jointly optimizing the objective functions of these two parts, BimoNet preserves both the semantic and structural information of edges. In experiments, we evaluate BimoNet on three real-world datasets and the task of relation extraction, where it consistently and significantly outperforms state-of-the-art baseline models.
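A minimal sketch of the two-part idea described in the abstract, assuming PyTorch; the class and layer names (BiModeEdgeEmbedding, EdgeStructureAutoencoder, add_proj, sub_proj) are illustrative placeholders, not the authors' implementation. The add-mode combines the two endpoint embeddings by addition for entity-shared features, the subtract-mode by subtraction for entity-specific features, and a small autoencoder over the edge-as-node adjacency rows stands in for the deep structural part.

import torch
import torch.nn as nn

class BiModeEdgeEmbedding(nn.Module):
    """Bi-mode part: build an edge representation from its two endpoint vertices."""
    def __init__(self, num_vertices, dim):
        super().__init__()
        self.vertex_emb = nn.Embedding(num_vertices, dim)
        self.add_proj = nn.Linear(dim, dim)   # add-mode: entity-shared features
        self.sub_proj = nn.Linear(dim, dim)   # subtract-mode: entity-specific features

    def forward(self, head_idx, tail_idx):
        h = self.vertex_emb(head_idx)
        t = self.vertex_emb(tail_idx)
        add_mode = torch.tanh(self.add_proj(h + t))   # shared semantics of the edge
        sub_mode = torch.tanh(self.sub_proj(h - t))   # specific semantics of the edge
        return torch.cat([add_mode, sub_mode], dim=-1)

class EdgeStructureAutoencoder(nn.Module):
    """Deep part: edges are treated as nodes, so each edge has an adjacency row;
    reconstructing that row gives structurally similar edges similar codes."""
    def __init__(self, num_edges, hidden_dim):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(num_edges, hidden_dim), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(hidden_dim, num_edges), nn.Sigmoid())

    def forward(self, adj_row):
        code = self.encoder(adj_row)
        return code, self.decoder(code)

In the paper the two parts are optimized jointly; in this sketch that would amount to minimizing a weighted sum of a semantic loss on the bi-mode output and a reconstruction loss on the adjacency rows.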


2019 ◽  
Vol 2020 (1) ◽  
pp. 1-29 ◽  
Author(s):  
Andrea Gabrielli ◽  
Ronald Richman ◽  
Mario V. Wüthrich

2022 ◽  
Author(s):  
Arata Shirakami ◽  
Takeshi Hase ◽  
Yuki Yamaguchi ◽  
Masanori Shimono

Abstract: Our brain works as a vast and complex network system. To extract simple principles from network patterns and interpret them, we need to compress the networks and thereby better comprehend their complexity. This study treats this simplification process as a two-step analysis of the topological patterns of functional connectivities produced from the electrical activities of ~1000 neurons in acute slices of mouse brains [Kajiwara et al. 2021]. In the first step, we trained an artificial neural network system called neural network embedding (NNE) to automatically compress the functional connectivities. In the second step, we broadly compared the compressed features with 15 representative network metrics that have clear interpretations, including not only common metrics, such as centralities, clusters, and modules, but also newly developed network metrics. The results demonstrate that the newly developed metrics can complementarily explain features that were compressed by the NNE method but were previously relatively hard to explain using common metrics such as hubs, clusters, and communities. The NNE method not only surpasses the limitations of commonly used human-made metrics but also suggests that recognizing our own limitations can drive us to extend the range of interpretable targets by developing new network metrics.
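As a rough illustration of the two-step procedure, the sketch below compresses connectivity matrices with a small autoencoder-style network and then computes a few interpretable graph metrics for comparison. It assumes numpy, networkx, scikit-learn, and scipy; the function names and the three metrics shown are stand-ins for the 15 metrics used in the study, and the MLP trained to reproduce its input is only a minimal substitute for the NNE network, not the authors' pipeline.

import numpy as np
import networkx as nx
from sklearn.neural_network import MLPRegressor
from scipy.stats import spearmanr

def compress(connectivity_matrices, n_features=10):
    """Step 1: autoencoder-style compression; the hidden layer is the compressed code."""
    X = np.array([m.flatten() for m in connectivity_matrices])
    ae = MLPRegressor(hidden_layer_sizes=(n_features,), activation="tanh", max_iter=2000)
    ae.fit(X, X)                                          # train to reproduce the input
    return np.tanh(X @ ae.coefs_[0] + ae.intercepts_[0])  # hidden-layer activations

def graph_metrics(connectivity_matrix, threshold=0.0):
    """Step 2: a few interpretable metrics per network (a small subset of the 15)."""
    G = nx.from_numpy_array((connectivity_matrix > threshold).astype(int))
    communities = nx.algorithms.community.greedy_modularity_communities(G)
    return {
        "mean_degree_centrality": float(np.mean(list(nx.degree_centrality(G).values()))),
        "mean_clustering": nx.average_clustering(G),
        "modularity": nx.algorithms.community.modularity(G, communities),
    }

# Comparison (usage sketch): correlate each compressed feature with each metric, e.g.
# codes = compress(mats)
# rho, _ = spearmanr(codes[:, 0], [graph_metrics(m)["mean_clustering"] for m in mats])

Correlating each compressed feature with each metric then indicates which interpretable quantities the NNE features capture and which remain unexplained by the common metrics.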


Author(s):  
Mohammadreza Armandpour ◽  
Patrick Ding ◽  
Jianhua Huang ◽  
Xia Hu

Many recent network embedding algorithms use negative sampling (NS) to approximate a variant of the computationally expensive Skip-Gram neural network architecture (SGA) objective. In this paper, we provide theoretical arguments that reveal how NS can fail to properly estimate the SGA objective, and why it is not a suitable stand-alone objective for the network embedding problem. We show that NS can learn undesirable embeddings as a result of the "Popular Neighbor Problem." We use this theory to develop a new method, "R-NS," which alleviates the problems of NS through a more intelligent negative sampling scheme and careful penalization of the embeddings. R-NS is scalable to large-scale networks, and we empirically demonstrate its superiority over NS for multi-label classification on a variety of real-world networks, including social networks and language networks.
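For reference, the baseline NS objective analyzed here can be written compactly; the sketch below, assuming PyTorch, computes that loss for a batch of positive edges and pre-sampled negatives. The norm penalty at the end is only a generic illustration of "careful penalization" and is not the authors' exact R-NS formulation; how the negatives in "neg" are sampled is left to the caller.

import torch
import torch.nn.functional as F

def ns_loss(emb, u, v, neg, lam=0.0):
    """Standard negative-sampling loss for network embedding.
    emb: (num_nodes, dim) embedding table; u, v: endpoints of positive edges (batch,);
    neg: (batch, k) indices of sampled negative nodes; lam: penalty weight."""
    z_u, z_v, z_n = emb[u], emb[v], emb[neg]
    pos = F.logsigmoid((z_u * z_v).sum(-1))                             # pull true neighbors together
    neg_term = F.logsigmoid(-(z_n * z_u.unsqueeze(1)).sum(-1)).sum(-1)  # push sampled negatives apart
    penalty = lam * (z_u.norm(dim=-1) ** 2).mean()                      # generic embedding-norm penalty
    return -(pos + neg_term).mean() + penalty

In standard NS the negatives are typically drawn from a degree-weighted noise distribution, which is the setting in which the popularity-related distortions analyzed in the paper arise.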

