Random Relevant and Non-redundant Feature Subspaces for Co-training

Author(s): Yusuf Yaslan, Zehra Cataltepe
2021, Vol 2021, pp. 1-13
Author(s): Luogeng Tian, Bailong Yang, Xinli Yin, Kai Kang, Jing Wu

In the past, most embedding-based entity prediction methods lacked training on local core relationships, leaving a gap in end-to-end training. To address this problem, we propose an end-to-end knowledge graph embedding method that combines local graph convolution with global cross learning, called the TransC graph convolutional network (TransC-GCN). First, multiple local semantic spaces are partitioned according to the largest neighborhood. Second, a translation model maps the local entities and relations into a cross vector, which serves as the input of the GCN. Third, by training on the local semantic relations, the best entities and strongest relations are identified, and the optimal entity-relation combination ranking is obtained by evaluating a posterior loss function based on mutual information entropy. Experiments show that the convolution operation of the lightweight convolutional network extracts local entity features more accurately, and that the max pooling operation captures the strong signal in local features, thereby avoiding globally redundant features. Compared with mainstream triple prediction baseline models, the proposed algorithm effectively reduces computational complexity while remaining robust, and it increases the inference accuracy of entities and relations by 8.1% and 4.4%, respectively. In short, the new method not only effectively extracts the local node and relation features of a knowledge graph but also satisfies the requirements of multilayer traversal and relation derivation over the knowledge graph.
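The abstract describes an architecture rather than code, but the general pipeline it outlines (translation-style embeddings combined into cross features, a graph-convolution layer over the local neighborhood, and max pooling in the scoring step) can be sketched minimally. The following numpy sketch is an illustrative assumption, not the authors' implementation: all names, dimensions, the cross-feature construction, and the scoring rule are invented for clarity.

```python
# Minimal sketch, assuming a TransE-style translation model, one GCN layer,
# and max pooling in the triple score. Not the TransC-GCN reference code.
import numpy as np

rng = np.random.default_rng(0)
n_entities, n_relations, dim = 6, 3, 8

# Translation-model embeddings: head + relation is trained to approximate tail.
E = rng.normal(size=(n_entities, dim))   # entity embeddings
R = rng.normal(size=(n_relations, dim))  # relation embeddings

# Toy knowledge graph as (head, relation, tail) triples.
triples = [(0, 0, 1), (1, 1, 2), (2, 2, 3), (3, 0, 4), (4, 1, 5)]

# "Cross" node features: each entity embedding shifted by the relations
# incident to it (a stand-in for the paper's cross vector construction).
X = E.copy()
for h, r, t in triples:
    X[h] = X[h] + R[r] / 2.0
    X[t] = X[t] - R[r] / 2.0

# Symmetric adjacency over entities, with self-loops.
A = np.eye(n_entities)
for h, _, t in triples:
    A[h, t] = A[t, h] = 1.0

# One graph-convolution layer: relu(D^{-1/2} A D^{-1/2} X W).
D_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
A_hat = D_inv_sqrt @ A @ D_inv_sqrt
W = rng.normal(size=(dim, dim)) * 0.1
H = np.maximum(A_hat @ X @ W, 0.0)       # convolved local entity features

def score(h, r, t):
    """Translation-style plausibility score on the convolved features.
    Max pooling keeps the strongest per-dimension signal of head and tail,
    which the abstract motivates as suppressing globally redundant features."""
    pooled = np.maximum(H[h], H[t])       # max pooling over the local pair
    residual = H[h] + R[r] - H[t]         # translation residual (smaller = better)
    return -np.linalg.norm(residual) + 0.01 * pooled.sum()

# Rank candidate tails for a query (head=0, relation=0): higher score is better.
ranking = sorted(range(n_entities), key=lambda t: -score(0, 0, t))
print("candidate tails ranked:", ranking)
```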


2020, Vol 17 (6), pp. 2684-2688
Author(s): Deepak Vats, Avinash Sharma

Real-world data has seen exponential growth in dimensionality. Examples of high-dimensional data include speech signals, sensor data, medical data, criminal records, and data used in recommendation systems for domains such as news, movies (Netflix), and e-commerce. To improve learning accuracy in machine learning and to enhance mining performance, redundant features and features irrelevant to the mining or learning task must be removed from such high-dimensional datasets. Many supervised and unsupervised dimension reduction methodologies exist in the literature. The objective of this paper is to present the most prominent dimension reduction methodologies and to highlight the advantages and disadvantages of these algorithms, providing a starting point for beginners in the field.
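As a concrete illustration of the kind of unsupervised technique such a survey covers, the sketch below applies principal component analysis (PCA) via the SVD to a toy high-dimensional dataset. The data, dimensions, and helper name are invented for illustration and are not taken from the paper.

```python
# Minimal sketch, assuming PCA as one representative unsupervised
# dimension reduction method. Toy data; not from the surveyed paper.
import numpy as np

rng = np.random.default_rng(42)

# Toy "high-dimensional" data: 200 samples, 50 features, ~5 of them informative.
latent = rng.normal(size=(200, 5))
mixing = rng.normal(size=(5, 50))
X = latent @ mixing + 0.01 * rng.normal(size=(200, 50))

def pca_reduce(X, k):
    """Project X onto its top-k principal components (hypothetical helper)."""
    X_centered = X - X.mean(axis=0)
    # Economy SVD: rows of Vt are the principal directions.
    U, S, Vt = np.linalg.svd(X_centered, full_matrices=False)
    explained = (S ** 2) / (S ** 2).sum()
    return X_centered @ Vt[:k].T, explained[:k]

X_reduced, explained = pca_reduce(X, k=5)
print(X_reduced.shape)        # (200, 5): redundant dimensions removed
print(explained.round(3))     # top-5 components carry nearly all the variance
```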


Author(s): Phillip Taylor, Nathan Griffths, Abhir Bhalerao, Thomas Popham, Xu Zhou, ...

2021
Author(s): Franklin Parrales-Bravo, Joel Torres-Urresto, Dayannara Avila-Maldonado, Julio Barzola-Monteses
