Redundant Feature: Recently Published Documents


TOTAL DOCUMENTS: 43 (five years: 13)
H-INDEX: 6 (five years: 2)

2021 ◽  
Vol 2021 ◽  
pp. 1-13
Author(s):  
Luogeng Tian ◽  
Bailong Yang ◽  
Xinli Yin ◽  
Kai Kang ◽  
Jing Wu

Most previous embedding-based entity prediction methods do not train on local core relationships, which prevents fully end-to-end training. To address this problem, we propose an end-to-end knowledge graph embedding method that combines local graph convolution with global cross learning, called the TransC graph convolutional network (TransC-GCN). First, multiple local semantic spaces are partitioned according to the largest neighborhood. Second, a translation model maps the local entities and relations into a cross vector, which serves as the input to the GCN. Third, the best entities and strongest relations are found by training on the local semantic relations, and the optimal entity-relation ranking is obtained by evaluating a posterior loss function based on mutual information entropy. Experiments show that the convolution operation of the lightweight convolutional neural network extracts local entity features more accurately, while max pooling captures the strong signal in the local features and thus avoids globally redundant features. Compared with mainstream triple prediction baselines, the proposed algorithm reduces computational complexity while remaining robust, and it increases the inference accuracy of entities and relations by 8.1% and 4.4%, respectively. In short, the method not only extracts local node and relation features of the knowledge graph effectively but also satisfies the requirements of multilayer traversal and relation derivation over a knowledge graph.
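The abstract sketches a pipeline in which translation-style embeddings ("cross vectors") feed a graph convolution over a local neighborhood, followed by max pooling. The paper's exact architecture is not reproduced here; the following is only a minimal PyTorch sketch of that general idea, and every module and tensor name is illustrative rather than the authors'.

```python
# Minimal sketch (not the authors' code): TransE-style translation embeddings
# feeding a single graph-convolution layer with max pooling over neighbors.
import torch
import torch.nn as nn

class TranslationGCNSketch(nn.Module):
    def __init__(self, n_entities, n_relations, dim=64):
        super().__init__()
        self.ent = nn.Embedding(n_entities, dim)    # entity embeddings
        self.rel = nn.Embedding(n_relations, dim)   # relation embeddings
        self.gcn = nn.Linear(dim, dim)              # one GCN-style projection

    def forward(self, heads, rels, neighbor_heads, neighbor_rels):
        # TransE-style "cross vector": head + relation approximates the tail.
        h = self.ent(heads) + self.rel(rels)                       # (B, dim)
        nb = self.ent(neighbor_heads) + self.rel(neighbor_rels)    # (B, K, dim)
        # Aggregate the local neighborhood and max-pool the strongest signal.
        agg = torch.relu(self.gcn(nb))                             # (B, K, dim)
        pooled, _ = agg.max(dim=1)                                 # (B, dim)
        return h + pooled                                          # predicted tail embedding

model = TranslationGCNSketch(n_entities=1000, n_relations=20)
pred = model(torch.tensor([0, 1]), torch.tensor([3, 4]),
             torch.randint(0, 1000, (2, 5)), torch.randint(0, 20, (2, 5)))
print(pred.shape)  # torch.Size([2, 64])
```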


2021 ◽  
Vol 2021 ◽  
pp. 1-7
Author(s):  
Bo Liu ◽  
Ning Yang ◽  
Xiangwei Han ◽  
Chen Liu

Passing is a fundamental technique in volleyball, and training the correct passing technique plays an important role in volleyball teaching. A correct pass not only controls the direction and landing point of the ball accurately but also connects defense and offense effectively. To improve the efficiency and quality of volleyball passing training, this paper studies a method for detecting incorrect passing movements based on a nested convolutional neural network, with the goals of extracting the moving targets precisely, reducing redundant feature information, and improving the generalization performance and nonlinear fitting ability of the algorithm. The structure of the convolutional neural network is improved by nesting mlpconv layers, and a Gaussian mixture model is used to extract the foreground objects in the video accurately. The nested mlpconv layers automatically learn deep features of the foreground target, and the resulting feature maps are vectorized and fed to a Softmax classifier connected to the fully connected layer to detect incorrect passing behavior in volleyball training. Simulation experiments on an action dataset of nearly 1,000 athletes show that the algorithm reduces the amount of redundant information acquired, shortens the computation and learning time, improves the generalization performance and nonlinear fitting ability of the convolutional neural network, and achieves a higher accuracy in detecting abnormal volleyball passing behavior.
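Nesting mlpconv layers follows the Network-in-Network idea of stacking 1x1 convolutions after a spatial convolution, and Gaussian-mixture background subtraction is a standard OpenCV facility. The snippet below is a rough, hedged illustration of those two building blocks only, not the paper's actual network or parameters.

```python
# Illustrative sketch only: an mlpconv block (spatial conv followed by 1x1 convs,
# as in Network-in-Network) and GMM-based foreground extraction with OpenCV.
import torch.nn as nn
import cv2

def mlpconv_block(in_ch, out_ch):
    # A spatial convolution followed by two 1x1 "micro MLP" convolutions.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(),
        nn.Conv2d(out_ch, out_ch, kernel_size=1), nn.ReLU(),
        nn.Conv2d(out_ch, out_ch, kernel_size=1), nn.ReLU(),
        nn.MaxPool2d(2),
    )

# Gaussian-mixture background subtraction to isolate the moving player/ball.
subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=16)

def foreground_mask(frame):
    return subtractor.apply(frame)   # 0 = background, 255 = foreground
```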


2021 ◽  
Author(s):  
Franklin Parrales-Bravo ◽  
Joel Torres-Urresto ◽  
Dayannara Avila-Maldonado ◽  
Julio Barzola-Monteses

2021 ◽  
Vol 16 ◽  
Author(s):  
Fu-Ying Dao ◽  
Hao Lv ◽  
Zhao-Yue Zhang ◽  
Hao Lin

Background: The curse of dimensionality (dimension disaster) is often associated with feature extraction; the extracted features may contain considerable redundant information, which strains computing resources and leads to overfitting.
Objective: Feature selection is an important strategy to overcome these problems. In most machine learning tasks, the features determine the upper limit of model performance, so more feature selection methods should be developed to optimize features.
Methods: In this paper, we introduce a new technique to optimize sequence features based on the binomial distribution (BD). First, the principle of the binomial distribution algorithm is introduced in detail. Then, the proposed algorithm is compared with other commonly used feature selection methods on three different types of datasets, using a Random Forest classifier with the same parameters.
Results and Conclusion: The results confirm that BD yields a promising improvement in feature selection and classification accuracy. Finally, we provide the source code and an executable program package (http://lin-group.cn/server/BDselect/), with which users can easily apply our algorithm in their research.
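The abstract does not spell out the BD scoring rule, so the sketch below shows only the generic recipe such methods follow: score each count feature by the binomial tail probability of its occurrences concentrating in one class, keep the top-ranked features, and evaluate them with a Random Forest. All parameters are illustrative; this is not the BDselect implementation.

```python
# Generic sketch of binomial-distribution feature ranking; the scoring details
# and parameters are illustrative, not those of the BDselect package.
import numpy as np
from scipy.stats import binom
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def binomial_scores(X_counts, y, positive=1):
    """Score each count feature by the binomial tail probability of its
    occurrences falling in the positive class under the class prior."""
    prior = np.mean(y == positive)               # chance of landing in the positive class
    total = X_counts.sum(axis=0)                 # total occurrences per feature
    pos = X_counts[y == positive].sum(axis=0)    # occurrences in the positive class
    # Smaller tail probability => more class-specific feature.
    return binom.sf(pos - 1, total, prior)

rng = np.random.default_rng(0)
X = rng.poisson(2.0, size=(200, 50))             # toy count features
y = rng.integers(0, 2, size=200)

scores = binomial_scores(X, y)
top = np.argsort(scores)[:10]                    # keep the 10 most significant features
acc = cross_val_score(RandomForestClassifier(n_estimators=100, random_state=0),
                      X[:, top], y, cv=5).mean()
print(acc)
```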


2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Shunhao Jin ◽  
Fenlin Liu ◽  
Chunfang Yang ◽  
Yuanyuan Ma ◽  
Yuan Liu

The popular Rich Model steganalysis features usually contain a large number of redundant feature components, which can bring the "curse of dimensionality" and a large computation cost, yet existing feature selection methods struggle to reduce the dimensionality effectively when many strongly correlated effective components are present. This paper proposes a novel selection method for Rich Model steganalysis features. First, the separability of each feature component in the submodels of the Rich Model is measured with the Fisher criterion, and the components are sorted in descending order of separability. Second, the correlation coefficient between every pair of feature components in each submodel is calculated, and feature selection is performed according to each component's Fisher value and these correlation coefficients. Finally, the selected submodels are combined into the final steganalysis feature. The results show that the proposed method effectively reduces the dimensionality of JPEG-domain and spatial-domain Rich Model steganalysis features without affecting detection accuracy.
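The selection rule described, ranking components by Fisher separability and then discarding components too strongly correlated with those already kept, is close to a standard filter method. The sketch below illustrates that generic procedure; the threshold and combination rule are placeholders rather than the paper's values.

```python
# Illustrative sketch of Fisher-ratio ranking followed by a correlation filter;
# the threshold and exact combination rule are placeholders, not the paper's.
import numpy as np

def fisher_ratio(X, y):
    """Between-class over within-class variance for each feature (two classes)."""
    X0, X1 = X[y == 0], X[y == 1]
    num = (X0.mean(axis=0) - X1.mean(axis=0)) ** 2
    den = X0.var(axis=0) + X1.var(axis=0) + 1e-12
    return num / den

def select_features(X, y, corr_threshold=0.9):
    order = np.argsort(fisher_ratio(X, y))[::-1]   # descending separability
    corr = np.abs(np.corrcoef(X, rowvar=False))    # |correlation| between components
    selected = []
    for j in order:
        # Keep a component only if it is not too correlated with any kept one.
        if all(corr[j, k] < corr_threshold for k in selected):
            selected.append(j)
    return np.array(selected)

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 40))
y = rng.integers(0, 2, size=300)
print(select_features(X, y)[:10])
```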


2020 ◽  
Vol 17 (6) ◽  
pp. 2684-2688
Author(s):  
Deepak Vats ◽  
Avinash Sharma

Real-world data have grown exponentially in dimensionality. Examples of high-dimensional data include speech signals, sensor data, medical data, criminal data, and data used in recommendation systems for domains such as news, movies (Netflix), and e-commerce. To improve learning accuracy and mining performance in machine learning, redundant features and features irrelevant to the mining and learning task must be removed from such high-dimensional datasets. The literature offers many supervised and unsupervised dimension reduction methodologies. The objective of this paper is to present the most prominent dimension reduction methodologies and to highlight their advantages and disadvantages, serving as a starting point for beginners in this field.
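As a quick, hedged illustration of the two families the survey covers, the snippet below applies one unsupervised method (PCA) and one supervised method (LDA) from scikit-learn to a toy dataset; the choice of methods is only an example, not the survey's recommendation.

```python
# Toy example of unsupervised (PCA) vs. supervised (LDA) dimension reduction;
# these are representative methods only, not a recommendation from the survey.
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X, y = load_digits(return_X_y=True)          # 64-dimensional digit images

X_pca = PCA(n_components=10).fit_transform(X)                            # unsupervised: variance only
X_lda = LinearDiscriminantAnalysis(n_components=9).fit_transform(X, y)   # supervised: uses labels

print(X.shape, X_pca.shape, X_lda.shape)     # (1797, 64) (1797, 10) (1797, 9)
```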


IEEE Access ◽  
2020 ◽  
Vol 8 ◽  
pp. 181763-181781
Author(s):  
Syed Fawad Hussain ◽  
Hafiz Zaheer-Ud-Din Babar ◽  
Akhtar Khalil ◽  
Rashad M. Jillani ◽  
Muhammad Hanif ◽  
...  

2019 ◽  
Vol 118 ◽  
pp. 148-158 ◽  
Author(s):  
Babajide O. Ayinde ◽  
Tamer Inanc ◽  
Jacek M. Zurada

2019 ◽  
Author(s):  
Rishabh Raj ◽  
Dar Dahlen ◽  
Kyle Duyck ◽  
C. Ron Yu

Abstract: The brain has a remarkable ability to recognize objects from noisy or corrupted sensory inputs. How this cognitive robustness is achieved computationally remains unknown. We present a coding paradigm that encodes structural dependence among features of the input and transforms various forms of the same input into the same representation. Through a dimensionally expanded representation and a sparsity constraint, the paradigm allows redundant feature coding to enhance robustness and represents objects efficiently. We demonstrate consistent representations of visual and olfactory objects under occlusion, high noise, or corrupted coding units. Robust face recognition is achievable without deep layers or large training sets. The paradigm produces both complex and simple receptive fields depending on learning experience, thereby offering a unifying framework of sensory processing.
One-line abstract: We present a framework for efficient coding of objects as a combination of structurally dependent feature groups that is robust against noise and corruption.
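The ingredients named in the abstract, a dimensionally expanded (overcomplete) representation with a sparsity constraint, correspond closely to classical sparse dictionary coding. The toy sketch below uses scikit-learn's dictionary learning as a generic stand-in, not the authors' model, to show how noisy versions of an input can map to similar sparse codes.

```python
# Generic sparse-coding toy example: learn an overcomplete dictionary so that
# noisy versions of an input map to similar sparse codes. Not the paper's model.
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))                       # toy "sensory inputs"

# Dimensionally expanded representation: 64 atoms for 16-dimensional inputs,
# with an L1 sparsity penalty on the codes.
dico = DictionaryLearning(n_components=64, alpha=1.0,
                          transform_algorithm="lasso_lars",
                          transform_alpha=1.0, random_state=0)
codes_clean = dico.fit_transform(X)

# The same inputs with additive noise should land on similar sparse codes.
codes_noisy = dico.transform(X + 0.1 * rng.normal(size=X.shape))
print(np.mean(np.abs(codes_clean - codes_noisy)))
```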

