Sparse Matrix Representation
Recently Published Documents


TOTAL DOCUMENTS

22
(FIVE YEARS 6)

H-INDEX

5
(FIVE YEARS 0)

2022 ◽  
Vol 2022 ◽  
pp. 1-6
Author(s):  
Xinxin Wang

International trade communication has become increasingly frequent, and to improve its quality this paper studies the translation of international trade English phrases and grammar. First, a rectangular window function is used to determine the compositional principles of international trade English phrases. Then, a horizontal feature-aggregation-point method is introduced to build a mathematical model for identifying the characteristic features of these phrases. Finally, a sparse matrix representation of the source phrase is constructed to complete the extraction and preprocessing of the English phrase features. The input English sentence is converted into an output sentence, the surface-form and part-of-speech (POS) factors of the English semantic translation are extracted, and a lemma is introduced to obtain the surface form of international trade English language factors. Following the international trade grammar analysis method, the translation model is decomposed and English sentences are broken into small phrases for translation, completing the study of international trade English phrase and grammar translation. The experimental results show that the method achieves high accuracy and a low error rate in extracting features of international trade English phrases, which demonstrates its feasibility.
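The abstract does not include implementation details; as a rough, hypothetical sketch, the sparse matrix representation of extracted phrase features could be built along the following lines. The feature values, vocabulary size, and the use of scipy.sparse are illustrative assumptions, not the authors' method.

```python
# Hypothetical sketch: encoding extracted phrase features as a sparse matrix.
# The feature indices and values below are illustrative, not the paper's data.
from scipy.sparse import csr_matrix

# Assume each source phrase has already been mapped to (feature_index, value)
# pairs by a feature-identification step.
phrases = [
    {0: 1.0, 7: 2.0},          # features of phrase 1
    {3: 1.0},                  # features of phrase 2
    {0: 1.0, 3: 1.0, 9: 0.5},  # features of phrase 3
]
n_features = 10

# Build the CSR (compressed sparse row) representation: one row per phrase.
data, indices, indptr = [], [], [0]
for feats in phrases:
    for idx, val in sorted(feats.items()):
        indices.append(idx)
        data.append(val)
    indptr.append(len(indices))

X = csr_matrix((data, indices, indptr), shape=(len(phrases), n_features))
print(X.toarray())  # dense view, for inspection only
```

Only the nonzero feature values are stored, which keeps preprocessing cheap when each phrase activates only a handful of features out of a large feature space.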


Author(s):  
Tuan Quoc Nguyen ◽  
Katsumi Inoue ◽  
Chiaki Sakama

Algebraic characterization of logic programs has received increasing attention in recent years. Researchers attempt to exploit connections between linear algebraic computation and symbolic computation to perform logical inference in large-scale knowledge bases. In this paper, we analyze the complexity of linear algebraic methods for logic programs and propose a further improvement: using sparse matrices to embed logic programs in vector spaces. We show the method's computational power in reaching the fixed point of the immediate consequence operator. In particular, performance for computing the least models of definite programs is dramatically improved by the sparse matrix representation. We also apply the method to the computation of stable models of normal programs, in which guesses are associated with initial matrices, and verify its effectiveness when the number of negations is small. These results show a substantial performance gain in computing the consequences of programs and illustrate the potential power of tensorized logic programs.
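The abstract does not spell out the encoding; the following is a minimal sketch of the general idea only, assuming a simple scheme (one sparse row per rule, entries 1/|body|, firing threshold 1) and iterating the immediate consequence operator to its fixed point. The program, names, and scipy-based encoding are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch: least model of a definite program via a sparse matrix and
# repeated application of the immediate consequence operator T_P.
import numpy as np
from scipy.sparse import csr_matrix

# Example program (atoms indexed 0..3):
#   p :- q, r.      q :- s.      facts: r, s
atoms = ["p", "q", "r", "s"]
rules = [(0, [1, 2]), (1, [3])]   # (head, body) pairs
facts = [2, 3]

# One row per rule; entry 1/len(body) for each body atom.
rows, cols, vals = [], [], []
for i, (_, body) in enumerate(rules):
    for b in body:
        rows.append(i)
        cols.append(b)
        vals.append(1.0 / len(body))
M = csr_matrix((vals, (rows, cols)), shape=(len(rules), len(atoms)))
heads = np.array([h for h, _ in rules])

# Iterate T_P: a rule fires when all its body atoms are true, i.e. when its
# row's weighted sum against the current interpretation reaches 1.
v = np.zeros(len(atoms))
v[facts] = 1.0
while True:
    fired = (M @ v) >= 1.0 - 1e-9
    new_v = v.copy()
    new_v[heads[fired]] = 1.0
    if np.array_equal(new_v, v):
        break
    v = new_v

print([a for a, t in zip(atoms, v) if t])  # least model: ['p', 'q', 'r', 's']
```

Because each matrix-vector product touches only the stored nonzeros, the cost per T_P step scales with the total body size of the program rather than with (number of rules) x (number of atoms), which is the kind of saving the abstract attributes to the sparse representation.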


2017 ◽  
Vol 36 (5) ◽  
pp. 59-69 ◽  
Author(s):  
J. S. Mueller-Roemer ◽  
C. Altenhofen ◽  
A. Stork
