tensor space
Recently Published Documents


TOTAL DOCUMENTS: 89 (FIVE YEARS: 15)
H-INDEX: 12 (FIVE YEARS: 2)

2021, Vol 2021, pp. 1-8
Author(s): Haiqiu Li

Organizations typically use job analysis to determine the personnel characteristics each position requires and psychological measurement to profile each candidate, then match the two to place people in suitable positions. With the development of the information age, massive and complex data are produced, and accurately extracting the effective data an industry needs from such big data is an arduous task. In practice, personnel data are influenced by many factors; the time series they form are largely stochastic and often exhibit multilevel, multiscale characteristics. How to use an appropriate algorithm or data-processing technique to uncover the regularities contained in personnel data and derive a placement scheme has therefore become an important problem. In this paper, a multilayer variable neural network model for feature learning on complex big data is established to optimize the staffing scheme. The learning model is extended from vector space to tensor space, and the network parameters are estimated by a high-order backpropagation algorithm defined over tensor space. Compared with the traditional multilayer neural network model based on vector space, the tensor-space model can learn the features of complex data quickly and accurately and shows clear advantages.
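As a rough illustration of what "extending a learning model from vector space to tensor space" can mean, the sketch below implements a Tucker-style multilinear layer in which each mode of an input tensor gets its own weight matrix, instead of flattening the input into one vector. This is a minimal sketch, not the paper's model; the names `TensorLayer` and `mode_product`, and all sizes, are invented for this example.

```python
# A minimal sketch (not the paper's model) of extending a dense layer from
# vector inputs to tensor inputs: each mode of the input tensor gets its own
# weight matrix, so features along every axis are transformed jointly.
import numpy as np

def mode_product(x, w, mode):
    """Multiply tensor x by matrix w along the given mode."""
    x = np.moveaxis(x, mode, 0)
    shape = x.shape
    y = w @ x.reshape(shape[0], -1)
    return np.moveaxis(y.reshape((w.shape[0],) + shape[1:]), 0, mode)

class TensorLayer:
    def __init__(self, in_shape, out_shape, seed=0):
        rng = np.random.default_rng(seed)
        # One weight matrix per tensor mode (a Tucker-style multilinear map).
        self.weights = [rng.normal(0, 0.1, (o, i))
                        for i, o in zip(in_shape, out_shape)]

    def forward(self, x):
        for mode, w in enumerate(self.weights):
            x = mode_product(x, w, mode)
        return np.tanh(x)

# A hypothetical "personnel record" kept as a 2nd-order tensor
# (e.g. traits x time) instead of a flattened vector.
record = np.random.default_rng(1).normal(size=(8, 12))
layer = TensorLayer(in_shape=(8, 12), out_shape=(4, 6))
print(layer.forward(record).shape)  # (4, 6)
```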


2021, Vol 11 (20), pp. 9703
Author(s): Han-joon Kim, Pureum Lim

Most text classification systems use machine learning algorithms; among these, naïve Bayes and support vector machine algorithms adapted to handle text data afford reasonable performance. More recently, given developments in deep learning technology, several researchers have used deep neural networks (recurrent and convolutional neural networks) to improve text classification. However, deep learning-based text classification has not greatly outperformed conventional algorithms. The reason is that a textual document is essentially expressed as a single vector over word dimensions, which discards inherent semantic information even when the vector is transformed to incorporate conceptual information. To solve this 'loss of term senses' problem, we develop a concept-driven deep neural network based upon our semantic tensor space model. The semantic tensor used for text representation captures the dependency between terms and concepts; we use it to develop three deep neural networks for text classification. We perform experiments using three standard document corpora, and we show that our proposed methods are superior to both traditional and more recent learning methods.
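As an illustration of the kind of term-concept representation the abstract describes, the sketch below builds a 3rd-order (document × term × concept) tensor from toy data, so that an ambiguous term such as "bank" keeps both of its senses rather than collapsing into a single vector entry. The vocabulary, concepts, and weights are illustrative assumptions, not taken from the paper.

```python
# A minimal sketch of a document x term x concept semantic tensor:
# each document becomes a (term, concept) matrix, so term senses are
# kept instead of being flattened into one word-dimension vector.
import numpy as np

terms    = ["bank", "river", "loan"]
concepts = ["finance", "geography"]
# Hypothetical term -> concept association weights (rows sum to 1).
term_concept = np.array([[0.5, 0.5],   # "bank" is ambiguous
                         [0.0, 1.0],   # "river"
                         [1.0, 0.0]])  # "loan"

docs = [["bank", "loan", "bank"], ["river", "bank"]]
index = {t: i for i, t in enumerate(terms)}

# 3rd-order semantic tensor: documents x terms x concepts.
tensor = np.zeros((len(docs), len(terms), len(concepts)))
for d, doc in enumerate(docs):
    for word in doc:
        tensor[d, index[word]] += term_concept[index[word]]

print(tensor[0])  # doc 0: the mass for "bank" stays split across senses
```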


2019, pp. 1-25
Author(s): G. I. LEHRER, R. B. ZHANG

The first fundamental theorem of invariant theory for the orthosymplectic supergroup scheme $\text{OSp}(m|2n)$ states that there is a full functor from the Brauer category with parameter $m-2n$ to the category of tensor representations of $\text{OSp}(m|2n)$. This has recently been proved using algebraic supergeometry to relate the problem to the invariant theory of the general linear supergroup. In this work, we use the same circle of ideas to prove the second fundamental theorem for the orthosymplectic supergroup. Specifically, we give a linear description of the kernel of the surjective homomorphism from the Brauer algebra to the algebra of endomorphisms of tensor space that commute with the orthosymplectic supergroup. The main result has a clear and succinct formulation in terms of Brauer diagrams. Our proof includes, as special cases, new proofs of the corresponding second fundamental theorems for the classical orthogonal and symplectic groups, as well as their quantum analogues, which are independent of the Capelli identities. The results of this paper lead to the conclusion that the map from the Brauer algebra ${\mathcal{B}}_{r}(m-2n)$ to the endomorphisms of $V^{\otimes r}$ is an isomorphism if and only if $r<(m+1)(n+1)$.
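For orientation, the two statements singled out in the abstract can be written compactly as follows. This is a notational sketch only: $V$ denotes the natural $\text{OSp}(m|2n)$-module, and $F_r$ is a hypothetical name for the homomorphism, not notation taken from the paper.

```latex
% The surjection whose kernel the second fundamental theorem describes,
% and the isomorphism criterion quoted at the end of the abstract.
F_r \colon \mathcal{B}_r(m-2n) \twoheadrightarrow
    \operatorname{End}_{\operatorname{OSp}(m|2n)}\bigl(V^{\otimes r}\bigr),
\qquad
F_r \ \text{is an isomorphism} \iff r < (m+1)(n+1).
```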


Author(s): Lipeng Zhang, Peng Zhang, Xindian Ma, Shuqin Gu, Zhan Su, ...

In the literature, tensors have been used effectively to capture context information in language models. However, existing methods usually adopt relatively low-order tensors, which have limited expressive power for modeling language. Developing a higher-order tensor representation is challenging, both in deriving an effective solution and in showing its generality. In this paper, we propose a language model named the Tensor Space Language Model (TSLM), which utilizes tensor networks and tensor decomposition. In TSLM, we build a high-dimensional semantic space constructed from the tensor product of word vectors. Theoretically, we prove that this tensor representation is a generalization of the n-gram language model. We further show that the high-order tensor representation can be decomposed into a recursive calculation of conditional probabilities for language modeling. Experimental results on the Penn Treebank (PTB) dataset and the WikiText benchmark demonstrate the effectiveness of TSLM.
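The claimed generalization of the n-gram model is easiest to see in the one-hot special case: the order-n tensor product of one-hot word vectors has a single 1 at the index of the observed n-gram, so summing these tensors over a corpus recovers exact n-gram counts, while dense word vectors yield a smoothed, distributed version of the same object. Below is a minimal sketch under a toy vocabulary (an assumption for illustration, not from the paper).

```python
# Tensor products of one-hot word vectors reduce to an n-gram count table:
# each sentence contributes a 1 at the index of its observed n-gram.
import numpy as np
from functools import reduce

vocab = {"the": 0, "cat": 1, "sat": 2}
V = len(vocab)

def one_hot(word):
    v = np.zeros(V)
    v[vocab[word]] = 1.0
    return v

def ngram_tensor(words):
    # Order-n tensor product e_{w1} (x) ... (x) e_{wn}.
    return reduce(lambda a, b: np.multiply.outer(a, b),
                  (one_hot(w) for w in words))

counts = np.zeros((V, V, V))
for trigram in [("the", "cat", "sat"), ("the", "cat", "sat")]:
    counts += ngram_tensor(trigram)

print(counts[0, 1, 2])  # 2.0: the count of the trigram "the cat sat"
```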


2019
Author(s): Yaqub Jonmohamadi, Suresh Muthukumaraswamy, Joseph Chen, Jonathan Roberts, Ross Crawford, ...

The fusion of simultaneously recorded EEG and fMRI data is of great value to neuroscience research due to the complementary properties of the individual modalities. Traditionally, techniques such as PCA and ICA, which rely on strong non-physiological assumptions such as orthogonality and statistical independence, have been used for this purpose. Recently, tensor decomposition techniques such as parallel factor analysis have gained popularity in neuroimaging applications, as they inherently preserve the multidimensional structure of neuroimaging data and achieve uniqueness in decomposition without imposing strong assumptions. Previously, coupled matrix-tensor decomposition (CMTD) has been applied to the fusion of EEG and fMRI; coupled tensor-tensor decomposition (CTTD) has been proposed only recently. Here, for the first time, we propose the use of CTTD of a 4th-order EEG tensor (space, time, frequency, and participant) and a 3rd-order fMRI tensor (space, time, and participant), coupled partially in the time and participant domains, for the extraction of task-related features in both modalities. We used both sensor-level and source-level EEG for the coupling. Phase-shifted paradigm signals were incorporated as the temporal initializers of the CTTD to extract the task-related features. The approach is validated on simultaneous EEG-fMRI recordings from six participants performing an N-Back memory task. The EEG and fMRI tensors were coupled through 9 components, of which 7 had a high correlation (above 0.85) with the task. The fusion recapitulates the well-known attention network as positively time-locked, and the default mode network as negatively time-locked, to the memory task.
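To make the coupling concrete, here is a minimal sketch, under illustrative sizes, of the shared-factor CP structure the abstract describes: the EEG and fMRI tensors are generated from common time and participant factor matrices plus modality-specific spatial (and, for EEG, spectral) factors. This shows only the coupled model structure, not the paper's fitting procedure or initialization; all dimensions except the 9 components and 6 participants are assumptions.

```python
# A rank-R CP model in which a 4th-order EEG tensor (space, time,
# frequency, participant) and a 3rd-order fMRI tensor (space, time,
# participant) share their time and participant factor matrices.
import numpy as np

rng = np.random.default_rng(0)
R = 9                               # coupled components, as in the study
n_eeg_chan, n_vox = 32, 500         # illustrative spatial sizes
n_time, n_freq, n_subj = 200, 30, 6

# Shared (coupled) factors.
T = rng.normal(size=(n_time, R))    # time courses
S = rng.normal(size=(n_subj, R))    # participant loadings

# Modality-specific factors.
A_eeg  = rng.normal(size=(n_eeg_chan, R))  # EEG spatial maps
F_eeg  = rng.normal(size=(n_freq, R))      # EEG spectral signatures
A_fmri = rng.normal(size=(n_vox, R))       # fMRI spatial maps

# CP reconstruction: sum of rank-1 terms over the R components,
# with T and S appearing in both modalities (the coupling).
eeg  = np.einsum('ar,tr,fr,sr->atfs', A_eeg, T, F_eeg, S)
fmri = np.einsum('vr,tr,sr->vts', A_fmri, T, S)

print(eeg.shape, fmri.shape)  # (32, 200, 30, 6) (500, 200, 6)
```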

