column vector
Recently Published Documents


TOTAL DOCUMENTS: 41 (five years: 12)

H-INDEX: 5 (five years: 0)

2021, Vol 37, pp. 692-697
Author(s): LeRoy Beasley

Let $m$ and $n$ be positive integers, and let $R =(r_1, \ldots, r_m)$ and $S =(s_1,\ldots, s_n)$ be nonnegative integral vectors. Let $A(R,S)$ be the set of all $m \times n$ $(0,1)$-matrices with row sum vector $R$ and column sum vector $S$. Let $R$ and $S$ be nonincreasing, and let $F(R)$ be the $m \times n$ $(0,1)$-matrix whose $i^{th}$ row consists of $r_i$ 1's followed by $n-r_i$ 0's. Let $A\in A(R,S)$. The discrepancy of $A$, $disc(A)$, is the number of positions in which $F(R)$ has a 1 and $A$ has a 0. In this paper, we investigate the possible discrepancy of $A^t$ versus the discrepancy of $A$. We show that if the discrepancy of $A$ is $\ell$, then the discrepancy of the transpose of $A$ is at least $\frac{\ell}{2}$ and at most $2\ell$, and these bounds are tight.
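As a quick illustration of the definitions above, here is a minimal NumPy sketch (the matrix $A$ and the sum vectors are made up for illustration; they are not taken from the paper):

```python
import numpy as np

def ferrers_matrix(R, n):
    """F(R): row i consists of R[i] ones followed by n - R[i] zeros."""
    return np.array([[1 if j < r else 0 for j in range(n)] for r in R])

def discrepancy(A, R):
    """Number of positions where F(R) has a 1 and A has a 0."""
    F = ferrers_matrix(R, A.shape[1])
    return int(np.sum((F == 1) & (A == 0)))

# A (0,1)-matrix with nonincreasing row sums R = (2, 1)
# and column sums S = (1, 1, 1).
A = np.array([[1, 0, 1],
              [0, 1, 0]])
l  = discrepancy(A,   [2, 1])      # disc(A)   = 2
lt = discrepancy(A.T, [1, 1, 1])   # disc(A^t) = 1
print(l, lt)  # 2 1
```

Here disc(A^t) = 1 indeed lies within the stated bounds [ℓ/2, 2ℓ] = [1, 4] for ℓ = 2, and it attains the lower bound.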


Axioms, 2021, Vol 10 (3), pp. 193
Author(s): Xue Jiang, Kai Cui

Multivariate polynomial interpolation plays a crucial role in both scientific computation and engineering applications. Exploring the structure of D-invariant (closed under differentiation) polynomial subspaces is significant for multivariate Hermite-type interpolation (especially ideal interpolation). We analyze the structure of a D-invariant polynomial subspace Pn in terms of Cartesian tensors, where Pn is a subspace with maximal total degree equal to n, n ≥ 1. An arbitrary homogeneous polynomial p(k) of total degree k in Pn can be rewritten as the inner product of a k-th order symmetric Cartesian tensor with k column vectors of indeterminates. We show that p(k) is determined by the polynomials of total degree one in Pn. Namely, if we stack the linear polynomials in a basis of Pn into a column vector, then this vector can be written as a product of a coefficient matrix A(1) and a column vector of indeterminates; our main result shows that the k-th order symmetric Cartesian tensor corresponding to p(k) is a product of certain so-called relational matrices and A(1).
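The tensor representation used here is easy to see in the k = 2 case: a degree-2 homogeneous polynomial is the double contraction of a symmetric 2nd-order Cartesian tensor with two copies of the indeterminate vector. A minimal sketch (the example polynomial is made up, not from the paper):

```python
import numpy as np

# Degree-2 homogeneous polynomial p(x, y) = x^2 + 4xy + y^2,
# written as v^T A2 v with a SYMMETRIC 2nd-order tensor A2:
# the cross-term coefficient 4 splits evenly as 2 + 2.
A2 = np.array([[1.0, 2.0],
               [2.0, 1.0]])

def p(x, y):
    v = np.array([x, y])
    return v @ A2 @ v   # contraction with two copies of the indeterminates

print(p(1.0, 2.0))  # 1 + 8 + 4 = 13.0
```

For higher k the same idea uses a k-th order symmetric tensor contracted with k copies of the indeterminate column vector.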


Author(s): Ryohei Sasaki, Katsumi Konishi, Tomohiro Takahashi, Toshihiro Furukawa

Abstract This paper deals with a problem of matrix completion in which each column vector of the matrix belongs to a low-dimensional differentiable manifold (LDDM), with the target matrix being high or full rank. To solve this problem, algorithms based on polynomial mapping and matrix-rank minimization (MRM) have been proposed; such methods assume that each column vector of the target matrix is generated as a vector in a low-dimensional linear subspace (LDLS) and mapped to a p-th order polynomial, and that the rank of a matrix whose column vectors are d-th monomial features of target column vectors is deficient. However, a large number of columns and observed values are needed to strictly solve the MRM problem using this method when p is large; therefore, this paper proposes a new method for obtaining the solution by minimizing the rank of the submatrix without transforming the target matrix, so as to obtain high estimation accuracy even when the number of columns is small. This method is based on the assumption that an LDDM can be approximated locally as an LDLS to achieve high completion accuracy without transforming the target matrix. Numerical examples show that the proposed method has a higher accuracy than other low-rank approaches.
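The rank-deficiency assumption behind the monomial-feature approach can be illustrated with a toy data set (the curve and dimensions below are invented for illustration and are not the paper's experiment): columns on a 1-dimensional manifold fill the ambient space at full rank, yet their degree-2 monomial features satisfy a linear relation, so the lifted matrix is rank-deficient.

```python
import numpy as np

rng = np.random.default_rng(0)
t = rng.uniform(-1, 1, size=20)

# Columns lie on the curve (t, t^2): a 1-dimensional manifold,
# but the 2 x 20 data matrix is still full rank.
X = np.vstack([t, t**2])
print(np.linalg.matrix_rank(X))  # 2 (full rank)

# Degree-2 monomial features of each column: [1, x1, x2, x1^2, x1*x2, x2^2].
# On the curve x2 = x1^2, so the features obey the linear relation
# x2 - x1^2 = 0, and the lifted 6 x 20 matrix is rank-deficient.
x1, x2 = X
Phi = np.vstack([np.ones_like(x1), x1, x2, x1**2, x1 * x2, x2**2])
print(np.linalg.matrix_rank(Phi))  # 5 < 6
```

Minimizing the rank of such a lifted matrix recovers missing entries; the paper's point is that when p is large this lifting needs many columns, motivating its local LDLS approximation instead.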


2021, Vol 2021 (2)
Author(s): Maxwell T. Hansen, Fernando Romero-López, Stephen R. Sharpe

We have found an error in a statement following eq. (2.5) of our paper, concerning the function f(a, b, k) that first appears in that equation. The issue arises in the statement that it is convenient to take the function f(a, b, k) to be exchange symmetric with respect to its three arguments. This has the unwanted consequence of making six of the seven operators in the column vector of eq. (2.4) identically equal. This, in turn, implies that many operators are identically zero in the definite isospin basis, considered in section 2.4. To repair this, the last sentence of the paragraph containing eq. (2.4), starting “It is convenient for the subsequent…”, should be removed, as should footnote 3 and the next paragraph, beginning with “At this point, the reader may wonder why…”.


2021
Author(s): Ryohei Sasaki, Katsumi Konishi, Tomohiro Takahashi, Toshihiro Furukawa

Abstract This paper deals with a problem of matrix completion in which each column vector of the matrix belongs to a low-dimensional differentiable manifold (LDDM), with the target matrix being high or full rank. To solve this problem, algorithms based on polynomial mapping and matrix-rank minimization (MRM) have been proposed; such methods assume that each column vector of the target matrix is generated as a vector in a low-dimensional linear subspace (LDLS) and mapped to a p-th order polynomial, and that the rank of a matrix whose column vectors are d-th monomial features of target column vectors is deficient. However, a large number of columns and observed values are needed to strictly solve the MRM problem using this method when p is large; therefore, this paper proposes a new method for obtaining the solution by minimizing the rank of the submatrix without transforming the target matrix, so as to obtain high estimation accuracy even when the number of columns is small. This method is based on the assumption that an LDDM can be approximated locally as an LDLS to achieve high completion accuracy without transforming the target matrix. Numerical examples show that the proposed method has a higher accuracy than other low-rank approaches.


2020
Author(s): Cody A. Freas, Marcia L. Spetch, Jenna Congdon

The desert harvester ant (Veromessor pergandei) employs a mixture of social and individual navigational strategies at separate stages of its foraging trip. Individuals leave the nest along a pheromone-based column, travelling 3-40 m before spreading out to forage individually in a fan. Foragers use path integration while in this fan, accumulating a direction and distance estimate (vector) to return to the end of the column (column head), yet foragers’ potential use of path integration in the pheromone-based column is less understood. Here we show that foragers rely on path integration both in the foraging fan and while in the column to return to the nest, using separate vectors depending on their current foraging stage in the fan or column. Returning foragers displaced while in the fan oriented and travelled to the column head location, while those displaced after reaching the column travelled in the nest direction, signifying the maintenance of a two-vector system with separate fan and column vectors directing a forager to two separate spatial locations. Interestingly, the trail pheromone, and not the surrounding terrestrial cues, mediates the use of these distinct vectors, as fan foragers briefly exposed to the pheromone cues of the column in isolation altered their paths to a combination of the fan and column vectors. The pheromone cue acts as a contextual cue triggering both the retrieval of the column vector memory and its integration with the forager’s current fan vector.
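The two-vector system described above can be sketched as two separate path-integration accumulators (all coordinates below are hypothetical, chosen only to illustrate the geometry):

```python
import numpy as np

# Hypothetical layout: nest at the origin, column head 10 m east.
nest = np.array([0.0, 0.0])
column_head = np.array([10.0, 0.0])

# Outbound steps taken in the fan, after leaving the column head.
fan_steps = np.array([[2.0, 3.0],
                      [1.0, 1.5]])
fan_position = column_head + fan_steps.sum(axis=0)

# Two-vector system: the fan vector returns the ant to the column head;
# the column vector (retrieved on contact with the trail pheromone)
# returns it from the column head to the nest.
fan_vector = column_head - fan_position   # (-3.0, -4.5)
column_vector = nest - column_head        # (-10.0, 0.0)
print(fan_vector, column_vector)
```

Displacement in the fan leaves the fan vector pointing at the column head, while displacement after reaching the column leaves only the nest-directed column vector, matching the behaviour reported above.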


2020
Author(s): Ryohei Sasaki, Katsumi Konishi, Tomohiro Takahashi, Toshihiro Furukawa

Abstract This paper deals with a problem of matrix completion in which each column vector of the matrix belongs to a low-dimensional differentiable manifold (LDDM), with the target matrix being high or full rank. To solve this problem, algorithms based on polynomial mapping and matrix-rank minimization (MRM) have been proposed; such methods assume that each column vector of the target matrix is generated as a vector in a low-dimensional linear subspace (LDLS) and mapped to a p-th order polynomial, and that the rank of a matrix whose column vectors are d-th monomial features of target column vectors is deficient. However, a large number of columns and observed values are needed to strictly solve the MRM problem using this method when p is large; therefore, this paper proposes a new method for obtaining the solution by minimizing the rank of the submatrix without transforming the target matrix, so as to obtain high estimation accuracy even when the number of columns is small. This method is based on the assumption that an LDDM can be approximated locally as an LDLS to achieve high completion accuracy without transforming the target matrix. Numerical examples show that the proposed method has a higher accuracy than other low-rank approaches.


2020, Vol 70 (3), pp. 505-526
Author(s): Yichao Chen, Jonathan L. Gross, Toufik Mansour, Thomas W. Tucker

Abstract Given a finite graph H, the nth member Gn of an H-linear sequence is obtained recursively by attaching a disjoint copy of H to the last copy of H in Gn−1 by adding edges or identifying vertices, always in the same way. The genus polynomial $\Gamma_G(z)$ of a graph G is the generating function enumerating all orientable embeddings of G by genus. Over the past 30 years, most calculations of genus polynomials $\Gamma_{G_n}(z)$ for the graphs in a linear family have been obtained by partitioning the embeddings of Gn into types 1, 2, …, k with polynomials $\Gamma_{G_n}^j(z)$, for j = 1, 2, …, k; from these polynomials, we form a column vector $V_n(z) = [\Gamma_{G_n}^1(z), \Gamma_{G_n}^2(z), \ldots, \Gamma_{G_n}^k(z)]^t$ that satisfies a recursion $V_n(z) = M(z)V_{n-1}(z)$, where M(z) is a k × k matrix of polynomials in z. In this paper, the Cayley-Hamilton theorem is used to derive a kth degree linear recursion for $\Gamma_{G_n}(z)$, allowing us to avoid the partitioning, thereby yielding a reduction from k2 multiplications of polynomials to k such multiplications. Moreover, that linear recursion can facilitate proofs of real-rootedness and log-concavity of the polynomials. We illustrate with examples.
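The Cayley-Hamilton step is easy to check numerically for k = 2: since M² = tr(M)·M − det(M)·I, any scalar built linearly from Vn satisfies a second-order linear recursion. A minimal sketch, with M evaluated at a fixed z and its entries invented for illustration (not taken from the paper):

```python
import numpy as np

# Transfer-matrix recursion V_n = M V_{n-1} for k = 2 partition classes,
# with M(z) evaluated at a fixed z, so its entries are just numbers.
M = np.array([[2.0, 1.0],
              [1.0, 3.0]])
V = np.array([1.0, 1.0])   # V_1: the partitioned polynomials at this z

history = [V]
for _ in range(6):
    V = M @ V
    history.append(V)
gamma = [v.sum() for v in history]   # total genus polynomial value at z

# Cayley-Hamilton: M^2 = tr(M) M - det(M) I, hence
# g_n = tr(M) g_{n-1} - det(M) g_{n-2} without tracking the partition.
tr, det = np.trace(M), np.linalg.det(M)
for n in range(2, len(gamma)):
    assert np.isclose(gamma[n], tr * gamma[n-1] - det * gamma[n-2])
print("second-order scalar recursion verified")
```

For general k the characteristic polynomial of M(z) supplies the k coefficients of the scalar recursion, which is the reduction from k² polynomial multiplications to k described above.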


The 2D aspects of computer graphics, such as vector primitives and 2D transformations, are important in creating 2D content. Transformations are the effective means of shifting objects or changing their dimensions and orientations. If we fail to transform an object correctly in terms of displacement, enlargement, or orientation, we may end up creating something distorted, and processing a distorted object is not acceptable. The usual practice of defining transformations is straightforward: the transformed object is obtained by coupling the original object with the transformation matrices. The main question is how to evaluate this product. The standard practice is the column-vector form; the row-vector form is also a known approach, but what matters is the sequence of operations, which makes both approaches worth examining. Such analysis of 2D content keeps our understanding sound and takes it a step further as far as image processing is concerned. This analytical study is vital because most content created, acquired, reproduced, and visualized in 2D needs to be mapped onto 3D. This paper describes the transformations (translation, scaling, and rotation) in both the column-vector and row-vector approaches, and aims to provide a clear sequence of calculations, which differs between the two approaches.
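The difference in operation order between the two conventions can be sketched as follows (the specific rotation angle and scale factors are made up for illustration):

```python
import numpy as np

theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # rotation by 90 degrees
S = np.diag([2.0, 3.0])                           # non-uniform scaling

p = np.array([1.0, 0.0])   # the point being transformed

# Column-vector convention: matrices apply right-to-left, so
# "scale, then rotate" is written R @ S @ p.
q_col = R @ S @ p

# Row-vector convention: the same pipeline uses transposed matrices
# applied left-to-right: p @ S.T @ R.T.
q_row = p @ S.T @ R.T

print(np.allclose(q_col, q_row))  # True: same result, reversed sequence
print(np.round(q_col, 6))         # [0. 2.]
```

The two conventions produce identical points; what changes is the written order of the matrix chain, which is exactly the sequence-of-calculations issue the paper examines.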

