full rank
Recently Published Documents

TOTAL DOCUMENTS: 390 (five years: 117)
H-INDEX: 22 (five years: 4)

Minerals, 2022, Vol 12 (1), pp. 97
Author(s): Georgy Alexandrovich Peshkov, Evgeny Mikhailovich Chekhonin, Dimitri Vladilenovich Pissarenko

Some of the simplifying assumptions frequently used in basin modelling may adversely impact the quality of the constructed models. One common assumption is the use of a laterally homogeneous crustal basement, even though lateral variations in its properties may significantly affect the thermal evolution of the model. We propose a new method for the express evaluation of the impact of the basement’s heterogeneity on thermal history reconstruction and on the assessment of source-rock maturity. The proposed method is based on reduced-rank inversion, aimed at a simultaneous reconstruction of the petrophysical properties of the heterogeneous basement and of its geometry. The method uses structural information taken from geological maps of the basement together with gravity anomaly data. We applied our method to a dataset from Western Siberia and carried out a two-dimensional reconstruction of the evolution of the basin and of the lithosphere. We performed a sensitivity analysis of the reconstructed basin model to assess the effect of uncertainties in the basement’s density and thermal conductivity on the model’s predictions. The proposed method can be used as an express evaluation tool to assess the necessity and relevance of laterally heterogeneous parametrisations prior to a costly three-dimensional full-rank basin modelling study. The method is generally applicable to extensional basins, with the exception of salt tectonic provinces.
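The core of the reduced-rank inversion idea can be sketched generically with a truncated SVD; the forward operator, data, and sizes below are illustrative stand-ins, not the authors' actual basin parametrisation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative linear(ised) forward model G m = d: m holds basement property
# contrasts, d holds gravity-anomaly observations (names are hypothetical).
G = rng.normal(size=(30, 10))
m_true = rng.normal(size=10)
d = G @ m_true + 0.01 * rng.normal(size=30)

def reduced_rank_solve(G, d, k):
    """Invert G using only its k dominant singular directions."""
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    return Vt[:k].T @ ((U[:, :k].T @ d) / s[:k])

m_full = reduced_rank_solve(G, d, 10)  # full-rank least-squares solution
m_red = reduced_rank_solve(G, d, 6)    # reduced-rank, regularised solution
```

Keeping only the dominant singular directions damps the influence of noise and poor conditioning, at the cost of losing the components of the model lying in the discarded directions.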


Author(s): Maher Nouiehed, Meisam Razaviyayn

With the increasing popularity of nonconvex deep models, developing a unifying theory for the optimization problems that arise from training these models has become increasingly important. Toward this end, we present a unifying landscape analysis framework that applies when the training objective function is a composition of simple functions. Using the local openness property of the underlying training models, we provide simple sufficient conditions under which any local optimum of the resulting optimization problem is globally optimal. We first completely characterize the local openness of the symmetric and nonsymmetric matrix multiplication mappings. We then use this characterization to (1) provide a simple proof of the classical result of Burer-Monteiro and extend it to noncontinuous loss functions; (2) show that every local optimum of a two-layer linear network is globally optimal (unlike many existing results in the literature, our result requires no assumption on the target data matrix [Formula: see text] or the input data matrix [Formula: see text]); (3) develop a complete characterization of the local/global optima equivalence of multilayer linear neural networks (we provide various counterexamples to show the necessity of each of our assumptions); and (4) show the global/local optima equivalence of overparameterized nonlinear deep models having a certain pyramidal structure. In contrast to existing works, our result requires no assumption on the differentiability of the activation functions and goes beyond “full-rank” cases.
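The local-equals-global phenomenon for two-layer linear models can be illustrated numerically with a Burer-Monteiro-style factorisation (all sizes, seeds, and hyperparameters here are arbitrary choices for the sketch): plain gradient descent on the nonconvex factorised objective reaches the best rank-r approximation, whose optimal value is known in closed form from the SVD.

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(size=(5, 5))      # target matrix, stands in for the data
r = 2                            # factorisation rank

U = 0.1 * rng.normal(size=(5, r))
V = 0.1 * rng.normal(size=(r, 5))
lr = 0.02
for _ in range(50_000):          # gradient descent on ||M - U V||_F^2
    R = U @ V - M                # residual
    U, V = U - lr * 2 * R @ V.T, V - lr * 2 * U.T @ R

loss = np.linalg.norm(M - U @ V) ** 2
s = np.linalg.svd(M, compute_uv=False)
opt = np.sum(s[r:] ** 2)         # Eckart-Young: best achievable rank-r loss
```

Despite nonconvexity, the loss found by gradient descent matches the globally optimal Eckart-Young value, consistent with the "every local optimum is global" results for this model class.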


2021, Vol 12
Author(s): Chia-Wen Chen, Wen-Chung Wang, Magdalena Mo Ching Mok, Ronny Scherer

Compositional items – a form of forced-choice items – require respondents to allocate a fixed total number of points to a set of statements. The Thurstonian item response theory (IRT) model was developed to describe responses to such items. Despite its prominence, the model requires that the statements composing the items yield a factor loading matrix of full rank. When this requirement is not met, the model cannot be identified, and the latent trait estimates are seriously biased. Moreover, estimation of the Thurstonian IRT model often runs into convergence problems. To address these issues, this study developed a new version of the Thurstonian IRT model for analyzing compositional items – the lognormal ipsative model (LIM) – which is sufficient for tests in which all statements are positively phrased and have equal factor loadings. We developed an online value test following Schwartz’s values theory using compositional items and collected responses from N = 512 participants aged 13 to 51 years. The results showed that the LIM had an acceptable fit to the data and that the reliabilities exceeded 0.85. A simulation study showed good parameter recovery, high convergence rates, and sufficient estimation precision across various conditions of trait covariance matrices, test lengths, and sample sizes. Overall, our results indicate that the proposed model overcomes the problems of the Thurstonian IRT model when all statements are positively phrased and the factor loadings are similar.
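The identification problem behind the full-rank requirement can be illustrated with a schematic toy (this is not the authors' actual loading matrix): within a forced-choice block, each pairwise comparison depends only on differences of statement utilities, so the resulting contrast matrix is always one rank short of full, which is why all-positively-keyed designs need extra structure such as the LIM's equal-loading assumption.

```python
import numpy as np
from itertools import combinations

m = 4  # statements in one block (illustrative)
# Row for comparing statements i and j: utility difference e_i - e_j.
C = np.array([np.eye(m)[i] - np.eye(m)[j] for i, j in combinations(range(m), 2)])
rank = np.linalg.matrix_rank(C)  # m - 1: the all-ones direction is lost
```

Because every row sums to zero, a constant shift of all utilities is invisible to the comparisons, i.e., only m - 1 dimensions are identified from the block.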


2021, Vol 37, pp. 598-612
Author(s): Irwin S. Pressman

This work studies the kernel of a linear operator associated with the generalized k-fold commutator. Given a set $\mathfrak{A}= \left\{ A_{1}, \ldots ,A_{k} \right\}$ of real $n \times n$ matrices, the commutator is denoted by $[A_{1}|\ldots|A_{k}]$. For a fixed set of matrices $\mathfrak{A}$, we introduce the multilinear skew-symmetric operator $T_{\mathfrak{A}}(X)=T(A_{1},\ldots,A_{k})[X]=[A_{1}|\ldots|A_{k}|X]$. For fixed $n$ and $k \ge 2n-1$, $T_{\mathfrak{A}} \equiv 0$ by the Amitsur--Levitski theorem [2], which motivated this work. The matrix representation $M$ of the linear transformation $T$ is called the $k$-commutator matrix. $M$ has interesting properties, e.g., it is itself a commutator, and for $k$ odd there is a permutation of the rows of $M$ that makes it skew-symmetric. For both $k$ and $n$ odd, a provocative matrix $\mathcal{S}$ appears in the kernel of $T$. By using the Moore--Penrose inverse and introducing a conjecture about the rank of $M$, the entries of $\mathcal{S}$ are shown to be quotients of polynomials in the entries of the matrices in $\mathfrak{A}$. One case of the conjecture has recently been proven by Brassil. The Moore--Penrose inverse provides a full rank decomposition of $M$.
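The vanishing statement can be checked numerically with a brute-force sketch of the generalized commutator as the standard polynomial $S_k$: for $2\times 2$ matrices ($n=2$) and $k=3$, $[A_1|A_2|A_3|X]=S_4(A_1,A_2,A_3,X)$ is identically zero, in line with Amitsur--Levitski, while the degree-3 polynomial is generically nonzero.

```python
import numpy as np
from itertools import permutations

def perm_sign(p):
    """Sign of a permutation via its inversion count."""
    inv = sum(p[i] > p[j] for i in range(len(p)) for j in range(i + 1, len(p)))
    return -1 if inv % 2 else 1

def standard_polynomial(mats):
    """S_k(A_1,...,A_k) = sum over sigma of sgn(sigma) A_{sigma(1)}...A_{sigma(k)}."""
    n = mats[0].shape[0]
    total = np.zeros((n, n))
    for p in permutations(range(len(mats))):
        prod = np.eye(n)
        for i in p:
            prod = prod @ mats[i]
        total += perm_sign(p) * prod
    return total

rng = np.random.default_rng(0)
A1, A2, A3, X = (rng.normal(size=(2, 2)) for _ in range(4))
T_X = standard_polynomial([A1, A2, A3, X])  # k = 3 >= 2n - 1, so T vanishes
```

The 24-term alternating sum cancels to zero (up to floating-point error), while dropping one argument leaves a nonvanishing degree-3 polynomial.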


2021, pp. 1-24
Author(s): Hannes Leeb, Lukas Steinberger

Abstract: We study linear subset regression in the context of the high-dimensional overall model $y = \vartheta + \theta' z + \epsilon$ with univariate response $y$ and a $d$-vector of random regressors $z$, independent of $\epsilon$. Here, “high-dimensional” means that the number $d$ of available explanatory variables is much larger than the number $n$ of observations. We consider simple linear submodels where $y$ is regressed on a set of $p$ regressors given by $x = M'z$, for some $d \times p$ matrix $M$ of full rank $p < n$. The corresponding simple model, that is, $y = \alpha + \beta' x + e$, is usually justified by imposing appropriate restrictions on the unknown parameter $\theta$ in the overall model; otherwise, this simple model can be grossly misspecified in the sense that relevant variables may have been omitted. In this paper, we establish asymptotic validity of the standard F-test on the surrogate parameter $\beta$, in an appropriate sense, even when the simple model is misspecified, that is, without any restrictions on $\theta$ whatsoever and without assuming Gaussian data.
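The objects in the abstract can be made concrete in a small simulation (sizes, seed, and data-generating process are arbitrary illustrations): draw a high-dimensional overall model, form the submodel regressors $x = M'z$, and compute the standard F-statistic for the surrogate parameter $\beta$.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, p = 200, 1000, 3                 # d >> n: high-dimensional overall model
z = rng.normal(size=(n, d))
theta = rng.normal(size=d) / np.sqrt(d)
y = 0.5 + z @ theta + rng.normal(size=n)

M = rng.normal(size=(d, p))            # d x p, full column rank p < n (a.s.)
x = z @ M                              # submodel regressors

# Standard F-test of H0: beta = 0 in the simple model y = alpha + beta' x + e.
X1 = np.column_stack([np.ones(n), x])  # design with intercept
rss1 = np.sum((y - X1 @ np.linalg.lstsq(X1, y, rcond=None)[0]) ** 2)
rss0 = np.sum((y - y.mean()) ** 2)     # restricted (intercept-only) model
F = ((rss0 - rss1) / p) / (rss1 / (n - p - 1))
```

The point of the paper is that this familiar statistic remains asymptotically valid for the surrogate parameter even when no restrictions on $\theta$ hold, i.e., under misspecification of the simple model.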


2021
Author(s): Ichio Kikuchi, Akihito Kikuchi

In this essay, we examine the feasibility of quantum computation of Groebner bases, a fundamental tool of algebraic geometry. The classical method for computing a Groebner basis is Buchberger's algorithm, and our question is how to adopt quantum algorithms within it. A quantum algorithm for finding the maximum can be used to detect the head terms of polynomials, which are required for the computation of S-polynomials. The reduction of S-polynomials with respect to a Groebner basis could be done by a quantum version of Gauss-Jordan elimination on the echelon form that represents the polynomials. However, the frequent occurrence of zero-reductions of polynomials is an obstacle to the effective application of quantum algorithms. This is because zero-reductions occur in non-full-rank echelons, for which quantum linear-system algorithms (which work through the inversion of matrices) are inadequate: the known quantum linear solvers (such as Harrow-Hassidim-Lloyd) implicitly compute the inverses of eigenvalues. Hence, schemes that suppress zero-reductions are necessary for the quantum computation of Groebner bases. To this end, the F5 algorithm or its variant F5C would be the most promising candidates, as these algorithms have countermeasures against the occurrence of zero-reductions and construct full-rank echelons whenever the inputs are regular sequences. Of these two algorithms, F5C is the better match for algorithms involving the inversion of matrices.
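For reference on the classical side, a zero-reduction can be exhibited with SymPy's Buchberger-based Groebner machinery (SymPy is assumed available; this sketches the classical computation, not a quantum one): every element of the ideal reduces to remainder zero modulo the Groebner basis, and such reductions are exactly the wasted work that the F5/F5C criteria are designed to avoid.

```python
from sympy import symbols, groebner

x, y = symbols('x y')
G = groebner([x**2 + y, x*y - 1], x, y, order='lex')

# Reducing an ideal element modulo G ends in remainder 0: a "zero-reduction".
_, r_zero = G.reduce(x**2 + y)
# A polynomial outside the ideal leaves a nonzero remainder.
_, r_nonzero = G.reduce(x + 1)
```

In matrix terms, each reduction is a row-elimination against the echelon representing the basis; a zero remainder corresponds to a row that eliminates completely, i.e., a rank-deficient echelon.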


Author(s): Atsushi Yaguchi, Taiji Suzuki, Shuhei Nitta, Yukinobu Sakata, Akiyuki Tanizawa

Compressing DNNs is important for real-world applications running on resource-constrained devices. However, we typically observe drastic performance deterioration when the model size is changed after training is completed, so retraining is required to restore the performance of compressed models suited to different devices. In this paper, we propose Decomposable-Net (a network decomposable to any size), which allows flexible changes to the model size without retraining. We decompose the weight matrices in the DNN via singular value decomposition and adjust the ranks according to the target model size. Unlike existing low-rank compression methods that specialize the model to a fixed size, we propose a novel backpropagation scheme that jointly minimizes the losses of both the full-rank and low-rank networks. This makes it possible not only to maintain the performance of the full-rank network without retraining but also to improve low-rank networks of multiple sizes. Additionally, we introduce a simple criterion for rank selection that effectively suppresses the approximation error. In experiments on the ImageNet classification task, Decomposable-Net yields superior accuracy over a wide range of model sizes. In particular, Decomposable-Net achieves a top-1 accuracy of 73.2% at 0.27× MACs with ResNet-50, compared to Tucker decomposition (67.4% / 0.30×), Trained Rank Pruning (70.6% / 0.28×), and universally slimmable networks (71.4% / 0.26×).


2021, Vol 24 (4), pp. 1257-1274
Author(s): Wojciech P. Hunek, Tomasz Feliks

Abstract: The paper presents an advanced analytical study in the field of fractional-order non-full-rank inverse model control design. Following recent results in this area, it is certain that the inverse-model-control-oriented perfect control law can be established for non-full-rank integer-order systems described by a discrete-time state-space model with zero reference value. It is shown here that the perfect control paradigm can be extended to cover multivariable non-full-rank plants governed by the more general Grünwald-Letnikov discrete-time state-space model. Indeed, the postulated approach significantly reduces both iterative and non-iterative computational effort, which mainly derives from approximating the Moore-Penrose inverse of the non-full-rank matrices to be inverted. The new method also prevents detrimental singular behavior of the matrices, which must otherwise be avoided because of ill-conditioned sensitivity. Thus, the newly defined robust fractional-order non-full-rank instance of this control strategy, supported by a pole-free mechanism, gives rise to a general unified theory of non-full-rank perfect control. Numerical algorithms and simulation studies clearly confirm the innovative features of the proposed approach.
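The Moore-Penrose inverse at the heart of the non-full-rank setting can be sketched in isolation (a generic numerical check, not the paper's control algorithm): for a rank-deficient matrix the pseudoinverse still exists and satisfies the Penrose conditions, so no singular inversion ever has to be attempted.

```python
import numpy as np

rng = np.random.default_rng(2)
# A deliberately non-full-rank matrix: 4 x 3 but only rank 2 (illustrative).
A = rng.normal(size=(4, 2)) @ rng.normal(size=(2, 3))

A_pinv = np.linalg.pinv(A)  # defined for any rank; no singularity to avoid
```

The four Penrose conditions ($A A^{+} A = A$, $A^{+} A A^{+} = A^{+}$, and symmetry of $A A^{+}$ and $A^{+} A$) uniquely characterize $A^{+}$, which is why pseudoinverse-based control laws remain well defined for non-full-rank plants.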


Author(s): Gerandy Brito, Ioana Dumitriu, Kameron Decker Harris

Abstract: We prove an analogue of Alon’s spectral gap conjecture for random bipartite, biregular graphs. We use the Ihara–Bass formula to connect the non-backtracking spectrum to that of the adjacency matrix, employing the moment method to show there exists a spectral gap for the non-backtracking matrix. A by-product of our main theorem is that random rectangular zero-one matrices with fixed row and column sums are full rank with high probability. Finally, we illustrate applications to community detection, coding theory, and deterministic matrix completion.
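The zero-one by-product can be probed empirically in a square, d-regular special case of the rectangular fixed-margin setting (the sampler below, overlaying disjoint random permutation matrices, is an illustrative construction, not the paper's model):

```python
import numpy as np

rng = np.random.default_rng(3)

def regular_binary_matrix(n, d, max_tries=10_000):
    """Random n x n 0-1 matrix with every row and column sum equal to d,
    built by overlaying d pairwise-disjoint random permutation matrices."""
    for _ in range(max_tries):
        M = np.zeros((n, n), dtype=int)
        for _ in range(d):
            M[np.arange(n), rng.permutation(n)] += 1
        if M.max() == 1:  # the d permutations were disjoint: valid sample
            return M
    raise RuntimeError("no disjoint permutation overlay found")

M = regular_binary_matrix(12, 3)
# The theorem's by-product says such matrices are full rank with high
# probability; np.linalg.matrix_rank(M) will typically return n here.
rank = np.linalg.matrix_rank(M)
```

This is the adjacency matrix of a random bipartite biregular graph with both degrees equal to d, the object whose non-backtracking spectrum the paper analyzes.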

