Least singular value, circular law, and Lindeberg exchange

Author(s):  
Terence Tao

2008 ◽  
Vol 10 (02) ◽  
pp. 261-307 ◽  
Author(s):  
Terence Tao ◽  
Van Vu

Let x be a complex random variable with mean zero and bounded variance σ². Let N_n be a random matrix of order n with entries being i.i.d. copies of x. Let λ_1, …, λ_n be the eigenvalues of (1/√n) N_n. Define the empirical spectral distribution μ_n of N_n by the formula μ_n(s, t) := (1/n) #{k ≤ n : Re(λ_k) ≤ s, Im(λ_k) ≤ t}. The following well-known conjecture has been open since the 1950s. Circular Law Conjecture: μ_n converges to the uniform distribution μ_∞ over the unit disk as n tends to infinity. We prove this conjecture, with strong convergence, under the slightly stronger assumption that the (2 + η)-th moment of x is bounded, for any η > 0. Our method builds on and improves upon earlier work of Girko, Bai, Götze–Tikhomirov, and Pan–Zhou, and also applies to sparse random matrices. The new key ingredient in the paper is a general result about the least singular value of random matrices, which was obtained using tools and ideas from additive combinatorics.
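
As a quick numerical companion to the statement above (an added illustration, not part of the original abstract), the following Python sketch samples an n × n matrix with iid ±1 entries, rescales by 1/√n, and checks how the spectrum fills the unit disk; the entry distribution, matrix size, and the radius-1/2 uniformity check are assumptions made only for this demo.

```python
import numpy as np

# Minimal sketch of the circular law: the eigenvalues of (1/sqrt(n)) * N_n,
# where N_n has iid mean-zero, variance-one entries, should fill the unit
# disk roughly uniformly as n grows. Rademacher entries and n = 1000 are
# arbitrary choices for this demo.
rng = np.random.default_rng(0)
n = 1000
N = rng.choice([-1.0, 1.0], size=(n, n))     # iid +-1 entries, mean 0, variance 1
eigs = np.linalg.eigvals(N / np.sqrt(n))     # spectrum of the rescaled matrix

# Fraction of eigenvalues inside the unit disk (should approach 1)...
print("fraction inside unit disk:", (np.abs(eigs) <= 1.0).mean())

# ...and a crude uniformity check: the disk of radius 1/2 carries 1/4 of the
# area of the unit disk, so roughly 1/4 of the eigenvalues should land there.
print("fraction with |lambda| <= 1/2:", (np.abs(eigs) <= 0.5).mean())
```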


2018 ◽  
Vol 07 (01) ◽  
pp. 1750014 ◽  
Author(s):  
Kyle Luh

Let ζ = ξ + iξ′, where ξ, ξ′ are iid copies of a mean zero, variance one, subgaussian random variable. Let N_n be an n × n random matrix with entries that are iid copies of ζ. We prove that there exists a ρ ∈ (0, 1) such that the probability that N_n has any real eigenvalues is less than ρ^n, where ρ only depends on the subgaussian moment of ξ. The bound is optimal up to the value of the constant ρ. The principal component of the proof is an optimal tail bound on the least singular value of matrices of the form M_n := M + N_n, where M is a deterministic complex matrix with the condition that ‖M‖ ≤ K√n for some constant K depending on the subgaussian moment of ξ. For this class of random variables, this result improves on the results of Pan–Zhou [Circular law, extreme singular values and potential theory, J. Multivariate Anal. 101(3) (2010) 645–656] and Rudelson–Vershynin [The Littlewood–Offord problem and invertibility of random matrices, Adv. Math. 218(2) (2008) 600–633]. In the proof of the tail bound, we develop an optimal small-ball probability bound for complex random variables that generalizes the Littlewood–Offord theory developed by Tao–Vu [From the Littlewood–Offord problem to the circular law: Universality of the spectral distribution of random matrices, Bull. Amer. Math. Soc. (N.S.) 46(3) (2009) 377–396; Inverse Littlewood–Offord theorems and the condition number of random discrete matrices, Ann. of Math. (2) 169(2) (2009) 595–632] and Rudelson–Vershynin [The Littlewood–Offord problem and invertibility of random matrices, Adv. Math. 218(2) (2008) 600–633; Smallest singular value of a random rectangular matrix, Comm. Pure Appl. Math. 62(12) (2009) 1707–1739].
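
For a rough numerical sense of the headline claim (an added illustration, not from the paper), the sketch below takes the complex Bernoulli case ζ = ξ + iξ′ with ξ, ξ′ uniform on {−1, +1} and counts how often an n × n sample has an eigenvalue that is real up to a numerical tolerance; the matrix size, trial count, and tolerance are arbitrary demo choices.

```python
import numpy as np

# Monte Carlo sketch: an n x n matrix with iid complex subgaussian entries
# (here zeta = xi + i*xi', with xi, xi' = +-1) should have a real eigenvalue
# only with exponentially small probability. "Real" is detected up to a
# numerical tolerance, which is an assumption of this demo.
rng = np.random.default_rng(1)
n, trials, tol = 30, 2000, 1e-8

hits = 0
for _ in range(trials):
    N = rng.choice([-1.0, 1.0], (n, n)) + 1j * rng.choice([-1.0, 1.0], (n, n))
    eigs = np.linalg.eigvals(N)
    if np.any(np.abs(eigs.imag) < tol):
        hits += 1

print(f"empirical P(real eigenvalue) ~ {hits / trials:.4f} for n = {n}")
```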


Author(s):  
Vishesh Jain

Let N_n be an n × n complex random matrix, each of whose entries is an independent copy of a centered complex random variable ξ with finite nonzero variance σ². The strong circular law, proved by Tao and Vu, states that almost surely, as n → ∞, the empirical spectral distribution of N_n/(σ√n) converges to the uniform distribution on the unit disc in ℂ. A crucial ingredient in the proof of Tao and Vu, which uses deep ideas from additive combinatorics, is controlling the lower tail of the least singular value of the random matrix F + N_n (where F is fixed) with failure probability that is inverse polynomial. In this paper, using a simple and novel approach (in particular, not using machinery from additive combinatorics or any net arguments), we show that for any fixed complex matrix F with operator norm at most [Formula: see text] and for all [Formula: see text], [Formula: see text], where s_n is the least singular value of F + N_n and C, c are positive absolute constants. Our result is optimal up to the constants C, c, and the inverse exponential-type error rate improves upon the inverse polynomial error rate due to Tao and Vu. Our proof relies on the solution to the so-called counting problem in inverse Littlewood–Offord theory, developed by Ferber, Luh, Samotij, and the author, a novel complex anti-concentration inequality, and a “rounding trick” based on controlling the [Formula: see text] operator norm of heavy-tailed random matrices.
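
To make the anti-concentration language concrete (an added, real-valued toy illustration, not the paper's complex inequality), the sketch below estimates the Littlewood–Offord small-ball probability sup_x P(|a·ε − x| ≤ 1) for iid signs ε, contrasting a structured coefficient vector with a spread-out one; the coefficient choices and Monte Carlo sizes are demo assumptions.

```python
import numpy as np

# Littlewood-Offord / anti-concentration phenomenon: the small-ball probability
# sup_x P(|a_1*e_1 + ... + a_n*e_n - x| <= 1), for iid signs e_i, depends
# heavily on the arithmetic structure of the coefficients (a_1, ..., a_n).
rng = np.random.default_rng(3)
n, trials = 50, 20000

def small_ball(a, radius=1.0):
    """Monte Carlo estimate of sup_x P(|<a, eps> - x| <= radius) for iid signs."""
    eps = rng.choice([-1.0, 1.0], size=(trials, n))
    sums = eps @ a
    # Approximate the supremum over x by the most-loaded window of width 2*radius
    # on a grid covering the observed range of the sums.
    grid = np.arange(sums.min(), sums.max(), radius)
    return max((np.abs(sums - x) <= radius).mean() for x in grid)

print("structured a = (1, ..., 1):", small_ball(np.ones(n)))            # order n^{-1/2}
print("spread a = (1, 2, ..., n) :", small_ball(np.arange(1.0, n + 1)))  # much smaller
```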


CALCOLO ◽  
2003 ◽  
Vol 40 (4) ◽  
pp. 213-229 ◽  
Author(s):  
C. Fassino

2008 ◽  
Vol 346 (15-16) ◽  
pp. 893-896 ◽  
Author(s):  
Mark Rudelson ◽  
Roman Vershynin

2020 ◽  
Vol 17 (3) ◽  
pp. 172988142093204
Author(s):  
Jingyu Sun ◽  
Yanjun Liu ◽  
Chen Ji

To address the Jacobian matrix approximation error that typically arises in the iterative solution process of the classic singular robust inverse method, a correction coefficient α is introduced, yielding an improved singular robust inverse method. On this basis, two variants are proposed: the constant improved singular robust method and the intelligent improved singular robust inverse method. In addition, a new scheme combining particle swarm optimization and artificial neural network training is applied to obtain the parameters in real time. The stability of the proposed methods is verified using the Lyapunov stability criteria, and their effectiveness is demonstrated on spatial straight-line and curved trajectories with a seven-axis manipulator. The simulation results show that the improved singular robust inverse method has better optimization performance and stability: within the allowable range, the terminal error is smallest, there is no persistent oscillation or large-amplitude motion, the least singular value is largest, the joint angular velocity is smallest, exactly as expected, and the derivative of the Lyapunov function is negative definite. Comparing the two extended methods, the constant improved singular robust method performs better in joint angular velocity and least-singular-value optimization, while the intelligent improved singular robust inverse method achieves a smaller terminal error; there is little difference between their overall optimization effects. However, the adaptability of its real-time parameters makes the intelligent improved singular robust inverse method the first choice for kinematic control of redundant serial manipulators.
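
For context (an added sketch, not the authors' implementation): the classic singular robust inverse computes joint velocities as q̇ = Jᵀ(JJᵀ + λI)⁻¹ẋ, which keeps them bounded near singular configurations. The code below shows one such velocity-level step with a hypothetical correction coefficient α applied in the spirit of the improved method described above; its update law, the damping value, and the random Jacobian are assumptions of the demo.

```python
import numpy as np

def sr_inverse_step(J, x_err, lam=0.01, alpha=1.0):
    """One velocity-level IK step via the singular robust (damped least-squares)
    inverse: q_dot = alpha * J^T (J J^T + lam*I)^{-1} x_err.

    J     : (m, n) manipulator Jacobian (n >= m for a redundant arm)
    x_err : (m,) task-space error/velocity to be tracked
    lam   : damping factor keeping the inverse well-conditioned near singularities
    alpha : hypothetical correction coefficient in the spirit of the abstract
            (its update law is not specified here; alpha = 1 recovers the
            classic singular robust inverse).
    """
    m = J.shape[0]
    JJt = J @ J.T + lam * np.eye(m)
    return alpha * (J.T @ np.linalg.solve(JJt, x_err))

# Tiny usage example with a random 6x7 Jacobian (seven-axis manipulator).
rng = np.random.default_rng(4)
J = rng.standard_normal((6, 7))
x_err = rng.standard_normal(6)
q_dot = sr_inverse_step(J, x_err, lam=0.01, alpha=0.8)
print("least singular value of J:", np.linalg.svd(J, compute_uv=False)[-1])
print("joint velocity norm      :", np.linalg.norm(q_dot))
```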

