Geometric Convergence
Recently Published Documents

TOTAL DOCUMENTS: 88 (FIVE YEARS: 9)
H-INDEX: 14 (FIVE YEARS: 1)

Author(s):  
Lucas Ambrozio ◽  
Reto Buzano ◽  
Alessandro Carlotto ◽  
Ben Sharp

We present some geometric applications, of global character, of the bubbling analysis developed by Buzano and Sharp for closed minimal surfaces, obtaining smooth multiplicity-one convergence results under upper bounds on the Morse index and suitable lower bounds on either the genus or the area. For instance, we show that given any Riemannian metric of positive scalar curvature on the three-dimensional sphere, the class of embedded minimal surfaces of index one and genus $\gamma$ is sequentially compact for any $\gamma \ge 1$. Furthermore, we give a quantitative description of how the genus drops as a sequence of minimal surfaces converges smoothly, with multiplicity $m \ge 1$, away from finitely many points where curvature concentration may happen. This result exploits a sharp estimate on the multiplicity of convergence in terms of the number of ends of the bubbles that appear in the process.
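
The compactness statement quoted above can be restated schematically as follows (a paraphrase of the abstract, not the authors' exact formulation; the notation $\mathcal{M}_{1,\gamma}$ is introduced here only for illustration): for any metric $g$ of positive scalar curvature on $S^3$ and any $\gamma \ge 1$, the class

$$\mathcal{M}_{1,\gamma}(S^3,g) = \{\Sigma \subset (S^3,g) : \Sigma \text{ embedded minimal surface},\ \mathrm{index}(\Sigma)=1,\ \mathrm{genus}(\Sigma)=\gamma\}$$

is sequentially compact in the smooth topology.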


2021 ◽  
Vol 65 (4) ◽  
Author(s):  
Juan Gao ◽  
Xin-Wei Liu ◽  
Yu-Hong Dai ◽  
Yakui Huang ◽  
Peng Yang

Entropy ◽  
2020 ◽  
Vol 22 (2) ◽  
pp. 222
Author(s):  
Haobo Li ◽  
Ning Cai

Based on Arimoto's work in 1972, we propose an iterative algorithm for computing the capacity of a discrete memoryless classical-quantum channel with a finite input alphabet and a finite-dimensional output, which we call the Blahut–Arimoto algorithm for classical-quantum channels; an input cost constraint is also considered. We show that, to reach $\varepsilon$ accuracy, the iteration complexity of the algorithm is upper bounded by $\frac{\log n \log \varepsilon}{\varepsilon}$, where $n$ is the size of the input alphabet. In particular, when the output states $\{\rho_x\}_{x \in \mathcal{X}}$ are linearly independent in complex matrix space, the algorithm has geometric convergence. We also show that the algorithm reaches an $\varepsilon$-accurate solution with a complexity of $O\!\left(\frac{m^3 \log n \log \varepsilon}{\varepsilon}\right)$, and $O\!\left(m^3 \frac{\log \varepsilon}{\log(1-\delta)} \frac{\varepsilon}{D(p^* \| p^{N_0})}\right)$ in the special case, where $m$ is the output dimension, $D(p^* \| p^{N_0})$ is the relative entropy of two distributions, and $\delta$ is a positive number. Numerical experiments were performed and an approximate solution for the binary two-dimensional case was analysed.
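
To illustrate the kind of iteration the abstract describes, here is a minimal numerical sketch (Python/NumPy) of a Blahut–Arimoto-type update for the Holevo capacity of a classical-quantum channel $x \mapsto \rho_x$. It assumes the standard multiplicative update $p(x) \propto p(x)\exp(D(\rho_x \| \rho_p))$ with $\rho_p = \sum_x p(x)\rho_x$; the paper's exact algorithm, its treatment of the input cost constraint, and its complexity analysis may differ, and all function names below are introduced here only for illustration.

    # Minimal sketch of a Blahut-Arimoto-type iteration for the Holevo capacity
    # of a classical-quantum channel x -> rho_x (not the authors' code; assumes
    # the standard update p(x) <- p(x) * exp(D(rho_x || rho_p)) / normaliser).
    import numpy as np

    def matrix_log(rho, eps=1e-12):
        """Matrix logarithm of a Hermitian positive semidefinite matrix."""
        w, v = np.linalg.eigh(rho)
        w = np.clip(w, eps, None)              # guard against log(0) on pure states
        return (v * np.log(w)) @ v.conj().T

    def quantum_relative_entropy(rho, sigma):
        """D(rho || sigma) = Tr[rho (log rho - log sigma)], in nats."""
        return float(np.real(np.trace(rho @ (matrix_log(rho) - matrix_log(sigma)))))

    def blahut_arimoto_cq(states, n_iter=200):
        """Blahut-Arimoto-type iteration for the cq channel x -> states[x]."""
        n = len(states)
        p = np.full(n, 1.0 / n)                # start from the uniform input distribution
        for _ in range(n_iter):
            rho_p = sum(px * rho for px, rho in zip(p, states))
            d = np.array([quantum_relative_entropy(rho, rho_p) for rho in states])
            p = p * np.exp(d)                  # multiplicative update
            p /= p.sum()                       # renormalise to a probability vector
        rho_p = sum(px * rho for px, rho in zip(p, states))
        chi = sum(px * quantum_relative_entropy(rho, rho_p) for px, rho in zip(p, states))
        return p, chi

    # Example: two pure qubit states with overlap cos(theta).
    theta = 0.3
    psi0, psi1 = np.array([1.0, 0.0]), np.array([np.cos(theta), np.sin(theta)])
    states = [np.outer(psi0, psi0), np.outer(psi1, psi1)]
    p, chi = blahut_arimoto_cq(states)
    print("input distribution:", p, " Holevo quantity (bits):", chi / np.log(2))

For this symmetric two-state example the optimal input distribution is uniform, and as the two states approach orthogonality ($\theta \to \pi/2$) the computed Holevo quantity approaches one bit.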


2020 ◽  
Vol 53 (2) ◽  
pp. 3367-3372
Author(s):  
Tatiana Tatarenko ◽  
Angelia Nedić

Author(s):  
Qilong Gu ◽  
Arindam Banerjee

Recent years have seen advances in optimizing large-scale statistical estimation problems. In statistical learning settings, iterative optimization algorithms have been shown to enjoy geometric convergence. While powerful, such results only hold for the original dataset and may face computational challenges when the sample size is large. In this paper, we study sketched iterative algorithms, in particular sketched-PGD (projected gradient descent) and sketched-SVRG (stochastic variance reduced gradient) for structured generalized linear models, and illustrate that these methods continue to have geometric convergence to the statistical error under suitable assumptions. Moreover, the sketching dimension is allowed to be even smaller than the ambient dimension, which can lead to significant speed-ups. The sketched iterative algorithms introduced provide an additional dimension to study the trade-offs between statistical accuracy and time.
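
To make the idea concrete, here is a minimal Python/NumPy sketch of sketched projected gradient descent for sparse linear regression (the simplest structured generalized linear model), using a single fixed Gaussian row sketch and projection onto an l1 ball. This illustrates the general technique rather than the authors' implementation: the paper's sketched-PGD and sketched-SVRG, their sketch choices, and their assumptions differ in detail, and the function names and synthetic data below are introduced here only for illustration.

    # Sketched projected gradient descent for sparse linear regression (illustrative only).
    import numpy as np

    def project_l1_ball(v, radius):
        """Euclidean projection of v onto the l1 ball {w : ||w||_1 <= radius}."""
        if np.abs(v).sum() <= radius:
            return v
        u = np.sort(np.abs(v))[::-1]                    # sorted magnitudes, descending
        css = np.cumsum(u)
        k = np.nonzero(u * np.arange(1, len(u) + 1) > (css - radius))[0][-1]
        tau = (css[k] - radius) / (k + 1)               # soft-threshold level
        return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

    def sketched_pgd(X, y, sketch_dim, radius, n_iter=500, step=None, seed=0):
        """PGD on the sketched least-squares objective (1/2n)||S X w - S y||^2
        subject to ||w||_1 <= radius, with an (sketch_dim x n) Gaussian sketch S
        applied once, up front."""
        rng = np.random.default_rng(seed)
        n, d = X.shape
        S = rng.standard_normal((sketch_dim, n)) / np.sqrt(sketch_dim)
        Xs, ys = S @ X, S @ y                           # sketched data
        if step is None:                                # 1/L with L = lambda_max(Xs^T Xs)/n
            step = n / np.linalg.norm(Xs, 2) ** 2
        w = np.zeros(d)
        for _ in range(n_iter):
            grad = Xs.T @ (Xs @ w - ys) / n             # gradient of the sketched objective
            w = project_l1_ball(w - step * grad, radius)
        return w

    # Synthetic example: n = 2000 samples, d = 500 features, 10 nonzero coefficients.
    rng = np.random.default_rng(1)
    n, d, s = 2000, 500, 10
    w_true = np.zeros(d); w_true[:s] = rng.standard_normal(s)
    X = rng.standard_normal((n, d))
    y = X @ w_true + 0.1 * rng.standard_normal(n)
    w_hat = sketched_pgd(X, y, sketch_dim=400, radius=np.abs(w_true).sum())
    print("estimation error:", np.linalg.norm(w_hat - w_true))

In this synthetic setup the sketch has only 400 rows, far fewer than the 2000 original samples and smaller than the 500-dimensional parameter space, yet the projected iterates typically land close to the sparse ground truth, illustrating the accuracy-versus-computation trade-off the abstract describes.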


2019 ◽  
Vol 6 ◽  
pp. 621-664 ◽  
Author(s):  
Lucas Ambrozio ◽  
Reto Buzano ◽  
Alessandro Carlotto ◽  
Ben Sharp
