The nonsmooth landscape of phase retrieval

2020 ◽  
Vol 40 (4) ◽  
pp. 2652-2695
Author(s):  
Damek Davis ◽  
Dmitriy Drusvyatskiy ◽  
Courtney Paquette

Abstract: We consider a popular nonsmooth formulation of the real phase retrieval problem. We show that under standard statistical assumptions a simple subgradient method converges linearly when initialized within a constant relative distance of an optimal solution. Seeking to understand the distribution of the stationary points of the problem, we complete the paper by proving that, as the number of Gaussian measurements increases, the stationary points converge to a codimension-two set at a controlled rate. Experiments on image recovery problems illustrate the developed algorithm and theory.
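The setting above can be sketched with a Polyak-type subgradient step on the nonsmooth objective f(x) = (1/m) Σᵢ |(aᵢᵀx)² − bᵢ|. This is a minimal illustration under stated assumptions (problem sizes, initialization radius, and iteration count are chosen for demonstration), not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 20, 200                       # signal dimension, number of Gaussian measurements
x_true = rng.standard_normal(n)
A = rng.standard_normal((m, n))      # measurement vectors a_i as rows
b = (A @ x_true) ** 2                # noiseless intensities b_i = (a_i^T x)^2

def f(x):
    """Nonsmooth objective f(x) = (1/m) sum_i |(a_i^T x)^2 - b_i|."""
    return np.abs((A @ x) ** 2 - b).mean()

# initialize at 10% relative distance from the signal (an assumed radius)
d = rng.standard_normal(n)
x = x_true + 0.1 * np.linalg.norm(x_true) * d / np.linalg.norm(d)

for _ in range(1000):
    r = A @ x
    g = (2.0 / m) * (A.T @ (np.sign(r ** 2 - b) * r))  # subgradient of f at x
    gn = np.linalg.norm(g)
    if gn == 0:
        break
    x -= (f(x) / gn ** 2) * g        # Polyak step: the minimum value of f is 0 here

# the global sign of x is not identifiable from intensities, so compare up to sign
dist = min(np.linalg.norm(x - x_true), np.linalg.norm(x + x_true))
```

With tenfold oversampling and a close initialization, the distance to the signal shrinks linearly, consistent with the local linear convergence the abstract describes.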

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Rujia Li ◽  
Liangcai Cao

Abstract: Phase retrieval seeks to reconstruct the phase from the measured intensity, which is an ill-posed problem. A phase retrieval problem can be solved with physical constraints by modulating the investigated complex wavefront. Orbital angular momentum has recently been employed as a reliable type of modulation. The topological charge l is robust during propagation through atmospheric turbulence. In this work, topological modulation is used to solve the phase retrieval problem. Topological modulation offers an effective dynamic range of intensity constraints for reconstruction. The maximum intensity value of the spectrum is reduced by a factor of 173 under topological modulation when l is 50. The phase is iteratively reconstructed without a priori knowledge. The stagnation problem during the iterations can be avoided by using multiple topological modulations.
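The iterative reconstruction under multiple modulations can be caricatured with a Gerchberg–Saxton-style loop that uses a spiral phase mask exp(ilφ) as the topological modulation. The object, the charge values, and the constraint schedule here are illustrative assumptions, not the paper's exact scheme:

```python
import numpy as np

N = 64
y, x = np.mgrid[-N // 2:N // 2, -N // 2:N // 2]
phi = np.arctan2(y, x)                         # azimuthal coordinate

amp = np.ones((N, N))                          # known object amplitude (assumed)
true_phase = 0.5 * np.sin(2 * np.pi * x / N)   # phase to be recovered (assumed)
obj = amp * np.exp(1j * true_phase)

charges = [10, 30, 50]                         # topological charges l (assumed)
masks = [np.exp(1j * l * phi) for l in charges]
meas = [np.abs(np.fft.fft2(obj * m)) for m in masks]  # measured spectral moduli

est = amp.astype(complex)                      # start with zero phase: no a priori knowledge
residual0 = np.linalg.norm(np.abs(np.fft.fft2(est * masks[0])) - meas[0])

for it in range(300):
    k = it % len(masks)                        # cycle through the modulations to avoid stagnation
    F = np.fft.fft2(est * masks[k])
    F = meas[k] * np.exp(1j * np.angle(F))     # enforce the measured modulus in the spectrum
    est = np.fft.ifft2(F) * np.conj(masks[k])  # back-propagate and undo the modulation
    est = amp * np.exp(1j * np.angle(est))     # enforce the known object amplitude

residual = np.linalg.norm(np.abs(np.fft.fft2(est * masks[0])) - meas[0])
```

Cycling through several charges gives the loop diverse intensity constraints, which is the role the abstract attributes to multiple topological modulations.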


2021 ◽  
Vol 11 (2) ◽  
pp. 721
Author(s):  
Hyung Yong Kim ◽  
Ji Won Yoon ◽  
Sung Jun Cheon ◽  
Woo Hyun Kang ◽  
Nam Soo Kim

Recently, generative adversarial networks (GANs) have been successfully applied to speech enhancement. However, there still remain two issues that need to be addressed: (1) GAN-based training is typically unstable due to its non-convex property, and (2) most of the conventional methods do not fully take advantage of the speech characteristics, which could result in a sub-optimal solution. In order to deal with these problems, we propose a progressive generator that can handle the speech in a multi-resolution fashion. Additionally, we propose a multi-scale discriminator that discriminates the real and generated speech at various sampling rates to stabilize GAN training. The proposed structure was compared with the conventional GAN-based speech enhancement algorithms using the VoiceBank-DEMAND dataset. Experimental results showed that the proposed approach can make the training faster and more stable, which improves the performance on various metrics for speech enhancement.
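The multi-scale idea, presenting the same waveform to discriminators at several sampling rates, can be illustrated with a toy resampler. The block-averaging decimator and the factor set below are assumptions for illustration, not the paper's architecture:

```python
import numpy as np

def downsample(x, factor):
    """Naive decimation: average non-overlapping blocks (a crude low-pass + subsample)."""
    n = len(x) // factor * factor
    return x[:n].reshape(-1, factor).mean(axis=1)

def multiscale_views(wave, factors=(1, 2, 4)):
    """One view of the waveform per discriminator, at successively halved rates."""
    return [downsample(wave, f) for f in factors]

sr = 16000
wave = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)  # 1 s of a 440 Hz tone
views = multiscale_views(wave)                       # lengths 16000, 8000, 4000
```

Each view would feed a separate discriminator, so real and generated speech are compared at several temporal resolutions simultaneously.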


2010 ◽  
Vol 47 (8) ◽  
pp. 081001
Author(s):  
廖天河 Liao Tianhe ◽  
高穹 Gao Qiong ◽  
崔远峰 Cui Yuanfeng ◽  
宋凯洋 Song Kaiyang

1981 ◽  
Vol 28 (6) ◽  
pp. 735-738 ◽  
Author(s):  
J.G. Walker

Author(s):  
Leng Ningyi ◽  
Yuan Ziyang ◽  
Yang Haoxing ◽  
Hongxia Wang ◽  
Du Longkun

2019 ◽  
Vol 17 (05) ◽  
pp. 773-818 ◽  
Author(s):  
Yi Xu ◽  
Qihang Lin ◽  
Tianbao Yang

In this paper, a new theory is developed for first-order stochastic convex optimization, showing that the global convergence rate is sufficiently quantified by a local growth rate of the objective function in a neighborhood of the optimal solutions. In particular, if the objective function F(x) in the ε-sublevel set grows as fast as ‖x − x*‖^(1/θ), where x* represents the closest optimal solution to x and θ ∈ (0, 1] quantifies the local growth rate, the iteration complexity of first-order stochastic optimization for achieving an ε-optimal solution can be Õ(1/ε^(2(1−θ))), which is optimal at most up to a logarithmic factor. To achieve this faster global convergence, we develop two different accelerated stochastic subgradient methods by iteratively solving the original problem approximately in a local region around a historical solution, with the size of the local region gradually decreasing as the solution approaches the optimal set. Besides the theoretical improvements, this work also includes new contributions toward making the proposed algorithms practical: (i) we present practical variants of the accelerated stochastic subgradient methods that can run without knowledge of the multiplicative growth constant or even the growth rate θ; (ii) we consider a broad family of problems in machine learning to demonstrate that the proposed algorithms enjoy faster convergence than the traditional stochastic subgradient method. We also characterize the complexity of the proposed algorithms for ensuring that the gradient is small, without the smoothness assumption.
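A restarted scheme in the spirit described, repeatedly running a stochastic subgradient method in a shrinking local region around the previous solution, can be sketched on a one-dimensional sharp objective f(x) = |x| (growth rate θ = 1). The step sizes, radii, stage lengths, and noise model are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

def noisy_subgrad(x):
    """Stochastic subgradient of f(x) = |x|, a sharp function (theta = 1)."""
    return np.sign(x) + 0.1 * rng.standard_normal()

def local_stage(x0, radius, eta, iters=200):
    """Projected stochastic subgradient steps confined to a ball around x0,
    returning the averaged iterate."""
    x, avg = x0, 0.0
    for _ in range(iters):
        x = x - eta * noisy_subgrad(x)
        x = np.clip(x, x0 - radius, x0 + radius)  # stay in the local region
        avg += x / iters
    return avg

x, radius, eta = 5.0, 8.0, 0.5
for _ in range(10):                # restart around the previous averaged solution,
    x = local_stage(x, radius, eta)
    radius /= 2.0                  # shrinking the local region ...
    eta /= 2.0                     # ... and the step size geometrically
```

Because the objective is sharp, halving the region and the step size at each restart drives the error down geometrically across stages, which mirrors the faster global convergence the abstract claims for small growth exponents.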

