Large-scale phase retrieval

eLight ◽  
2021 ◽  
Vol 1 (1) ◽  
Author(s):  
Xuyang Chang ◽  
Liheng Bian ◽  
Jun Zhang

Abstract
High-throughput computational imaging requires efficient processing algorithms to retrieve multi-dimensional and multi-scale information. In computational phase imaging, phase retrieval (PR) is required to reconstruct both amplitude and phase in complex space from intensity-only measurements. Existing PR algorithms suffer from a tradeoff among low computational complexity, robustness to measurement noise, and strong generalization across different modalities. In this work, we report an efficient large-scale phase retrieval technique termed LPR. It extends the plug-and-play generalized-alternating-projection framework from real space to nonlinear complex space. An alternating-projection solver and an enhancing neural network are derived to tackle the measurement formation and the statistical prior regularization, respectively. The framework compensates for the shortcomings of each operator, realizing high-fidelity phase retrieval with low computational complexity and strong generalization. We applied the technique to a series of computational phase imaging modalities, including coherent diffraction imaging, coded diffraction pattern imaging, and Fourier ptychographic microscopy. Extensive simulations and experiments validate that the technique outperforms existing PR algorithms with as much as 17 dB enhancement in signal-to-noise ratio and a more than one order-of-magnitude increase in running efficiency. In addition, we demonstrate, for the first time, ultra-large-scale phase retrieval at the 8K level (7680 × 4320 pixels) within minutes.
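The alternating-projection building block at the core of such frameworks can be illustrated with the classical error-reduction (Gerchberg–Saxton-type) scheme, which alternates between the measured Fourier-magnitude constraint and a real-space support constraint. The sketch below shows that generic textbook step only, not the authors' LPR implementation; the toy object and all names are assumptions.

```python
import numpy as np

def alternating_projection_pr(magnitude, support, n_iter=200, seed=0):
    """Classical error-reduction phase retrieval: alternate between the
    measured Fourier-magnitude constraint and a real-space support."""
    rng = np.random.default_rng(seed)
    x = rng.random(magnitude.shape)  # random real-space initialization
    for _ in range(n_iter):
        X = np.fft.fft2(x)
        # Projection 1: impose the measured modulus, keep the current phase
        X = magnitude * np.exp(1j * np.angle(X))
        x = np.real(np.fft.ifft2(X))
        # Projection 2: impose the known support and non-negativity
        x = np.clip(x * support, 0.0, None)
    return x

# Toy example: recover a small non-negative object from |FFT| alone
truth = np.zeros((32, 32))
truth[12:20, 10:22] = np.random.default_rng(1).random((8, 12))
support = (truth > 0).astype(float)
meas = np.abs(np.fft.fft2(truth))
rec = alternating_projection_pr(meas, support)
```

Plug-and-play variants such as the one reported here replace the simple support projection with a learned denoising/enhancing operator while keeping the measurement projection.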

2019 ◽  
Author(s):  
Kelvin C. M. Lee ◽  
Andy K. S. Lau ◽  
Anson H. L. Tang ◽  
Maolin Wang ◽  
Aaron T. Y. Mok ◽  
...  

Abstract
A growing body of evidence has substantiated the significance of quantitative phase imaging (QPI) in enabling cost-effective and label-free cellular assays, which provide useful insights into the biophysical properties of cells and their roles in cellular functions. However, available QPI modalities are limited by the loss of imaging resolution at high throughput, and thus lack sufficient statistical power at single-cell precision to define cell identities in a large and heterogeneous population of cells, hindering their utility in mainstream biomedicine and biology. Here we present a new QPI modality, coined multi-ATOM, that captures and processes quantitative label-free single-cell images at ultra-high throughput without compromising sub-cellular resolution. We show that multi-ATOM, based on ultrafast phase-gradient encoding, outperforms state-of-the-art QPI in permitting robust phase retrieval at a throughput of >10,000 cells/sec, bypassing the need for interferometry, which inevitably compromises QPI quality under ultrafast operation. We employ multi-ATOM for large-scale, label-free, multivariate cell-type classification (e.g. breast cancer sub-types, and leukemic cells versus peripheral blood mononuclear cells) at high accuracy (>94%). Our results suggest that multi-ATOM could empower new strategies in large-scale biophysical single-cell analysis, with applications in biology and disease diagnostics.
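Phase-gradient encoding implies a subsequent integration of the measured gradient field to obtain the phase map. A standard generic route is least-squares integration in Fourier space (the Frankot–Chellappa method); the sketch below illustrates that textbook step under periodic boundary conditions, not the multi-ATOM pipeline itself.

```python
import numpy as np

def integrate_gradients(gx, gy):
    """Least-squares integration of a 2-D gradient field in Fourier space
    (Frankot-Chellappa): returns phi minimizing ||grad(phi) - (gx, gy)||^2
    under periodic boundary conditions, up to an additive constant."""
    ny, nx = gx.shape
    fx = 2j * np.pi * np.fft.fftfreq(nx)  # spectral derivative factors
    fy = 2j * np.pi * np.fft.fftfreq(ny)
    FX, FY = np.meshgrid(fx, fy)
    denom = (FX * np.conj(FX) + FY * np.conj(FY)).real
    denom[0, 0] = 1.0  # the mean of phi is unconstrained; avoid 0/0
    num = np.conj(FX) * np.fft.fft2(gx) + np.conj(FY) * np.fft.fft2(gy)
    phi = np.real(np.fft.ifft2(num / denom))
    return phi - phi.mean()  # fix the free additive constant
```

For a band-limited periodic field whose gradients are computed spectrally, this recovers the phase exactly up to the mean.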


2011 ◽  
Vol 2011 ◽  
pp. 1-10 ◽  
Author(s):  
Kazunobu Kondo ◽  
Yu Takahashi ◽  
Seiichi Hashimoto ◽  
Hiroshi Saruwatari ◽  
Takanori Nishino ◽  
...  

A blind speech separation method with low computational complexity is proposed. The method combines independent component analysis with frequency-band selection and a frame-wise spectral soft mask based on the interchannel power ratio of tentative separated signals in the frequency domain. The soft mask cancels the transfer function between the sources and the separated signals. A theoretical analysis of the selection criteria and the soft mask is given. Performance and effectiveness are evaluated via source separation simulations and a computational estimate; experimental results show significantly improved performance for the proposed method. The segmental signal-to-noise ratio reaches 7 dB and 3 dB, and the cepstral distortion 1 dB and 2.5 dB, in anechoic and reverberant conditions, respectively. Moreover, computational complexity is reduced by more than 80% compared with unmodified frequency-domain ICA (FDICA).
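An interchannel power-ratio soft mask of the kind described can be sketched as follows; the exponent `p`, the normalization, and all names are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def power_ratio_soft_mask(Y1, Y2, p=1.0, eps=1e-12):
    """Frame-wise soft mask from the interchannel power ratio of two
    tentative separated STFT signals (freq bins x frames), in [0, 1]."""
    P1 = np.abs(Y1) ** 2
    P2 = np.abs(Y2) ** 2
    return (P1 / (P1 + P2 + eps)) ** p  # p > 1 sharpens, p < 1 softens

# Toy usage: apply the mask to an observed mixture spectrogram X
rng = np.random.default_rng(0)
X = rng.standard_normal((257, 100)) + 1j * rng.standard_normal((257, 100))
mask = power_ratio_soft_mask(X, 0.1 * X)  # channel 1 dominates by 20 dB
S1 = mask * X  # masked (separated) spectrogram
```

Because the mask is a ratio of the two tentative outputs, any common transfer function between source and separated signal cancels, which is the property the method exploits.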


2021 ◽  
Vol 28 (4) ◽  
Author(s):  
Max Langer ◽  
Yuhe Zhang ◽  
Diogo Figueirinhas ◽  
Jean-Baptiste Forien ◽  
Kannara Mom ◽  
...  

X-ray propagation-based imaging techniques are well established at synchrotron radiation and laboratory sources. However, most reconstruction algorithms for such imaging modalities, also known as phase-retrieval algorithms, have been developed for one specific instrument by and for experts, making the development and diffusion of such techniques difficult. Here, PyPhase, a free and open-source package for propagation-based near-field phase reconstruction distributed under the CeCILL license, is presented. PyPhase implements some of the most popular phase-retrieval algorithms in a highly modular framework that supports deployment on large-scale computing facilities. This makes integration, the development of new phase-retrieval algorithms, and deployment on different computing infrastructures straightforward. Its capabilities and simplicity are demonstrated by application to data acquired at the synchrotron source MAX IV (Lund, Sweden).
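Among the popular single-distance algorithms that packages of this kind implement is Paganin's homogeneous-object filter. The sketch below is a generic NumPy rendering of that filter, not PyPhase's actual API; the sign convention for the returned phase varies between references.

```python
import numpy as np

def paganin_phase(intensity, dist, wavelength, delta_beta, pixel_size):
    """Single-distance Paganin phase retrieval for a homogeneous object.
    intensity : flat-field-corrected image I/I0 at propagation distance dist
    delta_beta: ratio delta/beta of the complex refractive index
    Returns the retrieved projected phase (sign convention varies)."""
    ny, nx = intensity.shape
    kx = np.fft.fftfreq(nx, d=pixel_size)  # spatial frequencies [1/m]
    ky = np.fft.fftfreq(ny, d=pixel_size)
    KX, KY = np.meshgrid(kx, ky)
    # Deconvolution filter 1 + pi * lambda * z * (delta/beta) * |f|^2
    filt = 1.0 + np.pi * wavelength * dist * delta_beta * (KX**2 + KY**2)
    trans = np.real(np.fft.ifft2(np.fft.fft2(intensity) / filt))
    # I/I0 = exp(-mu * t)  =>  phase = (delta / (2 * beta)) * ln(I/I0)
    return 0.5 * delta_beta * np.log(np.clip(trans, 1e-12, None))
```

A flat (unit) intensity image maps to zero phase, which makes a convenient sanity check.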


Author(s):  
W. Baumeister ◽  
R. Rachel ◽  
R. Guckenberger ◽  
R. Hegerl

Introduction
Correlation averaging (CAV) is now an established technique in the image processing of two-dimensional crystals /1,2/. The basic idea is to detect the true positions of unit cells in a crystalline array by means of correlation functions and to average them by real-space superposition of the aligned motifs. The signal-to-noise ratio improves with the square root of the number of motifs included in the average. Unlike filtering in the Fourier domain, CAV corrects for lateral displacements of the unit cells; it thus avoids the loss of resolution that these distortions entail in the conventional approach. Here we report on some variants of the method, aimed at retrieving the maximum of information from images with very low signal-to-noise ratios (low-dose microscopy of unstained or lightly stained specimens) while keeping the procedure economical.
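The basic CAV idea (locate motifs by cross-correlation with a reference, then average the aligned real-space patches) can be sketched as follows; the box size, the peak-suppression window, and the reference handling are simplifying assumptions for illustration.

```python
import numpy as np

def correlation_average(image, reference, n_peaks, box):
    """Correlation averaging: locate motif positions with an FFT-based
    cross-correlation against a reference, then average the aligned
    real-space patches (box x box, anchored at the detected offsets)."""
    cc = np.real(np.fft.ifft2(
        np.fft.fft2(image) * np.conj(np.fft.fft2(reference, s=image.shape))))
    avg = np.zeros((box, box))
    count = 0
    half = box // 2
    cc_work = cc.copy()
    for _ in range(n_peaks):
        py, px = np.unravel_index(np.argmax(cc_work), cc_work.shape)
        # Suppress the neighbourhood so the next maximum is a new motif
        cc_work[max(0, py - half):py + half, max(0, px - half):px + half] = -np.inf
        if py + box <= image.shape[0] and px + box <= image.shape[1]:
            avg += image[py:py + box, px:px + box]
            count += 1
    return avg / max(count, 1)
```

Averaging N aligned noisy copies of the motif improves the signal-to-noise ratio by roughly the square root of N, which is the gain the abstract refers to.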


2021 ◽  
Vol 7 (1) ◽  
Author(s):  
Chih-Chuen Lin ◽  
Phani Motamarri ◽  
Vikram Gavini

Abstract
We present a tensor-structured algorithm for efficient large-scale density functional theory (DFT) calculations that constructs a Tucker tensor basis adapted to the Kohn–Sham Hamiltonian and localized in real space. The proposed approach uses an additive separable approximation to the Kohn–Sham Hamiltonian and an L1 localization technique to generate the 1-D localized functions that constitute the Tucker tensor basis. Numerical results show that the resulting Tucker tensor basis exhibits exponential convergence of the ground-state energy with increasing Tucker rank. Further, the proposed tensor-structured algorithm demonstrates sub-quadratic scaling with system size for systems both with and without a band gap, involving many thousands of atoms. This reduced-order scaling also allows the proposed approach to outperform plane-wave DFT implementations for systems beyond 2000 electrons.
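The Tucker factorization underlying such a basis can be computed generically with a truncated higher-order SVD (HOSVD). The sketch below shows that standard construction for a real tensor in NumPy; it is not the authors' adaptive, Hamiltonian-aware procedure.

```python
import numpy as np

def hosvd(T, ranks):
    """Truncated higher-order SVD of a real tensor: one orthonormal factor
    per mode from the SVD of the mode unfolding, plus the projected core."""
    factors = []
    for mode, r in enumerate(ranks):
        unfold = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)
        U, _, _ = np.linalg.svd(unfold, full_matrices=False)
        factors.append(U[:, :r])
    core = T
    for U in factors:
        # Contract the current leading mode with its factor; the modes
        # cycle, so the final core has shape `ranks` in the original order.
        core = np.tensordot(core, U, axes=([0], [0]))
    return core, factors

def tucker_reconstruct(core, factors):
    """Multiply the core back by the factors: T = core x_1 U1 x_2 U2 ..."""
    T = core
    for U in factors:
        T = np.tensordot(T, U, axes=([0], [1]))
    return T
```

For a tensor of exact multilinear rank equal to the truncation ranks, this reconstruction is exact, which is the regime the exponential-convergence result exploits as the Tucker rank grows.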


2021 ◽  
Vol 502 (3) ◽  
pp. 3976-3992
Author(s):  
Mónica Hernández-Sánchez ◽  
Francisco-Shu Kitaura ◽  
Metin Ata ◽  
Claudio Dalla Vecchia

Abstract
We investigate higher-order symplectic integration strategies within Bayesian cosmic density field reconstruction methods. In particular, we study the fourth-order discretization of the Hamiltonian equations of motion (EoM). This is achieved by recursively applying the basic second-order leap-frog scheme (considering a single evaluation of the EoM) in a combination of an even number of forward time-integration steps with a single intermediate backward step. This largely reduces the number of evaluations and random gradient computations required in the usual second-order case for high-dimensional problems. We restrict this study to the lognormal-Poisson model, applied to a full-volume halo catalogue in real space on a cubical mesh of 1250 h⁻¹ Mpc side and 256³ cells. Hence, we neglect selection effects, redshift-space distortions, and displacements. We note that these observational and cosmic-evolution effects can be accounted for in subsequent Gibbs-sampling steps within the COSMIC BIRTH algorithm. We find that going from the usual second order to fourth order in the leap-frog scheme shortens the burn-in phase by a factor of at least ∼30. This implies that 75–90 independent samples are obtained while the fastest second-order method converges. After convergence, the correlation lengths indicate an improvement of about a factor of 3 in the number of gradient computations for meshes of 256³ cells. In the considered cosmological scenario, the traditional leap-frog scheme turns out to outperform higher-order integration schemes only for lower-dimensional problems, e.g. meshes with 64³ cells. This gain in computational efficiency can help move towards a full Bayesian analysis of the cosmological large-scale structure for upcoming galaxy surveys.
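The recursive composition described, forward second-order steps around a single intermediate backward step, is the standard Yoshida fourth-order construction. A minimal sketch on a toy harmonic Hamiltonian (not the cosmological sampler) follows; the fourth-order convergence can be checked numerically by halving the step size.

```python
import numpy as np

def leapfrog(q, p, dt, grad):
    """One second-order kick-drift-kick step for H = p^2/2 + U(q)."""
    p = p - 0.5 * dt * grad(q)
    q = q + dt * p
    p = p - 0.5 * dt * grad(q)
    return q, p

def leapfrog4(q, p, dt, grad):
    """Fourth-order Yoshida composition: two forward second-order steps
    around a single intermediate backward (negative time-step) stage."""
    w1 = 1.0 / (2.0 - 2.0 ** (1.0 / 3.0))
    w0 = 1.0 - 2.0 * w1  # negative: the intermediate backward step
    for w in (w1, w0, w1):
        q, p = leapfrog(q, p, w * dt, grad)
    return q, p
```

Each fourth-order step costs three second-order substeps, so the scheme pays off only when the allowed step size (or, in HMC, the shortened burn-in and decorrelation length) more than compensates, as the abstract reports for the high-dimensional meshes.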

