Imaging Correlography Using Ptychography

2019 ◽  
Vol 9 (20) ◽  
pp. 4377
Author(s):  
Li ◽  
Wen ◽  
Song ◽  
Jiang ◽  
Zhang ◽  
...  

Imaging correlography, an effective method for long-distance imaging, recovers an object from knowledge of its Fourier modulus alone, without needing phase information, and it is insensitive to atmospheric turbulence and optical imperfections. However, the unreliability of traditional phase retrieval algorithms has hindered its development. In this work, we combine imaging correlography with ptychography to overcome this obstacle. Instead of capturing the whole object at once, the object is measured part by part with a probe moving in a ptychographic fashion. A flexible optimization framework is proposed to reconstruct the object rapidly and reliably within a few iterations. In addition, a novel image-space denoising regularization term is added to the loss function to reduce the effect of input noise and improve the perceptual quality of the recovered image. Experiments demonstrate that four-fold resolution gains are achievable with the proposed method, and that satisfactory results on both visual and quantitative metrics can be obtained with one-sixth of the measurements required by conventional imaging correlography. The proposed technique is therefore better suited to long-range practical applications.
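
As a rough illustration of the kind of objective the abstract describes, the sketch below (plain Python/NumPy, not the authors' code) minimizes the Fourier-modulus mismatch summed over ptychographic probe positions with simple Wirtinger-style gradient descent. The probe arrays, learning rate, and random initialization are assumptions, and the paper's denoising regularizer and exact optimizer are not reproduced.

```python
import numpy as np

def modulus_loss_and_grad(obj, probes, measured):
    """Loss sum_j || |F(P_j * obj)| - y_j ||^2 and its Wirtinger-style gradient
    (exact up to the FFT normalization convention, which the step size absorbs)."""
    grad = np.zeros_like(obj)
    loss = 0.0
    for P, y in zip(probes, measured):
        field = np.fft.fft2(P * obj)
        mag = np.abs(field) + 1e-12           # avoid division by zero
        loss += np.sum((mag - y) ** 2)
        residual = (1.0 - y / mag) * field    # gradient in the Fourier plane
        grad += np.conj(P) * np.fft.ifft2(residual)
    return loss, grad

def reconstruct(probes, measured, shape, steps=200, lr=0.5):
    rng = np.random.default_rng(0)
    obj = rng.random(shape) + 1j * rng.random(shape)  # random complex start
    for _ in range(steps):
        _, g = modulus_loss_and_grad(obj, probes, measured)
        obj = obj - lr * g
    return obj
```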

Sensors ◽  
2018 ◽  
Vol 18 (9) ◽  
pp. 3154 ◽  
Author(s):  
Zhixin Li ◽  
Desheng Wen ◽  
Zongxi Song ◽  
Gang Liu ◽  
Weikang Zhang ◽  
...  

Imaging beyond the diffraction limit is of great significance for optical systems. Fourier ptychography (FP) is a novel coherent imaging technique that can achieve this goal, and it is widely used in microscopic imaging. Most phase retrieval algorithms for FP reconstruction are based on Gaussian measurements and cannot be extended straightforwardly to long-range, sub-diffraction imaging setups because of corruption by laser speckle noise. In this work, a new FP reconstruction framework is proposed for macroscopic visible-light imaging. Compared with existing approaches, the reweighted amplitude flow algorithm is adopted for better signal modeling, and the Regularization by Denoising (RED) scheme is introduced to reduce the effects of speckle. Experiments demonstrate that the proposed method obtains state-of-the-art recovery results on both visual and quantitative metrics without increasing computation cost, and that it is flexible enough for real imaging applications.
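
The RED scheme mentioned here augments a data-fidelity term with a denoiser-induced prior. Below is a minimal, hypothetical Python sketch of that idea: `data_grad` stands for the gradient of the (reweighted) amplitude-flow data term, and a Gaussian blur stands in for whatever denoiser the authors actually use.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def red_grad(x, denoiser, lam):
    # Under the RED assumptions (locally homogeneous denoiser with a
    # symmetric Jacobian), the gradient of the prior
    # (lam/2) * x^T (x - D(x)) reduces to lam * (x - D(x)).
    return lam * (x - denoiser(x))

def reconstruct(data_grad, x0, lam=0.1, lr=0.2, steps=100):
    """Gradient descent on data fidelity plus the RED prior."""
    denoiser = lambda z: gaussian_filter(z, sigma=1.0)  # stand-in denoiser
    x = x0.copy()
    for _ in range(steps):
        x = x - lr * (data_grad(x) + red_grad(x, denoiser, lam))
    return x
```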


2020 ◽  
Vol 71 (7) ◽  
pp. 868-880
Author(s):  
Nguyen Hong-Quan ◽  
Nguyen Thuy-Binh ◽  
Tran Duc-Long ◽  
Le Thi-Lan

Along with the strong development of camera networks, video analysis systems have become increasingly popular and have been applied in various practical settings. In this paper, we focus on the person re-identification (person ReID) task, a crucial step of video analysis systems. The purpose of person ReID is to associate multiple images of a given person as that person moves through a network of non-overlapping cameras. Many efforts have been devoted to person ReID. However, most studies deal only with well-aligned bounding boxes that are annotated manually and treated as ideal inputs for person ReID. In practice, when building a fully automated person ReID system, the quality of the two preceding steps, person detection and tracking, can strongly affect ReID performance. The contributions of this paper are twofold. First, a unified framework for person ReID based on deep learning models is proposed. In this framework, a deep neural network for person detection is coupled with a deep-learning-based tracking method. In addition, features extracted from an improved ResNet architecture are used for person representation to achieve higher ReID accuracy. Second, a self-built dataset is introduced and employed to evaluate all three steps of the fully automated person ReID framework.
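
To make the third stage concrete, here is a hypothetical Python/PyTorch sketch of appearance-feature extraction and matching. A stock torchvision ResNet-50 with its classifier removed stands in for the paper's improved ResNet, and the detector and tracker stages are assumed to supply the person crops.

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

# Stand-in for the paper's "improved ResNet": a torchvision ResNet-50
# with the classification head replaced by identity, so the forward
# pass yields a 2048-d appearance embedding per crop.
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
backbone.fc = torch.nn.Identity()
backbone.eval()

@torch.no_grad()
def embed(crops):
    # crops: (N, 3, 256, 128) person bounding-box crops from detection/tracking
    feats = backbone(crops)
    return F.normalize(feats, dim=1)  # L2-normalize for cosine matching

def reid_distance(query_feats, gallery_feats):
    # Cosine distance matrix between query and gallery embeddings;
    # the lowest-distance gallery entry is the re-identification match.
    return 1.0 - query_feats @ gallery_feats.T
```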


Author(s):  
Mourad Talbi ◽  
Med Salim Bouhlel

Background: In this paper, we propose a secure image watermarking technique applicable to both grayscale and color images. It applies SVD (Singular Value Decomposition) in the Lifting Wavelet Transform (LWT) domain to embed a speech image (the watermark) into the host image. Methods: A signature is also used in the embedding and extraction steps. Performance is assessed by computing the PSNR (Peak Signal-to-Noise Ratio), SSIM (Structural Similarity), SNR (Signal-to-Noise Ratio), SegSNR (Segmental SNR) and PESQ (Perceptual Evaluation of Speech Quality). Results: The PSNR and SSIM evaluate the perceptual quality of the watermarked image relative to the original image, while the SNR, SegSNR and PESQ evaluate the perceptual quality of the reconstructed (extracted) speech signal relative to the original speech signal. Conclusion: The results obtained for PSNR, SSIM, SNR, SegSNR and PESQ demonstrate the performance of the proposed technique.
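
A minimal Python sketch of SVD-based embedding in a wavelet subband is given below for illustration. pywt's standard `dwt2` stands in for the paper's lifting wavelet transform, the embedding strength `alpha` and the Haar wavelet are assumptions, and the signature-based embedding and extraction steps are omitted.

```python
import numpy as np
import pywt

def embed_watermark(host, watermark, alpha=0.05):
    """Embed the watermark's singular values into the LL subband of the host.
    host, watermark: 2-D float arrays; alpha controls embedding strength."""
    # Decompose the host image (standard DWT as a stand-in for LWT).
    LL, (LH, HL, HH) = pywt.dwt2(host, 'haar')
    # SVD of the approximation subband and of the watermark image.
    U, S, Vt = np.linalg.svd(LL, full_matrices=False)
    _, Sw, _ = np.linalg.svd(watermark, full_matrices=False)
    # Additively embed the watermark singular values.
    n = min(len(S), len(Sw))
    S_marked = S.copy()
    S_marked[:n] += alpha * Sw[:n]
    # Rebuild the subband and invert the transform.
    LL_marked = U @ np.diag(S_marked) @ Vt
    return pywt.idwt2((LL_marked, (LH, HL, HH)), 'haar')
```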


2021 ◽  
pp. 030573562098727
Author(s):  
Pedro Neto ◽  
Patricia M Vanzella

We report an experiment in which participants (N = 368) were asked to differentiate between major and minor thirds. These intervals were formed either by diatonic tones from the C major scale (tonal condition) or by a subset of tones from the chromatic scale (atonal condition). We hypothesized that in the tonal condition intervals would be perceived as a function of scale-step distances, which we defined as the number of diatonic leaps between two notes of a given musical scale, whereas in the atonal condition intervals would be perceived as a function of cents. If our hypotheses held, performance should be less accurate in the tonal condition, where the scale-step distance is the same for major and minor thirds. The data corroborated our hypotheses, and we suggest that acoustic measurements of intervallic distance (i.e., frequency ratios and cents) are not optimal for describing the perceptual quality of intervals in a tonal context. Finally, our research points to the possibility that, compared with previous models, scale steps and cents might better capture the notion of global versus local auditory processing.
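
For concreteness, the two distance measures contrasted in the study can be computed directly. The small Python sketch below follows the definitions given in the abstract; note that the major third C–E and the minor third D–F span the same number of scale steps, which is exactly why tonal-condition discrimination was predicted to be harder.

```python
import math

C_MAJOR = ['C', 'D', 'E', 'F', 'G', 'A', 'B']

def cents(f1, f2):
    # Interval size in cents: 1200 * log2(f2 / f1)
    return 1200 * math.log2(f2 / f1)

def scale_steps(note1, note2):
    # Number of diatonic leaps between two degrees of the C major scale
    return abs(C_MAJOR.index(note2) - C_MAJOR.index(note1))

# Equal-tempered major third (~400 cents) vs. minor third (~300 cents)...
print(round(cents(261.63, 329.63)))   # C4 -> E4: ~400 cents
print(round(cents(293.66, 349.23)))   # D4 -> F4: ~300 cents
# ...yet both intervals span two scale steps in C major:
assert scale_steps('C', 'E') == scale_steps('D', 'F') == 2
```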


1966 ◽  
Vol 19 (2) ◽  
pp. 169-186 ◽  
Author(s):  
P. G. Reich

In the first part of this series of papers, an outline was given of the approach taken at the Royal Aircraft Establishment to the problems of estimating collision risk and of specifying the quality of navigation needed to make separation standards safe. It was stressed that estimates should be based on intensive observation of flying errors, rather than on speculative theories, and that it is more feasible to develop ‘upper limit’ estimating techniques than those which purport to give the exact risk. In summary, a list of seven ‘requirements’ was given, as a reminder of the essential principles which can so easily be overlooked in the piecemeal task of relating separation standards to collision risk.

The purpose of this paper is to show some of the theoretical techniques which have been developed at the R.A.E. to satisfy five of these requirements. (The remaining two do not call for special techniques and will be dealt with when practical applications are described in Part III.) The paper contains three Appendixes, dealing with the frequency of losing separation in one dimension, the computation of P's from the assumed tail shapes, and the treatment of relative errors. These are not included here but will appear in the off-printed version, which may be obtained from the Royal Aircraft Establishment.

Both this paper and the paper that follows by Mr. Attwooll are Crown copyright and are reproduced with the permission of H.M. Stationery Office.


2015 ◽  
Vol 98 (12) ◽  
pp. 8572-8576 ◽  
Author(s):  
Emily M. Darchuk ◽  
Lisbeth Meunier-Goddik ◽  
Joy Waite-Cusic

2015 ◽  
Vol 22 (4) ◽  
pp. 14-28 ◽  
Author(s):  
Jingxi Xu ◽  
Benjamin W. Wah
