Quality assessment for adaptive optics image post-processing by LoG domain matching

2018 ◽  
Vol 47 (11) ◽  
pp. 1111005
Author(s):  
Niu Wei ◽  
Guo Shiping ◽  
Shi Jianglin ◽  
Zou Jianhua ◽  
Zhang Rongzhi
2015 ◽  
Author(s):  
Shiping Guo ◽  
Rongzhi Zhang ◽  
Jisheng Li ◽  
Jianhua Zou ◽  
Changhai Liu ◽  
...  

2008 ◽  
Author(s):  
Joseph Janni ◽  
Stuart Jefferies

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Danuta M. Sampson ◽  
David Alonso-Caneiro ◽  
Avenell L. Chew ◽  
Jonathan La ◽  
Danial Roshandel ◽  
...  

Abstract: Adaptive optics flood illumination ophthalmoscopy (AO-FIO) is an established imaging tool in the investigation of retinal diseases. However, clinical interpretation of AO-FIO images can be challenging because image quality varies, so image quality assessment is essential before interpretation. An image assessment tool will also assist further work on improving image quality, either during acquisition or during post-processing. In this paper, we describe, validate and compare two automated image quality assessment methods: the energy of Laplacian focus operator (LAPE; not commonly used but easily implemented) and a convolutional neural network (CNN; effective but more complex). We also evaluate the effects of subject age, axial length, refractive error, fixation stability, disease status and retinal location on AO-FIO image quality. Based on analysis of 10,250 images of 50 × 50 μm size, at 41 retinal locations, from 50 subjects, we demonstrate that the CNN slightly outperforms LAPE in image quality assessment. The CNN achieves an accuracy of 89%, whereas the LAPE metric achieves 73% and 80% (for linear regression and random forest multiclass classifier methods, respectively) compared to ground truth. Furthermore, retinal location, age and disease are factors that can influence the likelihood of poor image quality.
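The LAPE metric named above is easy to implement. The following is a minimal sketch (not the authors' code) of the energy-of-Laplacian focus measure using a discrete four-neighbour Laplacian; a sharper patch yields a larger value, and the `box_blur` helper is only an assumed way to produce a degraded copy for comparison:

```python
import numpy as np

def lape(image):
    """Energy-of-Laplacian (LAPE) focus measure: sum of squared
    discrete second derivatives; larger values mean a sharper image."""
    img = np.asarray(image, dtype=float)
    # 4-neighbour Laplacian on the interior (avoids padding artefacts)
    lap = (img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:]
           - 4.0 * img[1:-1, 1:-1])
    return float(np.sum(lap ** 2))

def box_blur(img, k=5):
    """Simple k x k box blur (edge-padded), used only to create a
    degraded copy of the input for comparison."""
    p = k // 2
    pad = np.pad(img, p, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / k ** 2

rng = np.random.default_rng(0)
sharp = rng.random((50, 50))        # 50 x 50 patch, as in the study
blurred = box_blur(sharp)
assert lape(sharp) > lape(blurred)  # the sharper patch scores higher
```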


2020 ◽  
Vol 638 ◽  
pp. A98
Author(s):  
F. Cantalloube ◽  
O. J. D. Farley ◽  
J. Milli ◽  
N. Bharmal ◽  
W. Brandner ◽  
...  

Context. The wind-driven halo is a feature observed in images delivered by the latest generation of ground-based instruments equipped with an extreme adaptive optics system and a coronagraphic device, such as SPHERE at the Very Large Telescope (VLT). This signature appears when the atmospheric turbulence conditions vary faster than the adaptive optics loop can correct. The wind-driven halo is observed as a radial extension of the point spread function along a distinct direction (sometimes referred to as the butterfly pattern). When present, it significantly limits the contrast capabilities of the instrument and prevents the extraction of signals at close separation or of extended signals such as circumstellar disks. This limitation is consequential because it contaminates the data for a substantial fraction of the time: about 30% of the data produced by the VLT/SPHERE instrument are affected by the wind-driven halo. Aims. This paper reviews the causes of the wind-driven halo and presents a method for analyzing its contribution directly from the scientific images. Its effect on the raw contrast and on the final contrast after post-processing is demonstrated. Methods. We used simulations and on-sky SPHERE data to verify that the parameters extracted with our method can describe the wind-driven halo in the images. We studied the temporal, spatial, and spectral variation of these parameters to point out their deleterious effect on the final contrast. Results. The data-driven analysis we propose provides information to accurately describe the wind-driven halo contribution in the images. This analysis confirms that the wind-driven halo is a fundamental limitation of the contrast performance finally reached. Conclusions. With the established procedure, we will analyze a large sample of data delivered by SPHERE in order to propose post-processing techniques tailored to removing the wind-driven halo.
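To illustrate how a directional halo can be quantified directly from a science image, the sketch below compares the mean flux in a wedge along a candidate wind direction against the flux in the perpendicular wedge within an annulus. This simple asymmetry ratio is an illustrative assumption, not the parameter-extraction method of the paper:

```python
import numpy as np

def halo_asymmetry(img, angle_deg, r_in=10, r_out=40, half_width_deg=15):
    """Ratio of mean flux in a wedge along `angle_deg` to the mean flux
    in the perpendicular wedge, within an annulus [r_in, r_out] pixels.
    Values well above 1 suggest a butterfly-like radial elongation.
    (Illustrative metric only, not the paper's analysis.)"""
    ny, nx = img.shape
    y, x = np.indices((ny, nx))
    cy, cx = (ny - 1) / 2, (nx - 1) / 2
    r = np.hypot(x - cx, y - cy)
    theta = np.degrees(np.arctan2(y - cy, x - cx))

    def wedge_mean(a0):
        # angular distance to the axis a0, folded so that a0 and
        # a0 + 180 deg belong to the same wedge
        d = np.abs((theta - a0 + 90) % 180 - 90)
        mask = (d < half_width_deg) & (r > r_in) & (r < r_out)
        return img[mask].mean()

    return wedge_mean(angle_deg) / wedge_mean(angle_deg + 90)

# Synthetic halo elongated along the horizontal (0 deg) direction:
yy, xx = np.indices((101, 101))
img = np.exp(-(((xx - 50) / 30.0) ** 2 + ((yy - 50) / 10.0) ** 2))
assert halo_asymmetry(img, 0) > 1.5   # strong along-wind excess
assert halo_asymmetry(img, 90) < 1.0  # perpendicular wedge is fainter
```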


2021 ◽  
Author(s):  
Lea Waller ◽  
Susanne Erk ◽  
Elena Pozzi ◽  
Yara J. Toenders ◽  
Courtney C. Haswell ◽  
...  

The reproducibility crisis in neuroimaging has led to an increased demand for standardized data processing workflows. Within the ENIGMA consortium, we developed HALFpipe (Harmonized AnaLysis of Functional MRI pipeline), an open-source, containerized, user-friendly tool that facilitates reproducible analysis of task-based and resting-state fMRI data through uniform application of preprocessing, quality assessment, single-subject feature extraction, and group-level statistics. It provides state-of-the-art preprocessing using fMRIPrep without requiring input data in Brain Imaging Data Structure (BIDS) format. HALFpipe extends the functionality of fMRIPrep with additional preprocessing steps, which include spatial smoothing, grand mean scaling, temporal filtering, and confound regression. HALFpipe generates an interactive quality assessment (QA) webpage to assess the quality of key preprocessing outputs and of the raw data in general. HALFpipe features myriad post-processing functions at the individual subject level, including calculation of task-based activation, seed-based connectivity, network-template (or dual) regression, atlas-based functional connectivity matrices, regional homogeneity (ReHo), and fractional amplitude of low frequency fluctuations (fALFF), making it possible to evaluate many combinations of features or preprocessing settings in one run. Finally, flexible factorial models can be defined for mixed-effects regression analysis at the group level, including multiple comparison correction. Here, we introduce the theoretical framework in which HALFpipe was developed and present an overview of the main functions of the pipeline. HALFpipe offers the scientific community a major advance toward addressing the reproducibility crisis in neuroimaging, providing a workflow that encompasses preprocessing, post-processing, and QA of fMRI data, while broadening core principles of data analysis for producing reproducible results.
Instructions and code can be found at https://github.com/HALFpipe/HALFpipe.
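As one example of the preprocessing steps listed above, confound regression removes nuisance signals (e.g. motion parameters) from each voxel time series by ordinary least squares. The sketch below is a generic illustration under that assumption, not HALFpipe's implementation; the six-column `motion` matrix is a made-up stand-in for realignment parameters:

```python
import numpy as np

def regress_out(ts, confounds):
    """Confound regression by ordinary least squares: fit the time
    series with the confounds (plus an intercept) and return the
    residuals, which are orthogonal to every confound column."""
    n = len(ts)
    X = np.column_stack([np.ones(n), confounds])
    beta, *_ = np.linalg.lstsq(X, ts, rcond=None)
    return ts - X @ beta

rng = np.random.default_rng(1)
motion = rng.standard_normal((200, 6))   # hypothetical 6 motion parameters
signal = rng.standard_normal(200)        # underlying neural signal
# a contaminated voxel time series: signal plus motion leakage
ts = signal + motion @ np.array([0.5, -0.3, 0.2, 0.1, -0.4, 0.25])
clean = regress_out(ts, motion)
# the residuals are numerically orthogonal to the confounds
assert np.allclose(motion.T @ clean, 0.0, atol=1e-8)
```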


2020 ◽  
Vol 496 (4) ◽  
pp. 4209-4220 ◽  
Author(s):  
R J-L Fétick ◽  
L M Mugnier ◽  
T Fusco ◽  
B Neichel

ABSTRACT One of the major limitations of adaptive optics (AO) image post-processing is the lack of knowledge of the system's point spread function (PSF). The PSF is not always available through direct imaging of isolated point-like objects, such as stars. The use of AO telemetry to predict the PSF also suffers from serious limitations and requires complex and not yet fully operational algorithms. A very attractive solution is to estimate the PSF directly from the scientific images themselves, using blind or myopic post-processing approaches. We demonstrate that such approaches suffer from severe limitations when a joint estimation of object and PSF parameters is performed. As an alternative, we propose here a marginalized PSF identification that overcomes this limitation. The estimated PSF is then used for image post-processing. Here we focus on deconvolution, a post-processing technique that restores the object given the image and the PSF. We show that the PSF estimated by marginalization provides good-quality deconvolution. The full process of marginalized PSF estimation and deconvolution constitutes a successful blind deconvolution technique. It is tested on simulated data to measure its performance. It is also tested on experimental AO images of the asteroid 4-Vesta taken by the Spectro-Polarimetric High-contrast Exoplanet Research (SPHERE)/Zurich Imaging Polarimeter (Zimpol) on the Very Large Telescope to demonstrate its applicability to on-sky data.
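Once a PSF has been identified (by marginalization or otherwise), the restoration step can be carried out in the Fourier domain. The sketch below uses textbook Wiener deconvolution as a stand-in for the paper's deconvolution stage; the constant `snr` regularization is an assumption for illustration, not the authors' estimator:

```python
import numpy as np

def wiener_deconvolve(image, psf, snr=100.0):
    """Wiener deconvolution: divide by the PSF transfer function with a
    constant noise-to-signal regularization to avoid noise blow-up.
    A textbook sketch, not the paper's restoration method."""
    # ifftshift moves the centred PSF so its peak sits at the origin
    H = np.fft.fft2(np.fft.ifftshift(psf), s=image.shape)
    G = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifft2(np.fft.fft2(image) * G))

# Blur a point source with a Gaussian PSF, then restore it.
n = 64
y, x = np.indices((n, n))
psf = np.exp(-((x - n // 2) ** 2 + (y - n // 2) ** 2) / (2 * 2.0 ** 2))
psf /= psf.sum()
obj = np.zeros((n, n))
obj[n // 2, n // 2] = 1.0
img = np.real(np.fft.ifft2(np.fft.fft2(obj)
                           * np.fft.fft2(np.fft.ifftshift(psf))))
rest = wiener_deconvolve(img, psf)
# the restored point source is sharper (higher peak) than the blurred one
assert rest.max() > img.max()
```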

