Emulated retinal image capture (ERICA) to test, train and validate processing of retinal images

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Laura K. Young ◽  
Hannah E. Smithson

Abstract
High-resolution retinal imaging systems, such as adaptive optics scanning laser ophthalmoscopes (AOSLO), are increasingly being used for clinical research and fundamental studies in neuroscience. These systems offer unprecedented spatial and temporal resolution of retinal structures in vivo. However, a major challenge is the development of robust and automated methods for processing and analysing these images. We present ERICA (Emulated Retinal Image CApture), a simulation tool that generates realistic synthetic images of the human cone mosaic, mimicking images that would be captured by an AOSLO, with specified image quality and with corresponding ground-truth data. The simulation includes a self-organising mosaic of photoreceptors, the eye movements an observer might make during image capture, and data capture through a real system incorporating diffraction, residual optical aberrations and noise. The retinal photoreceptor mosaics generated by ERICA have a similar packing geometry to human retina, as determined by expert labelling of AOSLO images of real eyes. In the current implementation ERICA outputs convincingly realistic en face images of the cone photoreceptor mosaic, but extensions to other imaging modalities and structures are also discussed. These images and associated ground-truth data can be used to develop, test and validate image processing and analysis algorithms, or to train and validate machine learning approaches. The use of synthetic images has the advantage that neither access to an imaging system nor to human participants is necessary for development.
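The general shape of such a simulation can be illustrated with a minimal sketch: place cones on a jittered hexagonal lattice, render each as a Gaussian intensity profile, and add detector noise. This is not the authors' implementation; the lattice spacing, profile width and noise level below are arbitrary illustrative choices.

```python
import numpy as np

def synthetic_cone_mosaic(size=128, spacing=8, jitter=1.0, noise_sigma=0.05, seed=0):
    """Toy synthetic cone-mosaic image: a jittered hexagonal lattice of
    Gaussian cone profiles plus additive Gaussian noise. Returns the
    image and the ground-truth cone centres."""
    rng = np.random.default_rng(seed)
    centres = []
    row_height = spacing * np.sqrt(3) / 2  # hexagonal row pitch
    y, row = 0.0, 0
    while y < size:
        x = (spacing / 2) if (row % 2) else 0.0  # offset alternate rows
        while x < size:
            centres.append((x + rng.normal(0, jitter), y + rng.normal(0, jitter)))
            x += spacing
        y += row_height
        row += 1
    yy, xx = np.mgrid[0:size, 0:size]
    img = np.zeros((size, size))
    sigma = spacing / 4  # cone profile width, an arbitrary choice
    for cx, cy in centres:
        img += np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / (2 * sigma ** 2))
    img += rng.normal(0, noise_sigma, img.shape)  # detector noise
    return img, np.array(centres)
```

The returned centre list plays the role of the ground-truth data: an algorithm's detected cone positions can be scored against it directly.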


2017 ◽  
Vol 10 (01) ◽  
pp. 1650038 ◽  
Author(s):  
Junlei Zhao ◽  
Fei Xiao ◽  
Jian Kang ◽  
Haoxin Zhao ◽  
Yun Dai ◽  
...  

It is necessary to know the distribution of the Chinese eye's aberrations in a clinical environment to guide the design of high-resolution retinal imaging systems for application across the large Chinese population. We collected the monochromatic wave aberrations of 332 healthy eyes and 344 diseased eyes in a Chinese population across a 6.0-mm pupil. The aberration statistics of Chinese eyes, including healthy eyes and diseased eyes, were analyzed, and differences in aberrations between Chinese and European eyes were identified. On this basis, the requirement for adaptive optics (AO) correction of the Chinese eye's monochromatic aberrations was analyzed. The result showed that a stroke of 20 μm and the ability to correct aberrations up to the 8th Zernike order were needed for reflective wavefront correctors to achieve near diffraction-limited imaging in both groups, for a reference wavelength of 550 nm and a pupil diameter of 6.0 mm. To verify this analysis, an AO flood-illumination system was established, and high-resolution retinal imaging in vivo was achieved for Chinese eyes, including both healthy and diseased eyes.
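The "near diffraction-limited" criterion used in this kind of analysis is commonly made concrete via the Maréchal approximation, which relates the Strehl ratio to the residual RMS wavefront error; a Strehl of 0.8 is the usual diffraction-limited cutoff. A small sketch of that budget calculation (the formula is standard optics, not taken from this paper):

```python
import numpy as np

def strehl_from_rms(rms_nm, wavelength_nm=550.0):
    """Marechal approximation: Strehl ratio ~ exp(-(2*pi*sigma/lambda)^2)
    for residual RMS wavefront error sigma."""
    return float(np.exp(-(2 * np.pi * rms_nm / wavelength_nm) ** 2))

def rms_budget_for_strehl(strehl=0.8, wavelength_nm=550.0):
    """Largest residual RMS wavefront error (nm) still meeting the
    target Strehl ratio, by inverting the Marechal approximation."""
    return wavelength_nm * np.sqrt(-np.log(strehl)) / (2 * np.pi)
```

At 550 nm the Strehl-0.8 budget comes out to roughly 40 nm RMS (about λ/14), which is the kind of residual an AO corrector with sufficient stroke and Zernike order must achieve.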


10.29007/3lks ◽  
2019 ◽  
Author(s):  
Axel Tanner ◽  
Martin Strohmeier

Anomalies in the airspace can provide an indicator of critical events and changes that go beyond aviation. Devising techniques that can detect abnormal patterns can provide intelligence and information ranging from weather to political events. This work presents our latest findings in detecting such anomalies in air traffic patterns using ADS-B data provided by the OpenSky network [8]. After a discussion of specific problems in anomaly detection in air traffic data, we show an experiment in a regional setting, evaluating air traffic densities with the Gini index, and a second experiment investigating runway use at Zurich airport. In the latter case, strong available ground truth data allows us to better understand and confirm the findings of different learning approaches.
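The Gini index applied to traffic densities measures how unevenly traffic is distributed across regions: 0 for a perfectly even spread, approaching 1 when traffic concentrates in a few cells. A minimal sketch of the standard computation (the per-cell counts here are illustrative, not the paper's data):

```python
def gini_index(counts):
    """Gini coefficient of non-negative counts (e.g. per-cell air
    traffic densities), via the sorted-rank identity:
    G = 2 * sum(i * x_i) / (n * sum(x)) - (n + 1) / n, 1-based ranks."""
    xs = sorted(counts)
    n = len(xs)
    total = sum(xs)
    if n == 0 or total == 0:
        return 0.0
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * weighted) / (n * total) - (n + 1) / n
```

A sudden rise in the index for a region would then flag traffic concentrating abnormally, which is the kind of density anomaly the first experiment looks for.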


2016 ◽  
Author(s):  
Roshni Cooper ◽  
Shaul Yogev ◽  
Kang Shen ◽  
Mark Horowitz

Abstract
Motivation: Microtubules (MTs) are polarized polymers that are critical for cell structure and axonal transport. They form a bundle in neurons, but beyond that, their organization is relatively unstudied.
Results: We present MTQuant, a method for quantifying MT organization using light microscopy, which distills three parameters from MT images: the spacing of MT minus-ends, their average length, and the average number of MTs in a cross-section of the bundle. This method allows for robust and rapid in vivo analysis of MTs, rendering it more practical and more widely applicable than commonly-used electron microscopy reconstructions. MTQuant was successfully validated with three ground truth data sets and applied to over 3000 images of MTs in a C. elegans motor neuron.
Availability: MATLAB code is available at http://roscoope.github.io/MTQuant
Contact: [email protected]
Supplementary information: Supplementary data are available at Bioinformatics online.
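The three summary parameters have a simple relationship to a list of microtubules along the bundle. A toy illustration (this is not MTQuant's MATLAB code; it just shows what the three quantities mean, given hypothetical (minus-end position, length) pairs):

```python
def mt_bundle_stats(mts, bundle_length):
    """Toy computation of MTQuant-style summary parameters from a list
    of (minus_end_position, length) pairs along a bundle:
    mean minus-end spacing, mean MT length, and the average number of
    MTs crossing a cross-section (total MT length / bundle length)."""
    minus_ends = sorted(start for start, _ in mts)
    gaps = [b - a for a, b in zip(minus_ends, minus_ends[1:])]
    spacing = sum(gaps) / len(gaps) if gaps else 0.0
    mean_length = sum(length for _, length in mts) / len(mts)
    coverage = sum(length for _, length in mts) / bundle_length
    return spacing, mean_length, coverage
```

The coverage identity (average cross-section count equals total MT length over bundle length) is what lets a light-microscopy intensity measurement stand in for counting individual MTs in EM cross-sections.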


Author(s):  
Aziah Ali ◽  
Wan Mimi Diyana Wan Zaki ◽  
Aini Hussain

Segmentation of blood vessels (BVs) from retinal images is one of the important steps in developing a computer-assisted retinal diagnosis system and has been widely researched, especially for implementing automatic BV segmentation methods. This paper proposes an improvement to an existing retinal BV (RBV) segmentation method by combining the trainable B-COSFIRE filter with adaptive thresholding methods. The B-COSFIRE filter can automatically configure its selectivity given a prototype pattern to be detected, and its segmentation performance is comparable to many published methods, with the advantage of robustness against noise in the retinal background. Instead of using a grid search to find the optimal threshold value for a whole dataset, adaptive thresholding (AT) is used to determine the threshold for each retinal image. The two AT methods investigated in this study were ISODATA and Otsu's method. The proposed method was validated using 40 images from two benchmark datasets for retinal BV segmentation, namely DRIVE and STARE. The validation results indicated that the segmentation performance of the proposed unsupervised method is comparable to the original B-COSFIRE method and other published methods, without requiring ground truth data for a new dataset. The sensitivity and specificity values achieved are 0.7818 and 0.9688 for DRIVE, and 0.7957 and 0.9648 for STARE, respectively.
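Otsu's method, one of the two AT approaches investigated, picks the threshold that maximises the between-class variance of an image's intensity histogram, so each image gets its own threshold without any grid search. A self-contained sketch of the standard algorithm (applied here to a generic value array, not to B-COSFIRE responses specifically):

```python
import numpy as np

def otsu_threshold(values, nbins=256):
    """Otsu's method: return the threshold that maximises between-class
    variance of the intensity histogram of `values` (e.g. a filtered
    retinal image's pixel responses)."""
    hist, edges = np.histogram(values, bins=nbins)
    p = hist.astype(float) / hist.sum()
    omega = np.cumsum(p)                      # class-0 probability
    centres = (edges[:-1] + edges[1:]) / 2
    mu = np.cumsum(p * centres)               # class-0 cumulative mean
    mu_t = mu[-1]                             # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b2 = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    sigma_b2 = np.nan_to_num(sigma_b2, nan=0.0, posinf=0.0, neginf=0.0)
    return centres[int(np.argmax(sigma_b2))]
```

Pixels whose filter response exceeds the returned threshold are labelled vessel; ISODATA differs only in how the class statistics are iterated to a fixed point.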


2020 ◽  
Vol 12 (3) ◽  
pp. 422 ◽  
Author(s):  
Rehman S. Eon ◽  
Charles M. Bachmann ◽  
Christopher S. Lapszynski ◽  
Anna Christina Tyler ◽  
Sarah Goldsmith

This work describes a study using multi-view hyperspectral imagery to retrieve sediment filling factor through inversion of a modified version of the Hapke radiative transfer model. We collected multi-view hyperspectral imagery from a hyperspectral imaging system mounted atop a telescopic mast from multiple locations and viewing angles of a salt panne on a barrier island at the Virginia Coast Reserve Long-Term Ecological Research site. We also collected ground truth data, including sediment bulk density and moisture content, within the common field of view of the collected hyperspectral imagery. For samples below a density threshold for coherent effects, originally predicted by Hapke, the retrieved sediment filling factor correlates well with directly measured sediment bulk density (R² = 0.85). The majority of collected samples satisfied this condition. The onset of the threshold occurs at significantly higher filling factors than Hapke's predictions for dry sediments because the salt panne sediment has significant moisture content. We applied our validated inversion model to successfully map sediment filling factor across the common region of overlap of the multi-view hyperspectral imagery of the salt panne.
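The comparison between retrieved filling factor and measured bulk density rests on a simple physical relation: for sediment of known particle (mineral) density, the filling factor is the bulk density divided by the particle density. A minimal sketch (the quartz-like particle density of 2.65 g/cm³ is an assumed value for illustration, not taken from the paper):

```python
def filling_factor(bulk_density_g_cm3, particle_density_g_cm3=2.65):
    """Sediment filling factor (1 - porosity) from measured bulk
    density, assuming a quartz-like particle density by default."""
    return bulk_density_g_cm3 / particle_density_g_cm3

def porosity(bulk_density_g_cm3, particle_density_g_cm3=2.65):
    """Fractional pore volume, the complement of the filling factor."""
    return 1.0 - filling_factor(bulk_density_g_cm3, particle_density_g_cm3)
```

This is why a linear correlation between Hapke-retrieved filling factor and measured bulk density is the natural validation metric for the inversion.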


2021 ◽  
Vol 13 (9) ◽  
pp. 1697
Author(s):  
Alexander Jenal ◽  
Hubert Hüging ◽  
Hella Ellen Ahrends ◽  
Andreas Bolten ◽  
Jens Bongartz ◽  
...  

UAV-based multispectral multi-camera systems are widely used in scientific research for non-destructive crop traits estimation to optimize agricultural management decisions. These systems typically provide data from the visible and near-infrared (VNIR) domain. However, several key absorption features related to biomass and nitrogen (N) are located in the short-wave infrared (SWIR) domain. Therefore, this study investigates a novel multi-camera system prototype that addresses this spectral gap with a sensitivity from 600 to 1700 nm by implementing dedicated bandpass filter combinations to derive application-specific vegetation indices (VIs). In this study, two VIs, GnyLi and NRI, were applied using data obtained on a single observation date at a winter wheat field experiment located in Germany. Ground truth data were destructively sampled for the entire growing season. Likewise, crop heights were derived from UAV-based RGB image data using an improved approach developed within this study. Based on these variables, regression models were derived to estimate fresh and dry biomass, crop moisture, N concentration, and N uptake. The relationships between the NIR/SWIR-based VIs and the estimated crop traits were successfully evaluated (R²: 0.57 to 0.66). Both VIs were further validated against the sampled ground truth data (R²: 0.75 to 0.84). These results indicate the imaging system's potential for monitoring crop traits in agricultural applications, but further multitemporal validations are needed.
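The VI-to-trait workflow follows a common pattern: compute a normalised band ratio per plot, then fit a least-squares regression against the sampled trait and report R². A generic sketch (the normalised-difference form is the family NRI belongs to, but the specific SWIR band pairs used for GnyLi and NRI are not reproduced here; band inputs below are placeholders):

```python
import numpy as np

def normalized_index(band_a, band_b):
    """Generic normalised-difference vegetation index (a - b) / (a + b),
    computed from two reflectance bands."""
    a, b = np.asarray(band_a, float), np.asarray(band_b, float)
    return (a - b) / (a + b)

def fit_linear(vi, trait):
    """Least-squares line trait ~ slope * vi + intercept, with R^2."""
    slope, intercept = np.polyfit(vi, trait, 1)
    pred = slope * np.asarray(vi) + intercept
    ss_res = np.sum((trait - pred) ** 2)
    ss_tot = np.sum((trait - np.mean(trait)) ** 2)
    return slope, intercept, 1 - ss_res / ss_tot
```

The reported R² ranges (0.57 to 0.66 for trait estimation, 0.75 to 0.84 for validation) are exactly the third return value of such a fit on the respective data splits.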


eLife ◽  
2018 ◽  
Vol 7 ◽  
Author(s):  
Pierre Yger ◽  
Giulia LB Spampinato ◽  
Elric Esposito ◽  
Baptiste Lefebvre ◽  
Stéphane Deny ◽  
...  

In recent years, multielectrode arrays and large silicon probes have been developed to record simultaneously from hundreds to thousands of densely packed electrodes. However, they require novel methods to extract the spiking activity of large ensembles of neurons. Here, we developed a new toolbox to sort spikes from these large-scale extracellular data. To validate our method, we performed simultaneous extracellular and loose patch recordings in rodents to obtain 'ground truth' data, where the solution to the sorting problem is known for one cell. The performance of our algorithm was always close to the best expected performance, over a broad range of signal-to-noise ratios, in vitro and in vivo. The algorithm is entirely parallelized and has been successfully tested on recordings with up to 4225 electrodes. Our toolbox thus offers a generic solution to accurately sort spikes for up to thousands of electrodes.
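Spike sorting pipelines of this kind typically begin with threshold-crossing detection on each channel before clustering and template matching. A minimal single-channel sketch of that first stage (this is not the toolbox's algorithm; the MAD-based noise estimate and 4-SD threshold are common conventions):

```python
import numpy as np

def detect_spikes(trace, thresh_sd=4.0, refractory=30):
    """Minimal threshold-crossing spike detector on one channel: flag
    samples whose magnitude exceeds thresh_sd robust standard
    deviations (MAD-based noise estimate), keeping only the first
    crossing within each refractory window of `refractory` samples."""
    trace = np.asarray(trace, float)
    # Robust noise SD: median absolute deviation scaled for Gaussians.
    noise_sd = np.median(np.abs(trace - np.median(trace))) / 0.6745
    crossings = np.flatnonzero(np.abs(trace) > thresh_sd * noise_sd)
    spikes, last = [], -refractory
    for i in crossings:
        if i - last >= refractory:
            spikes.append(int(i))
            last = i
    return spikes
```

Detected event times then feed the downstream sorting stages, which assign each event to a putative neuron; the loose-patch recordings provide the known answer for one of those neurons.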


2021 ◽  
Author(s):  
Sanket Kadulkar ◽  
Michael Howard ◽  
Thomas Truskett ◽  
Venkat Ganesan

We develop a convolutional neural network (CNN) model to predict the diffusivity of cations in nanoparticle-based electrolytes, and use it to identify the characteristics of morphologies which exhibit optimal transport properties. The ground truth data are obtained from kinetic Monte Carlo (kMC) simulations of cation transport parameterized using a multiscale modeling strategy. We implement deep learning approaches to quantitatively link the diffusivity of cations to the spatial arrangement of the nanoparticles. We then integrate the trained CNN model with a topology optimization algorithm for accelerated discovery of nanoparticle morphologies that exhibit optimal cation diffusivities at a specified nanoparticle loading, and we investigate the ability of the CNN model to quantitatively account for the influence of interparticle spatial correlations on cation diffusivity. Finally, using data-driven approaches, we explore how simple descriptors of nanoparticle morphology correlate with cation diffusivity, thus providing a physical rationale for the observed optimal microstructures. The results of this study highlight the capability of CNNs to serve as surrogate models for structure–property relationships in composites with monodisperse spherical particles, which can in turn be used with inverse methods to discover morphologies that produce optimal target properties.
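The kMC ground truth ultimately reduces trajectories to a diffusivity via the mean squared displacement, D = MSD / (2·d·t) in d dimensions. A toy 1D illustration of that estimator (the paper's simulations use multiscale-parameterized hopping rates on realistic morphologies, not the unbiased coin-flip walk sketched here):

```python
import random

def random_walk_diffusivity(steps=2000, walkers=200, seed=1):
    """Estimate the tracer diffusivity of an unbiased 1D lattice random
    walk from the mean squared displacement: D = MSD / (2 * t).
    For unit steps the exact answer is D = 0.5 lattice units^2/step."""
    rng = random.Random(seed)
    total_sq_disp = 0.0
    for _ in range(walkers):
        x = 0
        for _ in range(steps):
            x += rng.choice((-1, 1))  # one kMC hop left or right
        total_sq_disp += x * x
    msd = total_sq_disp / walkers
    return msd / (2 * steps)
```

The CNN surrogate learns to map a morphology image directly to this scalar, replacing the expensive trajectory averaging inside the topology optimization loop.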


2016 ◽  
Author(s):  
Ryan Poplin ◽  
Pi-Chuan Chang ◽  
David Alexander ◽  
Scott Schwartz ◽  
Thomas Colthurst ◽  
...  

Abstract
Next-generation sequencing (NGS) is a rapidly evolving set of technologies that can be used to determine the sequence of an individual's genome [1] by calling genetic variants present in an individual using billions of short, errorful sequence reads [2]. Despite more than a decade of effort and thousands of dedicated researchers, the hand-crafted and parameterized statistical models used for variant calling still produce thousands of errors and missed variants in each genome [3,4]. Here we show that a deep convolutional neural network [5] can call genetic variation in aligned next-generation sequencing read data by learning statistical relationships (likelihoods) between images of read pileups around putative variant sites and ground-truth genotype calls. This approach, called DeepVariant, outperforms existing tools, even winning the "highest performance" award for SNPs in an FDA-administered variant calling challenge. The learned model generalizes across genome builds and even to other mammalian species, allowing non-human sequencing projects to benefit from the wealth of human ground truth data. We further show that, unlike existing tools which perform well on only a specific technology, DeepVariant can learn to call variants in a variety of sequencing technologies and experimental designs, from deep whole genomes from 10X Genomics to Ion Ampliseq exomes. DeepVariant represents a significant step from expert-driven statistical modeling towards more automatic deep learning approaches for developing software to interpret biological instrumentation data.
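The pileup-image idea can be sketched in a few lines: stack the reads covering a candidate site into rows of a tensor whose channels encode per-base features. The toy encoding below (three illustrative channels: reference match, base identity, coverage) is not DeepVariant's actual feature set, and the read/reference inputs are hypothetical.

```python
import numpy as np

def encode_pileup(reads, ref, center, width=21):
    """Toy pileup-to-tensor encoding: each row is one read, columns are
    reference positions in a window around `center`, and the three
    channels are (matches reference?, base identity, coverage mask).
    Reads are (start_position, sequence) pairs assumed to lie within
    the reference string."""
    bases = "ACGT"
    tensor = np.zeros((len(reads), width, 3), dtype=np.float32)
    window_start = center - width // 2
    for row, (read_start, seq) in enumerate(reads):
        for i, base in enumerate(seq):
            col = read_start + i - window_start
            if 0 <= col < width and base in bases:
                tensor[row, col, 0] = 1.0 if base == ref[read_start + i] else 0.0
                tensor[row, col, 1] = bases.index(base) / 3.0
                tensor[row, col, 2] = 1.0  # read covers this column
    return tensor
```

A CNN consuming such tensors can then learn which spatial patterns of mismatches reflect true variants rather than sequencing error, which is the core of the approach described above.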

