Synthetic OCT data in challenging conditions: three-dimensional OCT and presence of abnormalities

Author(s):  
Hajar Danesh ◽  
Keivan Maghooli ◽  
Alireza Dehghani ◽  
Rahele Kafieh

Abstract. Nowadays, retinal optical coherence tomography (OCT) plays an important role in ophthalmology, and automatic analysis of OCT is of real importance: image denoising facilitates a better diagnosis, and image segmentation and classification are undeniably critical in treatment evaluation. Synthetic OCT was recently considered to provide a benchmark for quantitative comparison of automatic algorithms and to be utilized in the training stage of novel solutions based on deep learning. Due to the complicated data structure of retinal OCTs, only a limited number of delineated OCT datasets are available in the presence of abnormalities; furthermore, the intrinsic three-dimensional (3D) structure of OCT is ignored in many public 2D datasets. We propose a new synthetic method, applicable to 3D data and feasible in the presence of abnormalities like diabetic macular edema (DME). In this method, a limited number of OCT volumes is used during the training step, and an Active Shape Model is used to produce synthetic OCTs along with delineations of retinal boundaries and locations of abnormalities. Statistical comparison of thickness maps showed that the synthetic dataset can be used as a statistically acceptable representative of the original dataset (p > 0.05). Visual inspection of the synthesized vessels was also promising. Regarding the texture features of the synthesized datasets, Q-Q plots were used, and even in cases where the points digressed slightly from the straight line, the p-values of the Kolmogorov–Smirnov test failed to reject the null hypothesis, indicating the same distribution of texture features in the real and the synthetic data. The proposed algorithm provides a unique benchmark for comparison of OCT enhancement methods and a tailored augmentation method to overcome the limited number of OCTs in deep learning algorithms.
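The distribution comparison described above can be sketched with a two-sample Kolmogorov–Smirnov test. The snippet below is a minimal illustration using simulated feature values, not the paper's actual texture features:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated texture-feature values standing in for the real and
# synthetic OCT datasets (illustrative assumption)
real_features = rng.normal(loc=0.5, scale=0.1, size=200)
synthetic_features = rng.normal(loc=0.5, scale=0.1, size=200)

# Two-sample K-S test: a large p-value means we cannot reject the
# null hypothesis that both samples come from the same distribution
statistic, p_value = stats.ks_2samp(real_features, synthetic_features)
print(f"KS statistic = {statistic:.3f}, p = {p_value:.3f}")

if p_value > 0.05:
    print("Distributions are statistically indistinguishable at the 5% level")
```

A small K-S statistic with p above the chosen significance level is what supports treating the synthetic set as representative of the original.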

2021 ◽  
Vol 11 (24) ◽  
pp. 11938
Author(s):  
Denis Zherdev ◽  
Larisa Zherdeva ◽  
Sergey Agapov ◽  
Anton Sapozhnikov ◽  
Artem Nikonorov ◽  
...  

Estimating human poses and behaviour for different activities in virtual and augmented reality (VR/AR) could have numerous beneficial applications. Human fall monitoring is especially important for elderly people and for non-typical activities with VR/AR applications. There are many approaches to improving the fidelity of fall monitoring systems through the use of novel sensors and deep learning architectures; however, there is still a lack of detailed and diverse datasets for training deep learning fall detectors on monocular images. Synthetic data generation based on digital human simulation was implemented and examined using the Unreal Engine. The proposed pipeline provides automatic “playback” of various scenarios for digital human behaviour simulation, and this paper demonstrates the result of a modular pipeline for synthetic data generation of digital human interaction with 3D environments. We used the generated synthetic data to train a Mask R-CNN-based segmentation of the falling person’s interaction area. It is shown that, by training the model with simulation data, it is possible to recognize a falling person with an accuracy of 97.6% and classify the type of the person’s interaction impact. The proposed approach also covers a variety of scenarios that can have a positive effect at the deep learning training stage in other human action estimation tasks in a VR/AR environment.


2021 ◽  
Author(s):  
Claudia Emde ◽  
Huan Yu ◽  
Arve Kylling ◽  
Michel van Roozendael ◽  
Kerstin Stebel ◽  
...  

Abstract. Retrievals of trace gas concentrations from satellite observations are mostly performed for clear regions or regions with low cloud coverage. However, even fully clear pixels can be affected by clouds in the vicinity, either by shadowing or by scattering of radiation from clouds into the clear region. Quantifying the error of retrieved trace gas concentrations due to cloud scattering is a difficult task. One possibility is to generate synthetic data by three-dimensional (3D) radiative transfer simulations using realistic 3D atmospheric input data, including 3D cloud structures. Retrieval algorithms may be applied to the synthetic data, and comparison with the known input trace gas concentrations yields the retrieval error due to cloud scattering. In this paper we present a comprehensive synthetic dataset generated using the Monte Carlo radiative transfer model MYSTIC. The dataset includes simulated spectra in two spectral ranges (400–500 nm and the O2A-band from 755–775 nm), as well as layer air mass factors (layer-AMFs) calculated at 460 nm. All simulations are performed for a fixed background atmosphere and various sun positions, viewing directions, and surface albedos. Two cloud setups are considered: the first includes simple box clouds with various geometrical and optical thicknesses, which can be used to systematically investigate the sensitivity of the retrieval error to solar zenith angle, surface albedo, and cloud parameters; corresponding 1D simulations are also provided. The second includes realistic three-dimensional clouds from an ICON large eddy simulation (LES) for a region covering Germany and parts of the surrounding countries. The scene includes cloud types typical for central Europe, such as shallow cumulus, convective cloud cells, cirrus, and stratocumulus. This large dataset can be used to quantify the trace gas concentration retrieval error statistically.
Along with the dataset, the impact of horizontal photon transport on reflectance spectra and layer-AMFs is analyzed for the box-cloud scenarios. Moreover, the impact of 3D cloud scattering on the NO2 vertical column density (VCD) retrieval is presented for a specific LES case. We find that the retrieval error is largest in cloud shadow regions, where the NO2 VCD is underestimated by more than 20 %. The dataset is available to the scientific community to assess the behavior of trace gas retrieval algorithms and cloud correction schemes in cloud conditions with 3D structure.
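The layer air mass factors discussed above generalize the simple geometric (clear-sky, plane-parallel) air mass factor, which can be sketched as follows. This is a common textbook approximation that ignores scattering and Earth curvature, not the MYSTIC computation itself:

```python
import math

def geometric_amf(sza_deg: float, vza_deg: float) -> float:
    """Clear-sky geometric air mass factor: the ratio of the slant light
    path (sun -> ground -> satellite) to the vertical column, in the
    plane-parallel, non-scattering approximation."""
    return 1.0 / math.cos(math.radians(sza_deg)) + 1.0 / math.cos(math.radians(vza_deg))

# Nadir view with the sun overhead: the slant path traverses the
# vertical column twice (down and back up), so AMF = 2
print(geometric_amf(0.0, 0.0))
```

Deviations of the simulated layer-AMFs from this geometric baseline are precisely what quantifies the 3D cloud scattering effects in the dataset.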


Author(s):  
S. Fedorova ◽  
A. Tono ◽  
M. S. Nigam ◽  
J. Zhang ◽  
A. Ahmadnia ◽  
...  

Abstract. With the growing interest in deep learning algorithms and computational design in the architectural field, the need for large, accessible and diverse architectural datasets increases. Due to the complexity of such 3D datasets, the most widespread techniques of 3D scanning and manual building modeling are very time-consuming, which does not allow for a sufficiently large open-source dataset. We decided to tackle this problem by constructing a field-specific synthetic data generation pipeline that generates an arbitrary amount of 3D data along with the associated 2D and 3D annotations. The variety of annotations and the flexibility to customize the generated building and dataset parameters make this framework suitable for multiple deep learning tasks, including geometric deep learning that requires direct 3D supervision. In creating our building data generation pipeline, we leveraged the experts’ architectural knowledge in order to construct a framework that would be modular, extendable and would provide a sufficient amount of class-balanced data samples. Moreover, we purposefully involve the researcher in the dataset customization, allowing the introduction of additional building components, material textures, building classes, number and type of annotations, as well as the number of views per 3D model sample. In this way, the framework satisfies different research requirements and is adaptable to a large variety of tasks. All code and data are made publicly available: https://cdinstitute.github.io/Building-Dataset-Generator/.


2019 ◽  
Author(s):  
Max Highsmith ◽  
Oluwatosin Oluwadare ◽  
Jianlin Cheng

Abstract. Motivation: The three-dimensional (3D) organization of an organism’s genome and chromosomes plays a significant role in many biological processes. Currently, methods exist for modeling chromosomal 3D structure using contact matrices generated via chromosome conformation capture (3C) techniques such as Hi-C. However, the effectiveness of these methods is inherently bottlenecked by the quality of the Hi-C data, which may be corrupted by experimental noise. Consequently, it is valuable to develop methods for eliminating the impact of noise on the quality of reconstructed structures. Results: We develop unsupervised and semi-supervised deep learning algorithms (i.e., deep convolutional autoencoders) to denoise Hi-C contact matrix data and improve the quality of chromosome structure predictions. When applied to noisy synthetic contact matrices of the yeast genome, our network demonstrates consistent improvement across metrics for contact matrix similarity, including Pearson correlation, Spearman correlation and signal-to-noise ratio. Positive improvement across these metrics is seen consistently across a wide space of parameters for both gaussian and poisson noise functions. Contact: [email protected] and [email protected]
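The similarity metrics named above (Pearson correlation, Spearman correlation, signal-to-noise ratio) can be sketched on a toy synthetic contact matrix. The distance-decay matrix model and Poisson noise level here are illustrative assumptions, not the paper's yeast data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Toy synthetic Hi-C contact matrix: contact frequency decays with
# genomic distance |i - j| (a common simplifying assumption)
n = 50
idx = np.arange(n)
clean = 100.0 / (1.0 + np.abs(idx[:, None] - idx[None, :]))

# Corrupt with Poisson (shot) noise, mimicking sequencing counts
noisy = rng.poisson(clean).astype(float)

def similarity(a, b):
    """Pearson, Spearman and SNR (dB) between two contact matrices."""
    pearson = stats.pearsonr(a.ravel(), b.ravel())[0]
    spearman = stats.spearmanr(a.ravel(), b.ravel())[0]
    snr = 10.0 * np.log10(np.sum(a ** 2) / np.sum((a - b) ** 2))
    return pearson, spearman, snr

pearson, spearman, snr = similarity(clean, noisy)
print(pearson, spearman, snr)
```

A denoiser is judged by how much it raises all three metrics between its output and the clean matrix, relative to the noisy input.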


2019 ◽  
Vol 9 (1) ◽  
Author(s):  
Kenneth W. Dunn ◽  
Chichen Fu ◽  
David Joon Ho ◽  
Soonam Lee ◽  
Shuo Han ◽  
...  

Abstract. The scale of biological microscopy has increased dramatically over the past ten years, with the development of new modalities supporting collection of high-resolution fluorescence image volumes spanning hundreds of microns if not millimeters. The size and complexity of these volumes is such that quantitative analysis requires automated methods of image processing to identify and characterize individual cells. For many workflows, this process starts with segmentation of nuclei, which, due to their ubiquity, ease of labeling and relatively simple structure, are appealing targets for automated detection of individual cells. However, in the context of large, three-dimensional image volumes, nuclei present many challenges to automated segmentation, such that conventional approaches are seldom effective and/or robust. Techniques based upon deep learning have shown great promise, but enthusiasm for applying these techniques is tempered by the need to generate training data, an arduous task, particularly in three dimensions. Here we present results of DeepSynth, a new technique for nuclear segmentation using neural networks trained on synthetic data. Comparisons with results obtained using commonly used image processing packages demonstrate that DeepSynth provides the superior results associated with deep-learning techniques without the need for manual annotation.


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Kh Tohidul Islam ◽  
Sudanthi Wijewickrema ◽  
Stephen O’Leary

Abstract. Image registration is a fundamental task in image analysis in which the transform that moves the coordinate system of one image to another is calculated. Registration of multi-modal medical images has important implications for clinical diagnosis, treatment planning, and image-guided surgery, as it provides the means of bringing together complementary information obtained from different image modalities. However, since different image modalities have different properties due to their different acquisition methods, it remains a challenging task to find a fast and accurate match between multi-modal images. Furthermore, due to reasons such as ethical issues and the need for human expert intervention, it is difficult to collect a large database of labelled multi-modal medical images. In addition, manual input is required to determine the fixed and moving images as input to registration algorithms. In this paper, we address these issues and introduce a registration framework that (1) creates synthetic data to augment existing datasets, (2) generates ground truth data to be used in the training and testing of algorithms, (3) registers (using a combination of deep learning and conventional machine learning methods) multi-modal images in an accurate and fast manner, and (4) automatically classifies the image modality so that the process of registration can be fully automated. We validate the performance of the proposed framework on CT and MRI images of the head obtained from a publicly available registration database.
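A standard similarity measure behind many multi-modal registration methods is mutual information, which rewards statistical dependence between intensities rather than a linear relationship. The sketch below estimates it from a joint histogram, with simulated stand-ins for the CT and MRI intensities (the abstract does not specify the metric actually used):

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Histogram-based mutual information between two images of equal
    shape; higher values indicate stronger statistical dependence, and
    hence better multi-modal alignment."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p_joint = joint / joint.sum()
    p_a = p_joint.sum(axis=1)           # marginal of img_a
    p_b = p_joint.sum(axis=0)           # marginal of img_b
    nz = p_joint > 0                    # avoid log(0)
    return np.sum(p_joint[nz] * np.log(p_joint[nz] / (p_a[:, None] * p_b[None, :])[nz]))

rng = np.random.default_rng(2)
ct = rng.random((64, 64))               # hypothetical "CT" patch
mri = np.exp(-ct)                       # nonlinearly related "MRI" patch
noise = rng.random((64, 64))            # unrelated image

# Related modalities share far more information than unrelated ones
print(mutual_information(ct, mri), mutual_information(ct, noise))
```

This dependence-based behaviour is exactly why mutual information works across modalities where intensity correlation fails.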


2021 ◽  
Author(s):  
Tristan Meynier Georges ◽  
Maria Anna Rapsomaniki

Recent studies have revealed the importance of three-dimensional (3D) chromatin structure in the regulation of vital biological processes. Contrary to protein folding, no experimental procedure exists that can directly determine ground-truth 3D chromatin coordinates. Instead, chromatin conformation is studied implicitly using high-throughput chromosome conformation capture (Hi-C) methods that quantify the frequency of all pairwise chromatin contacts. Computational methods that infer the 3D chromatin structure from Hi-C data are thus unsupervised, and limited by the assumption that contact frequency determines Euclidean distance. Inspired by recent developments in deep learning, in this work we explore the idea of transfer learning to address the crucial lack of ground-truth data for 3D chromatin structure inference. We present a novel method, Transfer learning Encoder for CHromatin 3D structure prediction (TECH-3D), that combines transfer learning with creative data generation procedures to reconstruct chromatin structure. Our work outperforms previous deep learning attempts at chromatin structure inference and exhibits results similar to state-of-the-art algorithms on many tests, without making any assumptions about the relationship between contact frequencies and Euclidean distances. Above all, TECH-3D presents a highly creative and novel approach, paving the way for future deep learning models.


Author(s):  
Masayuki Eguchi ◽  
Akira Kawamura ◽  
Kazuya Tomiyama ◽  
Omachi Shinichiro

It is important to maintain safety and ride quality for toll expressway users in Japan. However, since porous asphalt became the standard road surface, spot defects have gradually spread nationwide. To deal with the problem, this research attempted to develop a less costly but effective way of identifying surface defects. Since transverse data for rutting measurement was the only basic data available from general road profilers, first, quasi-three-dimensional (3D) profile data was obtained by removing gradient effects on the profiles in both the transverse and longitudinal directions. Among the elements examined, the standard deviation (SD) of the quasi-3D profile height, computed over windows matched to the spot defect size, performed best for identifying spot defects, including pumping of the pavement’s underlying layer materials. To improve the efficiency of detecting spot surface defects, deep learning was examined by converting the SD values into visual images. As a result, it was verified that a simplified classification with basic color information of red, green, and blue gave practically the same engineering judgement. Finally, this method of identifying irregularly emerging target defects using deep learning was validated by relearning the target visuals. A good result with high accuracy was achieved with just 150 images for each defect level. This approach may be universally applied anywhere surface profilers are used.
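The windowed standard deviation underlying the defect indicator can be sketched as follows; the window size and the artificial defect are illustrative assumptions, not the study's actual parameters:

```python
import numpy as np

def windowed_sd(height, win=5):
    """Standard deviation of profile height in a sliding window;
    spikes in the SD flag localized (spot) surface defects."""
    n = len(height)
    half = win // 2
    sd = np.empty(n)
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half + 1)
        sd[i] = np.std(height[lo:hi])
    return sd

# Smooth longitudinal profile with one artificial pothole at index 50
x = np.linspace(0.0, 10.0, 100)
height = 0.01 * np.sin(x)
height[50] -= 0.5                      # hypothetical spot defect (metres)
sd = windowed_sd(height)
peak = int(np.argmax(sd))
print(peak)                            # the SD peaks near the defect
```

Mapping such SD values to an RGB color scale then yields the visual images fed to the classifier.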


Author(s):  
B. Erdnüß

Abstract. There is a fundamental relationship between projective geometry and the perspective imaging geometry of a pinhole camera. Projective scales have been used to measure within images since the beginnings of photogrammetry, mostly via the cross-ratio on a straight line. However, there are also projective frames in the plane with interesting connections to affine and projective geometry in three-dimensional space that can be utilized for photogrammetry. This article introduces an invariant on the projective plane, describes its relation to affine geometry, and shows how to use it to reduce the complexity of projective transformations. It describes how the invariant can be used to measure on projectively distorted planes in images and shows applications of this in 3D reconstruction. The article follows two central ideas. One is to measure coordinates in an image relative to each other to gain as much invariance to the viewpoint as possible. The other is to use the remaining variance to determine the 3D structure of the scene and to locate the camera centers. For this, the images are projected onto a common plane in the scene. 3D structure not on the plane occludes different parts of the plane in the images. From this, the positions of the cameras and the 3D structure are obtained.
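The cross-ratio mentioned above is the classical projective invariant on a line. The sketch below verifies its invariance under an arbitrary (hypothetical) 1D projective map, using exact rational arithmetic so the equality check is exact:

```python
from fractions import Fraction

def cross_ratio(a, b, c, d):
    """Cross-ratio (a, b; c, d) of four collinear points given by their
    1D coordinates; invariant under projective transformations."""
    a, b, c, d = map(Fraction, (a, b, c, d))
    return ((a - c) / (a - d)) / ((b - c) / (b - d))

def projective(x, *, m=2, n=1, p=1, q=3):
    """A 1D projective (Moebius) map x -> (m*x + n) / (p*x + q);
    nondegenerate here since m*q - n*p = 5 != 0."""
    return Fraction(m * x + n, p * x + q)

pts = (0, 1, 2, 4)
before = cross_ratio(*pts)
after = cross_ratio(*(projective(x) for x in pts))
print(before, after, before == after)
```

The same invariance is what lets projective scales measure distances on a projectively distorted plane directly in the image.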


Author(s):  
Sara Cuéllar ◽  
Paulo Granados ◽  
Ernesto Fabregas ◽  
Michel Curé ◽  
Hector Vargas ◽  
...  

Scientists and astronomers have attached great importance to the task of discovering new exoplanets, even more so if they are in the habitable zone. To date, more than 4300 exoplanets have been confirmed by NASA using various discovery techniques, including planetary transits, in addition to the use of various databases provided by space- and ground-based telescopes. This article proposes the development of a deep learning system for detecting planetary transits in Kepler Telescope lightcurves. The approach is based on related work from the literature and enhanced by validation with real lightcurves. A CNN classification model is trained from a mixture of real and synthetic data, and validated only with real data different from those used in the training stage. The best ratio of synthetic data is determined by performing an optimisation technique and a sensitivity analysis. The precision, accuracy and true positive rate of the best model are determined and compared with other similar works. The results demonstrate that the use of synthetic data in the training stage can improve transit detection performance on real lightcurves.
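Synthetic training data of the kind described above can be as simple as a box-model transit; the sketch below generates a toy lightcurve with hypothetical depth, period and noise parameters, not the article's actual simulation setup:

```python
import numpy as np

def synthetic_transit(n=1000, depth=0.01, period=200, duration=10,
                      noise=0.002, seed=3):
    """Toy box-model transit lightcurve: periodic flux dips of a given
    fractional depth, plus gaussian photometric noise."""
    rng = np.random.default_rng(seed)
    t = np.arange(n)
    flux = np.ones(n) + rng.normal(0.0, noise, n)
    in_transit = (t % period) < duration   # box-shaped dips
    flux[in_transit] -= depth
    return t, flux, in_transit

t, flux, in_transit = synthetic_transit()
# The mean in-transit flux is measurably lower than out of transit
print(flux[in_transit].mean(), flux[~in_transit].mean())
```

Mixing such labelled synthetic curves with real Kepler lightcurves is what lets the CNN see far more transit examples than the real data alone provide.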

