2D-to-3D image translation of complex nanoporous volumes using generative networks

2021 · Vol 11 (1)
Author(s):  
Timothy I. Anderson ◽  
Bolivia Vega ◽  
Jesse McKinzie ◽  
Saman A. Aryana ◽  
Anthony R. Kovscek

Abstract: Image-based characterization offers a powerful approach to studying geological porous media at the nanoscale, and images are critical to understanding reactive transport mechanisms in reservoirs relevant to energy and sustainability technologies such as carbon sequestration, subsurface hydrogen storage, and natural gas recovery. Nanoimaging presents a trade-off, however, between higher-contrast sample-destructive and lower-contrast sample-preserving imaging modalities. Furthermore, high-contrast imaging modalities often acquire only 2D images, while 3D volumes are needed to fully characterize a source rock sample. In this work, we present deep learning image translation models to predict high-contrast focused ion beam-scanning electron microscopy (FIB-SEM) image volumes from transmission X-ray microscopy (TXM) images when only 2D paired training data are available. We introduce a regularization method for improving 3D volume generation from 2D-to-2D deep learning image models and apply this approach to translate 3D TXM volumes to FIB-SEM fidelity. We then segment a predicted FIB-SEM volume into a flow simulation domain and calculate the sample's apparent permeability using a lattice Boltzmann method (LBM) technique. Results show that our image translation approach produces simulation domains suitable for flow visualization and allows for accurate characterization of petrophysical properties from non-destructive imaging data.
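The slice-wise use of a 2D-to-2D model on a 3D volume, and the kind of inter-slice coherence that such a regularization targets, can be sketched in a few lines of numpy. This is a minimal illustration, not the paper's method: a toy contrast-stretch stands in for the TXM-to-FIB-SEM generator, and all names are hypothetical.

```python
import numpy as np

def translate_volume(volume, translate_2d):
    # Apply a 2D-to-2D image translation model slice by slice along z.
    return np.stack([translate_2d(volume[z]) for z in range(volume.shape[0])])

def slice_consistency(volume):
    # Mean squared difference between adjacent slices; a regularizer for
    # coherent 3D generation penalizes terms of this kind.
    return float(np.mean((volume[1:] - volume[:-1]) ** 2))

def stretch(img):
    # Toy stand-in "model": contrast-stretch each slice to [0, 1].
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo + 1e-8)

txm = np.random.default_rng(0).random((4, 8, 8))  # mock TXM volume (z, y, x)
fib_like = translate_volume(txm, stretch)         # slice-wise "FIB-SEM" prediction
penalty = slice_consistency(fib_like)             # lower = smoother along z
```

Applying a purely 2D model independently per slice leaves the z-direction uncorrelated, which is exactly why a cross-slice penalty of this form is needed.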

2021 · Vol 10 (1)
Author(s):  
Xinyang Li ◽  
Guoxun Zhang ◽  
Hui Qiao ◽  
Feng Bao ◽  
Yue Deng ◽  
...  

Abstract: The development of deep learning and open access to a substantial collection of imaging data together provide a potential solution for computational image transformation, which is gradually changing the landscape of optical imaging and biomedical research. However, current implementations of deep learning usually operate in a supervised manner, and their reliance on laborious and error-prone data annotation procedures remains a barrier to more general applicability. Here, we propose an unsupervised image transformation to facilitate the utilization of deep learning for optical microscopy, even in some cases in which supervised models cannot be applied. Through the introduction of a saliency constraint, the unsupervised model, named Unsupervised content-preserving Transformation for Optical Microscopy (UTOM), can learn the mapping between two image domains without requiring paired training data while avoiding distortions of the image content. UTOM shows promising performance in a wide range of biomedical image transformation tasks, including in silico histological staining, fluorescence image restoration, and virtual fluorescence labeling. Quantitative evaluations reveal that UTOM achieves stable and high-fidelity image transformations across different imaging conditions and modalities. We anticipate that our framework will encourage a paradigm shift in training neural networks and enable more applications of artificial intelligence in biomedical imaging.
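The shape of such an unsupervised objective, a cycle-consistency term plus a saliency constraint that discourages content distortion, can be sketched as follows. This is a toy numpy version: the threshold-based saliency proxy and the loss weights are illustrative assumptions, not UTOM's actual implementation.

```python
import numpy as np

def saliency_mask(img, thresh=0.5):
    # Crude saliency proxy: foreground = pixels above an intensity threshold.
    return (img > thresh).astype(float)

def utom_style_loss(x, fake_y, cycled_x, lam=10.0, mu=1.0):
    # Cycle-consistency: translating to the other domain and back should
    # reproduce the input, ||F(G(x)) - x||_1.
    cycle = np.mean(np.abs(cycled_x - x))
    # Saliency constraint: the translated image should keep the same
    # foreground content as the input.
    saliency = np.mean(np.abs(saliency_mask(fake_y) - saliency_mask(x)))
    return lam * cycle + mu * saliency

x = np.random.default_rng(0).random((8, 8))
loss_identity = utom_style_loss(x, x, x)  # perfect cycle, same saliency
```

With a perfect cycle and unchanged saliency, the loss is exactly zero; translations that move or delete foreground structures are penalized even when the cycle reconstruction is good.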


Author(s):  
Vu Tuan Hai ◽  
Dang Thanh Vu ◽  
Huynh Ho Thi Mong Trinh ◽  
Pham The Bao

Recent advances in deep learning models have shown promising potential in object removal, the task of replacing undesired objects with appropriate pixel values inferred from the known context. Object removal with deep learning is commonly framed as image-to-image (Img2Img) translation or inpainting. Instead of dealing with a large context, this paper targets a specific application of object removal: erasing the traces of braces from an image of teeth with braces (the braces2teeth problem). We solve the problem with three methods corresponding to different datasets. First, we use the CycleGAN model to handle the case in which paired training data are not available. Second, we create pseudo-paired data to train the Pix2Pix model. Third, we combine GraphCut with a generative inpainting model to build a user-interactive tool that can improve the result when the user is not satisfied with previous outputs. To the best of our knowledge, this study is one of the first attempts to address the braces2teeth problem with deep learning techniques, and it can be applied in various fields, from health care to entertainment.
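The pseudo-paired data idea behind the second method can be illustrated by synthesizing braces-like artifacts over clean images, yielding (input, target) pairs for supervised training. This is a minimal sketch under assumed data; `overlay_braces` and its parameters are hypothetical, not the authors' procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def overlay_braces(teeth, n_brackets=4, size=2, value=0.1):
    # Stamp dark "bracket" squares onto a clean teeth image to synthesize
    # a pseudo-paired (with-braces, clean) training sample.
    braced = teeth.copy()
    h, w = teeth.shape
    for _ in range(n_brackets):
        y = int(rng.integers(0, h - size))
        x = int(rng.integers(0, w - size))
        braced[y:y + size, x:x + size] = value
    return braced

clean = np.full((16, 16), 0.9)        # mock clean-teeth image
with_braces = overlay_braces(clean)   # synthetic braced version
pair = (with_braces, clean)           # Pix2Pix-style input/target pair
```

The appeal of this trick is that the target image is known exactly, so a paired model like Pix2Pix can be trained even though real aligned before/after photographs do not exist.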


2019 · Vol 621 · pp. A59
Author(s):  
T. Stolker ◽  
M. J. Bonse ◽  
S. P. Quanz ◽  
A. Amara ◽  
G. Cugno ◽  
...  

Context. The direct detection and characterization of planetary and substellar companions at small angular separations is a rapidly advancing field. Dedicated high-contrast imaging instruments deliver unprecedented sensitivity, enabling detailed insights into the atmospheres of young low-mass companions. In addition, improvements in data reduction and point spread function (PSF)-subtraction algorithms are equally relevant for maximizing the scientific yield, both from new and archival data sets. Aims. We aim at developing a generic and modular data-reduction pipeline for processing and analysis of high-contrast imaging data obtained with pupil-stabilized observations. The package should be scalable and robust for future implementations and particularly suitable for the 3–5 μm wavelength range where typically thousands of frames have to be processed and an accurate subtraction of the thermal background emission is critical. Methods. PynPoint is written in Python 2.7 and applies various image-processing techniques, as well as statistical tools for analyzing the data, building on open-source Python packages. The current version of PynPoint has evolved from an earlier version that was developed as a PSF-subtraction tool based on principal component analysis (PCA). Results. The architecture of PynPoint has been redesigned with the core functionalities decoupled from the pipeline modules. Modules have been implemented for dedicated processing and analysis steps, including background subtraction, frame registration, PSF subtraction, photometric and astrometric measurements, and estimation of detection limits. The pipeline package enables end-to-end data reduction of pupil-stabilized data and supports classical dithering and coronagraphic data sets. 
As an example, we processed archival VLT/NACO L′ and M′ data of β Pic b and reassessed the brightness and position of the planet with a Markov chain Monte Carlo analysis; we also provide a derivation of the photometric error budget.
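The core of PCA-based PSF subtraction, as used in pipelines of this kind, reduces to fitting eigen-PSFs to the image stack and subtracting each frame's projection onto them. The following is a minimal numpy sketch of that step, not PynPoint's actual API.

```python
import numpy as np

def pca_psf_subtract(frames, n_components=1):
    # Flatten the stack, remove the mean frame, fit principal components
    # (eigen-PSFs) with an SVD, and subtract each frame's projection,
    # leaving residuals in which faint companions can be searched for.
    n, h, w = frames.shape
    X = frames.reshape(n, h * w)
    X = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    basis = Vt[:n_components]                 # rows are eigen-PSFs
    residuals = X - (X @ basis.T) @ basis     # project out the PSF modes
    return residuals.reshape(n, h, w)

# A stack whose frames are scaled copies of one PSF is fully captured by
# a single component, so the residuals vanish.
psf = np.outer(np.hanning(8), np.hanning(8))
frames = np.stack([a * psf for a in (1.0, 1.3, 0.8, 1.1)])
res = pca_psf_subtract(frames, n_components=1)
```

In a real pupil-stabilized reduction the residual frames are then derotated and combined, so that the quasi-static PSF averages out while a true companion adds up coherently.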


2005 · Vol 38 (6) · pp. 2368-2375
Author(s):  
Nick Virgilio ◽  
Basil D. Favis ◽  
Marie-France Pépin ◽  
Patrick Desjardins ◽  
Gilles L'Espérance

2018 · Vol 14 (S345) · pp. 318-319
Author(s):  
M. Mugrauer ◽  
C. Ginski ◽  
N. Vogt ◽  
R. Neuhäuser

Abstract: We carried out a high-contrast imaging search for (sub)stellar companions of young pre-main-sequence stars in the Lupus star-forming region. For this project we utilized NACO/ESO-VLT, operated at the Paranal observatory. In this poster, we present the results of this survey. In several observing campaigns we obtained diffraction-limited deep IR imaging data and detected faint co-moving companions around our targets, whose astrometry and photometry were determined in all observing epochs. The co-moving companions found in our survey exhibit angular separations between about 0.1 and a few arcsec, i.e. projected separations between about 10 and a few hundred au at the average distance of our targets of about 140 pc. Besides several new binary and triple star systems whose multiplicity was revealed in this survey, faint co-moving companions in the substellar mass regime were also identified close to some of our targets.


2020 · Vol 642 · pp. A18
Author(s):  
A. M. Lagrange ◽  
P. Rubini ◽  
M. Nowak ◽  
S. Lacour ◽  
A. Grandjean ◽  
...  

Context. The nearby and young β Pictoris system hosts a well-resolved disk, a directly imaged massive giant planet orbiting at ≃9 au, as well as an inner planet orbiting at ≃2.7 au, which was recently detected through radial velocity (RV). As such, it offers several unique opportunities for detailed studies of planetary system formation and early evolution. Aims. We aim to further constrain the orbital and physical properties of β Pictoris b and c using a combination of high contrast imaging, long-baseline interferometry, and RV data. We also predict the closest approaches or the transit times of both planets, and we constrain the presence of additional planets in the system. Methods. We obtained six additional epochs of SPHERE data, six additional epochs of GRAVITY data, and five additional epochs of RV data. We combined these various types of data in a single Markov-chain Monte Carlo analysis to constrain the orbital parameters and masses of the two planets simultaneously. The analysis takes into account the gravitational influence of both planets on the star and hence their relative astrometry. Secondly, we used the RV and high contrast imaging data to derive the probabilities of presence of additional planets throughout the disk, and we tested the impact of absolute astrometry. Results. The orbital properties of both planets are constrained, with semi-major axes of 9.8 ± 0.4 au and 2.7 ± 0.02 au for b and c, respectively, and eccentricities of 0.09 ± 0.1 and 0.27 ± 0.07, assuming the HIPPARCOS distance. We note that despite these low fitting error bars, the eccentricity of β Pictoris c might still be overestimated. If no prior is provided on the mass of β Pictoris b, we obtain a very low value that is inconsistent with what is derived from brightness-mass models. When we place an evolutionary-model-motivated prior on the mass of β Pictoris b, we find a solution in the 10–11 MJup range.
Conversely, β Pictoris c’s mass is well constrained, at 7.8 ± 0.4 MJup, assuming both planets are on coplanar orbits. These values depend on the assumptions on the distance of the β Pictoris system. The absolute astrometry HIPPARCOS-Gaia data are consistent with the solutions presented here at the 2σ level, but these solutions are fully driven by the relative astrometry plus RV data. Finally, we derive unprecedented limits on the presence of additional planets in the disk. We can now exclude the presence of planets that are more massive than about 2.5 MJup closer than 3 au, and more massive than 3.5 MJup between 3 and 7.5 au. Beyond 7.5 au, we exclude the presence of planets that are more massive than 1–2 MJup. Conclusions. Combining relative astrometry and RVs allows one to precisely constrain the orbital parameters of both planets and to set mass upper limits on potential additional planets throughout the disk. The mass of β Pictoris c is also well constrained, while additional RV data with appropriate observing strategies are required to properly constrain the mass of β Pictoris b.
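The Markov-chain Monte Carlo machinery behind such orbit fits can be illustrated with a toy one-parameter Metropolis sampler. This is a didactic stand-in only: the paper's actual posterior jointly fits all orbital elements and masses to relative astrometry and RV, whereas here a single parameter (loosely, a semi-major axis in au) is estimated from mock Gaussian measurements.

```python
import numpy as np

rng = np.random.default_rng(1)

def metropolis(log_post, x0, n_steps=5000, step=0.1):
    # Minimal Metropolis sampler: propose a Gaussian step, accept with
    # probability min(1, exp(log_post(prop) - log_post(x))).
    x, lp = x0, log_post(x0)
    chain = []
    for _ in range(n_steps):
        prop = x + step * rng.normal()
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:
            x, lp = prop, lp_prop
        chain.append(x)
    return np.array(chain)

# Toy data: 50 noisy measurements of one parameter (e.g. "a" in au).
truth, sigma = 9.8, 0.4
data = truth + sigma * rng.normal(size=50)
log_post = lambda a: -0.5 * np.sum((data - a) ** 2) / sigma**2
chain = metropolis(log_post, x0=5.0)
estimate = chain[1000:].mean()        # posterior mean after burn-in
```

The same recipe scales to the multi-parameter case: the chain then samples the joint posterior over orbital elements, and the quoted error bars are credible intervals of its marginals.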


2019 · Vol 631 · pp. A107
Author(s):  
S. Peretti ◽  
D. Ségransan ◽  
B. Lavie ◽  
S. Desidera ◽  
A.-L. Maire ◽  
...  

Context. The study of brown dwarfs and exoplanets detected with high-contrast imaging depends strongly on evolutionary models. To estimate the mass of a directly imaged substellar object, its extracted photometry or spectrum is adjusted with model spectra together with the estimated age of the system. These models still need to be properly tested and constrained. HD 4747B is a brown dwarf close to the H-burning mass limit, orbiting a nearby (d = 19.25 ± 0.58 pc), solar-type star (G9V); it has been observed with the radial velocity method for almost two decades. Its companion was also recently detected by direct imaging, allowing a complete study of this particular object. Aims. We aim to fully characterize HD 4747B by combining a well-constrained dynamical mass and a study of its observed spectral features in order to test evolutionary models for substellar objects and to characterize its atmosphere. Methods. We combined the radial velocity measurements of the High Resolution Echelle Spectrometer (HIRES) and CORALIE taken over two decades and high-contrast imaging from several epochs of NACO, NIRC2, and SPHERE to obtain a dynamical mass. From the SPHERE data we obtained a low-resolution spectrum of the companion from the Y to H band, and two narrow-bandwidth photometric measurements in the K band. A study of the primary star also allowed us to constrain the age of the system and its distance. Results. Thanks to the new SPHERE epoch and NACO archival data combined with previous imaging data and high-precision radial velocity measurements, we were able to derive a well-constrained orbit. The high eccentricity (e = 0.7362 ± 0.0025) of HD 4747B is confirmed, and the inclination and semi-major axis are derived (i = 47.3 ± 1.6°, a = 10.01 ± 0.21 au). We derive a dynamical mass of mB = 70.0 ± 1.6 MJup, which is higher than found in a previous study but in better agreement with the models.
By comparing the object with spectra of known brown dwarfs, we derive a spectral type of L9 and an effective temperature of 1350 ± 50 K. With a retrieval analysis we constrain the oxygen and carbon abundances and compare them with the values for the HR 8799 planets.


2018 · Vol 611 · pp. A23
Author(s):  
S. Hunziker ◽  
S. P. Quanz ◽  
A. Amara ◽  
M. R. Meyer

Aims. Ground-based observations at thermal infrared wavelengths suffer from large background radiation due to the sky, the telescope, and warm surfaces in the instrument. This significantly limits the sensitivity of ground-based observations at wavelengths longer than ~3 μm. The main purpose of this work is to analyse this background emission in infrared high-contrast imaging data as illustrative of the problem, show how it can be modelled and subtracted, and demonstrate that doing so can improve the detection of faint sources, such as exoplanets. Methods. We used principal component analysis (PCA) to model and subtract the thermal background emission in three archival high-contrast angular differential imaging datasets in the M′ and L′ filters. We used an M′ dataset of β Pic to describe in detail how the algorithm works and explain how it can be applied. The results of the background subtraction are compared to the results from a conventional mean background subtraction scheme applied to the same dataset. Finally, both methods for background subtraction are compared by performing complete data reductions. We analysed the results from the M′ dataset of HD 100546 only qualitatively. For the M′ band dataset of β Pic and the L′ band dataset of HD 169142, which was obtained with an annular groove phase mask vortex vector coronagraph, we also calculated and analysed the achieved signal-to-noise ratio (S/N). Results. We show that applying PCA is an effective way to remove spatially and temporally varying thermal background emission down to close to the background limit. The procedure also proves to be very successful at reconstructing the background that is hidden behind the point spread function. In the complete data reductions, we find at least qualitative improvements for HD 100546 and HD 169142; however, we fail to find a significant increase in the S/N of β Pic b.
We discuss these findings and argue that in particular datasets with strongly varying observing conditions or infrequently sampled sky background will benefit from the new approach.
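The difference between the conventional mean scheme and a PCA scheme can be sketched in a few lines of numpy. This is a simplified illustration, not the authors' implementation: the "sky" is a flat pattern whose amplitude drifts between frames, plus a faint point source in the science frame.

```python
import numpy as np

def mean_background_subtract(science, sky_frames):
    # Conventional scheme: subtract the mean of the sky frames.
    return science - sky_frames.mean(axis=0)

def pca_background_subtract(science, sky_frames, n_components=1):
    # Model the varying thermal background with the principal components
    # of the sky frames and subtract the science frame's projection.
    n, h, w = sky_frames.shape
    S = sky_frames.reshape(n, h * w)
    mean_sky = S.mean(axis=0)
    _, _, Vt = np.linalg.svd(S - mean_sky, full_matrices=False)
    basis = Vt[:n_components]
    sci = science.reshape(h * w) - mean_sky
    model = (sci @ basis.T) @ basis
    return (sci - model).reshape(h, w)

pattern = np.ones((8, 8))                          # background shape
sky = np.stack([c * pattern for c in (1.0, 2.0, 4.0)])
science = 2.0 * pattern                            # yet another amplitude
science[0, 0] += 1.0                               # faint point source
res_mean = mean_background_subtract(science, sky)
res_pca = pca_background_subtract(science, sky)
```

Because the science frame's background amplitude differs from the average of the sky frames, the mean scheme leaves a residual offset everywhere, while the PCA model fits the amplitude per frame and suppresses the background without erasing the point source.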


2021
Author(s):  
Daniel Cai ◽  
Abbas Roayaei Ardakany ◽  
Ferhat Ay

Autoimmune blistering diseases (AIBDs) are rare, chronic disorders of the skin and mucous membranes, with a broad spectrum of clinical manifestations and morphological lesions. Considering that 1) diagnosis of AIBDs is a challenging task, owing to their rarity and heterogeneous clinical features, and 2) misdiagnoses are common, and the resulting diagnostic delay is a major factor in their high mortality rate, patient prognosis stands to benefit greatly from the development of a computer-aided diagnostic (CAD) tool for AIBDs. Artificial intelligence (AI) research into rare skin diseases like AIBDs is severely underrepresented, due to a variety of factors, foremost a lack of large-scale, uniformly curated imaging data. A study by Julia S. et al. finds that, as of 2020, there exist no machine learning studies on rare skin diseases [1], despite the demonstrated success of AI in the field of dermatology. Whereas previous research has primarily looked to improve performance through extensive data collection and preprocessing, this approach remains tedious and impractical for rarer, under-documented skin diseases. This study proposes a novel approach to the development of a deep-learning-based diagnostic aid for AIBDs. Leveraging the visual similarities between our imaging data and pre-existing repositories, we demonstrate automated classification of AIBDs using techniques such as transfer learning and data augmentation over a convolutional neural network (CNN). A three-loop training process is used, combining feature extraction and fine-tuning to improve performance on our classification task. Our final model retains an accuracy nearly on par with dermatologists' diagnostic accuracy on more common skin cancers. Given the efficacy of our predictive model despite low amounts of training data, this approach holds the potential to benefit clinical diagnoses of AIBDs.
Furthermore, our approach can be extrapolated to the diagnosis of other clinically similar rare diseases.
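The data-augmentation side of such a low-data pipeline is simple to sketch: label-preserving geometric transforms multiply each annotated sample. This is a generic numpy illustration, not the authors' exact augmentation set.

```python
import numpy as np

def augment(img):
    # Label-preserving geometric augmentations for a small imaging
    # dataset: horizontal/vertical flips and 90-degree rotations.
    variants = [img, np.fliplr(img), np.flipud(img)]
    variants.extend(np.rot90(img, k) for k in (1, 2, 3))
    return variants

image = np.arange(16.0).reshape(4, 4)   # mock lesion image
augmented = augment(image)              # 6 training variants per sample
```

Combined with a pre-trained backbone (the transfer-learning half of the recipe), this kind of augmentation is what makes training feasible when only a few hundred labeled images exist.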


2019
Author(s):  
Olle G. Holmberg ◽  
Niklas D. Köhler ◽  
Thiago Martins ◽  
Jakob Siedlecki ◽  
Tina Herold ◽  
...  

Abstract: Access to large, annotated samples represents a considerable challenge for training accurate deep-learning models in medical imaging. While current leading-edge transfer learning from pre-trained models can help in cases lacking data, it limits design choices and generally results in the use of unnecessarily large models. We propose a novel, self-supervised training scheme for obtaining high-quality, pre-trained networks from unlabeled, cross-modal medical imaging data, which allows for creating accurate and efficient models. We demonstrate this by accurately predicting optical coherence tomography (OCT)-based retinal thickness measurements from simple infrared (IR) fundus images. Subsequently, the learned representations outperformed advanced classifiers on a separate diabetic retinopathy classification task in a scenario of scarce training data. Our cross-modal, three-stage scheme effectively replaced 26,343 diabetic retinopathy annotations with 1,009 semantic segmentations on OCT and reached the same classification accuracy using only 25% of the fundus images, without any drawbacks, since OCT is not required for predictions. We expect this concept to also apply to other multimodal clinical data (imaging, health records, and genomics) and to corresponding sample-starved learning problems.
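The two-stage idea, pretraining on a cross-modal pretext task and reusing the learned representation downstream, can be caricatured with linear models. This is purely illustrative numpy on mock data: least squares stands in for network training, and every variable here is an assumption, not the paper's data or architecture.

```python
import numpy as np

rng = np.random.default_rng(2)

# Stage 1 (pretext task): learn a linear "encoder" W that predicts
# OCT-derived retinal-thickness targets from IR-image features.
ir = rng.normal(size=(200, 10))                   # mock IR feature vectors
W_true = rng.normal(size=(10, 3))                 # hidden IR -> thickness map
thickness = ir @ W_true + 0.1 * rng.normal(size=(200, 3))
W, *_ = np.linalg.lstsq(ir, thickness, rcond=None)

# Stage 2 (downstream task): reuse the learned representation ir @ W for
# classification; OCT is no longer needed at prediction time.
rep = np.hstack([ir @ W, np.ones((200, 1))])      # representation + bias
labels = (thickness[:, 0] > 0).astype(float)      # mock disease labels
w_clf, *_ = np.linalg.lstsq(rep, labels, rcond=None)
accuracy = float(np.mean((rep @ w_clf > 0.5) == (labels > 0.5)))
```

The point of the construction is the same as in the paper: the pretext targets come "for free" from a second modality, so the downstream classifier needs far fewer manual annotations.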

