3D fluorescence microscopy data synthesis for segmentation and benchmarking

PLoS ONE
2021
Vol 16 (12)
pp. e0260509
Author(s):
Dennis Eschweiler
Malte Rethwisch
Mareike Jarchow
Simon Koppers
Johannes Stegmaier

Automated image processing approaches are indispensable for many biomedical experiments and help to cope with the increasing amount of microscopy image data in a fast and reproducible way. State-of-the-art deep learning-based approaches in particular most often require large amounts of annotated training data to produce accurate and generalizable outputs, but they are frequently compromised by the general scarcity of such annotated data sets. In this work, we show how conditional generative adversarial networks can be utilized to generate realistic image data for 3D fluorescence microscopy from annotation masks of 3D cellular structures. In combination with mask simulation approaches, we demonstrate the generation of fully-annotated 3D microscopy data sets that we make publicly available for training or benchmarking. An additional positional conditioning of the cellular structures enables the reconstruction of position-dependent intensity characteristics and allows the generation of image data at different quality levels. A patch-wise working principle and a subsequent full-size reassembly strategy are used to generate image data of arbitrary size and for different organisms. We present this as a proof of concept for the automated generation of fully-annotated training data sets that requires only a minimum of manual interaction, thereby alleviating the need for manual annotations.
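
A minimal sketch (not the authors' released code) of the central idea may be helpful: a conditional generator maps a 3D annotation mask plus a positional channel to a synthetic intensity patch, and full-size volumes are reassembled from such patches. The architecture, patch size, and channel layout below are illustrative assumptions.

```python
import torch
import torch.nn as nn

class MaskToImageGenerator(nn.Module):
    """Maps (mask, position) patches to synthetic fluorescence patches."""
    def __init__(self, features=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(2, features, 3, padding=1),   # ch 0: mask, ch 1: position
            nn.ReLU(inplace=True),
            nn.Conv3d(features, features, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(features, 1, 3, padding=1),
            nn.Tanh(),                              # intensities scaled to [-1, 1]
        )

    def forward(self, mask, position):
        return self.net(torch.cat([mask, position], dim=1))

# Patch-wise usage: synthesize 64^3 patches from simulated masks; a full-size
# volume is then reassembled by tiling (averaging overlapping regions).
gen = MaskToImageGenerator()
mask = torch.randint(0, 2, (1, 1, 64, 64, 64)).float()  # simulated cell mask
zpos = torch.linspace(0, 1, 64).view(1, 1, 64, 1, 1).expand_as(mask)  # depth coordinate
fake_patch = gen(mask, zpos)
```

In a full setup such a generator would be trained adversarially against a discriminator on real microscopy patches; the sketch only shows the conditioning interface.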

2021
Author(s):
Lena-Marie Woelk
Sukanya A. Kannabiran
Valerie Brock
Christine E. Gee
Christian Lohr
...  

Live-cell Ca2+ fluorescence microscopy is a cornerstone of cellular signaling analysis and imaging. The demand for high spatial and temporal imaging resolution is, however, intrinsically linked to a low signal-to-noise ratio (SNR) of the acquired spatio-temporal image data, which impedes subsequent image analysis. Advanced deconvolution and image restoration algorithms can partly mitigate the corresponding problems, but are usually defined only for static images. Frame-by-frame application to spatio-temporal image data neglects inter-frame contextual relationships and the temporal consistency of the imaged biological processes. Here, we propose a variational approach to time-dependent image restoration built on entropy-based regularization specifically suited to process low- and lowest-SNR fluorescence microscopy data. The advantage of the presented approach is demonstrated by means of four data sets: synthetic data for in-depth evaluation of the algorithm behavior; two data sets acquired for analysis of initial Ca2+ microdomains in T cells; and, to illustrate the transferability of the methodical concept to different applications, one data set depicting spontaneous Ca2+ signaling in jGCaMP7b-expressing astrocytes. To foster re-use and reproducibility, the source code is made publicly available.
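
The following is a minimal sketch, not the authors' published implementation, of a variational restoration loop in this spirit: a data-fidelity term, a simplified entropy regularizer, and a frame-to-frame coupling term enforcing temporal consistency are minimized jointly over the whole sequence. The quadratic fidelity term and all weights are assumptions.

```python
import torch

def restore(noisy, lam_ent=0.01, lam_t=0.1, steps=200, lr=0.05):
    """noisy: (T, H, W) nonnegative image sequence."""
    u = noisy.clone().clamp(min=1e-6).requires_grad_(True)
    opt = torch.optim.Adam([u], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        fidelity = ((u - noisy) ** 2).sum()         # data-fidelity term
        p = u.clamp(min=1e-6)
        entropy = (p * torch.log(p)).sum()          # simplified entropy penalty
        temporal = ((u[1:] - u[:-1]) ** 2).sum()    # frame-to-frame consistency
        loss = fidelity + lam_ent * entropy + lam_t * temporal
        loss.backward()
        opt.step()
    return u.detach()

restored = restore(torch.rand(10, 32, 32))
```

The key point the sketch illustrates is that the whole sequence is restored at once, so temporal context enters the objective rather than being ignored frame by frame.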


2019
Vol 2019 (4)
pp. 232-249
Author(s):
Benjamin Hilprecht
Martin Härterich
Daniel Bernau

Abstract We present two information leakage attacks that outperform previous work on membership inference against generative models. The first attack allows membership inference without assumptions on the type of the generative model. In contrast to previous evaluation metrics for generative models, such as Kernel Density Estimation, it only considers samples of the model that are close to training data records. The second attack specifically targets Variational Autoencoders, achieving high membership inference accuracy. Furthermore, previous work mostly considers membership inference adversaries who perform single-record membership inference. We argue for considering regulatory actors who perform set membership inference to identify the use of specific datasets for training. The attacks are evaluated on two generative model architectures, Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs), trained on standard image datasets. Our results show that the two attacks yield success rates superior to previous work on most data sets while at the same time making only very mild assumptions. We envision the two attacks, in combination with the formalization of membership inference attack types, as especially useful, for example to enforce data privacy standards and to automatically assess model quality in machine-learning-as-a-service setups. In practice, our work motivates the use of GANs, since they prove less vulnerable to information leakage attacks while producing detailed samples.
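
As a rough paraphrase (not the paper's reference implementation) of the model-agnostic attack: records that attract an unusually large share of generated samples within a small distance are flagged as likely training members. The distance measure, threshold, and decision rule below are assumptions.

```python
import numpy as np

def membership_score(candidate, generated, eps=0.1):
    """Fraction of generated samples within eps of the candidate record."""
    d = np.linalg.norm(generated - candidate, axis=1)
    return np.mean(d < eps)

def infer_membership(candidates, generated, eps=0.1):
    scores = np.array([membership_score(c, generated, eps) for c in candidates])
    # Flag the highest-scoring records as members; in practice the threshold
    # would be calibrated against reference (non-member) data.
    return scores > np.median(scores)

generated = np.random.rand(10000, 64)   # samples drawn from the generative model
candidates = np.random.rand(20, 64)     # records to test for membership
print(infer_membership(candidates, generated))
```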


2009
Vol 4 (9)
pp. 1305-1311
Author(s):
Frederick Klauschen
Masaru Ishii
Hai Qi
Marc Bajénoff
Jackson G Egen
...  

2021
Vol 7 (2)
pp. 755-758
Author(s):
Daniel Wulff
Mohamad Mehdi
Floris Ernst
Jannis Hagenah

Abstract Data augmentation is a common method to make deep learning accessible on limited data sets. However, classical image augmentation methods result in highly unrealistic images on ultrasound data. Another approach is to utilize learning-based augmentation methods, e.g. based on variational autoencoders or generative adversarial networks. However, a large amount of data is necessary to train these models, which is typically not available in scenarios where data augmentation is needed. One solution to this problem could be the transfer of augmentation models between different medical imaging data sets. In this work, we present a qualitative study of the cross-data-set generalization performance of different learning-based augmentation methods for ultrasound image data. We show that knowledge transfer is possible in ultrasound image augmentation and that the augmentation partially results in semantically meaningful transfers of structures, e.g. vessels, across domains.
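
A minimal sketch of the transfer idea, assuming a hypothetical VAE (`vae`) pretrained on a source ultrasound data set and exposing `encode`/`decode` methods (these names are assumptions, not the authors' API): target-domain images are encoded, their latents jittered, and decoded into augmented samples.

```python
import torch

def augment_with_transferred_vae(vae, target_images, sigma=0.2, n_aug=4):
    """target_images: (N, 1, H, W) tensor from the *target* domain."""
    vae.eval()
    augmented = []
    with torch.no_grad():
        mu, logvar = vae.encode(target_images)     # source-trained encoder
        for _ in range(n_aug):
            z = mu + sigma * torch.randn_like(mu)  # jitter around the latent mean
            augmented.append(vae.decode(z))        # source-trained decoder
    return torch.cat(augmented)
```

The point of the sketch is that no target-domain training occurs: the source-trained encoder and decoder are reused as-is, which is precisely the cross-data-set transfer the study evaluates.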


2021
Vol 22 (21)
pp. 11792
Author(s):
Lena-Marie Woelk
Sukanya A. Kannabiran
Valerie J. Brock
Christine E. Gee
Christian Lohr
...  

Live-cell Ca2+ fluorescence microscopy is a cornerstone of cellular signaling analysis and imaging. The demand for high spatial and temporal imaging resolution is, however, intrinsically linked to a low signal-to-noise ratio (SNR) of the acquired spatio-temporal image data, which impedes the subsequent image analysis. Advanced deconvolution and image restoration algorithms can partly mitigate the corresponding problems but are usually defined only for static images. Frame-by-frame application to spatio-temporal image data neglects inter-frame contextual relationships and temporal consistency of the imaged biological processes. Here, we propose a variational approach to time-dependent image restoration built on entropy-based regularization specifically suited to process low- and lowest-SNR fluorescence microscopy data. The advantage of the presented approach is demonstrated by means of four datasets: synthetic data for in-depth evaluation of the algorithm behavior; two datasets acquired for analysis of initial Ca2+ microdomains in T-cells; and finally, to illustrate the transferability of the methodical concept to different applications, one dataset depicting spontaneous Ca2+ signaling in jGCaMP7b-expressing astrocytes. To foster re-use and reproducibility, the source code is made publicly available.


2019
Vol 9 (1)
Author(s):
Makoto Naruse
Takashi Matsubara
Nicolas Chauvet
Kazutaka Kanno
Tianyu Yang
...  

Abstract Generative adversarial networks (GANs) are becoming increasingly important in the artificial construction of natural images and related functionalities, wherein two types of networks called generators and discriminators evolve through adversarial mechanisms. Using deep convolutional neural networks and related techniques, high-resolution and highly realistic scenes, human faces, etc. have been generated. GANs generally require large amounts of genuine training data, as well as vast amounts of pseudorandom numbers. In this study, we utilized chaotic time series generated experimentally by semiconductor lasers for the latent variables of a GAN, whereby the inherent nature of chaos could be reflected or transformed into the generated output data. We show that the similarity in proximity, which describes the robustness of the generated images with respect to minute changes in the input latent variables, is enhanced, while the overall versatility is not severely degraded. Furthermore, we demonstrate that surrogate chaos time series eliminate the signature in the generated images that is originally observed in correspondence with the negative autocorrelation inherent in the chaos sequence. We also address the effects of utilizing chaotic time series to retrieve images from the trained generator.
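
As an illustration of the latent-variable substitution, the sketch below standardizes a chaotic sequence and reshapes it into latent vectors for a trained generator `G`. The logistic map stands in for the experimentally measured laser chaos, and `G` and the latent dimension are assumptions.

```python
import numpy as np

def logistic_series(n, r=3.99, x0=0.3):
    """Deterministic chaotic sequence from the logistic map."""
    x = np.empty(n)
    x[0] = x0
    for i in range(1, n):
        x[i] = r * x[i - 1] * (1.0 - x[i - 1])
    return x

def chaotic_latents(n_samples, dim=100):
    s = logistic_series(n_samples * dim)
    s = (s - s.mean()) / s.std()   # standardize like pseudorandom normal inputs
    return s.reshape(n_samples, dim)

z = chaotic_latents(16)            # consecutive vectors inherit the chaos
# images = G(z)                    # autocorrelation structure (G: trained generator)
```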


Geophysics
2020
Vol 85 (4)
pp. WA173-WA183
Author(s):
Harpreet Kaur
Nam Pham
Sergey Fomel

We have estimated migrated images with meaningful amplitudes matching least-squares migrated images by approximating the inverse Hessian using generative adversarial networks (GANs) in a conditional setting. We use the CycleGAN framework and extend it to a conditional CycleGAN such that the mapping from the migrated image to the true reflectivity is subjected to a velocity attribute condition. This algorithm is applied after migration and is computationally efficient. It produces results comparable to iterative inversion but at a significantly reduced cost. In numerical experiments with synthetic and field data sets, the adopted method improves image resolution, attenuates noise, reduces migration artifacts, and enhances reflection amplitudes. We train the network with three different data sets and test on three other data sets that are not part of the training. Tests on the validation data sets verify the effectiveness of the approach. The field-data example also highlights the effect of the bandwidth of the training data and the quality of the velocity model on the quality of the deep neural network output.
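
A minimal sketch of the conditioning step, not the authors' architecture: the migrated image and a velocity attribute are stacked as input channels, so the migrated-image-to-reflectivity mapping is conditioned on velocity. The network body is a placeholder.

```python
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    def __init__(self, features=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, features, 3, padding=1),  # ch 0: migrated image, ch 1: velocity
            nn.ReLU(inplace=True),
            nn.Conv2d(features, 1, 3, padding=1),  # reflectivity-like output
        )

    def forward(self, migrated, velocity):
        return self.net(torch.cat([migrated, velocity], dim=1))

G = ConditionalGenerator()
migrated = torch.randn(1, 1, 128, 128)
velocity = torch.randn(1, 1, 128, 128)
reflectivity = G(migrated, velocity)   # applied post-migration, per image
```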


Author(s):  
Judy Simon

Computer vision, also known as computational visual perception, is a branch of artificial intelligence that allows computers to interpret digital pictures and videos in a manner comparable to biological vision. It entails the development of techniques for simulating biological vision. The aim of computer vision is to extract more meaningful information from visual input than biological vision does. Computer vision is exploding due to the avalanche of data being produced today. Powerful generative models, such as Generative Adversarial Networks (GANs), are responsible for significant advances in the field of image generation. The focus of this research is on textual content descriptors in the images used by GANs to generate synthetic data from the MNIST dataset, used to either supplement or replace the original data while training classifiers. This can provide better performance than traditional data-enlargement procedures due to the good handling of synthetic data. It shows that training classifiers on synthetic data is as effective as training them on pure data alone, and it also reveals that, for small training data sets, supplementing the dataset by first training GANs on the data may lead to a significant increase in classifier performance.
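
A minimal sketch of the supplementation scheme, with a hypothetical trained class-conditional generator `G`: each training batch mixes real MNIST images with GAN-generated ones. The mixing proportions and the conditioning interface are assumptions.

```python
import torch

def mixed_batch(real_x, real_y, G, n_synth=32, z_dim=100):
    """Combine a real batch with class-conditional GAN samples."""
    z = torch.randn(n_synth, z_dim)
    labels = torch.randint(0, 10, (n_synth,))
    synth_x = G(z, labels)                  # generator conditioned on digit class
    x = torch.cat([real_x, synth_x])
    y = torch.cat([real_y, labels])
    perm = torch.randperm(len(y))           # shuffle real and synthetic samples
    return x[perm], y[perm]
```

A classifier trained on such mixed batches sees an effectively enlarged data set, which is the mechanism behind the reported performance gains on small training sets.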


2018
Vol 29 (11)
pp. 1274-1280
Author(s):  
Assaf Zaritsky

The rapid growth in content and complexity of cell image data creates an opportunity for synergy between experimental and computational scientists. Sharing microscopy data enables computational scientists to develop algorithms and tools for data analysis, integration, and mining. These tools can be applied by experimentalists to promote hypothesis generation and discovery. We are now at the dawn of this revolution: infrastructure is being developed for data standardization, deposition, sharing, and analysis; some journals and funding agencies mandate data deposition; data journals publish high-content microscopy data sets; quantification becomes standard in scientific publications; new analytic tools are being developed and dispatched to the community; and huge data sets are being generated by individual labs and philanthropic initiatives. In this Perspective, I reflect on sharing and reusing cell image data and the opportunities that will come along with it.


Author(s):  
Brian Stucky
Laura Brenskelle
Robert Guralnick

Recent progress in using deep learning techniques to automate the analysis of complex image data is opening up exciting new avenues for research in biodiversity science. However, potential applications of machine learning methods in biodiversity research are often limited by the relative scarcity of data suitable for training machine learning models. Developing high-quality training data sets can be a surprisingly challenging task that can easily consume hundreds of person-hours. In this talk, we present the results of our recent work implementing and comparing several methods for generating annotated, biodiversity-oriented image data for training machine learning models, including collaborative expert scoring, local volunteer image annotators with on-site training, and distributed, remote image annotation via citizen science platforms. We discuss error rates, among-annotator variance, and the depth of coverage required to ensure highly reliable image annotations. We also discuss time considerations and the efficiency of the various methods. Finally, we present new software, called ImageAnt (currently under development), that supports efficient, highly flexible image annotation workflows. ImageAnt was created primarily in response to the challenges we encountered in our own efforts to generate image-based training data for machine learning models. It features a simple user interface and can be used to implement sophisticated, adaptive scripting of image annotation tasks.
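
As one concrete example of the reliability measures discussed above (the talk itself does not prescribe a specific metric), the sketch below computes Cohen's kappa between two annotators' labels for the same set of images.

```python
import numpy as np

def cohens_kappa(a, b):
    """Chance-corrected agreement between two annotators' label vectors."""
    labels = np.unique(np.concatenate([a, b]))
    po = np.mean(a == b)                                        # observed agreement
    pe = sum(np.mean(a == l) * np.mean(b == l) for l in labels)  # chance agreement
    return (po - pe) / (1.0 - pe)

ann1 = np.array([0, 1, 1, 2, 0, 1])   # annotator 1's labels
ann2 = np.array([0, 1, 2, 2, 0, 0])   # annotator 2's labels
print(cohens_kappa(ann1, ann2))
```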

