Synthetic Image Rendering Solves Annotation Problem in Deep Learning Nanoparticle Segmentation

Small Methods ◽  
2021 ◽  
Vol 5 (7) ◽  
pp. 2100223
Author(s):  
Leonid Mill ◽  
David Wolff ◽  
Nele Gerrits ◽  
Patrick Philipp ◽  
Lasse Kling ◽  
...  

2020 ◽  
Vol 6 (1) ◽  
Author(s):  
Malte Seemann ◽  
Lennart Bargsten ◽  
Alexander Schlaefer

Abstract Deep learning methods produce promising results when applied to a wide range of medical imaging tasks, including segmentation of the artery lumen in computed tomography angiography (CTA) data. However, to perform well, neural networks have to be trained on large amounts of high-quality annotated data. In the realm of medical imaging, annotations are not only scarce but also often not entirely reliable. To tackle both challenges, we developed a two-step approach for generating realistic synthetic CTA data for the purpose of data augmentation. In the first step, moderately realistic images are generated in a purely numerical fashion. In the second step, these images are improved by applying neural domain adaptation. We evaluated the impact of synthetic data on lumen segmentation via convolutional neural networks (CNNs) by comparing the resulting performances. Improvements of up to 5% in terms of the Dice coefficient and 20% for the Hausdorff distance represent a proof of concept that the proposed augmentation procedure can be used to enhance deep learning-based segmentation of the artery lumen in CTA images.
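The two evaluation metrics named in the abstract, the Dice coefficient and the Hausdorff distance, can be computed for binary lumen masks as in the minimal sketch below. The function names and the use of NumPy/SciPy are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch: segmentation metrics for binary masks (illustrative, not the paper's code).
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for two binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def hausdorff_distance(pred: np.ndarray, target: np.ndarray) -> float:
    """Symmetric Hausdorff distance between the foreground pixel coordinates."""
    pred_pts, target_pts = np.argwhere(pred), np.argwhere(target)
    return max(directed_hausdorff(pred_pts, target_pts)[0],
               directed_hausdorff(target_pts, pred_pts)[0])

# Usage (hypothetical arrays): compare a CNN prediction with its annotation
# dice = dice_coefficient(cnn_mask, annotation_mask)
# hd   = hausdorff_distance(cnn_mask, annotation_mask)
```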


Author(s):  
Sebastian Meister ◽  
Nantwin Möller ◽  
Jan Stüve ◽  
Roger M. Groves

Abstract In the aerospace industry, the Automated Fiber Placement process is an established method for producing composite parts. Nowadays the visual inspection required after this process typically takes up to 50% of the total manufacturing time, and the inspection quality strongly depends on the inspector. Deep learning-based classification of manufacturing defects is one possibility for improving process efficiency and accuracy. However, these techniques require several hundred or thousand training samples. Acquiring this amount of data is difficult and time consuming in a real-world manufacturing process. Thus, an approach for augmenting a smaller number of defect images for the training of a neural network classifier is presented. Five traditional methods and eight deep learning approaches are theoretically assessed according to the literature. The selected conditional Deep Convolutional Generative Adversarial Network and Geometrical Transformation techniques are investigated in detail with regard to the diversity and realism of the synthetic images. Between 22 and 166 laser line scan sensor images per defect class from six common fiber placement inspection cases are utilised for the tests. The GAN-Train GAN-Test method was applied for validation. The studies demonstrate that a conditional Deep Convolutional Generative Adversarial Network combined with a preceding Geometrical Transformation is well suited to generating a large, realistic data set from fewer than 50 actual input images. The presented network architecture and the associated training weights can serve as a basis for applying the demonstrated approach to other fibre layup inspection images.
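As an illustration of the conditional DCGAN component named in the abstract, the following is a minimal PyTorch sketch of a class-conditional generator. The 64x64 single-channel output, the layer widths, the latent dimension, and the label embedding are assumptions chosen for illustration; they do not reproduce the authors' architecture or training weights.

```python
# Hedged sketch of a conditional DCGAN generator (illustrative architecture).
import torch
import torch.nn as nn

NUM_CLASSES = 6    # six inspection cases mentioned in the abstract
LATENT_DIM = 100   # assumed latent vector size

class ConditionalGenerator(nn.Module):
    def __init__(self, latent_dim=LATENT_DIM, num_classes=NUM_CLASSES):
        super().__init__()
        self.label_emb = nn.Embedding(num_classes, num_classes)
        self.net = nn.Sequential(
            # project (z, label) to a 4x4 feature map, then upsample to 64x64
            nn.ConvTranspose2d(latent_dim + num_classes, 256, 4, 1, 0, bias=False),
            nn.BatchNorm2d(256), nn.ReLU(True),
            nn.ConvTranspose2d(256, 128, 4, 2, 1, bias=False),
            nn.BatchNorm2d(128), nn.ReLU(True),
            nn.ConvTranspose2d(128, 64, 4, 2, 1, bias=False),
            nn.BatchNorm2d(64), nn.ReLU(True),
            nn.ConvTranspose2d(64, 32, 4, 2, 1, bias=False),
            nn.BatchNorm2d(32), nn.ReLU(True),
            nn.ConvTranspose2d(32, 1, 4, 2, 1, bias=False),
            nn.Tanh(),  # single-channel synthetic scan image in [-1, 1]
        )

    def forward(self, z, labels):
        cond = self.label_emb(labels)                       # (B, num_classes)
        x = torch.cat([z, cond], dim=1)[:, :, None, None]   # (B, latent+classes, 1, 1)
        return self.net(x)

# Usage: synthesize a batch of eight images for (hypothetical) defect class 3
# g = ConditionalGenerator()
# imgs = g(torch.randn(8, LATENT_DIM), torch.full((8,), 3, dtype=torch.long))
```

In the paper's pipeline, geometrically transformed (e.g. rotated or flipped) real defect images would feed the GAN training; the generator above only shows the conditional synthesis side.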


2019 ◽  
Vol 13 (7) ◽  
pp. 1097-1105 ◽  
Author(s):  
Ye Wang ◽  
Weiwen Deng ◽  
Zhenyi Liu ◽  
Jinsong Wang

2021 ◽  
Vol 2089 (1) ◽  
pp. 012012
Author(s):  
K Nitalaksheswara Rao ◽  
P Jayasree ◽  
Ch.V.Murali Krishna ◽  
K Sai Prasanth ◽  
Ch Satyananda Reddy

Abstract Advancement in deep learning requires very large amounts of data for training, where protection of individual data plays a key role in data privacy and publication. Recent developments in deep learning pose a serious challenge to traditionally used approaches to image anonymization, such as the model inversion attack, where an adversary repeatedly queries the model in order to reconstruct the original image from the anonymized image. To better protect anonymized images, an approach is presented here to convert the input (raw) image into a new synthetic image by applying optimized noise to the latent space representation (LSR) of the original image. The synthetic image is anonymized by adding well-designed noise calculated over the gradient during the learning process, where the resultant image is both realistic and immune to the model inversion attack. More precisely, we extend the approach proposed by T. Kim and J. Yang, 2019 by using a Deep Convolutional Generative Adversarial Network (DCGAN) in order to make the approach more efficient. Our aim is to improve the efficiency of the model by changing the loss function to achieve optimal privacy in less time and computation. Finally, the proposed approach is demonstrated using a benchmark dataset. The experimental study shows that the proposed method can efficiently convert the input image into another synthetic image of high quality that is also immune to the model inversion attack.
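The core idea, perturbing the latent space representation and decoding a synthetic replacement image, can be sketched as follows. This is a simplified illustration: the encoder and generator modules and the noise scale epsilon are placeholders, and plain Gaussian noise stands in for the gradient-based, optimized noise described in the abstract.

```python
# Hedged sketch of latent-space anonymization (simplified; not the authors' method).
import torch

def anonymize(image: torch.Tensor, encoder, generator, epsilon: float = 0.1) -> torch.Tensor:
    """Return a synthetic image decoded from a perturbed latent code."""
    with torch.no_grad():
        z = encoder(image)                           # latent space representation (LSR)
        z_noisy = z + epsilon * torch.randn_like(z)  # Gaussian noise as a stand-in for optimized noise
        return generator(z_noisy)                    # realistic synthetic replacement image
```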


2022 ◽  
Vol 14 (2) ◽  
pp. 246
Author(s):  
Noel Ivan Ulloa ◽  
Sang-Ho Yun ◽  
Shou-Hao Chiang ◽  
Ryoichi Furuta

Synthetic aperture radar (SAR) imagery has been widely applied to flood mapping based on change detection approaches. However, errors in the mapping result are expected, since not all land-cover changes are flood-induced, and SAR data are sensitive to changes such as crop growth or harvest over agricultural lands, clearance of forested areas, and/or modifications of the urban landscape. This study therefore incorporated historical SAR images to boost the detection of flood-induced changes during extreme weather events, using the Long Short-Term Memory (LSTM) method. Additionally, to incorporate spatial signatures into the change detection, we applied a deep learning-based spatiotemporal simulation framework, Convolutional Long Short-Term Memory (ConvLSTM), to simulate a synthetic image from a Sentinel-1 intensity time series. This synthetic image is prepared in advance of flood events and can then be used to detect flooded areas via change detection once the post-event image is available. In practice, significant divergence between the synthetic image and the post-image is expected over inundated zones, which can be mapped by applying thresholds to the Delta image (synthetic image minus post-image). We trained and tested our model on three events from Australia, Brazil, and Mozambique. The generated Flood Proxy Maps were compared against reference data derived from Sentinel-2 and Planet Labs optical data. To corroborate the effectiveness of the proposed methods, we also generated Delta products for two baseline models (closest post-image minus pre-image and historical mean minus post-image) and two LSTM architectures: normal LSTM and ConvLSTM. Results show that thresholding of the ConvLSTM Delta yielded the highest Cohen’s Kappa coefficients in all study cases: 0.92 for Australia, 0.78 for Mozambique, and 0.68 for Brazil. The lower Kappa values obtained in the Mozambique case may be attributable to the topographic effect on SAR imagery. These results confirm the benefits, in terms of classification accuracy, that convolutional operations provide in time series analysis of satellite data by employing spatially correlated information in a deep learning framework.
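The thresholding step on the Delta image can be sketched in a few lines. The sign convention (flooded open water lowers backscatter, so the synthetic-minus-post difference is large and positive) and the threshold value are illustrative assumptions; the study derives its own thresholds.

```python
# Hedged sketch of the Delta-image thresholding step (illustrative threshold).
import numpy as np

def flood_proxy_map(synthetic: np.ndarray, post: np.ndarray, threshold: float = 3.0) -> np.ndarray:
    """Flag pixels where backscatter drops sharply relative to the simulated image."""
    delta = synthetic - post   # Delta image = synthetic image minus post-image
    return delta > threshold   # boolean flood proxy mask
```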

