Synthetic Image Generation
Recently Published Documents


TOTAL DOCUMENTS: 48 (last five years: 14)
H-INDEX: 8 (last five years: 2)

2021 · Vol 7 (1) · pp. 6
Author(s): Daniel I. Morís, Joaquim de Moura, Jorge Novo, Marcos Ortega

The global COVID-19 pandemic underscores the need for fast and reliable methods for early detection and for visualizing the evolution of the disease in each patient, which can be assessed with chest X-ray imaging. Moreover, to reduce the risk of cross-contamination, radiologists are asked to prioritize portable chest X-ray devices, which provide lower quality and a lower level of detail than fixed machinery. In this context, computer-aided diagnosis systems are very useful, and in recent years they have been widely developed for medical imaging using deep learning strategies. However, there is a lack of sufficiently representative datasets of the COVID-19 affectation, which are critical for the supervised training of deep models. In this work, we propose a fully automatic method to artificially increase the size of an original portable chest X-ray imaging dataset specifically designed for COVID-19 diagnosis; the method can be trained in an unsupervised manner and without requiring paired data. The results demonstrate that the method performs a reliable screening despite the problems associated with images from portable devices, achieving an overall accuracy of 92.50%.
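
The abstract describes unsupervised augmentation without paired data, which matches a cycle-consistent (CycleGAN-style) image-to-image translation setup. The following minimal PyTorch sketch of one such training step is an illustration only, not the authors' actual method; all networks and tensors are toy placeholders:

```python
# CycleGAN-style training step for unpaired X-ray augmentation (sketch).
import torch
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU())

class TinyGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(conv_block(1, 16), conv_block(16, 16),
                                 nn.Conv2d(16, 1, 3, padding=1), nn.Tanh())
    def forward(self, x):
        return self.net(x)

class TinyDiscriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(conv_block(1, 16), nn.Conv2d(16, 1, 3, padding=1))
    def forward(self, x):
        return self.net(x)

G_ab, G_ba = TinyGenerator(), TinyGenerator()   # normal -> COVID, COVID -> normal
D_a, D_b = TinyDiscriminator(), TinyDiscriminator()
adv, l1 = nn.MSELoss(), nn.L1Loss()
opt = torch.optim.Adam(list(G_ab.parameters()) + list(G_ba.parameters()), lr=2e-4)

real_a = torch.randn(4, 1, 64, 64)  # stand-ins for unpaired image batches
real_b = torch.randn(4, 1, 64, 64)

fake_b, fake_a = G_ab(real_a), G_ba(real_b)     # translate between domains
pred_b, pred_a = D_b(fake_b), D_a(fake_a)
# Adversarial terms: fool the discriminators of the target domains.
loss_gan = adv(pred_b, torch.ones_like(pred_b)) + adv(pred_a, torch.ones_like(pred_a))
# Cycle consistency: translating there and back should recover the input,
# which is what removes the need for paired data.
loss_cyc = l1(G_ba(fake_b), real_a) + l1(G_ab(fake_a), real_b)
loss = loss_gan + 10.0 * loss_cyc
opt.zero_grad()
loss.backward()
opt.step()
```

The cycle term is the key design choice here: it lets the translated images be supervised by the originals themselves, so no pixel-aligned pairs are ever required.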


Sensors · 2021 · Vol 21 (18) · pp. 6046
Author(s): Paweł Zdziebko, Krzysztof Holak

Computer vision is a frequently used approach for static and dynamic measurements of various mechanical structures. Conducting a large number of physical experiments, however, is time-consuming and may require significant financial and human resources. Instead, the authors propose a simulation approach that generates vision data synthetically. Synthetic images of mechanical structures subjected to loads are produced as follows: the finite element method is used to compute deformations of the studied structure, and the Blender graphics program then renders images of that structure. The proposed approach thus yields synthetic images that reliably reflect static and dynamic experiments. This paper presents the results of applying the approach to a complex-shaped structure for which experimental validation was carried out. A second example covers the 3D reconstruction of the examined structure in a multicamera system, and results for a structure with damage (a cantilever beam) are also presented. The obtained results indicate that the proposed approach reliably imitates images captured during real experiments. The method can also serve as a tool supporting the configuration of a vision system before final experimental research is conducted.
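
As a rough illustration of the FEM-to-Blender hand-off described above, the sketch below applies externally computed nodal displacements to a Blender mesh and renders one image per load step. The object name "beam" and the file fem_displacements.npy are hypothetical, and the script must run inside Blender's bundled Python interpreter, where the bpy module is available:

```python
# Apply FEM displacements to a Blender mesh and render each load step (sketch).
import bpy
import numpy as np
from mathutils import Vector

obj = bpy.data.objects["beam"]           # assumed mesh of the studied structure
rest = [v.co.copy() for v in obj.data.vertices]   # undeformed vertex positions

# One displacement vector per vertex per load step, shape (steps, n_vertices, 3).
# Vertex ordering is assumed to match the FEM node ordering.
disp = np.load("fem_displacements.npy")

for step, u in enumerate(disp):
    for v, r, du in zip(obj.data.vertices, rest, u):
        v.co = r + Vector(du.tolist())   # deform the mesh to the FEM solution
    bpy.context.scene.render.filepath = f"//render_{step:03d}.png"
    bpy.ops.render.render(write_still=True)   # one synthetic "experiment" frame
```

Because the camera, lighting, and lens model are all explicit in the Blender scene, the same script can re-render the deformed structure under many candidate vision-system configurations before any physical setup is built.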


Sensors · 2021 · Vol 21 (15) · pp. 5163
Author(s): Yun-Hsuan Su, Wenfan Jiang, Digesh Chitrakar, Kevin Huang, Haonan Peng, ...

Accurate semantic image segmentation from medical imaging can enable intelligent vision-based assistance in robot-assisted minimally invasive surgery. The human body and surgical procedures are highly dynamic, and while machine vision presents a promising approach, sufficiently large training image sets for robust performance are either costly or unavailable. This work examines three novel generative adversarial network (GAN) methods for producing usable synthetic tool images using only surgical background images and a few real tool images. The best of the three generates realistic tool textures while preserving local background content by incorporating both a style-preservation term and a content-loss term into the proposed multi-level loss function. The approach is evaluated quantitatively, and the results suggest that the synthetically generated training tool images enhance UNet tool-segmentation performance. Specifically, on a random set of 100 cadaver and live endoscopic images from the University of Washington Sinus Dataset, a UNet trained with images synthesized by the presented method improved the mean Dice coefficient by 35.7% and the Intersection over Union score by 30.6% over training with purely real images. These results are promising for using more widely available, routine screening endoscopy to preoperatively generate synthetic training tool images for intraoperative UNet tool segmentation.
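
The pairing of a style-preservation term with a content term can be illustrated with a classic Gram-matrix style loss plus a pixel-space content loss. This is a generic sketch, not the authors' multi-level loss; the VGG layer cut-off and the weighting are assumptions:

```python
# Gram-matrix style loss + pixel content loss (illustrative sketch).
import torch
import torch.nn.functional as F
import torchvision.models as models

vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features[:16].eval()
for p in vgg.parameters():
    p.requires_grad_(False)          # feature extractor is frozen

def gram(feat):
    # Second-order feature statistics, which capture texture/"style".
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def style_content_loss(generated, style_ref, content_ref, style_w=1e3):
    # Style: match the texture statistics of a real tool image.
    style = F.mse_loss(gram(vgg(generated)), gram(vgg(style_ref)))
    # Content: keep the local surgical background intact in pixel space.
    content = F.l1_loss(generated, content_ref)
    return style_w * style + content

gen = torch.rand(1, 3, 224, 224, requires_grad=True)
loss = style_content_loss(gen, torch.rand(1, 3, 224, 224),
                          torch.rand(1, 3, 224, 224))
loss.backward()                      # gradients flow into the generated image
```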


Author(s): Ashley Spindler, James E Geach, Michael J Smith

Abstract: We present AstroVaDEr, a variational autoencoder designed to perform unsupervised clustering and synthetic image generation using astronomical imaging catalogues. The model is a convolutional neural network that learns to embed images into a low-dimensional latent space while simultaneously optimising a Gaussian Mixture Model (GMM) on the embedded vectors to cluster the training data. By utilising variational inference, we are able to use the learned GMM as a statistical prior on the latent space to facilitate random sampling and generation of synthetic images. We demonstrate AstroVaDEr's capabilities by training it on gray-scaled gri images from the Sloan Digital Sky Survey, using a sample of galaxies classified by Galaxy Zoo 2. The resulting unsupervised clustering model separates galaxies based on learned morphological features such as axis ratio, surface brightness profile, orientation and the presence of companions. We use the learned mixture model to generate synthetic images of galaxies based on the morphological profiles of the Gaussian components. AstroVaDEr succeeds in producing a morphological classification scheme from unlabelled data, but unexpectedly places high importance on the presence of companion objects, demonstrating the importance of human interpretation. The network is scalable and flexible, allowing larger datasets or different kinds of imaging data to be classified. We also demonstrate the generative properties of the model, which allow realistic synthetic images of galaxies to be sampled from the learned classification scheme. These can be used to create synthetic image catalogues or to perform image-processing tasks such as deblending.
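
The generative step the abstract describes, drawing a latent vector from one Gaussian component of the learned GMM prior and decoding it, can be sketched as follows. The decoder and the GMM parameters below are random placeholders standing in for AstroVaDEr's trained model:

```python
# Sampling synthetic galaxy images from a GMM latent prior (sketch).
import torch
import torch.nn as nn

latent_dim, n_components, img_side = 32, 10, 64

decoder = nn.Sequential(                       # placeholder decoder network
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, img_side * img_side), nn.Sigmoid())

# GMM prior over the latent space: mixture weights, means, diagonal stds.
pi = torch.softmax(torch.randn(n_components), dim=0)
mu = torch.randn(n_components, latent_dim)
sigma = torch.rand(n_components, latent_dim) + 0.1

def sample_galaxy(component=None):
    # Pick a mixture component (a learned morphological cluster), or sample one.
    k = component if component is not None else torch.multinomial(pi, 1).item()
    z = mu[k] + sigma[k] * torch.randn(latent_dim)   # z ~ N(mu_k, diag(sigma_k^2))
    return decoder(z).view(img_side, img_side)       # decoded synthetic image

img = sample_galaxy(component=3)   # draw from one morphological cluster
```

Fixing the component index is what ties each synthetic image to a specific morphological profile, which is how a whole catalogue of class-conditioned synthetic galaxies could be assembled.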


2020 · Vol 2020 (8) · pp. 86-1-86-7
Author(s): Ayush Soni, Alexander Loui, Scott Brown, Carl Salvaggio

In this paper, we demonstrate the use of a conditional generative adversarial network (cGAN) framework for producing high-fidelity, multispectral aerial imagery from low-fidelity imagery of the same kind. The motivation is that producing low-fidelity images is easier, faster, and often less costly than producing high-fidelity images with the available techniques, such as physics-driven synthetic image generation models. Once the cGAN is trained and tuned in a supervised manner on a dataset of paired low- and high-quality aerial images, it can be used to enhance new, lower-quality baseline images of a similar type into more realistic, high-fidelity multispectral image data. This approach can potentially save significant time and effort compared with traditional approaches to producing multispectral images.
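
The paired, supervised setup described above is in the spirit of pix2pix-style conditional GAN training. A minimal sketch of the generator update under that assumption, with toy networks and an assumed number of spectral bands, might look like this:

```python
# Paired cGAN generator update, pix2pix-style (illustrative sketch).
import torch
import torch.nn as nn

bands = 8                                    # assumed number of spectral bands
G = nn.Sequential(nn.Conv2d(bands, 32, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(32, bands, 3, padding=1))
D = nn.Sequential(nn.Conv2d(2 * bands, 32, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(32, 1, 3, padding=1))   # conditioned on the input

bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)

low = torch.randn(2, bands, 64, 64)          # low-fidelity input batch
high = torch.randn(2, bands, 64, 64)         # paired high-fidelity target

fake = G(low)
d_fake = D(torch.cat([low, fake], dim=1))    # D judges the (input, output) pair
# Generator loss: fool D, plus an L1 term tying the output to the paired target.
loss_g = bce(d_fake, torch.ones_like(d_fake)) + 100.0 * l1(fake, high)
opt_g.zero_grad()
loss_g.backward()
opt_g.step()
```

Conditioning the discriminator on the low-fidelity input is what makes the objective "paired": it penalizes outputs that look realistic but do not correspond to the given baseline image.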

