synthetic images
Recently Published Documents

Total documents: 265 (last five years: 128)
H-index: 20 (last five years: 7)

Author(s):  
Stephen M. Zimmerman ◽  
Carl G. Simon Jr. ◽  
Greta Babakhanova

The AbsorbanceQ app converts brightfield microscope images into absorbance images that can be analyzed and compared across different operators, microscopes, and time points. Because absorbance-based measurements are comparable across these parameters, they are useful when the aim is to manufacture biotherapeutics with consistent quality, and AbsorbanceQ will be of value to anyone who needs quantitative absorbance images of cells. The app has two modes: a single-image processing mode and a batch mode for multiple images. Instructions are given on the ‘App Information’ tab when the app is opened. The input and output images for the app have been defined, and synthetic images were used to validate that the output images are correct. This article describes how to use the app, its software specifications, how it works, and the methods used to develop the software, and provides links to a website where the app and test images are deployed.
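The underlying conversion is the Beer-Lambert absorbance relation, A = -log10(I/I0), where I is the intensity transmitted through the sample and I0 is a blank reference. The following is a minimal sketch of that calculation, not the app's actual code; the function name and the blank-reference workflow are assumptions for illustration:

```python
import numpy as np

def absorbance_image(sample, blank, eps=1e-6):
    """Convert a brightfield image to absorbance via Beer-Lambert:
    A = -log10(I / I0), with I the transmitted intensity through the
    sample and I0 the blank (no-sample) reference intensity."""
    sample = sample.astype(np.float64)
    blank = blank.astype(np.float64)
    # Clip to avoid division by zero and log of zero in dark pixels.
    transmittance = np.clip(sample / np.maximum(blank, eps), eps, None)
    return -np.log10(transmittance)

# Validation with synthetic data, in the spirit of the abstract:
# a disk transmitting 10% of the light should read absorbance ~1.0.
blank = np.full((64, 64), 200.0)
sample = blank.copy()
yy, xx = np.ogrid[:64, :64]
sample[(yy - 32) ** 2 + (xx - 32) ** 2 < 15 ** 2] *= 0.10
A = absorbance_image(sample, blank)
print(A.max())  # ~1.0
```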


2022 ◽  
Vol 63 (1) ◽  
pp. 3
Author(s):  
Reza Abbas Farishta ◽  
Charlene L. Yang ◽  
Reza Farivar

Author(s):  
Yawen Liu ◽  
Haijun Niu ◽  
Pengling Ren ◽  
Jialiang Ren ◽  
Xuan Wei ◽  
...  

Abstract Objective: The generation of quantification maps and weighted images in synthetic MRI techniques is based on complex fitting equations, which lengthens image generation times. The objective of this study is to evaluate the feasibility of a deep learning method for fast reconstruction of synthetic MRI. Approach: A total of 44 healthy subjects were recruited and randomly divided into a training set (30 subjects) and a testing set (14 subjects). A multiple-dynamic, multiple-echo (MDME) sequence was used to acquire synthetic MRI images. Quantification maps (T1, T2, and proton density (PD) maps) and weighted (T1W, T2W, and T2W FLAIR) images were created with MAGiC software and used as the ground truth in the deep learning (DL) model. An improved multichannel U-Net was trained to generate the quantification maps and weighted images from the raw synthetic MRI data (8 module images). Quantification maps were assessed with quantitative metrics; weighted images were assessed with both quantitative metrics and qualitative evaluation. Nonparametric Wilcoxon signed-rank tests were used for statistical comparison. Main results: The error between the generated quantification maps and the reference images was small. For weighted images, no significant difference in overall image quality or SNR was identified between DL and synthetic images. Notably, the DL images achieved improved contrast for T2W images, and fewer artifacts were present on DL images than on synthetic T2W FLAIR images. Significance: The DL algorithm provides a promising method for image generation in synthetic MRI techniques, in which every step of the calculation can be optimized for speed, thereby simplifying the synthetic MRI workflow.
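The abstract specifies only that an improved multichannel U-Net maps the 8 MDME module images to the quantification maps and weighted images. Below is a minimal PyTorch sketch of such a mapping, with a single encoder level and illustrative channel counts (8 in; 6 out for T1, T2, PD, T1W, T2W, T2W FLAIR); the authors' actual architecture is not described here:

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU, the standard U-Net building block.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True))

class TinyMultichannelUNet(nn.Module):
    """One-level U-Net: 8 input channels (MDME module images) to
    6 output channels (T1/T2/PD maps + T1W/T2W/T2W FLAIR images)."""
    def __init__(self, in_ch=8, out_ch=6):
        super().__init__()
        self.enc = conv_block(in_ch, 32)
        self.down = nn.MaxPool2d(2)
        self.mid = conv_block(32, 64)
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec = conv_block(64, 32)  # 64 = 32 skip + 32 upsampled
        self.head = nn.Conv2d(32, out_ch, 1)

    def forward(self, x):
        e = self.enc(x)
        m = self.mid(self.down(e))
        u = self.up(m)
        return self.head(self.dec(torch.cat([e, u], dim=1)))

net = TinyMultichannelUNet()
y = net(torch.randn(1, 8, 128, 128))
print(y.shape)  # torch.Size([1, 6, 128, 128])
```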


2021 ◽  
Author(s):  
Javad Manashti ◽  
Francois Duhaime ◽  
Matthew Toews ◽  
Pouyan Pirnia

The two objectives of this paper were to demonstrate the use of the discrete element method for generating synthetic images of spherical particle configurations, and to compare the performance of nine classic feature extraction methods for predicting particle size distributions (PSD) from these images. The discrete element code YADE was used to generate synthetic images of granular materials to build the dataset. Nine feature extraction methods were compared: Haralick features, Histograms of Oriented Gradients, Entropy, Local Binary Patterns, Local Configuration Pattern, Complete Local Binary Patterns (CLBP), the Fast Fourier Transform, Gabor filters, and Discrete Haar Wavelets. The extracted features were used as inputs to neural networks that predict the PSD. The results show that feature extraction methods can predict the percentage passing with a root-mean-square error (RMSE) as low as 1.7%. CLBP gave the best result across all particle sizes, with an RMSE of 3.8%. Lower RMSEs were obtained for the finest sieve (2.1%) than for the coarsest sieve (5.2%).
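As one illustrative instance of this pipeline (not the paper's setup; the sieve count, network size, and data below are placeholders), uniform Local Binary Pattern histograms can be extracted with scikit-image and regressed to percentage passing with a small neural network:

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.neural_network import MLPRegressor

def lbp_histogram(image, points=8, radius=1):
    """Uniform LBP histogram: a compact texture descriptor of the image."""
    lbp = local_binary_pattern(image, points, radius, method="uniform")
    # The 'uniform' mapping yields points + 2 distinct codes.
    hist, _ = np.histogram(lbp, bins=points + 2, range=(0, points + 2))
    return hist / hist.sum()

# Placeholder data: each sample is a grayscale particle image with a
# known percentage passing per sieve (6 hypothetical sieves).
rng = np.random.default_rng(0)
images = rng.integers(0, 256, size=(50, 128, 128)).astype(np.uint8)
psd_targets = rng.uniform(0, 100, size=(50, 6))

X = np.array([lbp_histogram(img) for img in images])
model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
model.fit(X, psd_targets)
print(model.predict(X[:1]))  # predicted % passing for each sieve
```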


Electronics ◽  
2021 ◽  
Vol 11 (1) ◽  
pp. 2
Author(s):  
Damiano Perri ◽  
Marco Simonetti ◽  
Osvaldo Gervasi

This paper presents a methodology for producing synthetic images for training neural networks to recognise shapes and objects. There are many scenarios in which it is difficult, expensive or even dangerous to produce a set of images that is satisfactory for training a neural network. 3D modelling software has now reached such a level of realism and ease of use that it is natural to explore this path and to assess the reliability of training a neural network on synthetic images. The results obtained in the two proposed use cases, the recognition of a pictorial style and the recognition of men at sea, support the validity of the approach, provided that the work is conducted scrupulously and rigorously, exploiting the full potential of the modelling software. The code produced, which automatically generates the transformations necessary for the data augmentation of each image and generates random environmental conditions in the Blender and Unity3D software, is available under the GPL licence on GitHub. These results lead us to affirm that the good practices presented in the article define a simple, reliable, economical and safe method for feeding the training phase of a neural network dedicated to the recognition of objects and features, applicable to various contexts.
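The released code itself is on GitHub under the GPL; the snippet below is only a hedged sketch of the kind of Blender (bpy) randomization the abstract describes, with assumed object names ("Sun", "Camera"), value ranges, and output paths that are not taken from the authors' code:

```python
# Illustrative Blender (bpy) script: randomize lighting and camera,
# then render one synthetic training image per iteration.
import random
import bpy

scene = bpy.context.scene
sun = bpy.data.objects["Sun"]        # assumed light object name
camera = bpy.data.objects["Camera"]  # assumed camera object name

for i in range(10):
    # Random environmental conditions: light strength and direction.
    sun.data.energy = random.uniform(0.5, 5.0)
    sun.rotation_euler = [random.uniform(0.0, 1.2) for _ in range(3)]
    # Random camera position around the subject.
    camera.location = (random.uniform(-2, 2),
                       random.uniform(-6, -4),
                       random.uniform(1, 3))
    scene.render.filepath = f"//synthetic_{i:04d}.png"
    bpy.ops.render.render(write_still=True)
```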


2021 ◽  
Vol 9 (12) ◽  
pp. 1344
Author(s):  
Franck Schoefs ◽  
Michael O’Byrne ◽  
Vikram Pakrashi ◽  
Bidisha Ghosh ◽  
Mestapha Oumouni ◽  
...  

Hard marine growth is an important process affecting the design and maintenance of floating offshore wind turbines. A key parameter of hard biofouling is roughness, since it considerably changes the level of drag forces, and assessing roughness from on-site inspection is required to improve the updating of hydrodynamic forces. Image processing is rapidly developing as a cost-effective and easy-to-implement tool for observing the evolution of biofouling and related hydrodynamic effects over time. Despite this popularity, the literature offers few robust features and methods of image processing for this task. There also remains a significant difference between synthetic images of hard biofouling and their idealized laboratory approximations in scaled wave-basin testing, on the one hand, and images observed at real sites, on the other. Consequently, a feature and imaging protocol is needed that links both applications, to cater to the lifetime performance demands of these structures against the hydrodynamic effects of marine growth. This paper proposes the fractal dimension as a robust feature and demonstrates it within a stereoscopic imaging protocol, in terms of lighting and distance to the subject. The approach is tested on synthetic images, laboratory tests, and real site conditions. Robustness is characterized through receiver operating characteristics, and the comparison provides a basis on which a common measure and protocol can be used consistently across a wide range of conditions. The work can be used at the design stage as well as for lifetime monitoring and decision-making for marine structures, especially offshore wind turbines.
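The fractal dimension of a binarized roughness image is commonly estimated by box counting: cover the structure with boxes of side s, count the occupied boxes N(s), and fit the slope of log N(s) against log(1/s). A minimal sketch of that standard estimator (not the paper's stereoscopic pipeline) follows:

```python
import numpy as np

def box_counting_dimension(mask, sizes=(2, 4, 8, 16, 32)):
    """Estimate the fractal dimension of a binary image by box counting:
    fit log N(s) against log(1/s), where N(s) is the number of s x s
    boxes containing at least one foreground pixel."""
    counts = []
    for s in sizes:
        h, w = mask.shape
        # Trim so the image tiles exactly into s x s boxes.
        trimmed = mask[:h - h % s, :w - w % s]
        boxes = trimmed.reshape(h // s, s, w // s, s).any(axis=(1, 3))
        counts.append(boxes.sum())
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope

# Sanity check: a filled square should give a dimension close to 2.
mask = np.ones((128, 128), dtype=bool)
print(round(box_counting_dimension(mask), 2))
```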


Sensors ◽  
2021 ◽  
Vol 21 (23) ◽  
pp. 7785
Author(s):  
Jun Mao ◽  
Change Zheng ◽  
Jiyan Yin ◽  
Ye Tian ◽  
Wenbin Cui

Training a deep learning-based classification model for early wildfire smoke images requires a large amount of rich data. However, due to the episodic nature of fire events, wildfire smoke image data are difficult to obtain, and most samples in public datasets lack diversity. To address these issues, this paper proposes a method that uses synthetic images to train a deep learning classification model for real wildfire smoke. First, we constructed a synthetic dataset by simulating a large amount of morphologically rich smoke in 3D modeling software and rendering the virtual smoke against many virtual wildland background images with rich environmental diversity. Second, to make better use of the synthetic data for training a wildfire smoke image classifier, we applied both pixel-level and feature-level domain adaptation: a CycleGAN-based image-translation method at the pixel level and, on top of this, a feature-level method combining ADDA with DeepCORAL to further reduce the domain shift between synthetic and real data. The proposed method was evaluated on a test set of real wildfire smoke and achieved an accuracy of 97.39%. The method is applicable to wildfire smoke classification from single RGB frames and would also help train image classification models when sufficient real data are unavailable.
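The DeepCORAL component aligns second-order statistics: its loss is the squared Frobenius distance between the covariance matrices of synthetic (source) and real (target) feature batches, scaled by 1/(4d^2). A minimal PyTorch sketch of that standard loss (the authors' exact implementation may differ):

```python
import torch

def coral_loss(source, target):
    """DeepCORAL loss: squared Frobenius norm of the difference between
    the feature covariances of the source (synthetic) and target (real)
    batches, scaled by 1 / (4 d^2) as in the original CORAL paper."""
    d = source.size(1)

    def covariance(x):
        x = x - x.mean(dim=0, keepdim=True)
        return x.t() @ x / (x.size(0) - 1)

    diff = covariance(source) - covariance(target)
    return (diff * diff).sum() / (4.0 * d * d)

# Usage: features of synthetic and real smoke images from a shared backbone.
synthetic_feats = torch.randn(32, 256)
real_feats = torch.randn(32, 256)
print(coral_loss(synthetic_feats, real_feats))
```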

