Blender as a tool for generating synthetic data

2020 ◽  
Vol 16 ◽  
pp. 227-232
Author(s):  
Rafał Sieczka ◽  
Maciej Pańczyk

Acquiring data for neural network training is an expensive and labour-intensive task, especially when such data is difficult to access. This article proposes the use of the 3D graphics software Blender as a tool to automatically generate synthetic image data, using price labels as an example. Using the fastai library, price label classifiers were trained on a set of synthetic data and compared with classifiers trained on a real data set. The comparison of the results showed that it is possible to use Blender to generate synthetic data. This allows for a significant acceleration of the data acquisition process and, consequently, of the learning process of neural networks.
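As an illustration of the randomization step such a pipeline relies on, the following pure-Python sketch samples per-image scene parameters; the parameter names and ranges are hypothetical, and inside Blender the same values would be written to `bpy` objects before invoking the renderer:

```python
import random

def sample_scene_params(seed=None):
    """Randomize pose, lighting, and camera for one synthetic render.

    Pure-Python stand-in for the randomization step of a Blender
    pipeline; all parameter names and ranges are hypothetical.
    """
    rng = random.Random(seed)
    return {
        "label_rotation_deg": rng.uniform(-15.0, 15.0),  # slight tilt of the price label
        "light_energy": rng.uniform(200.0, 1200.0),      # lamp strength
        "camera_distance": rng.uniform(0.3, 1.0),        # metres from the label
        "background_id": rng.randrange(10),              # index into a texture pool
    }

# One configuration per training image; rendering and automatic
# annotation would follow inside Blender.
dataset_configs = [sample_scene_params(seed=i) for i in range(1000)]
```

Seeding each sample makes the synthetic dataset reproducible, which simplifies comparisons against classifiers trained on real data.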

Author(s):  
Dmytro Kyrychuk ◽  
Andriy Segin

The paper presents the results of research on the expediency of training a neural network on images of different clarity and brightness using unevenly distributed lighting on a working area with statically positioned system elements. The use of transfer learning for neural networks to improve the accuracy of object recognition was justified. The object recognition ability of a convolutional neural network while scaling the object relative to the original was researched. The results of research on the influence of lighting on the quality of object recognition by a trained network, and on the influence of background choice for a working area on the quality of object-based feature selection, are presented. Based on the results obtained, recommendations were provided for the preparation of individual datasets to improve the quality of training and subsequent object recognition of convolutional neural networks through the elimination of unnecessary variables in images.


2014 ◽  
Vol 10 (S306) ◽  
pp. 279-287 ◽  
Author(s):  
Michael Hobson ◽  
Philip Graff ◽  
Farhan Feroz ◽  
Anthony Lasenby

Abstract Machine-learning methods may be used to perform many tasks required in the analysis of astronomical data, including: data description and interpretation, pattern recognition, prediction, classification, compression, inference and many more. An intuitive and well-established approach to machine learning is the use of artificial neural networks (NNs), which consist of a group of interconnected nodes, each of which processes information that it receives and then passes this product on to other nodes via weighted connections. In particular, I discuss the first public release of the generic neural network training algorithm, called SkyNet, and demonstrate its application to astronomical problems focusing on its use in the BAMBI package for accelerated Bayesian inference in cosmology, and the identification of gamma-ray bursters. The SkyNet and BAMBI packages, which are fully parallelised using MPI, are available at http://www.mrao.cam.ac.uk/software/.
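The node computation described above (a weighted sum of inputs pushed through an activation, with the product passed on via weighted connections) can be sketched as follows; the sigmoid choice and the tiny two-layer wiring are illustrative, not SkyNet's actual configuration:

```python
import math

def node_output(inputs, weights, bias):
    """One network node: weighted sum of its inputs pushed through a
    sigmoid activation; the result is what gets passed on to other
    nodes via weighted connections."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# Tiny forward pass: three inputs -> two hidden nodes -> one output.
# All weights here are arbitrary illustrative values.
hidden = [node_output([0.5, -1.0, 2.0], w, 0.1)
          for w in ([0.4, 0.3, -0.2], [-0.6, 0.1, 0.5])]
out = node_output(hidden, [1.2, -0.7], 0.0)
```

Training then amounts to adjusting the weights and biases so that outputs like `out` match the targets, which is the part SkyNet (and BAMBI on top of it) automates and parallelises.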


2017 ◽  
Vol 109 (1) ◽  
pp. 29-38 ◽  
Author(s):  
Valentin Deyringer ◽  
Alexander Fraser ◽  
Helmut Schmid ◽  
Tsuyoshi Okita

Abstract Neural Networks are prevalent in today's NLP research. Despite their success for different tasks, training time is relatively long. We use Hogwild! to counteract this phenomenon and show that it is a suitable method to speed up the training of Neural Networks of different architectures and complexity. For POS tagging and translation we report considerable training speedups, especially for the latter. We show that Hogwild! can be an important tool for training complex NLP architectures.
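The core Hogwild! idea, multiple workers applying unsynchronized updates to shared parameters, can be sketched on a toy linear model; the data, learning rate, and thread count below are illustrative, not the paper's setup:

```python
import random
import threading

# Shared parameter vector updated WITHOUT locks -- the Hogwild! idea:
# slightly stale, occasionally lost updates still converge in practice.
w = [0.0, 0.0]

# Toy regression data: y = 3x + 1 with x drawn uniformly from [-1, 1].
data = [(x, 3.0 * x + 1.0) for x in (random.uniform(-1, 1) for _ in range(400))]

def worker(rows, lr=0.05, epochs=20):
    """SGD over this worker's shard, writing straight into shared w."""
    for _ in range(epochs):
        for x, y in rows:
            err = (w[0] * x + w[1]) - y   # read possibly stale weights
            w[0] -= lr * err * x          # unsynchronized in-place update
            w[1] -= lr * err

# Four workers, each on an interleaved shard of the data.
threads = [threading.Thread(target=worker, args=(data[i::4],)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

After joining, `w` sits close to the true parameters `[3.0, 1.0]` despite the races; note that CPython threads illustrate the algorithm but real speedups require process- or GIL-free parallelism.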


2022 ◽  
pp. 202-226
Author(s):  
Leema N. ◽  
Khanna H. Nehemiah ◽  
Elgin Christo V. R. ◽  
Kannan A.

Artificial neural networks (ANN) are widely used for classification, and the training algorithm commonly used is the backpropagation (BP) algorithm. The major bottleneck in backpropagation neural network training is fixing appropriate values for the network parameters: initial weights, biases, activation function, number of hidden layers and number of neurons per hidden layer, number of training epochs, learning rate, minimum error, and momentum term for the classification task. The objective of this work is to investigate the performance of 12 different BP algorithms and the impact of variations in network parameter values on neural network training. The algorithms were evaluated with different training and testing samples taken from three benchmark clinical datasets, namely the Pima Indian Diabetes (PID), Hepatitis, and Wisconsin Breast Cancer (WBC) datasets obtained from the University of California Irvine (UCI) machine learning repository.
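A parameter sweep of the kind the study performs can be sketched as follows; the search space and the stubbed scoring function are hypothetical stand-ins for actual BP training runs on the clinical datasets:

```python
from itertools import product

# Hypothetical search space over a subset of the parameters the study
# varies; real ranges would be chosen per dataset.
space = {
    "hidden_neurons": [5, 10, 20],
    "learning_rate": [0.01, 0.1],
    "momentum": [0.0, 0.9],
    "epochs": [100, 500],
}

def train_and_score(cfg):
    """Stand-in for one BP training run returning test accuracy.

    A real implementation would train on a PID/Hepatitis/WBC split;
    this stub just rewards lr near 0.1 and longer training so the
    sweep itself is runnable."""
    return 1.0 / (1.0 + abs(cfg["learning_rate"] - 0.1) + 1.0 / cfg["epochs"])

# Full Cartesian product of the parameter grid, scored exhaustively.
grid = [dict(zip(space, values)) for values in product(*space.values())]
best = max(grid, key=train_and_score)
```

The combinatorial growth of `grid` (here 3 × 2 × 2 × 2 = 24 runs) is exactly why fixing these parameter values is the bottleneck the abstract describes.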


Geophysics ◽  
1998 ◽  
Vol 63 (6) ◽  
pp. 2035-2041 ◽  
Author(s):  
Zhengping Liu ◽  
Jiaqi Liu

We present a data‐driven method of joint inversion of well‐log and seismic data, based on the power of adaptive mapping of artificial neural networks (ANNs). We use the ANN technique to find and approximate the inversion operator guided by the data set consisting of well data and seismic recordings near the wells. Then we directly map seismic recordings to well parameters, trace by trace, to extrapolate the wide‐band profiles of these parameters using the approximation operator. Compared to traditional inversions, which are based on a few prior theoretical operators, our inversion is novel because (1) it inverts for multiple parameters and (2) it is nonlinear with a high degree of complexity. We first test our algorithm with synthetic data and analyze its sensitivity and robustness. We then invert real data to obtain two extrapolation profiles of sonic log (DT) and shale content (SH), the latter a unique parameter of the inversion and significant for the detailed evaluation of stratigraphic traps. The high‐frequency components of the two profiles are significantly richer than those of the original seismic section.
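The calibrate-near-the-wells-then-extrapolate workflow can be sketched with a linear operator standing in for the paper's nonlinear ANN mapping; the trace amplitudes and sonic-log values below are invented for illustration:

```python
def fit_operator(traces, logs):
    """Least-squares fit of log ~ a * trace + b from pairs observed
    near a well -- a linear stand-in for the ANN's learned, far more
    complex inversion operator."""
    n = len(traces)
    sx, sy = sum(traces), sum(logs)
    sxx = sum(x * x for x in traces)
    sxy = sum(x * y for x, y in zip(traces, logs))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return lambda x: a * x + b

# Calibration pairs at the well (hypothetical values), then trace-by-
# trace extrapolation along the seismic section.
well_traces = [0.1, 0.4, 0.9, 1.3]           # seismic amplitudes
well_dt = [110.0, 140.0, 190.0, 230.0]       # sonic-log (DT) values
invert = fit_operator(well_traces, well_dt)
profile = [invert(x) for x in [0.2, 0.7, 1.1]]
```

The paper's advantage over this sketch is precisely that the ANN can capture a nonlinear, multi-parameter operator (DT and SH jointly) rather than a single straight-line fit.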


2021 ◽  
Vol 7 (8) ◽  
pp. 146
Author(s):  
Joshua Ganter ◽  
Simon Löffler ◽  
Ron Metzger ◽  
Katharina Ußling ◽  
Christoph Müller

Collecting real-world data for the training of neural networks is enormously time-consuming and expensive. As such, the concept of virtualizing the domain and creating synthetic data has been analyzed in many instances. This virtualization offers many possibilities of changing the domain, and with that, enabling the relatively fast creation of data. It also offers the chance to enhance necessary augmentations with additional semantic information when compared with conventional augmentation methods. This raises the question of whether such semantic changes, which can be seen as augmentations of the virtual domain, contribute to better results for neural networks, when trained with data augmented this way. In this paper, a virtual dataset is presented, including semantic augmentations and automatically generated annotations, as well as a comparison between semantic and conventional augmentation for image data. It is determined that the results differ only marginally for neural network models trained with the two augmentation approaches.
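The distinction between the two augmentation styles can be sketched as follows; the toy renderer and its `light` parameter are hypothetical, standing in for a full virtual scene:

```python
def conventional_augment(image):
    """Pixel-level change: horizontal flip of an already-rendered image."""
    return [row[::-1] for row in image]

def semantic_augment(scene, renderer):
    """Domain-level change: alter the virtual scene itself (here a
    hypothetical light level) and render a genuinely new image."""
    changed = dict(scene, light=scene["light"] * 1.5)
    return renderer(changed)

def renderer(scene):
    """Toy renderer: brightness scales every pixel of a fixed pattern."""
    base = [[1, 2, 3], [4, 5, 6]]
    return [[p * scene["light"] for p in row] for row in base]

orig = renderer({"light": 1.0})
aug_a = conventional_augment(orig)                   # same content, mirrored
aug_b = semantic_augment({"light": 1.0}, renderer)   # new content, re-rendered
```

The paper's finding is that, for the models tested, training on images like `aug_b` brings only marginal gains over training on images like `aug_a`.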


Author(s):  
Sheng-Uei Guan ◽  
Ji Hua Ang ◽  
Kay Chen Tan ◽  
Abdullah Al Mamun

This chapter proposes a novel method of incremental interference-free neural network training (IIFNNT) for medical datasets, which takes into consideration the interference each attribute has on the others. A specially designed network is used to determine whether two attributes interfere with each other, after which the attributes are partitioned using partitioning algorithms. These algorithms ensure that attributes beneficial to each other are trained in the same batch, thus sharing the same subnetwork, while interfering attributes are separated to reduce interference. Several incremental neural networks are available in the literature (Guan & Li, 2001; Su, Guan & Yeo, 2001). The IIFNNT architecture employs the incremental algorithms ILIA1 and ILIA2 (incremental learning with respect to new incoming attributes) (Guan & Li, 2001).
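The partitioning step can be sketched as a greedy grouping over a pairwise interference test; the parity-based relation below is a hypothetical stand-in for the chapter's specially designed interference-detection network:

```python
def partition_attributes(n, interferes):
    """Greedy grouping: place each attribute in the first batch whose
    members it does not interfere with, opening a new batch otherwise.

    `interferes(i, j)` stands in for the specially designed network
    that tests two attributes for mutual interference."""
    batches = []
    for attr in range(n):
        for batch in batches:
            if not any(interferes(attr, other) for other in batch):
                batch.append(attr)   # beneficial attributes share a batch
                break
        else:
            batches.append([attr])   # interfering attribute gets its own
    return batches

# Hypothetical relation: attributes of different parity interfere.
batches = partition_attributes(6, lambda i, j: (i + j) % 2 == 1)
```

Each resulting batch would then be trained on its own subnetwork and integrated incrementally, in the spirit of ILIA1/ILIA2.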


2012 ◽  
Vol 500 ◽  
pp. 198-203
Author(s):  
Chang Lin Xiao ◽  
Yan Chen ◽  
Lina Liu ◽  
Ling Tong ◽  
Ming Quan Jia

Genetic algorithms can further optimize neural networks, and this optimized approach has been used in many fields with good results, but it has not yet been applied to parameter inversion. This paper uses backscattering coefficients from ASAR, with data computed by the AIEM model, as neural network training data, and retrieves soil moisture through a genetic algorithm neural network. Finally, a comparison with practical tests shows the validity and superiority of the genetic algorithm neural network.
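A genetic algorithm optimizing network weights can be sketched as follows; the one-neuron model, the fitness function, and all GA settings are illustrative, not the paper's configuration:

```python
import random

def fitness(weights, data):
    """Negative squared error of a one-neuron linear model (GA sketches
    maximize fitness, so lower error means higher fitness)."""
    return -sum((weights[0] * x + weights[1] - y) ** 2 for x, y in data)

def evolve(data, pop_size=30, gens=60, rng=random.Random(0)):
    """Minimal GA over network weights: truncation selection, blend
    crossover, Gaussian mutation, with the best half kept each round."""
    pop = [[rng.uniform(-2, 2), rng.uniform(-2, 2)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda w: fitness(w, data), reverse=True)
        parents = pop[: pop_size // 2]          # elitism: best survive intact
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            children.append([(ai + bi) / 2 + rng.gauss(0, 0.1)
                             for ai, bi in zip(a, b)])
        pop = parents + children
    return max(pop, key=lambda w: fitness(w, data))

# Toy inversion target: recover weights ~ [1.5, -0.5] from samples.
data = [(x / 10, 1.5 * (x / 10) - 0.5) for x in range(-10, 11)]
best = evolve(data)
```

In the retrieval setting, the individuals would encode the weights of the soil-moisture network and fitness would measure the fit to the AIEM-simulated training data.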


2012 ◽  
Vol 263-266 ◽  
pp. 2102-2108 ◽  
Author(s):  
Yana Mazwin Mohmad Hassim ◽  
Rozaida Ghazali

Artificial Neural Networks have emerged as an important tool for classification and have been widely used to classify non-linearly separable patterns. The most popular artificial neural network model is the Multilayer Perceptron (MLP), which is able to perform classification tasks with significant success. However, the complexity of the MLP structure, together with problems such as local minima trapping, overfitting, and weight interference, makes neural network training difficult. An easy way to avoid these problems is to remove the hidden layers. This paper presents the ability of the Functional Link Neural Network (FLNN) to overcome the structural complexity of the MLP using its single-layer architecture, and proposes Artificial Bee Colony (ABC) optimization for training the FLNN. The proposed technique is expected to provide a better learning scheme for a classifier and thereby more accurate classification results.
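The FLNN's single-layer trick, expanding the inputs with functional (here product) terms so that no hidden layer is needed, can be sketched on XOR; the hand-picked weights stand in for what the proposed ABC optimization would learn:

```python
from itertools import combinations

def functional_link_expand(x):
    """Functional expansion: original inputs plus pairwise products,
    lifting the pattern into a space where a single layer can separate
    it (product terms are one common choice of expansion)."""
    return list(x) + [a * b for a, b in combinations(x, 2)]

def flnn_output(x, weights, bias):
    """Single layer, no hidden units: a weighted sum over the expanded
    features with a hard threshold. These weights are what a trainer
    such as the proposed ABC optimization would tune."""
    s = sum(w * p for w, p in zip(weights, functional_link_expand(x))) + bias
    return 1 if s > 0 else 0

# XOR is not linearly separable in the raw inputs, but in the expanded
# space [x1, x2, x1*x2] the weights [1, 1, -2] separate it.
preds = [flnn_output([a, b], [1, 1, -2], -0.5)
         for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]]
```

The expansion grows the input dimension instead of the depth, which is exactly how the FLNN sidesteps the MLP's hidden-layer training problems.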

