Synthetic image data augmentation for fibre layup inspection processes: Techniques to enhance the data set

Author(s):  
Sebastian Meister ◽  
Nantwin Möller ◽  
Jan Stüve ◽  
Roger M. Groves

Abstract In the aerospace industry, the Automated Fiber Placement process is an established method for producing composite parts. Nowadays the required visual inspection subsequent to this process typically takes up to 50% of the total manufacturing time, and the inspection quality strongly depends on the inspector. Deep-learning-based classification of manufacturing defects is one way to improve process efficiency and accuracy. However, these techniques require several hundred to several thousand training samples, and acquiring this amount of data is difficult and time consuming in a real-world manufacturing process. Thus, an approach for augmenting a smaller number of defect images for the training of a neural network classifier is presented. Five traditional methods and eight deep learning approaches are assessed theoretically on the basis of the literature. The selected conditional Deep Convolutional Generative Adversarial Network (DCGAN) and Geometrical Transformation techniques are investigated in detail with regard to the diversity and realism of the synthetic images. Between 22 and 166 laser line scan sensor images per defect class from six common fiber placement inspection cases are used for the tests. The GAN-Train GAN-Test method was applied for validation. The studies demonstrate that a conditional Deep Convolutional Generative Adversarial Network combined with a prior Geometrical Transformation is well suited to generating a large, realistic data set from fewer than 50 actual input images. The presented network architecture and the associated training weights can serve as a basis for applying the demonstrated approach to other fibre layup inspection images.
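As an illustration of the combined approach described in this abstract, the following is a minimal sketch, assuming PyTorch/torchvision (the paper does not specify a framework); the class name, the six-class 32x32 single-channel setup, and the particular transforms are illustrative assumptions, not the authors' implementation.

```python
# Sketch: geometric-transformation augmentation followed by a conditional DCGAN generator.
import torch
import torch.nn as nn
from torchvision import transforms

# 1) Geometric transformations applied first to the few real defect images.
geometric_aug = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomVerticalFlip(),
    transforms.RandomRotation(degrees=10),
    transforms.RandomAffine(degrees=0, translate=(0.05, 0.05)),
])

# 2) Conditional DCGAN generator: noise vector concatenated with a defect-class embedding.
class ConditionalGenerator(nn.Module):
    def __init__(self, z_dim=100, n_classes=6, embed_dim=16):
        super().__init__()
        self.embed = nn.Embedding(n_classes, embed_dim)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim + embed_dim, 256, 4, 1, 0), nn.BatchNorm2d(256), nn.ReLU(True),
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(True),
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(True),
            nn.ConvTranspose2d(64, 1, 4, 2, 1), nn.Tanh(),  # 32x32 single-channel scan image
        )

    def forward(self, z, labels):
        cond = torch.cat([z, self.embed(labels)], dim=1)    # condition on the defect class
        return self.net(cond.unsqueeze(-1).unsqueeze(-1))

g = ConditionalGenerator()
fake = g(torch.randn(8, 100), torch.randint(0, 6, (8,)))   # 8 synthetic defect images
```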

2021 ◽  
Vol 2089 (1) ◽  
pp. 012012
Author(s):  
K Nitalaksheswara Rao ◽  
P Jayasree ◽  
Ch.V.Murali Krishna ◽  
K Sai Prasanth ◽  
Ch Satyananda Reddy

Abstract Advances in deep learning require a significantly large amount of data for training, and the protection of individual data plays a key role in data privacy and publication. Recent developments in deep learning pose a major challenge for traditionally used approaches to image anonymization, such as the model inversion attack, in which an adversary repeatedly queries the model in order to reconstruct the original image from the anonymized image. To provide stronger protection for image anonymization, an approach is presented here to convert the input (raw) image into a new synthetic image by applying optimized noise to the latent space representation (LSR) of the original image. The synthetic image is anonymized by adding well-designed noise calculated over the gradient during the learning process, and the resulting image is both realistic and immune to the model inversion attack. More precisely, we extend the approach proposed by T. Kim and J. Yang, 2019 by using a Deep Convolutional Generative Adversarial Network (DCGAN) in order to make the approach more efficient. Our aim is to improve the efficiency of the model by changing the loss function to achieve optimal privacy in less time and with less computation. Finally, the proposed approach is demonstrated on a benchmark dataset. The experimental study shows that the proposed method can efficiently convert the input image into another synthetic image that is of high quality as well as immune to the model inversion attack.
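A minimal sketch of the general idea, assuming PyTorch; the Encoder/Generator classes, layer sizes, and the simplified Gaussian perturbation below are hypothetical stand-ins for the paper's optimized, gradient-calibrated noise, not its actual method.

```python
# Sketch: perturb the latent space representation (LSR) of an image and re-synthesize it.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps a 1x64x64 image to a latent vector (the LSR)."""
    def __init__(self, z_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Flatten(), nn.Linear(128 * 8 * 8, z_dim),
        )
    def forward(self, x):
        return self.conv(x)

class Generator(nn.Module):
    """DCGAN-style decoder that maps a latent vector back to a 1x64x64 image."""
    def __init__(self, z_dim=128):
        super().__init__()
        self.fc = nn.Linear(z_dim, 128 * 8 * 8)
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(True),
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(True),
            nn.ConvTranspose2d(32, 1, 4, 2, 1), nn.Tanh(),
        )
    def forward(self, z):
        return self.deconv(self.fc(z).view(-1, 128, 8, 8))

def anonymize(x, encoder, generator, noise_scale=0.5):
    """Add noise to the LSR before re-synthesis (the paper's noise is optimized, not fixed)."""
    z = encoder(x)
    z_noisy = z + noise_scale * torch.randn_like(z)
    return generator(z_noisy)                       # synthetic, anonymized image

x_anon = anonymize(torch.randn(4, 1, 64, 64), Encoder(), Generator())
```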


F1000Research ◽  
2021 ◽  
Vol 10 ◽  
pp. 256
Author(s):  
Thierry Pécot ◽  
Alexander Alekseyenko ◽  
Kristin Wallace

Deep learning has revolutionized the automatic processing of images. While deep convolutional neural networks have demonstrated astonishing segmentation results for many biological objects acquired with microscopy, their good performance relies on large training datasets. In this paper, we present a strategy to minimize the amount of time spent manually annotating images for segmentation. It involves using an efficient and open-source annotation tool, artificially increasing the training data set with data augmentation, creating an artificial data set with a conditional generative adversarial network, and combining semantic and instance segmentations. We evaluate the impact of each of these approaches for the segmentation of nuclei in 2D widefield images of human precancerous polyp biopsies in order to define an optimal strategy.
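As a minimal sketch of the last ingredient (combining semantic and instance information), the following uses a marker-controlled watershed with scikit-image/SciPy; this is one common way to realise that combination, not necessarily the exact pipeline used in the paper, and the function and threshold names are illustrative.

```python
# Sketch: split touching nuclei by flooding from instance markers inside a semantic mask.
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed

def combine_semantic_and_instance(semantic_prob, center_prob,
                                  sem_thresh=0.5, center_thresh=0.8):
    """semantic_prob: per-pixel nucleus probability; center_prob: per-pixel
    probability of being a nucleus centre (both HxW arrays in [0, 1])."""
    foreground = semantic_prob > sem_thresh
    markers, _ = ndi.label(center_prob > center_thresh)   # one seed per nucleus
    # Flood from the seeds, constrained to the semantic foreground.
    labels = watershed(-semantic_prob, markers, mask=foreground)
    return labels                                         # HxW instance label map

labels = combine_semantic_and_instance(np.random.rand(256, 256),
                                       np.random.rand(256, 256))
```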


2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Ying Fu ◽  
MinXue Gong ◽  
Guang Yang ◽  
JinRong Hu ◽  
Hong Wei ◽  
...  

The generative adversarial network (GAN) is well suited to fitting data distributions, so it can provide data augmentation by fitting the real distribution and synthesizing additional training data. In this way, a deep convolutional model can be trained well even on a small medical image data set. However, certain gaps still exist between synthetic images and real images. In order to further narrow these gaps, this paper proposes a method that applies SimGAN to the task of optimizing synthetic cardiac magnetic resonance images. In addition, an improved residual structure is used to deepen the network and improve the performance of the optimizer. Finally, experiments demonstrate the good results of our GAN-based data augmentation method.
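A minimal sketch of the SimGAN-style idea, assuming PyTorch; the layer widths, block count, and loss weighting are illustrative assumptions rather than the paper's architecture. The refiner is built from residual blocks and trained with an adversarial term plus an L1 self-regularization term that keeps the refined image close to the synthetic input.

```python
# Sketch: residual refiner and its loss (adversarial + L1 self-regularization).
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, ch=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch), nn.ReLU(True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch),
        )
    def forward(self, x):
        return torch.relu(x + self.body(x))    # identity shortcut

class Refiner(nn.Module):
    def __init__(self, in_ch=1, ch=64, n_blocks=6):
        super().__init__()
        self.head = nn.Conv2d(in_ch, ch, 3, padding=1)
        self.blocks = nn.Sequential(*[ResidualBlock(ch) for _ in range(n_blocks)])
        self.tail = nn.Conv2d(ch, in_ch, 1)
    def forward(self, x):
        return torch.tanh(self.tail(self.blocks(self.head(x))))

def refiner_loss(refiner, discriminator, synthetic, lam=0.1):
    refined = refiner(synthetic)
    logits = discriminator(refined)
    adv = nn.functional.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))
    self_reg = torch.mean(torch.abs(refined - synthetic))  # stay close to the synthetic input
    return adv + lam * self_reg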


Author(s):  
Yubo Liu ◽  
Yihua Luo ◽  
Qiaoming Deng ◽  
Xuanxing Zhou

Abstract This paper aims to explore the idea and method of using deep learning with a small number of samples to realize campus layout generation. From the architect's perspective, we construct two small-sample campus layout data sets through manual screening according to the preferences of specific architects. These data sets are used to train a Pix2Pix model to automatically generate campus layouts conditioned on a given campus boundary and surrounding roads. Through analysis of the experimental results, this paper finds that, provided the collected samples are screened effectively, deep learning with even a small-sample data set can achieve good results.
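For orientation, the following is a minimal sketch of the standard Pix2Pix objective in PyTorch (the generator and discriminator definitions are omitted, and the variable names such as site_map and real_layout are placeholders, not the authors' code): the generator is trained to fool a conditional discriminator while staying close to the target layout via an L1 term.

```python
# Sketch: Pix2Pix losses for translating a boundary/road map into a campus layout image.
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()
l1 = nn.L1Loss()

def pix2pix_generator_loss(G, D, site_map, real_layout, lam=100.0):
    """site_map: boundary + surrounding roads; real_layout: the architect-screened layout."""
    fake_layout = G(site_map)
    pred_fake = D(torch.cat([site_map, fake_layout], dim=1))  # conditional discriminator
    adv = bce(pred_fake, torch.ones_like(pred_fake))          # fool the discriminator
    recon = l1(fake_layout, real_layout)                      # stay close to the target layout
    return adv + lam * recon

def pix2pix_discriminator_loss(G, D, site_map, real_layout):
    pred_real = D(torch.cat([site_map, real_layout], dim=1))
    pred_fake = D(torch.cat([site_map, G(site_map).detach()], dim=1))
    return (bce(pred_real, torch.ones_like(pred_real))
            + bce(pred_fake, torch.zeros_like(pred_fake)))
```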


Human face recognition is a complex task, and it is important because it can be applied to assist people worldwide, for example in the medical field or in security, where human faces can be used for detecting pain or emotion. Nevertheless, a key drawback of deep learning methods is that they need a large amount of data. In this study, a deep-learning-based classification technique is presented that generates synthetic images of facial expression and orientation by utilizing the Wasserstein generative adversarial network (WGAN). The WGAN can improve the performance of the deep learning method, and the proposed system generates images from a small dataset rather than requiring a large one. This research aims to address this limitation of deep learning by increasing the accuracy of the system. The generated output coincides with the real image dataset. An application using ResNet-50 and RetinaNet as pre-trained models for the prediction and detection of human faces showed rapid prediction times and good accuracy during the assessment test.
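For reference, a minimal sketch of one WGAN critic update with weight clipping, assuming PyTorch; this is the textbook WGAN formulation, not the study's exact code, and the function and argument names are illustrative.

```python
# Sketch: WGAN critic step (Wasserstein loss estimate plus Lipschitz weight clipping).
import torch

def critic_step(critic, generator, real_images, opt_c, z_dim=100, clip=0.01):
    z = torch.randn(real_images.size(0), z_dim)
    fake_images = generator(z).detach()
    # Maximise E[D(real)] - E[D(fake)]  <=>  minimise the negative.
    loss = -(critic(real_images).mean() - critic(fake_images).mean())
    opt_c.zero_grad()
    loss.backward()
    opt_c.step()
    # Enforce the Lipschitz constraint by clipping critic weights.
    for p in critic.parameters():
        p.data.clamp_(-clip, clip)
    return loss.item()
```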


2021 ◽  
Vol 23 (4) ◽  
pp. 745-756
Author(s):  
Yi Lyu ◽  
Yijie Jiang ◽  
Qichen Zhang ◽  
Ci Chen

Remaining useful life (RUL) prediction plays a crucial role in decision-making in condition-based maintenance for preventing catastrophic field failure. For degradation-failed products, the data from the performance deterioration process are the key to lifetime estimation. Deep learning has been shown to have excellent performance in RUL prediction provided that the degradation data set is sufficiently large. However, in some applications the degradation data are insufficient, and improving prediction accuracy in this setting remains a challenging problem. To tackle this challenge, we propose a novel deep-learning-based RUL prediction framework that amplifies the degradation dataset. Specifically, we leverage the cycle-consistent generative adversarial network to generate synthetic data, with which the original degradation dataset is amplified so that the data characteristics hidden in the sample space can be captured. Moreover, a sliding time window strategy and a deep bidirectional long short-term memory network are employed to complete the RUL prediction framework. We show the effectiveness of the proposed method by running it on the turbine engine data set from the National Aeronautics and Space Administration. Comparative experiments show that our method outperforms the case in which the synthetically generated data are not used.
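A minimal sketch of the sliding-time-window preparation and a bidirectional LSTM regressor of the kind described, assuming PyTorch/NumPy; the window length, feature count, and layer sizes are placeholders, and the CycleGAN augmentation step is assumed to have already produced the amplified degradation sequences.

```python
# Sketch: cut degradation records into windows and regress RUL with a BiLSTM.
import numpy as np
import torch
import torch.nn as nn

def sliding_windows(sequence, window=30):
    """Cut a (time, features) degradation record into overlapping windows."""
    return np.stack([sequence[i:i + window]
                     for i in range(len(sequence) - window + 1)])

class BiLSTMRUL(nn.Module):
    def __init__(self, n_features=14, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, num_layers=2,
                            batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)        # 2*hidden: forward + backward states
    def forward(self, x):                           # x: (batch, window, features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])                # RUL estimate from the last time step

windows = sliding_windows(np.random.rand(200, 14).astype(np.float32))
rul = BiLSTMRUL()(torch.from_numpy(windows))        # one RUL prediction per window
```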


Information ◽  
2021 ◽  
Vol 12 (6) ◽  
pp. 249
Author(s):  
Xin Jin ◽  
Yuanwen Zou ◽  
Zhongbing Huang

The cell cycle is an important process in cellular life. In recent years, some image processing methods have been developed to determine the cell cycle stages of individual cells. However, in most of these methods, cells have to be segmented and their features extracted, and during feature extraction some important information may be lost, resulting in lower classification accuracy. Thus, we used a deep learning method to retain all cell features. In order to address the insufficient number of original images and their imbalanced distribution, we used the Wasserstein generative adversarial network with gradient penalty (WGAN-GP) for data augmentation. At the same time, a residual network (ResNet), one of the most widely used deep learning classification networks, was used for image classification. Our method classified cell cycle images more effectively, reaching an accuracy of 83.88%; compared with an accuracy of 79.40% in previous experiments, this is an increase of 4.48 percentage points. Another dataset was used to verify the effect of our model and, compared with the accuracy from previous results, our accuracy increased by 12.52%. These results show that our new cell cycle image classification system based on WGAN-GP and ResNet is useful for the classification of imbalanced images. Moreover, our method could potentially solve the low classification accuracy in biomedical images caused by insufficient numbers of original images and their imbalanced distribution.
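For context, the gradient-penalty term is what distinguishes WGAN-GP from plain WGAN. The following is a minimal sketch of that term in PyTorch, evaluated on random interpolations between real and generated cell images; it is the standard formulation, not the paper's code.

```python
# Sketch: WGAN-GP gradient penalty.
import torch

def gradient_penalty(critic, real, fake, lam=10.0):
    eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    interp = (eps * real + (1 - eps) * fake).requires_grad_(True)
    scores = critic(interp)
    grads = torch.autograd.grad(outputs=scores, inputs=interp,
                                grad_outputs=torch.ones_like(scores),
                                create_graph=True)[0]
    grad_norm = grads.view(grads.size(0), -1).norm(2, dim=1)
    return lam * ((grad_norm - 1) ** 2).mean()   # pushes the gradient norm towards 1
```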


2021 ◽  
Vol 11 (5) ◽  
pp. 2166
Author(s):  
Van Bui ◽  
Tung Lam Pham ◽  
Huy Nguyen ◽  
Yeong Min Jang

In the last decade, predictive maintenance has attracted a lot of attention in industrial factories because of the wide use of the Internet of Things and artificial intelligence algorithms for data management. However, in the early phases, when abnormal and faulty machines rarely appear in factories, only limited sets of machine fault samples are available. With limited fault samples, it is difficult to train a fault classification model because of the imbalance of the input data, so data augmentation is required to increase the accuracy of the learning model. However, few methods exist to generate and evaluate data for this purpose. In this paper, we introduce a method that uses the generative adversarial network as a fault signal augmentation method to enrich the dataset. The enhanced data set can increase the accuracy of the machine fault detection model during training. We also performed fault detection using a variety of preprocessing approaches and classification models to evaluate the similarity between the generated data and the authentic data. The generated fault data have high similarity with the original data and significantly improve the accuracy of the model: fault detection accuracy reaches 99.41% when 20% of the fault data are original and 93.1% when only generated fault data are used (0% original fault data). Based on this, we conclude that the generated data can be mixed with original data to improve model performance.
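A minimal sketch of the mixing step, assuming PyTorch; the dataset names, signal length, and the 20%/80% split below are illustrative placeholders, not the paper's data.

```python
# Sketch: mix a small set of original fault samples with GAN-generated ones for training.
import torch
from torch.utils.data import TensorDataset, ConcatDataset, DataLoader

def build_training_set(real_faults, generated_faults, normal_samples):
    """Each argument: (signals, labels) tensors; generated_faults come from the GAN."""
    parts = [TensorDataset(*real_faults), TensorDataset(*generated_faults),
             TensorDataset(*normal_samples)]
    return ConcatDataset(parts)

# e.g. a 20%-original / 80%-generated fault mix (illustrative sizes only)
loader = DataLoader(build_training_set(
    (torch.randn(20, 128), torch.ones(20, dtype=torch.long)),     # original fault signals
    (torch.randn(80, 128), torch.ones(80, dtype=torch.long)),     # GAN-generated fault signals
    (torch.randn(100, 128), torch.zeros(100, dtype=torch.long))), # normal-machine signals
    batch_size=32, shuffle=True)
```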


2021 ◽  
Vol 263 (2) ◽  
pp. 4558-4564
Author(s):  
Minghong Zhang ◽  
Xinwei Luo

Underwater acoustic target recognition is an important aspect of underwater acoustic research. In recent years, machine learning has developed continuously and is widely and effectively applied in underwater acoustic target recognition. Adequate data sets are essential to acquire good recognition results and to reduce overfitting. However, underwater acoustic samples are relatively rare, which affects recognition accuracy. In this paper, in addition to traditional audio data augmentation methods, a new data augmentation method using a generative adversarial network is proposed, which uses a generator and discriminator to learn the characteristics of underwater acoustic samples and thereby generate reliable underwater acoustic signals to expand the training data set. The expanded data set is input into a deep neural network, and transfer learning is applied to further reduce the impact of small sample sizes by fixing part of the pre-trained parameters. The experimental results show that the recognition performance of this method is better than that of general underwater acoustic recognition methods, verifying its effectiveness.
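A minimal sketch of the transfer-learning step of fixing part of the pre-trained parameters, assuming PyTorch/torchvision; the ResNet-18 backbone, the layers chosen to stay trainable, and the five-class output are stand-in assumptions, since the paper's network is not specified here.

```python
# Sketch: freeze early pre-trained layers, fine-tune only the later ones and a new head.
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights="IMAGENET1K_V1")   # stand-in pre-trained backbone
for name, param in model.named_parameters():
    if not name.startswith(("layer4", "fc")):      # fix the early feature extractors
        param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 5)      # e.g. 5 underwater target classes

trainable = [p for p in model.parameters() if p.requires_grad]  # pass these to the optimizer
```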


2021 ◽  
Author(s):  
James Howard ◽  
Joe Tracey ◽  
Mike Shen ◽  
Shawn Zhang ◽  
...  

Borehole image logs are used to identify the presence and orientation of fractures, both natural and induced, found in reservoir intervals. The contrast in electrical or acoustic properties of the rock matrix and fluid-filled fractures is sufficiently large that sub-resolution features can be detected by these image logging tools. The resolution of these image logs is based on the design and operation of the tools, and generally is in the millimeter-per-pixel range. Hence the quantitative measurement of actual fracture width remains problematic. An artificial intelligence (AI)-based workflow combines the statistical information obtained from a Machine-Learning (ML) segmentation process with a multiple-layer neural network that defines a Deep Learning process that enhances fractures in a borehole image. These new images allow for a more robust analysis of fracture widths, especially those that are sub-resolution. The images from a BHTV log were first segmented into rock and fluid-filled fractures using an ML-segmentation tool that applied multiple image processing filters to capture information describing patterns in fracture-rock distribution based on nearest-neighbor behavior. The ML analysis was trained by users to identify these two components over a short interval in the well, and the regression-model coefficients were then applied to the remaining log. Based on the training, each pixel was assigned a probability value between 1.0 (being a fracture) and 0.0 (pure rock), with most of the pixels assigned one of these two values. Intermediate probabilities represented pixels on the edge of the rock-fracture interface or the presence of one or more sub-resolution fractures within the rock. The probability matrix produced a map, or image, of the distribution of probabilities that determined whether a given pixel in the image was a fracture or partially filled with a fracture. The Deep Learning neural network was based on a Conditional Generative Adversarial Network (cGAN) approach in which the probability map was first encoded and combined with a noise vector that acted as a seed for diverse feature generation. This combination was used to generate new images that represented the BHTV response. The second layer of the neural network, the adversarial or discriminator portion, determined whether the generated images were representative of the actual BHTV by comparing them with actual images from the log and producing an output probability of real or fake. This probability was then used to train the generator and discriminator models, which were then applied to the entire log. Several scenarios were run with different probability maps. The enhanced BHTV images brought out fractures observed in the core photos that were less obvious in the original BHTV log, through enhanced continuity and improved resolution of fracture widths.
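A minimal sketch of the generator side of such a cGAN, assuming PyTorch; the layer sizes, the 64x64 patch size, and the class name are illustrative assumptions, not the workflow's actual implementation. The probability map is encoded, a noise vector is broadcast and concatenated with the encoded features, and the result is decoded into an enhanced BHTV-like image.

```python
# Sketch: cGAN generator conditioned on the ML probability map plus a noise seed.
import torch
import torch.nn as nn

class MapConditionedGenerator(nn.Module):
    def __init__(self, z_dim=64):
        super().__init__()
        self.encode = nn.Sequential(                    # encode the probability map
            nn.Conv2d(1, 32, 4, 2, 1), nn.ReLU(True),   # 64 -> 32
            nn.Conv2d(32, 64, 4, 2, 1), nn.ReLU(True),  # 32 -> 16
        )
        self.decode = nn.Sequential(                    # decode features + noise
            nn.ConvTranspose2d(64 + z_dim, 64, 4, 2, 1), nn.ReLU(True),
            nn.ConvTranspose2d(64, 1, 4, 2, 1), nn.Tanh(),
        )
    def forward(self, prob_map, z):                     # prob_map: (B,1,64,64), z: (B,z_dim)
        feat = self.encode(prob_map)
        z_map = z.view(z.size(0), -1, 1, 1).expand(-1, -1, feat.size(2), feat.size(3))
        return self.decode(torch.cat([feat, z_map], dim=1))

g = MapConditionedGenerator()
enhanced = g(torch.rand(2, 1, 64, 64), torch.randn(2, 64))   # (2, 1, 64, 64) enhanced patches
```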

