IMAGE-TO-IMAGE TRANSLATION FOR ENHANCED FEATURE MATCHING, IMAGE RETRIEVAL AND VISUAL LOCALIZATION

Author(s):  
M. S. Mueller ◽  
T. Sattler ◽  
M. Pollefeys ◽  
B. Jutzi

Abstract. The performance of machine learning and deep learning algorithms for image analysis depends significantly on the quantity and quality of the training data. The generation of annotated training data is often costly, time-consuming and laborious. Data augmentation is a powerful option for overcoming these drawbacks. We therefore augment training data by rendering images with arbitrary poses from 3D models to increase the quantity of training images. These rendered images usually show artifacts and are of limited use for advanced image analysis. We therefore propose to use image-to-image translation to transform images from a rendered domain to a captured domain. We show that translated images in the captured domain are of higher quality than the rendered images. Moreover, we demonstrate that image-to-image translation based on rendered 3D models enhances the performance of common computer vision tasks, namely feature matching, image retrieval and visual localization. The experimental results clearly show the improvement of translated images over rendered images for all investigated tasks. In addition, we present the advantages of utilizing translated images over exclusively captured images for visual localization.
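The paper itself includes no code; as a rough illustration of the rendered-to-captured translation step, here is a minimal PyTorch sketch. The generator architecture, layer sizes and tensor shapes are hypothetical stand-ins, not the authors' implementation:

```python
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    """Hypothetical encoder-decoder generator mapping rendered -> captured domain."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),           # downsample 128 -> 64
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),          # downsample 64 -> 32
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(), # upsample 32 -> 64
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Tanh(),  # image in [-1, 1]
        )

    def forward(self, x):
        return self.net(x)

gen = TinyGenerator()
rendered = torch.rand(8, 3, 128, 128) * 2 - 1  # stand-in batch of rendered images
translated = gen(rendered)                     # images moved toward the captured domain
print(translated.shape)                        # torch.Size([8, 3, 128, 128])
```

In practice such a generator would be trained adversarially against real captured images; this sketch only shows the translation interface.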

Author(s):  
M. S. Mueller ◽  
A. Metzger ◽  
B. Jutzi

Abstract. Image-based localization or camera re-localization is a fundamental task in computer vision and mandatory in the fields of navigation for robotics and autonomous driving or for virtual and augmented reality. Such image pose regression in 6 Degrees of Freedom (DoF) has recently been tackled by Convolutional Neural Networks (CNNs). However, well-established methods based on feature matching still achieve higher accuracies. We therefore investigate how data augmentation could further improve CNN-based pose regression. Data augmentation is a valuable technique for boosting the performance of training-based methods and is widespread in the computer vision community. Our aim in this paper is to show the benefit of data augmentation for pose regression by CNNs. For this purpose, images are rendered from a 3D model of the actual test environment. This model, in turn, is generated from the original training data set, so no additional information or data is required. Furthermore, we introduce different training sets composed of rendered and real images. It is shown that the enhanced training of CNNs by utilizing 3D models of the environment improves the image localization accuracy. On our investigated data set, the accuracy of pose regression improved by up to 69.37% for the position component and 61.61% for the rotation component.
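For context, CNN-based pose regression of this kind is typically trained with a combined position/orientation objective. The PoseNet-style loss below is a common formulation, not necessarily the exact loss used in the paper; the weighting `beta` is scene-dependent and chosen here only for illustration:

```python
import torch

def pose_loss(pos_pred, quat_pred, pos_true, quat_true, beta=500.0):
    """PoseNet-style 6-DoF loss: position error plus weighted rotation error.

    quat_true is assumed to be a unit quaternion; beta trades off metres
    against quaternion distance (hypothetical value).
    """
    quat_pred = quat_pred / quat_pred.norm(dim=-1, keepdim=True)  # normalize prediction
    return (pos_pred - pos_true).norm(dim=-1).mean() \
         + beta * (quat_pred - quat_true).norm(dim=-1).mean()

# toy usage with random tensors (batch of 4 poses)
loss = pose_loss(torch.rand(4, 3), torch.rand(4, 4), torch.rand(4, 3), torch.rand(4, 4))
print(loss)
```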


Sensors ◽  
2019 ◽  
Vol 19 (24) ◽  
pp. 5479 ◽  
Author(s):  
Maryam Rahnemoonfar ◽  
Jimmy Johnson ◽  
John Paden

Significant resources have been spent in collecting and storing large and heterogeneous radar datasets during expensive Arctic and Antarctic fieldwork. The vast majority of data available is unlabeled, and the labeling process is both time-consuming and expensive. One possible alternative to the labeling process is the use of synthetically generated data with artificial intelligence. Instead of labeling real images, we can generate synthetic data based on arbitrary labels. In this way, training data can be quickly augmented with additional images. In this research, we evaluated the performance of synthetically generated radar images based on modified cycle-consistent adversarial networks. We conducted several experiments to test the quality of the generated radar imagery. We also tested the quality of a state-of-the-art contour detection algorithm on synthetic data and different combinations of real and synthetic data. Our experiments show that synthetic radar images generated by a generative adversarial network (GAN) can be used in combination with real images for data augmentation and training of deep neural networks. However, the synthetic images generated by GANs cannot be used solely for training a neural network (training on synthetic and testing on real) as they cannot simulate all of the radar characteristics such as noise or Doppler effects. To the best of our knowledge, this is the first work to create radar sounder imagery based on generative adversarial networks.
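The modified network builds on cycle-consistent adversarial training. As a reminder of the core idea (standard CycleGAN terms, not the authors' modifications), the cycle-consistency loss can be sketched as:

```python
import torch

def cycle_consistency_loss(G, F, real_a, real_b, lam=10.0):
    """Standard CycleGAN cycle loss: translating A -> B -> A (and B -> A -> B)
    should reproduce the input. G maps domain A to B, F maps B to A."""
    loss_a = (F(G(real_a)) - real_a).abs().mean()  # A -> B -> A reconstruction error
    loss_b = (G(F(real_b)) - real_b).abs().mean()  # B -> A -> B reconstruction error
    return lam * (loss_a + loss_b)

# toy check with identity generators: the cycle loss is exactly zero
x = torch.rand(2, 1, 64, 64)
print(cycle_consistency_loss(lambda t: t, lambda t: t, x, x))  # tensor(0.)
```

This term is combined with the usual adversarial losses so that translated images both look realistic and remain consistent with their source.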


2018 ◽  
Author(s):  
Naihui Zhou ◽  
Zachary D Siegel ◽  
Scott Zarecor ◽  
Nigel Lee ◽  
Darwin A Campbell ◽  
...  

Abstract. The accuracy of machine learning tasks critically depends on high-quality ground truth data. Therefore, in many cases, producing good ground truth data typically involves trained professionals; however, this can be costly in time, effort, and money. Here we explore the use of crowdsourcing to generate a large volume of good-quality training data. We consider an image analysis task involving the segmentation of corn tassels from images taken in a field setting. We investigate the accuracy, speed and other quality metrics when this task is performed by students for academic credit, Amazon MTurk workers, and Master Amazon MTurk workers. We conclude that the Amazon MTurk and Master MTurk workers perform significantly better than the for-credit students, but with no significant difference between the two MTurk worker types. Furthermore, the quality of the segmentation produced by Amazon MTurk workers rivals that of an expert worker. We provide best practices to assess the quality of ground truth data, and to compare data quality produced by different sources. We conclude that properly managed crowdsourcing can be used to establish large volumes of viable ground truth data at a low cost and high quality, especially in the context of high-throughput plant phenotyping. We also provide several metrics for assessing the quality of the generated datasets.

Author Summary. Food security is a growing global concern. Farmers, plant breeders, and geneticists are hastening to address the challenges presented to agriculture by climate change, dwindling arable land, and population growth. Scientists in the field of plant phenomics are using satellite and drone images to understand how crops respond to a changing environment and to combine genetics and environmental measures to maximize crop growth efficiency. However, the terabytes of image data require new computational methods to extract useful information. Machine learning algorithms are effective in recognizing select parts of images, but they require high-quality data curated by people to train them, a process that can be laborious and costly. We examined how well crowdsourcing works in providing training data for plant phenomics, specifically, segmenting a corn tassel – the male flower of the corn plant – from the often-cluttered images of a cornfield. We provided images to students, and to Amazon MTurkers, the latter being an on-demand workforce brokered by Amazon.com and paid on a task-by-task basis. We report on best practices in crowdsourcing image labeling for phenomics, and compare the different groups on measures such as fatigue and accuracy over time. We find that crowdsourcing is a good way of generating quality labeled data, rivaling that of experts.
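The study's specific quality metrics are not reproduced here; one standard way to score a crowdsourced segmentation against an expert reference is intersection-over-union, shown below as a generic illustration (not necessarily the measure used in the paper):

```python
import numpy as np

def iou(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Intersection-over-union between two boolean segmentation masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return float(inter) / float(union) if union else 1.0

worker = np.zeros((100, 100), bool); worker[20:60, 30:70] = True  # crowd worker's tassel mask
expert = np.zeros((100, 100), bool); expert[25:65, 30:70] = True  # expert reference mask
print(f"IoU = {iou(worker, expert):.2f}")
```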


2021 ◽  
Vol 11 (12) ◽  
pp. 5586
Author(s):  
Eunkyeong Kim ◽  
Jinyong Kim ◽  
Hansoo Lee ◽  
Sungshin Kim

Artificial intelligence technologies and robot vision systems are core technologies in smart factories. Currently, there is scholarly interest in automatic data feature extraction in smart factories using deep learning networks. However, sufficient training data are required to train these networks. In addition, barely perceptible noise can affect classification accuracy. Therefore, to increase the amount of training data and achieve robustness against noise attacks, a data augmentation method based on the adaptive inverse peak signal-to-noise ratio was developed in this study to account for the color characteristics of the training images. This method automatically determines the optimal range of the color perturbation used to generate images, using weights derived from the characteristics of the training images. The experimental results showed that the proposed method could generate new training images from original images, classify noisy images with greater accuracy, and generally improve the classification accuracy. This demonstrates that the proposed method is effective and robust to noise, even when the training data are deficient.
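As a rough sketch of the idea of scaling a color perturbation by image quality: in the snippet below, the perturbation range grows with the inverse PSNR. This weighting is a simplified stand-in for the paper's adaptive scheme, and all constants are illustrative:

```python
import numpy as np

def psnr(img, ref):
    """Peak signal-to-noise ratio (dB) between two uint8 images."""
    mse = np.mean((img.astype(np.float64) - ref.astype(np.float64)) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse) if mse > 0 else np.inf

def adaptive_color_perturb(img, ref, k=200.0, rng=None):
    """Shift each RGB channel by a random amount whose range is proportional
    to the inverse PSNR (hypothetical weighting, not the paper's exact rule)."""
    rng = rng or np.random.default_rng(0)
    width = k / max(psnr(img, ref), 1.0)               # inverse-PSNR range
    shift = rng.uniform(-width, width, size=3)         # one shift per channel
    out = img.astype(np.float64) + shift
    return np.clip(out, 0, 255).astype(np.uint8)

ref = np.full((64, 64, 3), 128, np.uint8)              # clean reference image
noisy = np.clip(ref + np.random.default_rng(1).normal(0, 5, ref.shape), 0, 255).astype(np.uint8)
augmented = adaptive_color_perturb(noisy, ref)
```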


Author(s):  
Qingsong Wen ◽  
Liang Sun ◽  
Fan Yang ◽  
Xiaomin Song ◽  
Jingkun Gao ◽  
...  

Deep learning has recently performed remarkably well on many time series analysis tasks. The superior performance of deep neural networks relies heavily on a large amount of training data to avoid overfitting. However, the labeled data of many real-world time series applications may be limited, as in classification of medical time series and anomaly detection in AIOps. As an effective way to enhance the size and quality of the training data, data augmentation is crucial to the successful application of deep learning models on time series data. In this paper, we systematically review different data augmentation methods for time series. We propose a taxonomy for the reviewed methods, and then provide a structured review of these methods by highlighting their strengths and limitations. We also empirically compare different data augmentation methods for different tasks including time series classification, anomaly detection, and forecasting. Finally, we discuss and highlight five future directions to provide useful research guidance.
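Two of the simplest time-domain augmentations covered by surveys of this kind, jittering and magnitude scaling, look like the following (generic transforms; hyperparameters chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(42)

def jitter(x, sigma=0.03):
    """Add independent Gaussian noise to each time step."""
    return x + rng.normal(0.0, sigma, size=x.shape)

def scale(x, sigma=0.1):
    """Multiply the whole series by a random per-channel factor."""
    factor = rng.normal(1.0, sigma, size=(1, x.shape[-1]))
    return x * factor

series = np.sin(np.linspace(0, 6 * np.pi, 200))[:, None]  # toy univariate series (T, C)
augmented = scale(jitter(series))
```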


2021 ◽  
Vol 11 (4) ◽  
pp. 1798
Author(s):  
Jun Yang ◽  
Huijuan Yu ◽  
Tao Shen ◽  
Yaolian Song ◽  
Zhuangfei Chen

As is well known, electroencephalography (EEG) can measure the real-time electrodynamics of the human brain; signal processing techniques, particularly deep learning, can not only provide novel solutions for learning but also optimize robust representations from EEG signals. Considering the limited data collection and the subjects' inadequate concentration during testing, it becomes essential to obtain sufficient training data and useful features for a potential end-user of a brain–computer interface (BCI) system. In this paper, we combined a conditional variational auto-encoder network (CVAE) with a generative adversarial network (GAN) for learning latent representations from EEG brain signals. By updating the fine-tuned parameters fed into the resulting generative model, we could synthesize EEG signals of a specific category. We employed an encoder network to obtain the distributed samples of the EEG signal, and applied an adversarial learning mechanism to continuously optimize the parameters of the generator, discriminator and classifier. The CVAE was adopted to make the synthetic samples approximate the real samples of each class more closely. Finally, we demonstrated that our approach takes advantage of both statistical and feature matching to make the training process converge faster and more stably, and that it addresses the problem of small-scale datasets in deep learning applications for motor imagery tasks through data augmentation. The augmented training datasets produced by our proposed CVAE-GAN method significantly enhance the performance of MI-EEG recognition.
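As a minimal illustration of the class-conditioning idea, the sketch below feeds a latent vector plus a one-hot motor-imagery label into a generator. Dimensions and architecture are hypothetical, not the paper's CVAE-GAN:

```python
import torch
import torch.nn as nn

class CondGenerator(nn.Module):
    """Hypothetical class-conditional generator for single-channel EEG windows."""
    def __init__(self, latent_dim=64, n_classes=4, out_len=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + n_classes, 128), nn.ReLU(),
            nn.Linear(128, out_len), nn.Tanh(),
        )

    def forward(self, z, y_onehot):
        return self.net(torch.cat([z, y_onehot], dim=-1))  # condition on the class label

gen = CondGenerator()
z = torch.randn(8, 64)
y = nn.functional.one_hot(torch.randint(0, 4, (8,)), 4).float()
fake_eeg = gen(z, y)  # (8, 256) synthetic EEG windows for the requested classes
```

In the full CVAE-GAN, an encoder, discriminator and classifier would be trained jointly with this generator; the sketch only shows how a chosen category steers the synthesis.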


Author(s):  
Guanhua Chen ◽  
Yun Chen ◽  
Yong Wang ◽  
Victor O.K. Li

Leveraging lexical constraints is extremely significant in domain-specific machine translation and interactive machine translation. Previous studies mainly focus on extending the beam search algorithm or augmenting the training corpus by replacing source phrases with the corresponding target translation. These methods either suffer from heavy computation cost during inference or depend on the quality of a bilingual dictionary pre-specified by the user or constructed with statistical machine translation. In response to these problems, we present a conceptually simple and empirically effective data augmentation approach for lexically constrained neural machine translation. Specifically, we make constraint-aware training data by first randomly sampling phrases of the reference as constraints, and then packing them into the source sentence with a separation symbol. Extensive experiments on several language pairs demonstrate that our approach achieves superior translation results over the existing systems, improving translation of constrained sentences without hurting the unconstrained ones.
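The described augmentation is straightforward to sketch. The `<sep>` token and the phrase-length cap below are illustrative choices, not necessarily the paper's exact settings:

```python
import random

def make_constrained_example(src_tokens, ref_tokens, max_phrase_len=3, sep="<sep>", rng=None):
    """Sample a phrase from the reference as a lexical constraint and
    pack it into the source sentence behind a separation symbol."""
    rng = rng or random.Random(0)
    n = rng.randint(1, max_phrase_len)                       # constraint length
    start = rng.randrange(0, max(len(ref_tokens) - n + 1, 1))
    constraint = ref_tokens[start:start + n]
    return src_tokens + [sep] + constraint

src = "wir müssen das Problem lösen".split()
ref = "we must solve the problem".split()
print(make_constrained_example(src, ref))
# e.g. ['wir', 'müssen', 'das', 'Problem', 'lösen', '<sep>', 'solve', 'the', 'problem']
```

Training on such packed inputs teaches the model to copy the constraint phrase into its output, so no constrained decoding is needed at inference time.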


2021 ◽  
pp. 0734242X2098788
Author(s):  
Yifeng Li ◽  
Xunpeng Qin ◽  
Zhenyuan Zhang ◽  
Huanyu Dong

End-of-life vehicles (ELVs) provide a particularly potent source of supply for metals. Hence, recycling and sorting techniques for ferrous and nonferrous metal scraps from ELVs significantly increase metal resource utilization. However, different kinds of nonferrous metal scraps, such as aluminium (Al) and copper (Cu), are not further automatically classified due to the lack of proper techniques. The purpose of this study is to propose an identification method for different nonferrous metal scraps, facilitate the further separation of nonferrous metal scraps, achieve better management of recycled metal resources and increase sustainability. A convolutional neural network (CNN) and SEEDS (superpixels extracted via energy-driven sampling) were adopted in this study. To build the classifier, 80 training images of randomly chosen Al and Cu scraps were taken, and some practical methods were proposed, including training patch generation with SEEDS, image data augmentation and automatic labelling for the large volume of training data. To obtain more accurate results, SEEDS was also used to refine the coarse results obtained from the pretrained CNN model. Five indicators were adopted to evaluate the final identification results. Furthermore, 15 test samples covering different classification environments were tested with the proposed model, and it performed well on all of the employed evaluation indexes, with an average precision of 0.98. The results demonstrate that the proposed model is robust for metal scrap identification, can be extended to complex industrial environments, and presents new possibilities for highly accurate automatic nonferrous metal scrap classification.
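SEEDS is available in OpenCV's contrib modules. A minimal sketch of generating superpixel regions from a scrap image might look like this; parameters are illustrative, and `opencv-contrib-python` is assumed to be installed:

```python
import cv2
import numpy as np

img = np.random.randint(0, 255, (240, 320, 3), np.uint8)  # stand-in for a scrap photo

h, w, c = img.shape
seeds = cv2.ximgproc.createSuperpixelSEEDS(w, h, c, 200, 4)  # ~200 superpixels, 4 levels
seeds.iterate(img, 10)                                       # refine for 10 iterations

labels = seeds.getLabels()           # (h, w) map of superpixel ids
n = seeds.getNumberOfSuperpixels()
# each superpixel region could then be cropped out as a CNN training patch
print(n, labels.shape)
```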


2020 ◽  
Vol 2020 (10) ◽  
pp. 310-1-310-7
Author(s):  
Khalid Omer ◽  
Luca Caucci ◽  
Meredith Kupinski

This work reports on convolutional neural network (CNN) performance on an image texture classification task as a function of linear image processing and the number of training images. Detection performance of single- and multi-layer CNNs (sCNN/mCNN) is compared to optimal observers. Performance is quantified by the area under the receiver operating characteristic (ROC) curve, also known as the AUC; AUC = 1.0 corresponds to perfect detection and AUC = 0.5 to guessing. The Ideal Observer (IO) maximizes AUC but is prohibitive in practice because it depends on high-dimensional image likelihoods. The IO performance is invariant to any full-rank, invertible linear image processing. This work demonstrates the existence of full-rank, invertible linear transforms that can degrade both sCNN and mCNN performance even in the limit of large quantities of training data. A subsequent invertible linear transform changes the images' correlation structure again and can improve this AUC. Stationary textures sampled from zero-mean, unequal-covariance Gaussian distributions allow closed-form analytic expressions for the IO and optimal linear compression. Linear compression is a mitigation technique for high-dimension, low-sample-size (HDLSS) applications. By definition, compression strictly decreases or maintains IO detection performance. For small quantities of training data, linear image compression prior to the sCNN architecture can increase AUC from 0.56 to 0.93. Results indicate an optimal compression ratio for CNNs based on task difficulty, compression method, and number of training images.
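For reference, the closed form that makes the IO tractable for zero-mean Gaussian textures is the log-likelihood ratio between the two classes (a standard result; notation is ours, with class covariances Σ₀ and Σ₁):

```latex
% IO test statistic for zero-mean Gaussian classes with covariances \Sigma_0, \Sigma_1:
% quadratic in the image vector x, so no high-dimensional density estimation is needed.
\lambda(\mathbf{x})
  = \ln\frac{p_1(\mathbf{x})}{p_0(\mathbf{x})}
  = \tfrac{1}{2}\,\mathbf{x}^{\top}\!\left(\Sigma_0^{-1}-\Sigma_1^{-1}\right)\mathbf{x}
  + \tfrac{1}{2}\ln\frac{\lvert\Sigma_0\rvert}{\lvert\Sigma_1\rvert}
```

Because λ(x) is a fixed quadratic form, applying an invertible linear transform to x leaves the induced ROC curve, and hence the IO's AUC, unchanged, which is the invariance property the abstract exploits.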

