GANs-based PIV resolution enhancement without the need of high-resolution input

Author(s):  
Alejandro Güemes ◽  
Carlos Sanmiguel Vila ◽  
Stefano Discetti

A data-driven approach to reconstruct high-resolution flow fields is presented. The method builds on recent advances in Super-Resolution Generative Adversarial Networks (SRGANs) to enhance the resolution of Particle Image Velocimetry (PIV). The proposed approach exploits the availability of incomplete projections of the high-resolution fields, obtained from the same set of images processed by standard PIV. Such incomplete projections are provided by sparse particle-based measurements such as super-resolution particle tracking velocimetry. Consequently, in contrast to other works, the method does not need a dual set of low/high-resolution images and can be applied directly to a single set of raw images for both training and estimation. This data-enhanced particle approach is assessed using two datasets generated from direct numerical simulations: a fluidic pinball and a turbulent channel flow. The results prove that this data-driven method is able to enhance the resolution of PIV measurements even in complex flows, without the need for a separate high-resolution experiment for training.
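Below is a minimal PyTorch-style sketch of the core idea, assuming the sparse PTV samples can be encoded as a binary mask on the high-resolution grid: the generator upsamples a coarse PIV field, and the supervised loss is evaluated only where sparse high-resolution data exist. The network layout, the 4x upsampling factor and all variable names are illustrative assumptions, not the authors' exact architecture.

```python
# Sketch: super-resolution generator trained against sparse high-resolution
# samples (e.g. PTV particle locations) encoded as a binary mask.
# Architecture and sizes are illustrative assumptions only.
import torch
import torch.nn as nn

class SRGenerator(nn.Module):
    def __init__(self, channels=2, scale=4):          # 2 velocity components
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=scale, mode="bilinear", align_corners=False),
            nn.Conv2d(64, channels, 3, padding=1),
        )

    def forward(self, lr_field):
        return self.body(lr_field)

def masked_mse(sr_field, sparse_hr, mask):
    """MSE evaluated only where sparse high-resolution data exist."""
    return ((sr_field - sparse_hr) ** 2 * mask).sum() / mask.sum().clamp(min=1)

# Example training step (adversarial term omitted for brevity):
gen = SRGenerator()
opt = torch.optim.Adam(gen.parameters(), lr=1e-4)
lr_field  = torch.randn(8, 2, 32, 32)     # coarse PIV vector fields
sparse_hr = torch.randn(8, 2, 128, 128)   # PTV samples scattered on the fine grid
mask      = (torch.rand(8, 1, 128, 128) < 0.05).float()  # ~5% of pixels observed

loss = masked_mse(gen(lr_field), sparse_hr, mask)
opt.zero_grad(); loss.backward(); opt.step()
```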

Sensors ◽  
2020 ◽  
Vol 20 (16) ◽  
pp. 4601
Author(s):  
Juan Wen ◽  
Yangjing Shi ◽  
Xiaoshi Zhou ◽  
Yiming Xue

Currently, various agricultural image classification tasks are carried out on high-resolution images. However, in some cases, enough high-resolution images cannot be obtained, which significantly degrades classification performance. In this paper, we design a crop disease classification network based on Enhanced Super-Resolution Generative Adversarial Networks (ESRGAN) for the case in which only a limited number of low-resolution target images is available. First, ESRGAN is used to recover super-resolution crop images from low-resolution images. Transfer learning is applied during model training to compensate for the lack of training samples. Then, we test the performance of the generated super-resolution images on the crop disease classification task. Extensive experiments show that using the fine-tuned ESRGAN model recovers realistic crop information and improves the accuracy of crop disease classification compared with four other image super-resolution methods.
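A hedged sketch of the two-stage pipeline is given below: a fine-tuned super-resolution generator recovers high-resolution crop images, and a transfer-learned CNN classifies them. The `esrgan` generator is a placeholder for any ESRGAN implementation; the ResNet-18 backbone and the assumed number of disease classes are illustrative choices, not the paper's exact setup.

```python
# Sketch of the pipeline: (1) super-resolve low-resolution crop images,
# (2) classify them with a transfer-learned CNN. `esrgan` is a placeholder
# for a fine-tuned ESRGAN generator; ResNet-18 and the class count are
# illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models

NUM_DISEASE_CLASSES = 10  # assumed number of crop disease classes

def build_classifier(num_classes=NUM_DISEASE_CLASSES):
    """ResNet-18 pretrained on ImageNet, new head for crop diseases."""
    net = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    for p in net.parameters():                # freeze backbone (transfer learning)
        p.requires_grad = False
    net.fc = nn.Linear(net.fc.in_features, num_classes)  # only the head is trained
    return net

def classify_low_res(esrgan, classifier, lr_images):
    """Super-resolve low-resolution images, then predict disease classes."""
    with torch.no_grad():
        sr_images = esrgan(lr_images)         # e.g. 4x upscaling
    return classifier(sr_images).argmax(dim=1)
```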


2020 ◽  
Vol 2020 ◽  
pp. 1-10
Author(s):  
Linyan Li ◽  
Yu Sun ◽  
Fuyuan Hu ◽  
Tao Zhou ◽  
Xuefeng Xi ◽  
...  

In this paper, we propose an Attentional Concatenation Generative Adversarial Network (ACGAN) aimed at generating 1024 × 1024 high-resolution images. First, we propose a multilevel cascade structure for text-to-image synthesis. During training, we gradually add new layers and, at the same time, use the outputs and word vectors from the previous layer as inputs to the next layer to generate high-resolution images with photo-realistic details. Second, the deep attentional multimodal similarity model is introduced into the network, and word vectors are matched with images in a common semantic space to compute a fine-grained matching loss for training the generator. In this way, the model can attend to fine-grained, word-level semantic information. Finally, a diversity measure is added to the discriminator, which enables the generator to obtain more diverse gradient directions and improves the diversity of the generated samples. The experimental results show that the inception scores of the proposed model on the CUB and Oxford-102 datasets reach 4.48 and 4.16, improvements of 2.75% and 6.42% over Attentional Generative Adversarial Networks (AttenGAN). The ACGAN model performs better on text-to-image generation, and the generated images are closer to real images.
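The following sketch illustrates the cascade idea under stated assumptions: each stage concatenates the previous stage's output with the word/sentence embedding broadcast over the spatial grid and produces an image at twice the resolution. Layer widths, the embedding size and the resolution ladder are hypothetical, and the attention and diversity terms are omitted.

```python
# Illustrative cascade stage: previous-stage output + broadcast text embedding
# -> image at twice the resolution. Sizes are assumptions for illustration.
import torch
import torch.nn as nn

class CascadeStage(nn.Module):
    def __init__(self, in_ch, word_dim, out_ch=3):
        super().__init__()
        self.refine = nn.Sequential(
            nn.Conv2d(in_ch + word_dim, 64, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2, mode="nearest"),
            nn.Conv2d(64, out_ch, 3, padding=1), nn.Tanh(),
        )

    def forward(self, feat, word_vec):
        # Broadcast the text embedding over the spatial grid and concatenate.
        b, _, h, w = feat.shape
        words = word_vec.view(b, -1, 1, 1).expand(b, word_vec.size(1), h, w)
        return self.refine(torch.cat([feat, words], dim=1))

# Chaining stages doubles the resolution each time (e.g. 64 -> 128 -> 256):
stage1 = CascadeStage(in_ch=3, word_dim=256)
stage2 = CascadeStage(in_ch=3, word_dim=256)
img64  = torch.randn(4, 3, 64, 64)
emb    = torch.randn(4, 256)
img128 = stage1(img64, emb)      # (4, 3, 128, 128)
img256 = stage2(img128, emb)     # (4, 3, 256, 256)
```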


2021 ◽  
Author(s):  
Mustaeen Ur Rehman Qazi ◽  
Florian Wellmann

Structural geological models are often calculated at a specific spatial resolution, for example in the form of grid representations, or when surfaces are extracted from implicit fields. However, the structural inventory in these models is limited by the underlying mathematical formulations, so above a certain resolution no additional information is added to the representation.

We evaluate here whether Deep Neural Networks can be trained to obtain a high-resolution representation from a low-resolution structural model, at different levels of resolution. More specifically, we test the use of state-of-the-art Generative Adversarial Networks (GANs) for image super-resolution in the context of 2-D geological model sections. These techniques aim to learn the hidden structure or information in a high-resolution image data set and then reproduce a highly detailed, super-resolved image from its low-resolution counterpart. We propose the use of GANs for super-resolution of geological images and of 2-D geological models represented as images. In this work we use the SRGAN network, which is trained with a perceptual loss function consisting of an adversarial loss, a mean squared error loss, and a content loss for photo-realistic image super-resolution. First results are promising, but challenges remain due to the different interpretation of color in the images for which these GANs are typically used, whereas we are mostly interested in structures.
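A minimal sketch of the SRGAN-style perceptual loss mentioned above is shown next, combining a pixel-wise MSE term, a VGG-feature content term and an adversarial term. The loss weights and the choice of VGG layer are assumptions for illustration, not the exact SRGAN configuration.

```python
# Sketch of an SRGAN-style perceptual loss: pixel MSE + VGG content loss
# + adversarial loss. Weights and the VGG cut are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models

vgg_features = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features[:36].eval()
for p in vgg_features.parameters():
    p.requires_grad = False

mse = nn.MSELoss()
bce = nn.BCEWithLogitsLoss()

def generator_loss(sr, hr, disc_logits_on_sr, w_content=6e-3, w_adv=1e-3):
    """Total generator loss = pixel MSE + content (VGG feature) loss + adversarial loss."""
    pixel_loss   = mse(sr, hr)
    content_loss = mse(vgg_features(sr), vgg_features(hr))
    adv_loss     = bce(disc_logits_on_sr, torch.ones_like(disc_logits_on_sr))
    return pixel_loss + w_content * content_loss + w_adv * adv_loss
```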


Sensors ◽  
2021 ◽  
Vol 21 (2) ◽  
pp. 464
Author(s):  
Wei Ma ◽  
Sean Qian

Recent decades have witnessed the breakthrough of autonomous vehicles (AVs), and the sensing capabilities of AVs have improved dramatically. The various sensors installed on AVs collect massive amounts of data and perceive the surrounding traffic continuously. In fact, a fleet of AVs can serve as floating (or probe) sensors that infer traffic information while cruising around the roadway network. Unlike conventional traffic sensing methods, which rely on fixed-location sensors or on moving sensors that acquire only the information of the carrying vehicle, this paper leverages data from sensor-equipped AVs to obtain not only the information of the AVs themselves but also the characteristics of the surrounding traffic. A high-resolution data-driven traffic sensing framework is proposed that estimates the fundamental traffic state characteristics, namely flow, density, and speed, at high spatio-temporal resolution and for each lane of a general road. The framework is developed for different levels of AV perception capability and for any AV market penetration rate. Experimental results show that the proposed method achieves high accuracy even at a low AV market penetration rate. This study helps policymakers and the private sector (e.g., Waymo) understand the value of the massive data collected by AVs for traffic operation and management.
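As a toy illustration of lane-level state estimation from probe observations (not the paper's estimator), the snippet below combines the number of vehicles perceived by an AV with the fraction of the lane segment it actually senses, and recovers flow from the fundamental relation q = k · v. The segment length, coverage fraction and speed values are hypothetical.

```python
# Toy lane-level traffic state estimate from AV probe observations,
# using the fundamental relation flow = density * speed.
# All numbers below are hypothetical illustrations.
def estimate_lane_state(observed_vehicles, segment_length_km,
                        mean_speed_kmh, av_detection_coverage):
    """Return (density in veh/km, flow in veh/h) for one lane-segment snapshot.

    observed_vehicles     -- vehicles perceived by the AV's on-board sensors
    av_detection_coverage -- fraction of the segment actually sensed (0-1],
                             which depends on AV penetration and sensor range
    """
    density = observed_vehicles / (segment_length_km * av_detection_coverage)
    flow = density * mean_speed_kmh          # q = k * v
    return density, flow

# e.g. 6 vehicles seen on a 0.5 km lane segment of which 40% is covered:
k, q = estimate_lane_state(6, 0.5, 45.0, 0.4)   # ~30 veh/km, ~1350 veh/h
```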


2021 ◽  
Vol 12 (5) ◽  
pp. 439-448
Author(s):  
Edward Collier ◽  
Supratik Mukhopadhyay ◽  
Kate Duffy ◽  
Sangram Ganguly ◽  
Geri Madanguit ◽  
...  

Author(s):  
Khaled ELKarazle ◽  
Valliappan Raman ◽  
Patrick Then

Age estimation models can be employed in many applications, including soft biometrics, content access control, targeted advertising, and many more. However, because some facial images are taken in unconstrained conditions, their quality degrades, which results in the loss of several essential ageing features. This study investigates how introducing a new layer of data processing based on a super-resolution generative adversarial network (SRGAN) model can influence the accuracy of age estimation by enhancing the quality of both the training and testing samples. Additionally, we introduce a novel convolutional neural network (CNN) classifier to distinguish between several age classes. We train one of our classifiers on a reconstructed version of the original dataset and compare its performance with an identical classifier trained on the original version of the same dataset. Our findings reveal that the classifier trained on the reconstructed dataset achieves better classification accuracy, opening the door for more research into building data-centric machine learning systems.
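A minimal sketch of this preprocessing layer follows, under the assumption that a pretrained SRGAN generator is available as a callable module: low-quality face images are super-resolved before being fed to a small CNN age classifier. The classifier architecture and the number of age classes are illustrative.

```python
# Sketch: super-resolve low-quality face images, then classify age.
# `srgan` is a placeholder for a pretrained SRGAN generator; the CNN
# layout and the five age classes are illustrative assumptions.
import torch
import torch.nn as nn

class AgeCNN(nn.Module):
    """Small CNN classifier over a handful of age classes."""
    def __init__(self, num_age_classes=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, num_age_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

def predict_age_class(srgan, classifier, low_quality_faces):
    """Reconstruct the faces with SRGAN, then classify the enhanced images."""
    with torch.no_grad():
        enhanced = srgan(low_quality_faces)   # e.g. 4x super-resolution
    return classifier(enhanced).argmax(dim=1)
```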

