Generative Adversarial Network for Damage Identification in Civil Structures

2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Zahra Rastin ◽  
Gholamreza Ghodrati Amiri ◽  
Ehsan Darvishan

In recent years, many efforts have been made to develop efficient deep-learning-based structural health monitoring (SHM) methods. Most of the proposed methods employ supervised algorithms that require data from different damaged states of a structure in order to monitor its health conditions. As such data are not usually available for real civil structures, using supervised algorithms for the health monitoring of these structures might be impracticable. This paper presents a novel two-stage technique based on generative adversarial networks (GANs) for unsupervised SHM and damage identification. In the first stage, a deep convolutional GAN (DCGAN) is used to detect and quantify structural damages; the detected damages are then localized in the second stage using a conditional GAN (CGAN). Raw acceleration signals from a monitored structure are used for this purpose, and the networks are trained by only the intact state data of the structure. The proposed method is validated through applications on the numerical model of a bridge health monitoring (BHM) benchmark structure, an experimental steel structure located at Qatar University, and the full-scale Tianjin Yonghe Bridge.
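The first stage lends itself to a compact sketch: a 1-D DCGAN trained only on intact-state acceleration windows, whose discriminator score can later flag and roughly quantify departures from the intact state. The window length, layer sizes, and scoring rule below are illustrative assumptions, not the authors' exact architecture.

```python
# Hypothetical sketch: a 1-D DCGAN trained only on intact-state acceleration
# windows; at inference, windows the discriminator scores as "unlikely intact"
# indicate (and roughly quantify) damage. Layer sizes are illustrative.
import torch
import torch.nn as nn

WIN = 1024          # samples per acceleration window (assumed)
LATENT = 64         # latent dimension of the generator (assumed)

G = nn.Sequential(  # latent vector -> synthetic acceleration window
    nn.Linear(LATENT, 128 * (WIN // 8)),
    nn.Unflatten(1, (128, WIN // 8)),
    nn.ConvTranspose1d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
    nn.ConvTranspose1d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
    nn.ConvTranspose1d(32, 1, 4, stride=2, padding=1), nn.Tanh(),
)
D = nn.Sequential(  # acceleration window -> "intact-likeness" logit
    nn.Conv1d(1, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv1d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv1d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Flatten(), nn.Linear(128 * (WIN // 8), 1),
)

bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

def train_step(intact_batch):
    """One adversarial update using intact-state windows only."""
    b = intact_batch.size(0)
    fake = G(torch.randn(b, LATENT))
    # discriminator: intact windows -> 1, generated windows -> 0
    loss_d = bce(D(intact_batch), torch.ones(b, 1)) + bce(D(fake.detach()), torch.zeros(b, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # generator: fool the discriminator
    loss_g = bce(D(fake), torch.ones(b, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

def damage_score(window):
    """Low discriminator logit = window deviates from the intact state."""
    with torch.no_grad():
        return -D(window.unsqueeze(0)).item()
```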

2017 ◽  
Author(s):  
Benjamin Sanchez-Lengeling ◽  
Carlos Outeiral ◽  
Gabriel L. Guimaraes ◽  
Alan Aspuru-Guzik

Molecular discovery seeks to generate chemical species tailored to very specific needs. In this paper, we present ORGANIC, a framework based on Objective-Reinforced Generative Adversarial Networks (ORGAN), capable of producing a distribution over molecular space that matches with a certain set of desirable metrics. This methodology combines two successful techniques from the machine learning community: a Generative Adversarial Network (GAN), to create non-repetitive sensible molecular species, and Reinforcement Learning (RL), to bias this generative distribution towards certain attributes. We explore several applications, from optimization of random physicochemical properties to candidates for drug discovery and organic photovoltaic material design.
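The objective-reinforced idea can be summarized as a reward that blends the discriminator's realness score with a domain metric; a REINFORCE-style policy update then pushes the generator toward molecules that score well on both. The blend weight and the stand-in functions below are assumptions for illustration, not the ORGANIC defaults.

```python
# Hypothetical sketch of the objective-reinforced reward used to bias the
# generator: a convex blend of the discriminator's "realness" score and a
# chemistry metric. The lambda weight and the stand-in metric are illustrative.
def organ_reward(smiles, discriminator_score, objective_metric, lam=0.5):
    """R(x) = lam * D(x) + (1 - lam) * O(x), both assumed to lie in [0, 1]."""
    return lam * discriminator_score(smiles) + (1.0 - lam) * objective_metric(smiles)

# Example stand-ins (the real framework uses a trained discriminator and
# cheminformatics metrics such as logP or QED):
d_score = lambda s: 0.8                              # pretend D finds s plausible
toy_property = lambda s: min(1.0, len(set(s)) / 20.0)  # toy "property" in [0, 1]

print(organ_reward("CCO", d_score, toy_property))    # reward fed to the RL update
```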


2021 ◽  
Vol 11 (15) ◽  
pp. 7034
Author(s):  
Hee-Deok Yang

Artificial intelligence technologies and vision systems are used in various devices, such as automotive navigation systems, object-tracking systems, and intelligent closed-circuit televisions. In particular, outdoor vision systems have been applied across numerous fields of analysis. Despite their widespread use, current systems work well only under good weather conditions; they cannot account for inclement conditions such as rain, fog, mist, and snow. Images captured under such conditions degrade the performance of vision systems, so the noise caused by rain, snow, and mist must be detected, recognized, and removed to boost the performance of the image-processing algorithms. Several studies have targeted the removal of noise resulting from inclement conditions. We focused on eliminating the effects of raindrops on images captured with outdoor vision systems in which the camera was exposed to rain. An attentive generative adversarial network (ATTGAN) was used to remove raindrops from the images. This network was composed of two parts: an attentive-recurrent network and a contextual autoencoder. The ATTGAN generated an attention map to detect raindrops, and a de-rained image was produced with an increased number of attentive-recurrent network layers; the deeper recurrence mitigated gradient sparsity, making generation more stable without preventing the network from converging. The experimental results confirmed that the extended ATTGAN could effectively remove various types of raindrops from images.
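An illustrative (not the authors' exact) recurrent attention stage is sketched below: a shared convolutional block refines a raindrop attention map over several steps, and increasing the number of steps corresponds to the deeper recurrence discussed above.

```python
# Illustrative sketch (not the exact ATTGAN code): a recurrent attention stage
# that refines a raindrop attention map over several time steps; the de-raining
# autoencoder then consumes the rainy image together with the final map.
import torch
import torch.nn as nn

class AttentiveRecurrent(nn.Module):
    def __init__(self, steps=4):                  # more steps = deeper recurrence
        super().__init__()
        self.steps = steps
        self.refine = nn.Sequential(              # shared conv block per step
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, rainy):                     # rainy: (B, 3, H, W)
        att = torch.zeros_like(rainy[:, :1])      # start with an empty map
        for _ in range(self.steps):
            att = self.refine(torch.cat([rainy, att], dim=1))
        return att                                # (B, 1, H, W) raindrop attention

att_net = AttentiveRecurrent(steps=6)             # an "extended" recurrence
rain_img = torch.rand(1, 3, 128, 128)
attention_map = att_net(rain_img)
```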


Author(s):  
Lingyu Yan ◽  
Jiarun Fu ◽  
Chunzhi Wang ◽  
Zhiwei Ye ◽  
Hongwei Chen ◽  
...  

Abstract With the development of image recognition technology, the face, body shape, and other factors have been widely used as identification labels, which provides a lot of convenience for daily life. However, image recognition places much higher requirements on image conditions than traditional identification methods such as a password. Image enhancement therefore plays an important role in the analysis of noisy images, among which low-light images are the focus of our research. In this paper, a low-light image enhancement method based on Generative Adversarial Networks (GAN) optimized with an enhanced network module is proposed. The proposed method first applies the enhancement network to map the input image into a similar image in a new space through the generator, and then constructs and minimizes a loss function to train the discriminator, which compares the generated image with the real image. We implemented the proposed method on two image datasets (DPED, LOL) and compared it with both traditional image enhancement methods and deep learning approaches. Experiments showed that images enhanced by the proposed network achieve higher PSNR and SSIM and relatively good overall perceptual quality, demonstrating the effectiveness of the method for low-illumination image enhancement.
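As a hedged sketch of the training objective described above, the generator can be driven by an adversarial term plus a pixel-wise content term while the discriminator learns to separate enhanced outputs from real well-lit references; the toy networks and loss weight below are placeholders, not the paper's configuration.

```python
# Minimal, hypothetical view of the training objective: adversarial term
# (fool the discriminator) plus a pixel-wise content term toward the reference.
import torch
import torch.nn as nn

adv = nn.BCEWithLogitsLoss()
l1 = nn.L1Loss()

def generator_loss(G, D, low_light, reference, content_weight=100.0):
    """Adversarial loss + weighted L1 distance to the well-lit reference."""
    enhanced = G(low_light)
    fool = adv(D(enhanced), torch.ones(enhanced.size(0), 1))
    return fool + content_weight * l1(enhanced, reference), enhanced

def discriminator_loss(D, enhanced, reference):
    """D learns to separate generated enhancements from real well-lit images."""
    real = adv(D(reference), torch.ones(reference.size(0), 1))
    fake = adv(D(enhanced.detach()), torch.zeros(enhanced.size(0), 1))
    return real + fake

# Toy networks for illustration only (the paper's enhancement network is deeper).
G = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(16, 3, 3, padding=1), nn.Sigmoid())
D = nn.Sequential(nn.Conv2d(3, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
                  nn.Flatten(), nn.LazyLinear(1))
low, ref = torch.rand(2, 3, 64, 64), torch.rand(2, 3, 64, 64)
g_loss, out = generator_loss(G, D, low, ref)
d_loss = discriminator_loss(D, out, ref)
```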


2021 ◽  
pp. 147592172110219
Author(s):  
Huachen Jiang ◽  
Chunfeng Wan ◽  
Kang Yang ◽  
Youliang Ding ◽  
Songtao Xue

Wireless sensors are the key components of structural health monitoring systems. During signal transmission, sensor failure is inevitable, and data loss is the most common type. The missing data problem poses a huge challenge to subsequent damage detection and condition assessment and therefore deserves great attention. Conventional missing data imputation basically adopts correlation-based methods, especially for strain monitoring data. However, such methods often require delicate model selection, and the correlations of vehicle-induced strains are much harder to capture than those of temperature-induced strains. In this article, a novel data-driven generative adversarial network (GAN) for imputing missing strain responses is proposed. As opposed to traditional approaches in which inter-strain correlations are explicitly modeled, the proposed method directly imputes the missing data from the remaining observed data, taking into account the spatial–temporal relationships with other strain sensors. Furthermore, an intact and complete dataset is not even necessary during the training process, which is another great advantage over model-based imputation methods. The proposed method is implemented and verified on a real concrete bridge. To demonstrate the applicability and robustness of the GAN, imputation for single and multiple sensors is studied. Results show that the proposed method provides excellent imputation accuracy and efficiency.
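A minimal sketch of the masked-imputation idea, assuming a generator that receives the observed strain channels plus a missingness mask and predicts the full multi-sensor window (the sensor count, window length, and network are illustrative, not the authors' model):

```python
# Hedged sketch in the spirit of masked GAN imputation: the generator sees the
# observed strains and a mask, predicts the full window, and observed entries
# are kept while missing ones are taken from the generator.
import torch
import torch.nn as nn

SENSORS, STEPS = 8, 256                      # assumed: 8 strain gauges, 256 time steps

G = nn.Sequential(                           # (observed, mask) -> imputed window
    nn.Conv1d(2 * SENSORS, 64, 5, padding=2), nn.ReLU(),
    nn.Conv1d(64, 64, 5, padding=2), nn.ReLU(),
    nn.Conv1d(64, SENSORS, 5, padding=2),
)

def impute(observed, mask):
    """observed: (B, SENSORS, STEPS) with zeros at gaps; mask: 1 = observed."""
    pred = G(torch.cat([observed * mask, mask], dim=1))
    return mask * observed + (1.0 - mask) * pred   # keep real data where present

window = torch.randn(1, SENSORS, STEPS)
mask = torch.ones_like(window)
mask[:, 2, 100:180] = 0.0                    # sensor 3 drops out for 80 steps
filled = impute(window, mask)
```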


Author(s):  
Johannes Haubold ◽  
René Hosch ◽  
Lale Umutlu ◽  
Axel Wetter ◽  
Patrizia Haubold ◽  
...  

Abstract Objectives To reduce the dose of intravenous iodine-based contrast media (ICM) in CT through virtual contrast-enhanced images generated with generative adversarial networks. Methods Dual-energy CTs in the arterial phase of 85 patients were randomly split into an 80/20 train/test collective. Four different generative adversarial networks (GANs) were trained on image pairs, each comprising one slice with virtually reduced ICM and the original full-ICM CT slice, testing two input formats (2D and 2.5D) and two reduced ICM dose levels (−50% and −80%). The amount of intravenous ICM was virtually reduced by creating a virtual non-contrast series from the dual-energy data and adding the corresponding percentage of the iodine map. The evaluation was based on several scores (L1 loss, SSIM, PSNR, FID) that assess image quality and similarity. Additionally, a visual Turing test (VTT) with three radiologists was used to assess similarity and pathological consistency. Results The −80% models reach an SSIM of > 98%, a PSNR of > 48, an L1 of between 7.5 and 8, and an FID of between 1.6 and 1.7. In comparison, the −50% models reach an SSIM of > 99%, a PSNR of > 51, an L1 of between 6.0 and 6.1, and an FID of between 0.8 and 0.95. For the crucial question of pathological consistency, only the 50% ICM reduction networks achieved 100% consistency, which is required for clinical use. Conclusions The required amount of ICM for CT can be reduced by 50% while maintaining image quality and diagnostic accuracy using GANs. Further phantom studies and animal experiments are required to confirm these initial results. Key Points • The amount of contrast media required for CT can be reduced by 50% using generative adversarial networks. • Not only the image quality but especially the pathological consistency must be evaluated to assess safety. • An overly pronounced contrast media reduction (80% in our collective) could impair pathological consistency.
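The construction of the virtually reduced-dose input described in the Methods is simple arithmetic on the dual-energy series; a small illustration with placeholder arrays follows.

```python
# Simple numeric illustration of how the virtually reduced-dose input is built
# from dual-energy series: virtual non-contrast plus a scaled iodine map.
# Array shapes and values are placeholders.
import numpy as np

def virtual_reduced_icm(vnc, iodine_map, dose_fraction):
    """E.g. dose_fraction = 0.5 simulates a 50% ICM dose (-50% reduction)."""
    return vnc + dose_fraction * iodine_map

vnc = np.zeros((512, 512), dtype=np.float32)          # virtual non-contrast slice (HU)
iodine = np.full((512, 512), 120.0, dtype=np.float32) # iodine enhancement map (HU)
slice_50 = virtual_reduced_icm(vnc, iodine, 0.5)      # input for the -50% model
slice_20 = virtual_reduced_icm(vnc, iodine, 0.2)      # input for the -80% model
```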


Author(s):  
Huilin Zhou ◽  
Huimin Zheng ◽  
Qiegen Liu ◽  
Jian Liu ◽  
Yuhao Wang

Abstract Electromagnetic inverse-scattering problems (ISPs) are concerned with determining the properties of an unknown object from measured scattered fields. ISPs are often highly nonlinear, which makes them very difficult to solve. In addition, the images reconstructed by different optimization methods are often distorted, leading to inaccurate reconstruction results. To alleviate these issues, we propose a new linear-model solution based on generative adversarial networks (LM-GAN). Two sub-networks are trained alternately in the adversarial framework: a linear deep iterative network acts as the generative network and captures the spatial distribution of the data, while a discriminative network estimates the probability that a sample comes from the training data. Numerical results validate that LM-GAN achieves admirable fidelity and accuracy when reconstructing complex scatterers.
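A hypothetical sketch of the "linear deep iterative" generator idea: a few unrolled update steps, each applying a learned linear (convolutional) correction to the current estimate given a coarse back-projected field. Dimensions and the physics operator are placeholders rather than the paper's formulation.

```python
# Hypothetical sketch only: an unrolled iterative generator whose candidate
# reconstructions are then judged by the discriminator.
import torch
import torch.nn as nn

class UnrolledGenerator(nn.Module):
    def __init__(self, steps=5):
        super().__init__()
        self.updates = nn.ModuleList(
            nn.Conv2d(2, 1, 3, padding=1) for _ in range(steps)
        )  # each step maps (current estimate, back-projected field) -> correction

    def forward(self, backprojection):            # coarse image from measured fields
        x = torch.zeros_like(backprojection)
        for conv in self.updates:
            x = x + conv(torch.cat([x, backprojection], dim=1))
        return x                                  # reconstructed contrast image

gen = UnrolledGenerator()
coarse = torch.rand(1, 1, 64, 64)                 # e.g. a back-propagated scattered field
recon = gen(coarse)                               # candidate image fed to the discriminator
```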


Author(s):  
Khaled ELKarazle ◽  
Valliappan Raman ◽  
Patrick Then

Age estimation models can be employed in many applications, including soft biometrics, content access control, targeted advertising, and many more. However, because some facial images are taken in unrestrained conditions, their quality degrades, which results in the loss of several essential ageing features. This study investigates how introducing a new layer of data processing based on a super-resolution generative adversarial network (SRGAN) model can influence the accuracy of age estimation by enhancing the quality of both the training and testing samples. Additionally, we introduce a novel convolutional neural network (CNN) classifier to distinguish between several age classes. We train one of our classifiers on a reconstructed version of the original dataset and compare its performance with an identical classifier trained on the original version of the same dataset. Our findings reveal that the classifier trained on the reconstructed dataset produces better classification accuracy, opening the door for more research into building data-centric machine learning systems.
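The data-processing layer can be pictured as a simple pipeline in which a (pretrained) super-resolution generator reconstructs every face image before it reaches the age classifier; the stand-in networks and the number of age bins below are assumptions for illustration, not the authors' trained models.

```python
# Illustrative pipeline only: super-resolve low-quality faces, then classify age.
import torch
import torch.nn as nn

sr_generator = nn.Sequential(                      # stand-in for an SRGAN generator
    nn.Conv2d(3, 16, 3, padding=1), nn.PReLU(),
    nn.Upsample(scale_factor=4, mode="bicubic"),   # 4x spatial upscaling
    nn.Conv2d(16, 3, 3, padding=1),
)

age_classifier = nn.Sequential(                    # stand-in CNN over age classes
    nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 6),  # assumed 6 age bins
)

def preprocess_and_classify(face_batch):
    """Super-resolve low-quality faces, then predict an age class."""
    with torch.no_grad():
        enhanced = sr_generator(face_batch)
    return age_classifier(enhanced).argmax(dim=1)

faces = torch.rand(4, 3, 32, 32)                   # low-resolution input faces
print(preprocess_and_classify(faces))              # predicted age-class indices
```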


2020 ◽  
Author(s):  
Fajr Alarsan ◽  
Mamoon Younes

Abstract Generative Adversarial Networks (GANs) are among the most popular generative frameworks and have achieved compelling performance. They follow an adversarial approach in which two deep models, a generator and a discriminator, compete with each other. In this paper, we propose a Generative Adversarial Network with best-hyper-parameter selection: the generator produces fake images of the digits 1 to 9, and the discriminator is trained to decide whether the generated images are fake or real. A Genetic Algorithm (GA) is used to adapt the GAN hyper-parameters, and the resulting method is named GANGA: Generative Adversarial Network with Genetic Algorithm. The implementation uses Python with the TensorFlow library in an Anaconda environment, together with the other required libraries, and was validated on the MNIST dataset. The idea of the proposed method is to let the genetic algorithm choose the best hyper-parameter values by minimizing a cost function, such as a loss function, or by maximizing an accuracy function. The GA was used to select the values of the learning rate, batch normalization, number of neurons, and the parameter of the dropout layer.
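A compact, generic sketch of this kind of search loop: individuals encode the four hyper-parameters listed above, and fitness would be the loss from a short GAN training run (replaced here by a toy surrogate, since the actual training routine is not part of this sketch).

```python
# Generic GA-over-hyper-parameters sketch; `train_gan_briefly` is a toy
# surrogate for the real fitness (a short GAN training run), not an existing API.
import random

SEARCH = {
    "lr":        [1e-4, 2e-4, 5e-4, 1e-3],
    "batchnorm": [True, False],
    "neurons":   [128, 256, 512],
    "dropout":   [0.2, 0.3, 0.4, 0.5],
}

def random_individual():
    return {k: random.choice(v) for k, v in SEARCH.items()}

def crossover(a, b):
    return {k: random.choice([a[k], b[k]]) for k in SEARCH}

def mutate(ind, rate=0.2):
    return {k: (random.choice(SEARCH[k]) if random.random() < rate else v)
            for k, v in ind.items()}

def train_gan_briefly(ind):
    """Toy surrogate fitness: lower is better."""
    return abs(ind["lr"] - 2e-4) * 1e3 + ind["dropout"]

def evolve(generations=10, pop_size=8):
    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=train_gan_briefly)              # rank by fitness
        parents = pop[: pop_size // 2]               # keep the best half
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return min(pop, key=train_gan_briefly)           # best hyper-parameter set

print(evolve())
```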


2021 ◽  
Vol 251 ◽  
pp. 03043
Author(s):  
Fedor Ratnikov ◽  
Alexander Rogachev

Simulation is one of the key components in high energy physics. Historically it relies on Monte Carlo methods, which require a tremendous amount of computational resources. These methods may not be able to meet the demands expected at the High Luminosity Large Hadron Collider, so the experiment is in urgent need of new fast simulation techniques. The application of Generative Adversarial Networks is a promising solution to speed up the simulation while providing the necessary physics performance. In this paper we propose the Self-Attention Generative Adversarial Network as a possible improvement of the network architecture. The approach is demonstrated by generating responses of an LHCb-type electromagnetic calorimeter.
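The SAGAN-style ingredient is a self-attention block inserted into the generator so that distant calorimeter cells can influence one another; the channel sizes and grid below are illustrative, not the proposed network's exact configuration.

```python
# Sketch of a SAGAN-style self-attention block over 2-D feature maps.
import torch
import torch.nn as nn

class SelfAttention2d(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.q = nn.Conv2d(channels, channels // 8, 1)
        self.k = nn.Conv2d(channels, channels // 8, 1)
        self.v = nn.Conv2d(channels, channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))   # learned blend with the input

    def forward(self, x):                           # x: (B, C, H, W)
        b, c, h, w = x.shape
        q = self.q(x).flatten(2).transpose(1, 2)    # (B, HW, C//8)
        k = self.k(x).flatten(2)                    # (B, C//8, HW)
        v = self.v(x).flatten(2)                    # (B, C, HW)
        attn = torch.softmax(q @ k, dim=-1)         # (B, HW, HW) pairwise weights
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x                 # residual connection

block = SelfAttention2d(64)
feature_map = torch.rand(2, 64, 15, 15)             # e.g. a 15x15 calorimeter grid
print(block(feature_map).shape)                      # torch.Size([2, 64, 15, 15])
```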


2021 ◽  
Vol 12 (6) ◽  
pp. 1-20
Author(s):  
Fayaz Ali Dharejo ◽  
Farah Deeba ◽  
Yuanchun Zhou ◽  
Bhagwan Das ◽  
Munsif Ali Jatoi ◽  
...  

Single image super-resolution (SISR) produces high-resolution images with fine spatial detail from remotely sensed images with low spatial resolution. Recently, deep learning and generative adversarial networks (GANs) have made breakthroughs on this challenging task. However, the generated images still suffer from undesirable artifacts such as missing texture-feature representation and high-frequency information. We propose a frequency-domain-based spatio-temporal remote sensing single image super-resolution technique that reconstructs the HR image with generative adversarial networks operating on various frequency bands (TWIST-GAN). The new method incorporates Wavelet Transform (WT) characteristics and a transferred generative adversarial network: the LR image is split into various frequency bands using the WT, the transferred GAN predicts the high-frequency components via the proposed architecture, and the inverse wavelet transform finally produces the reconstructed super-resolved image. The model is first trained on the external DIV2K dataset and validated on the UC Merced Landsat remote sensing dataset and Set14, with each image sized 256 × 256. Transferred GANs are then used to process spatio-temporal remote sensing images in order to reduce computation cost and improve texture information. The findings are compared qualitatively and quantitatively with current state-of-the-art approaches. In addition, we saved about 43% of GPU memory during training and accelerated the execution of our simplified version by eliminating batch normalization layers.
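The frequency-band split and the final inverse transform can be illustrated with a standard 2-D discrete wavelet transform; the wavelet choice and the random input below are placeholders, not the TWIST-GAN settings.

```python
# Minimal illustration of the frequency-band split: a 2-D discrete wavelet
# transform decomposes the LR image into one low-frequency and three
# high-frequency sub-bands; the GAN branch is then responsible for sharpening
# the high-frequency content. The 'haar' wavelet is an illustrative choice.
import numpy as np
import pywt

lr_image = np.random.rand(256, 256).astype(np.float32)    # one LR band/channel
cA, (cH, cV, cD) = pywt.dwt2(lr_image, "haar")             # approximation + details
print(cA.shape, cH.shape, cV.shape, cD.shape)              # each (128, 128)

# After the network predicts enhanced high-frequency sub-bands, the inverse
# transform reassembles the super-resolved image:
sr_image = pywt.idwt2((cA, (cH, cV, cD)), "haar")
print(sr_image.shape)                                      # (256, 256)
```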

