Anomaly detection in Hyper Suprime-Cam galaxy images with generative adversarial networks

Author(s):  
Kate Storey-Fisher ◽  
Marc Huertas-Company ◽  
Nesar Ramachandra ◽  
Francois Lanusse ◽  
Alexie Leauthaud ◽  
...  

Abstract The problem of anomaly detection in astronomical surveys is becoming increasingly important as data sets grow in size. We present the results of an unsupervised anomaly detection method using a Wasserstein generative adversarial network (WGAN) on nearly one million optical galaxy images in the Hyper Suprime-Cam (HSC) survey. The WGAN learns to generate realistic HSC-like galaxies that follow the distribution of the data set; anomalous images are defined based on a poor reconstruction by the generator and outlying features learned by the discriminator. We find that the discriminator is more attuned to potentially interesting anomalies compared to the generator, and compared to a simpler autoencoder-based anomaly detection approach, so we use the discriminator-selected images to construct a high-anomaly sample of ∼13 000 objects. We propose a new approach to further characterize these anomalous images: we use a convolutional autoencoder to reduce the dimensionality of the residual differences between the real and WGAN-reconstructed images and perform UMAP clustering on these. We report detected anomalies of interest including galaxy mergers, tidal features, and extreme star-forming galaxies. A follow-up spectroscopic analysis of one of these anomalies is detailed in the Appendix; we find that it is an unusual system most likely to be a metal-poor dwarf galaxy with an extremely blue, higher-metallicity H ii region. We have released a catalog with the WGAN anomaly scores; the code and catalog are available at https://github.com/kstoreyf/anomalies-GAN-HSC, and our interactive visualization tool for exploring the clustered data is at https://weirdgalaxi.es.
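
The anomaly scores described above combine a generator-side reconstruction term with a discriminator-side feature term. The following is a minimal, illustrative sketch of such a score, not the authors' released code; the `latent_dim` and `features` attributes, the optimization settings, and the weighting `lam` are assumptions.

```python
import torch

def anomaly_score(image, generator, discriminator, n_steps=200, lam=0.1):
    # Optimize a latent vector so the generator reconstructs the input image.
    z = torch.randn(1, generator.latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=1e-2)
    for _ in range(n_steps):
        recon = generator(z)
        pixel_loss = torch.mean(torch.abs(image - recon))         # generator-side score
        feat_real = discriminator.features(image)
        feat_fake = discriminator.features(recon)
        feat_loss = torch.mean(torch.abs(feat_real - feat_fake))  # discriminator-side score
        loss = pixel_loss + lam * feat_loss
        opt.zero_grad()
        loss.backward()
        opt.step()
    return pixel_loss.item(), feat_loss.item()
```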

Electronics ◽  
2022 ◽  
Vol 11 (2) ◽  
pp. 245
Author(s):  
Konstantinos G. Liakos ◽  
Georgios K. Georgakilas ◽  
Fotis C. Plessas ◽  
Paris Kitsos

A significant problem in the field of hardware security is hardware trojan (HT) viruses. HTs can be inserted into a circuit at any phase of the production chain. HTs degrade the infected circuit, destroy it, or leak encrypted data. Nowadays, efforts are being made to address HTs through machine learning (ML) techniques, mainly for the gate-level netlist (GLN) phase, but these efforts face some restrictions. Specifically, the number and variety of normal and infected circuits available in free public libraries, such as Trust-HUB, are limited to a few benchmark samples created from large circuits. Thus, it is difficult, based on these data, to develop robust ML-based models against HTs. In this paper, we propose a new deep learning (DL) tool named Generative Artificial Intelligence Netlists SynthesIS (GAINESIS). GAINESIS is based on the Wasserstein Conditional Generative Adversarial Network (WCGAN) algorithm and on area–power analysis features from the GLN phase, and it synthesizes new normal and infected circuit samples for this phase. Based on our GAINESIS tool, we synthesized new data sets of different sizes and developed and compared seven ML classifiers. The results demonstrate that our newly generated data sets significantly enhance the performance of ML classifiers compared with the initial Trust-HUB data set.
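
As a rough illustration of the conditional Wasserstein setup described above, the sketch below shows one critic update on labeled tabular features; the layer interfaces, the gradient-penalty variant, and its weight are assumptions rather than the GAINESIS implementation.

```python
import torch

def critic_step(critic, generator, real_x, labels, opt_c, gp_weight=10.0):
    z = torch.randn(real_x.size(0), generator.latent_dim)
    fake_x = generator(z, labels).detach()
    # Wasserstein critic objective: push scores of real samples above fake ones.
    loss = critic(fake_x, labels).mean() - critic(real_x, labels).mean()
    # Gradient penalty on interpolates (WGAN-GP variant, assumed here).
    eps = torch.rand(real_x.size(0), 1)
    interp = (eps * real_x + (1.0 - eps) * fake_x).requires_grad_(True)
    grad = torch.autograd.grad(critic(interp, labels).sum(), interp, create_graph=True)[0]
    loss = loss + gp_weight * ((grad.norm(2, dim=1) - 1.0) ** 2).mean()
    opt_c.zero_grad()
    loss.backward()
    opt_c.step()
    return loss.item()
```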


2022 ◽  
Vol 132 ◽  
pp. 01016
Author(s):  
Juan Montenegro ◽  
Yeojin Chung

Advancements in security have provided ways of recording anomalies of daily life through video surveillance. In the present investigation, we develop a semi-supervised generative adversarial network model to detect and classify different types of crimes in videos. Additionally, we intend to tackle one of the most recurring difficulties of anomaly detection: illumination. For this, we propose a light augmentation algorithm based on gamma correction to help the semi-supervised generative adversarial network in its classification task. The proposed process performs slightly better than other proposed models.
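
The light augmentation idea is straightforward to sketch: each frame is rescaled to [0, 1], raised to a randomly drawn gamma, and rescaled back, which brightens (gamma < 1) or darkens (gamma > 1) the scene. The example below is illustrative; the gamma range is an assumption, not the paper's setting.

```python
import numpy as np

def gamma_augment(frame, gamma_range=(0.5, 2.0)):
    # Draw a random gamma, apply the power-law correction, and restore 8-bit range.
    gamma = np.random.uniform(*gamma_range)
    normalized = frame.astype(np.float32) / 255.0
    corrected = np.power(normalized, gamma)
    return (corrected * 255.0).astype(np.uint8), gamma
```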


Sensors ◽  
2019 ◽  
Vol 19 (15) ◽  
pp. 3269 ◽  
Author(s):  
Hongmin Gao ◽  
Dan Yao ◽  
Mingxia Wang ◽  
Chenming Li ◽  
Haiyun Liu ◽  
...  

Hyperspectral remote sensing images (HSIs) have great research and application value. At present, deep learning has become an important method for studying image processing. The Generative Adversarial Network (GAN) model is a typical deep learning network developed in recent years, and it can also be used to classify HSIs. However, some problems remain in the classification of HSIs. On the one hand, because different objects can share the same spectrum, generating samples from spectral data with the original GAN model alone produces incorrect detailed characteristic information. On the other hand, gradients vanish in the original GAN model, and the scoring ability of a single discriminator limits the quality of the generated samples. To solve the above problems, we introduce a scoring mechanism based on multi-discriminator collaboration and perform semi-supervised classification on three hyperspectral data sets. Compared with the original GAN model with a single discriminator, the adjusted criterion is more rigorous and accurate, and the generated samples show more accurate characteristics. To address the mode collapse and lack of diversity of samples generated by the original single-discriminator GAN, this paper proposes multi-discriminator generative adversarial networks (MDGANs) and studies the influence of the number of discriminators on the classification results. The experimental results show that introducing multiple discriminators improves the judgment ability of the model, ensures the quality of the generated samples, solves the problem of noise in generated spectral samples, and improves the classification of HSIs. At the same time, the number of discriminators has different effects on different data sets.
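
A hedged sketch of the multi-discriminator idea is given below: the generator is scored by several independent discriminators whose judgments are aggregated. The averaging rule and interfaces are assumptions for illustration, not the exact MDGANs formulation.

```python
import torch

def generator_step(generator, discriminators, opt_g, batch_size, latent_dim):
    z = torch.randn(batch_size, latent_dim)
    fake = generator(z)
    # Average the critics' scores so no single discriminator dominates the update.
    scores = torch.stack([d(fake).mean() for d in discriminators])
    loss = -scores.mean()
    opt_g.zero_grad()
    loss.backward()
    opt_g.step()
    return loss.item()
```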


2021 ◽  
Vol 13 (18) ◽  
pp. 3554
Author(s):  
Xiaowei Hu ◽  
Weike Feng ◽  
Yiduo Guo ◽  
Qiang Wang

Even though deep learning (DL) has achieved excellent results on some public data sets for synthetic aperture radar (SAR) automatic target recognition (ATR), several problems remain at present. One is the lack of transparency and interpretability of most existing DL networks. Another is the neglect of unknown target classes, which are often present in practice. To solve the above problems, a deep generation and recognition model is derived based on the Conditional Variational Auto-encoder (CVAE) and the Generative Adversarial Network (GAN). A feature space for SAR-ATR is built based on the proposed CVAE-GAN model. Using the feature space, clear SAR images can be generated with given class labels and observation angles. Besides, the feature of a SAR image is continuous in the feature space and can represent some attributes of the target. Furthermore, it is possible to classify the known classes and reject the unknown target classes by using the feature space. Experiments on the MSTAR data set validate the advantages of the proposed method.
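
Rejection of unknown classes in a learned feature space can be sketched as a simple distance-to-centroid test, as below; the metric and threshold are assumptions and this is not the paper's exact decision rule.

```python
import numpy as np

def classify_or_reject(feature, class_centroids, threshold):
    # Distance from the test feature to each known-class centroid.
    dists = {c: np.linalg.norm(feature - mu) for c, mu in class_centroids.items()}
    best_class = min(dists, key=dists.get)
    if dists[best_class] > threshold:
        return None  # too far from every known class: reject as unknown target
    return best_class
```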


2019 ◽  
Vol 142 (7) ◽  
Author(s):  
Dule Shu ◽  
James Cunningham ◽  
Gary Stump ◽  
Simon W. Miller ◽  
Michael A. Yukish ◽  
...  

Abstract The authors present a generative adversarial network (GAN) model that demonstrates how to generate 3D models in their native format so that they can be either evaluated using complex simulation environments or realized using methods such as additive manufacturing. Once initially trained, the GAN can create additional training data itself by generating new designs, evaluating them in a physics-based virtual environment, and adding the high performing ones to the training set. A case study involving a GAN model that is initially trained on 4045 3D aircraft models is used for demonstration, where a training data set that has been updated with GAN-generated and evaluated designs results in enhanced model generation, in both the geometric feasibility and performance of the designs. Z-tests on the performance scores of the generated aircraft models indicate a statistically significant improvement in the functionality of the generated models after three iterations of the training-evaluation process. In the case study, a number of techniques are explored to structure the generate-evaluate process in order to balance the need to generate feasible designs with the need for innovative designs.
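
The generate-evaluate-augment loop can be summarized schematically as follows; the simulator interface, performance threshold, and round count are placeholders rather than the authors' settings.

```python
def self_augmenting_training(gan, simulator, train_set, n_rounds=3,
                             n_generated=1000, perf_threshold=0.8):
    for _ in range(n_rounds):
        gan.train(train_set)
        candidates = [gan.generate() for _ in range(n_generated)]
        # Keep only designs that the physics-based evaluation scores highly.
        keepers = [d for d in candidates if simulator.evaluate(d) >= perf_threshold]
        train_set = train_set + keepers
    return gan, train_set
```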


2019 ◽  
Vol 9 (1) ◽  
Author(s):  
Makoto Naruse ◽  
Takashi Matsubara ◽  
Nicolas Chauvet ◽  
Kazutaka Kanno ◽  
Tianyu Yang ◽  
...  

Abstract Generative adversarial networks (GANs) are becoming increasingly important in the artificial construction of natural images and related functionalities, wherein two types of networks called generators and discriminators evolve through adversarial mechanisms. Using deep convolutional neural networks and related techniques, high-resolution and highly realistic scenes, human faces, etc. have been generated. GANs generally require large amounts of genuine training data sets, as well as vast amounts of pseudorandom numbers. In this study, we utilized chaotic time series generated experimentally by semiconductor lasers for the latent variables of a GAN, whereby the inherent nature of chaos could be reflected or transformed into the generated output data. We show that the similarity in proximity, which describes the robustness of the generated images with respect to minute changes in the input latent variables, is enhanced, while the versatility overall is not severely degraded. Furthermore, we demonstrate that the surrogate chaos time series eliminates the signature of the generated images that is originally observed corresponding to the negative autocorrelation inherent in the chaos sequence. We also address the effects of utilizing chaotic time series to retrieve images from the trained generator.
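
Feeding a measured chaotic sequence into the generator in place of pseudorandom latent variables can be sketched as below; the standardization step is an assumption made only to roughly match a standard normal prior.

```python
import numpy as np

def chaotic_latent(chaos_series, batch_size, latent_dim):
    # Take enough consecutive chaotic samples to fill a batch of latent vectors.
    needed = batch_size * latent_dim
    segment = np.asarray(chaos_series[:needed], dtype=np.float32)
    # Standardize so the chaotic samples roughly match the usual N(0, 1) prior.
    segment = (segment - segment.mean()) / (segment.std() + 1e-8)
    return segment.reshape(batch_size, latent_dim)
```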


2021 ◽  
pp. 1-10
Author(s):  
Lei Chen ◽  
Jun Han ◽  
Feng Tian

Fusing infrared (IR) and visible images has many advantages and can be applied to applications such as target detection and recognition. Colors can give more accurate and distinct features, but the low resolution and low contrast of fused images make this a challenging task. In this paper, we propose a method based on parallel generative adversarial networks (GANs) to address this challenge. We use the IR image, the visible image, and the fused image as the ground truth for the 'L', 'a' and 'b' channels of the Lab color model. Through the parallel GANs, we obtain the Lab data, which can be converted to an RGB image. We adopt the TNO and RoadScene data sets to verify our method and compare it with three other deep learning (DL)-based methods on five objective evaluation parameters. It is demonstrated that the proposed approach achieves better performance than state-of-the-art methods.
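
The final assembly step implied by the parallel-GAN design can be sketched as stacking the three generated channels and converting Lab to RGB; the assumed value ranges and the use of scikit-image are illustrative choices, not the paper's pipeline.

```python
import numpy as np
from skimage.color import lab2rgb

def assemble_rgb(l_channel, a_channel, b_channel):
    # Scale network outputs (assumed L in [0, 1], a/b in [-1, 1]) to Lab ranges.
    lab = np.stack([l_channel * 100.0,
                    a_channel * 127.0,
                    b_channel * 127.0], axis=-1)
    return lab2rgb(lab)
```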


2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Wafaa Adnan Alsaggaf ◽  
Irfan Mehmood ◽  
Enas Fawai Khairullah ◽  
Samar Alhuraiji ◽  
Maha Farouk S. Sabir ◽  
...  

Surveillance remains an important research area, and it has many applications. Smart surveillance requires a high level of accuracy even when persons are uncooperative. Gait recognition is the study of recognizing people by the way they walk, even when they are unwilling to cooperate. It is another form of behavioral biometric system in which unique attributes of an individual's gait are analyzed to determine their identity. On the other hand, one big limitation of gait recognition systems is uncooperative environments, in which both gallery and probe sets are captured under different and unknown walking conditions. In order to tackle this problem, we propose a deep learning-based method that is trained on individuals under the normal walking condition; to deal with an uncooperative environment and recognize individuals under any dynamic walking condition, a cycle-consistent generative adversarial network is used. This method translates a gait energy image (GEI) disturbed by different covariate factors into a normal GEI. It works in an unsupervised manner: during training, the GEIs of each individual disturbed by different covariate factors act as the source domain, while the GEIs under normal walking conditions are the target domain to which translation is required. The cycle-consistent GAN automatically finds an individual's pair with the help of the cycle loss function and generates the required GEI, which is then tested by the CNN model to predict the person's ID. The proposed system is evaluated on a publicly available data set named CASIA-B, and it achieves excellent results. Moreover, this system can be implemented in sensitive areas, like banks, seminar halls (events), airports, embassies, shopping malls, police stations, military areas, and other public service areas for security purposes.
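
The cycle-consistency term central to this kind of unpaired translation can be sketched as follows; the network names and loss weight are illustrative, not the paper's exact configuration.

```python
import torch

def cycle_loss(gei_a, gei_b, gen_ab, gen_ba, weight=10.0):
    recovered_a = gen_ba(gen_ab(gei_a))  # disturbed -> normal -> disturbed
    recovered_b = gen_ab(gen_ba(gei_b))  # normal -> disturbed -> normal
    loss = torch.mean(torch.abs(recovered_a - gei_a)) + \
           torch.mean(torch.abs(recovered_b - gei_b))
    return weight * loss
```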


Author(s):  
Tao Zhang ◽  
Long Yu ◽  
Shengwei Tian

In this paper, we present an approach for cartoonizing real-world close-up images of human faces. We use a generative adversarial network combined with an attention mechanism, treating real-world face pictures and cartoon-style images as unpaired data sets. At present, image-to-image translation models can already transfer style and content successfully. However, some problems remain in the task of cartoonizing human faces: a human face has many details, those details are easily lost once the image is translated, and the quality of the image generated by the model is deficient. To deal with these problems, this paper proposes a new generative adversarial network combined with an attention mechanism. The channel attention mechanism is embedded between the up-sampling and down-sampling layers of the generator network, conveying the complete details of the underlying information without increasing the complexity of the model. Comparisons of three indicators (FID, PSNR, and MSE) and of the size of the model parameters show that the proposed network avoids additional model complexity while achieving a good balance between style and content in the conversion task.
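
A channel attention block of the squeeze-and-excitation kind described above can be sketched in a few lines; the reduction ratio and placement details are assumptions rather than the paper's exact design.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # squeeze: global average pooling
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                    # per-channel weights in (0, 1)
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        weights = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * weights                   # reweight feature channels
```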


Author(s):  
Yubo Liu ◽  
Yihua Luo ◽  
Qiaoming Deng ◽  
Xuanxing Zhou

Abstract This paper aims to explore the idea and method of using deep learning with a small number of samples to realize campus layout generation. From the perspective of the architect, we construct two small-sample campus layout data sets through manual screening guided by the preferences of specific architects. These data sets are used to train a Pix2Pix model to automatically generate campus layouts conditioned on a given campus boundary and surrounding roads. Through the analysis of the experimental results, this paper finds that, provided the collected samples are effectively screened, deep learning with even a small data set can achieve a good result.
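
A Pix2Pix-style generator objective of the kind used here pairs an adversarial term with an L1 reconstruction term against the screened layouts; the sketch below is illustrative, with the common default weight and discriminator interface assumed.

```python
import torch
import torch.nn.functional as F

def pix2pix_generator_loss(generator, discriminator, condition, target, l1_weight=100.0):
    fake_layout = generator(condition)
    pred = discriminator(condition, fake_layout)
    adv = F.binary_cross_entropy_with_logits(pred, torch.ones_like(pred))
    l1 = F.l1_loss(fake_layout, target)  # match the screened ground-truth layout
    return adv + l1_weight * l1
```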

