High-Quality Sonar Image Generation Algorithm Based on Generative Adversarial Networks

Author(s):  
Zhengyang Wang ◽  
Qingchang Guo ◽  
Min Lei ◽  
Shuxiang Guo ◽  
Xiufen Ye
2020 ◽  
Vol 8 (6) ◽  
pp. 5312-5316

Generative Adversarial Networks (GANs) apply deep neural networks to generative modeling. Neural style transfer and anime facial character generation have previously been attempted with GAN methods, but the results were not promising. In this work, image preprocessing is applied to the datasets alongside the training of the GAN, and the problem of generating specific images is addressed by using a clean, problem-specific dataset for anime facial character generation. The model is built empirically from convolutional neural networks and GANs. Neural style transfer and automatic anime characters are generated at high resolution, and the model tackles earlier limitations by progressively increasing the resolution of both the generated images and the structural conditions during training. The model can be used to create unique anime characters, to provide inspiration for artists and graphic designers, or to power style-transfer filters in popular apps such as Snapchat. Evaluation and result analysis show that the model is stable and produces high-quality images. A minimal sketch of the kind of adversarial training step described here is given below.
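
The following is a minimal sketch, in PyTorch, of one adversarial training step for a DCGAN-style generator and discriminator of the kind the abstract describes. The layer sizes, the 64x64 output resolution, the hyperparameters, and the dummy data batch are illustrative assumptions, not the authors' exact configuration (and the progressive resolution growing is not shown).

import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps a latent noise vector to a 64x64 RGB image via transposed convolutions."""
    def __init__(self, z_dim=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim, 256, 4, 1, 0), nn.BatchNorm2d(256), nn.ReLU(True),  # 1 -> 4
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(True),    # 4 -> 8
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(True),      # 8 -> 16
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.BatchNorm2d(32), nn.ReLU(True),       # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Tanh(),                                # 32 -> 64
        )

    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    """Convolutional classifier scoring images as real or generated."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, 2, 1), nn.LeakyReLU(0.2, True),     # 64 -> 32
            nn.Conv2d(64, 128, 4, 2, 1), nn.LeakyReLU(0.2, True),   # 32 -> 16
            nn.Conv2d(128, 256, 4, 2, 1), nn.LeakyReLU(0.2, True),  # 16 -> 8
            nn.Conv2d(256, 1, 8),                                   # 8 -> single logit
        )

    def forward(self, x):
        return self.net(x).view(-1)

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
bce = nn.BCEWithLogitsLoss()

real = torch.rand(16, 3, 64, 64) * 2 - 1   # dummy batch standing in for the curated anime-face dataset
z = torch.randn(16, 100, 1, 1)             # Gaussian latent codes

# Discriminator step: real images labeled 1, generated images labeled 0.
fake = G(z).detach()
loss_d = bce(D(real), torch.ones(16)) + bce(D(fake), torch.zeros(16))
opt_d.zero_grad()
loss_d.backward()
opt_d.step()

# Generator step: push the discriminator to label generated images as real.
loss_g = bce(D(G(z)), torch.ones(16))
opt_g.zero_grad()
loss_g.backward()
opt_g.step()

In a full training run these two steps alternate over the dataset; the sketch only shows the structure of one iteration.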


2021 ◽  
Vol 15 ◽  
Author(s):  
Jiasong Wu ◽  
Xiang Qiu ◽  
Jing Zhang ◽  
Fuzhi Wu ◽  
Youyong Kong ◽  
...  

Generative adversarial networks and variational autoencoders (VAEs) provide impressive image generation from Gaussian white noise, but both are difficult to train, since they need a generator (or encoder) and a discriminator (or decoder) to be trained simultaneously, which can easily lead to unstable training. To alleviate these synchronous training problems of generative adversarial networks (GANs) and VAEs, researchers recently proposed generative scattering networks (GSNs), which use wavelet scattering networks (ScatNets) as the encoder to obtain features (ScatNet embeddings) and convolutional neural networks (CNNs) as the decoder to generate an image. The advantage of GSNs is that the parameters of ScatNets do not need to be learned; the disadvantage is that the representational ability of ScatNets is slightly weaker than that of CNNs. In addition, the dimensionality reduction step based on principal component analysis (PCA) can easily lead to overfitting during GSN training and, therefore, degrade the quality of the generated images at test time. To further improve the quality of generated images while keeping the advantages of GSNs, this study proposes generative fractional scattering networks (GFRSNs), which use more expressive fractional wavelet scattering networks (FrScatNets) instead of ScatNets as the encoder to obtain features (FrScatNet embeddings) and use CNN decoders similar to those of GSNs to generate an image. Additionally, this study develops a new dimensionality reduction method, feature-map fusion (FMF), to replace PCA and better retain the information of the FrScatNets; it also discusses the effect of image fusion on the quality of the generated images. The experimental results obtained on the CIFAR-10 and CelebA datasets show that the proposed GFRSNs yield better generated images than the original GSNs on the testing datasets. Experimental comparisons of the proposed GFRSNs with deep convolutional GAN (DCGAN), progressive GAN (PGAN), and CycleGAN are also given. A sketch of the GSN-style encoder/decoder pipeline follows.
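
The following is a minimal sketch of the GSN-style pipeline the abstract describes: a fixed, parameter-free scattering encoder produces embeddings, and only a small CNN decoder is trained with a reconstruction loss. The use of the kymatio library for the scattering transform, the decoder architecture, and the dummy CIFAR-10-sized batch are illustrative assumptions, not the paper's exact networks; the fractional scattering variant and the FMF dimensionality reduction are not shown.

import torch
import torch.nn as nn
from kymatio.torch import Scattering2D  # assumed available via `pip install kymatio`

class ScatteringDecoder(nn.Module):
    """CNN decoder mapping scattering embeddings back to a 32x32 RGB image."""
    def __init__(self, in_channels):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(in_channels, 128, 4, stride=2, padding=1),  # 8 -> 16
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),           # 16 -> 32
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, 3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(z)

# Fixed encoder: the scattering transform has no learnable parameters,
# so only the decoder is optimized (the key property of GSNs).
scattering = Scattering2D(J=2, shape=(32, 32))   # CIFAR-10-sized inputs

x = torch.rand(8, 3, 32, 32)                     # dummy batch standing in for CIFAR-10
with torch.no_grad():
    s = scattering(x)                            # (B, 3, K, 8, 8) scattering coefficients
    s = s.flatten(1, 2)                          # merge channel and path axes -> (B, 3*K, 8, 8)

decoder = ScatteringDecoder(in_channels=s.shape[1])
optimizer = torch.optim.Adam(decoder.parameters(), lr=1e-3)
criterion = nn.MSELoss()                         # pixel-level reconstruction loss

optimizer.zero_grad()
recon = decoder(s)
loss = criterion(recon, x)
loss.backward()
optimizer.step()

Because the encoder is fixed, there is no generator/discriminator (or encoder/decoder) pair to balance, which is the training-stability argument made for GSNs; the GFRSN variant would swap the scattering transform for a fractional one and replace PCA with feature-map fusion before decoding.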


IEEE Access ◽  
2021 ◽  
Vol 9 ◽  
pp. 1250-1260
Author(s):  
Muhammad Zeeshan Khan ◽  
Saira Jabeen ◽  
Muhammad Usman Ghani Khan ◽  
Tanzila Saba ◽  
Asim Rehmat ◽  
...  
