image generation
Recently Published Documents


TOTAL DOCUMENTS

1309
(FIVE YEARS 515)

H-INDEX

41
(FIVE YEARS 10)

2022 ◽  
Vol 74 ◽  
pp. 103491
Author(s):  
Hemin Ali Qadir ◽  
Ilangko Balasingham ◽  
Younghak Shin

Author(s):  
Jiang Chang ◽  
Shengqi Guan

To address dataset expansion in deep learning tasks such as image classification, this paper proposes an image generation model called Class Highlight Generative Adversarial Network (CH-GAN). To highlight image categories, accelerate model convergence, and generate true-to-life images with clear categories, the image category labels are first deconvolved and integrated into the generator through [Formula: see text] convolution. Second, a novel discriminator is designed that judges not only the authenticity of an image but also its category. Finally, to classify strip steel defects quickly and accurately, the lightweight image classification network GhostNet is improved by modifying the number of network layers and channels and adding SE modules, and is trained on the dataset expanded by CH-GAN. In comparative experiments, the average FID of CH-GAN is 7.59, and the accuracy of the improved GhostNet is 95.67% with 0.19 M parameters. The experimental results demonstrate the effectiveness and superiority of the proposed methods for the generation and classification of strip steel defect images.
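The label-injection step described above can be illustrated with a minimal sketch. This is not the paper's implementation: the abstract uses a deconvolution of the label, while here the label is simply broadcast into spatial maps and concatenated with the generator's noise features, which conveys the same conditioning idea.

```python
import numpy as np

def condition_generator_input(noise, label, num_classes):
    """Broadcast a one-hot class label into spatial feature maps and
    concatenate them with the noise feature maps (a simplified stand-in
    for CH-GAN's deconvolution-based label injection)."""
    _, h, w = noise.shape
    one_hot = np.zeros(num_classes)
    one_hot[label] = 1.0
    # tile each class channel across the full spatial grid
    label_maps = np.tile(one_hot[:, None, None], (1, h, w))
    return np.concatenate([noise, label_maps], axis=0)

z = np.random.randn(64, 4, 4)                       # noise feature maps
x = condition_generator_input(z, label=2, num_classes=5)
print(x.shape)  # (69, 4, 4): 64 noise channels + 5 label channels
```

The discriminator side of CH-GAN would mirror this with two heads, one scoring authenticity and one predicting the class.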


2022 ◽  
Author(s):  
Shril Mody ◽  
Janvi Thakkar

IEEE Access ◽  
2022 ◽  
pp. 1-1
Author(s):  
Mingle Xu ◽  
Yongchae Jeong ◽  
Dong Sun Park ◽  
Sook Yoon

2022 ◽  
pp. 191-219
Author(s):  
Gang Hua ◽  
Dongdong Chen

Author(s):  
Yawen Liu ◽  
Haijun Niu ◽  
Pengling Ren ◽  
Jialiang Ren ◽  
Xuan Wei ◽  
...  

Abstract Objective: The generation of quantification maps and weighted images in synthetic MRI is based on complex fitting equations, which lengthens image generation times. The objective of this study is to evaluate the feasibility of a deep learning method for fast reconstruction of synthetic MRI. Approach: A total of 44 healthy subjects were recruited and randomly divided into a training set (30 subjects) and a testing set (14 subjects). A multiple-dynamic, multiple-echo (MDME) sequence was used to acquire synthetic MRI images. Quantification maps (T1, T2, and proton density (PD) maps) and weighted (T1W, T2W, and T2W FLAIR) images were created with MAGiC software and used as the ground truth for the deep learning (DL) model. An improved multichannel U-Net was trained to generate quantification maps and weighted images from the raw synthetic MRI data (8 module images). Quantitative evaluation was performed on the quantification maps; both quantitative metrics and qualitative evaluation were used for the weighted images. Nonparametric Wilcoxon signed-rank tests were performed. Main results: The error between the generated quantification maps and the reference maps was small. For weighted images, no significant difference in overall image quality or SNR was identified between DL images and synthetic images. Notably, the DL images achieved improved contrast in T2W images, and fewer artifacts were present on DL images than on synthetic T2W FLAIR images. Significance: The DL algorithm provides a promising method for image generation in synthetic MRI, in which every step of the calculation can be optimized and accelerated, thereby simplifying the synthetic MRI workflow.
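For context on what the U-Net replaces: conventional synthetic MRI derives each weighted image per voxel from the fitted quantitative maps via a signal equation. A toy sketch using the standard spin-echo equation is shown below; this is a simplification (FLAIR additionally involves an inversion-recovery term) and the TR/TE values are illustrative, not taken from the study.

```python
import numpy as np

def synthesize_weighted(pd, t1, t2, tr, te):
    """Toy spin-echo signal equation,
        S = PD * (1 - exp(-TR/T1)) * exp(-TE/T2),
    the kind of per-voxel fitting-based synthesis that the DL model
    learns to bypass. pd/t1/t2 are voxel maps; tr/te are in ms."""
    return pd * (1.0 - np.exp(-tr / t1)) * np.exp(-te / t2)

# illustrative tissue-like values: T1 = 1000 ms, T2 = 100 ms
pd = np.ones((2, 2))
t1 = np.full((2, 2), 1000.0)
t2 = np.full((2, 2), 100.0)
t2w = synthesize_weighted(pd, t1, t2, tr=4000.0, te=100.0)  # long TR/TE -> T2 weighting
```

A long TR suppresses the T1 term and a long TE emphasizes T2 decay, which is why this parameter choice yields a T2-weighted contrast.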


2021 ◽  
Vol 2021 ◽  
pp. 1-6
Author(s):  
Peng Liu ◽  
Fuyu Li ◽  
Shanshan Yuan ◽  
Wanyi Li

Object detection in thermal images is an important computer vision task with many applications such as unmanned vehicles, robotics, surveillance, and night vision. Deep learning-based detectors have achieved major progress but usually require large amounts of labelled training data. However, labelled data for object detection in thermal images is scarce and expensive to collect. How to leverage the large number of labelled visible images and adapt them to the thermal image domain remains an open problem. This paper proposes an unsupervised image-generation-enhanced adaptation method for object detection in thermal images. To reduce the gap between the visible and thermal domains, the proposed method generates simulated fake thermal images that are similar to the target images while preserving the annotation information of the visible source domain. The image generation comprises a CycleGAN-based image-to-image translation and an intensity inversion transformation. The generated fake thermal images are used as a renewed source domain, and the off-the-shelf domain-adaptive Faster R-CNN is then employed to reduce the gap between this intermediate domain and the thermal target domain. Experiments demonstrate the effectiveness and superiority of the proposed method.
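The intensity inversion step can be sketched very simply. The abstract does not specify the exact transformation, so the version below is an assumption: a plain negative-image inversion on 8-bit data, motivated by the fact that hot objects (e.g., pedestrians) appear bright in thermal imagery while their visible-light renderings may not.

```python
import numpy as np

def intensity_inversion(img):
    """Invert 8-bit pixel intensities: I'(x, y) = 255 - I(x, y).
    Applied after CycleGAN translation, this pushes the intensity
    statistics of translated images closer to the thermal target
    domain while leaving box annotations untouched."""
    return 255 - img.astype(np.int32)

img = np.array([[0, 255], [100, 200]], dtype=np.uint8)
inv = intensity_inversion(img)  # [[255, 0], [155, 55]]
```

Because the transformation is purely photometric, every bounding-box label from the visible source domain remains valid on the generated image.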


2021 ◽  
Vol 7 ◽  
pp. e761
Author(s):  
Yuling He ◽  
Yingding Zhao ◽  
Wenji Yang ◽  
Yilu Xu

Due to the complex entanglements of non-rigid deformation, generating person images from a source pose to a target pose is a challenging task. In this paper, we present a novel framework that generates person images with shape consistency and appearance consistency. The proposed framework leverages a graph network to infer the global relationship between the source pose and the target pose for better pose transfer. Moreover, we decompose the source image into different attributes (e.g., hair, clothes, pants, and shoes) and combine them with the pose coding to generate a more realistic person image. We adopt an alternating updating strategy to promote mutual guidance between the pose and appearance modules for better person image quality. Qualitative and quantitative experiments were carried out on the DeepFashion dataset and verify the efficacy of the presented framework.
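The attribute decomposition step can be sketched as masking the source image with a semantic segmentation map, one layer per attribute. This is an illustrative assumption, not the paper's architecture: the attribute names and the use of a precomputed segmentation map are hypothetical stand-ins for whatever parser the framework employs.

```python
import numpy as np

def decompose_attributes(image, seg, labels):
    """Split an H x W x 3 source image into per-attribute layers
    (hair, clothes, ...) using an H x W integer segmentation map
    whose value k selects the k-th attribute in `labels`."""
    return {
        name: image * (seg == k)[..., None]  # zero out pixels of other attributes
        for k, name in enumerate(labels)
    }

image = np.ones((2, 2, 3))
seg = np.array([[0, 1], [1, 0]])            # hypothetical parser output
parts = decompose_attributes(image, seg, ["hair", "clothes"])
```

Each attribute layer would then be encoded separately and fused with the target-pose coding before decoding the final person image.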


2021 ◽  
Author(s):  
Jialu Huang ◽  
Ying Huang ◽  
Yan-ting Lin ◽  
Zi-yang Liu ◽  
Yang Lin ◽  
...  

Author(s):  
Zhao Qiu ◽  
Lin Yuan ◽  
Lihao Liu ◽  
Zheng Yuan ◽  
Tao Chen ◽  
...  

An image generation and completion model fills in the missing region of a damaged image using information from the image itself or from an image library, so that the repaired image looks natural and is difficult to distinguish from an undamaged one. The difficulty of image completion lies in producing semantically plausible content with clear, realistic texture. In this paper, a Wasserstein generative adversarial network with dilated convolution and deformable convolution (DDC-WGAN) is proposed for image completion. A deformable offset is added on top of dilated convolution, which enlarges the receptive field and provides a more stable representation of geometric deformation. Experiments show that the proposed DDC-WGAN outperforms traditional generative adversarial completion networks in image generation and completion.
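The receptive-field enlargement from dilation can be shown with a naive numpy implementation: the kernel taps are spaced `dilation` pixels apart, so a 2x2 kernel with dilation 2 covers a 3x3 window without extra parameters. This sketch covers only the dilation half of DDC-WGAN; deformable convolution would additionally add a learned offset to each tap position.

```python
import numpy as np

def dilated_conv2d(x, kernel, dilation=2):
    """Naive 2D dilated (cross-correlation) convolution, valid padding.
    Effective kernel extent is (k - 1) * dilation + 1 per axis."""
    kh, kw = kernel.shape
    eff_h = (kh - 1) * dilation + 1
    eff_w = (kw - 1) * dilation + 1
    H, W = x.shape
    out = np.zeros((H - eff_h + 1, W - eff_w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # sample the input at strided tap positions
            patch = x[i:i + eff_h:dilation, j:j + eff_w:dilation]
            out[i, j] = np.sum(patch * kernel)
    return out

x = np.arange(25.0).reshape(5, 5)
out = dilated_conv2d(x, np.ones((2, 2)), dilation=2)  # 3x3 output
```

With `dilation=1` this reduces to an ordinary convolution; larger dilations let the completion network aggregate context from far outside the missing region, which is why they help in-painting.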

