Image Translation Method for Game Character Sprite Drawing

2022 ◽  
Vol 130 (3) ◽  
pp. 1-16
Author(s):  
Jong-In Choi ◽  
Soo-Kyun Kim ◽  
Shin-Jin Kang
IEEE Access ◽  
2020 ◽  
Vol 8 ◽  
pp. 60338-60343 ◽  
Author(s):  
Yu Li ◽  
Randi Fu ◽  
Xiangchao Meng ◽  
Wei Jin ◽  
Feng Shao

2020 ◽  
Vol 18 (8) ◽  
pp. 9-17
Author(s):  
Sung-Woon Jung ◽  
Hyuk-Ju Kwon ◽  
Young-Choon Kim ◽  
Sang-Ho Ahn ◽  
Sung-Hak Lee

Complexity ◽  
2020 ◽  
Vol 2020 ◽  
pp. 1-15
Author(s):  
Xu Yin ◽  
Yan Li ◽  
Byeong-Seok Shin

The image-to-image translation method aims to learn inter-domain mappings from paired/unpaired data. Although this technique has been widely used for visual prediction tasks (such as classification and image segmentation) and has achieved great results, existing approaches still fail to perform flexible translations when attempting to learn different mappings, especially for images containing multiple instances. To tackle this problem, we propose a generative framework, DAGAN (Domain-Aware Generative Adversarial Network), that enables domains to learn diverse mapping relationships. We assumed that an image is composed of a background domain and an instance domain, and then fed them into different translation networks. Lastly, we integrated the translated domains into a complete image with smoothed labels to maintain realism. We examined the instance-aware framework on datasets generated by YOLO and confirmed that it is capable of generating images of equal or better diversity compared with current translation models.
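The final compositing step the abstract describes can be illustrated with a minimal numpy sketch. Everything here is an assumption for illustration, not the paper's implementation: the two input arrays stand in for the outputs of the separate background and instance translator networks, the binary mask stands in for a YOLO-derived instance mask, and a simple box blur stands in for whatever label-smoothing the authors actually use.

```python
import numpy as np

def composite_domains(background, instance, mask, smooth_sigma=1.0):
    """Blend a translated instance region into a translated background.

    background, instance: (H, W, 3) float arrays in [0, 1] -- hypothetical
    outputs of the two per-domain translation networks.
    mask: (H, W) 0/1 array marking instance pixels (e.g. from YOLO boxes).
    The hard mask is softened so the seam between the two translated
    domains stays realistic, loosely mirroring the "smoothed labels" idea.
    """
    # Soften the binary mask with a box blur (stand-in for a Gaussian).
    k = max(1, int(2 * smooth_sigma) | 1)  # odd kernel size
    pad = k // 2
    padded = np.pad(mask.astype(float), pad, mode="edge")
    soft = np.zeros(mask.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            soft += padded[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    soft /= k * k
    # Alpha-composite the two translated domains with the soft mask.
    return soft[..., None] * instance + (1 - soft[..., None]) * background
```

The soft alpha mask is what keeps the instance/background boundary from looking pasted-in; a hard 0/1 mask would produce a visible seam between the two translators' outputs.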


2020 ◽  
Vol 891 (1) ◽  
pp. L4 ◽  
Author(s):  
Eunsu Park ◽  
Yong-Jae Moon ◽  
Daye Lim ◽  
Harim Lee

Electronics ◽  
2021 ◽  
Vol 10 (24) ◽  
pp. 3066
Author(s):  
Do-Yeon Hwang ◽  
Seok-Hwan Choi ◽  
Jinmyeong Shin ◽  
Moonkyu Kim ◽  
Yoon-Ho Choi

In this paper, we propose a new deep learning-based image translation method to predict and generate post-surgery images from images taken before hair transplant surgery. Because existing image translation models use a naive strategy that trains the whole distribution of the translation, models that take the original image as input convert not only the hair transplant surgery region, which is the region of interest (ROI) for image translation, but also the other image regions outside the ROI. To solve this problem, we propose a novel generative adversarial network (GAN)-based ROI image translation method that converts only the ROI and leaves the non-ROI regions unchanged. Specifically, by performing image translation and image segmentation independently, the proposed method generates predictive images from the distribution of images after hair transplant surgery and specifies the ROI to be used in the generated images. In addition, by applying an ensemble method to image segmentation, we obtain a more robust method that compensates for the shortcomings of the individual segmentation models. From experimental results on a real medical image dataset (1394 images before and 896 images after hair transplantation) used to train the GAN model, we show that the proposed GAN-based ROI image translation method outperformed the other GAN-based image translation methods by, on average, 23% in SSIM (Structural Similarity Index Measure), 452% in IoU (Intersection over Union), and 42% in FID (Fréchet Inception Distance). Furthermore, the proposed ensemble method not only improves ROI detection performance but also shows consistent performance in generating better predictive images from preoperative images taken from diverse angles.
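The two ideas this abstract combines, an ensemble of segmentation masks and ROI-restricted compositing, can be sketched in a few lines of numpy. This is a hedged illustration under stated assumptions: the masks are hypothetical outputs of unspecified segmentation models, the ensemble is a plain majority vote (the paper's ensemble rule may differ), and the "generated" image stands in for the GAN output.

```python
import numpy as np

def ensemble_roi_mask(masks):
    """Majority-vote ensemble over several binary segmentation masks.

    masks: list of (H, W) 0/1 arrays from different (hypothetical)
    segmentation models; voting suppresses errors made by any single
    model, in the spirit of the paper's ensemble step.
    """
    stacked = np.stack([m.astype(int) for m in masks])
    return (stacked.sum(axis=0) * 2 > len(masks)).astype(int)

def roi_translate(original, generated, roi_mask):
    """Take ROI pixels from the GAN-generated post-surgery image and
    keep the non-ROI pixels of the original preoperative image."""
    m = roi_mask[..., None].astype(float)
    return m * generated + (1 - m) * original
```

Restricting the composite to the voted ROI is what prevents the translator from altering regions outside the surgical area, which is the failure mode of whole-image translation that the abstract describes.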


2019 ◽  
Vol 9 (22) ◽  
pp. 4780 ◽  
Author(s):  
Jaechang Yoo ◽  
Heesong Eom ◽  
Yong Suk Choi

Recently, several studies have focused on image-to-image translation. However, the quality of the translation results is still lacking in certain respects. We propose a new image-to-image translation method that minimizes such shortcomings using auto-encoders and auto-decoders. The method pre-trains an auto-encoder/decoder pair for each of the source and target image domains, cross-connects the two pairs, and adds a feature mapping layer between them. Our method is quite simple and straightforward to adopt but very effective in practice, and we experimentally demonstrate that it can significantly enhance the quality of image-to-image translation. We used the well-known cityscapes, horse2zebra, cat2dog, maps, summer2winter, and night2day datasets. Our method shows qualitative and quantitative improvements over existing models.
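The cross-connection idea can be illustrated with toy linear auto-encoders: encode with the source domain's encoder, pass the latent code through a feature mapping layer, and decode with the target domain's decoder. The class names, dimensions, and random weights below are illustrative assumptions; the paper's networks are learned, nonlinear, and far larger.

```python
import numpy as np

rng = np.random.default_rng(0)

class LinearAE:
    """Toy linear auto-encoder for one image domain. The weights here are
    random placeholders; in the paper each pair is pre-trained on its own
    domain before the pairs are cross-connected."""
    def __init__(self, dim, code):
        self.enc = rng.normal(size=(code, dim)) / np.sqrt(dim)
        self.dec = rng.normal(size=(dim, code)) / np.sqrt(code)

    def encode(self, x):
        return self.enc @ x

    def decode(self, z):
        return self.dec @ z

def cross_translate(x, ae_src, ae_tgt, feat_map):
    """Translate x to the target domain: source encoder -> feature
    mapping layer (here a plain matrix) -> target decoder."""
    z = ae_src.encode(x)        # source-domain latent code
    z_mapped = feat_map @ z     # bridge between the two latent spaces
    return ae_tgt.decode(z_mapped)
```

The feature mapping layer is the only component that has to learn the inter-domain relationship; both encoder/decoder pairs can be pre-trained purely within their own domains, which is what keeps the approach simple to adopt.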


Author(s):  
Neil Rowlands ◽  
Jeff Price ◽  
Michael Kersker ◽  
Seichi Suzuki ◽  
Steve Young ◽  
...  

Three-dimensional (3D) microstructure visualization on the electron microscope requires that the sample be tilted to different positions to collect a series of projections. This tilting should be performed rapidly for on-line stereo viewing and precisely for off-line tomographic reconstruction. Usually a projection series is collected using mechanical stage tilt alone. The stereo pairs must be viewed off-line, and the 60 to 120 tomographic projections must be aligned with fiducial markers or digital correlation methods. The delay in viewing stereo pairs and the alignment problems in tomographic reconstruction could be eliminated or improved by tilting the beam, if such tilt could be accomplished without image translation. A microscope capable of beam tilt with simultaneous image shift to eliminate tilt-induced translation has been investigated for 3D imaging of thick (1 μm) biologic specimens. By tilting the beam above and through the specimen and bringing it back below the specimen, a brightfield image with a projection angle corresponding to the beam tilt angle can be recorded (Fig. 1a).
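The compensation described here, an image shift applied simultaneously with the beam tilt, can be sketched with simplified thin-lens geometry: a beam tilted by angle θ about a pivot a height h above the image plane displaces the image laterally by roughly h·tan θ, so the deflectors must apply the opposite shift. This is an illustrative first-order model, not the microscope's actual deflector calibration, and the function name and units are assumptions.

```python
import math

def compensating_shift(tilt_deg, pivot_height_um):
    """Lateral image shift (in μm) needed to cancel the translation
    induced by tilting the beam by tilt_deg about a pivot point
    pivot_height_um above the image plane.

    First-order geometry only: real instruments calibrate the
    tilt/shift coupling empirically rather than from a single formula.
    """
    return -pivot_height_um * math.tan(math.radians(tilt_deg))
```

With such a correction applied per tilt step, each projection stays registered as it is acquired, which is why the fiducial-marker or correlation alignment step could be reduced or eliminated.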

