Image Localized Style Transfer to Design Clothes Based on CNN and Interactive Segmentation

2020 · Vol 2020 · pp. 1-12
Author(s): Hanying Wang, Haitao Xiong, Yuanyuan Cai

In recent years, image style transfer has been greatly improved by deep learning. However, when applied directly to clothing style transfer, current methods do not let users control the local transfer position within an image, such as separating a specific T-shirt or pair of trousers from a figure, and cannot perfectly preserve the clothing shape. This paper therefore proposes an interactive, localized image style transfer method designed for clothes. We introduce an additional image, called the outline image, which is extracted from the content image by an interactive algorithm; the interaction consists simply of dragging a rectangle around the desired clothing. We then introduce an outline loss function based on the distance transform of the outline image, which preserves the clothing shape. To smooth and denoise the boundary region, total variation regularization is employed. The proposed method constrains the new style to be generated only in the desired clothing region rather than across the whole image, background included, so the original clothing shape is preserved in the generated images. Experimental results show impressive generated clothing images and demonstrate that this is a promising approach to designing clothes.
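The abstract does not spell out the outline loss or the total-variation term; the following is a minimal Python sketch of how a distance-transform outline penalty and a TV regularizer could look. All names and the exact formulation are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch: outline loss via distance transform + total variation.
import numpy as np
from scipy.ndimage import distance_transform_edt

def outline_loss(generated_mask: np.ndarray, outline: np.ndarray) -> float:
    """Penalize style generated outside the interactively extracted outline.

    generated_mask: binary map (H x W) of where new style was generated.
    outline:        binary clothing mask (H x W) from the rectangle interaction.
    """
    # Each pixel's Euclidean distance to the nearest clothing pixel;
    # zero inside the outline, growing with distance outside it.
    dist = distance_transform_edt(1 - outline)
    # Stylized pixels far outside the outline incur a large penalty.
    return float((generated_mask * dist).sum() / max(generated_mask.sum(), 1))

def total_variation(img: np.ndarray) -> float:
    """Anisotropic total variation, used to smooth the boundary region."""
    dy = np.abs(np.diff(img, axis=0)).sum()
    dx = np.abs(np.diff(img, axis=1)).sum()
    return float(dx + dy)
```

In a full pipeline these terms would be weighted and added to the usual content and style losses; the weights here are left to the reader since the abstract does not report them.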

2019 · Vol 1314 · pp. 012191
Author(s): Jie Wu, Jin Duan, Jinqiang Yu, Haodong Shi, Yingchao Li

Author(s): Chien-Yu Lu, Min-Xin Xue, Chia-Che Chang, Che-Rung Lee, Li Su

Style transfer of polyphonic music recordings is a challenging task: it requires modeling diverse, imaginative, and plausible music pieces in a style different from the original. To achieve this, learning stable multi-modal representations of both domain-variant (i.e., style) and domain-invariant (i.e., content) information of music in an unsupervised manner is critical. In this paper, we propose an unsupervised music style transfer method that requires no parallel data. To characterize the multi-modal distribution of music pieces, we employ the Multi-modal Unsupervised Image-to-Image Translation (MUNIT) framework, which allows diverse outputs to be generated from the learned latent distributions representing contents and styles. Moreover, to better capture the granularity of sound, such as the perceptual dimensions of timbre and the nuances of instrument-specific performance, cognitively plausible features including mel-frequency cepstral coefficients (MFCC), spectral difference, and spectral envelope are combined with the widely used mel-spectrogram into a timbre-enhanced multi-channel input representation. A Relativistic average Generative Adversarial Network (RaGAN) is also utilized to achieve fast convergence and high stability. We conduct experiments on bilateral style transfer tasks among three genres: piano solo, guitar solo, and string quartet. Results demonstrate the advantages of the proposed method in music style transfer, with improved sound quality and greater user control over the output.
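As a rough illustration of the timbre-enhanced multi-channel input described above, the sketch below stacks a mel-spectrogram with MFCC, spectral-difference (flux), and spectral-envelope channels using librosa. The parameter values and the envelope proxy (a low-order cepstral reconstruction) are assumptions, not the paper's exact feature pipeline.

```python
# Hedged sketch: build a multi-channel timbre-enhanced input representation.
import numpy as np
import librosa

def timbre_enhanced_input(path: str, sr: int = 22050, n_mels: int = 128):
    y, sr = librosa.load(path, sr=sr)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    mel_db = librosa.power_to_db(mel)                       # base channel
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mels)  # timbre channel
    # Spectral difference (flux): frame-to-frame change of the mel spectrum,
    # padded by one frame so all channels share the same shape.
    flux = np.pad(np.diff(mel_db, axis=1), ((0, 0), (1, 0)))
    # Crude spectral-envelope proxy: reconstruct a smoothed mel spectrum
    # from the low-order cepstral coefficients (an assumption, not the paper's).
    env = librosa.feature.inverse.mfcc_to_mel(mfcc[:13], n_mels=n_mels)
    env_db = librosa.power_to_db(env)
    # Stack into a (channels, bins, frames) tensor for the MUNIT-style encoder.
    return np.stack([mel_db, mfcc, flux, env_db])
```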


2020 · pp. paper2-1-paper2-11
Author(s): Victor Kitov, Konstantin Kozlovtsev, Margarita Mishustina

Style transfer is the process of rendering one image, with some content, in the style of another image representing the style. Liu et al. (2017) show that the traditional style transfer methods of Gatys et al. (2016) and Johnson et al. (2016) fail to reproduce the depth of the content image, which is critical for human perception, and they suggest preserving the depth map via an additional regularizer in the optimized loss function. However, these traditional methods are either computationally inefficient or require training a separate neural network for each style. The AdaIN method of Huang et al. (2017) transfers an arbitrary style efficiently without training a separate model, but it cannot reproduce the depth map of the content image. We propose an extension to this method that preserves the depth map by applying spatially variable stylization strength. Qualitative analysis and the results of a user evaluation study indicate that the proposed method provides better stylizations than the original AdaIN style transfer method.
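A minimal PyTorch sketch of the idea follows: standard AdaIN followed by a per-pixel blend whose strength is driven by the depth map. The specific blending rule and parameter names are assumptions about what "variable stylization strength" could mean, not the authors' released code.

```python
# Hedged sketch: AdaIN with depth-driven, spatially varying stylization strength.
import torch
import torch.nn.functional as F

def adain(content_feat, style_feat, eps: float = 1e-5):
    """Standard AdaIN: align channel-wise mean/std of content to style."""
    c_mean = content_feat.mean(dim=(2, 3), keepdim=True)
    c_std = content_feat.std(dim=(2, 3), keepdim=True) + eps
    s_mean = style_feat.mean(dim=(2, 3), keepdim=True)
    s_std = style_feat.std(dim=(2, 3), keepdim=True) + eps
    return (content_feat - c_mean) / c_std * s_std + s_mean

def depth_aware_adain(content_feat, style_feat, depth,
                      alpha_near=0.3, alpha_far=1.0):
    """Blend stylized and content features per pixel according to depth.

    depth: (N, 1, H, W), normalized to [0, 1] with 1 = far; resized to the
    feature resolution. Near regions keep more content, preserving depth cues.
    """
    stylized = adain(content_feat, style_feat)
    d = F.interpolate(depth, size=content_feat.shape[2:],
                      mode='bilinear', align_corners=False)
    alpha = alpha_near + (alpha_far - alpha_near) * d  # per-pixel strength
    return alpha * stylized + (1 - alpha) * content_feat
```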


2021 · Vol 2021 · pp. 1-17
Author(s): Huaijun Wang, Dandan Du, Junhuai Li, Wenchao Ji, Lei Yu

Motion capture technology plays an important role in film, television, and animation production. To reduce the cost of data acquisition and to improve both the reuse rate of motion capture data and the quality of motion style transfer, the synthesis of human motion capture data has become a research hotspot in this field. In this paper, kinematic constraints (KC) and a cycle-consistency (CC) network are employed to study motion style transfer. First, a cycle-consistent adversarial network (CCycleGAN) is constructed: a motion style transfer network based on a convolutional autoencoder serves as the generator, and a cycle-consistency constraint is established between the generated motion and the content motion, improving the action consistency between the two and eliminating the lag in the generated motion. Then, kinematic constraints are introduced to regularize motion generation, addressing problems such as jitter and foot sliding in the style transfer results. Experimental results show that motion generated by the cycle-consistent style transfer method with kinematic constraints matches the target style more closely, improving the effect of motion style transfer.
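The abstract does not specify the kinematic constraints; below is a hedged PyTorch sketch of two commonly used ones, a bone-length consistency term (against jitter and limb stretching) and a foot-contact term (against sliding). Joint indices, the y-up axis convention, and the threshold are illustrative assumptions, not the paper's definitions.

```python
# Hedged sketch: two plausible kinematic-constraint losses for motion synthesis.
import torch

def bone_length_loss(motion, bones):
    """motion: (T, J, 3) joint positions; bones: list of (parent, child) pairs.

    Bone lengths should stay constant over time, so penalize their variance.
    """
    lengths = torch.stack([(motion[:, c] - motion[:, p]).norm(dim=-1)
                           for p, c in bones], dim=1)  # (T, num_bones)
    return lengths.var(dim=0).mean()

def foot_sliding_loss(motion, foot_joints, contact_height=0.05):
    """Penalize horizontal foot velocity while a foot is near the ground.

    Assumes a y-up coordinate system and meters; both are assumptions here.
    """
    feet = motion[:, foot_joints]                   # (T, F, 3)
    vel = feet[1:] - feet[:-1]                      # (T-1, F, 3)
    on_ground = (feet[:-1, :, 1] < contact_height).float()
    horiz_speed = vel[..., [0, 2]].norm(dim=-1)     # speed in the ground plane
    return (on_ground * horiz_speed).mean()
```

Such terms would be added, with tuned weights, to the adversarial and cycle-consistency losses of the CCycleGAN generator.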


2021 · Vol 33 (4) · pp. 1343
Author(s): Liming Huang, Ping Wang, Cheng-Fu Yang, Hsien-Wei Tseng

2021 · pp. 15-26
Author(s): Li Haisheng, Huang Huafeng, Xue Fan

2012 · Vol 182-183 · pp. 1065-1068
Author(s): Ji Ying Li, Jian Wu Dang

The traditional Live-Wire algorithm has difficulty distinguishing object edges of varying strength, and its execution speed is slow. To address these problems, an improved Live-Wire algorithm is proposed. It first applies anisotropic diffusion filtering to the image and constructs a new cost function; then, combined with a region-growing segmentation algorithm, it performs interactive segmentation of medical images. The improved algorithm avoids the shortcomings of the traditional Live-Wire algorithm, which is sensitive to noise and cannot effectively distinguish edges by strength; it also reduces the time and blindness of the dynamic-programming search for the optimal path, improving both the accuracy and the efficiency of image segmentation.
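The anisotropic diffusion filtering step is presumably of the classic Perona-Malik form; a minimal NumPy sketch is given below. The iteration count and conductance parameter are illustrative, and the periodic boundary handling via np.roll is a simplification of proper reflective borders.

```python
# Hedged sketch: Perona-Malik anisotropic diffusion, the standard edge-
# preserving smoother used before building a Live-Wire cost function.
import numpy as np

def anisotropic_diffusion(img: np.ndarray, n_iter=10, kappa=30.0, gamma=0.2):
    u = img.astype(np.float64)
    for _ in range(n_iter):
        # Finite differences toward the four axial neighbors.
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # Edge-stopping conductance: small across strong gradients, so edges
        # are preserved while flat, noisy regions are smoothed.
        cn, cs = np.exp(-(dn / kappa) ** 2), np.exp(-(ds / kappa) ** 2)
        ce, cw = np.exp(-(de / kappa) ** 2), np.exp(-(dw / kappa) ** 2)
        u += gamma * (cn * dn + cs * ds + ce * de + cw * dw)
    return u
```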

