Hybrid camouflage pattern generation using neural style transfer method

Author(s):  
Elaheh Daneshvar ◽  
Mohammad Amani Tehran ◽  
Yu‐Jin Zhang
2019 ◽  
Vol 1314 ◽  
pp. 012191
Author(s):  
Jie Wu ◽  
Jin Duan ◽  
Jinqiang Yu ◽  
Haodong Shi ◽  
Yingchao Li

Author(s):  
Chien-Yu Lu ◽  
Min-Xin Xue ◽  
Chia-Che Chang ◽  
Che-Rung Lee ◽  
Li Su

Style transfer of polyphonic music recordings is a challenging task: the model must produce diverse, imaginative, and plausible music pieces in a style different from the original. To achieve this, it is critical to learn stable multi-modal representations for both domain-variant (i.e., style) and domain-invariant (i.e., content) information of music in an unsupervised manner. In this paper, we propose an unsupervised music style transfer method that requires no parallel data. To characterize the multi-modal distribution of music pieces, we employ the Multi-modal Unsupervised Image-to-Image Translation (MUNIT) framework in the proposed system, which allows diverse outputs to be generated from the learned latent distributions representing content and style. Moreover, to better capture the granularity of sound, such as the perceptual dimensions of timbre and the nuances of instrument-specific performance, cognitively plausible features including mel-frequency cepstral coefficients (MFCC), spectral difference, and spectral envelope are combined with the widely used mel-spectrogram into a timbre-enhanced multi-channel input representation. The Relativistic average Generative Adversarial Network (RaGAN) is also utilized to achieve fast convergence and high stability. We conduct experiments on bilateral style transfer tasks among three different genres, namely piano solo, guitar solo, and string quartet. Results demonstrate the advantages of the proposed method in music style transfer, with improved sound quality and user control over the output.
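As a concrete illustration of the input representation described above, the sketch below assembles a multi-channel feature tensor from an audio file using librosa. The frame parameters, the envelope-smoothing choice, and the resizing of every feature onto a common grid are illustrative assumptions, not the authors' exact configuration.

```python
# A minimal sketch of a timbre-enhanced multi-channel input, assuming
# librosa for feature extraction. Channel ordering, frame parameters, and
# the resizing step are guesses, not the paper's exact setup.
import numpy as np
import librosa
from scipy.ndimage import uniform_filter1d, zoom

N_MELS, N_FFT, HOP = 128, 2048, 512  # assumed frame parameters

def timbre_enhanced_input(path):
    y, sr = librosa.load(path, sr=22050)
    # Channel 1: log mel-spectrogram, the widely used base representation.
    mel = librosa.power_to_db(
        librosa.feature.melspectrogram(y=y, sr=sr, n_fft=N_FFT,
                                       hop_length=HOP, n_mels=N_MELS))
    # Channel 2: MFCCs, a compact timbre descriptor; resized to the
    # mel-frequency grid so all channels share one shape.
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20, hop_length=HOP)
    mfcc = zoom(mfcc, (N_MELS / mfcc.shape[0], 1), order=1)
    # Channel 3: spectral difference (frame-to-frame flux), emphasizing onsets.
    diff = np.diff(mel, axis=1, prepend=mel[:, :1])
    # Channel 4: a smoothed spectral envelope, approximated here by averaging
    # the mel-spectrogram over neighboring frequency bins.
    env = uniform_filter1d(mel, size=15, axis=0)
    return np.stack([mel, mfcc, diff, env], axis=0)  # (4, N_MELS, frames)
```

In a MUNIT-style system, a tensor like this would replace the single-channel spectrogram as the translator's input, so the content and style encoders can exploit the extra timbre cues directly.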


2020 ◽  
pp. paper2-1-paper2-11
Author(s):  
Victor Kitov ◽  
Konstantin Kozlovtsev ◽  
Margarita Mishustina

Style transfer is the process of rendering the content of one image in the style of another. Recent studies by Liu et al. (2017) show that the traditional style transfer methods of Gatys et al. (2016) and Johnson et al. (2016) fail to reproduce the depth of the content image, which is critical for human perception. They suggest preserving the depth map with an additional regularizer in the optimized loss function. However, these traditional methods are either computationally inefficient or require training a separate neural network for each style. The AdaIN method of Huang et al. (2017) enables efficient transfer of arbitrary styles without training a separate model but cannot reproduce the depth map of the content image. We propose an extension to this method that preserves the depth map by applying variable stylization strength. Qualitative analysis and the results of a user evaluation study indicate that the proposed method produces better stylizations than the original AdaIN style transfer method.
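To make the extension concrete, the PyTorch sketch below applies standard AdaIN with a per-pixel stylization strength derived from the depth map. The depth-to-strength mapping (far regions stylized fully, near regions kept closer to the content features) is an illustrative assumption, not necessarily the authors' exact formulation.

```python
# A minimal sketch of depth-weighted AdaIN. The adain() function is the
# standard formulation of Huang et al. (2017); the blending rule is an
# assumed way to realize "variable stylization strength".
import torch
import torch.nn.functional as F

def adain(content, style, eps=1e-5):
    # Match channel-wise mean/std of the content features to the style's.
    c_mean = content.mean(dim=(2, 3), keepdim=True)
    c_std = content.std(dim=(2, 3), keepdim=True) + eps
    s_mean = style.mean(dim=(2, 3), keepdim=True)
    s_std = style.std(dim=(2, 3), keepdim=True) + eps
    return s_std * (content - c_mean) / c_std + s_mean

def depth_weighted_adain(content, style, depth):
    # content, style: (N, C, H, W) encoder features; depth: (N, 1, h, w),
    # larger values = farther from the camera.
    t = adain(content, style)
    alpha = F.interpolate(depth, size=content.shape[2:], mode='bilinear',
                          align_corners=False)
    lo = alpha.amin(dim=(2, 3), keepdim=True)
    hi = alpha.amax(dim=(2, 3), keepdim=True)
    alpha = (alpha - lo) / (hi - lo + 1e-8)  # per-image strength map in [0, 1]
    # Far regions are stylized fully; near, perceptually salient regions
    # stay closer to the original content features.
    return alpha * t + (1 - alpha) * content
```

The decoder then renders `depth_weighted_adain(content, style, depth)` instead of the uniformly stylized feature map, so stylization strength varies smoothly with scene depth.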


2021 ◽  
Vol 2021 ◽  
pp. 1-17
Author(s):  
Huaijun Wang ◽  
Dandan Du ◽  
Junhuai Li ◽  
Wenchao Ji ◽  
Lei Yu

Motion capture technology plays an important role in film, television, and animation production. To reduce the cost of data acquisition and to improve both the reuse rate of motion capture data and the quality of motion style transfer, the synthesis of human motion capture data has become a research hotspot in this field. In this paper, kinematic constraints (KC) and a cycle-consistency (CC) network are employed to study motion style transfer. First, a cycle-consistent adversarial network (CCycleGAN) is constructed: a motion style transfer network based on a convolutional autoencoder serves as the generator, and a cycle-consistency constraint is established between the generated motion and the content motion, improving the action consistency between the two and eliminating lag in the generated motion. Then, kinematic constraints are introduced to regularize motion generation, addressing problems such as jitter and foot sliding in the style transfer results. Experimental results show that motion generated by the cycle-consistent style transfer method with kinematic constraints matches the style motion more closely, improving the effect of motion style transfer.
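A minimal PyTorch sketch of this kind of objective appears below: the cycle-consistency term ties the generated motion back to the content motion, while bone-length and foot-sliding penalties stand in for the kinematic constraints. The specific penalty terms, tensor layout, and loss weights are illustrative assumptions, not the paper's exact losses.

```python
# A sketch of a cycle-consistency objective with assumed kinematic
# constraint terms; g_ab and g_ba are the two style-transfer generators.
import torch

def cycle_consistency_loss(g_ab, g_ba, motion_a, motion_b):
    # motions: (N, T, J, 3) joint-position sequences in styles A and B.
    return (torch.abs(g_ba(g_ab(motion_a)) - motion_a).mean() +
            torch.abs(g_ab(g_ba(motion_b)) - motion_b).mean())

def bone_length_loss(motion, parents, rest_lengths):
    # Keep each bone at its rest length to suppress stretching and jitter.
    # parents: (J,) index of each joint's parent; rest_lengths: (J,).
    bones = motion - motion[:, :, parents]   # child minus parent position
    return ((bones.norm(dim=-1) - rest_lengths) ** 2).mean()

def foot_sliding_loss(motion, foot_idx, contact):
    # Penalize horizontal foot velocity on frames labeled as ground contact.
    # foot_idx: list of foot-joint indices; contact: (N, T, len(foot_idx)).
    vel = motion[:, 1:, foot_idx, :2] - motion[:, :-1, foot_idx, :2]
    return (contact[:, 1:].unsqueeze(-1) * vel.abs()).mean()

# Illustrative generator objective combining the terms:
# loss = adv_loss + 10 * cycle_consistency_loss(g_ab, g_ba, a, b) \
#        + bone_length_loss(fake_b, parents, rest) \
#        + foot_sliding_loss(fake_b, feet, contact)
```

The kinematic terms act purely as regularizers on the generator output, which is why they can be added to any adversarial motion-transfer objective without changing the network architecture.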


2021 ◽  
Vol 33 (4) ◽  
pp. 1343
Author(s):  
Liming Huang ◽  
Ping Wang ◽  
Cheng-Fu Yang ◽  
Hsien-Wei Tseng

2021 ◽  
pp. 15-26
Author(s):  
Li Haisheng ◽  
Huang Huafeng ◽  
Xue Fan

2021 ◽  
Vol 2029 (1) ◽  
pp. 012118
Author(s):  
Xiaoteng Zhou ◽  
Changli Yu ◽  
Xin Yuan ◽  
Citong Luo
