style transfer
Recently Published Documents


TOTAL DOCUMENTS

871
(FIVE YEARS 688)

H-INDEX

27
(FIVE YEARS 11)

Author(s):  
Haitong Yang ◽  
Guangyou Zhou ◽  
Tingting He

This article considers the task of text style transfer: transforming a sentence from one style into another while preserving its style-independent content. A dominant approach to text style transfer is to learn a good content representation of the text, define a fixed vector for every style, and recombine them to generate text in the required style. In practice, however, many different words can convey the same style from different aspects, so using a single fixed vector to represent a style is inefficient: it weakens the representation power of the style vector and limits the diversity of text generated in that style. To address this problem, we propose a novel neural generative model called Adversarial Separation Network (ASN), which learns the content and style vectors jointly; the learnt vectors have strong representation power and good interpretability. In our method, adversarial learning is used to enhance the model's ability to disentangle the two factors. To evaluate our method, we conduct experiments on two benchmark datasets. Experimental results show that our method performs style transfer better than strong comparison systems. We also demonstrate the strong interpretability of the learnt latent vectors.
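The disentangle-and-recombine mechanics described above can be illustrated with a toy sketch. Everything here is a hypothetical stand-in (the "embedding", the choice of which dimensions carry style), not the paper's actual model, and the adversarial training that enforces the separation is omitted:

```python
# Toy sketch of disentangle-and-recombine style transfer.
# All vectors and dimension assignments are illustrative assumptions.

def encode(sentence_vec, style_dims):
    """Split a sentence representation into content and style parts."""
    style = [v for i, v in enumerate(sentence_vec) if i in style_dims]
    content = [v for i, v in enumerate(sentence_vec) if i not in style_dims]
    return content, style

def recombine(content, style, style_dims, total_dim):
    """Interleave content and style parts back into one vector."""
    out, ci, si = [], 0, 0
    for i in range(total_dim):
        if i in style_dims:
            out.append(style[si]); si += 1
        else:
            out.append(content[ci]); ci += 1
    return out

sent = [0.2, 0.9, 0.1, 0.7]   # hypothetical sentence embedding
style_dims = {1, 3}           # assume dims 1 and 3 carry style
content, style = encode(sent, style_dims)

# Swap in the style part taken from a differently styled sentence:
other_style = [0.1, 0.3]
transferred = recombine(content, other_style, style_dims, len(sent))
print(transferred)            # content dims kept, style dims swapped
```

The point the abstract makes is that `other_style` should be a learned, sentence-specific vector rather than one fixed vector per style, so that many wordings of the same style remain representable.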


2022 ◽  
Vol 2022 ◽  
pp. 1-10
Author(s):  
Xuhui Fu

With the continuous development and popularization of artificial intelligence in recent years, the field of deep learning has advanced rapidly. Deep learning has attracted attention in image detection, image recognition, image recoloring, and artistic style transfer, and several style transfer techniques built on deep learning are now widely used. This article proposes an image artistic style transfer algorithm based on a generative adversarial network (GAN) that performs the transfer quickly. The method replaces the traditional deconvolution operation with resizing the image and then convolving, and uses a content encoder and a style encoder to encode the content and style of the selected images by extracting their content and style features. To enhance the effect of the artistic style transfer, a multi-scale discriminator is applied to the generated image. Experimental results show that the algorithm is effective and has considerable application value.
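The "resize, then convolve" substitution mentioned above is a common alternative to transposed convolution for upsampling. A minimal one-dimensional sketch, with an illustrative signal and smoothing kernel of my own choosing:

```python
# Sketch of "resize then convolve" upsampling (1-D case),
# the substitute for deconvolution described in the abstract.
# Signal and kernel values are illustrative.

def nearest_upsample(x, factor):
    """Repeat each sample `factor` times (nearest-neighbour resize)."""
    return [v for v in x for _ in range(factor)]

def conv1d_same(x, kernel):
    """'Same'-padded 1-D convolution with zero padding."""
    k = len(kernel)
    pad = k // 2
    padded = [0.0] * pad + list(x) + [0.0] * pad
    return [sum(padded[i + j] * kernel[j] for j in range(k))
            for i in range(len(x))]

signal = [1.0, 2.0]
up = nearest_upsample(signal, 2)               # [1.0, 1.0, 2.0, 2.0]
smoothed = conv1d_same(up, [0.25, 0.5, 0.25])  # smoothing kernel
print(up)
print(smoothed)
```

Because every output sample is produced the same way (resize first, then one ordinary convolution), this avoids the uneven kernel overlap that makes transposed convolutions prone to checkerboard artifacts.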


2022 ◽  
Vol 2022 ◽  
pp. 1-9
Author(s):  
R. Dinesh Kumar ◽  
E. Golden Julie ◽  
Y. Harold Robinson ◽  
S. Vimal ◽  
Gaurav Dhiman ◽  
...  

Humans have mastered the skill of creativity for many decades. Recently this mechanism has been replicated using neural networks, which mimic the functioning of the human brain: each unit in the network represents a neuron, transmitting messages from one neuron to another to perform subconscious tasks. There are established methods for rendering an input image in the style of famous artworks; this problem of generating art is usually called non-photorealistic rendering. Previous approaches rely on directly manipulating the pixel representation of the image. Using deep neural networks built for image recognition, this paper instead operates in a feature space that represents the higher-level content of the image. Deep neural networks have previously been used for object recognition and style recognition, for instance to categorize artworks by their period of creation. This paper uses the Visual Geometry Group (VGG16) network to replicate this latent human skill. Two images are given as input: a content image, whose features should be retained in the output, and a style reference image containing the patterns of a famous painting. The two are blended to produce a new image in which the content of the first image is preserved but rendered, or "sketched", in the style of the reference image.
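In VGG-based style transfer of this kind, the "style" of a feature map is commonly captured by its Gram matrix: the correlations between feature channels, discarding spatial arrangement. A minimal pure-Python sketch with a tiny made-up feature map (real code would use the activations of chosen VGG16 layers):

```python
# Gram-matrix style representation, as used in neural style transfer.
# The feature map below is a made-up example: 2 channels over
# 4 spatial positions, flattened.

def gram_matrix(features):
    """features: list of channels, each a flattened activation list.
    Returns the channel-by-channel inner-product matrix."""
    return [[sum(a * b for a, b in zip(fi, fj)) for fj in features]
            for fi in features]

fmap = [[1.0, 0.0, 1.0, 0.0],
        [0.0, 2.0, 0.0, 2.0]]
G = gram_matrix(fmap)
print(G)  # diagonal = channel energies, off-diagonal = co-activation
```

The style loss then penalizes the difference between the Gram matrices of the generated image and the style reference, while a separate content loss compares raw activations against the content image.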


2022 ◽  
Author(s):  
Nitin Kumar

Abstract In order to solve the problems of poor region delineation and boundary artifacts in Indian-style transfer of images, an improved Variational Autoencoder (VAE) method for dress style transfer is proposed. First, the Yolo v3 model is used to quickly localize the dress in the input image; then a classical semantic segmentation algorithm (FCN) is used to finely delineate the desired transfer region in two passes; finally, the trained VAE model generates the Indian-style image using a decision support system. The results show that, compared with the traditional style transfer model, the improved VAE model produces finer synthetic images for dress style transfer and can adapt to different Indian traditional styles, meeting the application requirements of dress style transfer scenarios. We evaluated several deep-learning-based models and achieved a BLEU value of 0.6 on average; the transformer-based model outperformed the others, achieving a BLEU value of up to 0.72.
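The three-stage pipeline described above (detect the garment, segment the region, then restyle it) can be sketched as follows. Every stage here is a stub of my own invention; real code would call a Yolo v3 detector, an FCN segmenter, and the trained VAE respectively:

```python
# Hypothetical sketch of the detect -> segment -> transfer pipeline.
# All three stages are stubs standing in for Yolo v3, FCN, and the VAE.

def detect_garment(image):
    """Stub for Yolo v3: return one bounding box (x, y, w, h)."""
    return (10, 10, 50, 80)

def segment_region(image, box):
    """Stub for FCN: return the set of pixel coordinates to restyle,
    here simply every pixel inside the box."""
    x, y, w, h = box
    return {(i, j) for i in range(x, x + w) for j in range(y, y + h)}

def vae_transfer(image, mask):
    """Stub for the VAE: restyle only the masked pixels."""
    return {"styled_pixels": len(mask)}

image = "input.jpg"  # placeholder path
box = detect_garment(image)
mask = segment_region(image, box)
result = vae_transfer(image, mask)
print(result)
```

Restricting the VAE to the segmented mask is what addresses the boundary-artifact problem the abstract mentions: pixels outside the garment are never touched.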


Author(s):  
Kumarapu Jayaram ◽  
Malhaar Telang ◽  
Ravula Bharath Chandra Reddy ◽  
Yada Arun Kumar ◽  
Kore Shivanagendra Babu ◽  
...  
Keyword(s):  

2022 ◽  
Vol 70 (1) ◽  
pp. 981-997
Author(s):  
Abdollah Amirkhani ◽  
Amir Hossein Barshooi ◽  
Amir Ebrahimi

2021 ◽  
pp. 1-51
Author(s):  
Di Jin ◽  
Zhijing Jin ◽  
Zhiting Hu ◽  
Olga Vechtomova ◽  
Rada Mihalcea

Abstract Text style transfer is an important task in natural language generation, which aims to control certain attributes in the generated text, such as politeness, emotion, humor, and many others. It has a long history in the field of natural language processing, and has recently regained significant attention thanks to the promising performance brought by deep neural models. In this paper, we present a systematic survey of the research on neural text style transfer, spanning over 100 representative articles since the first neural text style transfer work in 2017. We discuss the task formulation, existing datasets and subtasks, evaluation, and the rich methodologies developed for both parallel and non-parallel data. We also discuss a variety of important topics regarding the future development of this task.

