Style Transfer with Content Preservation from Multiple Images

Author(s):  
Dilin Liu ◽  
Wei Yu ◽  
Hongxun Yao
2020 ◽  
Vol 10 (18) ◽  
pp. 6196
Author(s):  
Chunhua Wu ◽  
Xiaolong Chen ◽  
Xingbiao Li

Currently, most text style transfer methods encode the text into a style-independent latent representation and decode it into new sentences with the target style. Due to the limitations of this latent representation, previous works can hardly produce satisfactory target-style sentences, especially in terms of preserving the semantics of the original sentence. We propose a “Mask and Generation” structure, which obtains an explicit representation of the content of the original sentence and generates the target sentence with a transformer. This explicit representation is a masked text in which the words carrying strong style attributes are masked out, so it preserves most of the semantic meaning of the original sentence. In addition, since it serves as the input of the generator, it also simplifies the generation process compared to current works that generate the target sentence from scratch. Because the explicit representation is readable and the model is more interpretable, we can see clearly which words were changed and why. We evaluate our model on two review datasets with quantitative, qualitative, and human evaluations. The experimental results show that our model generally outperforms other methods in terms of transfer accuracy and content preservation.
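
As a rough illustration of the mask-then-generate idea described above, the Python sketch below masks words whose style score exceeds a threshold, producing a readable content representation, and then has a stand-in generator fill the masks with target-style words. The STYLE_LEXICON, THRESHOLD, and fill_in_target_style below are hypothetical placeholders for the paper's learned components, not its actual implementation.

```python
# A minimal sketch of "mask and generate", assuming a toy style lexicon.
# In the paper, style words are identified by a learned model and the masks
# are filled by a transformer; both are replaced with stubs here.

STYLE_LEXICON = {"terrible": 0.9, "awful": 0.95, "great": 0.9, "bland": 0.7}
MASK, THRESHOLD = "<mask>", 0.5

def mask_style_words(sentence):
    """Replace strongly styled words with MASK; content words survive."""
    return [MASK if STYLE_LEXICON.get(tok.lower(), 0.0) > THRESHOLD else tok
            for tok in sentence.split()]

def fill_in_target_style(masked, target_style):
    """Stand-in generator: fill each MASK with a target-style word.
    A real transformer would condition on the whole masked context."""
    fillers = {"positive": "wonderful", "negative": "dreadful"}
    return " ".join(fillers[target_style] if tok == MASK else tok
                    for tok in masked)

masked = mask_style_words("the service was terrible")
print(masked)                                    # explicit, readable content
print(fill_in_target_style(masked, "positive"))  # target-style sentence
```

Because the masked text is itself the generator's input, every changed word is visible by construction, which is the interpretability the abstract highlights.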


2019 ◽  
Author(s):  
Katy Gero ◽  
Chris Kedzie ◽  
Jonathan Reeve ◽  
Lydia Chilton

Author(s):  
Fuli Luo ◽  
Peng Li ◽  
Jie Zhou ◽  
Pengcheng Yang ◽  
Baobao Chang ◽  
...  

Unsupervised text style transfer aims to transfer the underlying style of text while keeping its main content unchanged, without parallel data. Most existing methods typically follow two steps: first separating the content from the original style, and then fusing the content with the desired style. However, the separation in the first step is challenging because content and style interact in subtle ways in natural language. Therefore, in this paper, we propose a dual reinforcement learning framework to directly transfer the style of the text via a one-step mapping model, without any separation of content and style. Specifically, we consider the learning of the source-to-target and target-to-source mappings as a dual task, and two rewards are designed based on this dual structure to reflect style accuracy and content preservation, respectively. In this way, the two one-step mapping models can be trained via reinforcement learning, without any use of parallel data. Automatic evaluations show that our model outperforms state-of-the-art systems by a large margin, with an average improvement of more than 10 BLEU points across two benchmark datasets. Human evaluations also validate the effectiveness of our model in terms of style accuracy, content preservation, and fluency. Our code and data, including the outputs of all baselines and our model, are available at https://github.com/luofuli/DualRL.
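
The dual-reward design lends itself to a small sketch: one reward scores the target style of the transferred sentence, the other scores how well the back-transferred sentence reconstructs the source. The stub classifier, the unigram-F1 stand-in for BLEU, and the combination weight alpha below are all illustrative assumptions, not the paper's components.

```python
# A schematic sketch of the dual rewards, assuming a toy style classifier and
# a crude unigram-overlap F1 in place of BLEU for content preservation.

def style_classifier_prob(sentence, target_style):
    """Stub: probability that the sentence carries the target style."""
    positive_words = {"great", "wonderful", "love"}
    hits = sum(w in positive_words for w in sentence.split())
    p = min(1.0, 0.2 + 0.4 * hits)
    return p if target_style == "positive" else 1.0 - p

def content_reward(source, back_transferred):
    """Unigram-overlap F1 between source and back-transferred sentence."""
    src, back = set(source.split()), set(back_transferred.split())
    overlap = len(src & back)
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(back), overlap / len(src)
    return 2 * precision * recall / (precision + recall)

def dual_reward(source, transferred, back_transferred, target_style,
                alpha=0.5):
    """Combine style and content rewards; the weight alpha is assumed
    for illustration only."""
    r_style = style_classifier_prob(transferred, target_style)
    r_content = content_reward(source, back_transferred)
    return alpha * r_style + (1 - alpha) * r_content

print(dual_reward("the food was awful", "the food was great",
                  "the food was awful", "positive"))
```

The source-to-target and target-to-source models supply each other's training signal, which is how the framework dispenses with parallel data.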


Author(s):  
Xiaoyuan Yi ◽  
Zhenghao Liu ◽  
Wenhao Li ◽  
Maosong Sun

Text style transfer aims to alter the style of a sentence while keeping its main content unchanged. Due to the lack of parallel corpora, most recent work focuses on unsupervised methods and has achieved noticeable progress. Nonetheless, the intractability of completely disentangling content from style in text leads to a trade-off between content preservation and style transfer accuracy. To address this problem, we propose a style instance supported method, StyIns. Instead of representing styles with embeddings or latent variables learned from single sentences, our model leverages the generative flow technique to extract underlying stylistic properties from multiple instances of each style, which form a more discriminative and expressive latent style space. By combining such a space with an attention-based structure, our model can better maintain the content while achieving high transfer accuracy. Furthermore, the proposed method can be flexibly extended to semi-supervised learning so as to utilize the limited paired data available. Experiments on three transfer tasks, sentiment modification, formality rephrasing, and poeticness generation, show that StyIns achieves a better balance between content and style, outperforming several recent baselines.
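
To make the "multiple instances per style" idea concrete, here is a much-simplified PyTorch sketch: several sentences of one style are encoded and pooled into a single latent style vector. The paper uses generative flows to build this latent space; the Gaussian reparameterization below is a simpler stand-in, and every dimension and module name is an assumption.

```python
# A simplified sketch of deriving a style latent from multiple style
# instances, with a Gaussian posterior standing in for the generative flow.

import torch
import torch.nn as nn

class InstanceStyleEncoder(nn.Module):
    def __init__(self, vocab_size=10000, emb_dim=128, style_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.GRU(emb_dim, emb_dim, batch_first=True)
        self.to_mu = nn.Linear(emb_dim, style_dim)
        self.to_logvar = nn.Linear(emb_dim, style_dim)

    def forward(self, instances):  # (K, T) token ids: K sentences, one style
        _, h = self.rnn(self.embed(instances))    # h: (1, K, emb_dim)
        pooled = h.squeeze(0).mean(dim=0)         # aggregate over K instances
        mu, logvar = self.to_mu(pooled), self.to_logvar(pooled)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        return z  # style latent to condition an attention-based decoder

enc = InstanceStyleEncoder()
style_instances = torch.randint(0, 10000, (8, 20))  # 8 sentences, length 20
z_style = enc(style_instances)
print(z_style.shape)  # torch.Size([64])
```

Pooling over many instances is what makes the style space more discriminative than one inferred from a single sentence.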


Author(s):  
Di Yin ◽  
Shujian Huang ◽  
Xin-Yu Dai ◽  
Jiajun Chen

Text style transfer aims to rephrase a given sentence into a different style without changing its original content. Since parallel corpora (i.e., sentence pairs with the same content but different styles) are usually unavailable, most previous works guide the transfer process solely with distributional information, i.e., style-related classifiers or language models; this neglects the correspondence between instances, leading to poor transfer performance, especially in content preservation. In this paper, we propose making partial comparisons to explicitly model the content and style correspondences between instances. To train the partial comparators, we propose methods to automatically extract partial-parallel training instances from the non-parallel data, and to further enhance training through data augmentation. We perform experiments comparing our method to existing approaches on two review datasets. Both automatic and manual evaluations show that our approach significantly improves the performance of existing adversarial methods and outperforms most state-of-the-art models. Our code and data will be available on GitHub.
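
A toy sketch of how partial-parallel instances might be mined from non-parallel corpora: pair sentences across the two style corpora whose content words (tokens outside a style lexicon) overlap strongly. The lexicon, Jaccard threshold, and toy corpora below are hypothetical; the paper's actual extraction procedure is more involved.

```python
# An illustrative sketch of mining partial-parallel pairs, assuming a small
# style lexicon and a Jaccard-similarity threshold over content words.

STYLE_WORDS = {"great", "awful", "terrible", "wonderful", "bad", "good"}

def content_words(sentence):
    """Tokens that remain after stripping style words."""
    return frozenset(w for w in sentence.lower().split()
                     if w not in STYLE_WORDS)

def mine_partial_pairs(corpus_a, corpus_b, threshold=0.6):
    """Yield (a, b) pairs whose content-word Jaccard similarity clears the
    threshold; these act as partial-parallel supervision for a comparator."""
    for a in corpus_a:
        ca = content_words(a)
        for b in corpus_b:
            cb = content_words(b)
            union = ca | cb
            if union and len(ca & cb) / len(union) >= threshold:
                yield a, b

positive = ["the pizza here is great", "wonderful staff and fast service"]
negative = ["the pizza here is awful", "the decor is bad"]
print(list(mine_partial_pairs(positive, negative)))
```

Such pairs give the comparator instance-level signal that purely distributional objectives (classifiers, language models) cannot provide.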


2021 ◽  
Vol 12 (3) ◽  
pp. 1-16
Author(s):  
Yukai Shi ◽  
Sen Zhang ◽  
Chenxing Zhou ◽  
Xiaodan Liang ◽  
Xiaojun Yang ◽  
...  

Non-parallel text style transfer has attracted increasing research interest in recent years. Despite successes in transferring style within the encoder-decoder framework, current approaches still lack the ability to preserve the content, and even the logic, of original sentences, mainly due to the large unconstrained model space or overly simplified assumptions about the latent embedding space. Since language is an intelligent product of humans with definite grammars and, by its nature, a limited rule-based model space, relieving this problem requires reconciling the model capacity of deep neural networks with the intrinsic constraints imposed by human linguistic rules. To this end, we propose a Graph Transformer-based Auto-Encoder, which models a sentence as a linguistic graph and performs feature extraction and style transfer at the graph level, so as to maximally retain the content and linguistic structure of original sentences. Quantitative experimental results on three non-parallel text style transfer tasks show that our model outperforms state-of-the-art methods in content preservation, while achieving comparable performance on transfer accuracy and sentence naturalness.
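
The graph-level processing can be illustrated with a single masked-attention step: tokens are nodes, hypothetical dependency arcs define an adjacency matrix, and each token attends only to its linguistic neighbours. This is a toy stand-in for the graph transformer; the edges, projections, and dimensions are all assumed.

```python
# A toy graph-attention step over a sentence graph, assuming hand-picked
# dependency arcs and random projections in place of learned ones.

import torch
import torch.nn.functional as F

tokens = ["the", "food", "was", "cold"]
edges = [(0, 1), (1, 2), (2, 3)]             # assumed dependency arcs
n, d = len(tokens), 16

adj = torch.eye(n)                           # self-loops keep each node's state
for i, j in edges:
    adj[i, j] = adj[j, i] = 1.0

x = torch.randn(n, d)                        # stand-in token embeddings
q, k, v = x @ torch.randn(d, d), x @ torch.randn(d, d), x @ torch.randn(d, d)

scores = (q @ k.T) / d ** 0.5
scores = scores.masked_fill(adj == 0, float("-inf"))  # restrict to graph edges
graph_attended = F.softmax(scores, dim=-1) @ v        # one graph-attention step
print(graph_attended.shape)  # torch.Size([4, 16])
```

Constraining attention to the linguistic graph is one way the model space can be narrowed toward grammatical structure, which is the intuition the abstract appeals to.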


Author(s):  
Minxuan Lin ◽  
Fan Tang ◽  
Weiming Dong ◽  
Xiao Li ◽  
Changsheng Xu ◽  
...  

Multimodal and multi-domain stylization are two important problems in the field of image style transfer. Currently, few methods can perform multimodal and multi-domain stylization simultaneously. In this study, we propose a unified framework for multimodal and multi-domain style transfer that supports both exemplar-based reference and randomly sampled guidance. The key component of our method is a novel style distribution alignment module that eliminates the explicit distribution gaps between various style domains and reduces the risk of mode collapse. Multimodal diversity is ensured by guidance from either multiple images or random style codes, while multi-domain controllability is achieved directly through a domain label. We validate the proposed framework on painting style transfer across various artistic styles and genres. Qualitative and quantitative comparisons with state-of-the-art methods demonstrate that our method can generate high-quality results in multiple domain styles and multimodal instances from reference style guidance or a randomly sampled style.
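
The two guidance modes described above can be sketched as follows: a style code comes either from an exemplar image or from a random sample of a prior, and is concatenated with content features and a one-hot domain label to condition the decoder. Every module and dimension here is an assumption for illustration, not the paper's architecture.

```python
# A schematic sketch of exemplar-based vs. randomly sampled style guidance
# combined with a domain label, assuming toy dimensions throughout.

import torch
import torch.nn as nn

STYLE_DIM, NUM_DOMAINS = 8, 3

style_encoder = nn.Sequential(               # exemplar image -> style code
    nn.Conv2d(3, 16, 4, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, STYLE_DIM))

def style_code(exemplar=None):
    """Exemplar-based reference if given, else randomly sampled guidance."""
    if exemplar is not None:
        return style_encoder(exemplar)
    return torch.randn(1, STYLE_DIM)         # multimodal: new sample, new style

def condition(content_feat, domain_id, exemplar=None):
    """Concatenate content features, domain label, and style code."""
    domain = nn.functional.one_hot(torch.tensor([domain_id]),
                                   NUM_DOMAINS).float()
    return torch.cat([content_feat, domain, style_code(exemplar)], dim=1)

content = torch.randn(1, 32)                  # stand-in content features
print(condition(content, domain_id=1).shape)                   # random style
print(condition(content, 2, torch.randn(1, 3, 64, 64)).shape)  # exemplar style
```

Keeping the domain label separate from the style code is what lets one generator cover several style domains while still producing diverse outputs within each.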


Author(s):  
J.R. McIntosh ◽  
D.L. Stemple ◽  
William Bishop ◽  
G.W. Hannaway

EM specimens often contain 3-dimensional information that is lost during micrography on a single photographic film. Two images of one specimen at appropriate orientations give a stereo view, but complex structures composed of multiple objects of graded density that superimpose in each projection are often difficult to decipher in stereo. Several analytical methods for 3-D reconstruction from multiple images of a serially tilted specimen are available, but they are all time-consuming and computationally intensive.


Author(s):  
J.M. Cowley

The HB5 STEM instrument at ASU has previously been modified to include an efficient two-dimensional detector incorporating an optical analyser device, as well as a digital system for the recording of multiple images. The detector system was built to explore a wide range of possibilities, including in-line electron holography, the observation and recording of diffraction patterns from very small specimen regions (having diameters as small as 3Å), and the formation of both bright-field and dark-field images by detection of various portions of the diffraction pattern. Experience in the use of this system has shown that some of its capabilities are unique and valuable. For other purposes it appears that, while the principles of the operational modes may be verified, the practical applications are limited by the details of the initial design.

