Specific Area Style Transfer on Real-Time Video
As deep learning applications in object recognition, object detection, segmentation, and image generation are increasingly in demand, related research has been actively conducted. In this paper, we propose a method that combines segmentation and style transfer to render a desired style onto a desired area of real-time video. Two deep neural networks were used, balancing the trade-off between speed and accuracy so that processing runs as close to real time as possible. A modified BiSeNet for segmentation and CycleGAN for style transfer were run on a desktop PC equipped with two RTX-2080-Ti GPU boards, enabling near-real-time processing of SD video. We obtained good subjective quality when segmenting the Road area in city-street video and transferring it to a Grass style at no less than 6 fps.
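The core per-frame operation implied by the abstract is compositing: the stylized output replaces the original pixels only where the segmentation network labeled the target class (e.g., Road). The following is a minimal NumPy sketch of that masking step; the function name `composite_styled_region` and the toy arrays are illustrative assumptions, not part of the paper, and the actual segmentation (BiSeNet) and style-transfer (CycleGAN) inference stages are elided.

```python
import numpy as np

def composite_styled_region(frame, styled, mask):
    """Blend a style-transferred frame into the original frame
    only where the segmentation mask marks the target class.

    frame, styled: HxWx3 uint8 images of the same size.
    mask: HxW boolean array, True where the target class
    (e.g. Road) was segmented.
    """
    out = frame.copy()
    out[mask] = styled[mask]  # copy stylized pixels inside the mask only
    return out

# Toy example standing in for one video frame: the lower half of a
# 4x4 black frame is marked as the target region and replaced with
# the (all-white) stylized frame.
frame = np.zeros((4, 4, 3), dtype=np.uint8)
styled = np.full((4, 4, 3), 255, dtype=np.uint8)
mask = np.zeros((4, 4), dtype=bool)
mask[2:, :] = True  # pretend segmentation labeled the lower half as Road

result = composite_styled_region(frame, styled, mask)
```

In a real pipeline this blend would run once per decoded frame, after both networks have produced their outputs, which is where the reported 6 fps throughput would be measured.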