High-Resolution Image Inpainting Using Multi-scale Neural Patch Synthesis

Author(s):  
Chao Yang ◽  
Xin Lu ◽  
Zhe Lin ◽  
Eli Shechtman ◽  
Oliver Wang ◽  
...  
Electronics ◽  
2019 ◽  
Vol 8 (11) ◽  
pp. 1370 ◽  
Author(s):  
Tingzhu Sun ◽  
Weidong Fang ◽  
Wei Chen ◽  
Yanxin Yao ◽  
Fangming Bi ◽  
...  

Although image inpainting based on generative adversarial networks (GANs) has made great breakthroughs in accuracy and speed in recent years, these methods can only process low-resolution images because of memory limitations and training difficulty. For high-resolution images, the inpainted regions become blurred and unpleasant boundaries become visible. Building on current advanced image generation networks, we propose a novel high-resolution image inpainting method based on a multi-scale neural network. The method is a two-stage network comprising content reconstruction and texture detail restoration. After obtaining a visually believable but blurry texture, we further restore finer details to produce a smoother, clearer, and more coherent inpainting result. We then propose a special application scenario for image inpainting: removing redundant pedestrians from an image while ensuring a realistic background restoration. This involves pedestrian detection, identification of the redundant pedestrians, and filling them in with plausible content. To improve the accuracy of image inpainting in this scenario, we propose a new mask dataset, which collects person instances from the COCO dataset as masks. Finally, we evaluate our method on the COCO and VOC datasets. The experimental results show that our method produces clearer and more coherent inpainting results, especially for high-resolution images, and that the proposed mask dataset yields better inpainting results in this application scenario.
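The person-shaped mask dataset described above reuses COCO segmentation annotations as inpainting masks. A minimal sketch of that idea is to rasterize one COCO-style polygon annotation into a binary hole mask (`polygon_to_mask` is a hypothetical helper for illustration, not the authors' code; a real pipeline would typically use `pycocotools`):

```python
import numpy as np
from matplotlib.path import Path

def polygon_to_mask(polygon, height, width):
    """Rasterize a COCO-style polygon [x1, y1, x2, y2, ...] into a
    boolean (height, width) mask, True inside the person region."""
    pts = np.asarray(polygon, dtype=float).reshape(-1, 2)
    # Test every pixel center against the polygon interior.
    yy, xx = np.mgrid[:height, :width]
    centers = np.stack([xx.ravel() + 0.5, yy.ravel() + 0.5], axis=1)
    return Path(pts).contains_points(centers).reshape(height, width)

# Example: a square "person" polygon on an 8x8 image grid.
mask = polygon_to_mask([2, 2, 6, 2, 6, 6, 2, 6], 8, 8)
```

The resulting mask can be applied to any training image to simulate an occluding pedestrian, which is what makes such masks more realistic than random rectangles for this scenario.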


Optik ◽  
2015 ◽  
Vol 126 (13) ◽  
pp. 1269-1276 ◽  
Author(s):  
Jian-Nong Cao ◽  
Zhenfeng Shao ◽  
Jia Guo ◽  
Bei Wang ◽  
Yuwei Dong ◽  
...  

2020 ◽  
pp. short18-1-short18-9
Author(s):  
Andrey Moskalenko ◽  
Mikhail Erofeev ◽  
Dmitriy Vatolin

In recent years, the field of image inpainting has developed rapidly, and learning-based approaches show impressive results on the task of filling missing parts of an image. But most deep methods are strongly tied to the resolution of the images on which they were trained: even a slight resolution increase leads to serious artifacts and unsatisfactory filling quality, making these methods unsuitable for interactive image processing. In this article, we propose a method that solves the problem of inpainting arbitrary-size images. We also describe a way to better restore texture fragments in the filled area: we use information from neighboring pixels by shifting the original image in four directions. Moreover, this approach can work with existing inpainting models, making them almost resolution-independent without the need for retraining. We also created a GIMP plugin that implements our technique. The plugin, code, and model weights are available at https://github.com/a-mos/High_Resolution_Image_Inpainting.
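The four-direction shifting trick can be sketched as a simple ensemble around any pretrained model: run inpainting on copies of the input shifted up, down, left, and right, undo each shift, and average the predictions inside the hole. Here `inpaint_fn` stands in for an arbitrary single-image inpainting model; the function names and the plain averaging are assumptions for illustration, not the authors' exact implementation:

```python
import numpy as np

def shifted_inpaint(image, mask, inpaint_fn, offset=1):
    """Average inpainting results over four shifted copies of the input.

    image: (H, W, C) float array; mask: (H, W) bool, True where pixels
    are missing; inpaint_fn: any callable (image, mask) -> filled image.
    """
    shifts = [(offset, 0), (-offset, 0), (0, offset), (0, -offset)]
    acc = np.zeros_like(image, dtype=np.float64)
    for dy, dx in shifts:
        shifted_img = np.roll(image, (dy, dx), axis=(0, 1))
        shifted_mask = np.roll(mask, (dy, dx), axis=(0, 1))
        filled = inpaint_fn(shifted_img, shifted_mask)
        # Undo the shift so all predictions align with the original grid.
        acc += np.roll(filled, (-dy, -dx), axis=(0, 1))
    result = image.copy()
    result[mask] = (acc / len(shifts))[mask]  # blend only inside the hole
    return result
```

Because each shifted pass sees the hole surrounded by slightly different neighboring pixels, averaging the aligned outputs recovers texture detail that a single fixed-resolution pass would miss, without retraining the underlying model.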


2008 ◽  
Vol 11 (1) ◽  
pp. 31-37 ◽  
Author(s):  
Jian Wang ◽  
Jixian Zhang ◽  
Zhengjun Liu
