Method for Enhancing High-Resolution Image Inpainting with Two-Stage Approach

2021 ◽  
Vol 47 (3) ◽  
pp. 201-206
Author(s):  
A. Moskalenko ◽  
M. Erofeev ◽  
D. Vatolin
2020 ◽  
pp. short18-1-short18-9
Author(s):  
Andrey Moskalenko ◽  
Mikhail Erofeev ◽  
Dmitriy Vatolin

In recent years the field of image inpainting has developed rapidly, and learning-based approaches show impressive results in the task of filling missing parts of an image. But most deep methods are strongly tied to the resolution of the images on which they were trained: even a slight resolution increase leads to serious artifacts and unsatisfactory filling quality, making these methods unsuitable for interactive image processing. In this article, we propose a method that solves the problem of inpainting arbitrary-size images. We also describe a way to better restore texture fragments in the filled area: we propose using information from neighboring pixels by shifting the original image in four directions. Moreover, this approach can work with existing inpainting models, making them almost resolution-independent without the need for retraining. We also created a GIMP plugin that implements our technique. The plugin, code, and model weights are available at https://github.com/a-mos/High_Resolution_Image_Inpainting.
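The four-direction shifting idea can be illustrated with a minimal sketch. The function below (a hypothetical helper, not the authors' implementation; the `offset` parameter is an assumption) produces four edge-padded copies of an image shifted vertically and horizontally, which could then be fed to an inpainting model as extra context for neighboring pixels:

```python
import numpy as np

def four_direction_shifts(image: np.ndarray, offset: int = 1) -> list:
    """Return copies of `image` (H x W x C) shifted in four directions.

    Border rows/columns are replicated ("edge" padding) so every shifted
    copy keeps the original shape. `offset` is the shift distance in pixels.
    """
    padded = np.pad(image, ((offset, offset), (offset, offset), (0, 0)), mode="edge")
    h, w = image.shape[:2]
    up    = padded[0:h,              offset:offset + w]  # content moved down
    down  = padded[2 * offset:,      offset:offset + w]  # content moved up
    left  = padded[offset:offset + h, 0:w]               # content moved right
    right = padded[offset:offset + h, 2 * offset:]       # content moved left
    return [up, down, left, right]
```

Each shifted copy aligns a pixel with its neighbor's value, so a model can borrow texture statistics from adjacent positions without retraining.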


Electronics ◽  
2019 ◽  
Vol 8 (11) ◽  
pp. 1370 ◽  
Author(s):  
Tingzhu Sun ◽  
Weidong Fang ◽  
Wei Chen ◽  
Yanxin Yao ◽  
Fangming Bi ◽  
...  

Although image inpainting based on generative adversarial networks (GANs) has made great breakthroughs in accuracy and speed in recent years, such methods can process only low-resolution images because of memory limitations and difficulty in training. For high-resolution images, the inpainted regions become blurred and unpleasant boundaries become visible. Building on current advanced image-generation networks, we propose a novel high-resolution image inpainting method based on a multi-scale neural network. The method is a two-stage network comprising content reconstruction and texture-detail restoration: after obtaining a visually believable but fuzzy texture, we further restore the finer details to produce a smoother, clearer, and more coherent inpainting result. We then propose a special application scenario for image inpainting: deleting redundant pedestrians in an image while ensuring a realistic background restoration. It involves pedestrian detection, identifying redundant pedestrians, and filling them in with seemingly correct content. To improve the accuracy of image inpainting in this scenario, we built a new mask dataset that collects the person instances in the COCO dataset as masks. Finally, we evaluated our method on the COCO and VOC datasets. The experimental results show that our method produces clearer and more coherent inpainting results, especially for high-resolution images, and that the proposed mask dataset yields better inpainting results in the special application scenario.
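The two-stage flow described above can be sketched as a skeleton, assuming the stages are callables; `content_net` and `texture_net` are hypothetical stand-ins for the paper's learned content-reconstruction and texture-restoration networks:

```python
import numpy as np
from typing import Callable

def two_stage_inpaint(
    image: np.ndarray,
    mask: np.ndarray,
    content_net: Callable,
    texture_net: Callable,
) -> np.ndarray:
    """Two-stage inpainting skeleton: coarse content first, then texture.

    `mask` is 1.0 inside the hole and 0.0 elsewhere (same shape as `image`).
    """
    # Stage 1: reconstruct plausible (but fuzzy) content for the hole.
    coarse = content_net(image, mask)
    # Composite: keep known pixels, take generated pixels inside the hole.
    blended = image * (1 - mask) + coarse * mask
    # Stage 2: restore finer texture details on the composited result.
    return texture_net(blended, mask)
```

The compositing step in the middle is the standard way to guarantee that pixels outside the hole are untouched before the refinement stage runs.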


Algorithms ◽  
2021 ◽  
Vol 14 (8) ◽  
pp. 236
Author(s):  
Haoran Xu ◽  
Xinya Li ◽  
Kaiyi Zhang ◽  
Yanbai He ◽  
Haoran Fan ◽  
...  

Recently, deep learning has enabled a huge leap forward in image inpainting. However, due to memory and computational limitations, most existing methods can handle only low-resolution inputs, typically less than 1K. With the improvement of Internet transmission capacity and mobile-device cameras, the resolution of image and video sources available to users via the cloud or locally keeps increasing. For high-resolution images, common inpainting methods simply upsample the inpainted result of a downscaled image, yielding a blurry result; there is thus a pressing need to reconstruct the missing high-frequency information in high-resolution images and generate sharp texture details. Hence, we propose a general deep-learning framework for high-resolution image inpainting that first hallucinates a semantically continuous but blurred result using low-resolution inpainting, suppressing computational overhead, and then reconstructs the sharp high-frequency details at the original resolution using super-resolution refinement. Experimentally, our method achieves inspiring inpainting quality on 2K and 4K resolution images, ahead of the state-of-the-art high-resolution inpainting technique. We expect this framework to be popularized for high-resolution image-editing tasks on personal computers and mobile devices.
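The coarse-to-fine framework can be sketched end to end. In this sketch all function names are hypothetical: `inpaint_lr` stands in for a low-resolution inpainting model and `refine_sr` for a super-resolution refinement model; strided slicing and nearest-neighbor repetition stand in for learned resampling:

```python
import numpy as np

def hires_inpaint(image, mask, inpaint_lr, refine_sr, factor: int = 2):
    """Coarse-to-fine high-resolution inpainting sketch.

    1. Downscale image and hole mask by `factor`.
    2. Inpaint at low resolution (cheap, semantically complete, blurry).
    3. Upsample the result back to the original resolution.
    4. Blend: keep original pixels outside the hole.
    5. Refine high-frequency detail with a super-resolution model.
    """
    lr_img = image[::factor, ::factor]
    lr_mask = mask[::factor, ::factor]
    lr_out = inpaint_lr(lr_img, lr_mask)
    up = lr_out.repeat(factor, axis=0).repeat(factor, axis=1)
    up = up[: image.shape[0], : image.shape[1]]
    blended = np.where(mask > 0, up, image)
    return refine_sr(blended)
```

Because the expensive generative step runs at low resolution, cost grows with the downscaled size rather than the full 2K/4K frame; only the refinement touches full-resolution pixels.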


Author(s):  
Robert M. Glaeser

It is well known that a large flux of electrons must pass through a specimen in order to obtain a high-resolution image, while a smaller particle flux is satisfactory for a low-resolution image. The minimum particle flux that is required depends upon the contrast in the image and the signal-to-noise (S/N) ratio at which the data are considered acceptable. For a given S/N associated with statistical fluctuations, the relationship between contrast and "counting statistics" is (S/N) = C(f·N·r²)^(1/2), where C is the contrast; r² is the area of a picture element corresponding to the resolution, r; N is the number of electrons incident per unit area of the specimen; and f is the fraction of electrons that contribute to formation of the image, relative to the total number of electrons incident upon the object.
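Solving the counting-statistics relation (S/N) = C(f·N·r²)^(1/2) for N gives the minimum incident flux, N = (S/N)² / (C²·f·r²). A short worked computation (the numerical inputs below are illustrative, not from the abstract):

```python
def required_flux(snr: float, contrast: float, f: float, r: float) -> float:
    """Minimum electrons per unit specimen area for a target S/N.

    Rearranged from (S/N) = C * sqrt(f * N * r^2):
        N = (S/N)^2 / (C^2 * f * r^2)
    snr      -- acceptable signal-to-noise ratio
    contrast -- image contrast C (dimensionless)
    f        -- fraction of incident electrons contributing to the image
    r        -- resolution element size (flux comes out per r's unit squared)
    """
    return snr**2 / (contrast**2 * f * r**2)

# Illustrative numbers: S/N = 5, 10% contrast, f = 1, r = 0.3 (length units)
n_needed = required_flux(snr=5.0, contrast=0.10, f=1.0, r=0.3)
```

The quadratic dependence on 1/C and 1/r explains the abstract's opening claim: halving the resolution element or the contrast quadruples the required dose.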

