Research on Efficient Image Inpainting Algorithm Based on Deep Learning

Author(s):  
Tao Qin ◽  
Juanjuan Liu ◽  
Wenchao Xue
Displays ◽  
2021 ◽  
pp. 102028
Author(s):  
Zhen Qin ◽  
Qingliang Zeng ◽  
Yixin Zong ◽  
Fan Xu

2021 ◽  
Author(s):  
Huan Zhang ◽  
Zhao Zhang ◽  
Haijun Zhang ◽  
Yi Yang ◽  
Shuicheng Yan ◽  
...  

Deep learning based image inpainting methods have greatly improved performance due to the powerful representation ability of deep learning. However, current deep inpainting methods still tend to produce unreasonable structures and blurry textures, implying that image inpainting remains a challenging topic owing to the ill-posed nature of the task. To address these issues, we propose a novel deep multi-resolution learning-based progressive image inpainting method, termed MR-InpaintNet, which takes damaged images of different resolutions as input and then fuses the multi-resolution features to repair the damaged images. The idea is motivated by the fact that images of different resolutions provide different levels of feature information: the low-resolution image provides strong semantic information, while the high-resolution image offers detailed texture information. The middle-resolution image reduces the gap between the low- and high-resolution images, which further refines the inpainting result. To fuse and improve the multi-resolution features, a novel multi-resolution feature learning (MRFL) process is designed, which consists of a multi-resolution feature fusion (MRFF) module, an adaptive feature enhancement (AFE) module, and a memory enhanced mechanism (MEM) module for information preservation. The refined multi-resolution features then contain both rich semantic information and detailed texture information from multiple resolutions. A decoder further processes the refined multi-resolution features to obtain the recovered image. Extensive experiments on the Paris Street View, Places2, and CelebA-HQ datasets demonstrate that the proposed MR-InpaintNet can effectively recover textures and structures, and performs favorably against state-of-the-art methods.
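The data flow behind the multi-resolution idea can be sketched with a toy example. The sketch below blends low-, middle-, and high-resolution views of an image with fixed weights; in MR-InpaintNet this blending is performed by the learned MRFF/AFE/MEM modules, so the function names, the pooling-based resizing, and the weights here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def downsample(img, factor):
    """Average-pool a 2-D array by an integer factor (a simple stand-in
    for the resizing used to build the multi-resolution pyramid)."""
    h, w = img.shape
    return img[:h - h % factor, :w - w % factor] \
        .reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def upsample(img, factor):
    """Nearest-neighbour upsampling back to the working resolution."""
    return img.repeat(factor, axis=0).repeat(factor, axis=1)

def fuse_multiresolution(img, weights=(0.25, 0.25, 0.5)):
    """Toy fusion of three resolution levels of the same image."""
    low = upsample(downsample(img, 4), 4)   # strong semantics, coarse detail
    mid = upsample(downsample(img, 2), 2)   # bridges the resolution gap
    high = img                              # detailed texture
    w_low, w_mid, w_high = weights
    return w_low * low + w_mid * mid + w_high * high

img = np.arange(64, dtype=float).reshape(8, 8)
fused = fuse_multiresolution(img)
```

Because the weights sum to one, a constant image passes through unchanged; the interesting behaviour is that high-frequency texture survives via the high-resolution branch while the coarse branches smooth structure.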


Electronics ◽  
2019 ◽  
Vol 8 (3) ◽  
pp. 331 ◽  
Author(s):  
Yifeng Xu ◽  
Huigang Wang ◽  
Xing Liu ◽  
Henry He ◽  
Qingyue Gu ◽  
...  

Recent advances in deep learning have shown exciting promise in low-level artificial intelligence tasks such as image classification, speech recognition, object detection, and semantic segmentation. Artificial intelligence has also made important contributions to autopilot, a complex high-level intelligence task, but real autopilot scenes are quite complicated. The first autopilot fatality occurred in 2016, a crash in which the white side of a vehicle appeared similar to a brightly lit sky. The root of the problem is that the autopilot vision system cannot identify a part of a vehicle when that part resembles the background. We first propose a method called DIDA, based on a deep learning network, to see the hidden part. DIDA cascades the following steps: object detection, scaling, image inpainting that assumes a hidden part beside the car, object re-detection from the inpainted image, zooming back to the original size, and setting an alarm region by comparing the two detected regions. DIDA was tested in a similar scene and achieved exciting results. This method solves the aforementioned problem using only optical signals. Additionally, the vehicle dataset captured in Xi'an, China can be used in subsequent research.
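The final step of the cascade, deriving an alarm region by comparing the detection before and after inpainting, can be sketched as follows. The abstract does not specify how the two boxes are compared, so the area-growth test, the 1.2 threshold, and the function names below are all hypothetical choices made only to illustrate the idea.

```python
def box_area(box):
    """Area of an (x1, y1, x2, y2) box."""
    x1, y1, x2, y2 = box
    return max(0, x2 - x1) * max(0, y2 - y1)

def box_union(a, b):
    """Smallest box covering both (x1, y1, x2, y2) boxes."""
    return (min(a[0], b[0]), min(a[1], b[1]), max(a[2], b[2]), max(a[3], b[3]))

def alarm_region(original_det, redetected, grow_thresh=1.2):
    """If the detection re-run on the inpainted image is noticeably larger
    than the original one, part of the object was likely hidden against the
    background; return the union box as the alarm region, otherwise None."""
    if box_area(redetected) > grow_thresh * box_area(original_det):
        return box_union(original_det, redetected)
    return None
```

For example, a detection of (10, 10, 50, 50) that grows to (10, 10, 80, 50) after inpainting would trigger an alarm over the union of the two boxes.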


Sensors ◽  
2020 ◽  
Vol 20 (11) ◽  
pp. 3204
Author(s):  
S. M. Nadim Uddin ◽  
Yong Ju Jung

Deep-learning-based image inpainting methods have shown significant promise for both rectangular and irregular holes. However, inpainting irregular holes presents numerous challenges owing to uncertainties in their shapes and locations. When relying solely on convolutional neural network (CNN) or adversarial supervision, plausible inpainting results cannot be guaranteed, because irregular holes need attention-based guidance to retrieve information for content generation. In this paper, we propose two new attention mechanisms, namely a mask pruning-based global attention module and a global and local attention module, to obtain global dependency information and local similarity information among the features for refined results. The proposed method is compared against state-of-the-art methods, and the experimental results show that it outperforms existing methods in both quantitative and qualitative measures.
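The core intuition of mask-pruned attention is that hole positions should attend only to valid positions when retrieving content. A minimal numpy sketch of that idea, assuming flattened features of shape (N, C) and a binary hole mask, is shown below; the actual module in the paper is a learned component, and the pruning-by-masking trick here is only a simplified stand-in.

```python
import numpy as np

def masked_global_attention(features, hole_mask):
    """Toy global attention for inpainting: every position attends over
    all positions, but attention to hole positions is pruned, so content
    is retrieved only from known regions. `features` is (N, C);
    `hole_mask` is (N,) with 1 marking hole pixels."""
    scores = features @ features.T / np.sqrt(features.shape[1])
    # Prune attention to hole columns: holes carry no reliable content.
    scores[:, hole_mask.astype(bool)] = -1e9
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ features

feat = np.array([[1., 0.], [1., 0.], [1., 0.], [10., 10.]])  # last row: hole
mask = np.array([0, 0, 0, 1])
out = masked_global_attention(feat, mask)
```

In the toy example the hole row is reconstructed purely from the three valid positions, which is exactly the guidance property the abstract argues plain CNN or adversarial supervision lacks.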


Author(s):  
Weiwei Cai ◽  
Zhanguo Wei

The latest deep-learning-based methods have achieved impressive results on the complex task of inpainting large missing areas in an image. Such methods generally attempt to generate one single "optimal" inpainting result, ignoring many other plausible results. However, considering the uncertainty of the inpainting task, a single result can hardly be regarded as the desired regeneration of the missing area. In view of this weakness in the design of previous algorithms, we propose a novel deep generative model equipped with a brand-new style extractor that extracts the style noise (a latent vector) from the ground truth image. Once obtained, the extracted style noise and the ground truth image are both fed into the generator. We also craft a consistency loss that guides the generated image to approximate the ground truth. Meanwhile, the same extractor captures the style noise from the generated image, which is forced to approach the input noise according to the consistency loss. After iterations, our generator learns the styles corresponding to multiple sets of noise. The proposed model can generate a (sufficiently large) number of inpainting results consistent with the context semantics of the image. Moreover, we verify the effectiveness of our model on three databases, i.e., CelebA, Agricultural Disease, and MauFlex. Compared to state-of-the-art inpainting methods, this model offers desirable inpainting results with both better quality and higher diversity. The code and model will be made available at https://github.com/vivitsai/SEGAN.
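The two-part consistency objective described above, an image term pulling the generation toward the ground truth plus a style term pulling the re-extracted noise toward the input noise, can be written down in a few lines. The L1 distances, the weighting `lam`, and the function name below are illustrative assumptions; the paper's exact loss formulation may differ.

```python
import numpy as np

def consistency_loss(gt_image, generated_image, input_noise, extracted_noise,
                     lam=1.0):
    """Toy version of the training objective: an image term that pulls the
    generated image toward the ground truth, plus a style term that pulls
    the style noise re-extracted from the generated image back toward the
    input noise, weighted by `lam`."""
    image_term = np.abs(generated_image - gt_image).mean()
    style_term = np.abs(extracted_noise - input_noise).mean()
    return image_term + lam * style_term
```

When both terms reach zero the generator reproduces the ground truth and the extractor reproduces the input noise, which is the fixed point the iterative training aims for; different noise vectors then steer the generator toward different plausible completions.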


2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Haoming Zhang ◽  
Yue Qi ◽  
Xiaoting Xue ◽  
Yahui Nan

Chinese ancient stone inscriptions contain information about traditional Chinese calligraphy culture and art. However, owing to their long history, natural erosion, and poor early protection measures, existing ancient stone inscriptions contain a great deal of noise, which hinders both reading and aesthetic appreciation. At present, digital technologies play an important role in the protection of cultural relics. For ancient stone inscriptions, we aim to obtain cleaner digital results free of multiple types of noise, yet few deep learning methods have been designed for processing stone inscription images. We therefore propose a basic framework for image denoising and inpainting of stone inscriptions based on deep learning methods. First, we collect as many images of stone inscriptions as possible and preprocess them to establish an inscription image dataset for denoising and inpainting. In addition, an improved GAN with a denoiser is used to generate more virtual stone inscription images to expand the dataset. On the basis of these collected and generated images, we design a stone inscription image denoising model based on multiscale feature fusion and introduce the Charbonnier loss function to improve it. To further improve the denoising results, an image inpainting model with a coherent semantic attention mechanism is introduced to recover, as much as possible, effective information removed by the denoising model. The experimental results show that our image denoising model achieves better results on PSNR, SSIM, and CEI, and the final results show obvious visual improvement over the original stone inscription images.
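The Charbonnier loss mentioned above is a standard, differentiable relative of the L1 loss, sqrt((x - y)^2 + eps^2); a minimal sketch over flat pixel lists is given below. The averaging over pixels and the default eps are conventional choices, not values taken from the paper.

```python
import math

def charbonnier_loss(pred, target, eps=1e-3):
    """Charbonnier loss: sqrt((x - y)^2 + eps^2), averaged over pixels.
    Near zero it behaves like L2 (smooth, stable gradients); away from
    zero it behaves like L1, making it robust to outliers in denoising."""
    return sum(math.sqrt((p - t) ** 2 + eps ** 2)
               for p, t in zip(pred, target)) / len(pred)
```

Its everywhere-nonzero gradient is what makes it preferable to plain L1 for training denoising networks.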


2021 ◽  
Vol 30 (03) ◽  
Author(s):  
Dezhi Bo ◽  
Ran Ma ◽  
Keke Wang ◽  
Min Su ◽  
Ping An

2020 ◽  
Vol 2020 ◽  
pp. 1-10
Author(s):  
Xinyi Wang ◽  
He Wang ◽  
Shaozhang Niu

Image inpainting algorithms have a wide range of applications and can be used for object removal in digital images. The development of semantic-level image inpainting technology poses great challenges to blind image forensics. Many conventional methods have been proposed for this problem, but they suffer from disadvantages such as high time complexity and low robustness to postprocessing operations. This paper therefore proposes a mask regional convolutional neural network (Mask R-CNN) approach for patch-based inpainting detection. Many deep learning methods have shown strong segmentation capability when labeled datasets are available, so we apply a deep neural network to the domain of inpainting forensics. This model can distinguish and learn different features between inpainted and non-inpainted regions. To reduce missed detections and improve detection accuracy, we also adjust the anchor scales to suit inpainted images and replace the original single-threshold non-maximum suppression with an improved non-maximum suppression (NMS). The experimental results demonstrate that this method achieves better detection performance than recent approaches to image inpainting forensics.
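For context, the baseline that the paper improves on is standard single-threshold NMS: keep the highest-scoring box, discard any remaining box whose IoU with it exceeds a fixed threshold, and repeat. The sketch below shows that baseline; the 0.5 threshold is the conventional default, not the paper's tuned value, and the paper's improved NMS modifies this suppression rule.

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / float(area_a + area_b - inter)

def nms(boxes, scores, thresh=0.5):
    """Standard single-threshold NMS: greedily keep the best-scoring box
    and drop every remaining box that overlaps it above `thresh`."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) <= thresh]
    return keep
```

A single hard threshold can suppress true positives that happen to overlap a stronger detection, which is one motivation for replacing it with an improved variant in the inpainting-forensics setting.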

