Object Removal
Recently Published Documents


TOTAL DOCUMENTS: 116 (FIVE YEARS: 41)
H-INDEX: 12 (FIVE YEARS: 2)

2021 ◽ Author(s): Feng Huang ◽ Donghui Shen ◽ Weisong Wen ◽ Jiachen Zhang ◽ Li-Ta Hsu

2021 ◽ Vol 13 (15) ◽ pp. 2864
Author(s): Shitong Du ◽ Yifan Li ◽ Xuyou Li ◽ Menghao Wu

Simultaneous Localization and Mapping (SLAM) in an unknown environment is a crucial capability for intelligent mobile robots to achieve high-level navigation and interaction tasks. As a typical LiDAR-based SLAM algorithm, LiDAR Odometry and Mapping in Real-time (LOAM) has shown impressive results. However, LOAM uses only low-level geometric features and does not consider semantic information. Moreover, the lack of a dynamic object removal strategy prevents the algorithm from reaching higher accuracy. To this end, this paper extends the LOAM pipeline by integrating semantic information into the original framework. Specifically, we first propose a two-step dynamic object filtering strategy. Point-wise semantic labels are then used to improve feature extraction and the search for corresponding points. We evaluate the proposed method in many challenging scenarios, including highway, country, and urban sequences from the KITTI dataset. The results demonstrate that the proposed SLAM system outperforms state-of-the-art SLAM methods in both accuracy and robustness.
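For illustration, here is a minimal Python sketch of label-based dynamic point filtering, the kind of first step such a strategy could take. The class-ID set is a SemanticKITTI-style assumption, and the paper's actual two-step procedure is not detailed in the abstract:

```python
import numpy as np

# Hypothetical SemanticKITTI-style IDs for movable classes (car, bicycle,
# truck, person, ...); the paper's actual label set and its second
# filtering step are not specified in the abstract.
DYNAMIC_CLASSES = {10, 11, 13, 15, 18, 20, 30, 31, 32}

def filter_dynamic_points(points: np.ndarray, labels: np.ndarray):
    """Drop LiDAR points whose semantic label marks a movable object.

    points: (N, 3) array of x, y, z coordinates
    labels: (N,) array of per-point semantic class IDs
    """
    keep = ~np.isin(labels, list(DYNAMIC_CLASSES))
    return points[keep], labels[keep]

# Usage on a toy scan: 5 points, two of them labelled as "car" (10).
pts = np.random.rand(5, 3)
lab = np.array([40, 10, 48, 10, 50])  # road, car, sidewalk, car, building
static_pts, static_lab = filter_dynamic_points(pts, lab)
print(static_pts.shape)  # (3, 3)
```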


Author(s): Vu Tuan Hai ◽ Dang Thanh Vu ◽ Huynh Ho Thi Mong Trinh ◽ Pham The Bao

Recent advances in deep learning models have shown promising potential for object removal, the task of replacing undesired objects with appropriate pixel values using the known context. Deep-learning-based object removal is commonly cast as image-to-image (Img2Img) translation or inpainting. Instead of dealing with a large context, this paper targets a specific application of object removal: erasing the traces of braces from an image of teeth with braces (the braces2teeth problem). We solve the problem with three methods, each corresponding to a different dataset condition. First, we use a CycleGAN model to handle the case where paired training data are unavailable. In the second case, we create pseudo-paired data to train a Pix2Pix model. In the last case, we combine GraphCut with a generative inpainting model to build a user-interactive tool that can refine the result when the user is not satisfied with previous outputs. To the best of our knowledge, this study is one of the first attempts to address the braces2teeth problem with deep learning techniques, and it can be applied in various fields, from health care to entertainment.
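As a hedged illustration of the pseudo-paired data idea behind the Pix2Pix case, the following Python sketch overlays toy bracket-and-wire shapes on a clean teeth image to form a (braced, clean) training pair. The shapes, sizes, and placement are invented for illustration and are not taken from the paper:

```python
import numpy as np
from PIL import Image, ImageDraw

def add_synthetic_braces(teeth_img: Image.Image, n_brackets: int = 8) -> Image.Image:
    """Overlay simple bracket-and-wire shapes on a clean teeth photo.

    Toy stand-in for pseudo-pair creation; the paper's actual synthesis
    procedure is not described in the abstract.
    """
    img = teeth_img.copy()
    draw = ImageDraw.Draw(img)
    w, h = img.size
    y = h // 2                      # assumed wire height across the teeth row
    draw.line([(0, y), (w, y)], fill=(160, 160, 160), width=max(1, h // 60))
    for i in range(n_brackets):     # evenly spaced square brackets
        x = int((i + 0.5) * w / n_brackets)
        r = max(2, h // 30)
        draw.rectangle([x - r, y - r, x + r, y + r], fill=(120, 120, 130))
    return img

# A (braced, clean) pair ready for Pix2Pix-style supervised training.
clean = Image.new("RGB", (256, 256), (230, 210, 200))  # placeholder image
braced = add_synthetic_braces(clean)
pair = (braced, clean)
```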


2021 ◽ Vol 18 (2) ◽ pp. 172988142199654
Author(s): Joohyung Kim ◽ Janghun Hyeon ◽ Nakju Doh

As interest in image-based rendering increases, the need for multiview inpainting is emerging. Despite rapid progress in deep-learning-based single-image inpainting, such approaches impose no constraint that keeps colors consistent across multiple inpainted images. We target object removal in large-scale indoor spaces and propose a novel multiview inpainting pipeline that achieves color consistency and boundary consistency across multiple images. The first step of the pipeline creates color prior information on the masks by coloring point clouds from multiple images and projecting the colored point clouds onto the image planes. Next, a generative inpainting network takes a masked image, a color prior image, an imperfect guideline, and two different masks as inputs, and yields a refined guideline and an inpainted image as outputs. The color prior and guideline inputs ensure color and boundary consistency across multiple images. We validate the pipeline on real indoor datasets both quantitatively, using consistency distance and similarity distance (metrics we define for comparing multiview inpainting results), and qualitatively.
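The following Python sketch illustrates the first step of such a pipeline: projecting a colored point cloud into a view and keeping only the pixels that fall inside the object-removal mask as a color prior. The camera conventions (pinhole intrinsics K, world-to-camera pose T_cam_world) are assumptions for illustration, not taken from the paper:

```python
import numpy as np

def project_color_prior(points, colors, K, T_cam_world, mask, image_shape):
    """Project colored world points into one view to fill a color prior.

    points: (N, 3) world coordinates; colors: (N, 3) uint8 RGB
    K: (3, 3) camera intrinsics; T_cam_world: (4, 4) world-to-camera pose
    mask: (H, W) bool, True where the removed object leaves a hole
    No z-buffering: if several points hit one pixel, the last write wins.
    """
    H, W = image_shape
    prior = np.zeros((H, W, 3), dtype=np.uint8)
    pts_h = np.hstack([points, np.ones((len(points), 1))])
    cam = (T_cam_world @ pts_h.T).T[:, :3]       # points in camera frame
    in_front = cam[:, 2] > 0                     # keep points ahead of camera
    uv = (K @ cam[in_front].T).T
    uv = (uv[:, :2] / uv[:, 2:3]).round().astype(int)  # perspective divide
    cols = colors[in_front]
    ok = (uv[:, 0] >= 0) & (uv[:, 0] < W) & (uv[:, 1] >= 0) & (uv[:, 1] < H)
    uv, cols = uv[ok], cols[ok]
    inside = mask[uv[:, 1], uv[:, 0]]            # keep only pixels in the hole
    prior[uv[inside, 1], uv[inside, 0]] = cols[inside]
    return prior
```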


Author(s): Sameh Zarif ◽ Mina Ibrahim

Reconstructing and repairing corrupted or missing parts after object removal in digital video is an important problem in artwork restoration. Video inpainting is an active topic in video processing that deals with the recovery of corrupted or missing data. Most previous video inpainting approaches spend considerable time on an extensive search for the best patch to restore damaged frames. Moreover, most of them cannot handle gradual and sudden illumination changes, dynamic backgrounds, full object occlusion, and changes in object scale. In this paper, we present a complete video inpainting framework that avoids the extensive search process. The proposed framework consists of a segmentation stage based on a low-resolution version of the frame and background subtraction. A background inpainting stage restores the damaged background regions after static or moving object removal, based on the gray-level co-occurrence matrix (GLCM). A foreground inpainting stage is based on an object repository, and GLCM is used to complete moving objects while they are occluded. The proposed method reduces the inpainting time from hours to a few seconds and maintains spatial and temporal consistency. It works well when the background contains clutter or fake motion, and it can handle changes in object size and posture as well as full occlusion and illumination changes.
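Below is a minimal Python sketch of GLCM-based texture matching, the kind of comparison such a background-completion stage could rely on. It uses scikit-image's graycomatrix/graycoprops; the descriptor and the matching rule are assumptions, since the abstract does not specify them:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

PROPS = ("contrast", "homogeneity", "energy", "correlation")

def glcm_descriptor(patch: np.ndarray) -> np.ndarray:
    """Texture descriptor from a grayscale uint8 patch via its GLCM."""
    glcm = graycomatrix(patch, distances=[1],
                        angles=[0, np.pi / 2], levels=256,
                        symmetric=True, normed=True)
    return np.array([graycoprops(glcm, p).mean() for p in PROPS])

def best_patch(target: np.ndarray, candidates: list) -> int:
    """Index of the candidate whose GLCM statistics are closest to the
    target region's; a toy stand-in for the paper's GLCM-based completion,
    whose exact matching rule is not given in the abstract."""
    ref = glcm_descriptor(target)
    dists = [np.linalg.norm(glcm_descriptor(c) - ref) for c in candidates]
    return int(np.argmin(dists))

# Usage: pick among three random uint8 patches.
rng = np.random.default_rng(0)
patches = [rng.integers(0, 256, (32, 32), dtype=np.uint8) for _ in range(3)]
print(best_patch(patches[0], patches))  # 0: the target itself matches best
```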

