Image Matting
Recently Published Documents

TOTAL DOCUMENTS: 197 (FIVE YEARS: 62)
H-INDEX: 16 (FIVE YEARS: 5)

Author(s):  
Jizhizi Li ◽  
Jing Zhang ◽  
Stephen J. Maybank ◽  
Dacheng Tao
Keyword(s):  

Author(s):  
Dawa Chyophel Lepcha ◽  
Bhawna Goyal ◽  
Ayush Dogra

In an era of rapidly advancing technology, image matting plays a key role in image and video editing as well as image composition. It is widely used in significant real-world applications such as film production, where it supports visual effects, virtual zoom, image translation, image editing, and video editing. With recent advances in digital cameras, both professionals and consumers increasingly rely on matting techniques to support image editing tasks. Given an input image and a corresponding trimap that marks the foreground, background, and unknown regions, image matting estimates the alpha matte in the unknown region so that the foreground can be separated from the background. Numerous matting techniques have recently been proposed to extract high-quality mattes from images and video sequences. This paper presents a systematic overview of current image and video matting techniques, with emphasis on recently proposed and advanced algorithms. Image matting techniques are generally categorized by their underlying approach: sampling-based, propagation-based, combined sampling-and-propagation-based, and deep learning-based algorithms. Traditional algorithms, namely the sampling-based, propagation-based, and combined approaches, depend primarily on color information to predict the alpha matte. Because they rely mostly on low-level features and struggle with complex backgrounds, they tend to produce unwanted artifacts when foreground and background colors are similar or the foreground object is semi-transparent. Deep learning-based matting techniques have recently been introduced to address these shortcomings: rather than depending solely on color information, they learn to estimate the alpha matte from the input image and its trimap. This paper provides a comprehensive survey of recent image matting algorithms together with an in-depth comparative analysis.
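The relation underlying all of these approaches is the compositing equation I = αF + (1 − α)B, where I is the observed pixel color, F and B the foreground and background colors, and α the opacity. The snippet below is a purely illustrative sketch, not any published algorithm: it estimates α in the unknown region of a trimap by projecting each pixel's color onto the line between the mean known-foreground and known-background colors. The function name, the 255/128/0 trimap encoding, and the global-mean sampling strategy are assumptions made only for this example.

```python
import numpy as np

def naive_sample_matting(image, trimap):
    """Estimate an alpha matte from an image and trimap (toy sketch).

    image:  float array, shape (H, W, 3), values in [0, 1]
    trimap: uint8 array, shape (H, W); 255 = foreground,
            0 = background, anything else = unknown
    Returns an alpha matte of shape (H, W) in [0, 1].
    """
    fg_mask = trimap == 255
    bg_mask = trimap == 0
    unknown = ~(fg_mask | bg_mask)

    alpha = np.zeros(image.shape[:2], dtype=np.float64)
    alpha[fg_mask] = 1.0  # known foreground is fully opaque

    # Crude global "samples": mean colors of the known regions.
    F = image[fg_mask].mean(axis=0)
    B = image[bg_mask].mean(axis=0)

    # Solve I = alpha * F + (1 - alpha) * B for each unknown pixel
    # in the least-squares sense: project (I - B) onto (F - B).
    diff = F - B
    denom = np.dot(diff, diff) + 1e-8
    I = image[unknown]
    alpha[unknown] = np.clip((I - B) @ diff / denom, 0.0, 1.0)
    return alpha
```

Real sampling-based methods gather local foreground/background sample pairs per unknown pixel rather than global means, and propagation-based methods instead spread known alpha values through an affinity defined on neighboring colors.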


2021 ◽  
Vol 8 (2) ◽  
pp. 317-328
Author(s):  
Meng-Yao Cui ◽  
Zhe Zhu ◽  
Yulu Yang ◽  
Shao-Ping Lu

Existing color editing algorithms enable users to edit the colors in an image according to their own aesthetics. Unlike artists who have an accurate grasp of color, ordinary users are inexperienced in color selection and matching, and allowing non-professional users to edit colors arbitrarily may lead to unrealistic editing results. To address this issue, we introduce a palette-based approach for realistic object-level image recoloring. Our data-driven approach consists of an offline learning part that learns the color distributions for different objects in the real world, and an online recoloring part that first recognizes the object category, and then recommends appropriate realistic candidate colors learned in the offline step for that category. We also provide an intuitive user interface for efficient color manipulation. After color selection, image matting is performed to ensure smoothness of the object boundary. Comprehensive evaluation on various color editing examples demonstrates that our approach outperforms existing state-of-the-art color editing algorithms.
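As a rough sketch of the final step described above, the snippet below recolors an object toward a recommended candidate color while blending through the alpha matte, assuming the matte and the chosen color are already available. The luminance-preserving transfer and the function name are generic illustrative choices, not the paper's formulation.

```python
import numpy as np

def recolor_with_matte(image, alpha, target_rgb):
    """Shift an object's color toward a recommended palette color.

    image:      float array (H, W, 3) in [0, 1]
    alpha:      float array (H, W) in [0, 1], soft object mask from matting
    target_rgb: length-3 sequence, the candidate color chosen by the user
    """
    target = np.asarray(target_rgb, dtype=np.float64)
    weights = np.array([0.299, 0.587, 0.114])

    # Per-pixel luminance of the original object, used to keep shading.
    luma = image @ weights
    target_luma = float(target @ weights) + 1e-8

    # Rescale the target color by the local luminance so highlights and
    # shadows of the original object are preserved.
    recolored = np.clip(luma[..., None] * (target / target_luma), 0.0, 1.0)

    # Soft blend with the alpha matte for a smooth object boundary.
    a = alpha[..., None]
    return a * recolored + (1.0 - a) * image
```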


2021 ◽  
Author(s):  
Qinglin Liu ◽  
Haozhe Xie ◽  
Shengping Zhang ◽  
Bineng Zhong ◽  
Rongrong Ji

2021 ◽  
Author(s):  
Shuang Liu ◽  
Fujian Feng ◽  
Hongshan Gou ◽  
Zhulian Zhou ◽  
Man Tan ◽  
...  

2021 ◽  
Author(s):  
Jiajian Huang
Keyword(s):  

Author(s):  
Jizhizi Li ◽  
Jing Zhang ◽  
Dacheng Tao

Automatic image matting (AIM) refers to estimating the soft foreground from an arbitrary natural image without any auxiliary input like trimap, which is useful for image editing. Prior methods try to learn semantic features to aid the matting process while being limited to images with salient opaque foregrounds such as humans and animals. In this paper, we investigate the difficulties when extending them to natural images with salient transparent/meticulous foregrounds or non-salient foregrounds. To address the problem, a novel end-to-end matting network is proposed, which can predict a generalized trimap for any image of the above types as a unified semantic representation. Simultaneously, the learned semantic features guide the matting network to focus on the transition areas via an attention mechanism. We also construct a test set AIM-500 that contains 500 diverse natural images covering all types along with manually labeled alpha mattes, making it feasible to benchmark the generalization ability of AIM models. Results of the experiments demonstrate that our network trained on available composite matting datasets outperforms existing methods both objectively and subjectively. The source code and dataset are available at https://github.com/JizhiziLi/AIM.
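Below is a heavily simplified PyTorch sketch of the idea in this abstract: one head predicts a generalized trimap, and its transition-class probability acts as spatial attention on an alpha-regression head. The backbone, layer sizes, and fusion rule are placeholders chosen only for illustration; the authors' actual architecture is available in the linked repository.

```python
import torch
import torch.nn as nn

class TwoBranchMattingSketch(nn.Module):
    """Toy sketch: a semantic head predicts foreground/background/transition
    logits; the transition probability gates the matting head's features,
    and the two outputs are fused into a final alpha matte."""

    def __init__(self, feat=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(inplace=True),
        )
        # Semantic head: generalized trimap (3-class) logits.
        self.trimap_head = nn.Conv2d(feat, 3, 1)
        # Matting head: single-channel alpha detail.
        self.matting_head = nn.Sequential(
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, 1, 1),
        )

    def forward(self, x):
        f = self.encoder(x)
        trimap_logits = self.trimap_head(f)
        probs = torch.softmax(trimap_logits, dim=1)
        fg_prob = probs[:, 0:1]   # foreground class
        attn = probs[:, 2:3]      # transition class, used as attention
        alpha_detail = torch.sigmoid(self.matting_head(f * attn))
        # Coarse alpha in confident regions, detailed alpha in transitions.
        alpha = fg_prob * (1 - attn) + alpha_detail * attn
        return trimap_logits, alpha

# Example usage:
# model = TwoBranchMattingSketch()
# trimap_logits, alpha = model(torch.rand(1, 3, 64, 64))
```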

