image editing
Recently Published Documents

TOTAL DOCUMENTS: 275 (five years: 80)
H-INDEX: 19 (five years: 3)

Author(s): Xinrong Zhang, Yanghao Li, Yuxing Han, Jiangtao Wen

Video editing is a demanding job: it requires skilled artists with considerable stamina and multidisciplinary knowledge spanning cinematography, aesthetics, and more. As a result, a growing body of research has proposed semi-automatic and even fully automatic solutions to reduce this workload. Conventional methods are usually designed to follow a few simple guidelines and therefore lack the flexibility and capacity to learn more complex ones. Advances in computer vision and machine learning compensate for these shortcomings and make AI-assisted editing feasible, yet no survey has consolidated this emerging research. This paper summarizes the development of automatic video editing, with particular attention to the applications of AI in partial and full editing workflows. We focus on video editing and discuss related work from multiple perspectives: modality, type of input video, methodology, optimization, dataset, and evaluation metric. We also summarize progress in the image editing domain, i.e., style transfer, retargeting, and colorization, and examine how those techniques might transfer to the video domain. Finally, we conclude the survey and discuss open problems.


Author(s): Dawa Chyophel Lepcha, Bhawna Goyal, Ayush Dogra

In an era of rapidly advancing technology, image matting plays a key role in image and video editing as well as image composition. It has been widely used in significant real-world applications such as film production, for visual effects, virtual zoom, image translation, image editing, and video editing. With recent advancements in digital cameras, both professionals and consumers increasingly rely on matting techniques to facilitate image editing. Image matting estimates the alpha matte in the unknown region in order to separate the foreground from the background of an image, given an input image and its corresponding trimap, which marks the known foreground and the unknown region. Numerous matting techniques have been proposed recently to extract high-quality mattes from images and video sequences. This paper gives a systematic overview of current image and video matting techniques, with emphasis on recent and advanced algorithms. In general, image matting techniques are categorized by their underlying approach: sampling-based, propagation-based, combined sampling- and propagation-based, and deep learning-based algorithms. Traditional matting algorithms, such as the sampling-based, propagation-based, or combined approaches, rely primarily on color information to predict the alpha matte. However, because they mostly use low-level features, they struggle with complex backgrounds and tend to produce unwanted artifacts when the foreground and background colors are similar or the foreground object is semi-transparent. Deep learning-based matting techniques have recently been introduced to address these shortcomings. Rather than depending on color information alone, they learn to estimate the alpha matte from the input image and its trimap. This paper provides a comprehensive survey of recent image matting algorithms together with an in-depth comparative analysis.
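For readers unfamiliar with the formulation, the short Python/NumPy sketch below illustrates the compositing model that matting inverts, I = αF + (1 − α)B, together with a per-pixel least-squares alpha estimate for the unknown trimap region under the simplifying assumption that foreground and background colors are known; the function names are illustrative and not taken from any surveyed algorithm.

```python
import numpy as np

def composite(fg, bg, alpha):
    """Matting equation: I = alpha * F + (1 - alpha) * B."""
    a = alpha[..., None]                      # broadcast alpha over the color channels
    return a * fg + (1.0 - a) * bg

def estimate_alpha(image, fg, bg, trimap, eps=1e-6):
    """Per-pixel least-squares alpha assuming F and B are known:
    alpha = ((I - B) . (F - B)) / ||F - B||^2, clipped to [0, 1].
    Known trimap regions keep their labels (0 = background, 255 = foreground)."""
    num = np.sum((image - bg) * (fg - bg), axis=-1)
    den = np.sum((fg - bg) ** 2, axis=-1) + eps
    alpha = np.clip(num / den, 0.0, 1.0)
    alpha[trimap == 0] = 0.0
    alpha[trimap == 255] = 1.0
    return alpha
```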


2021, pp. 016555152110500
Author(s): Tanzila Saba, Amjad Rehman, Tariq Sadad, Zahid Mehmood

Image tampering is one of the significant issues of the modern era. Powerful, technologically advanced image editing tools and the widespread sharing of images on social media have raised questions about data integrity. The protection of images is currently uncertain and a serious concern, especially when they are transferred over the Internet, so it is essential to detect anomalies in images using artificial intelligence techniques. The simplest form of image forgery is copy-move, in which a part of an image is replicated elsewhere in the same image to hide unwanted content. Image processing with handcrafted features typically searches for patterns associated with the duplicated content, which limits its use for classifying large volumes of data. Deep learning approaches, on the other hand, achieve promising results, but their performance depends on the training data and on careful tuning of hyperparameters. We therefore propose a custom convolutional neural network (CNN) architecture together with a pre-trained ResNet101 model used through a transfer learning approach. Both models are trained on five different datasets and evaluated in terms of accuracy, precision, recall, and F-score, with the highest accuracy of 98.4% achieved on the Coverage dataset.
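A minimal PyTorch sketch of the transfer-learning side of this setup is given below, assuming a torchvision ResNet101 backbone whose final layer is replaced by a two-class (authentic vs. copy-move forged) head; the paper's custom CNN branch, datasets, and training schedule are not reproduced, and the function name is hypothetical.

```python
import torch
import torch.nn as nn
from torchvision import models

def build_forgery_detector(freeze_backbone: bool = True) -> nn.Module:
    """ResNet101 pre-trained on ImageNet with a new binary classification head."""
    model = models.resnet101(weights=models.ResNet101_Weights.IMAGENET1K_V1)
    if freeze_backbone:
        for p in model.parameters():
            p.requires_grad = False           # train only the new head
    model.fc = nn.Linear(model.fc.in_features, 2)
    return model

model = build_forgery_detector()
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()             # labels: 0 = authentic, 1 = forged
```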


2021, Vol. 10 (6), pp. 3147-3155
Author(s): Vikas Srivastava, Sanjay Kumar Yadav

Sharing information through images is now commonplace. Advances in technology and user-friendly image editing tools make it easy to edit images and spread fake news through social networking platforms. Because forged images are generated with advanced editing tools, it is very challenging for image forensics to detect the micro-discrepancies that distort micro-patterns. This paper proposes an image forensic detection technique that applies a multi-level discrete wavelet transform (DWT) for digital image filtering. Canny edge detection is used to locate image edges, which feed an Otsu-based enhanced local ternary pattern (OELTP) capable of detecting forgery-related artifacts. The DWT is applied to the Cb and Cr components of the image, and edge texture is used to improve the Otsu global threshold, which in turn drives ELTP feature extraction. A support vector machine (SVM) classifies each image as forged or authentic. Performance is evaluated on three openly available datasets: CASIA v1, CASIA v2, and Columbia. The proposed approach achieves better detection accuracy than several previous state-of-the-art methods.
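The sketch below (Python with OpenCV, PyWavelets, and scikit-learn) outlines the general shape of such a pipeline: a wavelet decomposition of the chroma channels, Canny edges, an Otsu threshold, and an SVM classifier. The edge-masked histogram is only a stand-in for the paper's OELTP descriptor, and all names and parameter values are illustrative.

```python
import cv2
import numpy as np
import pywt
from sklearn.svm import SVC

def extract_features(bgr):
    """Toy forensic feature vector: DWT detail band of Cr/Cb, Canny edges,
    Otsu threshold, and an edge-masked histogram (OELTP stand-in)."""
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    feats = []
    for ch in (1, 2):                                    # Cr and Cb channels
        _, (_, _, cD) = pywt.dwt2(ycrcb[..., ch].astype(float), 'haar')
        band = cv2.normalize(cD, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
        edges = cv2.Canny(band, 50, 150)                 # edge map of the detail band
        otsu_thr, _ = cv2.threshold(band, 0, 255,
                                    cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        hist, _ = np.histogram(band[edges > 0], bins=32, range=(0, 256))
        feats.append(hist / (hist.sum() + 1e-9))
        feats.append(np.array([otsu_thr / 255.0]))       # global threshold as a feature
    return np.concatenate(feats)

# clf = SVC(kernel='rbf').fit(X_train, y_train)   # rows of X_train built with extract_features
```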


2021
Author(s): Cleverson Rodrigues, Grace Queiroz David, André Rodrigues dos Reis

Science is based on evidence that can be measured or observed through methodical techniques and expressed in several ways, either quantitatively or qualitatively. The technical photograph is one of the most important tools for disclosing results. In microbiological research, many findings are documented through variables closely tied to the culture medium: pH and color variation, halo formation, overlapping structures, colony shape, and others. The use of technical photographs as a strategy for experimental observation and reliable representation is therefore indispensable. The protocol presented here describes the construction of a photographic support for microbiological tests run on Petri dishes, photographed with a smartphone to obtain high-quality images, together with tools for editing the images in PowerPoint. The support consists of a paper tube with a transparent border; its reduced light penetration avoids problems such as reflections on the Petri dishes or from the surrounding environment. The editing consists of adjusting the photographs and of clipping and pasting them onto uniform backgrounds to provide further detail. The protocol produced a standardized, high-quality photograph collection that is ideal for comparative portraits of microbiological behavior. Image editing allowed framing and greater visibility of physical and biological structures in the photographs shown in the manuscript, for example by removing noise, altering the background, and correcting deformities or irregularities. This protocol is a tool that helps researchers in the knowledge-obtaining process and can be applied to different experiments or adapted to a wide range of research subjects.


Author(s): Jinwei Wang, Wei Huang, Xiangyang Luo, Yun-Qing Shi, Sunil Kr. Jha

Owing to the popularity of the JPEG format in recent years, JPEG images are frequently subjected to editing operations, and tampered images often carry traces of non-aligned double JPEG (NA-DJPEG) compression. By detecting the presence of NA-DJPEG compression, one can verify whether a given JPEG image has been tampered with. However, few methods can identify NA-DJPEG-compressed images when the primary quality factor is greater than the secondary quality factor. To address this challenging task, this article proposes a novel feature extraction scheme based on the optimized pixel difference (OPD), a new measure of blocking artifacts. First, the three color channels (RGB) of the reconstructed image obtained by decompressing a given JPEG color image are mapped into spherical coordinates to compute the amplitude and two angles (azimuth and zenith). Then, 16 histograms of the OPD along the horizontal and vertical directions are calculated for the amplitude and the two angles. Finally, a feature set formed by arranging the bin values of these histograms is used for binary classification. Experiments demonstrate the effectiveness of the proposed method, which significantly outperforms existing typical methods on this task.
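A rough NumPy sketch of the first two stages as described, mapping RGB to spherical coordinates and histogramming differences across 8x8 JPEG block boundaries, is shown below; the boundary-difference histogram is a simplified stand-in for the paper's OPD measure, and the function names are hypothetical.

```python
import numpy as np

def rgb_to_spherical(rgb):
    """Map each RGB pixel to (amplitude, azimuth, zenith) spherical coordinates."""
    rgb = rgb.astype(float)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    amplitude = np.sqrt(r ** 2 + g ** 2 + b ** 2)
    azimuth = np.arctan2(g, r)
    zenith = np.arccos(np.clip(b / (amplitude + 1e-9), -1.0, 1.0))
    return amplitude, azimuth, zenith

def block_boundary_histograms(channel, bins=16):
    """Histograms of absolute pixel differences straddling 8x8 block boundaries,
    along the horizontal and vertical directions (simplified blocking measure)."""
    hists = []
    for axis in (1, 0):                                   # horizontal, then vertical
        diff = np.abs(np.diff(channel, axis=axis))
        idx = np.arange(7, diff.shape[axis], 8)           # differences across block edges
        sel = np.take(diff, idx, axis=axis)
        hists.append(np.histogram(sel, bins=bins,
                                  range=(0.0, float(sel.max()) + 1e-9))[0])
    return np.concatenate(hists)
```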


2021, Vol. 2021, pp. 1-7
Author(s): Shan Liu, Yun Bo, Lingling Huang

With the further development of the social economy, people pay more attention to spiritual and cultural needs. As the main setting of people's daily lives, the home is central to creating this cultural atmosphere. China has fully entered the era of interior decoration, and people pay more and more attention to decorative effect as well as the comfort and individuality of the decoration. It is therefore of practical significance to develop applications of decorative art in interior space design. However, current style transfer for interior decoration art design tends to over-emphasize the artistic style, which leads to image distortion, and content transfer errors occur easily during the transfer process. Applying image style transfer to interior decoration art can effectively solve these problems. This paper analyzes the basic theory of image style transfer through style transfer techniques, the Gram matrix, and Poisson image editing, and designs the transfer process around image segmentation, content loss, an enhanced style loss, and a Poisson image editing constraint on the image's spatial gradient, thereby realizing the application of image style transfer in interior decoration art. The experimental results show that applying image style transfer to interior decoration art design effectively avoids content errors and distortions in the interior decoration and achieves a good style transfer effect.
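For reference, the Gram matrix that underlies the style loss mentioned above can be written in a few lines of PyTorch; this is the standard neural style transfer formulation rather than the paper's full objective, which also includes content, segmentation, and Poisson-editing terms.

```python
import torch

def gram_matrix(features: torch.Tensor) -> torch.Tensor:
    """Gram matrix of CNN feature maps; features has shape (B, C, H, W)."""
    b, c, h, w = features.shape
    f = features.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)            # normalised channel correlations

def style_loss(generated_feats: torch.Tensor, style_feats: torch.Tensor) -> torch.Tensor:
    """Squared distance between the Gram matrices of generated and style features."""
    return torch.mean((gram_matrix(generated_feats) - gram_matrix(style_feats)) ** 2)
```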


2021
Author(s): Evgeny Patrikeev

Good image editing tools that modify colors of specified image regions or deform the depicted objects have always been an important part of graphics editors. Manual approaches to this task are too time-consuming, while fully automatic methods are not robust enough. Thus, the ideal editing method should include a combination of manual and automated components. This thesis shows that radial basis functions provide a suitable “engine” for two common image editing problems, where interactivity requires both reasonable performance and fast training. There are many freeform image deformation methods to be used, each having advantages and disadvantages. This thesis explores the use of radial basis functions for freeform image deformation and compares it to a standard approach that uses B-spline warping. Edit propagation is a promising user-guided color editing technique, which, instead of requiring precise selection of the region being edited, accepts color edits as a few brush strokes over an image region and then propagates these edits to the regions with similar appearance. This thesis focuses on an approach to edit propagation, which considers user input as an incomplete set of values of an intended edit function. The approach interpolates between the user input values using radial basis functions to find the edit function for the whole image. While the existing approach applies the user-specified edits to all the regions with similar colors, this thesis presents an extension that propagates the edits more selectively. In addition to color information of each image point, it also takes the surrounding texture into account and better distinguishes different objects, giving the algorithm more information about the user-specified region and making the edit propagation more precise.
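A minimal sketch of RBF-based edit propagation in the spirit described above is given below, using SciPy's Gaussian RBF interpolant over a per-pixel (color, position) feature vector; the thesis additionally uses texture features, which are omitted here, and the feature weighting and function names are assumptions of this sketch.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def propagate_edits(image, stroke_xy, stroke_values, spatial_weight=0.5, epsilon=2.0):
    """Interpolate sparse user edits (stroke_values sampled at the (x, y) pixel
    coordinates in stroke_xy) over the whole image with a Gaussian RBF defined
    on a normalised (R, G, B, x, y) feature space."""
    h, w, _ = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    feats = np.concatenate([
        image.reshape(-1, 3).astype(float) / 255.0,               # colour features
        spatial_weight * np.stack([xs.ravel() / w, ys.ravel() / h], axis=1),
    ], axis=1)
    sample_idx = stroke_xy[:, 1] * w + stroke_xy[:, 0]            # (x, y) -> flat row index
    rbf = RBFInterpolator(feats[sample_idx], stroke_values,
                          kernel='gaussian', epsilon=epsilon)
    return rbf(feats).reshape(h, w)                               # dense edit map
```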


2021, Vol. 2021 (29), pp. 7-12
Author(s): Hoang Le, Taehong Jeong, Abdelrahman Abdelhamed, Hyun Joon Shin, Michael S. Brown

Most cameras still encode images in the small-gamut sRGB color space. This reliance on sRGB is disappointing, as modern display hardware and image-editing software are capable of using wider-gamut color spaces. Converting a small-gamut image to a wider gamut is a challenging problem. Many devices and software packages use colorimetric strategies that map colors from the small gamut to their equivalent colors in the wider gamut. This colorimetric approach avoids visual changes in the image but leaves much of the target wide-gamut space unused. Non-colorimetric approaches stretch or expand the small-gamut colors to enhance image colors, at the risk of color distortions. We take a unique approach to gamut expansion by treating it as a restoration problem. A key insight used in our approach is that cameras internally encode images in a wide-gamut color space (i.e., ProPhoto) before compressing and clipping the colors to sRGB's smaller gamut. Based on this insight, we use a software-based camera ISP to generate a dataset of 5,000 pairs of images encoded in both sRGB and ProPhoto. This dataset enables us to train a neural network to perform wide-gamut color restoration. Our deep-learning strategy achieves significant improvements over existing solutions and produces color-rich images with few to no visual artifacts.
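For context, the colorimetric sRGB-to-ProPhoto mapping that the paper treats as the baseline can be sketched in NumPy as follows; the matrices are the commonly cited Bradford-adapted values and should be read as assumptions of this sketch rather than the paper's exact processing chain.

```python
import numpy as np

SRGB_TO_XYZ_D65 = np.array([[0.4124564, 0.3575761, 0.1804375],
                            [0.2126729, 0.7151522, 0.0721750],
                            [0.0193339, 0.1191920, 0.9503041]])
BRADFORD_D65_TO_D50 = np.array([[ 1.0478112, 0.0228866, -0.0501270],
                                [ 0.0295424, 0.9904844, -0.0170491],
                                [-0.0092345, 0.0150436,  0.7521316]])
PROPHOTO_TO_XYZ_D50 = np.array([[0.7976749, 0.1351917, 0.0313534],
                                [0.2880402, 0.7118741, 0.0000857],
                                [0.0000000, 0.0000000, 0.8252100]])

def srgb_to_prophoto(srgb):
    """Colorimetric sRGB -> ProPhoto RGB for float images in [0, 1], shape (..., 3)."""
    lin = np.where(srgb <= 0.04045, srgb / 12.92,
                   ((srgb + 0.055) / 1.055) ** 2.4)                 # decode sRGB gamma
    xyz_d50 = lin @ SRGB_TO_XYZ_D65.T @ BRADFORD_D65_TO_D50.T       # to XYZ, adapt D65 -> D50
    pp_lin = np.clip(xyz_d50 @ np.linalg.inv(PROPHOTO_TO_XYZ_D50).T, 0.0, 1.0)
    return np.where(pp_lin < 1.0 / 512, 16.0 * pp_lin,
                    pp_lin ** (1.0 / 1.8))                          # encode ProPhoto gamma
```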

