A Novel Image Inpainting Framework Using Regression

2021 ◽  
Vol 21 (3) ◽  
pp. 1-16
Author(s):  
Somanka Maiti ◽  
Ashish Kumar ◽  
Smriti Jain ◽  
Gaurav Bhatnagar

In this article, a blockwise regression-based image inpainting framework is proposed. The core idea is to fill the unknown region in two stages: first extrapolate the edges into the unknown region, and then fill the unknown pixel values in each sub-region demarcated by the extended edges. Canny edge detection and linear edge extension are used to identify and extend edges into the unknown region, followed by regression within each sub-region to predict the unknown pixel values. Two regression models, based on K-nearest neighbours and support vector machines, are used to predict the unknown pixel values. The proposed framework has the advantage of inpainting without requiring prior training on any image dataset. Extensive experiments on different images with contrasting distortions demonstrate the robustness of the proposed framework, and a detailed comparative analysis shows that the proposed technique outperforms existing state-of-the-art image inpainting methods. Finally, the proposed techniques are applied to MRI images suffering from susceptibility artifacts to illustrate the practical usage of the proposed work.
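
The per-sub-region regression step can be sketched with a minimal K-nearest-neighbours regressor that predicts an unknown pixel's intensity from the coordinates of known pixels in the same sub-region. This is a simplified stand-in for the framework described above, assuming coordinate-based features; the edge-extension stage is omitted:

```python
import numpy as np

def knn_inpaint_pixel(known_coords, known_vals, target, k=3):
    """Predict one unknown pixel's intensity as the mean intensity of its
    k nearest known pixels (Euclidean distance in image coordinates)."""
    d = np.linalg.norm(known_coords - target, axis=1)
    nearest = np.argsort(d)[:k]
    return known_vals[nearest].mean()

# Toy sub-region: a smooth intensity ramp with one missing pixel at (1, 1).
coords = np.array([[0, 0], [0, 1], [0, 2],
                   [1, 0],         [1, 2],
                   [2, 0], [2, 1], [2, 2]], dtype=float)
vals = coords.sum(axis=1) * 10.0          # intensity = 10 * (row + col)
estimate = knn_inpaint_pixel(coords, vals, np.array([1.0, 1.0]), k=4)
```

With k=4, the four axis-adjacent neighbours are averaged, recovering the ramp value 20 exactly on this smooth toy patch.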

2004 ◽  
Vol 1 (1) ◽  
pp. 131-142
Author(s):  
Ljupčo Todorovski ◽  
Sašo Džeroski ◽  
Peter Ljubič

Both equation discovery and regression methods aim at inducing models of numerical data. While equation discovery methods are usually evaluated in terms of the comprehensibility of the induced model, the evaluation of regression methods emphasizes predictive accuracy. In this paper, we present Ciper, an efficient method for the discovery of polynomial equations, and empirically evaluate its predictive performance on standard regression tasks. The evaluation shows that polynomials compare favorably, in terms of degree of fit and complexity, to the linear and piecewise regression models induced by existing state-of-the-art regression methods.
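
The degree-of-fit comparison underlying the evaluation can be illustrated by fitting a linear and a polynomial model to the same data and comparing residual error. This is a generic numpy sketch of the comparison, not the Ciper algorithm itself:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-2, 2, 50)
y = 1.5 * x**2 - x + 0.5 + rng.normal(0, 0.05, x.size)  # quadratic data

def sse(deg):
    """Sum of squared residuals of a least-squares polynomial fit."""
    coeffs = np.polyfit(x, y, deg)
    return float(np.sum((np.polyval(coeffs, x) - y) ** 2))

linear_err, poly_err = sse(1), sse(2)   # degree-2 fit should win here
```

On data with genuine curvature, the polynomial's residual error is far smaller, which is the sense in which polynomials can "compare favorably" in degree of fit.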


Author(s):  
Wenbin Li ◽  
Lei Wang ◽  
Jing Huo ◽  
Yinghuan Shi ◽  
Yang Gao ◽  
...  

The core idea of metric-based few-shot image classification is to directly measure the relations between query images and support classes to learn transferable feature embeddings. Previous work mainly focuses on image-level feature representations, which cannot effectively estimate a class's distribution due to the scarcity of samples. Some recent work shows that representations based on local descriptors can be richer than image-level ones. However, such works still rely on a less effective instance-level metric, especially a symmetric one, to measure the relation between a query image and a support class. Given the naturally asymmetric relation between a query image and a support class, we argue that an asymmetric measure is better suited to metric-based few-shot learning. To that end, we propose a novel Asymmetric Distribution Measure (ADM) network for few-shot learning, which calculates a joint local and global asymmetric measure between the multivariate local distributions of a query and a class. Moreover, a task-aware Contrastive Measure Strategy (CMS) is proposed to further enhance the measure function. On the popular miniImageNet and tieredImageNet benchmarks, ADM achieves state-of-the-art results, validating our design of asymmetric distribution measures for few-shot learning. The source code can be downloaded from https://github.com/WenbinLee/ADM.git.
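
A familiar example of an asymmetric measure between two feature distributions is the KL divergence between fitted Gaussians; the sketch below illustrates the asymmetry the abstract argues for, and is not ADM's exact formulation:

```python
import numpy as np

def gaussian_kl(mu0, cov0, mu1, cov1):
    """KL(N0 || N1) for multivariate Gaussians; note KL is not symmetric."""
    d = mu0.size
    inv1 = np.linalg.inv(cov1)
    diff = mu1 - mu0
    return 0.5 * (np.trace(inv1 @ cov0)
                  + diff @ inv1 @ diff
                  - d
                  + np.log(np.linalg.det(cov1) / np.linalg.det(cov0)))

mu_q, cov_q = np.zeros(2), np.eye(2)        # "query" local distribution
mu_s, cov_s = np.ones(2), 2.0 * np.eye(2)   # "support class" distribution
kl_qs = gaussian_kl(mu_q, cov_q, mu_s, cov_s)
kl_sq = gaussian_kl(mu_s, cov_s, mu_q, cov_q)
```

The two directions give different values (here kl_qs = ln 2 while kl_sq is roughly twice as large), which is why directionality matters when measuring a query against a class.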


2020 ◽  
Vol 34 (07) ◽  
pp. 12605-12612 ◽  
Author(s):  
Jie Yang ◽  
Zhiquan Qi ◽  
Yong Shi

This paper develops a multi-task learning framework that incorporates image structure knowledge to assist image inpainting, which is not well explored in previous works. The primary idea is to train a shared generator to simultaneously complete the corrupted image and the corresponding structures, namely edges and gradients, thus implicitly encouraging the generator to exploit relevant structure knowledge while inpainting. Meanwhile, we introduce a structure embedding scheme that explicitly embeds the learned structure features into the inpainting process, providing possible preconditions for image completion. Specifically, a novel pyramid structure loss is proposed to supervise structure learning and embedding. Moreover, an attention mechanism is developed to further exploit the recurrent structures and patterns in the image to refine the generated structures and contents. Through multi-task learning, structure embedding, and attention, our framework takes advantage of structure knowledge and outperforms several state-of-the-art methods on benchmark datasets, both quantitatively and qualitatively.
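
The structure targets the shared generator is asked to reproduce can be sketched as gradient maps computed with finite differences. This is a hypothetical minimal version of the "edge and gradient" targets; the pyramid structure loss and attention mechanism are not shown:

```python
import numpy as np

def gradient_maps(img):
    """Horizontal and vertical finite-difference gradients, zero-padded
    to the input shape, as simple structure targets."""
    gx = np.zeros_like(img, dtype=float)
    gy = np.zeros_like(img, dtype=float)
    gx[:, :-1] = img[:, 1:] - img[:, :-1]
    gy[:-1, :] = img[1:, :] - img[:-1, :]
    return gx, gy

# A vertical step edge: gradients are nonzero only at the boundary column.
img = np.zeros((4, 4))
img[:, 2:] = 1.0
gx, gy = gradient_maps(img)
```

Supervising the generator on maps like these (alongside the RGB output) is what makes structure completion an auxiliary task rather than a separate pipeline stage.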


2020 ◽  
Vol 13 (9) ◽  
pp. 189 ◽  
Author(s):  
Ahmed Ibrahim ◽  
Rasha Kashef ◽  
Menglu Li ◽  
Esteban Valencia ◽  
Eric Huang

The Bitcoin (BTC) market presents itself as a unique new medium of exchange, and it is often hailed as the “currency of the future”. Simulating the BTC market in the price discovery process presents a unique set of market mechanics. The supply of BTC is determined by the number of miners and available BTC and by scripting algorithms for blockchain hashing, while both speculators and investors determine demand. One major question, then, is how BTC is valued and how different factors influence it. In this paper, the BTC market mechanics are broken down using vector autoregression (VAR) and Bayesian vector autoregression (BVAR) prediction models. The models proved to be very useful in simulating past BTC prices using a feature set of exogenous variables. The VAR model allows the analysis of individual factors of influence. This analysis contributes to an in-depth understanding of what drives BTC, and it can be useful to numerous stakeholders. This paper’s primary motivation is to capitalize on market movement and identify the significant price drivers, including the stakeholders impacted, the effects of time, as well as supply, demand, and other characteristics. The VAR and BVAR models are compared with state-of-the-art forecasting models over two time periods. Experimental results show that the vector-autoregression-based models achieved better performance than the traditional autoregression models and the Bayesian regression models.
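
A VAR model relates each variable's current value to lagged values of all variables jointly. A minimal VAR(1) fit by ordinary least squares can be sketched as follows; this is a generic illustration, and the paper's exogenous feature set and Bayesian variant are not reproduced:

```python
import numpy as np

def fit_var1(Y):
    """Fit Y_t = c + A @ Y_{t-1} by least squares.
    Y has shape (T, k); returns intercept c (k,) and coefficient A (k, k)."""
    X = np.hstack([np.ones((Y.shape[0] - 1, 1)), Y[:-1]])   # [1, Y_{t-1}]
    B, *_ = np.linalg.lstsq(X, Y[1:], rcond=None)
    return B[0], B[1:].T

# Simulate a stable 2-variable VAR(1) and recover its coefficients.
rng = np.random.default_rng(1)
A_true = np.array([[0.5, 0.1], [0.0, 0.4]])
Y = np.zeros((2000, 2))
for t in range(1, 2000):
    Y[t] = A_true @ Y[t - 1] + rng.normal(0, 0.1, 2)
c_hat, A_hat = fit_var1(Y)
```

Because each equation includes lags of every variable, the fitted matrix A exposes the cross-variable influences that make VAR useful for analyzing individual price drivers.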


2020 ◽  
Vol 34 (04) ◽  
pp. 5700-5708 ◽  
Author(s):  
Jianghao Shen ◽  
Yue Wang ◽  
Pengfei Xu ◽  
Yonggan Fu ◽  
Zhangyang Wang ◽  
...  

While increasingly deep networks are still in general desired for achieving state-of-the-art performance, for many specific inputs a simpler network might already suffice. Existing works exploited this observation by learning to skip convolutional layers in an input-dependent manner. However, we argue their binary decision scheme, i.e., either fully executing or completely bypassing one layer for a specific input, can be enhanced by introducing finer-grained, “softer” decisions. We therefore propose a Dynamic Fractional Skipping (DFS) framework. The core idea of DFS is to hypothesize layer-wise quantization (to different bitwidths) as intermediate “soft” choices to be made between fully utilizing and skipping a layer. For each input, DFS dynamically assigns a bitwidth to both weights and activations of each layer, where fully executing and skipping could be viewed as two “extremes” (i.e., full bitwidth and zero bitwidth). In this way, DFS can “fractionally” exploit a layer's expressive power during input-adaptive inference, enabling finer-grained accuracy-computational cost trade-offs. It presents a unified view to link input-adaptive layer skipping and input-adaptive hybrid quantization. Extensive experimental results demonstrate the superior tradeoff between computational cost and model expressive power (accuracy) achieved by DFS. More visualizations also indicate a smooth and consistent transition in the DFS behaviors, especially the learned choices between layer skipping and different quantizations when the total computational budgets vary, validating our hypothesis that layer quantization could be viewed as intermediate variants of layer skipping. Our source code and supplementary material are available at https://github.com/Torment123/DFS.
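
The "soft" middle ground between fully executing and skipping a layer can be sketched as uniform quantization whose bitwidth interpolates between full precision and an all-zero (skipped) output. The routine below is a hypothetical illustration of that spectrum, not the DFS gating network:

```python
import numpy as np

def quantize(x, bits):
    """Uniformly quantize x in [-1, 1] to the given bitwidth.
    bits=0 means the layer is skipped entirely (output is zero)."""
    if bits == 0:
        return np.zeros_like(x)
    levels = 2 ** bits - 1
    return np.round((np.clip(x, -1, 1) + 1) / 2 * levels) / levels * 2 - 1

x = np.linspace(-1, 1, 9)
skipped = quantize(x, 0)    # zero bitwidth: layer fully bypassed
coarse = quantize(x, 2)     # 2-bit "fractional" execution
full = quantize(x, 16)      # near full precision
```

Sweeping the bitwidth from 0 upward traces exactly the skipping-to-full-execution continuum the framework exploits for finer-grained accuracy-cost trade-offs.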


2020 ◽  
Author(s):  
Hylke Beck ◽  
Seth Westra ◽  
Eric Wood

We introduce a unique set of global observation-based climatologies of daily precipitation (P) occurrence (related to the lower tail of the P distribution) and peak intensity (related to the upper tail of the P distribution). The climatologies were produced using Random Forest (RF) regression models trained with an unprecedented collection of daily P observations from 93,138 stations worldwide. Five-fold cross-validation was used to evaluate the generalizability of the approach and to quantify uncertainty globally. The RF models were found to provide highly satisfactory performance, yielding cross-validation coefficient of determination (R²) values from 0.74 for the 15-year return-period daily P intensity to 0.86 for the >0.5 mm d⁻¹ daily P occurrence. The performance of the RF models was consistently superior to that of state-of-the-art reanalysis (ERA5) and satellite (IMERG) products. The highest P intensities over land were found along the western equatorial coast of Africa, in India, and along coastal areas of Southeast Asia. Using a 0.5 mm d⁻¹ threshold, P was estimated to occur on 23.2% of days on average over the global land surface (excluding Antarctica). The climatologies, including uncertainty estimates, will be released as the Precipitation DISTribution (PDIST) dataset via www.gloh2o.org/pdist. We expect the dataset to be useful for numerous purposes, such as the evaluation of climate models, the bias correction of gridded P datasets, and the design of hydraulic structures in poorly gauged regions.
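
The five-fold cross-validation protocol behind the reported R² values can be sketched generically, with a plain least-squares model on synthetic data standing in for the Random Forests:

```python
import numpy as np

def cv_r2(X, y, folds=5):
    """Mean out-of-fold coefficient of determination (R^2) for a
    least-squares linear model, using contiguous fold splits."""
    idx = np.arange(len(y))
    scores = []
    for part in np.array_split(idx, folds):
        train = np.setdiff1d(idx, part)
        Xtr = np.hstack([np.ones((len(train), 1)), X[train]])
        Xte = np.hstack([np.ones((len(part), 1)), X[part]])
        beta, *_ = np.linalg.lstsq(Xtr, y[train], rcond=None)
        pred = Xte @ beta
        ss_res = np.sum((y[part] - pred) ** 2)
        ss_tot = np.sum((y[part] - y[part].mean()) ** 2)
        scores.append(1 - ss_res / ss_tot)
    return float(np.mean(scores))

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(0, 0.1, 200)
score = cv_r2(X, y)
```

Because each fold's score is computed only on held-out stations (here, held-out rows), the resulting R² measures generalization rather than in-sample fit.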


2013 ◽  
Vol 2013 ◽  
pp. 1-8 ◽  
Author(s):  
Fathia H. A. Salem ◽  
Khaled S. Mohamed ◽  
Sundes B. K. Mohamed ◽  
Amal A. El Gehani

The state of the art in prosthetic hand technology is moving rapidly forward. However, only two types of prosthetic hands are available in Libya: the passive hand and the mechanical hand. It is very important, therefore, to develop the prostheses existing in Libya so that their use is as practical as possible. Considering the case of amputation below the elbow, with two movements (opening and closing the hand), this work discusses two stages: first, developing the operation of the body-powered prosthetic hand by controlling it via the surface electromyography (sEMG) signal through a dsPIC30F4013 processor and a servo motor; and second, software based on fuzzy logic concepts to detect and process the patient's EMG signal and to train the patient to control the movements without having to fit the prosthetic arm. The proposed system has been practically implemented and tested, and it gave satisfactory results; in particular, the processor used provides fast, high-performance processing compared to other types of microcontrollers.
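
The fuzzy-logic decision step can be sketched with triangular membership functions over a normalized, rectified sEMG amplitude. The thresholds and set names below are illustrative placeholders, not values from the paper:

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b on support (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def classify_emg(amplitude):
    """Map a normalized sEMG amplitude in [0, 1] to a hand command by
    taking the fuzzy set with the highest membership."""
    memberships = {
        "rest":  tri(amplitude, -0.1, 0.0, 0.4),
        "open":  tri(amplitude, 0.2, 0.5, 0.8),
        "close": tri(amplitude, 0.6, 1.0, 1.1),
    }
    return max(memberships, key=memberships.get)

commands = [classify_emg(a) for a in (0.05, 0.5, 0.95)]
```

Overlapping membership functions give graded transitions between commands, which is what makes a fuzzy controller tolerant of the noisy, patient-dependent EMG amplitudes encountered in training.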


2013 ◽  
Vol 39 (4) ◽  
pp. 885-916 ◽  
Author(s):  
Heeyoung Lee ◽  
Angel Chang ◽  
Yves Peirsman ◽  
Nathanael Chambers ◽  
Mihai Surdeanu ◽  
...  

We propose a new deterministic approach to coreference resolution that combines the global information and precise features of modern machine-learning models with the transparency and modularity of deterministic, rule-based systems. Our sieve architecture applies a battery of deterministic coreference models one at a time from highest to lowest precision, where each model builds on the previous model's cluster output. The two stages of our sieve-based architecture, a mention detection stage that heavily favors recall, followed by coreference sieves that are precision-oriented, offer a powerful way to achieve both high precision and high recall. Further, our approach makes use of global information through an entity-centric model that encourages the sharing of features across all mentions that point to the same real-world entity. Despite its simplicity, our approach gives state-of-the-art performance on several corpora and genres, and has also been incorporated into hybrid state-of-the-art coreference systems for Chinese and Arabic. Our system thus offers a new paradigm for combining knowledge in rule-based systems that has implications throughout computational linguistics.
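
The sieve architecture, with deterministic models applied from highest to lowest precision and each building on the previous clustering, can be sketched as a simple pipeline. The two sieves here are hypothetical stand-ins for the system's actual models:

```python
def exact_match_sieve(mentions, clusters):
    """High-precision sieve: merge mentions with identical surface text."""
    seen = {}
    for i, m in enumerate(mentions):
        key = m.lower()
        if key in seen:
            clusters[i] = clusters[seen[key]]
        else:
            seen[key] = i
    return clusters

def head_match_sieve(mentions, clusters):
    """Lower-precision sieve: merge mentions sharing a final head word."""
    seen = {}
    for i, m in enumerate(mentions):
        key = m.lower().split()[-1]
        if key in seen:
            clusters[i] = clusters[seen[key]]
        else:
            seen[key] = i
    return clusters

def resolve(mentions, sieves):
    """Apply sieves in precision order; each refines the previous output."""
    clusters = list(range(len(mentions)))   # every mention starts alone
    for sieve in sieves:
        clusters = sieve(mentions, clusters)
    return clusters

mentions = ["Barack Obama", "Obama", "barack obama", "the president"]
clusters = resolve(mentions, [exact_match_sieve, head_match_sieve])
```

Ordering the sieves by precision means the safest merges happen first, so later, riskier sieves operate on clusters that are already reliable.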


2018 ◽  
Vol 2018 ◽  
pp. 1-16 ◽  
Author(s):  
Lei He ◽  
Yan Xing ◽  
Kangxiong Xia ◽  
Jieqing Tan

In view of the drawback of most image inpainting algorithms, in which texture is not prominent, an adaptive inpainting algorithm based on continued fractions is proposed in this paper. To restore each damaged point, the information from known pixel points around it is used to interpolate its intensity. The proposed method includes two steps: first, Thiele’s rational interpolation combined with the mask image is used to adaptively interpolate the intensities of damaged points, yielding an initial repaired image; then, Newton-Thiele’s rational interpolation is used to refine the initial repaired image into the final result. To show the superiority of the proposed algorithm, extensive experiments were conducted on damaged images. Subjective and objective evaluations were used to assess the quality of the repaired images, with the objective evaluation based on a comparison of Peak Signal-to-Noise Ratios (PSNRs). The experimental results show that the proposed algorithm achieves better visual effect and higher PSNR compared with state-of-the-art methods.
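
Thiele's rational interpolation builds a continued fraction from inverse differences of the sample points. A minimal one-dimensional version, without the paper's mask handling or Newton-Thiele refinement, can be sketched as:

```python
import numpy as np

def thiele_interpolate(xs, fs, x):
    """Evaluate Thiele's continued-fraction interpolant at x.
    Coefficients a[k] are the inverse differences phi[k][k]."""
    n = len(xs)
    phi = np.zeros((n, n))
    phi[0] = fs
    for k in range(1, n):
        for j in range(k, n):
            phi[k][j] = (xs[j] - xs[k - 1]) / (phi[k - 1][j] - phi[k - 1][k - 1])
    a = np.diag(phi)
    val = a[n - 1]
    for k in range(n - 2, -1, -1):     # fold the fraction from the inside out
        val = a[k] + (x - xs[k]) / val
    return val

# A rational function like f(x) = 1/x is reproduced exactly from 3 samples.
y = thiele_interpolate([1.0, 2.0, 4.0], [1.0, 0.5, 0.25], 3.0)
```

Rational interpolants of this kind can reproduce functions with sharp variation that polynomial interpolation handles poorly, which motivates their use for texture-preserving inpainting.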

