spatially adaptive
Recently Published Documents


TOTAL DOCUMENTS: 442 (FIVE YEARS: 76)

H-INDEX: 37 (FIVE YEARS: 4)

Author(s): David Haynes, Kelly D. Hughes, Austin Rau, Anne M. Joseph

2021, Vol 2021 (29), pp. 323-327
Author(s): Ali Alsam, Hans Jakob Rivertz

A fast, spatially adaptive filter for smoothing colour images while preserving edges is proposed. Edges are preserved by a constraint that prohibits gradients from increasing during diffusion; this constraint proves very effective at preserving detail and remains flexible when stronger smoothing is desired. In addition, a filter of exponentially increasing diameter allows averaging of non-adjacent pixels, including pixels separated by strong edges.
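As a rough illustration of the idea, and not the authors' algorithm, the sketch below smooths an image with averaging filters of exponentially growing diameter and rejects, pixel by pixel, any update that would raise the local gradient magnitude. The function name, the Sobel-based gradient proxy, and the iteration count are assumptions made for this example.

import numpy as np
from scipy.ndimage import sobel, uniform_filter


def edge_constrained_smooth(img, iters=4):
    """Illustrative sketch only: smooth with averaging filters of
    exponentially growing diameter, rejecting per-pixel updates that
    would increase the local gradient magnitude (a coarse stand-in for
    the paper's diffusion constraint)."""
    out = img.astype(np.float64)

    def grad_mag(image):
        # summed Sobel gradient magnitude over all colour channels
        return sum(np.hypot(sobel(image[..., c], axis=0),
                            sobel(image[..., c], axis=1))
                   for c in range(image.shape[-1]))

    for k in range(iters):
        size = 2 ** (k + 1) + 1  # exponentially increasing filter diameter
        blurred = np.stack([uniform_filter(out[..., c], size=size)
                            for c in range(out.shape[-1])], axis=-1)
        # keep the blurred value only where the gradient does not grow
        keep = (grad_mag(blurred) <= grad_mag(out))[..., None]
        out = np.where(keep, blurred, out)
    return out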


2021, Vol 13 (19), pp. 3984
Author(s): Javier Marín, Sergio Escalera

This work presents the Satellite Style and Structure Generative Adversarial Network (SSGAN), a generative model of high-resolution satellite imagery intended to support image segmentation. The model combines spatially adaptive denormalization (SPADE) modules, which modulate activations according to the structure of a segmentation map, with global descriptor vectors that capture semantic information over OpenStreetMap (OSM) classes, and is thereby able to produce consistent aerial imagery. By decoupling the generation of aerial images into a structure map and a carefully defined style vector, we were able to improve the realism and geodiversity of the synthesis with respect to the state-of-the-art baseline. As a result, the proposed model allows us to control generation not only with respect to the desired structure but also with respect to a geographic area.
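For readers unfamiliar with SPADE, the following minimal PyTorch sketch shows the general form of a spatially adaptive denormalization layer: activations are normalized and then modulated by a per-pixel scale and bias predicted from the segmentation map. The channel counts and hidden width are illustrative assumptions, not values taken from the SSGAN paper.

import torch
import torch.nn as nn
import torch.nn.functional as F


class SPADE(nn.Module):
    """Minimal sketch of spatially adaptive denormalization: normalize the
    activations, then modulate them with a per-pixel scale (gamma) and
    bias (beta) predicted from the segmentation map."""

    def __init__(self, feat_channels, label_channels, hidden=128):
        super().__init__()
        self.norm = nn.BatchNorm2d(feat_channels, affine=False)
        self.shared = nn.Sequential(
            nn.Conv2d(label_channels, hidden, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        self.gamma = nn.Conv2d(hidden, feat_channels, kernel_size=3, padding=1)
        self.beta = nn.Conv2d(hidden, feat_channels, kernel_size=3, padding=1)

    def forward(self, x, segmap):
        # resize the segmentation map to the spatial size of the features
        seg = F.interpolate(segmap, size=x.shape[-2:], mode='nearest')
        h = self.shared(seg)
        return self.norm(x) * (1 + self.gamma(h)) + self.beta(h)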


2021, Vol 2021, pp. 1-8
Author(s): Kyoungmin Ko, Hyunmin Gwak, Nalinh Thoummala, Hyun Kwon, SungHwan Kim

In this paper, we propose a robust and reliable face recognition model that incorporates depth information, such as point clouds and depth maps, alongside RGB image data to avoid false facial verification caused by face spoofing attacks while improving the model's performance. The proposed model is driven by the spatially adaptive convolution (SAC) block of SqueezeSegV3, an attention block that enables the model to weight features according to the importance of their spatial location. We also use a large-margin loss instead of the softmax loss as the supervision signal, to enforce high discriminatory power. In experiments, the proposed model, which incorporates depth information, achieved 99.88% accuracy and an F1 score of 93.45%, outperforming baseline models that used RGB data alone.
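As one common instance of a large-margin supervision signal (the abstract does not specify which variant the authors adopt), the sketch below implements an additive angular margin, ArcFace-style classification head in PyTorch. The class name, scale, and margin values are assumptions for illustration.

import torch
import torch.nn as nn
import torch.nn.functional as F


class LargeMarginHead(nn.Module):
    """Sketch of an additive angular margin (ArcFace-style) head; the scale
    and margin values are illustrative, not the paper's settings."""

    def __init__(self, feat_dim, num_classes, scale=30.0, margin=0.5):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(num_classes, feat_dim))
        self.scale, self.margin = scale, margin

    def forward(self, features, labels):
        # cosine similarity between L2-normalized features and class weights
        cos = F.linear(F.normalize(features), F.normalize(self.weight))
        theta = torch.acos(cos.clamp(-1 + 1e-7, 1 - 1e-7))
        # add the angular margin only to the logit of the true class
        target = F.one_hot(labels, cos.size(1)).bool()
        logits = self.scale * torch.where(target,
                                          torch.cos(theta + self.margin),
                                          cos)
        return F.cross_entropy(logits, labels)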


Author(s): Mingrui Zhu, Changcheng Liang, Nannan Wang, Xiaoyu Wang, Zhifeng Li, ...

We present a face photo-sketch synthesis model that converts a face photo into an artistic face sketch or recovers a photo-realistic facial image from a sketch portrait. Recent progress has been driven by convolutional neural networks (CNNs) and generative adversarial networks (GANs), so promising results can be obtained with real-time end-to-end architectures. However, convolutional architectures tend to focus on local information and neglect long-range spatial dependency, which limits the ability of existing approaches to preserve global structural information. In this paper, we propose a Sketch-Transformer network for face photo-sketch synthesis consisting of three closely related modules: a multi-scale feature and position encoder for patch-level feature and position embedding, a self-attention module for capturing long-range spatial dependency, and a multi-scale spatially-adaptive denormalization decoder for image reconstruction. This design enables the model to generate plausible texture detail while maintaining global structural information. Extensive experiments show that the proposed method achieves significant improvements over state-of-the-art approaches in both quantitative and qualitative evaluations.
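To make the self-attention ingredient concrete, here is a minimal PyTorch sketch of global attention over patch tokens, the usual mechanism for capturing long-range spatial dependency. It is not the Sketch-Transformer architecture itself; the patch size, embedding width, and head count are illustrative assumptions, and positional embeddings are omitted for brevity.

import torch
import torch.nn as nn


class PatchSelfAttention(nn.Module):
    """Sketch of global self-attention over image patches (not the
    Sketch-Transformer itself); patch size, width, and head count are
    illustrative, and positional embeddings are omitted."""

    def __init__(self, in_channels=3, embed_dim=256, patch_size=16, heads=8):
        super().__init__()
        # non-overlapping patch embedding via a strided convolution
        self.embed = nn.Conv2d(in_channels, embed_dim,
                               kernel_size=patch_size, stride=patch_size)
        self.attn = nn.MultiheadAttention(embed_dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(embed_dim)

    def forward(self, x):
        tokens = self.embed(x)                   # (B, C, H/p, W/p)
        b, c, h, w = tokens.shape
        seq = tokens.flatten(2).transpose(1, 2)  # (B, N, C) patch tokens
        attended, _ = self.attn(seq, seq, seq)   # each patch attends to all others
        seq = self.norm(seq + attended)          # residual connection + layer norm
        return seq.transpose(1, 2).reshape(b, c, h, w)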


Author(s): Morten Bojsen-Hansen, Michael Bang Nielsen, Konstantinos Stamatelos, Robert Bridson

2021, Vol 2021, pp. 1-10
Author(s): Chenglin Zuo, Jun Ma, Hao Xiong, Lin Ran

Digital images captured with CMOS/CCD image sensors are prone to noise due to inherent electronic fluctuations and low photon counts. To reduce this noise efficiently, a novel image denoising strategy is proposed that exploits both nonlocal self-similarity and local shape adaptation. The method noise of an initial nonlocal means (NLM) estimate is further exploited through wavelet thresholding, which extracts a residual image. Using both the initial estimate and this residual image, spatially adaptive patch shapes are defined and new weights are calculated, yielding better denoising performance for NLM. Experimental results demonstrate that the proposed method significantly outperforms the original NLM and achieves denoising performance competitive with state-of-the-art methods.
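The sketch below illustrates only the first ingredient, under stated assumptions: an NLM estimate of a 2-D grayscale float image, followed by wavelet soft-thresholding of the method noise to recover structure removed by the first pass. It does not reproduce the paper's spatially adaptive patch shapes or re-weighting; the function name, wavelet, decomposition level, and threshold are assumptions.

import numpy as np
import pywt
from skimage.restoration import denoise_nl_means, estimate_sigma


def nlm_plus_method_noise(noisy):
    """Sketch for a 2-D grayscale float image: NLM estimate, then wavelet
    soft-thresholding of the method noise to recover structure removed by
    the first pass. The paper's adaptive patch shapes are not reproduced."""
    sigma = estimate_sigma(noisy)
    estimate = denoise_nl_means(noisy, h=1.15 * sigma, sigma=sigma,
                                patch_size=5, patch_distance=6)
    method_noise = noisy - estimate                  # what NLM removed
    # soft-threshold the wavelet coefficients of the method noise
    coeffs = pywt.wavedec2(method_noise, 'db4', level=2)
    thresholded = [coeffs[0]] + [
        tuple(pywt.threshold(c, 3 * sigma, mode='soft') for c in level)
        for level in coeffs[1:]
    ]
    residual = pywt.waverec2(thresholded, 'db4')
    residual = residual[:noisy.shape[0], :noisy.shape[1]]  # crop any padding
    # add the recovered structure back to the initial estimate
    return estimate + residual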

