Evaluation of Intensity and Color Corner Detectors for Affine Invariant Salient Regions

Author(s):  
Nicu Sebe ◽  
Theo Gevers ◽  
Joost van de Weijer ◽  
Sietse Dijkstra

2010 ◽  
Vol 30 (6) ◽  
pp. 1619-1621 ◽  
Author(s):  
Guo-ying WANG ◽  
Chun-ying LIANG
2021 ◽  
Vol 13 (2) ◽  
pp. 274
Author(s):  
Guobiao Yao ◽  
Alper Yilmaz ◽  
Li Zhang ◽  
Fei Meng ◽  
Haibin Ai ◽  
...  

Available stereo matching algorithms produce a large number of false-positive matches, or only a few true positives, across oblique stereo images with a large baseline. This undesired result is caused by the complex perspective deformation and radiometric distortion across the images. To address this problem, we propose a novel affine-invariant feature matching algorithm with subpixel accuracy based on an end-to-end convolutional neural network (CNN). In our method, we adopt and modify a Hessian affine network, which we refer to as IHesAffNet, to obtain affine-invariant Hessian regions within a deep learning framework. To improve the correlation between corresponding features, we introduce an empirical weighted loss function (EWLF) based on negative samples selected with K nearest neighbors, and then generate highly discriminative deep learning-based descriptors with our multiple hard network structure (MTHardNets). Following this step, conjugate features are produced by using the Euclidean distance ratio as the matching metric, and the accuracy of the matches is optimized through deep learning transform-based least squares matching (DLT-LSM). Finally, experiments on large-baseline oblique stereo images acquired by ground close-range and unmanned aerial vehicle (UAV) platforms verify the effectiveness of the proposed approach, and comprehensive comparisons demonstrate that our matching algorithm outperforms state-of-the-art methods in terms of accuracy, distribution, and correct-match ratio. The main contributions of this article are: (i) the proposed MTHardNets can generate high-quality descriptors; and (ii) the IHesAffNet can produce substantial affine-invariant corresponding features with reliable transform parameters.
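The Euclidean distance-ratio matching metric used above follows the classic nearest-neighbour ratio test. A minimal sketch, assuming plain NumPy descriptor arrays and a hypothetical 0.8 threshold (the abstract does not specify the actual threshold or descriptor pipeline):

```python
import numpy as np

def ratio_test_matches(desc_a, desc_b, ratio=0.8):
    """Match descriptors by nearest-neighbour Euclidean distance ratio.

    A candidate match (i, j) is kept only when the nearest neighbour in
    desc_b is sufficiently closer than the second-nearest one, which
    rejects ambiguous correspondences.
    """
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)  # Euclidean distances
        j, k = np.argsort(dists)[:2]                # two nearest neighbours
        if dists[j] < ratio * dists[k]:
            matches.append((i, int(j)))
    return matches
```

For example, a descriptor at the origin matched against a set containing one near-duplicate and two distant outliers passes the test, while an ambiguous descriptor equidistant to two candidates would be discarded.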


Author(s):  
Jennifer Duncan

Abstract: The Brascamp–Lieb inequalities are a very general class of classical multilinear inequalities, well-known examples of which are Hölder's inequality, Young's convolution inequality, and the Loomis–Whitney inequality. Conventionally, a Brascamp–Lieb inequality is defined as a multilinear Lebesgue bound on the product of the pullbacks of a collection of functions $$f_j\in L^{q_j}(\mathbb {R}^{n_j})$$, for $$j=1,\ldots ,m$$, under some corresponding linear maps $$B_j$$. This regime is now fairly well understood (Bennett et al. in Geom Funct Anal 17(5):1343–1415, 2008), and moving forward there has been interest in nonlinear generalisations, where $$B_j$$ is now taken to belong to some suitable class of nonlinear maps. While there has been great recent progress on the question of local nonlinear Brascamp–Lieb inequalities (Bennett et al. in Duke Math J 169(17):3291–3338, 2020), there has been relatively little regarding global results; this paper represents some progress along this line of enquiry. We prove a global nonlinear Brascamp–Lieb inequality for 'quasialgebraic' maps, a class that encompasses polynomial and rational maps, as a consequence of the multilinear Kakeya-type inequalities of Zhang and Zorin-Kranich. We incorporate a natural affine-invariant weight that both compensates for local degeneracies and yields a constant with minimal dependence on the underlying maps. We then show that this inequality generalises Young's convolution inequality on algebraic groups with a suboptimal constant.
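For reference, the conventional linear Brascamp–Lieb inequality described above is, in its standard form, the bound

```latex
\int_{\mathbb{R}^n} \prod_{j=1}^{m} f_j(B_j x)\,\mathrm{d}x
  \;\le\; C \prod_{j=1}^{m} \|f_j\|_{L^{q_j}(\mathbb{R}^{n_j})},
\qquad B_j \colon \mathbb{R}^n \to \mathbb{R}^{n_j} \text{ linear and surjective,}
```

where the finiteness of the best constant $$C$$ depends on the maps $$B_j$$ and exponents $$q_j$$; a necessary scaling condition is $$\sum_{j=1}^{m} n_j/q_j = n$$. The nonlinear setting studied in the paper replaces the linear maps $$B_j$$ with quasialgebraic ones.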


Sensors ◽  
2021 ◽  
Vol 21 (3) ◽  
pp. 970
Author(s):  
Miguel Ángel Martínez-Domingo ◽  
Juan Luis Nieves ◽  
Eva M. Valero

Saliency prediction is an important and challenging task within the computer vision community. Many models exist that try to predict the salient regions of a scene from its RGB image values. New models continue to be developed, and spectral imaging techniques may potentially overcome the limitations found when using RGB images. However, the experimental study of such models based on spectral images is difficult because of the lack of available data to work with. This article presents the first eight-channel multispectral image database of outdoor urban scenes, together with gaze data recorded with an eye tracker from several observers performing different visualization tasks. In addition, the information from this database is used to study whether the complexity of the images has an impact on the saliency maps retrieved from the observers. Results show that more complex images do not correlate with larger differences in the saliency maps obtained.
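Comparing fixation-derived saliency maps, as the study above does, is commonly done with the Pearson correlation coefficient (CC) metric. A minimal sketch of that metric, assuming the maps are already aligned 2-D NumPy arrays (the paper's actual comparison protocol is not detailed in the abstract):

```python
import numpy as np

def saliency_cc(map_a, map_b):
    """Pearson correlation coefficient (CC) between two saliency maps.

    Each map is standardized to zero mean and unit variance, so the
    mean of the elementwise product equals the correlation; 1.0 means
    identical spatial structure, -1.0 means anti-correlated maps.
    """
    a = (map_a - map_a.mean()) / (map_a.std() + 1e-12)
    b = (map_b - map_b.mean()) / (map_b.std() + 1e-12)
    return float((a * b).mean())
```

A map compared with itself yields a CC near 1.0, while comparing it with its negation yields a CC near -1.0, which makes the metric a convenient symmetric measure of how much two observers' maps differ.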


IEEE Access ◽  
2019 ◽  
Vol 7 ◽  
pp. 40027-40037 ◽  
Author(s):  
Yijun Liu ◽  
Zuoteng Xu ◽  
Wujian Ye ◽  
Ziwen Zhang ◽  
Shaowei Weng ◽  
...  

2011 ◽  
Vol 325 (1) ◽  
pp. 269-281
Author(s):  
José Joaquín Bernal ◽  
Ángel del Río ◽  
Juan Jacobo Simón

2012 ◽  
Vol 116 (4) ◽  
pp. 524-537 ◽  
Author(s):  
Luis Ferraz ◽  
Xavier Binefa
