Facial Range Image Matching Using the Complex Wavelet Structural Similarity Metric

Author(s):  
Shalini Gupta ◽  
Mehul Sampat ◽  
Mia Markey ◽  
Alan Bovik ◽  
Zhou Wang


Author(s):  
G. Mandlburger

In recent years, the tremendous progress in image processing and camera technology has revived interest in photogrammetry-based surface mapping. With the advent of Dense Image Matching (DIM), deriving height values on a per-pixel basis became feasible, enabling Digital Elevation Models (DEMs) with a spatial resolution in the range of the ground sampling distance of the aerial images, which today is often below 10 cm. While mapping topography and vegetation constitutes the primary field of application for image-based surface reconstruction, multi-spectral images also make it possible to see through the water surface to the bottom underneath, provided sufficient water clarity. In this contribution, the feasibility of through-water dense image matching for mapping shallow-water bathymetry with off-the-shelf software is evaluated. In a case study, the SURE software is applied to three different coastal and inland water bodies. After refraction correction, the DIM point clouds and the DEMs derived from them are compared to concurrently acquired laser bathymetry data. The results confirm the general suitability of through-water dense image matching, but sufficient bottom texture and favorable environmental conditions (clear water, calm water surface) are preconditions for achieving accurate results. Water depths of up to 5 m could be mapped with a mean deviation between laser and through-water DIM in the decimeter range. Image-based water depth estimates, however, become unreliable in turbid or wavy water and over poorly textured bottoms.
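To make the refraction-correction step concrete, the following is a minimal Python sketch of a per-point correction. It assumes a planar, horizontal water surface, a roughly symmetric stereo geometry (so only the depth component needs rescaling), and a constant refractive index n = 1.336; the function name and interface are illustrative assumptions, not the workflow actually used in the study.

import numpy as np

def refraction_correct(point, camera_pos, water_level, n=1.336):
    """Refraction-correct one through-water DIM point (x, y, z).

    Under the symmetric-geometry assumption the horizontal position of
    the matched point is retained and only its depth below the water
    surface is rescaled: true depth = apparent depth * tan(theta_air) /
    tan(theta_water), which tends to n * apparent depth near nadir.
    """
    point = np.asarray(point, dtype=float)
    cam = np.asarray(camera_pos, dtype=float)
    d = point - cam
    d /= np.linalg.norm(d)
    theta_air = np.arccos(min(1.0, abs(d[2])))    # incidence angle vs. vertical
    theta_wat = np.arcsin(np.sin(theta_air) / n)  # Snell's law
    depth_apparent = water_level - point[2]       # apparent depth below surface
    if theta_air < 1e-6:                          # near-nadir limit
        depth_true = n * depth_apparent
    else:
        depth_true = depth_apparent * np.tan(theta_air) / np.tan(theta_wat)
    return np.array([point[0], point[1], water_level - depth_true])

For instance, a near-nadir point that appears 2.0 m below the surface is corrected to a true depth of about 2.67 m, consistent with the apparent-depth compression caused by refraction.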


Author(s):  
M. Hasheminasab ◽  
H. Ebadi ◽  
A. Sedaghat

In this paper we propose an integrated approach to increase the precision of feature point matching. Many algorithms have been developed to optimize short-baseline image matching, whereas wide-baseline image matching remains difficult to handle because of illumination differences and viewpoint changes. Fortunately, recent developments in the automatic extraction of local invariant features have made wide-baseline image matching possible. Matching algorithms based on the local feature similarity principle use a feature descriptor to establish correspondences between feature point sets. To date, the most remarkable descriptor is the scale-invariant feature transform (SIFT) descriptor, which is invariant to image rotation and scale and remains robust across a substantial range of affine distortion, noise, and changes in illumination. The epipolar constraint based on the RANSAC (random sample consensus) method is a conventional model for mismatch elimination, particularly in computer vision. Because only the distance from the epipolar line is considered, a few false matches remain in matching results selected on the basis of epipolar geometry and RANSAC. Aguilar et al. proposed the Graph Transformation Matching (GTM) algorithm to remove outliers, but it has difficulties when mismatched points are surrounded by the same local neighborhood structure. In this study, to overcome the limitations mentioned above, a new three-step matching scheme is presented in which the SIFT algorithm is used to obtain the initial corresponding point sets. In the second step, the RANSAC algorithm is applied to reduce the outliers. Finally, to remove the remaining mismatches, GTM based on the adjacent K-NN graph is implemented. Four close-range image datasets with viewpoint changes are used to evaluate the performance of the proposed method, and the experimental results indicate its robustness and capability. A sketch of this three-step pipeline is given below.
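The following is a minimal sketch of the three-step scheme (SIFT, then RANSAC with the epipolar constraint, then GTM) using OpenCV and NumPy. The GTM step is a simplified re-implementation based on the published description, and the parameter values (ratio threshold 0.8, K = 5, reprojection threshold 1.0 px) are illustrative assumptions, not the paper's settings.

import cv2
import numpy as np

def knn_adjacency(pts, k):
    """Binary adjacency matrix of the K-nearest-neighbour graph."""
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)
    adj = np.zeros_like(d)
    nbrs = np.argsort(d, axis=1)[:, :k]
    for i, row in enumerate(nbrs):
        adj[i, row] = 1.0
    return adj

def gtm_filter(pts1, pts2, k=5):
    """Simplified GTM: iteratively drop the correspondence whose K-NN
    graph structure disagrees most between the two point sets."""
    keep = list(range(len(pts1)))
    while len(keep) > k + 1:
        a1 = knn_adjacency(pts1[keep], k)
        a2 = knn_adjacency(pts2[keep], k)
        disagreement = np.abs(a1 - a2).sum(axis=1)
        worst = int(disagreement.argmax())
        if disagreement[worst] == 0:   # the two graphs agree: stop
            break
        keep.pop(worst)
    return keep

img1 = cv2.imread("left.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("right.jpg", cv2.IMREAD_GRAYSCALE)

# Step 1: SIFT features and putative matches (Lowe's ratio test).
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)
raw = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
good = [m for m, n in raw if m.distance < 0.8 * n.distance]
pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

# Step 2: RANSAC with the epipolar constraint (fundamental matrix).
F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.99)
inl1, inl2 = pts1[mask.ravel() == 1], pts2[mask.ravel() == 1]

# Step 3: GTM on the surviving matches to remove residual mismatches.
final = gtm_filter(inl1, inl2, k=5)
print(f"{len(good)} putative -> {len(inl1)} after RANSAC -> {len(final)} after GTM")

Note how the stages are complementary: RANSAC rejects matches far from the epipolar line, while GTM rejects matches that lie near the line but violate the local neighborhood structure.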


Author(s):  
Yingjing Lu

The Mean Square Error (MSE) has shown its strength when applied in deep generative models such as Auto-Encoders to model reconstruction loss. In the image domain especially, however, the limitation of MSE is obvious: it assumes pixel independence and ignores the spatial relationships among samples. This contradicts most Auto-Encoder architectures, which use convolutional layers to extract spatially dependent features. Building on the structural similarity metric (SSIM), we propose a novel level-weighted structural similarity (LWSSIM) loss for convolutional Auto-Encoders. Experiments with various Auto-Encoder variants on common datasets show that our loss outperforms both the MSE loss and the vanilla SSIM loss. We also provide reasons why our model succeeds in cases where the standard SSIM loss fails.
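To illustrate the idea, the following is a minimal PyTorch sketch of an SSIM-based, level-weighted reconstruction loss. The SSIM core follows the standard formulation; the dyadic down-sampling pyramid and the fixed level weights (0.5, 0.3, 0.2) are illustrative assumptions, not the paper's exact LWSSIM scheme.

import torch
import torch.nn.functional as F

def gaussian_window(size=11, sigma=1.5):
    """Separable 2-D Gaussian window, shape (1, 1, size, size)."""
    coords = torch.arange(size, dtype=torch.float32) - size // 2
    g = torch.exp(-(coords ** 2) / (2 * sigma ** 2))
    g /= g.sum()
    return torch.outer(g, g).view(1, 1, size, size)

def ssim(x, y, window, c1=0.01 ** 2, c2=0.03 ** 2):
    """Mean SSIM of two batches in [0, 1], shape (N, C, H, W)."""
    ch, size = x.shape[1], window.shape[-1]
    w = window.expand(ch, 1, size, size).to(x.device, x.dtype)
    pad = size // 2
    mu_x = F.conv2d(x, w, padding=pad, groups=ch)
    mu_y = F.conv2d(y, w, padding=pad, groups=ch)
    var_x = F.conv2d(x * x, w, padding=pad, groups=ch) - mu_x ** 2
    var_y = F.conv2d(y * y, w, padding=pad, groups=ch) - mu_y ** 2
    cov = F.conv2d(x * y, w, padding=pad, groups=ch) - mu_x * mu_y
    s = ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
    return s.mean()

def lwssim_loss(x, y, weights=(0.5, 0.3, 0.2)):
    """Weighted sum of (1 - SSIM) over a small down-sampling pyramid."""
    window = gaussian_window()
    loss = x.new_zeros(())
    for w_l in weights:
        loss = loss + w_l * (1.0 - ssim(x, y, window))
        x, y = F.avg_pool2d(x, 2), F.avg_pool2d(y, 2)
    return loss

# Usage in an Auto-Encoder training step (reconstruction = model(batch)):
# loss = lwssim_loss(reconstruction, batch)

Unlike MSE, this loss compares local luminance, contrast, and structure statistics within a Gaussian window, so it penalizes structural distortions that per-pixel errors miss, and the multi-level weighting lets coarser scales contribute to the gradient as well.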


2016 ◽  
Vol 10 (4) ◽  
pp. 045007 ◽  
Author(s):  
Jian-hua Guo ◽  
Fan Yang ◽  
Hai Tan ◽  
Jing-xue Wang ◽  
Zhi-heng Liu

Author(s):  
Kagehiro NAGAO ◽  
Takayuki OKATANI ◽  
Koichiro DEGUCHI
