State of the art in high density image matching

2014 ◽  
Vol 29 (146) ◽  
pp. 144-166 ◽  
Author(s):  
Fabio Remondino ◽  
Maria Grazia Spera ◽  
Erica Nocerino ◽  
Fabio Menna ◽  
Francesco Nex


Author(s):  
N. Haala ◽  
S. Cavegn

Ongoing innovations in matching algorithms are continuously improving the quality of geometric surface representations generated automatically from aerial images. This development motivated the launch of the joint ISPRS/EuroSDR project “Benchmark on High Density Aerial Image Matching”, which aims at evaluating photogrammetric 3D data capture in view of current developments in dense multi-view stereo image matching. Originally, the test targeted image-based DSM computation from conventional aerial image flights for different land-use types and image block configurations. The second phase then put an additional focus on high-quality, high-resolution 3D geometric data capture in complex urban areas. This includes both the extension of the test scenario to oblique aerial image flights and the generation of filtered point clouds as an additional output of the respective multi-view reconstruction. The paper uses the preliminary outcomes of the benchmark to demonstrate the state of the art in airborne image matching, with a special focus on high-quality geometric data capture in urban scenarios.




Author(s):  
S. Cavegn ◽  
N. Haala ◽  
S. Nebiker ◽  
M. Rothermel ◽  
P. Tutzauer

Both improvements in camera technology and new pixel-wise matching approaches have triggered the further development of software tools for image-based 3D reconstruction. Meanwhile, research groups as well as commercial vendors provide photogrammetric software to generate dense, reliable and accurate 3D point clouds and Digital Surface Models (DSM) from highly overlapping aerial images. In order to evaluate the potential of these algorithms in view of the ongoing software developments, a suitable test bed is provided by the ISPRS/EuroSDR initiative “Benchmark on High Density Image Matching for DSM Computation”. This paper discusses the proposed test scenario to investigate the potential of dense matching approaches for 3D data capture from oblique airborne imagery. For this purpose, an oblique aerial image block captured at a GSD of 6 cm in the west of Zürich by a Leica RCD30 Oblique Penta camera is used. Within this paper, the potential test scenario is demonstrated using matching results from two software packages, Agisoft PhotoScan and SURE from the University of Stuttgart. As oblique images are frequently used for data capture at building facades, the 3D point clouds are mainly investigated in such areas. Reference data from terrestrial laser scanning are used to evaluate the quality of the dense image matching results for several facade patches with respect to accuracy, density and reliability.
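The facade-patch evaluation described above amounts to cloud-to-reference distance statistics. A minimal sketch of such a comparison, assuming NumPy/SciPy and synthetic data in place of the actual RCD30 matching results and TLS reference:

```python
import numpy as np
from scipy.spatial import cKDTree

def cloud_to_reference_distances(test_pts, ref_pts):
    """Nearest-neighbour distance from each matched point to the
    reference (e.g. TLS) cloud -- a common accuracy proxy."""
    tree = cKDTree(ref_pts)
    dists, _ = tree.query(test_pts, k=1)
    return dists

# toy example: a noisy planar "facade patch" vs. its exact reference
rng = np.random.default_rng(0)
grid = np.stack(np.meshgrid(np.linspace(0, 1, 20),
                            np.linspace(0, 1, 20)), -1).reshape(-1, 2)
ref = np.column_stack([grid, np.zeros(len(grid))])    # plane z = 0
test = ref + rng.normal(scale=0.01, size=ref.shape)   # simulated matching noise
d = cloud_to_reference_distances(test, ref)
print(round(float(d.mean()), 4))  # mean deviation on the order of the noise
```

Density can be reported analogously as points per unit facade area, and reliability as the fraction of points within a tolerance of the reference surface.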


Author(s):  
Y. Q. Dong ◽  
L. Zhang ◽  
X. M. Cui ◽  
H. B. Ai

Although many filter algorithms have been presented over the past decades, these algorithms are usually designed for LiDAR point clouds and cannot completely separate the ground points from dense image matching (DIM) point clouds derived from oblique aerial images, owing to the high density and variability of DIM point clouds. To solve this problem, a new automatic filter algorithm is developed on the basis of adaptive TIN models. First, the differences between LiDAR and DIM point clouds which influence the filtering results are analysed in this paper. To avoid the influence of vegetation, which DIM point clouds cannot penetrate, during the search for seed points, the algorithm makes use of building facades to obtain ground points located on roads as seed points and to construct the initial TIN. Then a new densification strategy is applied, in which the densification thresholds change in each iteration, unlike in other methods where they remain fixed. Finally, we use DIM point clouds of Potsdam produced by PhotoScan to evaluate the method proposed in this paper. The experimental results show that the proposed method can not only completely separate the ground points from the DIM point clouds but also obtains better filtering results than TerraSolid.
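The adaptive densification idea can be shown in a stripped-down form. The sketch below is a toy 1-D analogue, not the paper's algorithm: seed ground points grow into a ground surface by accepting points close to the current interpolation, and the acceptance threshold tightens each iteration:

```python
import numpy as np

def progressive_filter_1d(x, z, seed_idx, d0=1.0, shrink=0.5, iters=4):
    """Toy 1-D analogue of progressive-TIN filtering: grow a ground
    line from seed points, accepting points whose vertical distance to
    the current ground interpolation is below a threshold `d` that
    tightens each iteration (the adaptive-densification idea)."""
    ground = set(seed_idx)
    d = d0
    for _ in range(iters):
        gx = np.array(sorted(ground, key=lambda i: x[i]))
        zi = np.interp(x, x[gx], z[gx])        # current "TIN" surface
        for i in range(len(x)):
            if i not in ground and abs(z[i] - zi[i]) < d:
                ground.add(i)
        d *= shrink                            # adaptive threshold
    return sorted(ground)

# flat ground at z = 0 with two "vegetation" spikes at indices 3 and 7
x = np.arange(10.0)
z = np.zeros(10); z[3] = 5.0; z[7] = 4.0
print(progressive_filter_1d(x, z, seed_idx=[0, 9]))  # → [0, 1, 2, 4, 5, 6, 8, 9]
```

The real method works on a 2.5-D Delaunay TIN with distances and angles to the triangle below each point, and takes its seeds from road points located via building facades.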


2018 ◽  
Vol 120 (6) ◽  
pp. 3155-3171 ◽  
Author(s):  
Roland Diggelmann ◽  
Michele Fiscella ◽  
Andreas Hierlemann ◽  
Felix Franke

High-density microelectrode arrays can be used to record extracellular action potentials from hundreds to thousands of neurons simultaneously. Efficient spike sorters must be developed to cope with such large data volumes. Most existing spike sorting methods for single electrodes or small multielectrodes, however, suffer from the “curse of dimensionality” and cannot be directly applied to recordings with hundreds of electrodes. This holds particularly true for the standard reference spike sorting algorithm, principal component analysis-based feature extraction, followed by k-means or expectation maximization clustering, against which most spike sorters are evaluated. We present a spike sorting algorithm that circumvents the dimensionality problem by sorting local groups of electrodes independently with classical spike sorting approaches. It is scalable to any number of recording electrodes and well suited for parallel computing. The combination of data prewhitening before the principal component analysis-based extraction and a parameter-free clustering algorithm obviated the need for parameter adjustments. We evaluated its performance using surrogate data in which we systematically varied spike amplitudes and spike rates and that were generated by inserting template spikes into the voltage traces of real recordings. In a direct comparison, our algorithm could compete with existing state-of-the-art spike sorters in terms of sensitivity and precision, while parameter adjustment or manual cluster curation was not required. NEW & NOTEWORTHY We present an automatic spike sorting algorithm that combines three strategies to scale classical spike sorting techniques for high-density microelectrode arrays: 1) splitting the recording electrodes into small groups and sorting them independently; 2) clustering a subset of spikes and classifying the rest to limit computation time; and 3) prewhitening the spike waveforms to enable the use of parameter-free clustering. Finally, we combined these strategies into an automatic spike sorter that is competitive with state-of-the-art spike sorters.
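Two of the pipeline stages named above, prewhitening and PCA-based feature extraction, can be sketched with plain NumPy. The toy templates, 4-sample window, and noise covariance below are illustrative assumptions, not the authors' data or implementation:

```python
import numpy as np

def prewhiten(waveforms, noise_cov):
    """Whiten spike waveforms with the (estimated) noise covariance, so
    Euclidean distances in feature space reflect noise statistics."""
    vals, vecs = np.linalg.eigh(noise_cov)          # inverse matrix square root
    W = vecs @ np.diag(1.0 / np.sqrt(vals)) @ vecs.T
    return waveforms @ W

def pca_features(waveforms, n_components=2):
    """Project centred waveforms onto their leading principal components."""
    X = waveforms - waveforms.mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:n_components].T

# two toy "units" with correlated noise on a 4-sample waveform window
rng = np.random.default_rng(1)
cov = 0.5 * np.eye(4) + 0.5                         # strongly correlated noise
L = np.linalg.cholesky(cov)
t1, t2 = np.array([0, 5, -3, 0.]), np.array([0, 2, 6, 0.])
spikes = np.vstack([t1 + rng.normal(size=(50, 4)) @ L.T,
                    t2 + rng.normal(size=(50, 4)) @ L.T])
feats = pca_features(prewhiten(spikes, cov))
print(feats.shape)                                  # the two units separate along PC1
```

After whitening, cluster spread is unit-variance by construction, which is what lets a parameter-free clustering stage operate without per-dataset thresholds.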


2016 ◽  
Vol 2016 ◽  
pp. 1-9 ◽  
Author(s):  
Hui Zeng ◽  
Xiuqing Wang ◽  
Yu Gu

This paper presents an effective local image region description method, called the CS-LMP (Center-Symmetric Local Multilevel Pattern) descriptor, and its application in image matching. The CS-LMP operator involves no exponential computations, so the CS-LMP descriptor can encode the differences of the local intensity values using multiple quantization levels without increasing the dimension of the descriptor. Compared with binary/ternary pattern based descriptors, the CS-LMP descriptor has better descriptive ability and computational efficiency. Extensive image matching experiments verified the effectiveness of the proposed CS-LMP descriptor compared with other existing state-of-the-art descriptors.
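The center-symmetric multilevel idea can be sketched for a single 3x3 patch: opposite pixels around the center are differenced, and each difference is quantized into several levels rather than a binary/ternary code. The pair layout, thresholds, and level labels below are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def cs_lmp_code(patch, levels=(-2, -1, 0, 1, 2), thresholds=(-10, -3, 3, 10)):
    """Toy center-symmetric multilevel pattern for one 3x3 patch: each of
    the 4 opposite-pixel differences is quantized into one of `levels`
    by the `thresholds` (illustrative values)."""
    p = patch.astype(float).ravel()
    pairs = [(0, 8), (1, 7), (2, 6), (3, 5)]   # opposite pixels around the center
    diffs = [p[i] - p[j] for i, j in pairs]
    return [levels[np.searchsorted(thresholds, d, side='right')] for d in diffs]

patch = np.array([[10, 50, 10],
                  [10, 30, 10],
                  [10, 10, 60]])
print(cs_lmp_code(patch))  # → [-2, 2, 0, 0]
```

Because only center-symmetric pairs are compared, the code length stays at 4 per neighbourhood regardless of how many quantization levels are used, which is the dimension-saving property the abstract refers to.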


2013 ◽  
Vol 39 (4) ◽  
pp. 917-947 ◽  
Author(s):  
Alberto Barrón-Cedeño ◽  
Marta Vila ◽  
M. Martí ◽  
Paolo Rosso

Although paraphrasing is the linguistic mechanism underlying many plagiarism cases, little attention has been paid to its analysis in the framework of automatic plagiarism detection. Therefore, state-of-the-art plagiarism detectors find it difficult to detect cases of paraphrase plagiarism. In this article, we analyze the relationship between paraphrasing and plagiarism, paying special attention to which paraphrase phenomena underlie acts of plagiarism and which of them are detected by plagiarism detection systems. With this aim in mind, we created the P4P corpus, a new resource that uses a paraphrase typology to annotate a subset of the PAN-PC-10 corpus for automatic plagiarism detection. The results of the Second International Competition on Plagiarism Detection were analyzed in the light of this annotation. The presented experiments show that (i) more complex paraphrase phenomena and a high density of paraphrase mechanisms make plagiarism detection more difficult, (ii) lexical substitutions are the paraphrase mechanisms used the most when plagiarizing, and (iii) paraphrase mechanisms tend to shorten the plagiarized text. For the first time, the paraphrase mechanisms behind plagiarism have been analyzed, providing critical insights for the improvement of automatic plagiarism detection systems.


2015 ◽  
Vol 15 (3) ◽  
pp. 104-113
Author(s):  
Yingying Li ◽  
Jieqing Tan ◽  
Jinqin Zhong

Local descriptors based on binary pattern features have state-of-the-art distinctiveness. However, their high dimensionality prevents them from matching quickly and from being used on low-end devices. In this paper we propose an efficient and feasible learning method to select discriminative binary patterns for constructing a compact local descriptor. In the selection, a search tree with branch and bound is used instead of exhaustive enumeration, in order to avoid tremendous computation in training. New local descriptors are constructed based on the selected patterns. The efficiency of selecting binary patterns has been confirmed by evaluating these new local descriptors' performance in experiments on image matching and object recognition.
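Branch-and-bound subset selection of this kind can be sketched generically. The objective below (additive per-pattern scores minus pairwise redundancy) is an illustrative stand-in for the paper's discriminativeness criterion; branches are pruned when an optimistic bound (ignoring future redundancy) cannot beat the best subset found so far:

```python
import itertools
import numpy as np

def select_patterns_bb(scores, redundancy, k):
    """Branch-and-bound selection of k patterns maximising
    sum(scores) - sum(pairwise redundancy). Assumes non-negative
    redundancy so the bound is admissible. Returns [value, indices]."""
    n = len(scores)
    best = [-np.inf, None]

    def value(subset):
        s = sum(scores[i] for i in subset)
        s -= sum(redundancy[i][j] for i, j in itertools.combinations(subset, 2))
        return s

    def recurse(start, subset):
        if len(subset) == k:
            v = value(subset)
            if v > best[0]:
                best[:] = [v, list(subset)]
            return
        # optimistic bound: add the best remaining scores, ignore redundancy
        remaining = sorted(scores[start:], reverse=True)[:k - len(subset)]
        if value(subset) + sum(remaining) <= best[0]:
            return                              # prune this branch
        for i in range(start, n - (k - len(subset)) + 1):
            recurse(i + 1, subset + [i])

    recurse(0, [])
    return best

scores = [5.0, 4.0, 3.0, 2.0]
red = [[0, 4, 0, 0], [4, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]
print(select_patterns_bb(scores, red, k=2))  # → [8.0, [0, 2]]
```

Here patterns 0 and 1 are individually strongest but highly redundant with each other, so the search settles on {0, 2}; the pruning step is what spares the exhaustive enumeration the abstract mentions.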

