LAPM: a tool for underwater large-area photo-mosaicking

2013 ◽  
Vol 2 (2) ◽  
pp. 189-198 ◽  
Author(s):  
Y. Marcon ◽  
H. Sahling ◽  
G. Bohrmann

Abstract. This paper presents a new tool for large-area photo-mosaicking (the LAPM tool). The tool was developed specifically for underwater mosaicking and aims to provide end-user scientists with an easy and robust way to construct large photo-mosaics from any set of images. It is notably capable of constructing mosaics with an unlimited number of images on any modern computer (minimum 1.30 GHz, 2 GB RAM). The mosaicking process can rely on both feature matching and navigation data. This is complemented by an intuitive graphical user interface, which gives the user the ability to select feature matches between any pair of overlapping images. Finally, mosaic files are given geographic attributes that permit direct import into ArcGIS. So far, the LAPM tool has been successfully used to construct geo-referenced photo-mosaics from photo and video material collected on several scientific cruises. The largest photo-mosaic contained more than 5000 images covering a total area of about 105,000 m². This is the first article to present and provide a finished, functional program for constructing large geo-referenced photo-mosaics of the seafloor using feature detection and matching techniques. It also presents concrete examples of photo-mosaics produced with the LAPM tool.
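The geographic attributes that make direct ArcGIS import possible can be illustrated with the simplest geo-referencing mechanism for raster mosaics: the six-line ESRI world file that sits next to the image. This is a minimal sketch of that format, not the LAPM tool's actual code, and the coordinates and pixel size below are invented for illustration:

```python
def write_world_file(path, pixel_size, top_left_x, top_left_y):
    """Write a six-line ESRI world file (e.g. a .jgw sidecar for a .jpg mosaic).

    pixel_size -- ground size of one pixel in map units (metres), hypothetical here
    top_left_* -- map coordinates of the centre of the top-left pixel (hypothetical)
    """
    lines = [
        pixel_size,   # A: pixel size in the x-direction
        0.0,          # D: rotation term (0 for a north-up mosaic)
        0.0,          # B: rotation term (0 for a north-up mosaic)
        -pixel_size,  # E: pixel size in the y-direction (negative: rows run south)
        top_left_x,   # C: x map coordinate of the centre of the upper-left pixel
        top_left_y,   # F: y map coordinate of the centre of the upper-left pixel
    ]
    with open(path, "w") as f:
        f.write("\n".join(f"{v:.10f}" for v in lines) + "\n")

# Example: a mosaic with 5 mm ground resolution at an arbitrary UTM position.
write_world_file("mosaic.jgw", 0.005, 471248.30, 5916035.70)
```

A GIS that finds `mosaic.jgw` next to `mosaic.jpg` will place the raster at those map coordinates automatically.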



Author(s):  
P. Jende ◽  
M. Peter ◽  
M. Gerke ◽  
G. Vosselman

Mobile Mapping's ability to acquire high-resolution ground data stands in contrast to the unreliable localisation capabilities of satellite-based positioning systems in urban areas. Buildings form canyons that impede a direct line of sight to navigation satellites, preventing accurate estimation of the mobile platform's position. Consequently, the positioning quality of the acquired data products is considerably diminished. This issue has been widely addressed in the literature and in research projects; however, consistent sub-decimetre accuracy, as well as the correction of height errors, remains unsolved.

We propose a novel approach to enhance Mobile Mapping (MM) image orientation based on highly accurate orientation parameters derived from aerial imagery. In addition, the degraded exterior orientation parameters of the MM platform are utilised, as they enable the accurate matching techniques needed to derive reliable tie information. This tie information is then used within an adjustment solution to correct the affected MM data.

This paper presents an advanced feature matching procedure as a prerequisite to the aforementioned orientation update. MM data are ortho-projected to more closely resemble aerial nadir data, simplifying the image geometry for matching. By utilising MM exterior orientation parameters, search windows can be used in conjunction with selective keypoint detection and template matching. Because the images originate from different sensor systems, however, difficulties arise with respect to changes in illumination, radiometry and perspective. To address these challenges, the procedure detects keypoints in only one image.

Initial tests indicate a considerable improvement over classic detector/descriptor approaches in this particular matching scenario. The method leads to a significant reduction of outliers due to the limited number of putative matches and the use of templates instead of feature descriptors. In the experiments discussed in this paper, typical urban scenes were used to evaluate the proposed method. Even though no additional outlier removal techniques were applied, the method yields almost 90% correct correspondences. However, repetitive image patterns may still induce ambiguities that cannot be fully averted by this technique; possible advancements are therefore briefly presented.
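The combination of search windows and template matching described above can be reduced to a small sketch. The following illustration is hypothetical (not the authors' implementation): it slides a template over a restricted search window and scores every offset with normalized cross-correlation, returning the best position:

```python
import numpy as np

def ncc_match(window, template):
    """Return (row, col) of the best template position inside a search window,
    scored by normalized cross-correlation (NCC). Restricting the search to a
    window (rather than the whole image) is what prior exterior orientation
    makes possible."""
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t * t).sum())
    best, best_score = (0, 0), -np.inf
    for r in range(window.shape[0] - th + 1):
        for c in range(window.shape[1] - tw + 1):
            patch = window[r:r + th, c:c + tw]
            p = patch - patch.mean()
            denom = np.sqrt((p * p).sum()) * t_norm
            if denom == 0:
                continue  # flat patch: NCC undefined
            score = (p * t).sum() / denom
            if score > best_score:
                best, best_score = (r, c), score
    return best

# Synthetic check: plant the template at a known offset inside the window.
rng = np.random.default_rng(0)
window = rng.random((40, 40))
template = window[12:20, 25:33].copy()
print(ncc_match(window, template))  # -> (12, 25)
```

NCC is invariant to affine brightness changes of the patch, which is one reason template matching tolerates the radiometric differences between sensor systems better than raw pixel differences would.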


Author(s):  
A. Abbas ◽  
S. Ghuffar

Over the last decade, feature detection, description and matching techniques have been widely exploited in various photogrammetric and computer vision applications, including 3D reconstruction of scenes, image stitching for panorama creation, image classification and object recognition. However, terrestrial imagery of urban scenes contains duplicate and identical structures (e.g. repeated windows and doors) that cause problems in the feature matching phase and can ultimately lead to failures, especially in camera pose and scene structure estimation. In this paper, we address the issue of ambiguous feature matching in urban environments due to repeating patterns.
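A standard first defence against exactly this kind of ambiguity, and a useful reference point for the problem described above, is Lowe's ratio test: a match is kept only if its nearest descriptor is clearly closer than the second nearest. A small sketch with synthetic descriptors (illustrative only, not the paper's method):

```python
import numpy as np

def ratio_test_matches(desc_a, desc_b, ratio=0.8):
    """Match each descriptor in desc_a to desc_b, keeping only matches whose
    nearest neighbour is clearly better than the second nearest (Lowe's ratio
    test). Repeated structures produce near-equal first and second distances,
    so their ambiguous matches are rejected."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dists)
        first, second = dists[order[0]], dists[order[1]]
        if first < ratio * second:
            matches.append((i, int(order[0])))
    return matches

# One distinctive descriptor and one that appears twice (a "repeated window").
desc_b = np.array([[1.0, 0.0],    # unique structure
                   [0.0, 1.0],    # repeated structure, copy 1
                   [0.0, 1.01]])  # repeated structure, copy 2
desc_a = np.array([[1.0, 0.05],   # close only to the unique descriptor -> kept
                   [0.0, 1.005]]) # equally close to both copies -> rejected
print(ratio_test_matches(desc_a, desc_b))  # -> [(0, 0)]
```

The rejected query is exactly the repeated-pattern case: its two candidate matches are almost equidistant, so no single correspondence can be trusted.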




2017 ◽  
Vol 2 (1) ◽  
pp. 80-87
Author(s):  
Puyda V. ◽  
Stoian A.

Detecting objects in a video stream is a typical problem in modern computer vision systems that are used in multiple areas. Object detection can be done on both static images and frames of a video stream. Essentially, object detection means finding color and intensity non-uniformities that can be treated as physical objects. Besides that, operations such as finding the coordinates, size and other characteristics of these non-uniformities can be executed and used to solve other computer vision problems, such as object identification. In this paper, we study three algorithms that can be used to detect objects of different nature and are based on different approaches: detection of color non-uniformities, frame difference and feature detection. As input data, we use a video stream obtained from a video camera or from an mp4 video file. Simulations and testing of the algorithms were done on a universal computer based on open-source hardware, built on the Broadcom BCM2711 quad-core Cortex-A72 (ARM v8) 64-bit SoC with a clock frequency of 1.5 GHz. The software was created in Visual Studio 2019 using OpenCV 4 on Windows 10, and on a universal computer running Linux (Raspbian Buster OS) on the open-source hardware. The paper compares the methods under consideration. The results can be used in the research and development of modern computer vision systems for different purposes. Keywords: object detection, feature points, keypoints, ORB detector, computer vision, motion detection, HSV color model
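Of the three approaches studied, frame difference is the simplest to sketch. The illustration below is a minimal sketch (not the authors' code): it thresholds the absolute difference between two consecutive grayscale frames and reports the bounding box of the changed pixels:

```python
import numpy as np

def detect_motion(prev_frame, frame, threshold=25):
    """Frame-difference detection: threshold |frame - prev_frame| and return
    the bounding box (r0, c0, r1, c1) of changed pixels, or None if the scene
    is static. Cast to int16 first so uint8 subtraction cannot wrap around."""
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    mask = diff > threshold
    if not mask.any():
        return None
    rows = np.flatnonzero(mask.any(axis=1))
    cols = np.flatnonzero(mask.any(axis=0))
    return (int(rows[0]), int(cols[0]), int(rows[-1]), int(cols[-1]))

# Synthetic frames: a bright 10x10 "object" appears in the second frame.
prev_frame = np.zeros((120, 160), dtype=np.uint8)
frame = prev_frame.copy()
frame[40:50, 70:80] = 200
print(detect_motion(prev_frame, frame))  # -> (40, 70, 49, 79)
```

In a real pipeline the two frames would come from consecutive reads of the video stream, and the box would typically be smoothed over several frames to suppress sensor noise.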


Author(s):  
Suresha M. ◽  
Sandeep

Local features are of great importance in computer vision, where feature detection and feature matching are two important tasks. This paper concentrates on the problem of bird recognition using local features. The investigation evaluates the local feature methods SURF, FAST and Harris on blurred images and on images with illumination changes. The FAST and Harris corner algorithms gave lower accuracy for blurred images. The SURF algorithm gives the best results for blurred images because it identifies the strongest local features with low time complexity; the experimental demonstration shows that SURF is robust to blur, while FAST is suitable for images with illumination changes.
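For reference, the Harris response evaluated above is computed from the local structure tensor of image gradients: a pixel is a corner when both eigenvalues of the tensor are large, measured by R = det(M) − k·trace(M)². A compact NumPy sketch (illustrative, not the paper's implementation):

```python
import numpy as np

def harris_response(img, k=0.04, win=3):
    """Harris corner response R = det(M) - k*trace(M)^2, where M is the
    structure tensor of image gradients summed over a win x win neighbourhood."""
    img = img.astype(np.float64)
    gy, gx = np.gradient(img)
    ixx, iyy, ixy = gx * gx, gy * gy, gx * gy

    def box(a):  # naive box filter: windowed sum of gradient products
        out = np.zeros_like(a)
        h, w = a.shape
        r = win // 2
        for i in range(h):
            for j in range(w):
                out[i, j] = a[max(0, i - r):i + r + 1, max(0, j - r):j + r + 1].sum()
        return out

    sxx, syy, sxy = box(ixx), box(iyy), box(ixy)
    det = sxx * syy - sxy * sxy
    trace = sxx + syy
    return det - k * trace * trace

# A white square on black: the square's corner should out-score both a flat
# region (R near 0) and a point on a straight edge (R negative).
img = np.zeros((30, 30))
img[10:20, 10:20] = 1.0
R = harris_response(img)
assert R[10, 10] > R[5, 5]    # corner beats flat region
assert R[10, 10] > R[15, 10]  # corner beats a point on a straight edge
```

The sign structure of R is what makes it useful for classification: positive at corners, negative along edges, near zero in flat regions, which is exactly why blur (which weakens gradients) degrades corner-based detectors like Harris and FAST.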


2010 ◽  
Vol 9 (4) ◽  
pp. 29-34 ◽  
Author(s):  
Achim Weimert ◽  
Xueting Tan ◽  
Xubo Yang

In this paper, we present a novel feature detection approach designed for mobile devices, with optimized solutions for both detection and description. It is based on FAST (Features from Accelerated Segment Test) and named 3D FAST. Being robust, scale-invariant and easy to compute, it is a candidate for augmented reality (AR) applications running on low-performance platforms. Using simple calculations and machine learning, FAST is a feature detection algorithm known to be efficient but not very robust, and it lacks scale information. Our approach relies on gradient images calculated at different scale levels, on which a modified FAST algorithm operates to obtain the values of the corner response function. We combine the detection with an adapted version of SURF (Speeded-Up Robust Features) descriptors, providing a system with all the means to implement feature matching and object detection. Experimental evaluation on a Symbian OS device using a standard image set, and a comparison with SURF using its Hessian-matrix-based detector, is included in this paper, showing improvements in speed (compared to SURF) and robustness (compared to FAST).
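The segment test at the heart of FAST, which the approach above extends with scale information, can be sketched in a few lines: a pixel p is a corner if at least n contiguous pixels on a 16-pixel Bresenham circle around it are all brighter than p + t or all darker than p − t. A simplified illustration follows (no machine-learned decision tree and no scale pyramid, so this is plain FAST, not the authors' 3D FAST):

```python
import numpy as np

# Offsets of the 16-pixel Bresenham circle of radius 3 used by FAST.
CIRCLE = [(0, 3), (1, 3), (2, 2), (3, 1), (3, 0), (3, -1), (2, -2), (1, -3),
          (0, -3), (-1, -3), (-2, -2), (-3, -1), (-3, 0), (-3, 1), (-2, 2), (-1, 3)]

def is_fast_corner(img, r, c, t=20, n=12):
    """Segment test: True if at least n contiguous circle pixels are all
    brighter than img[r,c] + t or all darker than img[r,c] - t."""
    center = int(img[r, c])
    ring = [int(img[r + dr, c + dc]) for dr, dc in CIRCLE]
    for sign in (1, -1):  # check the "brighter" and "darker" cases
        flags = [sign * (v - center) > t for v in ring]
        flags = flags + flags          # doubling handles runs that wrap around
        run = best = 0
        for f in flags:
            run = run + 1 if f else 0
            best = max(best, min(run, 16))
        if best >= n:
            return True
    return False

# A single bright pixel on a dark background: all 16 ring pixels are darker
# than the centre, so the contiguous run is 16 >= n and the test fires.
img = np.zeros((9, 9), dtype=np.uint8)
img[4, 4] = 255
print(is_fast_corner(img, 4, 4))  # -> True
print(is_fast_corner(img, 3, 3))  # -> False (flat neighbourhood)
```

The published FAST replaces this exhaustive check with a machine-learned decision tree over the same 16 comparisons, which is where its speed comes from.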


2021 ◽  
pp. 51-64
Author(s):  
Ahmed A. Elngar ◽  
...  

Feature detection, description and matching are essential components of various computer vision applications; thus, they have received considerable attention in recent decades. Several feature detectors and descriptors have been proposed in the literature, with a variety of definitions of what kinds of points in an image are potentially interesting (i.e., distinctive attributes). This chapter introduces basic notation and mathematical concepts for detecting and describing image features. It then discusses the properties of ideal features and gives an overview of various existing detection and description methods. Furthermore, it explains some approaches to feature matching. Finally, the chapter discusses the most commonly used techniques for evaluating the performance of detection algorithms.
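As a concrete instance of the matching approaches such a survey covers, binary descriptors (ORB/BRIEF-style bit strings packed into bytes) are compared by Hamming distance, the number of differing bits. A brute-force matcher sketch (illustrative, with synthetic descriptors):

```python
import numpy as np

def hamming_bf_match(desc_a, desc_b):
    """Brute-force matching of binary descriptors stored as uint8 rows:
    for each row of desc_a, return the index of the desc_b row with the
    smallest Hamming distance (number of differing bits)."""
    matches = []
    for d in desc_a:
        diff = np.bitwise_xor(desc_b, d)            # XOR exposes differing bits
        dists = np.unpackbits(diff, axis=1).sum(axis=1)  # count them per row
        matches.append(int(np.argmin(dists)))
    return matches

# Three synthetic 32-byte (256-bit) descriptors; the query is a copy of
# desc_b[2] with a single bit flipped, so it should match index 2.
rng = np.random.default_rng(1)
desc_b = rng.integers(0, 256, size=(3, 32), dtype=np.uint8)
query = desc_b[2].copy()
query[0] ^= 0b00000001          # flip one bit of the first byte
print(hamming_bf_match(query[None, :], desc_b))  # -> [2]
```

Hamming distance on packed bytes is why binary descriptors match so quickly in practice: the XOR-and-popcount loop maps directly onto hardware instructions, unlike the floating-point L2 distances used for SIFT/SURF-style descriptors.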


Author(s):  
MAKOTO NAGAO

Pattern recognition and object detection systems developed so far have required an algorithmic description of every detail of the objects to be recognized, built by a bottom-up process from pixel-to-pixel relations to lines, corners, and structural descriptions. Because this low-level process does not see global information, feature detection is highly sensitive to noise. To overcome this problem and to give human-like flexibility to the machine recognition process, we developed a new system with non-algorithmic feature detection functions that examines a comparatively large area at once. It uses a variable-size window which is applied to the most plausible parts of an image by a top-down command from an object model, and obtains characteristic features of object parts. This window application is realized mostly in hardware and has some autonomous ability to detect the best features by a form of random trial-and-error search. The system has other hardware functions, such as mutual correlation of one- and two-dimensional patterns, which are also flexible according to the variable-size window. The system interprets the user's declarative description of objects and activates the window application functions to obtain the characteristic features of the description. This new flexible approach to object detection can be used as a robot eye to recognize many simple two-dimensional shapes.

