Automated Evaluation of Interest Point Detectors

2014 ◽  
Vol 2 (1) ◽  
pp. 86-105 ◽  
Author(s):  
Simon R. Lang ◽  
Martin H. Luerssen ◽  
David M. W. Powers

Interest point detectors are important components in a variety of computer vision systems. This paper demonstrates an automated virtual 3D environment for controlling and measuring detected interest points on 2D images in an accurate and rapid manner. Real-time affine transform tools enable easy implementation and full automation of complex scene evaluations without the time cost of a manual setup. Nine detectors are tested and compared using evaluation and testing methods based on Schmid, Mohr, and Bauckhage (2000). Each detector is tested on the BSDS500 image set and 34 3D-scanned and manmade models, using rotation about the X, Y, and Z axes as well as scaling along the X and Y axes. Varying degrees of noise on the models are also tested. Results demonstrate the differing performance and behaviour of each detector across the evaluated transformations, which may assist computer vision practitioners in choosing the right detector for their application.
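The evaluation idea above relies on the scene transformation being known exactly: ground-truth keypoint positions can be mapped through the controlled transform and compared with fresh detections. A minimal sketch of that mapping for a 2D rotation-plus-scale, with illustrative angle and scale values that are assumptions, not the paper's settings:

```python
import numpy as np

# Map 2D keypoints through a known rotation + per-axis scale, as one
# would when scoring detections against a controlled transformation.
# angle_deg and scale defaults are illustrative assumptions.
def transform_points(pts, angle_deg=30.0, scale=(1.2, 1.0)):
    t = np.deg2rad(angle_deg)
    rot = np.array([[np.cos(t), -np.sin(t)],
                    [np.sin(t),  np.cos(t)]])
    A = np.diag(scale) @ rot          # scale applied after rotation
    return np.asarray(pts, dtype=float) @ A.T
```

Detections in the transformed image can then be compared against `transform_points(reference_keypoints, ...)` with a pixel tolerance.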

Sensors ◽  
2020 ◽  
Vol 20 (15) ◽  
pp. 4343
Author(s):  
Franco Hidalgo ◽  
Thomas Bräunl

Modern visual SLAM (vSLAM) algorithms take advantage of computer vision developments in image processing and interest point detection to create maps and trajectories from camera images. Different feature detectors and extractors have been evaluated for this purpose in air and ground environments, but not extensively for underwater scenarios. In this paper, we (I) characterize underwater images, in which light attenuation and suspended particles considerably alter the captured images, and (II) evaluate the performance of common interest point detectors and descriptors in a variety of underwater scenes and conditions for vSLAM, in terms of the number of features matched in subsequent video frames, the precision of the descriptors, and the processing time. This research justifies the usage of feature detectors in vSLAM for underwater scenarios and presents their challenges and limitations.
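The frame-to-frame matching count used as a metric above can be sketched as a mutual-nearest-neighbour match between binary descriptors (e.g. ORB/BRIEF) from two consecutive frames. This NumPy-only sketch assumes the descriptors arrive as packed `uint8` rows; in practice they would come from an OpenCV detector:

```python
import numpy as np

# Count mutual nearest neighbours between two sets of packed binary
# descriptors (rows of uint8), using Hamming distance. A sketch of the
# "features matched in subsequent frames" metric, not the paper's code.
def count_matches(desc_a, desc_b):
    # XOR then popcount gives pairwise Hamming distances (n, m)
    bits = np.unpackbits(desc_a[:, None, :] ^ desc_b[None, :, :], axis=2)
    d = bits.sum(axis=2)
    fwd = d.argmin(axis=1)            # best match a -> b
    bwd = d.argmin(axis=0)            # best match b -> a
    # keep only matches that agree in both directions
    return int(np.sum(bwd[fwd] == np.arange(len(desc_a))))
```

Mutual cross-checking is a common way to discard ambiguous matches before counting them, which is why it is used here rather than one-directional nearest neighbours.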


2014 ◽  
Vol 926-930 ◽  
pp. 3451-3454
Author(s):  
Li Juan Wang ◽  
Chang Sheng Zhang

A new algorithm for interest point detection based on monogenic signal theory is proposed in this paper. The detection of stable and informative image points is one of the most important problems in modern computer vision. Phase congruency is a dimensionless measure that remains invariant to changes in image illumination and contrast. A monogenic phase congruency function is constructed using these characteristics to detect interest points in images. The experimental results indicate that different kinds of interest points can be detected and located with good precision, so the proposed method can be applied to a wide class of images.
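The monogenic signal underlying this construction extends the analytic signal to 2D via the Riesz transform. A single-scale sketch is shown below, computed in the frequency domain with a log-Gabor bandpass filter; full phase congruency combines several such scales, and the filter parameters here are illustrative assumptions:

```python
import numpy as np

# Single-scale monogenic signal via the Riesz transform in the
# frequency domain. Returns local amplitude and local phase.
def monogenic_signal(img, wavelength=6.0, sigma_on_f=0.55):
    rows, cols = img.shape
    u, v = np.meshgrid(np.fft.fftfreq(cols), np.fft.fftfreq(rows))
    radius = np.hypot(u, v)
    radius[0, 0] = 1.0                       # avoid divide-by-zero at DC
    h1 = 1j * u / radius                     # Riesz transform kernels
    h2 = 1j * v / radius
    log_gabor = np.exp(-np.log(radius * wavelength) ** 2
                       / (2 * np.log(sigma_on_f) ** 2))
    log_gabor[0, 0] = 0.0                    # zero DC response
    spectrum = np.fft.fft2(img) * log_gabor
    even = np.real(np.fft.ifft2(spectrum))           # bandpassed image
    odd1 = np.real(np.fft.ifft2(spectrum * h1))      # Riesz components
    odd2 = np.real(np.fft.ifft2(spectrum * h2))
    amplitude = np.sqrt(even ** 2 + odd1 ** 2 + odd2 ** 2)
    phase = np.arctan2(np.hypot(odd1, odd2), even)   # local phase
    return amplitude, phase
```

Because the local phase is normalised by the local amplitude, measures built on it (such as phase congruency) are largely invariant to illumination and contrast changes, which is the property the paper exploits.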


2013 ◽  
Vol 13 (6) ◽  
pp. 329-338 ◽  
Author(s):  
Martin Zukal ◽  
Radek Beneš ◽  
Petr Číka ◽  
Kamil Říha

This paper focuses on the comparison of different interest point detectors and their utilization for measurements in ultrasound (US) images. Certain medical examinations are based on speckle tracking, which strongly relies on features that can be reliably tracked from frame to frame. Only significant features (interest points) resistant to noise and brightness changes within US images are suitable for accurate long-term tracking. We compare three interest point detectors, Harris-Laplace, Difference of Gaussians (DoG), and Fast Hessian, and identify the most suitable one for use in US images on the basis of an objective criterion: the repeatability rate. We have measured repeatability in images corrupted by different types of noise (speckle noise, Gaussian noise) and for changes in brightness. The Harris-Laplace detector outperformed its competitors and seems to be a sound option when choosing an interest point detector for US images. However, it has to be noted that the Fast Hessian and DoG detectors achieved better results in terms of processing speed.
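The repeatability rate used as the quality measure can be sketched as the fraction of keypoints detected in the reference image that have a detection within a small pixel tolerance in the degraded image. The identity mapping is assumed here (noise and brightness changes do not move the scene), and the tolerance value is an illustrative assumption:

```python
import numpy as np

# Repeatability sketch: fraction of reference keypoints that have a
# detection within eps pixels in the corrupted image. Assumes the two
# images are geometrically aligned (noise / brightness change only).
def repeatability(ref_pts, test_pts, eps=1.5):
    ref = np.asarray(ref_pts, dtype=float)
    test = np.asarray(test_pts, dtype=float)
    if len(ref) == 0 or len(test) == 0:
        return 0.0
    # pairwise distances between reference and test keypoints (n, m)
    d = np.linalg.norm(ref[:, None, :] - test[None, :, :], axis=2)
    return float(np.mean(d.min(axis=1) <= eps))
```

A detector whose repeatability stays high as speckle or Gaussian noise is added is the kind the paper deems suitable for long-term tracking.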


Metrologiya ◽  
2020 ◽  
pp. 15-37
Author(s):  
L. P. Bass ◽  
Yu. A. Plastinin ◽  
I. Yu. Skryabysheva

The use of technical (computer) vision systems for Earth remote sensing is considered. An overview of the software and hardware used in computer vision systems for processing satellite images is presented. Algorithmic methods of data processing using trained neural networks are described. Examples of the algorithmic processing of satellite images by means of artificial convolutional neural networks are given. Ways of increasing the accuracy of satellite image recognition are identified. Practical applications of convolutional neural networks onboard microsatellites for Earth remote sensing are presented.


2017 ◽  
Vol 2 (1) ◽  
pp. 80-87
Author(s):  
Puyda V. ◽  
Stoian A.

Detecting objects in a video stream is a typical problem in modern computer vision systems that are used in multiple areas. Object detection can be done both on static images and on frames of a video stream. Essentially, object detection means finding color and intensity non-uniformities which can be treated as physical objects. Besides that, operations for finding the coordinates, size, and other characteristics of these non-uniformities, which can be used to solve other computer vision problems such as object identification, can be executed. In this paper, we study three algorithms which can be used to detect objects of different nature and are based on different approaches: detection of color non-uniformities, frame differencing, and feature detection. As the input data, we use a video stream obtained from a video camera or from an mp4 video file. Simulations and testing of the algorithms were done on a universal computer based on open-source hardware, built on the Broadcom BCM2711 quad-core Cortex-A72 (ARM v8) 64-bit SoC with a clock frequency of 1.5 GHz. The software was created in Visual Studio 2019 using OpenCV 4 on Windows 10 and on a universal computer operated under Linux (Raspbian Buster OS) for the open-source hardware. In the paper, the methods under consideration are compared. The results of the paper can be used in the research and development of modern computer vision systems for different purposes.
Keywords: object detection, feature points, keypoints, ORB detector, computer vision, motion detection, HSV color model
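Of the three approaches, frame differencing is the simplest to sketch: pixels whose intensity changes between consecutive grayscale frames are treated as belonging to a moving object. A NumPy-only sketch follows (the paper's implementation uses OpenCV; the threshold value here is an illustrative assumption):

```python
import numpy as np

# Frame-difference motion detection sketch: threshold the absolute
# difference of two grayscale frames and return the bounding box
# (x, y, width, height) of the changed region, or None if static.
def frame_difference_bbox(prev, curr, thresh=25):
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    mask = diff > thresh
    if not mask.any():
        return None                      # no motion detected
    ys, xs = np.nonzero(mask)
    return (int(xs.min()), int(ys.min()),
            int(xs.max() - xs.min() + 1), int(ys.max() - ys.min() + 1))
```

A production version would additionally denoise the mask (e.g. morphological dilation) and extract one box per connected component rather than a single enclosing box.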


2020 ◽  
Vol 10 (1) ◽  
Author(s):  
Pablo E. Layana Castro ◽  
Joan Carles Puchalt ◽  
Antonio-José Sánchez-Salmerón

One of the main problems when monitoring Caenorhabditis elegans nematodes (C. elegans) is tracking their poses with automatic computer vision systems. This is a challenge given the marked flexibility of their bodies and the different poses they can adopt during their individual behaviour, which become even more complicated when worms aggregate with others while moving. This work proposes a simple solution that combines several computer vision techniques to help determine certain worm poses and to identify each worm during aggregation or in coiled shapes. The new method is based on the distance transform function to obtain better worm skeletons. Experiments were performed with 205 plates, each with 10, 15, 30, 60, or 100 worms, totalling approximately 100,000 worm poses. A comparison with a classic skeletonisation method found that the 2196 problematic poses improved by between 1% and 22% on average in the pose predictions of each worm.
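The distance-transform idea behind the improved skeletons can be sketched as follows: compute each body pixel's Euclidean distance to the background (brute force here, for clarity) and keep the ridge, i.e. pixels that are a 3x3 local maximum of the distance map. This is a sketch of the general technique, not the paper's pipeline; a real implementation would use an efficient EDT such as scipy.ndimage.distance_transform_edt:

```python
import numpy as np

# Brute-force Euclidean distance transform of a binary mask:
# distance from each foreground pixel to the nearest background pixel.
def distance_transform(mask):
    mask = np.asarray(mask, dtype=bool)
    fg = np.argwhere(mask)
    bg = np.argwhere(~mask)
    dist = np.zeros(mask.shape)
    if len(fg) and len(bg):
        d = np.linalg.norm(fg[:, None, :] - bg[None, :, :], axis=2)
        dist[tuple(fg.T)] = d.min(axis=1)
    return dist

# Skeleton sketch: pixels where the distance map attains a 3x3
# local maximum form the ridge / medial axis of the shape.
def skeleton_points(mask):
    dist = distance_transform(mask)
    padded = np.pad(dist, 1)
    h, w = dist.shape
    local_max = np.max([padded[i:i + h, j:j + w]
                        for i in range(3) for j in range(3)], axis=0)
    return np.argwhere((dist == local_max) & (dist > 0))
```

For a worm body mask the ridge approximates the midline, which is why a better distance transform directly yields better skeletons for pose estimation.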


2021 ◽  
Vol 14 (3) ◽  
pp. 1-17
Author(s):  
Elena Villaespesa ◽  
Seth Crider

Computer vision algorithms are increasingly being applied to museum collections to identify patterns, colors, and subjects by generating tags for each object image. There are multiple off-the-shelf systems that offer an accessible and rapid way to undertake this process. Based on the highlights of the Metropolitan Museum of Art's collection, this article examines the similarities and differences between the tags generated by three well-known computer vision systems (Google Cloud Vision, Amazon Rekognition, and IBM Watson). The results provide insights into the characteristics of these taxonomies in terms of the volume of tags generated for each object, their diversity, typology, and accuracy. Consequently, this article discusses the need for museums to define their own subject tagging strategy and selection criteria for computer vision tools, based on their type of collection and the tags needed to complement their metadata.
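One simple way to quantify how much two such taggers agree on the same object is the Jaccard overlap of their case-normalised tag sets. This is a sketch of a generic comparison measure, not the article's methodology, and the example tags in the test are invented:

```python
# Jaccard overlap between the tag sets two vision services return
# for the same object: |intersection| / |union| after lowercasing.
def tag_overlap(tags_a, tags_b):
    a = {t.lower() for t in tags_a}
    b = {t.lower() for t in tags_b}
    union = a | b
    return len(a & b) / len(union) if union else 0.0
```

Averaging this score over a collection gives a rough picture of how interchangeable the services' taxonomies are for that type of collection.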

