An Exploratory Inspection of the Detection Quality of Pose and Object Detection Systems by Synthetic Data

Author(s):  
Robert Manthey ◽  
Falk Schmidsberger ◽  
Rico Thomanek ◽  
Christian Roschke ◽  
Tony Rolletschke ◽  
...  

2020 ◽  
Vol 15 (90) ◽  
pp. 42-57
Author(s):  
Anna A. Kuznetsova ◽  

Average precision (AP), the area under the Precision-Recall curve, is the de facto standard for comparing the quality of algorithms for classification, information retrieval, object detection, etc. However, traditional Precision-Recall curves usually have a zigzag shape, which makes it difficult to calculate average precision and to compare algorithms. This paper proposes a statistical approach to constructing Precision-Recall curves when assessing the quality of object detection algorithms. The approach is based on calculating Statistical Precision and Statistical Recall. Instead of the traditional confidence level, a statistical confidence level is calculated for each image as the percentage of objects detected. For each threshold value of the statistical confidence level, the total number of correctly detected objects (Integral TP) and the total number of background objects mistakenly assigned by the algorithm to one of the classes (Integral FP) are calculated over the images, and the corresponding Precision and Recall values are derived. Statistical Precision-Recall curves, unlike traditional ones, are guaranteed to be monotonically non-increasing. At the same time, the Statistical Average Precision of object detection algorithms on small test datasets turns out to be less than the traditional Average Precision; on relatively large test image datasets, these differences are smoothed out. A comparison of conventional and statistical Precision-Recall curves is given for a specific example.
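The zigzag the abstract refers to, and the usual remedy of integrating a monotone non-increasing envelope of precision, can be sketched as follows. This is the generic textbook AP computation, not the paper's statistical variant:

```python
import numpy as np

def average_precision(scores, is_tp, n_gt):
    """Area under the precision-recall curve for one class.

    `scores` are detection confidences, `is_tp` flags each detection as a
    true positive, and `n_gt` is the number of ground-truth objects. The
    raw curve is zigzag; taking the monotone non-increasing envelope of
    precision before integrating is the common remedy.
    """
    order = np.argsort(-np.asarray(scores, dtype=float))
    hits = np.asarray(is_tp, dtype=float)[order]
    tp = np.cumsum(hits)
    fp = np.cumsum(1.0 - hits)
    recall = tp / n_gt
    precision = tp / (tp + fp)
    # Monotone envelope: precision at recall r -> max precision at recall >= r.
    envelope = np.maximum.accumulate(precision[::-1])[::-1]
    # Sum rectangle areas over the recall steps.
    r = np.concatenate(([0.0], recall))
    return float(np.sum((r[1:] - r[:-1]) * envelope))
```

The statistical curves proposed in the paper avoid the envelope step entirely, since they are non-increasing by construction.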



Electronics ◽  
2021 ◽  
Vol 10 (1) ◽  
pp. 90
Author(s):  
Donghyeon Lee ◽  
Joonyoung Kim ◽  
Kyomin Jung

Fully convolutional structures produce feature maps that capture the local context of an image simply by stacking numerous convolutional layers. These structures are effective in modern state-of-the-art object detectors such as Faster R-CNN and SSD for finding objects from local context. However, detection quality can be further improved by incorporating global context when ambiguous objects must be identified from surrounding objects or background. In this paper, we introduce a self-attention module for object detectors that incorporates global context. More specifically, our self-attention module allows the feature extractor to compute feature maps with global context via the self-attention mechanism: it computes relationships among all elements in the feature maps and then blends the feature maps according to the computed relationships. The module can therefore capture long-range relationships among objects or backgrounds, which is difficult for fully convolutional structures. Furthermore, the proposed module is not limited to any specific object detector and can be applied to any CNN-based model for any computer vision task. In experiments on object detection, our method shows remarkable gains in average precision (AP) compared to popular models with fully convolutional structures. In particular, compared to Faster R-CNN with the ResNet-50 backbone, our module applied to the same backbone achieved +4.0 AP without bells and whistles. In image semantic segmentation and panoptic segmentation, our module improved performance on all metrics used for each task.
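The "compute relationships among all elements, then blend" idea can be illustrated with a minimal non-local-style self-attention over a feature map. The projection matrices here are hypothetical stand-ins for learned weights, and this generic sketch is not the paper's exact module:

```python
import numpy as np

def self_attention_2d(fmap, w_q, w_k, w_v):
    """Minimal self-attention over a feature map of shape (H, W, C).

    w_q, w_k, w_v are stand-ins for learned projection matrices
    (hypothetical here; w_v must map C -> C for the residual). Every
    spatial position attends to every other, so the output blends
    global context into each location.
    """
    h, w, c = fmap.shape
    x = fmap.reshape(h * w, c)
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    logits = q @ k.T / np.sqrt(k.shape[1])
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    attn = np.exp(logits)
    attn /= attn.sum(axis=1, keepdims=True)       # rows sum to 1
    out = attn @ v                                # global blend of values
    return fmap + out.reshape(h, w, c)            # residual connection
```

The residual connection is what lets such a module be dropped into an existing backbone without disturbing the local features it already computes.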



Author(s):  
Raul E. Avelar ◽  
Karen Dixon ◽  
Boniphace Kutela ◽  
Sam Klump ◽  
Beth Wemple ◽  
...  

The calibration of safety performance functions (SPFs) is a mechanism included in the Highway Safety Manual (HSM) for adjusting its SPFs to the jurisdiction where they will be used. Critically, the quality of the calibration procedure must be assessed before using the calibrated SPFs. Multiple resources to aid practitioners in calibrating SPFs have been developed in the years since the publication of the HSM 1st edition, and the literature suggests multiple ways to assess the goodness-of-fit (GOF) of a calibrated SPF to a data set from a given jurisdiction. This paper uses the calibration results of multiple intersection SPFs against a large Mississippi safety database to examine the relationships among multiple GOF metrics. The goal is to develop a sensible single index that leverages the joint information from multiple GOF metrics to assess the overall quality of a calibration. A factor analysis applied to the calibration results revealed three underlying factors explaining 76% of the variability in the data. From these results, the authors developed an index and performed a sensitivity analysis. The key metrics were found to be, in descending order of importance: the deviation of the cumulative residual (CURE) plot from the 95% confidence area, the mean absolute deviation, the modified R-squared, and the value of the calibration factor. The paper also presents comparisons between the index and alternative scoring strategies, as well as an effort to verify the results using synthetic data. The developed index is recommended for comprehensively assessing the quality of calibrated intersection SPFs.
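As a rough illustration of the metrics the abstract names, the sketch below computes a calibration factor, the mean absolute deviation, and a simple CURE-plot check from per-site crash counts. The formulas follow common HSM practice; the paper's exact definitions and its composite index weights are not reproduced here:

```python
import numpy as np

def calibration_gof(observed, predicted):
    """Illustrative GOF metrics for a calibrated SPF.

    `observed` and `predicted` are per-site crash counts. Returns the
    calibration factor, the mean absolute deviation of the calibrated
    residuals, and the share of sites whose cumulative residual falls
    outside an approximate 95% confidence band (CURE-plot check).
    """
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    c = observed.sum() / predicted.sum()              # calibration factor
    resid = observed - c * predicted
    mad = np.abs(resid).mean()                        # mean absolute deviation
    # CURE plot: cumulative residuals with sites ordered by predicted value.
    order = np.argsort(predicted)
    cure = np.cumsum(resid[order])
    band = 1.96 * np.sqrt(np.cumsum(resid[order] ** 2))
    pct_outside = float(np.mean(np.abs(cure) > band))
    return {"calibration_factor": c, "mad": mad, "cure_outside": pct_outside}
```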



2021 ◽  
Vol 15 (4) ◽  
pp. 1-20
Author(s):  
Georg Steinbuss ◽  
Klemens Böhm

Benchmarking unsupervised outlier detection is difficult. Outliers are rare, and existing benchmark data contain outliers with varied and unknown characteristics. Fully synthetic data usually consist of outliers and regular instances with clear characteristics and thus, in principle, allow a more meaningful evaluation of detection methods. Nonetheless, there have been only a few attempts to include synthetic data in benchmarks for outlier detection. This might be due to the imprecise notion of an outlier or to the difficulty of achieving good coverage of different domains with synthetic data. In this work, we propose a generic process for generating datasets for such benchmarking. The core idea is to reconstruct regular instances from existing real-world benchmark data while generating outliers so that they exhibit insightful characteristics. We describe three instantiations of this generic process that generate outliers with specific characteristics, such as local outliers. To validate the process, we perform a benchmark with state-of-the-art detection methods and carry out experiments to study the quality of the reconstructed data. Next to showcasing the workflow, this confirms the usefulness of the proposed process; in particular, it yields regular instances close to those of the real data. Summing up, we propose and validate a new and practical process for benchmarking unsupervised outlier detection.
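A toy instantiation of the generic process might look like the following: regular instances are reconstructed from real data (here with a naive Gaussian fit, far simpler than the paper's reconstruction), and outliers are planted with one controlled characteristic, a global shift away from the data centre. All names and the shift heuristic are illustrative:

```python
import numpy as np

def make_outlier_benchmark(real_data, n_regular, n_outliers, shift=4.0, seed=0):
    """Generate a labeled benchmark: reconstructed regulars plus planted outliers.

    `real_data` is an (n, d) array of real benchmark instances. Outliers
    are placed `shift` standard deviations from the mean along random
    directions, giving them a known, controllable characteristic.
    """
    rng = np.random.default_rng(seed)
    mu = real_data.mean(axis=0)
    cov = np.cov(real_data, rowvar=False)
    regular = rng.multivariate_normal(mu, cov, size=n_regular)
    # Outliers: points pushed away from the centre along random unit directions.
    dirs = rng.normal(size=(n_outliers, real_data.shape[1]))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    outliers = mu + dirs * shift * real_data.std(axis=0)
    X = np.vstack([regular, outliers])
    y = np.concatenate([np.zeros(n_regular), np.ones(n_outliers)])  # 1 = outlier
    return X, y
```

Because the outlier-generating mechanism is explicit, a detector's score on such data can be interpreted against a known ground truth, which is exactly the advantage the abstract claims for synthetic benchmarks.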





Sensors ◽  
2020 ◽  
Vol 20 (6) ◽  
pp. 1650 ◽  
Author(s):  
Xiaoming Lv ◽  
Fajie Duan ◽  
Jia-Jia Jiang ◽  
Xiao Fu ◽  
Lin Gan

Most current object detection approaches deliver competitive results under the assumption that a large amount of labeled data is available and can be fed into a deep network at once. However, because labeling is expensive, it is difficult to deploy object detection systems in more complex and challenging real-world environments, especially for defect detection in real industries. To reduce the labeling effort, this study proposes an active learning framework for defect detection. First, an Uncertainty Sampling strategy is proposed to produce the candidate list for annotation, since uncertain images provide more informative knowledge for the learning process. Then, an Average Margin method is designed to set the sampling scale for each defect category. In addition, an iterative pattern of training and selection is adopted to train an effective detection model. Extensive experiments demonstrate that the proposed method achieves the required performance with fewer labeled data.
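One plausible form of the uncertainty-sampling step is sketched below; the scoring rule (confidences near 0.5 are most informative) is a common heuristic and not necessarily the paper's exact formulation:

```python
def select_for_annotation(image_confidences, budget):
    """Rank images by detection uncertainty and return the annotation candidates.

    `image_confidences` maps an image id to the detector's per-defect
    confidences on that image. Confidences near 0.5 sit close to the
    decision boundary and are treated as most informative; the `budget`
    most uncertain images form the candidate list for annotation.
    """
    def uncertainty(confs):
        if not confs:
            return 1.0  # no detections at all: treat as maximally uncertain
        return sum(1.0 - abs(2.0 * c - 1.0) for c in confs) / len(confs)

    ranked = sorted(image_confidences,
                    key=lambda img: uncertainty(image_confidences[img]),
                    reverse=True)
    return ranked[:budget]
```

In the iterative pattern the abstract describes, this selection would alternate with retraining: label the selected images, retrain the detector, rescore the remaining pool, and select again.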



Sensors ◽  
2019 ◽  
Vol 19 (21) ◽  
pp. 4794
Author(s):  
Alejandro Rodriguez-Ramos ◽  
Adrian Alvarez-Fernandez ◽  
Hriday Bavle ◽  
Pascual Campoy ◽  
Jonathan P. How

Deep- and reinforcement-learning techniques increasingly require large sets of real data to achieve stable convergence and generalization in image recognition, object detection, and motion control. The research community still lacks robust approaches for compensating for unavailable real-world data by means of realistic synthetic information and domain-adaptation techniques. In this work, synthetic-learning strategies are used for the vision-based autonomous following of a noncooperative multirotor. The complete maneuver was learned from synthetic images and high-dimensional low-level continuous robot states, using deep learning for object detection and reinforcement learning for motion control. A novel motion-control strategy for object following is introduced in which the camera gimbal movement is coupled with the multirotor motion during following. The results confirm that the framework can be used to deploy a vision-based task in real flight using synthetic data; it was extensively validated in both simulated and real-flight scenarios, with good results (following a multirotor at up to 1.3 m/s in simulation and 0.3 m/s in real flights).
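The coupling of gimbal and body motion can be pictured with a hypothetical proportional sketch: the gimbal absorbs vertical image error while the multirotor yaws to centre the target and translates to maintain its apparent size. The gains, structure, and function name are illustrative only; the paper learns this behaviour with deep reinforcement learning rather than fixed gains:

```python
def coupled_follow_command(err_x_px, err_y_px, err_size,
                           kp_yaw=0.005, kp_gimbal=0.004, kp_fwd=0.5):
    """Toy coupled gimbal/body controller for target following.

    err_x_px / err_y_px: horizontal / vertical pixel error of the target
    from the image centre; err_size: difference between the desired and
    current apparent target size (a proxy for range).
    """
    yaw_rate = kp_yaw * err_x_px              # rad/s: centre target horizontally
    gimbal_pitch_rate = kp_gimbal * err_y_px  # rad/s: gimbal tracks vertical error
    forward_speed = kp_fwd * err_size         # m/s: close range via apparent size
    return yaw_rate, gimbal_pitch_rate, forward_speed
```

Splitting the vertical error off to the gimbal is what keeps the target in frame without forcing the multirotor to pitch aggressively, which is the intuition behind coupling the two motions.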



2013 ◽  
Vol 2013 ◽  
pp. 1-10 ◽  
Author(s):  
Alessandro Galli ◽  
Davide Comite ◽  
Ilaria Catapano ◽  
Gianluca Gennarelli ◽  
Francesco Soldovieri ◽  
...  

Effective diagnostics with ground-penetrating radar (GPR) depends strongly on the amount and quality of available data as well as on the efficiency of the adopted imaging procedure. In this context, the aim of the present work is to investigate the capability of a typical GPR system placed at a ground interface to derive three-dimensional (3D) information on the features of buried dielectric targets (location, dimension, and shape). The scatterers can have sizes comparable to the resolution limits and can be placed in the shallow subsurface, within the antenna near field. Referring to canonical multimonostatic configurations, the forward scattering problem is analyzed first, obtaining a variety of synthetic GPR traces and radargrams by means of a customized implementation of an electromagnetic CAD tool. Using these numerical data, a full 3D frequency-domain microwave tomographic approach, specifically designed for the inversion problem at hand, is applied to the imaging process. The method is tested on various scatterers with different shapes and dielectric contrasts. The selected tomographic results illustrate the ability of the proposed approach to recover the fundamental features of the targets even in critical GPR settings.



Geophysics ◽  
2009 ◽  
Vol 74 (1) ◽  
pp. S1-S10 ◽  
Author(s):  
Mathias Alerini ◽  
Bjørn Ursin

Kirchhoff migration is based on a continuous integral ranging from minus infinity to plus infinity. The necessary discretization and truncation of this integral introduces noise into the migrated image. The attenuation of this noise has been studied by many authors, who propose different strategies; the main idea is to limit the migration operator around the specular point. This requires that the specular point be known before migration and that a criterion exist for determining the size of the migration operator. We propose an original approach to estimating the size of the focusing window from the known geologic dip. The approach benefits from the use of prestack depth migration in the angle domain, which is recognized as the most artifact-free Kirchhoff-type migration. The main advantages of the method are its ease of implementation in an existing angle-migration code (two- or three-dimensional), its user friendliness, its ability to account for multiple orientations of the local geology, as in faulted regions, and its flexibility with respect to the quality of the estimated geologic-dip field. Common-image gathers resulting from the method are free from migration noise and can be postprocessed more easily. We validate the approach and its possibilities on synthetic data examples with different levels of complexity.
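Limiting the migration operator around the specular point amounts to applying a spatial weight to the Kirchhoff summation. One possible realisation, not the paper's formula, is a cosine-tapered window centred on the specular point (whose location and the window size would come from the estimated geologic dip); the taper avoids the ringing that a hard truncation of the operator would reintroduce:

```python
import numpy as np

def focusing_window(midpoints, specular, half_width, ramp):
    """Tapered operator-limiting weight for a Kirchhoff summation.

    Full weight within `half_width` of the specular point, then a cosine
    ramp of length `ramp` down to zero, so the truncated operator does
    not produce truncation artifacts.
    """
    d = np.abs(np.asarray(midpoints, dtype=float) - specular)
    w = np.zeros_like(d)
    w[d <= half_width] = 1.0
    edge = (d > half_width) & (d < half_width + ramp)
    w[edge] = 0.5 * (1.0 + np.cos(np.pi * (d[edge] - half_width) / ramp))
    return w
```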


