A Versatile Method for Depth Data Error Estimation in RGB-D Sensors

Sensors ◽  
2018 ◽  
Vol 18 (9) ◽  
pp. 3122 ◽  
Author(s):  
Elizabeth Cabrera ◽  
Luis Ortiz ◽  
Bruno Silva ◽  
Esteban Clua ◽  
Luiz Gonçalves

We propose a versatile method for estimating the RMS error of depth data provided by generic 3D sensors capable of generating RGB and depth (D) data of the scene, i.e., those based on techniques such as structured light, time of flight and stereo. A common checkerboard is used: its corners are detected and two point clouds are created, one with the real coordinates of the pattern corners and one with the corner coordinates given by the device. After registration of these two clouds, the RMS error is computed. Then, using curve fitting methods, an equation is obtained that generalizes the RMS error as a function of the distance between the sensor and the checkerboard pattern. The depth errors estimated by our method are compared to those estimated by state-of-the-art approaches, validating its accuracy and utility. This method can be used to rapidly estimate the quality of RGB-D sensors, facilitating robotics applications such as SLAM and object recognition.
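The registration-then-RMS step described above can be sketched in plain numpy. This is an illustrative sketch, not the authors' implementation: a closed-form Kabsch alignment stands in for whatever registration the paper uses, and it assumes the corresponding corner points have already been extracted.

```python
import numpy as np

def rigid_align(src, dst):
    """Kabsch: rotation R and translation t minimizing ||src @ R.T + t - dst||."""
    cs, cd = src.mean(0), dst.mean(0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cd - R @ cs

def rms_error(measured, reference):
    """Rigidly register the measured corner cloud, then report the RMS residual."""
    R, t = rigid_align(measured, reference)
    residuals = measured @ R.T + t - reference
    return np.sqrt(np.mean(np.sum(residuals ** 2, axis=1)))
```

Repeating this at several sensor-to-pattern distances and fitting, e.g., `np.polyfit(distances, rms_values, 2)` yields the kind of RMS-versus-distance curve the paper derives.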

Author(s):  
Luis Fernandez ◽  
Viviana Avila ◽  
Luiz Goncalves

We propose an approach for estimating the error in depth data provided by generic 3D sensors, modern devices capable of generating an image (RGB data) and a depth map (distance) or a similar 2.5D structure (e.g. stereo disparity) of the scene. Our approach starts by capturing images of a checkerboard pattern devised for the method. It then proceeds with the construction of a dense depth map using functions that generally come with the device SDK (based on disparity or depth). Next, 2D processing of the RGB data is performed to find the checkerboard corners. Finally, clouds of corner points are created (in 3D), over which an RMS error estimate is computed. We developed a multi-platform system, which was verified and evaluated using the development kit of the nVIDIA Jetson TK1 board with the MS Kinect v1/v2 and the Stereolabs ZED camera. The main contribution is an error determination procedure that needs no dataset or benchmark, relying only on data acquired on the fly. With a simple checkerboard, our approach is able to determine the error for any device. The envisioned application is 3D reconstruction for robotic vision, with a series of 3D vision sensors carried on robots (quadcopter UAVs and terrestrial robots) for high-precision map construction, which can be used for sensing and monitoring.
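The dense-depth-map step can be illustrated with the standard pinhole back-projection that turns a depth image into a camera-frame point cloud. A minimal numpy sketch; the intrinsics `fx, fy, cx, cy` are hypothetical values a device SDK would normally supply:

```python
import numpy as np

def depth_to_cloud(depth, fx, fy, cx, cy):
    """Back-project each depth pixel (in meters) to a 3D point in the camera frame."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)   # one row per pixel
```

Corner pixels found in the RGB image can then be looked up in this cloud (row index `v * w + u`) to build the 3D corner clouds over which the RMS error is computed.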


Author(s):  
Robert Niederheiser ◽  
Martin Mokroš ◽  
Julia Lange ◽  
Helene Petschko ◽  
Günther Prasicek ◽  
...  

Terrestrial photogrammetry nowadays offers a reasonably cheap, intuitive and effective approach to 3D-modelling. However, the important choice of which sensor and which software to use is not straightforward and needs consideration, as the choice will have effects on the resulting 3D point cloud and its derivatives.

We compare five different sensors as well as four different state-of-the-art software packages for a single application, the modelling of a vegetated rock face. The five sensors represent different resolutions, sensor sizes and price segments of the cameras. The software packages used are: (1) Agisoft PhotoScan Pro (1.16), (2) Pix4D (2.0.89), (3) a combination of Visual SFM (V0.5.22) and SURE (1.2.0.286), and (4) MicMac (1.0). We took photos of a vegetated rock face from identical positions with all sensors. Then we compared the results of the different software packages regarding the ease of the workflow, visual appeal, similarity and quality of the point cloud.

While PhotoScan and Pix4D offer the user-friendliest workflows, they are also “black-box” programmes giving only little insight into their processing. Unsatisfying results may only be changed by modifying settings within a module. The combined workflow of Visual SFM, SURE and CloudCompare is just as simple but requires more user interaction. MicMac turned out to be the most challenging software as it is less user-friendly. However, MicMac offers the most possibilities to influence the processing workflow. The resulting point clouds of PhotoScan and MicMac are the most appealing.




2021 ◽  
Vol 6 (1) ◽  
pp. 1-3
Author(s):  
Sina Farsangi ◽  
Mohamed A. Naiel ◽  
Mark Lamm ◽  
Paul Fieguth

Structured Light (SL) patterns generated from pseudo-random arrays are widely used for single-shot 3D reconstruction with projector-camera systems. These SL images consist of a set of tags with different appearances; the patterns are projected on a target surface, then captured by a camera and decoded. The precision of localizing these tags in the captured camera images affects the quality of the pixel correspondences between the projector and the camera, and consequently that of the derived 3D shape. In this paper, we incorporate a quadrilateral representation for the detected SL tags that allows the construction of robust and accurate pixel correspondences and the application of a spatial rectification module that leads to high tag classification accuracy. When applying the proposed method to single-shot 3D reconstruction, we show its effectiveness over a baseline in estimating denser and more accurate 3D point clouds.
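The spatial rectification of a detected tag quadrilateral can be sketched as a four-point direct linear transform (DLT) that maps the quadrilateral to a canonical square before classification. A minimal numpy version, not the paper's module; the 32-pixel canonical tag size is an assumption:

```python
import numpy as np

def homography_from_quad(quad, size=32):
    """DLT homography mapping 4 detected tag corners to a canonical size x size square."""
    dst = np.array([[0, 0], [size, 0], [size, size], [0, size]], float)
    A = []
    for (x, y), (u, v) in zip(quad, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # the homography is the null vector of A (last right singular vector)
    _, _, Vt = np.linalg.svd(np.array(A))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]
```

Warping the camera image through `H` gives an upright, scale-normalized tag patch, which is what makes high classification accuracy possible regardless of the surface slant.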


Sensors ◽  
2019 ◽  
Vol 19 (4) ◽  
pp. 810 ◽  
Author(s):  
Erzhuo Che ◽  
Jaehoon Jung ◽  
Michael Olsen

Mobile Laser Scanning (MLS) is a versatile remote sensing technology based on Light Detection and Ranging (lidar) that has been utilized for a wide range of applications. Several previous reviews focusing on the applications or characteristics of these systems exist in the literature; however, the many innovative data processing strategies described in the literature have not been reviewed in sufficient depth. To this end, we review and summarize the state of the art for MLS data processing approaches, including feature extraction, segmentation, object recognition, and classification. In this review, we first discuss the impact of the scene type on the development of an MLS data processing method. Then, where appropriate, we describe relevant generalized algorithms for feature extraction and segmentation that are applicable to and implemented in many processing approaches. The methods for object recognition and point cloud classification are further reviewed, including both the general concepts and technical details. In addition, available benchmark datasets for object recognition and classification are summarized. Further, the current limitations and challenges that a significant portion of point cloud processing techniques face are discussed. This review concludes with our outlook on the trends and opportunities of MLS data processing algorithms and applications.


Sensors ◽  
2019 ◽  
Vol 19 (20) ◽  
pp. 4451
Author(s):  
Himri ◽  
Ridao ◽  
Gracias

This paper addresses the problem of object recognition from colorless 3D point clouds in underwater environments. It presents a performance comparison of state-of-the-art global descriptors, which are readily available as open source code. The studied methods are intended to assist Autonomous Underwater Vehicles (AUVs) in performing autonomous interventions in underwater Inspection, Maintenance and Repair (IMR) applications. A set of test objects were chosen as being representative of IMR applications whose shape is typically known a priori. As such, CAD models were used to create virtual views of the objects under realistic conditions of added noise and varying resolution. Extensive experiments were conducted from both virtual scans and from real data collected with an AUV equipped with a fast laser sensor developed in our research centre. The underwater testing was conducted from a moving platform, which can create deformations in the perceived shape of the objects. These effects are considerably more difficult to correct than in above-water counterparts, and therefore may affect the performance of the descriptor. Among other conclusions, the testing we conducted illustrated the importance of matching the resolution of the database scans and test scans, as this significantly impacted the performance of all descriptors except one. This paper contributes to the state-of-the-art as being the first work on the comparison and performance evaluation of methods for underwater object recognition. It is also the first effort using comparison of methods for data acquired with a free floating underwater platform.
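As a toy stand-in for the global descriptors compared in such work, a histogram of point distances from the cloud centroid already illustrates the descriptor-plus-nearest-neighbour recognition pipeline, and why mismatched scan resolution (which reshapes the histogram) hurts matching. A sketch only, not any of the evaluated descriptors:

```python
import numpy as np

def centroid_distance_histogram(points, bins=16, r_max=1.0):
    """Crude global shape descriptor: normalized histogram of point
    distances from the cloud centroid (a stand-in for VFH/ESF-style descriptors)."""
    d = np.linalg.norm(points - points.mean(0), axis=1)
    h, _ = np.histogram(d, bins=bins, range=(0, r_max))
    return h / max(h.sum(), 1)

def match(query, database):
    """Recognize by nearest neighbour in descriptor space (L1 distance)."""
    dists = [np.abs(query - d).sum() for d in database]
    return int(np.argmin(dists))
```

Because the descriptor is a point-count histogram, scanning the database objects and the test objects at very different densities changes the bin populations, which is one intuition for the resolution-matching effect the paper reports.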


Author(s):  
M. Adduci ◽  
K. Amplianitis ◽  
R. Reulke

Human detection and tracking has been a prominent research area for scientists around the globe. State-of-the-art algorithms have been implemented, refined and accelerated to significantly improve the detection rate and eliminate false positives. While 2D approaches are well investigated, 3D human detection and tracking is still a largely unexplored research field. In both the 2D and 3D cases, introducing a multi-camera system can vastly improve the accuracy and confidence of the tracking process. Within this work, a quality evaluation is performed on a multi RGB-D camera indoor tracking system to examine how camera calibration and pose affect the quality of human tracks in the scene, independently of the detection and tracking approach used. After performing a calibration step on every Kinect sensor, state-of-the-art single-camera pose estimators were evaluated to check how well poses are estimated using planar objects such as an ordinary chessboard. With this information, a bundle block adjustment and ICP were performed to verify the accuracy of the single pose estimators in a multi-camera configuration. Results have shown that single-camera estimators provide high-accuracy results of less than half a pixel, forcing the bundle to converge after very few iterations. In relation to ICP, relative information between cloud pairs is more or less preserved, giving a low fitting score between concatenated pairs. Finally, sensor calibration proved to be an essential step for achieving maximum accuracy in the generated point clouds, and therefore in the accuracy of the 3D trajectories produced from each sensor.
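The ICP verification step can be sketched as a minimal point-to-point ICP: alternate nearest-neighbour pairing with a closed-form Kabsch update. This is an illustrative numpy sketch (brute-force pairing, no outlier rejection), not the evaluation code used in the paper:

```python
import numpy as np

def icp(src, dst, iters=20):
    """Point-to-point ICP: nearest-neighbour pairing + closed-form Kabsch update."""
    cur = src.copy()
    for _ in range(iters):
        # pair every moved source point with its nearest destination point
        nn = np.argmin(((cur[:, None] - dst[None]) ** 2).sum(-1), axis=1)
        tgt = dst[nn]
        cs, ct = cur.mean(0), tgt.mean(0)
        U, _, Vt = np.linalg.svd((cur - cs).T @ (tgt - ct))
        d = np.sign(np.linalg.det(Vt.T @ U.T))   # keep it a proper rotation
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        cur = (cur - cs) @ R.T + ct
    # final pairing and RMS residual, the "fitting score" between a cloud pair
    nn = np.argmin(((cur[:, None] - dst[None]) ** 2).sum(-1), axis=1)
    fitness = np.sqrt(np.mean(((cur - dst[nn]) ** 2).sum(-1)))
    return cur, fitness
```

A low fitness between concatenated cloud pairs, as reported above, indicates that the relative poses recovered per sensor are mutually consistent.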


2020 ◽  
Author(s):  
Saeed Nosratabadi ◽  
Amir Mosavi ◽  
Puhong Duan ◽  
Pedram Ghamisi ◽  
Ferdinand Filip ◽  
...  

This paper provides a state-of-the-art investigation of advances in data science in emerging economic applications. The analysis was performed on novel data science methods in four individual classes: deep learning models, hybrid deep learning models, hybrid machine learning, and ensemble models. Application domains include a wide and diverse range of economics research, from the stock market, marketing, and e-commerce to corporate banking and cryptocurrency. The PRISMA method, a systematic literature review methodology, was used to ensure the quality of the survey. The findings reveal that the trends follow the advancement of hybrid models, which, based on the accuracy metric, outperform other learning algorithms. It is further expected that the trends will converge toward the advancement of sophisticated hybrid deep learning models.


Author(s):  
Megha Chhabra ◽  
Manoj Kumar Shukla ◽  
Kiran Kumar Ravulakollu

Latent fingerprints are unintentional finger skin impressions left as ridge patterns at crime scenes. A major challenge in latent fingerprint forensics is the poor quality of the image lifted from the crime scene. Forensics investigators are in constant search of novel, effective technologies to capture and process low-quality images. The accuracy of the results depends upon the quality of the image captured at the outset, the metrics used to assess that quality, and thereafter the level of enhancement required. The low quality of images collected by low-quality scanners, unstructured background noise, poor ridge quality, and overlapping structured noise result in the detection of false minutiae and hence reduce the recognition rate. Traditionally, image segmentation and enhancement are partially done manually with the help of highly skilled experts. Using automated systems for this work, images of widely varying quality can be investigated faster. This survey amplifies the comparative study of the various segmentation techniques available for latent fingerprint forensics.

