Calibration of Planar Reflectors Reshaping LiDAR’s Field of View

Sensors ◽  
2021 ◽  
Vol 21 (19) ◽  
pp. 6501
Author(s):  
Michał Pełka ◽  
Janusz Będkowski

This paper describes a calibration method for estimating the parameters (position and orientation) of planar reflectors that reshape a LiDAR's (light detection and ranging) field of view. The method is based on the reflection equation embedded in an ICP (Iterative Closest Point) optimization. A novel calibration process formulated as a multi-view data registration scheme is proposed, so the poses of the measurement instrument and the parameters of the planar reflectors are estimated simultaneously. The resulting metric measurements are more accurate than those obtained from the mechanical design alone; calibration is therefore essential for affordable solutions, where the actual assembly can deviate from the mechanical design. The measurement error is below 20 cm for almost all measurements while preserving long-range capabilities. The experiment uses a Livox Mid-40 LiDAR augmented with six planar reflectors. Ground-truth data were collected with a Z+F IMAGER 5010 3D terrestrial laser scanner. The calibration method is independent of the mechanical design and does not require fiducial markers on the mirrors. This work fills the gap between rotating and solid-state LiDARs, since the field of view can be reshaped by planar reflectors while the proposed method preserves metric accuracy. We prepared an open-source project and provide all the data necessary to reproduce the experiments: the complete source code, the mechanical design of the reflector assembly, and the dataset used in this paper.
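The geometric core of such a calibration is the reflection equation: a point measured "through" a planar mirror is the mirror image of the true surface point across the mirror plane. A minimal sketch of that mapping (not the authors' implementation; the plane parameters here are illustrative):

```python
import numpy as np

def reflect(points, n, d):
    """Reflect Nx3 points across the plane n.x + d = 0 (n is normalized here)."""
    n = n / np.linalg.norm(n)
    # Signed distance of each point to the plane, then mirror across it
    dist = points @ n + d
    return points - 2.0 * dist[:, None] * n

# An apparent point behind the mirror maps back to the true surface point
p = np.array([[0.0, 0.0, 2.0]])                       # measured (virtual) point
n = np.array([0.0, 0.0, 1.0])                         # mirror plane z = 1
mirror = reflect(p, n, -1.0)                          # true point [0, 0, 0]
```

Because reflection is an involution, applying it twice recovers the input, which is a convenient sanity check when the plane parameters are being optimized.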

Author(s):  
C. Chen ◽  
X. Zou ◽  
M. Tian ◽  
J. Li ◽  
W. Wu ◽  
...  

To automate the 3D indoor mapping task, a low-cost multi-sensor robotic laser scanning system is proposed in this paper. The system includes a panoramic camera, a laser scanner, and an inertial measurement unit, which are calibrated and synchronized to collect 3D indoor data simultaneously. Experiments were undertaken in a typical indoor scene, and the data generated by the proposed system were compared with ground truth data collected by a TLS scanner; 99.2% of the deviations fall below 0.25 m, demonstrating the applicability and precision of the system for indoor mapping applications.


2016 ◽  
Vol 8 (1) ◽  
Author(s):  
Geoffrey Fairchild ◽  
Lalindra De Silva ◽  
Sara Y. Del Valle ◽  
Alberto M. Segre

Traditional disease surveillance systems suffer from several disadvantages, including reporting lags and antiquated technology, which have driven a movement towards internet-based disease surveillance systems. This study examines the use of Wikipedia article content for this purpose. We demonstrate how a named-entity recognizer can be trained to tag case, death, and hospitalization counts in article text. We also show that Wikipedia contains detailed, consistently updated time series data that closely align with ground truth data. We argue that Wikipedia can be used to create the first community-driven, open-source system for emerging disease detection, monitoring, and archiving.
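As a toy stand-in for the trained named-entity recognizer, the tagging task can be approximated by matching count-plus-keyword spans in article text (a deliberately simplified illustration, not the paper's model, which is a trained statistical NER):

```python
import re

# Tag epidemiological counts ("cases", "deaths", "hospitalizations") in text.
PATTERN = re.compile(r"(\d[\d,]*)\s+(cases|deaths|hospitalizations)", re.IGNORECASE)

text = "Officials reported 1,024 cases and 13 deaths as of Friday."
counts = {kind.lower(): int(num.replace(",", ""))
          for num, kind in PATTERN.findall(text)}
```

A real recognizer generalizes far beyond such fixed patterns (e.g., "at least a dozen new infections"), which is why the paper trains one rather than relying on regular expressions.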


Author(s):  
P. Raumonen ◽  
E. Casella ◽  
K. Calders ◽  
S. Murphy ◽  
M. Åkerblom ◽  
...  

This paper presents a method for automatically reconstructing a quantitative structure model of every tree in a forest plot from terrestrial laser scanner data. A new feature is the automatic extraction of individual trees from the point cloud. The method is tested on a 30-m-diameter English oak plot and an 80-m-diameter Australian eucalyptus plot. For the oak plot, total biomass was overestimated by about 17% compared with allometry (N = 15), and the modelling time was about 100 min on a laptop. For the eucalyptus plot, total biomass was overestimated by about 8.5% compared with a destructive reference (N = 27), and the modelling time was about 160 min. The method provides accurate and fast tree modelling for, e.g., biomass estimation and for producing ground truth data for airborne measurements at large spatial scales.


Heritage ◽  
2019 ◽  
Vol 2 (3) ◽  
pp. 1835-1851 ◽  
Author(s):  
Hafizur Rahaman ◽  
Erik Champion

The 3D reconstruction of real-world heritage objects using either a laser scanner or 3D modelling software is typically expensive and requires a high level of expertise. Image-based 3D modelling software, on the other hand, offers a cheaper alternative that can handle this task with relative ease. Free and open-source software (FOSS) also exists, with the potential to deliver quality data for heritage documentation purposes. However, contemporary academic discourse seldom presents survey-based feature lists or a critical inspection of potential production pipelines, nor does it typically provide direction and guidance for non-experts interested in learning, developing, and sharing 3D content on a restricted budget. To address these issues, a set of FOSS applications was studied based on their features, workflow, 3D processing time, and accuracy. Two datasets were used to compare and evaluate the FOSS applications based on the point clouds they produced. The average deviation from ground truth data produced by a commercial application (Metashape, formerly PhotoScan) was measured with CloudCompare software. The 3D reconstructions generated by FOSS produce promising results with significant accuracy, and the tools are easy to use. We believe this investigation will help non-expert users understand photogrammetry and select the most suitable software for producing image-based 3D models at low cost for visualisation and presentation purposes.
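The cloud-to-cloud deviation reported by CloudCompare reduces, in its simplest form, to a nearest-neighbour distance between an evaluated cloud and a reference cloud. A brute-force sketch of that metric (CloudCompare itself uses accelerated spatial structures; the clouds here are synthetic):

```python
import numpy as np

def mean_c2c_distance(cloud, reference):
    """Mean nearest-neighbour (cloud-to-cloud) distance, brute force O(N*M)."""
    d = np.linalg.norm(cloud[:, None, :] - reference[None, :, :], axis=2)
    return float(d.min(axis=1).mean())

# Synthetic check: a grid shifted 0.1 m along z deviates by exactly 0.1 m
xs = np.arange(3.0)
grid = np.array([[x, y, 0.0] for x in xs for y in xs])
shifted = grid + np.array([0.0, 0.0, 0.1])
deviation = mean_c2c_distance(shifted, grid)
```

For clouds of realistic size, a k-d tree (e.g., `scipy.spatial.cKDTree`) replaces the quadratic distance matrix.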


2015 ◽  
Vol 13 ◽  
pp. 209-215
Author(s):  
B. Jaehn ◽  
P. Lindner ◽  
G. Wanielik

Abstract. A key component of automated driving is 360° environment detection. The recognition capabilities of modern sensors are always limited to their direct field of view, and in urban areas many objects occlude important areas of interest. Information captured by another sensor from another perspective can resolve such occlusions. Furthermore, the ability to detect and classify objects in the surroundings can be improved by taking multiple views into account. In order to combine the data of two sensors into one coordinate system, a rigid transformation matrix has to be derived. The accuracy of modern, e.g., satellite-based, relative pose estimation systems is not sufficient to guarantee a suitable alignment. Therefore, this work uses a registration-based approach that aligns the environment data captured by two sensors from different positions; the relative pose estimate obtained by traditional methods is thereby improved, and the data can be fused. To support this, we present an approach that utilizes the uncertainty information of modern tracking systems to determine the possible field of view of the other sensor. Furthermore, it estimates which parts of the captured data are directly visible to both sensors, taking occlusion and shadowing effects into account. Afterwards, a registration method based on the iterative closest point (ICP) algorithm is applied to that data to obtain an accurate alignment. The contribution of the presented approach to the achievable accuracy is shown with the help of ground truth data from a LiDAR simulation within a 3-D crossroad model. Results show that a two-dimensional position and heading estimate is sufficient to initialize a successful 3-D registration process. Furthermore, it is shown which initial spatial alignment is necessary to obtain suitable registration results.
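The inner step of ICP, once correspondences are fixed, is the closed-form least-squares rigid transform between two matched point sets. A self-contained sketch of that step using the SVD-based (Kabsch) solution, with synthetic points rather than sensor data:

```python
import numpy as np

def best_fit_transform(A, B):
    """Least-squares rigid transform (R, t) with R @ a + t ≈ b for matched rows."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)                 # cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                  # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cb - R @ ca
    return R, t

# Recover a known rotation about z and translation from matched points
A = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.], [1., 1., 1.]])
theta = 0.3
R0 = np.array([[np.cos(theta), -np.sin(theta), 0.],
               [np.sin(theta),  np.cos(theta), 0.],
               [0., 0., 1.]])
t0 = np.array([1.0, 2.0, 3.0])
B = A @ R0.T + t0
R, t = best_fit_transform(A, B)
```

Full ICP wraps this step in a loop that re-estimates nearest-neighbour correspondences; the paper's contribution lies in initializing and constraining that loop with pose uncertainty and visibility reasoning.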


Author(s):  
C. Chen ◽  
B. S. Yang ◽  
S. Song

Driven by the miniaturization and decreasing weight of positioning and remote sensing sensors, as well as the urgent need to fuse indoor and outdoor maps for next-generation navigation, 3D indoor mapping from mobile scanning is a hot research and application topic. The point clouds with auxiliary data such as colour and infrared images derived from a 3D indoor mobile mapping suite can be used in a variety of novel applications, including indoor scene visualization, automated floorplan generation, gaming, reverse engineering, navigation, and simulation. State-of-the-art 3D indoor mapping systems equipped with multiple laser scanners produce accurate point clouds of building interiors containing billions of points. However, these laser-scanner-based systems are mostly expensive and not portable. Low-cost consumer RGB-D cameras provide an alternative way to address the core challenge of indoor mapping: capturing the detailed underlying geometry of building interiors. Nevertheless, RGB-D cameras have a very limited field of view, resulting in low efficiency during data collection and incomplete datasets missing major building structures (e.g., ceilings, walls). Attempting to capture a complete scene without data gaps using a single RGB-D camera is not technically sound because of the large amount of human labour required and the number of position parameters to be solved. To find an efficient and low-cost solution for 3D indoor mapping, we present in this paper an indoor mapping suite prototype built upon a novel calibration method that calibrates the internal and external parameters of multiple RGB-D cameras. Three Kinect sensors are mounted on a rig with different view directions to form a large field of view.
The calibration procedure is threefold: first, the internal parameters of the colour and infrared cameras inside each Kinect are calibrated using a chessboard pattern; second, the external parameters between the colour and infrared cameras inside each Kinect are calibrated using a chessboard pattern; third, the external parameters between the Kinects are first calculated using a pre-set calibration field and further refined by an iterative closest point algorithm. Experiments are carried out to validate the proposed method on RGB-D datasets collected by the indoor mapping suite prototype. The effectiveness and accuracy of the proposed method are evaluated by comparing the point clouds derived from the prototype with ground truth data collected by a commercial terrestrial laser scanner at ultra-high density. The overall analysis of the results shows that the proposed method achieves seamless integration of multiple point clouds from different RGB-D cameras collected at 30 frames per second.
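Once step 3 yields each Kinect's pose in the rig frame, fusing the clouds is a matter of composing rigid transforms. A sketch with hypothetical extrinsics (the angles, offsets, and points below are illustrative, not the prototype's calibration values):

```python
import numpy as np

def to_homogeneous(R, t):
    """Pack rotation R and translation t into a 4x4 rigid transform."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def rot_z(a):
    return np.array([[np.cos(a), -np.sin(a), 0.],
                     [np.sin(a),  np.cos(a), 0.],
                     [0., 0., 1.]])

# Hypothetical extrinsics from step 3: each Kinect's pose in the rig frame.
T_rig_left  = to_homogeneous(rot_z(np.pi / 4),  [0.1, 0.0, 0.0])
T_rig_right = to_homogeneous(rot_z(-np.pi / 4), [-0.1, 0.0, 0.0])

def to_rig(points, T):
    """Map Nx3 camera-frame points into the shared rig frame."""
    ph = np.hstack([points, np.ones((len(points), 1))])
    return (ph @ T.T)[:, :3]

pts_left  = np.array([[1.0, 0.0, 0.0]])   # one point seen by each camera
pts_right = np.array([[1.0, 0.0, 0.0]])
merged = np.vstack([to_rig(pts_left, T_rig_left),
                    to_rig(pts_right, T_rig_right)])
```

The homogeneous form makes composition and inversion uniform, which matters when the pre-set calibration field gives camera-to-field poses that must be chained into camera-to-camera ones.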


Author(s):  
N. F. Khalid ◽  
A. H. M. Din ◽  
K. M. Omar ◽  
M. F. A. Khanan ◽  
A. H. Omar ◽  
...  

Advanced Spaceborne Thermal Emission and Reflection Radiometer Global Digital Elevation Model (ASTER GDEM), Shuttle Radar Topography Mission (SRTM), and Global Multi-resolution Terrain Elevation Data 2010 (GMTED2010) are freely available Digital Elevation Model (DEM) datasets for environmental modeling and studies. The spatial resolution and vertical accuracy of the DEM data source have a great influence on results, specifically for inundation mapping. Most coastal inundation risk studies have used publicly available DEMs to estimate coastal inundation and the associated damage, especially to human populations, under rising sea levels. In this study, ground truth data from Global Positioning System (GPS) observations are compared with each DEM to evaluate its accuracy. SRTM shows better vertical accuracy than ASTER and GMTED2010, with an RMSE of 6.054 m. In addition to accuracy, the correlation with the ground truth was assessed, yielding a high coefficient of determination of 0.912 for SRTM. For the coastal zone, DEMs based on an airborne light detection and ranging (LiDAR) dataset were used as ground truth for terrain height. In this case, the LiDAR DEM is compared against the new SRTM DEM after applying a scale factor. The findings show that the accuracy of the new SRTM-based model can be improved by applying the scale factor, reducing the RMSE to 0.503 m. Hence, this new model is the most suitable and meets the accuracy requirement for coastal inundation risk assessment using open-source data. Assessing the suitability of these datasets for further analysis in coastal management studies is vital for identifying areas potentially vulnerable to coastal inundation.
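The vertical RMSE quoted above is a straightforward statistic over co-located height pairs. A minimal sketch (the sample heights are invented for illustration; the study's values come from GPS campaigns and DEM rasters):

```python
import numpy as np

def vertical_rmse(gps_heights, dem_heights):
    """Root-mean-square vertical error of DEM heights against GPS ground truth."""
    diff = np.asarray(dem_heights, dtype=float) - np.asarray(gps_heights, dtype=float)
    return float(np.sqrt(np.mean(diff ** 2)))

# Illustrative co-located height pairs (metres)
gps = [10.0, 20.0, 30.0]
dem = [12.0, 19.0, 33.0]
rmse = vertical_rmse(gps, dem)
```

Because RMSE squares the residuals, a few large outliers (e.g., DEM cells contaminated by vegetation or buildings) dominate the statistic, which is one reason LiDAR-derived bare-earth DEMs serve as the coastal reference here.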


2021 ◽  
Author(s):  
Anand Seethepalli ◽  
Kundan Dhakal ◽  
Marcus Griffiths ◽  
Haichao Guo ◽  
Gregoire T. Freschet ◽  
...  

Roots are central to the function of natural and agricultural ecosystems, driving plant acquisition of soil resources and influencing the carbon cycle. Root characteristics such as length, diameter, and volume are critical to measure in order to understand plant and soil functions. RhizoVision Explorer is open-source software designed to support root researchers by providing an easy-to-use interface, fast image processing, and reliable measurements. The default broken-roots mode is intended for roots sampled from pots or soil cores, washed, and typically scanned on a flatbed scanner, and provides measurements such as length, diameter, and volume. The optional whole-root mode for complete root systems or root crowns provides additional measurements such as angles, root depth, and convex hull. Both modes support measurements grouped by defined diameter ranges, multiple regions of interest, and batch analysis. RhizoVision Explorer was successfully validated against ground truth data using a novel copper wire image set. In comparison, the current reference software, the commercial WinRhizo™, drastically underestimated volume when wires of different diameters appeared in the same image. Additionally, measurements were compared with WinRhizo™ and IJ_Rhizo using a simulated root image set, showing general agreement in software measurements, except for root volume. Finally, scanned root image sets acquired in different labs for crop, herbaceous, and tree species were used to compare results from RhizoVision Explorer with WinRhizo™. The two software packages showed general agreement, except that WinRhizo™ substantially underestimated root volume relative to RhizoVision Explorer.
In the current context of rapidly growing interest in root science, RhizoVision Explorer aims to become a reference software, improve the overall accuracy and replicability of root trait measurements, and provide a foundation for collaborative improvement and reliable access for all.
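Why mixed diameters bias volume is easy to see if each root segment is treated as a cylinder: volume scales with the square of the diameter, so summing per-segment volumes differs from applying one average diameter to the total length. A toy illustration (this is the geometric intuition only, not either package's actual algorithm):

```python
import numpy as np

# One fine root and one coarse root of equal length (units: cm)
lengths   = np.array([10.0, 10.0])
diameters = np.array([0.05, 0.50])

# Correct: sum per-segment cylinder volumes
true_volume = float(np.sum(np.pi * (diameters / 2) ** 2 * lengths))

# Biased: one mean diameter applied to the total length
mean_d = diameters.mean()
naive_volume = float(np.pi * (mean_d / 2) ** 2 * lengths.sum())
```

The naive estimate undershoots because averaging the diameters before squaring discards the coarse root's disproportionate contribution, mirroring the volume underestimation observed for images containing wires of different diameters.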


Author(s):  
E.-K. Stathopoulou ◽  
M. Welponer ◽  
F. Remondino

Abstract. State-of-the-art automated image orientation (Structure from Motion) and dense image matching (Multi-View Stereo) methods commonly used to produce 3D information from 2D images can generate 3D results, such as point clouds or meshes, of varying geometric and visual quality. The pipelines are generally robust and reliable, mostly capable of processing even large sets of unordered images, yet the final results often lack completeness and accuracy, especially in real-world cases where objects are typically characterized by complex geometries and textureless surfaces and where obstacles or occluded areas may occur. In this study we investigate three commonly used open-source solutions, namely COLMAP, OpenMVG+OpenMVS, and AliceVision, evaluating their results under diverse large-scale scenarios. Comparisons and critical evaluation of the image orientation and dense point cloud generation algorithms are performed with respect to the corresponding ground truth data. The presented FBK-3DOM datasets are available for research purposes.


2020 ◽  
Author(s):  
Mariano I. Gabitto ◽  
Herve Marie-Nelly ◽  
Ari Pakman ◽  
Andras Pataki ◽  
Xavier Darzacq ◽  
...  

We consider the problem of single-molecule identification in super-resolution microscopy. Super-resolution microscopy overcomes the diffraction limit by localizing individual fluorescing molecules in a field of view. This is particularly difficult because each individual molecule appears and disappears randomly across time and because the total number of molecules in the field of view is unknown. Additionally, datasets acquired with super-resolution microscopes can contain a large number of spurious fluorescent fluctuations caused by background noise. To address these problems, we present a Bayesian nonparametric framework capable of identifying individual emitting molecules in super-resolved time series. We tackle the identification problem in the case in which each individual molecule is already localized in space. First, we collapse observations in time and develop a fast algorithm that builds upon the Dirichlet process. Next, we augment the model to account for the temporal aspects of fluorophore photophysics. Finally, we assess the performance of our methods on ground-truth datasets with known biological structure.
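The appeal of the Dirichlet process here is that it places a prior over partitions without fixing the number of clusters (emitters) in advance. Its sequential form, the Chinese restaurant process, can be sketched directly (a generic DP illustration, not the paper's collapsed algorithm; `alpha` and the seed are arbitrary):

```python
import numpy as np

def chinese_restaurant_process(n, alpha, rng):
    """Sample cluster assignments for n observations under a DP(alpha) prior.

    Observation i joins an existing cluster k with probability proportional
    to its current size, or opens a new cluster with probability
    proportional to alpha.
    """
    assignments = [0]
    counts = [1]                       # counts[k] = size of cluster k
    for _ in range(1, n):
        probs = np.array(counts + [alpha], dtype=float)
        probs /= probs.sum()
        k = int(rng.choice(len(probs), p=probs))
        if k == len(counts):           # a new cluster is opened
            counts.append(1)
        else:
            counts[k] += 1
        assignments.append(k)
    return assignments

labels = chinese_restaurant_process(200, alpha=1.0, rng=np.random.default_rng(0))
```

The expected number of clusters grows only logarithmically with n, which matches the intuition that ever more localizations keep mapping onto a modest set of true emitters.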

