GEOMETRIC QUALITY ASSESSMENT OF LIDAR DATA BASED ON SWATH OVERLAP

Author(s):  
A. Sampath ◽  
H. K. Heidemann ◽  
G. L. Stensaas

This paper provides guidelines on quantifying the relative horizontal and vertical errors observed between conjugate features in the overlapping regions of lidar data. Quantifying these errors is important because their magnitude characterizes the geometric quality of the data. A data set can be said to have good geometric quality if measurements of identical features, regardless of their position or orientation, yield identical results. Good geometric quality indicates that the data are produced using sensor models that work as mathematically designed, and that the data acquisition processes are not introducing any unforeseen distortion. High geometric quality also leads to high geolocation accuracy when the data acquisition process couples the sensor with geopositioning systems. Current specifications (e.g. Heidemann 2014) do not provide adequate means to quantitatively measure these errors, even though they are required to be reported. Current accuracy measurement and reporting practices followed in the industry, and recommended by data specification documents, also potentially underestimate the inter-swath errors, including the presence of systematic errors in lidar data. Hence they pose a risk to the user in terms of data acceptance (i.e. a higher potential for Type II error, the risk of accepting potentially unsuitable data). For example, if the overlap area is too small, if the sampled locations are close to the center of the overlap, or if the errors are sampled in flat regions while residual pitch errors remain in the data, the resulting Root Mean Square Differences (RMSD) can still be small. To avoid this, the following are suggested as criteria for defining the inter-swath quality of data:

a) Median Discrepancy Angle

b) Mean and RMSD of horizontal errors, using data quality measures (DQM) taken on sloping surfaces

c) RMSD for sampled locations from flat areas (defined as areas with less than 5 degrees of slope)

It is suggested that 4000-5000 points, depending on the surface roughness, be uniformly sampled in the overlapping regions of the point cloud to measure the discrepancy between swaths. Care must be taken to sample only areas of single-return points. Point-to-plane distance based data quality measures are determined for each sample point, and these measurements are used to determine the above-mentioned parameters. This paper details the measurements, and the analysis of those measurements, required to determine these metrics, i.e. the discrepancy angle, the mean and RMSD of errors in flat regions, and the horizontal errors obtained from measurements extracted from sloping regions (slope greater than 10 degrees). The research is the result of an ad hoc joint working group of the US Geological Survey and the American Society for Photogrammetry and Remote Sensing (ASPRS) Airborne Lidar Committee.
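As one concrete reading of the procedure above, the following Python sketch samples points in one swath, fits a local plane to each sample's nearest neighbours in the other swath, and aggregates the point-to-plane distances by surface slope. The synthetic terrain, the neighbourhood size, and the d / sin(slope) horizontal-error proxy are illustrative assumptions; the paper's full DQM and its median discrepancy angle statistic are not reproduced here. The example also illustrates the abstract's warning: a purely horizontal shift is invisible in flat-area RMSD but shows up on slopes.

```python
import numpy as np
from scipy.spatial import cKDTree

def point_to_plane_dqm(swath_a, swath_b, n_samples=4000, k=12, seed=0):
    """Sample swath A, fit a plane to the k nearest swath-B neighbours of
    each sample, and return signed plane distances and surface slopes (deg)."""
    rng = np.random.default_rng(seed)
    samples = swath_a[rng.choice(len(swath_a), n_samples, replace=False)]
    tree = cKDTree(swath_b)
    dists, slopes = [], []
    for p in samples:
        _, idx = tree.query(p, k=k)
        nbrs = swath_b[idx]
        centroid = nbrs.mean(axis=0)
        _, _, vt = np.linalg.svd(nbrs - centroid)        # plane fit via SVD
        normal = vt[-1] if vt[-1, 2] > 0 else -vt[-1]    # orient upward
        dists.append(np.dot(p - centroid, normal))       # signed point-to-plane distance
        slopes.append(np.degrees(np.arccos(np.clip(normal[2], -1.0, 1.0))))
    return np.asarray(dists), np.asarray(slopes)

# Synthetic overlap: flat ground for x < 50, an ~11-degree slope beyond,
# with swath B shifted 10 cm horizontally (no vertical shift).
rng = np.random.default_rng(1)
xy = rng.uniform(0, 100, (20000, 2))
z = np.where(xy[:, 0] > 50, 0.2 * (xy[:, 0] - 50), 0.0)
swath_a = np.column_stack([xy, z])
swath_b = swath_a + np.array([0.10, 0.0, 0.0])

d, s = point_to_plane_dqm(swath_a, swath_b)
flat = d[s < 5.0]
sloped = s > 10.0
print("RMSD, flat (<5 deg):", np.sqrt(np.mean(flat**2)))   # ~0: shift is hidden
# On a slope, a horizontal shift h yields a plane distance d ~ h*sin(slope),
# so d / sin(slope) gives a rough horizontal-error estimate (~0.10 here).
h = d[sloped] / np.sin(np.radians(s[sloped]))
print("horizontal error, mean / RMSD:", h.mean(), np.sqrt(np.mean(h**2)))
```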



2013 ◽  
Vol 318 ◽  
pp. 572-575
Author(s):  
Li Li Yu ◽  
Yu Hong Li ◽  
Ai Feng Wang

In this paper, a quality monitoring system for seismic while drilling (SWD) that integrates the whole data acquisition process was developed. The acquisition equipment, network status, and accelerometer and geophone signals are monitored in real time. With fast signal analysis and quality evaluation, the acquisition parameters and drilling engineering parameters can be adjusted in a timely manner. The application of the system can improve the quality of data acquisition and provide subsequent processing and interpretation with high-quality, reliable data.
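As an illustration of the fast signal analysis and quality evaluation step, the sketch below estimates the signal-to-noise ratio of one incoming trace and flags it for parameter adjustment. The window-based SNR estimate and the 6 dB threshold are assumptions for illustration, not values from the paper.

```python
import numpy as np

def trace_quality(trace, signal_win, noise_win, snr_min_db=6.0):
    """Estimate SNR (dB) of one trace from a signal window and a noise
    window (sample-index pairs); flag the trace if below threshold."""
    sig = trace[signal_win[0]:signal_win[1]]
    noise = trace[noise_win[0]:noise_win[1]]
    snr_db = 10 * np.log10(np.mean(sig**2) / np.mean(noise**2))
    return snr_db, snr_db >= snr_min_db

# Synthetic example: noise-only head, signal burst later in the trace.
rng = np.random.default_rng(0)
t = np.arange(2000)
trace = 0.1 * rng.standard_normal(2000)
trace[800:1200] += np.sin(2 * np.pi * 0.02 * t[800:1200])
snr, ok = trace_quality(trace, signal_win=(800, 1200), noise_win=(0, 400))
print(f"SNR = {snr:.1f} dB, acceptable: {ok}")  # adjust acquisition if not ok
```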


2016 ◽  
Vol 13 (4) ◽  
pp. 961-973 ◽  
Author(s):  
W. Simonson ◽  
P. Ruiz-Benito ◽  
F. Valladares ◽  
D. Coomes

Abstract. Woodlands represent highly significant carbon sinks globally, though they could lose this function under future climatic change. Effective large-scale monitoring of these woodlands has a critical role to play in mitigating, and adapting to, climate change. Mediterranean woodlands have low carbon densities, but they represent important global carbon stocks due to their extensiveness, and they are particularly vulnerable because the region is predicted to become much hotter and drier over the coming century. Airborne lidar is already recognized as an excellent approach for high-fidelity carbon mapping, but few studies have used multi-temporal lidar surveys to measure carbon fluxes in forests, and none have worked with Mediterranean woodlands. We use a multi-temporal (5-year interval) airborne lidar data set for a region of central Spain to estimate above-ground biomass (AGB) and carbon dynamics in typical mixed broadleaved and/or coniferous Mediterranean woodlands. Field calibration of the lidar data enabled the generation of grid-based maps of AGB for 2006 and 2011, from which the AGB change was estimated. There was close agreement between the lidar-based AGB growth estimate (1.22 Mg ha−1 yr−1) and those derived from two independent sources: the Spanish National Forest Inventory and a tree-ring based analysis (1.19 and 1.13 Mg ha−1 yr−1, respectively). We parameterised a simple simulator of forest dynamics using the lidar carbon flux measurements, and used it to explore four scenarios of fire occurrence. Under undisturbed conditions (no fire), an accelerating accumulation of biomass and carbon is evident over the next 100 years, with an average carbon sequestration rate of 1.95 Mg C ha−1 yr−1. This rate is reduced by almost a third when the fire probability is increased to 0.01 (a fire return rate of 100 years), as has been predicted under climate change. Our work shows the power of multi-temporal lidar surveying to map woodland carbon fluxes and to provide parameters for carbon dynamics models. Space deployment of lidar instruments in the near future could open the way for rolling out wide-scale forest carbon stock monitoring to inform management and governance responses to future environmental change.
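The fire-scenario analysis lends itself to a compact Monte Carlo sketch, shown below. The constant growth rate (the lidar-derived 1.22 Mg ha−1 yr−1 AGB estimate), the full biomass reset on fire, and the starting stock are simplifying assumptions, not the paper's calibrated simulator.

```python
import numpy as np

def mean_agb_change(p_fire, growth=1.22, agb0=50.0, years=100, runs=2000, seed=0):
    """Average net AGB change (Mg/ha/yr) over `years`, across `runs` trials.
    Each year, fire occurs with probability p_fire and resets the stand."""
    rng = np.random.default_rng(seed)
    totals = []
    for _ in range(runs):
        agb = agb0
        for _ in range(years):
            agb = 0.0 if rng.random() < p_fire else agb + growth
        totals.append((agb - agb0) / years)
    return np.mean(totals)

print("no fire       :", mean_agb_change(0.0))    # equals the growth rate
print("p_fire = 0.01 :", mean_agb_change(0.01))   # ~100-year fire return
```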


Author(s):  
Manjunath Ramachandra

The data gathered from sources are often noisy. Poor data quality results in business losses that keep increasing down the supply chain, and the end customer finds the data useless and misleading. Cleansing therefore has to be performed immediately and automatically after data acquisition. This chapter presents the different techniques for data cleansing and processing used to achieve this.
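As a generic illustration of immediate, automatic post-acquisition cleansing, the sketch below deduplicates records, converts out-of-range readings to missing, and imputes them. These particular rules and the pandas-based layout are assumptions for illustration, not the chapter's specific techniques.

```python
import pandas as pd

def clean(df, bounds):
    """Deduplicate, null out readings outside per-column (lo, hi) bounds,
    then impute missing values with the column median."""
    df = df.drop_duplicates()
    for col, (lo, hi) in bounds.items():
        df.loc[~df[col].between(lo, hi), col] = pd.NA  # out-of-range -> missing
        df[col] = df[col].fillna(df[col].median())      # median imputation
    return df

raw = pd.DataFrame({"temp_c": [21.4, 21.4, 999.0, None, 22.1]})
print(clean(raw, {"temp_c": (-40, 60)}))
```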


2020 ◽  
Vol 12 (11) ◽  
pp. 1702 ◽  
Author(s):  
Thanh Huy Nguyen ◽  
Sylvie Daniel ◽  
Didier Guériot ◽  
Christophe Sintès ◽  
Jean-Marc Le Caillec

Automatic extraction of buildings in urban and residential scenes has become a subject of growing interest in the domain of photogrammetry and remote sensing, particularly since the mid-1990s. The active contour model, colloquially known as the snake model, has been studied for extracting buildings from aerial and satellite imagery. However, this task is still very challenging due to the variability of building size and shape and of the surrounding environment. This complexity is a major obstacle to reliable large-scale building extraction, since the prior information and assumptions involved, such as building shape, size, and color, cannot be generalized over large areas. This paper presents an efficient snake model to overcome this challenge, called the Super-Resolution-based Snake Model (SRSM). The SRSM operates on high-resolution Light Detection and Ranging (LiDAR)-based elevation images, called z-images, generated by a super-resolution process applied to LiDAR data. The involved balloon force model is also improved to shrink or inflate adaptively, instead of inflating continuously. The method is applicable at large scales, such as the city scale and beyond, while offering a high level of automation and requiring no prior knowledge or training data from the urban scenes (it is hence unsupervised). It achieves high overall accuracy when tested on various datasets. For instance, the proposed SRSM yields an average area-based Quality of 86.57% and an object-based Quality of 81.60% on the ISPRS Vaihingen benchmark data sets. Compared to other methods using this benchmark, this level of accuracy would be highly desirable even for a supervised method. Similarly desirable outcomes are obtained when applying the SRSM to the whole City of Quebec (a total area of 656 km2), yielding an area-based Quality of 62.37% and an object-based Quality of 63.21%.
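For reference, the area-based Quality score quoted above can be computed from binary building masks as Quality = TP / (TP + FP + FN), the usual ISPRS-style combination of completeness and correctness. The toy masks in the sketch below are illustrative assumptions, not the paper's data.

```python
import numpy as np

def area_quality(pred, ref):
    """pred, ref: boolean 2-D masks (extracted vs. reference building pixels)."""
    tp = np.sum(pred & ref)    # building pixels found in both
    fp = np.sum(pred & ~ref)   # extracted but not in the reference
    fn = np.sum(~pred & ref)   # reference buildings that were missed
    return tp / (tp + fp + fn)

pred = np.zeros((100, 100), bool); pred[10:50, 10:50] = True
ref  = np.zeros((100, 100), bool); ref[15:55, 15:55] = True
print(f"area-based Quality: {area_quality(pred, ref):.2%}")
```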


2016 ◽  
Vol 25 (3) ◽  
pp. 431-440 ◽  
Author(s):  
Archana Purwar ◽  
Sandeep Kumar Singh

Abstract. Assessing the quality of data is an important task in data mining. The validity of mining algorithms is reduced if the data are not of good quality. The quality of data can be assessed in terms of missing values (MV) as well as noise present in the data set. Various imputation techniques have been studied for MV, but little attention has been given to noise in earlier work. Moreover, to the best of our knowledge, no one has used density-based spatial clustering of applications with noise (DBSCAN) for MV imputation. This paper proposes a novel technique, density-based imputation (DBSCANI), built on density-based clustering to deal with incomplete values in the presence of noise. The density-based clustering algorithm proposed by Kriegel et al. groups objects according to their density in spatial databases: the high-density regions are known as clusters, and the low-density regions contain the noise objects in the data set. Extensive experiments were performed on the Iris data set from the life science domain and on Jain's (2D) data set from the shape data sets. The performance of the proposed method is evaluated using the root mean square error (RMSE) and compared with the existing K-means imputation (KMI). Results show that the method is more noise resistant than KMI on the data sets under study.
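A minimal sketch of density-based imputation in the spirit of DBSCANI follows, using scikit-learn's DBSCAN. The rule of assigning an incomplete record to the cluster of its nearest complete record, and the eps/min_samples values, are illustrative assumptions rather than the paper's exact procedure.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def dbscan_impute(X, eps=0.5, min_samples=5):
    """Impute NaNs in X from the mean of the nearest complete record's
    DBSCAN cluster; noise points (label -1) fall back to the global mean."""
    X = X.astype(float)
    complete = ~np.isnan(X).any(axis=1)
    donors = X[complete]
    donor_labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(donors)
    for i in np.where(~complete)[0]:
        miss = np.isnan(X[i])
        # Nearest complete row, compared on the observed features only.
        d = np.linalg.norm(donors[:, ~miss] - X[i, ~miss], axis=1)
        lbl = donor_labels[np.argmin(d)]
        pool = donors[donor_labels == lbl] if lbl != -1 else donors
        X[i, miss] = pool[:, miss].mean(axis=0)
    return X

X = np.array([[1.0, 1.1], [1.2, 0.9], [1.1, 1.0],
              [8.0, 8.1], [8.1, 7.9], [8.2, np.nan]])
print(dbscan_impute(X, eps=1.0, min_samples=2))  # last row imputed to ~8.0
```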


Testing is essential in data warehouse systems for decision making, because the accuracy, validation, and correctness of the data depend on it. Considering the characteristics and complexity of data warehouses, this paper tries to show the scope of automated testing in assuring the best data warehouse solutions. First, a data set generator was developed to create synthetic but near-to-real data; then, in the synthesized data, anomalies were classified with the help of a hand-coded Extraction, Transformation and Loading (ETL) routine. To assure the quality of data for a data warehouse, and to show how important Extraction, Transformation and Loading is, some very important test cases were identified. After that, to ensure the quality of data, the procedures of automated testing were embedded in the hand-coded ETL routine. Statistical analysis revealed a large enhancement in the quality of data under the automated testing procedures, supporting the claim that automated testing gives promising results for data warehouse quality. For effective and easy maintenance of distributed data, a novel architecture was proposed. Although the desired result of this research was achieved and the objectives are promising, the results still need to be validated in a real-life environment: this research was done in a simulated environment, which may not always reflect real-life behavior. Hence, the full potential of the proposed architecture cannot be seen until it is deployed to manage real data that is distributed globally.
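The kind of automated post-ETL checks described above can be sketched as follows. The toy star-schema tables and the three test cases (null keys, duplicate keys, referential integrity) are illustrative assumptions, not the paper's actual test suite.

```python
import pandas as pd

def run_etl_tests(fact, dim):
    """Return test-name -> passed for a loaded fact/dimension table pair."""
    return {
        "no_null_keys":          fact["cust_id"].notna().all(),
        "no_duplicate_keys":     not dim["cust_id"].duplicated().any(),
        "referential_integrity": fact["cust_id"].isin(dim["cust_id"]).all(),
    }

# Synthetic (near-to-real) data with a seeded anomaly:
# fact row with cust_id 99 has no matching dimension record.
dim  = pd.DataFrame({"cust_id": [1, 2, 3]})
fact = pd.DataFrame({"cust_id": [1, 2, 3, 99],
                     "amount":  [10.0, 5.5, 7.2, 1.0]})
for name, ok in run_etl_tests(fact, dim).items():
    print(f"{name}: {'PASS' if ok else 'FAIL'}")
```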

