TITAN to Google Earth: Workflow for Data Processing for Mobile Terrestrial LIDAR

Author(s): Michael Martin

Terrestrial LiDAR scanners are pushing the boundaries of accurate urban modelling. Automation and the usability of the tools used in feature abstraction and, to a lesser degree, presentation have become the chief concerns with this new technology. To broaden the use and impact of LiDAR in geomatics, LiDAR datasets must be converted to feature-based representations without loss of precision. One approach, taken here, is to simultaneously examine the overall path that data takes through an organization and the operator-driven tasks carried out on the data as it is transformed from a raw point cloud into a final product. We present a review of current practices in LiDAR data processing and a foundation for future efforts to optimize them. We examine alternative LiDAR processing workflows with two key questions in mind: computational efficiency (whether the process can be done with the tools at all) and tool complexity (what operator skill level is needed at each step). Using these workflows, we examine the usability of the specific software tools and the knowledge required to carry out the procedures effectively. Preliminary results have yielded workflows that successfully translate LiDAR into 3D object models, highly decimated point representations of street data rendered in Google Earth, and large-volume point-data flythroughs in ESRI ArcScene. We are documenting the pragmatic limits of each of these workflows and tools for end-users. Terrestrial LiDAR brings with it innovations in spatial visualization, but also questions of viability. The technology has proved valuable for specialized applications used by experts, but can it be useful as a tool for proliferating 3D spatial information by and to non-experts? This study illustrates the issues associated with preparing 3D LiDAR data for presentation in mainstream visualization environments.
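The abstract does not specify how the point decimation is performed; purely as an illustrative sketch (not the authors' tool chain), a simple voxel-grid thinning pass in Python/NumPy can reduce a dense street-scene point cloud to a size a visualization environment such as Google Earth can handle before export:

```python
import numpy as np

def voxel_decimate(points: np.ndarray, cell: float = 0.5) -> np.ndarray:
    """Keep one representative point per cell x cell x cell voxel.

    points : (N, 3) array of x, y, z coordinates (metres).
    cell   : voxel edge length; larger values decimate more aggressively.
    """
    # Integer voxel index of each point.
    keys = np.floor(points / cell).astype(np.int64)
    # np.unique on the voxel indices keeps one surviving point per voxel.
    _, keep = np.unique(keys, axis=0, return_index=True)
    return points[np.sort(keep)]

if __name__ == "__main__":
    # Synthetic stand-in for a mobile terrestrial LiDAR street scene.
    cloud = np.random.rand(1_000_000, 3) * [100.0, 100.0, 10.0]
    thinned = voxel_decimate(cloud, cell=0.5)
    print(f"{len(cloud)} points reduced to {len(thinned)}")
```

The decimated coordinates could then be written out to a format the target viewer accepts (e.g. KML for Google Earth), which is where the workflow's tool-complexity questions arise.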

2020, Vol. 12 (15), pp. 2497
Author(s): Rohan Bennett, Peter van Oosterom, Christiaan Lemmen, Mila Koeva

Land administration constitutes the socio-technical systems that govern land tenure, use, value, and development within a jurisdiction. The land parcel is the fundamental unit of analysis. Each parcel has identifiable boundaries, associated rights, and linked parties. Spatial information is fundamental: it represents the boundaries between land parcels and is embedded in cadastral sketches, plans, maps, and databases. The boundaries are expressed in these records using mathematical or graphical descriptions. They are also expressed physically with monuments or natural features. Ideally, the recorded and physical expressions should align; in practice, however, this may not occur, meaning that some boundaries may be physically invisible, lack accurate documentation, or both. Emerging remote sensing tools and techniques offer great potential. Historically, the measurements used to produce recorded boundary representations were generated from ground-based surveying techniques. That approach was, and remains, entirely appropriate in many circumstances, although it can be time-consuming and costly and may capture only very limited contextual boundary information. Meanwhile, advances in remote sensing and photogrammetry offer improved measurement speeds, reduced costs, higher image resolutions, and enhanced sampling granularity. Applications of unmanned aerial vehicles (UAVs), airborne and terrestrial laser scanning (LiDAR), radar interferometry, machine learning, and artificial intelligence techniques all provide examples. Coupled with emergent societal challenges relating to poverty reduction, rapid urbanisation, vertical development, and complex infrastructure management, the contemporary motivation to use these new techniques is high. Fundamentally, they enable more rapid, cost-effective, and tailored approaches to 2D and 3D land data creation, analysis, and maintenance. This Special Issue hosts papers focusing on this intersection of emergent remote sensing tools and techniques applied to the domain of land administration.
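The parcel-rights-parties structure described above is essentially a small data model. Purely as an illustrative sketch (the class and field names below are assumptions for illustration, not taken from any paper in the issue or from a specific standard), a minimal parcel record might look like this:

```python
from dataclasses import dataclass, field

@dataclass
class Party:
    name: str                      # person or organisation holding an interest

@dataclass
class Right:
    kind: str                      # e.g. "ownership", "lease", "easement"
    holder: Party

@dataclass
class Parcel:
    parcel_id: str
    boundary: list[tuple[float, float]]   # recorded boundary as coordinate pairs
    rights: list[Right] = field(default_factory=list)

# A parcel links its recorded (mathematical) boundary to rights and parties.
lot = Parcel(
    parcel_id="NL-0001",
    boundary=[(0.0, 0.0), (25.0, 0.0), (25.0, 40.0), (0.0, 40.0)],
    rights=[Right("ownership", Party("A. de Vries"))],
)
```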


2007, Vol. 46 (22), pp. 4879
Author(s): Valery Shcherbakov

2021, Vol. 13 (13), pp. 2473
Author(s): Qinglie Yuan, Helmi Zulhaidi Mohd Shafri, Aidi Hizami Alias, Shaiful Jahari Hashim

Automatic building extraction has been applied in many domains, yet it remains a challenging problem because of complex scenes and the multiscale nature of buildings. Deep learning algorithms, especially fully convolutional neural networks (FCNs), have shown more robust feature extraction ability than traditional remote sensing data processing methods. However, hierarchical features from encoders with a fixed receptive field are weak at capturing global semantic information. Local features in multiscale subregions cannot establish contextual interdependence and correlation, especially for large-scale building areas, which can cause fragmentary extraction results due to intra-class feature variability. In addition, low-level features carry accurate, fine-grained spatial information for small building structures but lack refinement and selection, and the semantic gap between features at different levels is not conducive to feature fusion. To address these problems, this paper proposes an FCN framework based on the residual network and provides a training scheme for multi-modal data that combines the advantages of high-resolution aerial imagery and LiDAR data for building extraction. Two novel modules are proposed for the optimization and integration of multiscale and cross-level features. In particular, a multiscale context optimization module is designed to adaptively generate feature representations for different subregions and effectively aggregate global context. A semantic-guided spatial attention mechanism is introduced to refine shallow features and alleviate the semantic gap. Finally, hierarchical features are fused via a feature pyramid network. Compared with other state-of-the-art methods, experimental results demonstrate superior performance, with 93.19% IoU and 97.56% OA on the WHU dataset and 94.72% IoU and 97.84% OA on the Boston dataset, showing that the proposed network improves accuracy and achieves better performance for building extraction.
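The abstract does not give the module definitions, so the following is only a rough sketch of the general idea behind a semantic-guided spatial attention block, not the authors' implementation: deep, semantically rich features are upsampled, turned into a spatial weight map, and used to reweight the shallow features before fusion (PyTorch, with assumed layer choices):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SemanticGuidedSpatialAttention(nn.Module):
    """Illustrative sketch: deep (semantic) features gate shallow (spatial) features."""

    def __init__(self, low_channels: int, high_channels: int):
        super().__init__()
        # Project the deep features to a single-channel spatial attention map.
        self.attn = nn.Sequential(
            nn.Conv2d(high_channels, low_channels, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(low_channels, 1, kernel_size=1),
        )

    def forward(self, low_feat: torch.Tensor, high_feat: torch.Tensor) -> torch.Tensor:
        # Upsample the deep features to the shallow features' resolution.
        high_up = F.interpolate(high_feat, size=low_feat.shape[2:],
                                mode="bilinear", align_corners=False)
        weights = torch.sigmoid(self.attn(high_up))   # (N, 1, H, W), values in [0, 1]
        # Reweight shallow features so semantically relevant locations dominate.
        return low_feat * weights + low_feat

if __name__ == "__main__":
    low = torch.randn(2, 64, 128, 128)    # shallow encoder features
    high = torch.randn(2, 256, 32, 32)    # deep encoder features
    out = SemanticGuidedSpatialAttention(64, 256)(low, high)
    print(out.shape)  # torch.Size([2, 64, 128, 128])
```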


Author(s): Mohammad Pashaei, Michael J. Starek, Philippe Tissot, Jacob Berryhill

CATENA, 2016, Vol. 142, pp. 269-280
Author(s): F. Neugirg, A. Kaiser, A. Huber, T. Heckmann, M. Schindewolf, ...

2021
Author(s): Nicolas C. Barth, Greg M. Stock, Kinnari Atit

Abstract. This study highlights a Geology of Yosemite Valley virtual field trip (VFT) and companion exercises produced as a four-part module to substitute for physical field experiences. The VFT is created as an Earth project in Google Earth Web, a versatile format that can be accessed through a web browser or the Google Earth app simply by sharing an internet address. Many dynamic resources can be used for VFT stops through the Google Earth engine (global satellite imagery draped on topography, 360° street-level imagery, user-submitted 360° photospheres). Images, figures, videos, and narration can be embedded into VFT stops. Hyperlinks allow a wide range of external resources to be incorporated; optional background resources help reduce the knowledge gap between the general public and upper-division students, ensuring VFTs can be broadly accessible. As with many in-person field trips, there is a script with learning goals for each stop, but there is also an opportunity to learn through exploration, since the viewer can dynamically change their vantage point at each stop (i.e. guided discovery learning). This interactive VFT format scaffolds students' spatial skills and focuses attention on each stop's critical spatial information. The progression from VFT to mapping exercise to geologically reasoned decision-making results in high-quality student work; students find it engaging, enjoyable, and educational.
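Google Earth Web projects are normally authored interactively, but the kind of content a VFT stop bundles (a point location, narration, an embedded image, and a hyperlink to optional background material) can be sketched as KML. The following minimal Python sketch uses hypothetical URLs and only approximate coordinates; it is an illustration, not the authors' workflow:

```python
from xml.sax.saxutils import escape

def vft_stop_kml(name: str, lon: float, lat: float,
                 image_url: str, reading_url: str, narration: str) -> str:
    """Return a single-placemark KML document for one virtual field trip stop."""
    description = (
        f"<![CDATA[<p>{escape(narration)}</p>"
        f'<img src="{image_url}" width="400"/>'
        f'<p><a href="{reading_url}">Optional background reading</a></p>]]>'
    )
    return f"""<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Placemark>
    <name>{escape(name)}</name>
    <description>{description}</description>
    <Point><coordinates>{lon},{lat},0</coordinates></Point>
  </Placemark>
</kml>"""

if __name__ == "__main__":
    # Placeholder URLs and approximate coordinates, for illustration only.
    print(vft_stop_kml("Stop 1: Tunnel View", -119.677, 37.716,
                       "https://example.org/tunnel_view.jpg",
                       "https://example.org/background.html",
                       "Classic view of Yosemite Valley's glacially carved profile."))
```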


Author(s): A. S. Garov, I. P. Karachevtseva, E. V. Matveev, A. E. Zubarev, I. V. Florinsky

We are developing a unified distributed communication environment for the processing of spatial data which integrates web, desktop, and mobile platforms and combines the volunteer computing model with public cloud capabilities. The main idea is to create a flexible working environment for research groups, which may be scaled according to the required data volume and computing power while keeping infrastructure costs to a minimum. It is based upon the "single window" principle, which combines data access via geoportal functionality, processing capabilities, and communication between researchers. The recently developed planetary information system (http://cartsrv.mexlab.ru/geoportal) will be updated using this innovative software environment. The new system will provide spatial data processing, analysis, and 3D visualization, and will be tested on freely available Earth remote sensing data as well as Solar System planetary images from various missions. This approach will make it possible to organize research and the presentation of results at a new technological level, offering more opportunities for the immediate and direct reuse of research materials, including data, algorithms, methodology, and components. The new software environment is targeted at remote scientific teams and will provide access to existing distributed spatial information, for which we suggest implementing a user interface as an advanced front-end, e.g., for a virtual globe system.
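As a purely hypothetical sketch of the volunteer-computing side of such an environment (the endpoint paths and JSON task schema below are invented for illustration and are not the project's actual API), a worker node might poll for a task, fetch the referenced spatial-data tile, process it locally, and post the result back:

```python
import json
import time
import urllib.request

SERVER = "https://example.org/api"   # placeholder, not the real geoportal API

def fetch_task() -> dict:
    """Ask the (hypothetical) task queue for the next unit of work."""
    with urllib.request.urlopen(f"{SERVER}/tasks/next") as resp:
        return json.load(resp)       # e.g. {"id": 42, "tile_url": "...", "op": "hillshade"}

def submit_result(task_id: int, payload: bytes) -> None:
    """Upload the processed tile back to the (hypothetical) server."""
    req = urllib.request.Request(f"{SERVER}/tasks/{task_id}/result",
                                 data=payload, method="POST")
    urllib.request.urlopen(req)

def worker_loop(process) -> None:
    """Poll, process, submit; `process(tile_bytes, op_name)` is supplied by the volunteer client."""
    while True:
        task = fetch_task()
        if not task:                 # no work available; back off politely
            time.sleep(30)
            continue
        with urllib.request.urlopen(task["tile_url"]) as resp:
            tile = resp.read()
        submit_result(task["id"], process(tile, task["op"]))
```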


2017, Vol. 1 (2), pp. 661-670
Author(s): Willem Frans Beex

Light Detection And Ranging, or Laser Imaging Detection And Ranging (LiDAR), is not really a new technology. However, it does provide the data from which accurate models of the natural land surface, completely stripped of buildings and vegetation, can be derived. Interestingly for Cultural Heritage and Archaeology, most of the data is already freely available for research. This is certainly the case in the Netherlands, with the “Actueel Hoogtemodel Nederland 2”, or “AHN2”. The spacing of the measured points is at most 50 centimetres, which means that the remains of structures larger than one by one metre can be detected. As a result, many previously unknown structures have been discovered with it. However, these excellent results have blinded many Cultural Heritage and Archaeology practitioners to obvious mistakes when interpreting LiDAR data. This paper is intended to highlight best practices for the use of LiDAR data by Cultural Heritage professionals.
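The AHN2 product itself is produced with far more sophisticated ground filtering, but as a minimal illustration of the idea of stripping surface objects to reach the terrain, a crude minimum-height grid over 50 cm cells can be sketched in Python/NumPy. It suppresses vegetation returns over open terrain but is no substitute for proper ground classification (building roofs, for example, would survive):

```python
import numpy as np

def min_grid_dtm(points: np.ndarray, cell: float = 0.5) -> np.ndarray:
    """Grid a point cloud by keeping the lowest z value in each cell-by-cell bin.

    A crude bare-earth approximation for illustration only; production
    AHN-style filtering uses much more sophisticated classification.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    cols = ((x - x.min()) / cell).astype(int)
    rows = ((y - y.min()) / cell).astype(int)
    dtm = np.full((rows.max() + 1, cols.max() + 1), np.nan)
    for r, c, height in zip(rows, cols, z):
        if np.isnan(dtm[r, c]) or height < dtm[r, c]:
            dtm[r, c] = height
    return dtm

if __name__ == "__main__":
    # Synthetic points standing in for an AHN-like tile.
    pts = np.random.rand(10_000, 3) * [50.0, 50.0, 5.0]
    print(min_grid_dtm(pts, cell=0.5).shape)
```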

