REMOTELY SENSED DATA FUSION IN MODERN AGE ARCHAEOLOGY AND MILITARY HISTORICAL RECONSTRUCTION

Author(s):  
A. Juhász ◽  
H. Neuberger

LiDAR technology has become one of the major remote sensing methods in the last few years. There are several areas where scanned 3D point clouds can be used very efficiently. In our study we review the potential applications of LiDAR data in military historical reconstruction. Obviously, the basis of this kind of investigation must be the archival data, but it is an interesting challenge to integrate a cutting-edge method into such tasks. LiDAR technology can be very useful, especially in vegetation-covered areas, where conventional remote sensing technologies are mostly inefficient. We review two typical sample projects in which we integrated LiDAR data into military historical GIS reconstruction. Finally, we summarize how laser-scanned data can support the different parts of the reconstruction work and define the technological steps of LiDAR data processing.


2021 ◽  
Vol 13 (13) ◽  
pp. 2485
Author(s):  
Yi-Chun Lin ◽  
Raja Manish ◽  
Darcy Bullock ◽  
Ayman Habib

Maintenance of roadside ditches is important to avoid localized flooding and premature failure of pavements. Scheduling effective preventative maintenance requires a reasonably detailed mapping of the ditch profile to identify areas in need of excavation to remove long-term sediment accumulation. This study utilizes high-resolution, high-quality point clouds collected by mobile LiDAR mapping systems (MLMS) for mapping roadside ditches and performing hydrological analyses. The performance of alternative MLMS units, including an unmanned aerial vehicle, an unmanned ground vehicle, a portable backpack system along with its vehicle-mounted version, a medium-grade wheel-based system, and a high-grade wheel-based system, is evaluated. Point clouds from all the MLMS units are in agreement within the ±3 cm range for solid surfaces and ±7 cm range for vegetated areas along the vertical direction. The portable backpack system that could be carried by a surveyor or mounted on a vehicle is found to be the most cost-effective method for mapping roadside ditches, followed by the medium-grade wheel-based system. Furthermore, a framework for ditch line characterization is proposed and tested using datasets acquired by the medium-grade wheel-based and vehicle-mounted portable systems over a state highway. An existing ground-filtering approach—cloth simulation—is modified to handle variations in point density of mobile LiDAR data. Hydrological analyses, including flow direction and flow accumulation, are applied to extract the drainage network from the digital terrain model (DTM). Cross-sectional/longitudinal profiles of the ditch are automatically extracted from the LiDAR data and visualized in 3D point clouds and 2D images. The slope derived from the LiDAR data turned out to be very close to the highway cross slope design standards of 2% on driving lanes, 4% on shoulders, and a 6-by-1 slope for ditch lines.
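As a rough illustration of the hydrological step described above, the sketch below computes D8 flow direction and flow accumulation on a small DTM grid with numpy. It is a minimal sketch under assumed grid spacing and hypothetical names (dtm, d8_flow_accumulation), not the authors' processing chain.

import numpy as np

def d8_flow_accumulation(dtm: np.ndarray) -> np.ndarray:
    """Number of cells (including itself) draining through each DTM cell (D8 rule)."""
    rows, cols = dtm.shape
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
    dists = [np.sqrt(2), 1, np.sqrt(2), 1, 1, np.sqrt(2), 1, np.sqrt(2)]

    # Flow direction: index of the steepest downslope neighbour (-1 marks a pit)
    flow_to = -np.ones((rows, cols), dtype=int)
    for r in range(rows):
        for c in range(cols):
            best_slope, best_idx = 0.0, -1
            for k, (dr, dc) in enumerate(offsets):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols:
                    slope = (dtm[r, c] - dtm[nr, nc]) / dists[k]
                    if slope > best_slope:
                        best_slope, best_idx = slope, k
            flow_to[r, c] = best_idx

    # Flow accumulation: visit cells from highest to lowest elevation
    acc = np.ones((rows, cols))
    order = np.dstack(np.unravel_index(np.argsort(-dtm, axis=None), dtm.shape))[0]
    for r, c in order:
        k = flow_to[r, c]
        if k >= 0:
            acc[r + offsets[k][0], c + offsets[k][1]] += acc[r, c]
    return acc

# Cells with high accumulation approximate the drainage network / ditch lines
dtm = np.array([[5.0, 4.5, 4.0],
                [4.8, 4.2, 3.5],
                [4.6, 3.9, 3.0]])
print(d8_flow_accumulation(dtm))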


Sensors ◽  
2020 ◽  
Vol 20 (12) ◽  
pp. 3568 ◽  
Author(s):  
Takayuki Shinohara ◽  
Haoyi Xiu ◽  
Masashi Matsuoka

In the computer vision field, many 3D deep learning models that directly process 3D point clouds (proposed after PointNet) have been published. Moreover, deep learning-based techniques have demonstrated state-of-the-art performance for supervised learning tasks on 3D point cloud data, such as classification and segmentation tasks for open datasets in competitions. Furthermore, many researchers have attempted to apply these deep learning-based techniques to 3D point clouds observed by aerial laser scanners (ALSs). However, most of these studies were developed for 3D point clouds without radiometric information. In this paper, we investigate the possibility of using a deep learning method to solve the semantic segmentation task for airborne full-waveform light detection and ranging (lidar) data that consists of geometric information and radiometric waveform data. Thus, we propose a data-driven semantic segmentation model called the full-waveform network (FWNet), which handles the waveform of full-waveform lidar data without any conversion process, such as projection onto a 2D grid or calculation of handcrafted features. Our FWNet is based on a PointNet-based architecture, which extracts the local and global features of each input waveform, along with its corresponding geographic coordinates. Subsequently, the classifier consists of 1D convolutional layers, which predict the class vector corresponding to the input waveform from the extracted local and global features. Our trained FWNet achieved higher recall, precision, and F1 scores on unseen test data than previously proposed methods in the full-waveform lidar data analysis domain. Specifically, our FWNet achieved a mean recall of 0.73, a mean precision of 0.81, and a mean F1 score of 0.76. We further performed an ablation study to assess the effectiveness of our proposed method on the above-mentioned metrics. Moreover, we investigated the effectiveness of our PointNet-based local and global feature extraction method by visualizing the feature vectors. In this way, we have shown that our network for local and global feature extraction enables training for semantic segmentation without requiring expert knowledge of full-waveform lidar data or translation into 2D images or voxels.
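To make the described architecture concrete, here is a minimal PyTorch sketch of a PointNet-style per-point classifier in the spirit of FWNet: shared 1x1 convolutions compute per-point local features, max pooling produces a global feature that is concatenated back to every point, and 1D convolutional layers output per-point class scores. Layer widths, the class count, and the input layout (xyz plus waveform samples per point) are illustrative assumptions, not the published configuration.

import torch
import torch.nn as nn

class PointNetSegSketch(nn.Module):
    def __init__(self, in_channels: int, num_classes: int):
        super().__init__()
        # Shared per-point MLP implemented as 1x1 1D convolutions (local features)
        self.local = nn.Sequential(
            nn.Conv1d(in_channels, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.ReLU(),
            nn.Conv1d(128, 1024, 1), nn.ReLU(),
        )
        # Classifier over concatenated local (1024) + global (1024) features
        self.classifier = nn.Sequential(
            nn.Conv1d(2048, 256, 1), nn.ReLU(),
            nn.Conv1d(256, 128, 1), nn.ReLU(),
            nn.Conv1d(128, num_classes, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, num_points); channels = xyz + waveform samples
        local = self.local(x)                                  # (B, 1024, N)
        global_feat = local.max(dim=2, keepdim=True).values    # (B, 1024, 1)
        global_feat = global_feat.expand(-1, -1, x.shape[2])   # broadcast to each point
        fused = torch.cat([local, global_feat], dim=1)         # (B, 2048, N)
        return self.classifier(fused)                          # per-point class scores

# Example: 3 coordinates + 160 waveform samples per point, 6 target classes (assumed)
model = PointNetSegSketch(in_channels=3 + 160, num_classes=6)
scores = model(torch.randn(2, 163, 4096))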


Author(s):  
T. Wakita ◽  
J. Susaki

In this study, we propose a method to accurately extract vegetation from terrestrial three-dimensional (3D) point clouds for estimating a landscape index in urban areas. Extraction of vegetation in urban areas is challenging because the light returned by vegetation does not show patterns as clear as those of man-made objects, and because urban areas contain various objects from which vegetation must be discriminated. The proposed method takes a multi-scale voxel approach to effectively extract different types of vegetation in complex urban areas. With two different voxel sizes, a process is repeated that calculates the eigenvalues of the local planar surface from a set of points, classifies voxels using the approximate curvature of the voxel of interest derived from the eigenvalues, and examines the connectivity of the valid voxels. We applied the proposed method to two data sets measured in a residential area in Kyoto, Japan. The validation results were acceptable, with F-measures of approximately 95% and 92%. It was also demonstrated that several types of vegetation were successfully extracted by the proposed method, whereas occluded vegetation was omitted. We conclude that the proposed method is suitable for extracting vegetation in urban areas from terrestrial light detection and ranging (LiDAR) data. In the future, the proposed method will be applied to mobile LiDAR data, and its performance on lower-density point clouds will be examined.
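A minimal numpy sketch of the per-voxel geometric test described above: the eigenvalues of the point covariance within a voxel give an approximate change-of-curvature value that is close to zero on planar, man-made surfaces and larger for volumetric vegetation returns. The function names and the curvature threshold are illustrative assumptions, not the values used in the study.

import numpy as np

def voxel_curvature(points: np.ndarray) -> float:
    """points: (N, 3) xyz inside one voxel; returns lambda_min / sum(lambda)."""
    if points.shape[0] < 3:
        return np.nan  # too few returns to characterize a local surface
    cov = np.cov(points.T)                       # 3x3 covariance matrix
    eigvals = np.sort(np.linalg.eigvalsh(cov))   # ascending: l0 <= l1 <= l2
    return float(eigvals[0] / eigvals.sum())     # ~0 for planes, larger for scatter

def classify_voxel(points: np.ndarray, curvature_threshold: float = 0.05) -> str:
    # Threshold is illustrative; connectivity checks between voxels are omitted here
    c = voxel_curvature(points)
    if np.isnan(c):
        return "undetermined"
    return "vegetation-like" if c > curvature_threshold else "planar"

# Example: a nearly flat patch vs. a noisy, volumetric "canopy" patch
rng = np.random.default_rng(0)
plane = np.c_[rng.uniform(0, 1, (200, 2)), 0.01 * rng.normal(size=200)]
canopy = rng.uniform(0, 1, (200, 3))
print(classify_voxel(plane), classify_voxel(canopy))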


Author(s):  
A. C. Blanco ◽  
A. M. Tamondong ◽  
A. M. C. Perez ◽  
M. R. C. O. Ang ◽  
E. C. Paringit

The Philippines embarked on a nationwide mapping endeavour through the Disaster Risk and Exposure Assessment for Mitigation (DREAM) Program of the University of the Philippines and the Department of Science and Technology (DOST). The derived accurate digital terrain models (DTMs) are used in flood models to generate risk maps and early warning systems. With the availability of LiDAR data sets, the Phil-LiDAR 2 program was conceptualized as complementary to existing programs of various national government agencies and to assist local government units. Phil-LiDAR 2 aims to provide an updated natural resource inventory, as detailed as possible, using LiDAR point clouds, LiDAR derivative products, orthoimages and other RS data. The program assesses the following natural resources over a period of three years from July 2014: agricultural, forest, coastal, water, and renewable energy. To date, methodologies for extracting features from LiDAR data sets have been developed. The methodologies are based on a combination of object-based image analysis, pixel-based image analysis, optimization of feature selection and parameter values, and field surveys. One of the features of the Phil-LiDAR 2 program is the involvement of fifteen (15) universities throughout the country, most of which have no prior experience in remote sensing and mapping. To address this, the program has embarked on massive training and mentoring. The program is producing more than 200 young RS specialists who help protect the environment through RS and other geospatial technologies. This paper presents the program, the methodologies developed so far, and sample outputs.


2019 ◽  
Vol 8 (7) ◽  
pp. 296 ◽  
Author(s):  
Doug Stead ◽  
Davide Donati ◽  
Andrea Wolter ◽  
Matthieu Sturzenegger

The stability and deformation behavior of high rock slopes depend on many factors, including geological structures, lithology, geomorphic processes, stress distribution, and groundwater regime. A comprehensive mapping program is, therefore, required to investigate and assess the stability of high rock slopes. However, slope steepness, rockfalls and ongoing instability, difficult terrain, and other safety concerns may prevent the collection of data by means of traditional field techniques. Therefore, remote sensing methods are often critical for performing an effective investigation. In this paper, we describe the application of field and remote sensing approaches for the characterization of rock slopes at various scales and distances. Based on over 15 years of experience gained by the Engineering Geology and Resource Geotechnics Research Group at Simon Fraser University (Vancouver, Canada), we provide a summary of the potential applications, advantages, and limitations of various remote sensing techniques for comprehensive characterization of rock slopes. We illustrate how remote sensing methods have been critical in performing rock slope investigations. However, we observe that traditional field methods remain indispensable for collecting important intact rock and discontinuity condition data.


2021 ◽  
Author(s):  
Christine Fey ◽  
Klaus Voit ◽  
Volker Wichmann ◽  
Christian Zangerl ◽  
Volkmar Mair

The use of high-resolution 3D point clouds and digital terrain models (DTMs) from laser scanning or photogrammetry is increasingly becoming state of the art in landslide studies. Based on a multi-temporal terrestrial laser scanning (TLS) dataset of the deep-seated compound rockslide Laatsch, South Tyrol, we present a multi-method approach to characterize processes such as sliding, falling, toppling, and flows. Sliding is the predominant process of the Laatsch rockslide, accompanied by secondary processes such as rockfall, debris flows and erosion. The presented methods are applicable to all kinds of 3D point clouds and are not limited to TLS data. For remote sensing-based landslide analyses, a distinction between two classes of surface processes is necessary: i) processes where the original surface is destroyed and no correlations between the shape and texture of the pre- and post-failure surfaces can be found (falls, rapid flows, rapid slides), and ii) processes where the surface is displaced without major surface changes (slow slides, slow flows and toppling). For processes where the original surface is destroyed, the distance between the pre- and post-failure terrain surface is measured with the aim of delineating the scarp and depositional area, and quantifying the failure volume as well as the scarp thickness. With DTMs of difference (DoD), the distance is measured along the plumb line. DoDs can be used to quickly and reliably assess the volume and extent of fall processes on flat to moderate slopes. For steep or even overhanging terrain, a 3D distance measurement approach must be used, where the distance is measured along the local surface normal. After 3D distance measurement, the volume of steep scarp areas can be calculated by first rotating the point cloud into the horizontal plane (making use of the average surface normal) and then interpolating the rotated 3D distance measurement values into a grid. Summing up the distances and multiplying by the cell area of the grid yields the scarp rupture volume. Remote sensing-based analyses of sliding and toppling processes are more complex than those of fall processes because the displaced surface patch must be detected in both surveys. Displacement analyses based on image correlation of ambient occlusion shaded relief images, together with DTMs of both epochs, are used to analyse the displacement of the entire rockslide area. The result is a map with displacement vectors. Disadvantages of image correlation are the coarse spatial resolution and the inability, as it is a 2.5D approach, to deal with steep slope sections. To analyse the displacement and toppling of steep rock walls, a combination of the 3D distance measurement approach and an iterative closest point (ICP) based approach is applied. The 3D distance measurement values are clustered and used for a segmentation of the point cloud. In a next step, the ICP is applied to each of the resulting segments. This approach can handle 3D displacements. The results are still sensitive to the geometric contrast within the segments, and the approach is not fully automated yet.
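For the DoD-based volume estimate described above, a minimal numpy sketch is given below: subtract the pre- and post-failure DTM grids, mask changes below a level-of-detection threshold, and sum the remaining differences multiplied by the cell area. The grids, cell size and threshold are illustrative assumptions only, not values from the Laatsch case study.

import numpy as np

def dod_volumes(dtm_pre: np.ndarray, dtm_post: np.ndarray,
                cell_size: float, lod: float = 0.1):
    """Return (erosion_volume, deposition_volume) in cubic metres from a DoD."""
    dod = dtm_post - dtm_pre                 # vertical change per cell (m)
    dod[np.abs(dod) < lod] = 0.0             # ignore change below level of detection
    cell_area = cell_size ** 2
    erosion = -dod[dod < 0].sum() * cell_area      # material lost (scarp area)
    deposition = dod[dod > 0].sum() * cell_area    # material gained (deposit area)
    return erosion, deposition

# Example with a synthetic 1 m resolution grid
pre = np.full((100, 100), 1500.0)
post = pre.copy()
post[20:40, 20:40] -= 2.0     # a 20 m x 20 m, 2 m deep detachment
post[60:80, 60:80] += 1.5     # a corresponding deposit downslope
print(dod_volumes(pre, post, cell_size=1.0))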


Author(s):  
Jianqing Wu ◽  
Hao Xu ◽  
Yuan Sun ◽  
Jianying Zheng ◽  
Rui Yue

The high-resolution micro traffic data (HRMTD) of all roadway users is important for serving the connected-vehicle system in mixed traffic situations. The roadside LiDAR sensor offers a solution by providing HRMTD from real-time 3D point clouds of the objects it scans. Background filtering is the preprocessing step for obtaining the HRMTD of different roadway users from roadside LiDAR data. It can significantly reduce the data processing time and improve vehicle/pedestrian identification accuracy. An algorithm based on the spatial distribution of laser points is proposed in this paper, which filters both static and moving background efficiently. Various point-density thresholds are applied in this algorithm to exclude background at different distances from the roadside sensor. The case study shows that the algorithm can filter background LiDAR points in different situations (different road geometries, different traffic demands, day/night time, different speed limits). Vehicle and pedestrian shapes are retained well after background filtering. The low computational load ensures that this method can be applied to real-time data processing such as vehicle monitoring and pedestrian tracking.
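As a rough illustration of distance-dependent density filtering, the sketch below aggregates several roadside LiDAR frames into a coarse voxel grid, marks voxels whose accumulated point count exceeds a range-dependent threshold as background, and drops points in those voxels from each new frame. The voxel size, thresholds, and sensor-at-origin assumption are illustrative, not the published algorithm's parameters.

import numpy as np

VOXEL = 0.5  # voxel edge length in metres (assumed)

def voxel_keys(points: np.ndarray) -> np.ndarray:
    """Integer voxel index (N, 3) for each xyz point."""
    return np.floor(points / VOXEL).astype(np.int64)

def build_background(frames, near_thresh=50, far_thresh=10, near_range=20.0) -> set:
    """Voxels densely occupied across many frames are treated as static background."""
    counts = {}
    for pts in frames:
        for key in map(tuple, voxel_keys(pts)):
            counts[key] = counts.get(key, 0) + 1
    background = set()
    for key, n in counts.items():
        centre = (np.array(key) + 0.5) * VOXEL
        dist = np.linalg.norm(centre[:2])   # horizontal range, sensor assumed at origin
        # Point density drops with range, so the far-range threshold is lower
        thresh = near_thresh if dist < near_range else far_thresh
        if n >= thresh:
            background.add(key)
    return background

def filter_frame(points: np.ndarray, background: set) -> np.ndarray:
    """Remove points that fall into background voxels, keeping road users."""
    keep = [tuple(k) not in background for k in voxel_keys(points)]
    return points[np.array(keep, dtype=bool)]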


2021 ◽  
Vol 13 (15) ◽  
pp. 2999
Author(s):  
Ramón Alberto Díaz-Varela ◽  
Eduardo González-Ferreiro

Society is increasingly aware of the important role of forests and other woodlands as cultural heritage and as providers of different ecosystem services, such as biomass provision, soil protection, hydrological regulation, biodiversity conservation and carbon sequestration, among others [...]

