THE TOPOGRAPHIC DATA DELUGE – COLLECTING AND MAINTAINING DATA IN A 21ST CENTURY MAPPING AGENCY

Author(s):  
D. A. Holland, C. Pook, D. Capstick, A. Hemmings

In the last few years, the number of sensors and data collection systems available to a mapping agency has grown considerably. In the field, in addition to total stations measuring position, angles and distances, the surveyor can choose from hand-held GPS devices, multi-lens imaging systems or laser scanners, which may be integrated with a laptop or tablet to capture topographic data directly in the field. These systems are joined by mobile mapping solutions, mounted on large or small vehicles, or sometimes on a backpack carried by a surveyor walking around a site. Such systems allow the raw data to be collected rapidly in the field, while the interpretation of the data can be performed back in the office at a later date. In the air, large-format digital cameras and airborne lidar sensors are being augmented with oblique camera systems, which take multiple views at each camera position and are used to create more realistic 3D city models. Lower down in the atmosphere, Unmanned Aerial Vehicles (or Remotely Piloted Aircraft Systems) have suddenly become ubiquitous. Hundreds of small companies have sprung up, providing images from UAVs using ever more capable consumer cameras. It is now easy to buy a 42-megapixel camera off the shelf at the local camera shop, and Canon recently announced that it is developing a 250-megapixel sensor for the consumer market. While these sensors may not yet rival the metric cameras used by today's photogrammetrists, the rapid developments in sensor technology could eventually lead to the commoditization of high-resolution camera systems. With data streaming in from so many sources, the main issue for a mapping agency is how to interpret, store and update the data in such a way as to enable the creation and maintenance of the end product. This might be a topographic map, ortho-image or digital surface model today, but soon it is just as likely to be a 3D point cloud, textured 3D mesh, 3D city model or Building Information Model (BIM), with all the data interpretation and modelling that entails. In this paper, we describe investigations into these developing technologies and outline the findings for a National Mapping Agency (NMA). We also look at the challenges that these new data collection systems will bring to an NMA, and suggest ways that we may work to meet these challenges and deliver the products desired by our users.
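As a concrete illustration of the ingestion problem described above, the following minimal Python sketch reads an incoming LAS point cloud and summarizes its contents before it enters a storage or maintenance workflow. It uses the open-source laspy library; the file name is a placeholder, and this is not presented as any agency's actual pipeline.

```python
import laspy
import numpy as np

# Minimal sketch, assuming a LAS/LAZ point cloud from an airborne or
# mobile-mapping survey. 'survey_block.las' is a placeholder file name.
las = laspy.read('survey_block.las')
points = np.column_stack([las.x, las.y, las.z])

print('Points:', len(points))
print('Extent (min -> max):', points.min(axis=0), points.max(axis=0))
print('Classification codes present:', np.unique(las.classification))
```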


2021, Vol 3 (1), pp. 1-6
Author(s):
Hazri Hassan, Syed Ahmad Fadhli Syed Abdul Rahman

The mapping industry is one of the fields that must constantly keep pace with rapid technological development. The application of Light Detection and Ranging (LiDAR) technology in the mapping industry has opened a broad discussion involving both industry practitioners and academics. LiDAR is now a common method for topographic data collection that is faster and of higher quality than conventional methods. The observations, generally in the form of high-density points (point clouds), can also be applied to a variety of uses, particularly in mapping and terrain analysis. This paper therefore discusses LiDAR technology, including the basic principles of LiDAR, the latest developments in LiDAR methods, and the work processes involved, from the point of view of the Department of Survey and Mapping Malaysia (JUPEM).
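To make the basic principle concrete, the sketch below shows the standard time-of-flight ranging calculation that underlies LiDAR: the range is half the round-trip travel time of the laser pulse multiplied by the speed of light. The numbers are illustrative only.

```python
# Time-of-flight ranging, the basic LiDAR principle referred to above.
C = 299_792_458.0  # speed of light in a vacuum, m/s

def lidar_range(round_trip_time_s: float) -> float:
    """Distance to the target for a given round-trip pulse travel time."""
    return C * round_trip_time_s / 2.0

# Example: a return arriving 6.67 microseconds after pulse emission
print('Range: %.1f m' % lidar_range(6.67e-6))  # ~999.8 m
```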


2021, Vol 13 (7), pp. 1261
Author(s):
Riccardo Roncella, Nazarena Bruno, Fabrizio Diotri, Klaus Thoeni, Anna Giacomini

Digital surface models (DSM) have become one of the main sources of geometrical information for a broad range of applications. Image-based systems typically rely on passive sensors, which can be a strong limitation in several survey activities (e.g., night-time monitoring, underground survey and night surveillance). However, recent progress in sensor technology allows very high sensitivities, which drastically improve low-light image quality through innovative noise-reduction techniques. This work focuses on the performance of night-time photogrammetric systems devoted to the monitoring of rock slopes. The study investigates the application of different camera settings and their reliability for producing accurate DSMs. A total of 672 stereo-pairs acquired with high-sensitivity cameras (Nikon D800 and D810) at three different testing sites were considered. The dataset covers different camera configurations (ISO speed, shutter speed, aperture and image under-/over-exposure). The use of image quality assessment (IQA) methods to evaluate the quality of the images prior to 3D reconstruction is also investigated. The results show that modern high-sensitivity cameras allow the reconstruction of accurate DSMs in an extremely low-light environment and, with the correct camera setup, achieve results comparable to daylight acquisitions. This makes imaging sensors extremely versatile for monitoring applications at generally low cost.
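The camera configurations the study varies (ISO, shutter speed, aperture) can be summarized by the standard photographic exposure value. The hedged sketch below computes EV from those three settings; the example numbers are illustrative, not taken from the paper's dataset.

```python
import math

# Exposure value from aperture (f-number), shutter time and ISO, using the
# standard definition EV = log2(N^2 / t) - log2(ISO / 100).
def exposure_value(f_number: float, shutter_s: float, iso: float) -> float:
    return math.log2(f_number ** 2 / shutter_s) - math.log2(iso / 100.0)

# An illustrative night-time setup: f/2.8, 20 s exposure, ISO 6400
print('EV: %.1f' % exposure_value(2.8, 20.0, 6400))  # about -7.4
```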


2018, Vol 10 (11), pp. 1744
Author(s):
Kristen Splinter, Mitchell Harley, Ian Turner

Narrabeen-Collaroy Beach, located on the Northern Beaches of Sydney along the Pacific coast of southeast Australia, is one of the longest continuously monitored beaches in the world. This paper provides an overview of the evolution and international scientific impact of this long-term beach monitoring program, from its humble beginnings over 40 years ago using the rod-and-tape-measure Emery field survey method, to today, where remote sensing data collection, including drones, satellites and crowd-sourced smartphone images, is a core aspect of this continuing and much expanded monitoring effort. Surveying commenced in 1976 and for the first 30 years focused on in-situ methods, whereby the growing database of monthly beach profile surveys informed the coastal science community about fundamental processes such as beach state evolution and the role of cross-shore and alongshore sediment transport in embayment morphodynamics. In the mid-2000s, continuous (hourly) video-based monitoring was the first application of routine remote sensing at the site, providing much greater spatial and temporal resolution than the traditional monthly surveys. This implementation of video, the first of a now rapidly expanding range of remote sensing tools and techniques, also facilitated much wider access by the international research community to the continuing data collection program at Narrabeen-Collaroy. In the past decade, the video-based data streams have formed the basis of deeper understanding of the storm to multi-year response of the shoreline to changing wave conditions, and have also contributed to progress in the understanding of estuary entrance dynamics. More recently, 'opportunistic' remote sensing platforms such as surf cameras and smartphones have also been used for image-based shoreline data collection. Commencing in 2011, a significant new focus of the Narrabeen-Collaroy monitoring program shifted to include airborne lidar (and later Unmanned Aerial Vehicles (UAVs)), in an enhanced effort to quantify the morphological impacts of individual storm events, understand key drivers of erosion, and place these observations within their broader regional context. A fixed continuous scanning lidar installed in 2014 again improved the spatial and temporal resolution of the remotely sensed data collection, providing new insight into swash dynamics and the often-overlooked processes of post-storm beach recovery. The use of satellite data, now readily available to all coastal researchers via Google Earth Engine, continues to expand the routine data collection program and provide key insight into multi-decadal shoreline variability. As new and expanding remote sensing technologies continue to emerge, a key lesson from the long-term monitoring at Narrabeen-Collaroy is the importance of regularly re-evaluating what data are most needed to progress the science.
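As a hedged illustration of the satellite component mentioned above, the sketch below assembles a multi-year Landsat 8 image stack over the beach using Google Earth Engine's Python API. The coordinates are approximate and the cloud-cover filter is an illustrative choice, not the monitoring program's actual workflow.

```python
import ee

ee.Initialize()

# Approximate location of Narrabeen-Collaroy Beach (lon, lat)
narrabeen = ee.Geometry.Point([151.30, -33.72])

# Landsat 8 Collection 2 Level-2 scenes over the site, lightly cloud-filtered
landsat8 = (
    ee.ImageCollection('LANDSAT/LC08/C02/T1_L2')
    .filterBounds(narrabeen)
    .filterDate('2013-04-01', '2021-01-01')
    .filter(ee.Filter.lt('CLOUD_COVER', 20))
)
print('Scenes available:', landsat8.size().getInfo())
```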


Author(s):
Leena Matikainen, Juha Hyyppä, Paula Litkey

During the last 20 years, airborne laser scanning (ALS), often combined with multispectral information from aerial images, has shown its high feasibility for automated mapping processes. Recently, the first multispectral airborne laser scanners have been launched, and multispectral information is for the first time directly available for 3D ALS point clouds. This article discusses the potential of this new single-sensor technology in map updating, especially in automated object detection and change detection. For our study, Optech Titan multispectral ALS data over a suburban area in Finland were acquired. Results from a random forests analysis suggest that the multispectral intensity information is useful for land cover classification, including for ground surface objects and classes such as roads. An out-of-bag estimate of the classification error was about 3% for separating the classes asphalt, gravel, rocky areas and low vegetation from each other. For buildings and trees, it was under 1%. According to feature importance analyses, multispectral features based on several channels were more useful than those based on one channel. Automatic change detection utilizing the new multispectral ALS data, an old digital surface model (DSM) and old building vectors was also demonstrated. Overall, our first analyses suggest that the new data are very promising for further increasing the automation level in mapping. The multispectral ALS technology is independent of external illumination conditions, and intensity images produced from the data do not include shadows. These are significant advantages for the development of automated classification and change detection procedures.
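A minimal sketch of a random forests analysis with an out-of-bag error estimate, using scikit-learn, is given below. The features and labels are synthetic placeholders standing in for the multispectral intensity features and land cover classes; only the mechanics (OOB error, feature importances) mirror the analysis described.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Placeholder data: 6 features per sample (e.g. per-channel intensities and
# ratios), 6 classes (asphalt, gravel, rock, low vegetation, building, tree).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 6))
y = rng.integers(0, 6, size=1000)

# oob_score=True gives the out-of-bag accuracy used for the error estimate
clf = RandomForestClassifier(n_estimators=200, oob_score=True, random_state=0)
clf.fit(X, y)

print('OOB classification error:', 1.0 - clf.oob_score_)
print('Feature importances:', clf.feature_importances_)
```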


2019, Vol 70 (3), pp. 131-145
Author(s):
Raimondo Gallo, Gianluca Ristorto, Alex Bojeri, Nadia Zorzi, Gabriele Daglio, ...

The aim of the WEQUAL project (WEb service centre for QUALity multidimensional design and tele-operated monitoring of Green Infrastructures) is the development of a system able to support rapid environmental monitoring of riparian areas subject to the realization of new green infrastructures (GI). The WEQUAL idea is to organize a service centre able to manage both the Web Platform and the whole data collection and analysis process. Through a personal account, the final user (designer, technician, researcher) can access the service and request the evaluation of alternative GI projects. On the Web Platform, a set of algorithms calculates, through automatic procedures, all the ecological criteria required to evaluate an environmental quality index that describes the eco-morphological value of the monitored riparian areas. For this purpose, the WEQUI index was developed, which uses 15 indicators that are easy to monitor. In this paper, the approach for environmental data collection and the procedures to perform the automatic assessment of two of the ecological criteria are described. For the computation, the implemented algorithms use data including vegetation indexes, a Digital Terrain Model (DTM), a Digital Surface Model (DSM) and a 3D point cloud classification. All the raw data are collected by UAVs (Unmanned Aerial Vehicles) equipped with a 3D lidar, a multispectral camera and an RGB camera. By interpreting the raw data collected by these sensors with a multi-attribute approach, the WEQUI index is assessed. The computed ecological index is then used to assess riparian environmental quality before (ex-ante) and after (ex-post) river stabilization works. This index, integrated with additional non-technical or non-ecological indicators such as the required investment, maintenance costs or social acceptance, can be used in multicriteria analyses to evaluate the intervention from a wider point of view. The platform is expected to be attractive to GI designers and policy makers by providing a shared environment able to integrate the detection and evaluation of complex indexes with a multidimensional evaluation supported by an expert guide.
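One building block such a computation plausibly relies on is a normalized surface model derived from the UAV DSM and DTM, giving vegetation and object heights over the riparian corridor. The sketch below shows this step with rasterio and numpy; the file names are placeholders and the grids are assumed to be co-registered with matching resolution.

```python
import numpy as np
import rasterio

# Read co-registered DSM and DTM rasters (placeholder file names)
with rasterio.open('dsm.tif') as dsm_src, rasterio.open('dtm.tif') as dtm_src:
    dsm = dsm_src.read(1).astype('float64')
    dtm = dtm_src.read(1).astype('float64')

# Normalized DSM: height of vegetation/objects above ground
ndsm = dsm - dtm
ndsm = np.clip(ndsm, 0, None)  # suppress small negative artefacts

print('Mean vegetation/object height: %.2f m' % ndsm.mean())
```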


Author(s):
R. A. Loberternos, W. P. Porpetcho, J. C. A. Graciosa, R. R. Violanda, A. G. Diola, ...

The traditional remote sensing approach to mapping aquaculture ponds typically involves the use of aerial photography and high-resolution images. The current study demonstrates the use of object-based image processing and analysis of LiDAR-derived images with 1-meter resolution, namely: CHM (canopy height model), DSM (digital surface model), DTM (digital terrain model), Hillshade, Intensity, NumRet (number of returns) and Slope layers. A Canny edge detection algorithm was also applied to the Hillshade layer to create a new image (Canny layer) with more sharply defined edges. These derivative images were then used as input layers for a multi-resolution segmentation algorithm best suited to delineating the aquaculture ponds. To extract the aquaculture pond features, three major classes were identified for classification: land, vegetation and water. Classification was first performed using an assign-class algorithm that labelled segments with mean Slope values of 10 or lower as Flat Surfaces. From these Flat Surfaces, the assign-class algorithm was then applied to identify Water using a threshold value of 63.5. The segments identified as Water were then merged to form larger bodies of water, which comprise the aquaculture ponds. The present study shows that LiDAR data coupled with object-based classification can be an effective approach for mapping coastal aquaculture ponds. The workflow presented here can be used as a model to map other areas in the Philippines where aquaculture ponds exist.
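The rule set described above can be approximated on plain raster arrays, as sketched below with OpenCV and numpy. The file names are placeholders, and the choice to apply the 63.5 threshold to the Intensity layer (and its direction) is an assumption made for illustration; the study itself works on segments in object-based software rather than individual pixels.

```python
import cv2
import numpy as np

# Load derivative layers (placeholder file names)
hillshade = cv2.imread('hillshade.tif', cv2.IMREAD_GRAYSCALE)
slope = cv2.imread('slope.tif', cv2.IMREAD_UNCHANGED).astype('float64')
intensity = cv2.imread('intensity.tif', cv2.IMREAD_UNCHANGED).astype('float64')

# Canny layer: better-defined edges to support segmentation
canny = cv2.Canny(hillshade, 50, 150)

# Flat surfaces: slope of 10 or lower (applied here per pixel, not per segment)
flat = slope <= 10.0
# Water: 63.5 threshold, layer and direction assumed for this sketch
water = flat & (intensity <= 63.5)

print('Water fraction of scene: %.1f%%' % (100.0 * water.mean()))
```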


Author(s):
Goziyah Goziyah, Harninda Rizka Insani

The objective of this research was to provide an understanding of cohesion and coherence in the Bisnis Indonesia newspaper article titled "Kemenperin Jamin Serap Garam Rakyat". The research method used is content analysis with a qualitative approach, with data collected through documentation techniques. Data analysis began with data reduction, followed by data tabulation, data classification, data interpretation, and conclusions. The results show that the dominant cohesion devices found in the news text are pronouns, ellipsis, and conjunctions (connectives). The coherence relations found are contrast, general-specific, comparison, cause-effect, review, and reference relations.
Keywords: cohesion, coherence, newspapers


2021, Vol 3 (1), pp. 16-38
Author(s):
Farah Nur, Dedy Eko Aryanto, Nelita Indah Islami

This study aims to explain and analyze the segmentation of syllables and phonemes in command sentences using a phonological approach. The research uses a descriptive quantitative method. Data were collected from an informant, whose speech was then segmented using the PRAAT application. The six kinds of data discussed in this study are: (1) images of the annotated sound segmentation produced with the PRAAT application, (2) the number of words, syllables, and syllable patterns in each sentence, (3) a description of the number of phonemes in each sentence, (4) sentence duration, (5) the full set of syllables, and (6) the full set of phonemes. The results indicate that a sentence can be analyzed and segmented using technology-based applications, revealing its duration, syllables, phonemes, and words. The segmentation analysis follows exactly what the informant pronounces, so the informant's speech must be clearly articulated.
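PRAAT measurements like those described can also be scripted from Python through the parselmouth library, as in the hedged sketch below. The file name is a placeholder, and true syllable boundaries would still come from manual annotation; the sketch only shows how duration and an intensity contour (whose dips often mark syllable boundaries) can be obtained.

```python
import parselmouth

# Load a recorded command sentence (placeholder file name)
snd = parselmouth.Sound('command_sentence.wav')
print('Sentence duration: %.3f s' % snd.duration)

# Intensity contour; local dips often correspond to syllable boundaries
intensity = snd.to_intensity()
print('Mean intensity: %.1f dB' % intensity.values.mean())
```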

