DEVELOPMENT OF AN ALL-PURPOSE FREE PHOTOGRAMMETRIC TOOL

Author(s):  
D. González-Aguilera ◽  
L. López-Fernández ◽  
P. Rodriguez-Gonzalvez ◽  
D. Guerrero ◽  
D. Hernandez-Lopez ◽  
...  

Photogrammetry is currently facing challenges and changes related mainly to automation, ubiquitous processing and the variety of its applications. Within an ISPRS Scientific Initiative, a team of researchers from USAL, UCLM, FBK and UNIBO has developed an open photogrammetric tool called GRAPHOS (inteGRAted PHOtogrammetric Suite). GRAPHOS allows users to obtain dense, metric 3D point clouds from terrestrial and UAV images. It integrates robust photogrammetric and computer vision algorithms with the following aims: (i) increase automation, enabling the generation of dense 3D point clouds through a friendly and easy-to-use interface; (ii) increase flexibility, working with any type of image, scenario and camera; (iii) improve quality, guaranteeing high accuracy and resolution; (iv) preserve photogrammetric reliability and repeatability. Last but not least, GRAPHOS also has an educational component, reinforced with didactic explanations of the algorithms and their performance. The developments were carried out at different levels: GUI realization, image pre-processing, photogrammetric processing with weight parameters, dataset creation and system evaluation.

The paper presents in detail the developments of GRAPHOS, with all its photogrammetric components, and the evaluation analyses based on various image datasets. GRAPHOS is distributed free of charge for research and educational needs.
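Tie-point extraction between overlapping images is a core step in automated pipelines such as the one GRAPHOS integrates. As a minimal illustration (not GRAPHOS code), the sketch below matches feature descriptors between two images with the standard nearest-neighbour distance-ratio test; the descriptor arrays and the 0.8 threshold are illustrative assumptions.

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Match feature descriptors between two images using the
    nearest-neighbour distance-ratio test, a standard step in
    photogrammetric tie-point extraction."""
    matches = []
    for i, d in enumerate(desc_a):
        # Euclidean distance from descriptor d to every descriptor in image B
        dists = np.linalg.norm(desc_b - d, axis=1)
        nearest, second = np.argsort(dists)[:2]
        # Accept only unambiguous matches: the best distance must be
        # clearly smaller than the second-best distance
        if dists[nearest] < ratio * dists[second]:
            matches.append((i, int(nearest)))
    return matches
```

Ambiguous descriptors (two nearly equidistant candidates) are rejected, which keeps blunders out of the subsequent bundle adjustment.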


Author(s):  
Bisheng Yang ◽  
Yuan Liu ◽  
Fuxun Liang ◽  
Zhen Dong

High Accuracy Driving Maps (HADMs) are the core component of Intelligent Drive Assistant Systems (IDAS), which can effectively reduce traffic accidents caused by human error and provide more comfortable driving experiences. Vehicle-based mobile laser scanning (MLS) systems provide an efficient solution for rapidly capturing three-dimensional (3D) point clouds of road environments with high flexibility and precision. This paper proposes a novel method to extract road features (e.g., road surfaces, road boundaries, road markings, buildings, guardrails, street lamps, traffic signs, roadside trees, power lines and vehicles) for HADMs in highway environments. Quantitative evaluations show that the proposed algorithm attains an average precision of 90.6% and an average recall of 91.2% in extracting road features. The results demonstrate the efficiency and feasibility of the proposed method for extracting road features for HADMs.
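For reference, the reported precision and recall figures follow the standard definitions; the counts in the usage example below are purely hypothetical, chosen only to show the arithmetic.

```python
def precision_recall(true_pos, false_pos, false_neg):
    """Precision: share of extracted features that match the reference.
    Recall: share of reference features that were actually extracted."""
    precision = true_pos / (true_pos + false_pos)
    recall = true_pos / (true_pos + false_neg)
    return precision, recall

# Hypothetical counts for one feature class (e.g., road markings)
p, r = precision_recall(true_pos=90, false_pos=10, false_neg=10)  # 0.9, 0.9
```

In an evaluation like the one reported, such per-class values would be averaged over all extracted feature classes.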


Author(s):  
S. Rhee ◽  
T. Kim

3D spatial information from unmanned aerial vehicle (UAV) images is usually provided in the form of 3D point clouds. For various UAV applications, it is important to generate dense 3D point clouds automatically over the entire extent of the UAV images. In this paper, we apply image matching to generate local point clouds over a pair or group of images, and global optimization to combine the local point clouds over the whole region of interest. We applied two types of image matching, an object space-based matching technique and an image space-based matching technique, and compared their performance. The object space-based matching used here sets a list of candidate height values for a fixed horizontal position in object space; for each height, the corresponding image point is calculated and similarity is measured by grey-level correlation. The image space-based matching used here is a modified relaxation matching. We devised a global optimization scheme for finding optimal pairs (or groups) for image matching, defining local match regions in image or object space, and merging the local point clouds into a global one. For optimal pair selection, tiepoints among images were extracted and a stereo coverage network was defined by forming a maximum spanning tree from the tiepoints. From the experiments, we confirmed that 3D point clouds were generated successfully through image matching and global optimization. However, the results also revealed some limitations. In the image space-based matching results, we observed some blanks in the 3D point clouds. In the object space-based matching results, we observed more blunders than in the image space-based results, as well as noisy local height variations. We suspect these may be due to inaccurate orientation parameters. The work in this paper is still ongoing; we will further test our approach with more precise orientation parameters.
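The optimal-pair selection step, forming a maximum spanning tree over shared tiepoint counts, can be sketched with a minimal Prim-style implementation (the image indices and tiepoint counts in the example are hypothetical, not from the paper's datasets):

```python
def maximum_spanning_tree(n_images, tiepoint_counts):
    """Select stereo pairs by growing a maximum spanning tree over the
    images, where an edge weight is the number of tiepoints shared by
    an image pair (Prim's algorithm with maximisation)."""
    # tiepoint_counts: dict {(i, j): count} with i < j
    def weight(i, j):
        return tiepoint_counts.get((min(i, j), max(i, j)), 0)

    in_tree = {0}
    pairs = []
    while len(in_tree) < n_images:
        best = None
        # Pick the heaviest edge crossing from the tree to a new image
        for i in in_tree:
            for j in range(n_images):
                if j in in_tree:
                    continue
                w = weight(i, j)
                if best is None or w > best[0]:
                    best = (w, i, j)
        _, i, j = best
        pairs.append((i, j))
        in_tree.add(j)
    return pairs
```

Each returned pair has the strongest available tiepoint connectivity, which favours well-conditioned stereo geometry when the local point clouds are later merged.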


2021 ◽  
Vol 13 (16) ◽  
pp. 3152
Author(s):  
Xiangxiong Kong

Cliff monitoring is essential to stakeholders for their decision-making in maintaining a healthy coastal environment. Recently, photogrammetry-based technology has shown great success in cliff monitoring. However, many methods to date require georeferencing efforts, either by measuring the geographic coordinates of ground control points (GCPs) or by using global navigation satellite system (GNSS)-enabled unmanned aerial vehicles (UAVs), significantly increasing implementation costs. In this study, we proposed an alternative cliff monitoring methodology that does not rely on any georeferencing efforts but can still yield reliable monitoring results. To this end, we treated 3D point clouds of the cliff from different periods as geometric datasets and aligned them into the same coordinate system using a rigid registration protocol. We examined the performance of our approach through a few small-scale experiments on a rock sample as well as a full-scale field validation on a coastal cliff. The findings of this study would be particularly valuable for underserved coastal communities, where high-end GPS devices and GIS specialists may not be easily accessible resources.
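The abstract does not detail the registration protocol; a common building block for this kind of rigid alignment, given matched points between two survey epochs, is the closed-form SVD (Kabsch) solution sketched below (illustrative, not the authors' code):

```python
import numpy as np

def rigid_align(source, target):
    """Least-squares rigid transform (rotation R, translation t) mapping
    `source` points onto `target` points with known one-to-one
    correspondences, via the SVD-based Kabsch solution."""
    mu_s, mu_t = source.mean(axis=0), target.mean(axis=0)
    # Cross-covariance of the centred point sets
    H = (source - mu_s).T @ (target - mu_t)
    U, _, Vt = np.linalg.svd(H)
    # Reflection guard: force det(R) = +1 so R is a proper rotation
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0] * (source.shape[1] - 1) + [d])
    R = Vt.T @ D @ U.T
    t = mu_t - R @ mu_s
    return R, t
```

Because only the relative transform between epochs is estimated, no geographic coordinates are needed: both clouds end up in a shared local frame, which is exactly what change detection requires.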


2019 ◽  
Vol 11 (9) ◽  
pp. 1040 ◽  
Author(s):  
Haiqing He ◽  
Junchao Zhou ◽  
Min Chen ◽  
Ting Chen ◽  
Dajun Li ◽  
...  

Automatic building extraction using a single data type, either 2D remotely sensed images or light detection and ranging (LiDAR) 3D point clouds, remains insufficient to accurately delineate building outlines for automatic mapping, despite active research in this area and the significant progress achieved in the past decade. This paper presents an effective approach to extracting buildings from Unmanned Aerial Vehicle (UAV) images through the incorporation of superpixel segmentation and semantic recognition. A framework for building extraction is constructed by jointly using an improved Simple Linear Iterative Clustering (SLIC) algorithm and Multiscale Siamese Convolutional Networks (MSCNs). The SLIC algorithm, improved by additionally imposing a digital surface model for superpixel segmentation (namely 6D-SLIC), is suited to building boundary detection when buildings and image backgrounds have similar radiometric signatures. The proposed MSCNs, comprising a feature learning network and a binary decision network, are used to automatically learn a multiscale hierarchical feature representation and detect building objects against various complex backgrounds. In addition, a gamma-transform green leaf index is proposed to truncate vegetation superpixels before further processing, improving the robustness and efficiency of building detection, and the Douglas–Peucker algorithm and iterative optimization are used to eliminate the jagged details generated from small structures as a result of superpixel segmentation. In the experiments, UAV datasets including many buildings in urban and rural areas with irregular shapes and different heights, some obscured by trees, are collected to evaluate the proposed method. The experimental results, based on qualitative and quantitative measures, confirm the effectiveness and high accuracy of the proposed framework relative to the digitized results. The proposed framework performs better than state-of-the-art building extraction methods, given its higher values of recall, precision, and Intersection over Union (IoU).


Author(s):  
E. Maset ◽  
A. Fusiello ◽  
F. Crosilla ◽  
R. Toldo ◽  
D. Zorzetto

This paper addresses the problem of 3D building reconstruction from thermal infrared (TIR) images. We show that a commercial computer vision software package can be used to automatically orient sequences of TIR images taken from an Unmanned Aerial Vehicle (UAV) and to generate 3D point clouds, without requiring any GNSS/INS data about the position and attitude of the images, nor camera calibration parameters. Moreover, we propose a procedure based on the Iterative Closest Point (ICP) algorithm to create a model that combines the high resolution and geometric accuracy of RGB images with the thermal information deriving from TIR images. The process can be carried out entirely by the aforementioned software in a simple and efficient way.
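The ICP-based alignment of the RGB and TIR point clouds can be illustrated with a minimal point-to-point ICP loop. This is a sketch with brute-force nearest neighbours, not the production procedure used by the software; real implementations add k-d tree search, outlier rejection and convergence tests.

```python
import numpy as np

def icp(source, target, iters=20):
    """Minimal point-to-point ICP: repeatedly pair each source point with
    its nearest target point, then solve the best rigid transform for
    those pairs with the closed-form SVD (Kabsch) step."""
    src = source.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iters):
        # Nearest-neighbour correspondences (brute force)
        d = np.linalg.norm(src[:, None, :] - target[None, :, :], axis=2)
        matched = target[np.argmin(d, axis=1)]
        # Closed-form rigid update for the current correspondences
        mu_s, mu_m = src.mean(0), matched.mean(0)
        U, _, Vt = np.linalg.svd((src - mu_s).T @ (matched - mu_m))
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        t = mu_m - R @ mu_s
        src = src @ R.T + t
        # Compose with the accumulated transform
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```

ICP converges reliably only from a reasonable initial alignment, which is why the two models are first oriented in a common frame before the fine registration step.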


2019 ◽  
Vol 31 (6) ◽  
pp. 844-850 ◽  
Author(s):  
Kevin T. Huang ◽  
Michael A. Silva ◽  
Alfred P. See ◽  
Kyle C. Wu ◽  
Troy Gallerani ◽  
...  

OBJECTIVE Recent advances in computer vision have revolutionized many aspects of society but have yet to find significant penetrance in neurosurgery. One proposed use for this technology is to aid in the identification of implanted spinal hardware. In revision operations, knowing the manufacturer and model of previously implanted fusion systems upfront can facilitate a faster and safer procedure, but this information is frequently unavailable or incomplete. The authors present one approach for the automated, high-accuracy classification of anterior cervical hardware fusion systems using computer vision.

METHODS Patient records were searched for those who underwent anterior-posterior (AP) cervical radiography following anterior cervical discectomy and fusion (ACDF) at the authors' institution over a 10-year period (2008–2018). These images were cropped and windowed to include just the cervical plating system, then labeled with the appropriate manufacturer and system according to the operative record. A computer vision classifier was constructed using the bag-of-visual-words technique and KAZE feature detection. Accuracy and validity were tested using an 80%/20% training/testing pseudorandom split over 100 iterations.

RESULTS A total of 321 images were isolated, containing 9 different ACDF systems from 5 different companies. The correct system was identified as the top choice in 91.5% ± 3.8% of the cases, and as one of the top 2 or top 3 choices in 97.1% ± 2.0% and 98.4% ± 1.3% of the cases, respectively. Performance persisted despite the inclusion of variable sizes of hardware (i.e., 1-level, 2-level, and 3-level plates). Stratification by the size of hardware did not improve performance.

CONCLUSIONS A computer vision algorithm was trained to classify at least 9 different types of anterior cervical fusion systems using relatively sparse data sets and was demonstrated to perform with high accuracy. This represents one of many potential clinical applications of machine learning and computer vision in neurosurgical practice.
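The bag-of-visual-words step described above can be sketched in a few lines: local descriptors (which a detector such as KAZE would supply) are quantised against a learned visual vocabulary, and the resulting word histogram is compared to per-class reference histograms. All arrays and the nearest-histogram classifier below are hypothetical; the study's actual classifier details are not given in the abstract.

```python
import numpy as np

def bovw_histogram(descriptors, vocabulary):
    """Quantise local feature descriptors against a visual vocabulary
    and return a normalised word-frequency histogram."""
    d = np.linalg.norm(descriptors[:, None, :] - vocabulary[None, :, :], axis=2)
    words = np.argmin(d, axis=1)          # nearest visual word per descriptor
    hist = np.bincount(words, minlength=len(vocabulary)).astype(float)
    return hist / hist.sum()

def classify(descriptors, vocabulary, class_histograms):
    """Assign the class whose reference histogram is closest to the
    image's bag-of-visual-words histogram (nearest-neighbour rule)."""
    h = bovw_histogram(descriptors, vocabulary)
    dists = {c: np.linalg.norm(h - ref) for c, ref in class_histograms.items()}
    return min(dists, key=dists.get)
```

Because the histogram discards descriptor positions, the representation is robust to the exact placement of the plate in the cropped radiograph, which suits hardware of variable size.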

