A FEASIBILITY STUDY ON USING ViSP’S 3D MODEL-BASED TRACKER FOR UAV POSE ESTIMATION IN OUTDOOR ENVIRONMENTS

Author(s):  
J. Li-Chee-Ming ◽  
C. Armenakis

This paper presents a novel application of the Visual Servoing Platform (ViSP) to small UAV pose estimation in outdoor environments. Given an initial approximation of the camera position and orientation, or camera pose, ViSP automatically establishes and continuously tracks corresponding features between an image sequence and a 3D wireframe model of the environment. As ViSP has been demonstrated to perform well in small and cluttered indoor environments, this paper explores its application to UAV mapping of outdoor landscapes and tracking of large objects (e.g., building models). The presented experiments demonstrate the data obtainable by the UAV, assess ViSP’s data processing strategies, and evaluate the performance of the tracker.
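ViSP’s model-based tracker maintains the camera pose by iteratively minimising the reprojection error between model features and image measurements. The following is a minimal sketch of that core idea only, not ViSP’s actual edge-based implementation: a Gauss-Newton refinement of a 6-DoF pose from known 3D-2D point correspondences (all function names and the pinhole parameters fx, fy, cx, cy are illustrative):

```python
import numpy as np

def rodrigues(w):
    """Axis-angle vector -> rotation matrix (Rodrigues' formula)."""
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3)
    k = w / theta
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * K @ K

def project(X, R, t, fx, fy, cx, cy):
    """Pinhole projection of Nx3 model points under pose (R, t)."""
    Xc = X @ R.T + t                       # model points in the camera frame
    u = fx * Xc[:, 0] / Xc[:, 2] + cx
    v = fy * Xc[:, 1] / Xc[:, 2] + cy
    return np.stack([u, v], axis=1), Xc

def refine_pose(X, obs, R, t, fx, fy, cx, cy, iters=20):
    """Gauss-Newton minimisation of the reprojection error over se(3)."""
    for _ in range(iters):
        proj, Xc = project(X, R, t, fx, fy, cx, cy)
        r = (obs - proj).ravel()           # stacked residuals (2N,)
        J = np.zeros((2 * len(X), 6))      # columns: d(translation), d(rotation)
        for i, (x, y, z) in enumerate(Xc):
            J[2 * i] = [fx / z, 0, -fx * x / z**2,
                        -fx * x * y / z**2, fx * (1 + x**2 / z**2), -fx * y / z]
            J[2 * i + 1] = [0, fy / z, -fy * y / z**2,
                            -fy * (1 + y**2 / z**2), fy * x * y / z**2, fy * x / z]
        delta = np.linalg.solve(J.T @ J, J.T @ r)
        dR = rodrigues(delta[3:])          # left-multiplied incremental update
        R, t = dR @ R, dR @ t + delta[:3]
    return R, t
```

Given a reasonable initial pose (as the abstract requires), a few iterations suffice; the same scheme underlies edge-based trackers, with point residuals replaced by point-to-edge distances.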

Author(s):  
J. Li-Chee-Ming ◽  
C. Armenakis

This paper presents a novel application of the Visual Servoing Platform (ViSP) for pose estimation in indoor and GPS-denied outdoor environments. Our proposed solution integrates the trajectory solution from RGB-D SLAM into ViSP’s pose estimation process. Li-Chee-Ming and Armenakis (2015) explored the application of ViSP to mapping large outdoor environments and tracking larger objects (e.g., building models). Their experiments revealed that tracking was often lost due to a lack of model features in the camera’s field of view, and also because of rapid camera motion. Further, the pose estimate was often biased by incorrect feature matches. This work proposes to improve ViSP’s pose estimation performance by integrating it with RGB-D SLAM, aiming specifically to reduce the frequency of tracking losses and the biases in the pose estimate. We discuss the performance of the combined tracker in mapping indoor environments and tracking 3D wireframe indoor building models, and present preliminary results from our experiments.
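One simple way to realise such an integration is to gate the model-based tracker with the SLAM trajectory: when the tracker’s residual indicates a loss or bias, the pose is predicted by chaining SLAM’s relative motion onto the last trusted tracker pose. The sketch below assumes both systems report poses as 4x4 homogeneous matrices and that the tracker exposes a scalar residual; these interfaces and the threshold are hypothetical, not ViSP’s or RGB-D SLAM’s actual APIs:

```python
import numpy as np

def relative(T_a, T_b):
    """Rigid motion taking frame a to frame b (4x4 homogeneous matrices)."""
    return np.linalg.inv(T_a) @ T_b

class FusedTracker:
    """Gate a model-based tracker with an RGB-D SLAM trajectory.

    When the model tracker's residual exceeds max_residual (tracking
    lost or biased), the pose is predicted by propagating the last
    trusted model-tracker pose with SLAM's relative motion since then.
    """
    def __init__(self, max_residual):
        self.max_residual = max_residual
        self.T_good = None        # last trusted model-tracker pose
        self.T_slam_good = None   # SLAM pose at that same instant

    def update(self, T_tracker, residual, T_slam):
        if residual <= self.max_residual or self.T_good is None:
            self.T_good, self.T_slam_good = T_tracker, T_slam
            return T_tracker
        # tracker unreliable: reuse SLAM's relative motion instead
        return self.T_good @ relative(self.T_slam_good, T_slam)
```

The predicted pose can also serve to re-initialise the model tracker, which addresses the tracking losses reported in the earlier experiments.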



Author(s):  
J. Li-Chee-Ming ◽  
C. Armenakis

This paper presents the ongoing development of a small unmanned aerial mapping system (sUAMS) that in the future will track its trajectory and perform 3D mapping in near-real time. As both mapping and tracking algorithms require powerful computational capabilities and large data storage facilities, we propose to use the RoboEarth Cloud Engine (RCE) to offload heavy computation and to store data in secure computing environments in the cloud. While the RCE's capabilities have been demonstrated with terrestrial robots in indoor environments, this paper explores the feasibility of using the RCE in mapping and tracking applications in outdoor environments by small UAMS.

The experiments presented in this work assess the data processing strategies and evaluate the attainable tracking and mapping accuracies using the data obtained by the sUAMS. Testing was performed with an Aeryon Scout quadcopter. It flew over York University, up to approximately 40 metres above the ground. The quadcopter was equipped with a single-frequency GPS receiver providing positioning to about 3 metre accuracy, an AHRS (Attitude and Heading Reference System) estimating the attitude to about 3 degrees, and an FPV (First Person Viewing) camera. Video images captured by the onboard camera were processed using VisualSFM and SURE, which are being restructured as an Application-as-a-Service via the RCE. The 3D virtual building model of York University was used as a known environment to georeference the point cloud generated from the sUAMS' sensor data. The estimated position and orientation parameters of the video camera show increased accuracy when compared to the sUAMS' autopilot solution derived from the onboard GPS and AHRS. The paper presents the proposed approach and the results, along with their accuracies.
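Georeferencing a photogrammetric point cloud against a known building model amounts to estimating a similarity transform from corresponding points. A minimal sketch using the closed-form Umeyama/Horn solution (the correspondences are assumed to be given; this illustrates the general technique, not the pipeline's actual implementation):

```python
import numpy as np

def umeyama(P, Q):
    """Least-squares similarity transform (s, R, t) with  Q ~ s*R*P + t.

    P, Q : (n, 3) corresponding point sets, e.g. SfM point-cloud points
           and matching vertices of the georeferenced building model.
    """
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    Pc, Qc = P - cp, Q - cq
    C = Qc.T @ Pc / len(P)                     # cross-covariance matrix
    U, S, Vt = np.linalg.svd(C)
    D = np.eye(3)
    D[2, 2] = np.sign(np.linalg.det(U @ Vt))   # guard against reflections
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / (Pc ** 2).sum(axis=1).mean()
    t = cq - s * R @ cp
    return s, R, t
```

In practice the correspondences would come from matching recognisable model features (e.g. building corners) in the point cloud, followed by a robust wrapper such as RANSAC around this closed-form step.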


Author(s):  
Zhenni Wu ◽  
Hengxin Chen ◽  
Bin Fang ◽  
Zihao Li ◽  
Xinrun Chen

With the rapid development of computer technology, building pose estimation combined with Augmented Reality (AR) can play a crucial role in urban planning and architectural design. For example, a virtual building model can be placed into a real scene acquired by an Unmanned Aerial Vehicle (UAV) to visually assess whether the building integrates well with its surroundings, thus optimizing the design of the building. In this work, we contribute a building dataset for pose estimation named BD3D. To obtain accurate building poses, we use a physical camera in Unity3D, which simulates realistic cameras, to reproduce the UAV's perspective, and use virtual building models as objects. We propose a novel neural network that combines the MultiBin module with the PoseNet architecture to estimate the building pose. Buildings are often symmetric, and the resulting ambiguity causes different surfaces to have similar features, making it difficult for CNNs to learn discriminative features between surfaces. We propose a generalized world coordinate system repositioning strategy to address this. We evaluate our network with this strategy on BD3D; the angle error is reduced to [Formula: see text] from [Formula: see text]. Code and dataset have been made available at: https://github.com/JellyFive/Building-pose-estimation-from-the-perspective-of-UAVs-based-on-CNNs .
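The MultiBin idea referenced above discretises the orientation into bins and regresses a residual angle within each; the network classifies the bin and the final angle is the chosen bin's centre plus its regressed residual. A minimal sketch of the encoding and decoding (non-overlapping bins for brevity; the network outputs `conf`, `sin_r`, `cos_r` are assumptions about the head layout, not the paper's exact architecture):

```python
import numpy as np

def bin_centers(n_bins):
    """Centres of n equal orientation bins covering [0, 2*pi)."""
    return (np.arange(n_bins) + 0.5) * 2 * np.pi / n_bins

def encode(theta, n_bins):
    """Training target: index of the containing bin and the
    (sin, cos) of the residual angle to that bin's centre."""
    idx = int(theta // (2 * np.pi / n_bins))
    r = theta - bin_centers(n_bins)[idx]
    return idx, np.sin(r), np.cos(r)

def decode(conf, sin_r, cos_r):
    """Recover the angle from per-bin confidences and residuals."""
    idx = int(np.argmax(conf))                 # pick the most confident bin
    centers = bin_centers(len(conf))
    return (centers[idx] + np.arctan2(sin_r[idx], cos_r[idx])) % (2 * np.pi)
```

Regressing (sin, cos) rather than the raw residual keeps the loss smooth across the angle wrap-around, which is the usual motivation for the MultiBin head.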


Sensors ◽  
2017 ◽  
Vol 17 (11) ◽  
pp. 2469 ◽  
Author(s):  
Gianluca Gennarelli ◽  
Obada Al Khatib ◽  
Francesco Soldovieri

Author(s):  
K. Chaidas ◽  
G. Tataris ◽  
N. Soulakellis

Abstract. In recent years, 3D building modelling techniques have been commonly used in various domains such as navigation, urban planning and disaster management, mostly confined to visualization purposes. The 3D building models are produced at various Levels of Detail (LOD) in the CityGML standard, which not only visualizes complex urban environments but also allows queries and analysis. The aim of this paper is to present the methodology and results of a comparison between two scenarios of LOD2 building models, generated from UAS data acquired in two flight campaigns at different altitudes. The study was applied in the Vrisa traditional settlement, Lesvos island, Greece, which was affected by a devastating earthquake of Mw = 6.3 on 12th June 2017. Specifically, the two scenarios were created from the results of two flight campaigns: i) on 12th January 2020 with a flying altitude of 100 m and ii) on 4th February 2020 with a flying altitude of 40 m, both with a nadir camera position. The LOD2 buildings were generated for a part of the Vrisa settlement consisting of 80 buildings, using the building footprints, Digital Surface Models (DSMs), a Digital Elevation Model (DEM) and orthophoto maps of the area. Afterwards, the LOD2 buildings of the two scenarios were compared in terms of their volumes and heights. Subsequently, the heights of the LOD2 buildings were compared with the heights of the respective terrestrial laser scanner (TLS) models. Additionally, the roofs of the LOD2 buildings were evaluated through visual inspection. The results showed that 65 of the 80 LOD2 buildings were generated accurately, in terms of heights and roof types, for the first scenario, and 64 for the second. Finally, the comparison proved that post-earthquake LOD2 buildings can be generated from appropriate UAS data acquired at a flying altitude of 100 m, and that the results are not significantly affected by a lower flying altitude.
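The height comparison described above rests on the normalised DSM (DSM minus DEM) evaluated inside each building footprint. A minimal sketch of that step; the robust percentile choice is an illustrative assumption, not the paper's stated method:

```python
import numpy as np

def building_height(dsm, dem, footprint, percentile=90):
    """Estimate a building's height from raster elevation data.

    dsm, dem  : 2-D arrays of surface / terrain elevations (m)
    footprint : boolean mask of the building outline on the same grid
    The height is taken as a high percentile of DSM - DEM inside the
    footprint; a percentile (rather than the maximum) makes the value
    robust to spikes in the photogrammetric surface model.
    """
    ndsm = dsm - dem                    # normalised DSM: height above ground
    return float(np.percentile(ndsm[footprint], percentile))
```

The same per-building value can then be differenced against the TLS-derived height to produce the accuracy figures reported above.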


2014 ◽  
Vol 2014 ◽  
pp. 1-23 ◽  
Author(s):  
Francisco Amorós ◽  
Luis Payá ◽  
Oscar Reinoso ◽  
Walterio Mayol-Cuevas ◽  
Andrew Calway

In this work we present a topological map building and localization system for mobile robots based on the global appearance of visual information. We include a comparison and analysis of global-appearance techniques applied to wide-angle scenes in retrieval tasks. Next, we define a multiscale analysis that improves the association between images and permits extracting topological distances. Then, a topological map-building algorithm is proposed. At first, the algorithm has information only of some isolated positions of the navigation area, in the form of nodes. Each node is composed of a collection of images that covers the complete field of view from a certain position. The algorithm solves the node retrieval and estimates the nodes' spatial arrangement. To these ends, it uses the visual information captured along some routes that cover the navigation area. As a result, the algorithm builds a graph (the map) that reflects the distribution of the nodes and the adjacency relations between them. After the map building, we also propose a route path estimation system. This algorithm takes advantage of the multiscale analysis: the pose estimation is not limited to the node locations but extends to intermediate positions between them. The algorithms have been tested using two different databases captured in real indoor environments under dynamic conditions.
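At its core, topological map building of this kind reduces to thresholding pairwise distances between global-appearance descriptors into an adjacency graph, with localization as nearest-descriptor retrieval. A deliberately simplified sketch, in which plain Euclidean distance stands in for the paper's multiscale global-appearance measure:

```python
import numpy as np

def build_topological_map(descriptors, threshold):
    """Adjacency graph over map nodes from global-appearance descriptors.

    descriptors : (n_nodes, d) array, one global descriptor per node
    Two nodes are connected when the distance between their descriptors
    falls below threshold, i.e. when their views look similar enough to
    be considered adjacent in the environment.
    """
    d = np.linalg.norm(descriptors[:, None, :] - descriptors[None, :, :], axis=2)
    return (d < threshold) & ~np.eye(len(descriptors), dtype=bool)

def localize(query, descriptors):
    """Return the index of the map node most similar to the query view."""
    return int(np.argmin(np.linalg.norm(descriptors - query, axis=1)))
```

The multiscale analysis in the paper refines exactly this retrieval step, turning the discrete nearest-node answer into an estimate between nodes.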


Sensors ◽  
2019 ◽  
Vol 19 (23) ◽  
pp. 5085 ◽  
Author(s):  
Brown

Most human energy budget models consider a person to be approximately cylindrical in shape when estimating or measuring the amount of radiation that they receive in a given environment. Yet the most commonly used instrument for measuring the amount of radiation received by a person is the globe thermometer. The spherical shape of this instrument was designed for indoor use, where radiation is received approximately equally from all directions. In outdoor environments, however, radiation can be strongly directional, making the sphere an inappropriate shape. The international standard for measuring radiation received by a person, the Integral Radiation Measurement (IRM) method, yields a measure of the Mean Radiant Temperature (Tmrt). This method uses radiometers oriented in the four cardinal directions, plus up and down. However, this setup essentially estimates the amount of energy received by a square peg, not a cylinder. This paper identifies the errors introduced by both the sphere and the peg, and introduces two new instruments that can be used to directly measure the amount of radiation received by a vertical cylinder in outdoor environments. The Cylindrical Pyranometer measures the amount of solar radiation received by a vertical cylinder, and the Cylindrical Pyrgeometer measures the amount of terrestrial radiation received. While the globe thermometer remains valid for use in indoor environments, these two new instruments should become the standard for measuring radiation received by people in outdoor environments.
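The IRM method referenced above combines the six directional flux measurements into a mean radiant flux density and inverts the Stefan-Boltzmann law. A sketch following the widely used VDI 3787 / Höppe formulation; the coefficient values are the common defaults for a standing person, not necessarily those used in this paper:

```python
import numpy as np

SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def mean_radiant_temperature(K, L, a_k=0.7, a_l=0.97,
                             weights=(0.22, 0.22, 0.22, 0.22, 0.06, 0.06)):
    """Tmrt (deg C) from six-directional IRM radiometer measurements.

    K, L     : short- and long-wave flux densities (W m^-2) from the six
               radiometers (N, E, S, W, up, down)
    a_k, a_l : absorption coefficients of the human body for short- and
               long-wave radiation
    weights  : angular factors for a standing person (0.22 lateral,
               0.06 vertical), summing to 1
    """
    w = np.asarray(weights)
    # absorbed mean radiant flux density
    s_str = np.sum(w * (a_k * np.asarray(K) + a_l * np.asarray(L)))
    # invert the Stefan-Boltzmann law and convert K -> deg C
    return (s_str / (a_l * SIGMA)) ** 0.25 - 273.15
```

The lateral weights 0.22 are what encode the "square peg" geometry the paper criticises; a cylindrical instrument replaces this discrete weighting with a continuous azimuthal average.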


Author(s):  
Yang Chen ◽  
Hanmo Zhang ◽  
Shaoxiong Tian ◽  
Changxin Gao