Addressing pose estimation issues for machine vision based UAV autonomous serial refuelling

2007
Vol 111 (1120)
pp. 389-396
Author(s):
G. Campa
M. R. Napolitano
M. Perhinschi
M. L. Fravolini
L. Pollini
...  

Abstract This paper describes the results of an analysis of the performance of specific ‘pose estimation’ algorithms within a machine vision-based approach to the problem of aerial refuelling for unmanned aerial vehicles. The approach assumes the availability of a camera on the unmanned aircraft for acquiring images of the refuelling tanker; it also assumes that a number of active or passive light sources – the ‘markers’ – are installed at specific known locations on the tanker. A sequence of machine vision algorithms on the on-board computer of the unmanned aircraft is tasked with processing the images of the tanker. Specifically, detection and labelling algorithms are used to detect and identify the markers, and a ‘pose estimation’ algorithm is used to estimate the relative position and orientation between the two aircraft. Detailed closed-loop simulation studies have been performed to compare the performance of two ‘pose estimation’ algorithms within a simulation environment that was specifically developed for the study of aerial refuelling problems. Special emphasis is placed on the analysis of the required computational effort as well as on the accuracy and the error-propagation characteristics of the two methods. The general trade-offs involved in the selection of the pose estimation algorithm are discussed. Finally, simulation results are presented and analysed.
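The pose estimation step described above amounts to solving a perspective-n-point (PnP) problem from the detected marker correspondences. The following is a minimal, self-contained sketch of that step, not the specific algorithms compared in the paper: the marker layout, camera intrinsics and "true" relative pose are made-up values, and OpenCV's iterative PnP solver stands in for the evaluated estimators.

```python
# A minimal sketch of the marker-based pose estimation step (not the specific
# algorithms compared in the paper). Marker layout, camera intrinsics and the
# simulated relative pose below are illustrative assumptions.
import numpy as np
import cv2

# Hypothetical marker locations on the tanker, in the tanker body frame (m).
markers_3d = np.array([[0, -5, 0], [0, 5, 0], [4, -3, 1],
                       [4, 3, 1], [8, -1, 0.5], [8, 1, 2.0]], dtype=np.float64)

# Assumed pinhole intrinsics of the UAV camera (calibrated, no distortion).
K = np.array([[800, 0, 360], [0, 800, 240], [0, 0, 1]], dtype=np.float64)
dist = np.zeros(5)

# Simulate the output of the detection/labelling stages by projecting the
# markers with a known relative pose (tanker ~40 m ahead, slight yaw offset).
rvec_true = np.array([0.0, 0.05, 0.0])
tvec_true = np.array([0.0, 2.0, 40.0])
markers_2d, _ = cv2.projectPoints(markers_3d, rvec_true, tvec_true, K, dist)

# Pose estimation step: recover relative pose from the 2D-3D correspondences.
ok, rvec_est, tvec_est = cv2.solvePnP(markers_3d, markers_2d, K, dist,
                                      flags=cv2.SOLVEPNP_ITERATIVE)
R_est, _ = cv2.Rodrigues(rvec_est)   # relative orientation as a rotation matrix
print("estimated relative position (m):", tvec_est.ravel())
print("estimated relative rotation:\n", R_est)
```

In the closed-loop setting described above, estimates like these would be fed to the docking controller at every frame, which is why the per-frame computational cost and error-propagation behaviour of the chosen estimator are the quantities under study.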


2019
Vol 9 (12)
pp. 2478
Author(s):
Jui-Yuan Su
Shyi-Chyi Cheng
Chin-Chun Chang
Jing-Ming Chen

This paper presents a model-based approach for 3D pose estimation from a single RGB image, used to keep a 3D scene model up to date with a low-cost camera. A pre-learned image model of the target scene is first reconstructed from a training RGB-D video. Next, the model is analyzed using the proposed multiple principal analysis to label the viewpoint class of each training RGB image and to construct a dataset for training a deep learning viewpoint classification neural network (DVCNN). For all training images in a viewpoint class, the DVCNN estimates their membership probabilities and defines the template of the class as the image with the highest probability. To reconstruct the scene in 3D using a camera, a pose estimation algorithm then uses the template information to estimate the pose parameters and depth map of a single RGB image captured by navigating the camera to a specific viewpoint. The pose estimation algorithm is therefore the key to keeping the 3D scene up to date. Compared with conventional pose estimation algorithms, which rely on sparse features, our approach enhances the quality of the reconstructed 3D scene point cloud through template-to-frame registration. Finally, we verify the ability of the established reconstruction system on publicly available benchmark datasets and compare it with state-of-the-art pose estimation algorithms. The results indicate that our approach outperforms the compared methods in terms of pose estimation accuracy.
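For readers unfamiliar with template-to-frame registration, the sketch below shows one plausible way to implement that step when the template carries both an RGB image and a depth map: sparse ORB features establish 2D-3D correspondences by back-projecting template keypoints through the depth map, and a RANSAC PnP solve yields the frame's pose in the template (model) coordinate frame. This is an illustrative stand-in, not the paper's implementation; the function name and inputs are assumptions, and the DVCNN viewpoint-classification stage is not shown.

```python
# A hedged sketch of template-to-frame registration using a template RGB image
# plus an aligned depth map. Not the paper's API; names are illustrative.
import numpy as np
import cv2

def register_frame_to_template(template_gray, template_depth, frame_gray, K):
    """Estimate the pose of `frame_gray` relative to the template's camera.

    Expects 8-bit grayscale images, a depth map in metres aligned with the
    template image, and the 3x3 camera intrinsic matrix K.
    """
    orb = cv2.ORB_create(2000)
    kp_t, des_t = orb.detectAndCompute(template_gray, None)
    kp_f, des_f = orb.detectAndCompute(frame_gray, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des_t, des_f)

    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    obj_pts, img_pts = [], []
    for m in matches:
        u, v = kp_t[m.queryIdx].pt
        z = float(template_depth[int(v), int(u)])
        if z <= 0:                      # skip pixels with missing depth
            continue
        # Back-project the template keypoint to 3D using the depth map.
        obj_pts.append([(u - cx) * z / fx, (v - cy) * z / fy, z])
        img_pts.append(kp_f[m.trainIdx].pt)

    if len(obj_pts) < 4:                # PnP needs at least four correspondences
        return False, None, None
    ok, rvec, tvec, _ = cv2.solvePnPRansac(np.float64(obj_pts),
                                           np.float64(img_pts), K, None)
    return ok, rvec, tvec
```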


Sensors
2021
Vol 21 (8)
pp. 2889
Author(s):
Laurie Needham
Murray Evans
Darren P. Cosker
Steffi L. Colyer

The ability to accurately and non-invasively measure 3D mass centre positions and their derivatives can provide rich insight into the physical demands of sports training and competition. This study examines a method for non-invasively measuring mass centre velocities using markerless human pose estimation and Kalman smoothing. Marker (Qualisys) and markerless (OpenPose) motion capture data were captured synchronously for sprinting and skeleton push starts. Mass centre positions and velocities derived from raw markerless pose estimation data contained large errors for both sprinting and skeleton pushing (mean ± SD = 0.127 ± 0.943 and −0.197 ± 1.549 m·s−1, respectively). Signal processing methods such as Kalman smoothing substantially reduced the mean error (±SD) in horizontal mass centre velocities (0.041 ± 0.257 m·s−1) during sprinting, but the precision remained poor. Applying pose estimation to activities which exhibit unusual body poses (e.g., skeleton pushing) appears to elicit more erroneous results due to poor performance of the pose estimation algorithm. Researchers and practitioners should apply these methods with caution to activities beyond sprinting, as pose estimation algorithms may not generalise well to the activity of interest. Retraining the model using activity-specific data to produce more specialised networks is therefore recommended.
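As a concrete illustration of the smoothing stage, the sketch below runs noisy centre-of-mass positions through a constant-velocity Kalman filter followed by a Rauch-Tung-Striebel smoothing pass to recover horizontal velocity. The sampling rate, noise levels and synthetic data are assumptions for the example, not the study's actual processing parameters.

```python
# Minimal constant-velocity Kalman filter + RTS smoother sketch for noisy
# mass-centre positions; all numerical settings are illustrative assumptions.
import numpy as np

dt = 1.0 / 200.0                        # assumed 200 Hz capture rate
F = np.array([[1.0, dt], [0.0, 1.0]])   # state: [position, velocity]
H = np.array([[1.0, 0.0]])              # only position is observed
Q = np.array([[dt**4 / 4, dt**3 / 2],
              [dt**3 / 2, dt**2]]) * 5.0    # process noise (tuning assumption)
R = np.array([[0.02**2]])               # ~2 cm measurement noise (assumption)

# Synthetic 'markerless' positions: a constant 8 m/s sprint plus noise.
rng = np.random.default_rng(0)
t = np.arange(0.0, 2.0, dt)
z = 8.0 * t + rng.normal(0.0, 0.02, t.size)

# Forward Kalman filter pass.
x, P = np.zeros(2), np.eye(2)
xf, Pf, xp, Pp = [], [], [], []         # filtered and predicted estimates
for zk in z:
    x_pred, P_pred = F @ x, F @ P @ F.T + Q
    K_gain = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)
    x = x_pred + K_gain @ (np.array([zk]) - H @ x_pred)
    P = (np.eye(2) - K_gain @ H) @ P_pred
    xf.append(x); Pf.append(P); xp.append(x_pred); Pp.append(P_pred)

# Backward Rauch-Tung-Striebel smoothing pass.
xs, Ps = [xf[-1]], [Pf[-1]]
for k in range(len(z) - 2, -1, -1):
    G = Pf[k] @ F.T @ np.linalg.inv(Pp[k + 1])
    xs.insert(0, xf[k] + G @ (xs[0] - xp[k + 1]))
    Ps.insert(0, Pf[k] + G @ (Ps[0] - Pp[k + 1]) @ G.T)

print("smoothed horizontal velocity at the last sample: %.2f m/s" % xs[-1][1])
```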


2015
Vol 32 (1)
pp. 97-115
Author(s):
Jack Elston
Brian Argrow
Maciej Stachura
Doug Weibel
Dale Lawrence
...  

Abstract Sampling the atmospheric boundary layer with small unmanned aircraft is a difficult task requiring informed selection of sensors and algorithms that are suited to the particular platform and mission. Many factors must be considered during the design process to ensure the desired measurement accuracy and resolution are achieved, as is demonstrated through an examination of previous and current efforts. A taxonomy is developed from these approaches and is used to guide a review of the systems that have been employed to make in situ wind and thermodynamic measurements, along with the campaigns that have employed them. Details about the airframe parameters, estimation algorithms, sensors, and calibration methods are given.


Sensors
2019
Vol 19 (6)
pp. 1479
Author(s):
Ren Jin
Jiaqi Jiang
Yuhua Qi
Defu Lin
Tao Song

With the upsurge in the use of Unmanned Aerial Vehicles (UAVs), drone detection and pose estimation using optical sensors have become important research subjects in cooperative flight and low-altitude security. Existing technology only obtains the position of the target UAV, based on object detection methods. To achieve better adaptability and enhanced cooperative performance, the attitude information of the target drone becomes a key message for understanding its state and intention, e.g., the acceleration of a quadrotor. At present, most 6D object pose estimation algorithms depend on accurate pose annotations or a 3D model of the target, which requires substantial human effort and is difficult to apply to non-cooperative targets. To overcome these problems, a quadrotor 6D pose estimation algorithm is proposed in this paper. It is based on keypoint detection (requiring only keypoint annotations), a relational graph network and the perspective-n-point (PnP) algorithm, and it achieves state-of-the-art performance in both simulated and real scenarios. In addition, the ability of the relational graph network to infer the keypoints of the four motors is evaluated. Accuracy and speed are improved significantly compared with the state-of-the-art keypoint detection algorithm.
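One reason attitude matters for inferring intent is that, for a quadrotor, tilt largely determines translational acceleration. The snippet below illustrates this with a simplified near-hover model (thrust along the body z-axis, vertical thrust component balancing gravity); it is a textbook approximation used here for intuition, not part of the proposed algorithm.

```python
# Illustration only: approximate acceleration implied by an estimated attitude
# under a simplified near-hover quadrotor model.
import numpy as np

def horizontal_acceleration(R_body_to_world, g=9.81):
    """Approximate translational acceleration (m/s^2) implied by attitude."""
    e3 = np.array([0.0, 0.0, 1.0])
    b3 = R_body_to_world @ e3           # thrust direction in the world frame
    thrust_over_mass = g / b3[2]        # assume vertical thrust cancels gravity
    return thrust_over_mass * b3 - g * e3

# Example: about 10 degrees of pitch implies roughly 1.7 m/s^2 of
# forward acceleration.
pitch = np.radians(10.0)
R_est = np.array([[np.cos(pitch), 0.0, np.sin(pitch)],
                  [0.0, 1.0, 0.0],
                  [-np.sin(pitch), 0.0, np.cos(pitch)]])
print(horizontal_acceleration(R_est))
```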


2019
Vol 1 (7)
pp. 19-23
Author(s):
S. I. Surkichin
N. V. Gryazeva
L. S. Kholupova
N. V. Bochkova

The article provides an overview of the use of photodynamic therapy for photodamage of the skin. The causes, pathogenesis and clinical manifestations of skin photodamage are considered. The definition and principle of action of photodynamic therapy are given, including the light sources used, the classification of photosensitizers and their main characteristics. Studies are analyzed that demonstrate the effectiveness of, and provide a comparative evaluation of, various light sources and photosensitizing agents for photodynamic therapy in patients with clinical manifestations of photodamage.


Heritage
2021
Vol 4 (1)
pp. 188-197
Author(s):  
Dorukalp Durmus

Light causes damage when it is absorbed by sensitive artwork, such as oil paintings. However, light is needed to initiate vision and display artwork. The dilemma between visibility and damage, coupled with the inverse relationship between color quality and energy efficiency, poses a challenge for curators, conservators, and lighting designers in identifying optimal light sources. Multi-primary LEDs can provide great flexibility in terms of color quality, damage reduction, and energy efficiency for artwork illumination. However, there are no established metrics that quantify the output variability or highlight the trade-offs between different metrics. Here, various metrics related to museum lighting (damage, the color quality of paintings, illuminance, luminous efficacy of radiation) are analyzed using a voxelated 3-D volume. The continuous data in each dimension of the 3-D volume are converted to discrete data by identifying a significant minimum value (unit voxel). The resulting discretized 3-D volumes display the trade-offs between the selected measures. It is possible to quantify the volume of the graph by summing unique voxels, which enables comparison of the performance of different light sources. The proposed representation model can be used for individual pigments or paintings with numerous pigments. The proposed method can be the foundation of a damage appearance model (DAM).
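As a rough illustration of the voxel-counting idea, the sketch below discretises sampled points in a 3-D metric space into unit voxels and counts the unique voxels occupied; the metric names, ranges and voxel sizes are assumptions for the example rather than values from the paper.

```python
# Illustrative voxel counting over a 3-D metric space; values are assumptions.
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical achievable outputs of a multi-primary LED source:
# columns = [relative damage, colour fidelity, luminous efficacy of radiation].
samples = np.column_stack([
    rng.uniform(0.2, 0.8, 5000),        # relative damage (unitless)
    rng.uniform(70.0, 100.0, 5000),     # colour fidelity score
    rng.uniform(250.0, 400.0, 5000),    # LER in lm/W
])

unit_voxel = np.array([0.05, 1.0, 5.0])         # chosen significant minima
voxel_indices = np.floor(samples / unit_voxel).astype(int)
occupied = np.unique(voxel_indices, axis=0)

print("occupied voxel count:", len(occupied))   # proxy for output variability
```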


Sensors
2021
Vol 21 (4)
pp. 1299
Author(s):
Honglin Yuan
Tim Hoogenkamp
Remco C. Veltkamp

Deep learning has achieved great success on robotic vision tasks. However, compared with other vision-based tasks, it is difficult to collect a representative and sufficiently large training set for six-dimensional (6D) object pose estimation, due to the inherent difficulty of data collection. In this paper, we propose the RobotP dataset, consisting of commonly used objects, for benchmarking 6D object pose estimation. To create the dataset, we apply a 3D reconstruction pipeline to produce high-quality depth images, ground truth poses, and 3D models for well-selected objects. Subsequently, based on the generated data, we produce object segmentation masks and two-dimensional (2D) bounding boxes automatically. To further enrich the data, we synthesize a large number of photo-realistic color-and-depth image pairs with ground truth 6D poses. Our dataset is freely distributed to research groups through the Shape Retrieval Challenge benchmark on 6D pose estimation. Based on our benchmark, different learning-based approaches are trained and tested on the unified dataset. The evaluation results indicate that there is considerable room for improvement in 6D object pose estimation, particularly for objects with dark colors, and that photo-realistic images help to increase the performance of pose estimation algorithms.
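For context on how such benchmarks are typically scored, the sketch below computes the average distance of model points (ADD), a common accuracy measure for 6D pose estimation; whether the Shape Retrieval Challenge uses this exact criterion is not stated above, so the metric and the 10%-of-diameter threshold are assumptions, and the object model and poses are synthetic.

```python
# Illustrative ADD computation for evaluating a predicted 6D pose against
# ground truth; the object model and poses below are synthetic.
import numpy as np

def add_metric(model_points, R_gt, t_gt, R_pred, t_pred):
    """Mean distance between model points under the true and predicted poses."""
    gt = model_points @ R_gt.T + t_gt
    pred = model_points @ R_pred.T + t_pred
    return np.mean(np.linalg.norm(gt - pred, axis=1))

rng = np.random.default_rng(1)
model = rng.uniform(-0.05, 0.05, size=(1000, 3))          # ~10 cm synthetic object
diameter = np.max(np.linalg.norm(model[:, None] - model[None, :], axis=-1))

R_gt, t_gt = np.eye(3), np.array([0.0, 0.0, 0.5])
R_pred, t_pred = np.eye(3), np.array([0.004, 0.0, 0.5])   # 4 mm translation error

err = add_metric(model, R_gt, t_gt, R_pred, t_pred)
print("ADD = %.4f m, correct: %s" % (err, err < 0.1 * diameter))
```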

