DIRECT IMAGE-TO-GEOMETRY REGISTRATION USING MOBILE SENSOR DATA

Author(s):
C. Kehl,
S. J. Buckley,
R. L. Gawthorpe,
I. Viola,
J. A. Howell

Adding supplementary texture and 2D image-based annotations to 3D surface models is a useful next step for domain specialists to make use of photorealistic products of laser scanning and photogrammetry. This requires a registration between the new camera imagery and the model geometry to be solved, which can be a time-consuming task without appropriate automation. The increasing availability of photorealistic models, coupled with the proliferation of mobile devices, gives users the possibility to complement their models in real time. Modern mobile devices deliver digital photographs of increasing quality, as well as on-board sensor data, which can be used as input for practical and automatic camera registration procedures. Their familiar user interface also improves manual registration procedures. This paper introduces a fully automatic pose estimation method using the on-board sensor data for initial exterior orientation, and feature matching between an acquired photograph and a synthesised rendering of the orientated 3D scene as input for fine alignment. The paper also introduces a user-friendly manual camera registration and pose estimation interface for mobile devices, based on existing surface geometry and numerical optimisation methods. The article further assesses the automatic algorithm's accuracy compared to traditional methods, and the impact of computational and environmental parameters. Experiments using urban and geological case studies show a significant sensitivity of the automatic procedure to the quality of the initial mobile sensor values. Changing natural lighting conditions remain a challenge for automatic pose estimation techniques, although progress is presented here. Finally, the automatically registered mobile images are used as the basis for adding user annotations to the input textured model.
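As a rough illustration of how on-board sensor readings can seed an exterior orientation, the sketch below composes a camera rotation matrix from compass heading, pitch and roll. The Z-Y-X angle convention and the function name are assumptions for illustration, not the paper's implementation; the result would serve only as the coarse prior that feature matching then refines.

```python
import math

def rotation_from_sensors(heading_deg, pitch_deg, roll_deg):
    """Compose a rotation matrix from compass heading, pitch and roll
    (Z-Y-X intrinsic order) as a coarse exterior-orientation prior
    for later feature-based refinement."""
    h, p, r = (math.radians(a) for a in (heading_deg, pitch_deg, roll_deg))
    ch, sh = math.cos(h), math.sin(h)
    cp, sp = math.cos(p), math.sin(p)
    cr, sr = math.cos(r), math.sin(r)
    # R = Rz(heading) @ Ry(pitch) @ Rx(roll)
    return [
        [ch * cp, ch * sp * sr - sh * cr, ch * sp * cr + sh * sr],
        [sh * cp, sh * sp * sr + ch * cr, sh * sp * cr - ch * sr],
        [-sp,     cp * sr,                cp * cr],
    ]
```

Sensor noise (especially compass heading) makes this prior only approximate, which is why the paper's experiments stress sensitivity to initial sensor quality.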



Author(s):
Juyuan Yin,
Jian Sun,
Keshuang Tang

Queue length estimation is of great importance for signal performance measures and signal optimization. With the development of connected vehicle technology and mobile internet technology, using mobile sensor data instead of fixed detector data to estimate queue length has become a significant research topic. This study proposes a queue length estimation method using low-penetration mobile sensor data as the only input. The proposed method is based on a combination of Kalman filtering and shockwave theory. Critical points are identified from raw spatiotemporal points and allocated to different cycles for subsequent estimation. To apply the Kalman filter, a state-space model is established with two state variables and system noise determined by queue-forming acceleration, which characterizes the stochastic property of queue forming. The Kalman filter, with joining points as measurement input, recursively estimates real-time queue lengths; queue-discharging waves, on the other hand, are estimated with a line fitted to leaving points. By calculating the crossing point of the queue-forming wave and the queue-discharging wave of a cycle, the maximum queue length is also estimated. A case study with DiDi mobile sensor data and ground truth maximum queue lengths at the Huanggang-Fuzhong intersection, Shenzhen, China, shows that the mean absolute percentage error is only 11.2%. Moreover, the sensitivity analysis shows that the proposed estimation method achieves much better performance than the classical linear regression method, especially at extremely low penetration rates.
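The crossing-point construction from shockwave theory reduces to intersecting two lines in the time-space plane; the sketch below shows that step with constant wave speeds. The parameter names, sign conventions and constant-speed assumption are illustrative simplifications, not the authors' implementation.

```python
def max_queue_length(t_red, w_form, t_green, w_disch):
    """Intersect the queue-forming wave (starts at red onset t_red,
    back of queue moving upstream at speed w_form) with the
    queue-discharging wave (starts at green onset t_green, moving
    upstream at speed w_disch > w_form).  Returns (t_max, x_max):
    the time and distance from the stop bar of the maximum queue."""
    # forming wave:     x = w_form  * (t - t_red)
    # discharging wave: x = w_disch * (t - t_green)
    t_max = (w_disch * t_green - w_form * t_red) / (w_disch - w_form)
    x_max = w_form * (t_max - t_red)
    return t_max, x_max
```

For example, a forming wave of 2 m/s starting at red onset t = 0 s and a discharging wave of 5 m/s starting at green onset t = 30 s meet at t = 50 s, 100 m upstream of the stop bar.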


2021
Author(s):
Dengqing Tang,
Lincheng Shen,
Xiaojiao Xiang,
Han Zhou,
Tianjiang Hu

We propose a learning-type, anchors-driven real-time pose estimation method for autolanding fixed-wing unmanned aerial vehicles (UAVs). The proposed method enables online tracking of both position and attitude by a ground stereo vision system in Global Navigation Satellite System (GNSS)-denied environments. A pipeline of convolutional neural network (CNN)-based UAV anchor detection and anchor-driven UAV pose estimation is employed. To realize robust and accurate anchor detection, we design and implement a Block-CNN architecture to reduce the impact of outliers. Based on the detected anchors, monocular and stereo vision-based filters are established to update the UAV position and attitude. To expand the training dataset without extra outdoor experiments, we develop a parallel system containing outdoor and simulated systems with the same configuration. Simulated and outdoor experiments demonstrate a remarkable pose estimation accuracy improvement over the conventional Perspective-n-Point solution. In addition, the experiments validate the feasibility of the proposed architecture and algorithm in terms of the accuracy and real-time capability requirements for fixed-wing autolanding UAVs.
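Once 2D anchors are matched across the two ground cameras, a stereo back-end can recover the 3D position; the midpoint-of-closest-points triangulation below is one common, generic choice (a sketch under assumed ray parameterisation, not the authors' filter design).

```python
import numpy as np

def triangulate_midpoint(o1, d1, o2, d2):
    """Triangulate a 3D point from two camera rays (origin o,
    direction d) as the midpoint of the closest points on the two
    rays -- a common stereo back-end once 2D anchors are matched."""
    o1, d1, o2, d2 = (np.asarray(v, float) for v in (o1, d1, o2, d2))
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    b = o2 - o1
    d1d2 = d1 @ d2
    denom = 1.0 - d1d2 ** 2
    if denom < 1e-12:
        raise ValueError("rays are (near-)parallel; no unique midpoint")
    t1 = (b @ d1 - (b @ d2) * d1d2) / denom
    t2 = ((b @ d1) * d1d2 - b @ d2) / denom
    return 0.5 * ((o1 + t1 * d1) + (o2 + t2 * d2))
```

For exactly intersecting rays the midpoint coincides with the intersection; with noisy anchors it splits the residual between the two views.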


Author(s):
Xuanke You,
Lan Zhang,
Haikuo Yu,
Mu Yuan,
Xiang-Yang Li

Leveraging the sensor data of mobile devices and wearables, activity detection is a critical task in various intelligent systems. Most recent work trains deep models to improve the accuracy of recognizing specific human activities, which, however, relies on specially collected and accurately labeled sensor data. It is labor-intensive and time-consuming to collect and label large-scale sensor data covering various people, mobile devices, and environments. In production scenarios, on the one hand, the lack of accurately labeled sensor data poses significant challenges to the detection of key activities; on the other hand, massive volumes of continuously generated sensor data carrying only inexact information are severely underutilized. For example, in an on-demand food delivery system, detecting the key activity of the rider getting off his/her motorcycle to hand food over to the customer is essential for predicting the exact delivery time. Nevertheless, the system has only the raw sensor data and the clicked "finish delivery" events, which are highly relevant to the key activity but very inexact, since different riders may click "finish delivery" at any time during last-mile delivery. In this work, without exact labels of key activities, we propose a system, named KATN, to detect the exact regions of key activities based on inexact supervised learning. We design a novel siamese key activity attention network (SAN) to learn both discriminative and detailed sequential features of the key activity under the supervision of inexact labels. By interpreting the behaviors of SAN, an exact time estimation method is devised. We also provide a personal adaptation mechanism to cope with the diverse habits of users. Extensive experiments on both public datasets and data from a real-world food delivery system testify to the significant advantages of our design. Furthermore, based on KATN, we propose a novel user-friendly annotation mechanism to facilitate the annotation of large-scale sensor data for a wide range of applications.
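The idea of attending over a sensor sequence to localise a key-activity region can be caricatured without a deep network. The template-matching "attention" below is a deliberately simplified stand-in for the learned siamese attention; the function name, similarity score and fixed window are all assumptions for illustration.

```python
import numpy as np

def locate_key_activity(seq, template):
    """Score every window of a 1-D sensor sequence by similarity to a
    template, turn scores into softmax attention weights, and return
    the highest-weighted window.  The template plays the role of the
    reference branch in a siamese comparison."""
    window = len(template)
    scores = np.array([
        -np.sum((seq[i:i + window] - template) ** 2)
        for i in range(len(seq) - window + 1)
    ])
    weights = np.exp(scores - scores.max())  # numerically stable softmax
    weights /= weights.sum()
    start = int(np.argmax(weights))
    return start, start + window, weights
```

In KATN the window and template are replaced by learned features, and supervision comes only from the inexact "finish delivery" clicks rather than a hand-crafted reference.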


Author(s):
Zhiming Chen,
Lei Li,
Yunhua Wu,
Bing Hua,
Kang Niu

Purpose
On-orbit service technology is one of the key technologies for space manipulation activities such as spacecraft life extension, faulty spacecraft capture and on-orbit debris removal. Failed satellites, space debris and adversary spacecraft are almost all non-cooperative targets. Relatively accurate pose estimation is critical to spatial operations, but it is also a recognized technical difficulty because of the undefined prior information of non-cooperative targets. With the rapid development of laser radar, laser scanning equipment is increasingly applied in the measurement of non-cooperative targets. It is therefore necessary to develop a new pose estimation method for non-cooperative targets based on 3D point clouds. The paper aims to discuss these issues.
Design/methodology/approach
In this paper, a method based on the inherent characteristics of a spacecraft is proposed for estimating the pose (position and attitude) of a spatial non-cooperative target. First, the obtained point cloud is preprocessed to reduce noise and improve data quality. Second, according to the features of the satellite, a recognition system for non-cooperative measurement is designed; components common to satellite configurations are chosen as the recognized objects. Finally, based on the identified objects, the ICP algorithm is used to calculate the pose between two point-cloud frames captured at different times.
Findings
The new method enhances matching speed and improves the accuracy of pose estimation compared with traditional methods by reducing the number of matching points. The recognition of components on non-cooperative spacecraft directly contributes to space docking, on-orbit capture and relative navigation.
Research limitations/implications
Limited by the measurement distance of the laser radar, this paper considers pose estimation for non-cooperative spacecraft at close range.
Practical implications
The pose estimation method for non-cooperative spacecraft in this paper is mainly applied to close-proximity space operations such as the final rendezvous phase of spacecraft or the ultra-close approaching phase of target capture. The system can recognize components to be captured and provide the relative pose of the non-cooperative spacecraft. The method is more robust than traditional single-component recognition and overall matching methods when the laser radar scan is incomplete or components are occluded.
Originality/value
This paper introduces a new pose estimation method for non-cooperative spacecraft based on point clouds. The experimental results show that the proposed method can effectively identify the features of non-cooperative targets and track their position and attitude. The method is robust to noise and greatly improves the speed of pose estimation while guaranteeing accuracy.
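The pose update at the heart of ICP is a closed-form rigid alignment between matched point sets; the Kabsch-style solver below shows that core step (a generic sketch of the standard SVD solution, not the authors' full recognition-plus-ICP pipeline).

```python
import numpy as np

def rigid_align(src, dst):
    """Closed-form rigid alignment (Kabsch) between two point sets
    with known correspondences -- the step repeated inside ICP after
    each round of nearest-neighbour matching.  Returns (R, t) such
    that dst ~= src @ R.T + t."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    # cross-covariance of the centred point sets
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    # reflection guard: force det(R) = +1
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ S @ U.T
    t = cd - R @ cs
    return R, t
```

Restricting the correspondences to recognized components, as the paper does, shrinks the point sets fed to this step, which is where the reported speed-up comes from.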




2016
Vol 6 (01)
pp. 5218
Author(s):
Laxmi Mohandas,
Anju T. R.,
Sarita G. Bhat*

An assortment of redox-active phenazine compounds, like pyocyanin with its characteristic blue-green colour, is synthesized by Pseudomonas aeruginosa, a Gram-negative opportunistic pathogen that is also considered one of the most commercially valuable microorganisms. In this study, pyocyanin from Pseudomonas aeruginosa BTRY1, isolated from a food sample, was assessed for its antibiofilm activity by microtiter plate assay against strong biofilm producers belonging to the genera Bacillus, Staphylococcus, Brevibacterium and Micrococcus. Pyocyanin inhibited biofilm activity at very low concentrations. This was also confirmed by Scanning Electron Microscopy (SEM) and Confocal Laser Scanning Microscopy (CLSM). Both SEM and CLSM helped to visualize the biocontrol of biofilm formation by the eight pathogens. The imaging and quantification by CLSM also established the impact of pyocyanin on biofilm biocontrol, mainly in the food industry.

