Laser Stereo Vision-Based Position and Attitude Detection of Non-Cooperative Target for Space Application

2014 ◽  
Vol 1039 ◽  
pp. 242-250
Author(s):  
Da Wei Tu ◽  
Xu Zhang ◽  
Kai Fei ◽  
Xi Zhang

Vision measurement of non-cooperative targets in space is an essential technique in space counterwork, fragment disposal, on-orbit satellite servicing and spacecraft rendezvous, because the position and attitude of the target spacecraft or object must be detected first in each of these processes. A 2D passive camera loses depth information and cannot measure the position and attitude of a non-cooperative target. Several kinds of range imaging methods are alternatives. The traditional triangulation method provides very high-precision range measurement at close range, but the nature of the triangulation geometry means that the uncertainty grows as the range increases. Laser radar (LIDAR) based on the TOF (time of flight) or phase-difference principle is suitable for middle and long ranges, but not for short range. A novel system structure is put forward, in which a so-called synchronous scanning triangulation method is combined with a LIDAR system. The synchronous scanning triangulation system covers the range from 0.5 m to 10 m for the object's attitude, and the LIDAR system covers the range from 10 m to 200 m for the object's position (direction and range). They are merged into one optical path and do not influence each other because they use two different wavelengths. This mechanism makes the system more compact and lighter. The system performances, such as measurement range and precision, are analyzed according to the system parameters. A principle prototype is designed and established, and the experimental results confirm that its performance is promising and can satisfy the requirements for space application.
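The quadratic growth of triangulation range error, which motivates the hybrid triangulation/LIDAR design above, can be sketched with the standard error model delta_z ≈ z² · delta_d / (f · b). This is a minimal illustration only; the baseline, focal length and spot-position uncertainty below are assumed values, not parameters from the paper.

```python
# Range uncertainty of a triangulation sensor: for baseline b, focal
# length f and detector spot-position uncertainty delta_d, the depth
# error grows quadratically with range z. Parameter values are
# illustrative, not taken from the paper.

def depth_uncertainty(z, baseline, focal, delta_d):
    """Approximate 1-sigma depth error of a triangulation sensor at range z."""
    return z ** 2 * delta_d / (focal * baseline)

b = 0.3        # baseline between laser and detector [m] (assumed)
f = 0.05       # effective focal length [m] (assumed)
dd = 5e-6      # spot-position uncertainty on the detector [m] (assumed)

for z in (0.5, 10.0, 200.0):
    print(f"range {z:6.1f} m -> depth error {depth_uncertainty(z, b, f, dd) * 1000:.3f} mm")
```

The error at 200 m is 400² times the error at 10 m, which is why the abstract hands long-range measurement over to the LIDAR channel.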

Author(s):  
Shengjun Tang ◽  
Qing Zhu ◽  
Wu Chen ◽  
Walid Darwish ◽  
Bo Wu ◽  
...  

RGB-D sensors are novel sensing systems that capture RGB images along with pixel-wise depth information. Although they are widely used in various applications, RGB-D sensors have significant drawbacks with respect to 3D dense mapping of indoor environments. First, they only allow a limited measurement distance (e.g., within 3 m) and a limited field of view. Second, the error of the depth measurement increases with increasing distance to the sensor. In this paper, we propose an enhanced RGB-D mapping method for detailed 3D modeling of large indoor environments by combining RGB image-based modeling and depth-based modeling. The scale ambiguity problem during pose estimation with RGB image sequences is resolved by integrating the depth and visual information provided by the proposed system. A robust rigid-transformation recovery method is developed to register the RGB image-based and depth-based 3D models together. The proposed method is examined with two datasets collected in indoor environments, for which the experimental results demonstrate the feasibility and robustness of the proposed method.
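The scale-ambiguity resolution mentioned above can be sketched in its simplest form: monocular SfM recovers feature depths only up to a global scale, and per-feature metric depths from the depth sensor pin that scale down. This is a simplified stand-in for the paper's integration method, using a robust median ratio and synthetic values.

```python
# Resolving monocular scale ambiguity with depth measurements (sketch).
# SfM reconstructs feature depths only up to an unknown global scale s;
# given metric depths from the RGB-D sensor for the same features,
# s can be estimated as the median ratio. All values are synthetic.

import statistics

def recover_scale(sfm_depths, metric_depths):
    """Estimate the global scale mapping SfM depths to metric depths."""
    ratios = [m / s for s, m in zip(sfm_depths, metric_depths) if s > 0]
    return statistics.median(ratios)   # median is robust to outlier matches

sfm = [0.8, 1.2, 1.0, 0.9, 5.0]        # up-to-scale depths (last one a bad match)
metric = [1.6, 2.4, 2.0, 1.8, 2.1]     # metric depths from the depth sensor [m]
print(recover_scale(sfm, metric))
```

The median discards the mismatched fifth feature, whereas a mean ratio would be biased by it.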


Author(s):  
Mowen Xue ◽  
Xudong Li ◽  
Hongzhi Jiang ◽  
Huijie Zhao

2014 ◽  
Vol 6 ◽  
pp. 758679 ◽  
Author(s):  
He Gao ◽  
Fuqiang Zhou ◽  
Bin Peng ◽  
Yexin Wang ◽  
Haishu Tan

Structured-light three-dimensional (3D) vision measurement is currently one of the most common approaches to obtaining 3D surface data. However, the existing structured-light scanning measurement systems are primarily constructed on the basis of a single sensor, which inevitably generates three obvious problems: limited measurement range, blind measurement areas, and low scanning efficiency. To solve these problems, we developed a novel 3D wide-FOV scanning measurement system which adopts two multiline structured-light sensors. Each sensor is composed of a digital CCD camera and three line-structured-light projectors. During the measurement process, the measured object is scanned by the two sensors from two different angles at a certain speed. Consequently, the measurement range is expanded and the blind measurement area is reduced. More importantly, since six light stripes are simultaneously projected onto the object surface, the scanning efficiency is greatly improved. The Multiline Structured-light Sensors Scanning Measurement System (MSSS) is calibrated on site by a 2D pattern. The experimental results show that the RMS errors of the system for calibration and measurement are less than 0.092 mm and 0.168 mm, respectively, which proves that the MSSS is applicable for obtaining 3D object surfaces with high efficiency and accuracy.
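The core geometric step of any line-structured-light sensor like those above is intersecting a camera ray with the calibrated laser light plane. The sketch below shows that step in isolation; the plane and ray values are invented for illustration and are not the MSSS calibration results.

```python
# Single-point reconstruction in a line-structured-light sensor (sketch):
# each image point defines a camera ray from the optical center; intersecting
# it with the calibrated light plane n . X = d yields the 3D surface point.
# Plane and ray below are made up for illustration.

def ray_plane_intersection(ray_dir, plane_n, plane_d):
    """Intersect the ray X = t * ray_dir (origin at camera center)
    with the light plane n . X = d."""
    denom = sum(n * r for n, r in zip(plane_n, ray_dir))
    if abs(denom) < 1e-12:
        raise ValueError("ray parallel to light plane")
    t = plane_d / denom
    return tuple(t * r for r in ray_dir)

ray = (0.1, 0.0, 1.0)   # normalized image coordinates (x, y, 1)
n = (0.0, 0.0, 1.0)     # light-plane normal (assumed)
d = 0.5                 # plane offset [m] (assumed)
print(ray_plane_intersection(ray, n, d))
```

With six stripes (six calibrated planes), the same intersection runs once per stripe, which is where the claimed efficiency gain comes from.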


1994 ◽  
Vol 04 (03) ◽  
pp. 675-691
Author(s):  
MARK JEFFERY ◽  
E.J. D’ANGELO

The Maxwell-Bloch equations describing the transverse modes of a large aspect ratio homogeneously broadened single-longitudinal-mode laser are solved numerically. The solutions are visualized by color-coding the intensity and phase of the transverse electric field at each time step and a video showing the detailed time evolution has been made. As the gain is increased optical vortices, or defects, are observed. These vortices interact and for very high gain optical turbulence exists. The orbits of the single- and two-defect solutions are analyzed and the effective force law coupling the defects is numerically calculated. The best fit force is a modified Bessel function which is proportional to r^(-1/2) exp(-r/α) at large distances and to ln(α/r) at close range, where α is approximately 2. This interaction force law for optical defects is similar to the force law between Abrikosov vortices in a superconductor.
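The two fitted regimes quoted above are exactly the small- and large-argument asymptotics of the modified Bessel function K0: K0(x) ~ sqrt(pi/(2x)) e^(-x) for large x and K0(x) ~ ln(2/x) - gamma for small x, consistent with ln(α/r) for α ≈ 2. A quick numeric check, via the integral representation K0(x) = ∫₀^∞ exp(-x cosh t) dt, confirms the large-x form; this is a generic verification, not a reproduction of the paper's fit.

```python
import math

# Numeric check of the K0 asymptotics underlying the fitted force law:
# K0(x) = \int_0^inf exp(-x * cosh t) dt, evaluated with a trapezoid rule,
# compared against sqrt(pi/(2x)) * exp(-x) at x = 5.

def bessel_k0(x, t_max=20.0, n=200000):
    """Modified Bessel function K0 via its integral representation."""
    h = t_max / n
    total = 0.5 * (math.exp(-x) + math.exp(-x * math.cosh(t_max)))
    for i in range(1, n):
        total += math.exp(-x * math.cosh(i * h))
    return total * h

x = 5.0
exact = bessel_k0(x)
asymptotic = math.sqrt(math.pi / (2 * x)) * math.exp(-x)
print(exact, asymptotic, asymptotic / exact)
```

At x = 5 the leading asymptotic term already agrees with the integral to within a few percent.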


Sensors ◽  
2021 ◽  
Vol 22 (1) ◽  
pp. 272
Author(s):  
Jacek Marcinkiewicz ◽  
Mikołaj Spadło ◽  
Zaneta Staszak ◽  
Jarosław Selech

The article lays out the methodology for shaping the design features of a strain gauge transducer, which would make it possible to study forces and torques generated during the operation of symmetrical seeder coulters. The transducers known to date cannot be used to determine forces and torques for the coulter configuration adopted by the authors. For this purpose, a transducer design in the form of strain-gauge beams was used to ensure concentrated stress accumulation. A detailed design was presented in the form of a 3D model, along with a transducer body manufactured on its basis, including the method for arranging the strain gauges thereon. Moreover, the article discusses the methodology of processing voltage signals obtained from component loads. Particular attention was paid to the methodology of determining the load capacity of the transducer structure, based on the finite element method (FEM). This made it possible to choose a transducer geometry providing the expected measurement sensitivity and, at the same time, maintaining the best linearity of indications, an insignificant coupling error, and a broad measurement range. The article also presents the characteristics of the transducer calibration process and a description of a special test stand designed for this purpose. The transducer developed within the scope of this work provides very high precision of load spectrum readings, thus enabling the performance of a detailed fatigue analysis of the tested designs. Additionally, the versatility it offers makes it easy to adapt to many existing test stands, which is a significant advantage because it eliminates the need to build new test stands.
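The calibration step described above reduces, in its simplest form, to fitting a least-squares line through reference loads and bridge voltages and judging linearity from the residuals. The sketch below illustrates that step with invented data; the actual calibration procedure and numbers in the article are more elaborate.

```python
# Strain-gauge transducer calibration (sketch): reference loads vs. bridge
# output voltages are fitted with an ordinary least-squares line; the
# nonlinearity of indications is the largest residual. Data are invented.

def fit_line(xs, ys):
    """Ordinary least-squares fit y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

loads = [0, 100, 200, 300, 400]              # reference force [N]
volts = [0.001, 0.502, 1.003, 1.499, 2.000]  # bridge output [mV]

a, b = fit_line(loads, volts)
max_resid = max(abs(y - (a * x + b)) for x, y in zip(loads, volts))
print(f"sensitivity {a:.6f} mV/N, nonlinearity {max_resid:.4f} mV")
```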


2021 ◽  
Vol 13 (18) ◽  
pp. 3709
Author(s):  
Zifa Zhu ◽  
Yuebo Ma ◽  
Rujin Zhao ◽  
Enhai Liu ◽  
Sikang Zeng ◽  
...  

Monocular vision is one of the most commonly used noncontact six-degrees-of-freedom (6-DOF) pose estimation methods. However, the large translational DOF measurement error along the optical axis of the camera is one of its main weaknesses, which greatly limits the measurement accuracy of monocular vision measurement. In this paper, we propose a novel monocular camera and 1D laser rangefinder (LRF) fusion strategy to overcome this weakness and design a remote, ultra-high-precision cooperative-target 6-DOF pose estimation sensor. Our approach consists of two modules: (1) a feature fusion module that precisely fuses the initial pose estimated from the camera and the depth information obtained by the LRF, and (2) an optimization module that optimizes pose and system parameters. The performance of our proposed 6-DOF pose estimation method is validated using simulations and real-world experiments. The experimental results show that our fusion strategy can accurately integrate the information of the camera and the LRF. Further optimization carried out on this basis effectively reduces the measurement error of monocular vision 6-DOF pose measurement. The experimental results obtained from a prototype show that its translational and rotational DOF measurement accuracy can reach up to 0.02 mm and 15″, respectively, at a distance of 10 m.
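The intuition behind the fusion can be sketched very simply: the monocular direction to the target is accurate while its range is biased, so the camera translation is rescaled to the LRF-measured range. This is a deliberately simplified stand-in for the paper's feature-fusion and optimization modules, with invented values.

```python
import math

# Camera + 1D laser-rangefinder fusion (sketch): the monocular pose has a
# large error along the optical axis, while the LRF measures the target
# distance precisely. Rescale the camera translation so its length matches
# the LRF range, keeping the accurate monocular direction. Values invented.

def fuse_translation(t_cam, lrf_range):
    """Rescale the camera-estimated translation vector to the LRF range."""
    norm = math.sqrt(sum(c * c for c in t_cam))
    s = lrf_range / norm
    return tuple(s * c for c in t_cam)

t_cam = (0.10, -0.05, 10.40)   # monocular estimate, z component biased [m]
d_lrf = 10.003                 # LRF-measured range [m]
print(fuse_translation(t_cam, d_lrf))
```

The fused vector keeps the camera's bearing but inherits the millimetre-level range of the LRF, which is the error mode the paper targets.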


2020 ◽  
Vol 1 (1) ◽  
Author(s):  
Michael T. Benson ◽  
Harish Sathishchandra ◽  
Garrett M. Clayton ◽  
Sean B. Andersson

In this article, a compressive sensing-based reconstruction algorithm is applied to data acquired from a nodding multibeam Lidar system following a Lissajous-like trajectory. Multibeam Lidar systems provide 3D depth information of the environment, but the vertical resolution of these devices may be insufficient in many applications. To mitigate this issue, the Lidar can be nodded to obtain higher vertical resolution at the cost of increased scan time. Using Lissajous-like nodding trajectories allows for the trade-off between scan time and horizontal and vertical resolutions through the choice of scan parameters. These patterns also naturally subsample the imaged area. In this article, a compressive sensing-based reconstruction algorithm is applied to the data collected during a relatively fast and therefore low-resolution Lissajous-like scan. Experiments and simulations show the feasibility of this method and compare the reconstructions to those made using simple nearest-neighbor interpolation.
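A Lissajous-like nodding trajectory of the kind described above can be generated from just two frequencies: the spin rate of the lidar in azimuth and the nod rate of the platform. The sketch below samples such a pattern; the frequencies and amplitude are illustrative choices, not the scan parameters used in the article.

```python
import math

# Lissajous-like nodding trajectory (sketch): the multibeam lidar spins in
# azimuth at f_spin while the platform nods with angle
# theta(t) = A * sin(2*pi*f_nod*t). The frequency ratio controls how densely
# the pattern covers the scene (the scan-time / resolution trade-off).
# Parameters below are illustrative.

def nod_trajectory(f_nod, f_spin, amplitude_deg, t_end, n):
    """Sample (azimuth, nod-angle) pairs of a Lissajous-like scan."""
    pts = []
    for k in range(n):
        t = t_end * k / (n - 1)
        az = (360.0 * f_spin * t) % 360.0
        nod = amplitude_deg * math.sin(2 * math.pi * f_nod * t)
        pts.append((az, nod))
    return pts

traj = nod_trajectory(f_nod=0.7, f_spin=10.0, amplitude_deg=15.0, t_end=10.0, n=1001)
print(traj[0], traj[-1])
```

Non-integer frequency ratios keep successive nod cycles from retracing the same azimuths, which is what produces the natural subsampling the article exploits.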


2011 ◽  
Vol 42 (1) ◽  
pp. 10-19 ◽  
Author(s):  
Giuliano Di Baldassarre ◽  
Pierluigi Claps

Several hydrological studies have shown that river discharge records are affected by significant uncertainty. This uncertainty is expected to be very high for river flow data referring to flood events, when the stage–discharge rating curve is extrapolated far beyond the measurement range. This study examines the standard methodologies for the construction and extrapolation of rating curves to extreme flow depths and shows the need for proper approaches to reduce the uncertainty of flood discharge data. To this end, a comprehensive analysis is performed on a 16 km reach of the River Po (Italy), where five hydraulic models (HEC-RAS) were built. The results of five topographical surveys conducted during the last 50 years are used as geometric input. The application demonstrates that hydraulically built stage–discharge curves for the five cases differ only for ordinary flows, so that a common rating curve for flood discharges can be derived. This result confirms the validity of statistical approaches to the estimation of the so-called ‘flood rating curve’, a unique stage–discharge curve based on data of contemporaneous annual maxima of stage and discharge values, which appears insensitive to marginal changes in river geometry.
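The extrapolation problem discussed above can be made concrete with the standard power-law rating curve Q = a (h − h0)^b, fitted linearly in log space and then pushed beyond the gauged range. The gaugings below are synthetic; the sketch shows the mechanics, not the Po data.

```python
import math

# Stage-discharge rating curve (sketch): the standard form is
# Q = a * (h - h0)**b; with h0 known, a and b follow from a least-squares
# line through log(Q) vs. log(h - h0). Extrapolating far beyond the gauged
# stages is where the article locates the largest uncertainty.
# Gaugings below are synthetic.

def fit_rating_curve(stages, discharges, h0=0.0):
    """Fit Q = a * (h - h0)**b by linear regression in log space."""
    xs = [math.log(h - h0) for h in stages]
    ys = [math.log(q) for q in discharges]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = math.exp(my - b * mx)
    return a, b

stages = [1.0, 2.0, 3.0, 4.0]       # gauged stages [m]
flows = [5.0, 28.3, 78.0, 160.0]    # gauged discharges [m^3/s]

a, b = fit_rating_curve(stages, flows)
q_flood = a * 8.0 ** b              # extrapolated flood discharge at h = 8 m
print(a, b, q_flood)
```

The extrapolated flood value rests entirely on the fitted exponent b, so small errors in the gauged range are amplified at extreme stages.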


2020 ◽  
Vol 12 (7) ◽  
pp. 1227
Author(s):  
Liang Mei ◽  
Teng Ma ◽  
Zhen Zhang ◽  
Ruonan Fei ◽  
Kun Liu ◽  
...  

Lidar techniques have been widely employed for atmospheric remote sensing during past decades. However, an important drawback of the traditional atmospheric pulsed lidar technique is the large blind range, typically hundreds of meters, due to incomplete overlap between the transmitter and the receiver, etc. The large blind range prevents the successful retrieval of the near-ground aerosol profile, which is of great significance for both meteorological studies and environmental monitoring. In this work, we have demonstrated a new experimental approach to calibrate the overlap factor of the Mie-scattering pulsed lidar system by employing a collocated Scheimpflug lidar (SLidar) system. A calibration method for the overlap factor has been proposed and evaluated with lidar data measured in different ranges. The overlap factor, experimentally determined by the collocated SLidar system, has also been validated through horizontal comparison measurements. It was found that the median overlap factor evaluated by the proposed method agreed very well with the overlap factor obtained by the linear fitting approach under the assumption of homogeneous atmospheric conditions, and the discrepancy was generally less than 10%. Meanwhile, simultaneous measurements employing the SLidar system and the pulsed lidar system have been carried out to extend the measurement range of lidar techniques by gluing the lidar curves measured by the two systems. The profile of the aerosol extinction coefficient from the near surface at around 90 m up to 28 km can be well resolved in a slant measurement geometry during nighttime. This work demonstrates the great potential of employing the SLidar technique for the calibration of the overlap factor and the extension of the measurement range of pulsed lidar techniques.
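The overlap-factor idea above can be sketched on range-corrected signals: with X(r) = P(r) r², the pulsed lidar's overlap factor is the ratio of its signal to a reference profile that has no near-range blind zone (here standing in for the collocated SLidar), clipped to [0, 1]. The profiles below are synthetic; the paper's calibration is considerably more careful.

```python
# Overlap-factor estimation (sketch): O(r) = (P(r) * r**2) / X_ref(r),
# where P(r) is the pulsed lidar's raw signal and X_ref(r) a reference
# range-corrected profile without a blind zone. Profiles are synthetic.

def overlap_factor(p_pulsed, x_reference, ranges):
    """Per-range-bin overlap factor, clipped to [0, 1]."""
    out = []
    for p, x_ref, r in zip(p_pulsed, x_reference, ranges):
        o = (p * r * r) / x_ref if x_ref > 0 else 0.0
        out.append(min(max(o, 0.0), 1.0))
    return out

ranges = [100.0, 200.0, 400.0, 800.0]           # [m]
x_ref = [1.0, 1.0, 1.0, 1.0]                    # reference range-corrected signal
p_pulsed = [2e-5, 1.25e-5, 6.25e-6, 1.5625e-6]  # pulsed raw signal P(r)
print(overlap_factor(p_pulsed, x_ref, ranges))
```

Once O(r) reaches 1 at the full-overlap range, dividing the pulsed signal by O(r) recovers the near-range part of the profile, which is what allows gluing the two systems' curves.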


2013 ◽  
Vol 336-338 ◽  
pp. 2003-2007
Author(s):  
Zhan Li Li ◽  
Zhou Han

To address the problem of 3D measurement of industrial products, a vision measurement system based on marked targets was researched and developed. The system is a 3D visual system based on the basic principles of close-range photogrammetry. After objects with marked targets are photographed by a digital camera, digital image processing and vision measurement processing are used to detect and match the marked targets in order to locate the geometry precisely. The system implements the main functions of image preprocessing, target center location, feature matching, camera calibration, 3D reconstruction and precision evaluation. Experiments proved that the system can reduce the dependence on hardware equipment, measurement environment and professional skill, and obtain stable and precise 3D reconstruction results, which can be applied to the calculation of monitoring points.
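Of the pipeline steps listed above, target center location is the simplest to illustrate: a common choice for circular marked targets is the grey-value-weighted centroid of the blob pixels, which yields subpixel accuracy. The sketch below uses a synthetic 3×3 patch and is a generic technique, not necessarily the exact locator used in this system.

```python
# Marked-target center location (sketch): intensity-weighted centroid of a
# small image patch, a standard subpixel locator for circular targets.
# The patch values are synthetic.

def weighted_centroid(patch):
    """Intensity-weighted centroid (row, col) of a 2D list of grey values."""
    total = sum(sum(row) for row in patch)
    r = sum(i * sum(row) for i, row in enumerate(patch)) / total
    c = sum(j * v for row in patch for j, v in enumerate(row)) / total
    return r, c

patch = [[10, 50, 10],
         [50, 200, 50],
         [10, 50, 10]]
print(weighted_centroid(patch))
```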

