A novel method of 3D motion object’s position perception

2018 ◽  
Vol 246 ◽  
pp. 03020
Author(s):  
Tan Wei ◽  
Xuan Liu ◽  
Chen Yi ◽  
Erfu Yang

With the development of industrial automation, position measurement of 3D objects is becoming increasingly important, especially because it provides the positional parameters a manipulator needs to grasp an object accurately. In the method currently in widespread use, an image of the object is captured to obtain its positional parameters, which are then transmitted to the manipulator. This process introduces delay, reducing the manipulator’s working efficiency. A method for calculating the position of a target object in motion is therefore proposed. The method uses monocular vision to track 3D moving objects, applies contour sorting to extract the minimum bounding rectangle of the object’s contour, and combines this with video alignment to realize tracking, thereby reducing the measurement error. Experimental results and analysis show that the adopted measurement method is effective.
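As a concrete illustration of the contour-sorting step, the sketch below extracts a minimum-area bounding rectangle with OpenCV; the Otsu thresholding and the largest-area selection rule are assumptions for illustration, not details from the paper.

```python
# Sketch: pick the largest contour in a frame and return its
# minimum-area bounding rectangle. Thresholds are illustrative.
import cv2

def min_contour_rect(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    # Sort contours by area and keep the largest as the target object.
    target = max(contours, key=cv2.contourArea)
    return cv2.minAreaRect(target)   # ((cx, cy), (w, h), angle)
```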

2020 ◽  
Vol 29 (07n08) ◽  
pp. 2040011
Author(s):  
Zhongsheng Wang ◽  
Yufeng Lai ◽  
Sen Yang ◽  
Jiaqiong Gao

With the continuous development of computer vision and the upgrading of digital imaging equipment, image-based depth measurement is widely used in intelligent robotics, traffic assistance, three-dimensional modeling, and three-dimensional video production. Traditional depth measurement methods have several drawbacks: operation is complex, cost is high, and the measuring equipment is bulky and heavy. In this paper, based on the Harris-SIFT corner detection algorithm, a technique is proposed for measuring the absolute depth of an object in an image using monocular vision. First, after the monocular camera captures an image of the target object, an image segmentation algorithm based on the LBF model preprocesses the image. Then, the Harris algorithm in multi-scale space and the SIFT algorithm for reconstructing feature descriptors are used to extract feature information from the image. Finally, by comparing feature information between image groups, the depth of the target object is calculated using formulas based on the convex hull principle and the camera imaging principle. A test platform is used to run measurement tests with different depth measurement methods, and the actual depth of the target object is compared against the measured data to evaluate the accuracy of each method. The comparison shows that the error rate between the actual and measured distances is less than 3.5%; the method accurately measures the absolute depth of static objects at short range and outperforms the other methods tested.
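A minimal sketch of a Harris-plus-SIFT feature pipeline of the kind the abstract describes, assuming OpenCV; the single-scale Harris response and the brute-force matching are simplifications, not the paper’s multi-scale variant.

```python
# Sketch: detect Harris corners, describe them with SIFT descriptors,
# and match across two monocular views. Parameters are illustrative.
import cv2
import numpy as np

def harris_sift_matches(img1_gray, img2_gray, block=2, ksize=3, k=0.04):
    sift = cv2.SIFT_create()

    def harris_keypoints(img):
        resp = cv2.cornerHarris(np.float32(img), block, ksize, k)
        ys, xs = np.where(resp > 0.01 * resp.max())
        return [cv2.KeyPoint(float(x), float(y), 7) for x, y in zip(xs, ys)]

    kp1, des1 = sift.compute(img1_gray, harris_keypoints(img1_gray))
    kp2, des2 = sift.compute(img2_gray, harris_keypoints(img2_gray))
    matches = cv2.BFMatcher(cv2.NORM_L2).match(des1, des2)
    return kp1, kp2, sorted(matches, key=lambda m: m.distance)
```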


Sensors ◽  
2018 ◽  
Vol 18 (10) ◽  
pp. 3270
Author(s):  
Hao Cai ◽  
Zhaozheng Hu ◽  
Gang Huang ◽  
Dunyao Zhu ◽  
Xiaocong Su

Self-localization is a crucial task for intelligent vehicles. Existing localization methods usually require a high-cost IMU (Inertial Measurement Unit) or expensive LiDAR sensors (e.g., Velodyne HDL-64E). In this paper, we propose a low-cost yet accurate localization solution using a consumer-grade GPS receiver and a low-cost camera with the support of an HD map. Unlike existing HD map-based methods, which usually require unique landmarks within the sensed range, the proposed method utilizes common lane lines for vehicle localization, using a Kalman filter to fuse the GPS, monocular vision, and HD map for more accurate positioning. In the Kalman filter framework, the observations consist of two parts: the raw GPS coordinate, and the lateral distance between the vehicle and the lane, computed from the monocular camera. The HD map provides reference position information and correlates the local lateral distance from vision with the GPS coordinates, so that a linear Kalman filter can be formulated. In the prediction step, we propose a data-driven motion model rather than a kinematic one, which is more adaptive and flexible. The proposed method has been tested with both simulation data and real data collected in the field. The results demonstrate that the localization errors of the proposed method are less than half, or even one-third, of the original GPS positioning errors, using low-cost sensors with HD map support. Experimental results also demonstrate that integrating the proposed method into existing ones can greatly enhance their localization results.
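A minimal sketch of the measurement update in such a filter, assuming a 2D position state and an HD map that supplies the lane’s unit normal n and a reference point p, so the vision-derived lateral distance is d = n·(x − p); the noise values are illustrative.

```python
# Sketch: linear Kalman update fusing a raw GPS fix with a
# vision-derived lateral lane distance. Values are illustrative.
import numpy as np

def kf_update(x, P, z_gps, d_lane, n, p,
              R_gps=np.eye(2) * 4.0, r_lane=0.04):
    H = np.vstack([np.eye(2), n.reshape(1, 2)])      # stacked observation model
    z = np.concatenate([z_gps, [d_lane + n @ p]])    # stacked measurement
    R = np.block([[R_gps, np.zeros((2, 1))],
                  [np.zeros((1, 2)), [[r_lane]]]])
    y = z - H @ x                                    # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                   # Kalman gain
    return x + K @ y, (np.eye(2) - K @ H) @ P
```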


2021 ◽  
Author(s):  
Nuo Yu

Abstract To address the high energy consumption and long measurement time of traditional fault location methods for sensor nodes, a fault location measurement method based on a fuzzy control algorithm is designed and proposed. First, the fuzzy control algorithm is analyzed; the network is then clustered using cluster-head diagnosis, i.e., only nodes that satisfy the cluster-head conditions and are diagnosed as normal are selected as cluster heads. Finally, combined with the fuzzy control algorithm, the fault location of each cluster member node is measured directly by its cluster head. Simulation results show that the proposed method performs well.
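Since the abstract does not spell out the rule base, the following toy sketch shows one conventional way to score cluster-head candidates with fuzzy logic; the triangular memberships, the input choices (residual energy, mean neighbour distance), and the min-operator AND are standard fuzzy-control assumptions, not the paper’s design.

```python
# Sketch: score cluster-head candidates by residual energy and mean
# distance to neighbours with a toy Mamdani-style fuzzy rule.
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def head_fitness(energy_frac, mean_dist, d_max):
    high_energy = tri(energy_frac, 0.4, 1.0, 1.6)         # "energy is high"
    near_center = tri(mean_dist / d_max, -0.6, 0.0, 0.6)  # "node is central"
    return min(high_energy, near_center)                  # fuzzy AND (min)

# Candidates: node -> (remaining energy fraction, mean neighbour distance).
candidates = {"n1": (0.9, 12.0), "n2": (0.5, 5.0), "n3": (0.8, 20.0)}
head = max(candidates, key=lambda k: head_fitness(*candidates[k], d_max=25.0))
```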


2012 ◽  
Vol 430-432 ◽  
pp. 1871-1876
Author(s):  
Hui Bo Bi ◽  
Xiao Dong Xian ◽  
Li Juan Huang

To address tramcar collision accidents in underground coal mines, a monocular vision-based tramcar anti-collision warning system built on ARM and FPGA was designed and implemented. In this paper, we present an improved fast lane detection algorithm based on the Hough transform, together with a new distance measurement and early-warning scheme based on the invariance of the lane width. The system construction, hardware architecture, and software design are given in detail. Experimental results show that the precision and speed of the system satisfy the application requirements.
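A minimal sketch of the distance principle, assuming a calibrated pinhole camera: a lane of known, invariant width W at range Z projects to w = f·W/Z pixels, so Z = f·W/w. The Hough parameters, focal length, and the crude two-line rail selection below are illustrative, not the paper’s improved algorithm.

```python
# Sketch: Hough-based lane detection plus a range estimate from the
# invariance of the lane width. All parameters are illustrative.
import math
import cv2

def lane_distance(frame_gray, W_m=0.6, f_px=800.0):
    edges = cv2.Canny(frame_gray, 80, 160)
    lines = cv2.HoughLinesP(edges, 1, math.pi / 180, threshold=60,
                            minLineLength=40, maxLineGap=10)
    if lines is None or len(lines) < 2:
        return None
    # Crudely take the first two detected lines as the left/right rails
    # and measure their pixel separation w at their first endpoints.
    xs = sorted(int(l[0][0]) for l in lines[:2])
    w_px = xs[1] - xs[0]
    # Pinhole model: real width W at distance Z projects to w = f * W / Z.
    return f_px * W_m / w_px if w_px > 0 else None
```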


2019 ◽  
Vol 48 (3) ◽  
pp. 315001
Author(s):  
劳达宝 LAO Da-bao ◽  
张慧娟 ZHANG Hui-juan ◽  
熊芝 XIONG Zhi ◽  
周维虎 ZHOU Wei-hu

2020 ◽  
Vol 2020 ◽  
pp. 1-10
Author(s):  
Wei Li ◽  
Junhua Gu ◽  
Benwen Chen ◽  
Jungong Han

Scene parsing plays a crucial role in accomplishing human-robot interaction tasks. As the “eye” of the robot, the RGB-D camera is one of the most important components for collecting multiview images to construct instance-oriented 3D semantic maps of the environment, especially in unknown indoor scenes. Although there are plenty of studies developing accurate object-level mapping systems with different types of cameras, these methods either perform instance segmentation only on the completed map or suffer from critical real-time issues due to the heavy computation required. In this paper, we propose a novel method to incrementally build instance-oriented 3D semantic maps directly from images acquired by an RGB-D camera. To ensure efficient reconstruction of 3D objects with semantic and instance IDs, the input RGB images are processed by a real-time deep-learned object detector. To obtain accurate point cloud clusters, we adopt a Gaussian mixture model as an optimizer after the 2D-to-3D projection. Next, we present a data association strategy to update class probabilities across frames. Finally, a map integration strategy fuses information about the objects’ 3D shapes, locations, and instance IDs efficiently. We evaluate our system on different indoor scenes, including offices, bedrooms, and living rooms from the SceneNN dataset; the results show that our method not only builds instance-oriented semantic maps efficiently but also improves the accuracy of individual instances in the scene.
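One plausible reading of the GMM-optimizer step, sketched below: after back-projecting the detected 2D region to 3D, a two-component Gaussian mixture separates the instance from background clutter. The use of scikit-learn and the larger-component heuristic are assumptions for illustration.

```python
# Sketch: refine a back-projected point cloud by keeping the larger
# GMM component as the object instance. Choices are illustrative.
import numpy as np
from sklearn.mixture import GaussianMixture

def refine_instance_points(points_xyz):
    gmm = GaussianMixture(n_components=2, covariance_type="full",
                          random_state=0).fit(points_xyz)
    labels = gmm.predict(points_xyz)
    keep = np.argmax(np.bincount(labels))  # larger component = instance
    return points_xyz[labels == keep]
```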


2011 ◽  
Vol 308-310 ◽  
pp. 1619-1626
Author(s):  
Nan Yin ◽  
Xing Long Zhu ◽  
Xin Zhao ◽  
Shang Gao

When the cylindrical laser shines on the target object, a light spot is produced whose edge is a closed curve, denoted C1. The image of C1 on the CCD image plane is likewise a closed curve, denoted C2. A coordinate system is established to describe the positional relationships among the camera, the image, and the light source, and to analyze the principle by which monocular vision and a laser ring recover the depth of the object. The key to applying this principle is to derive an expression for the curve C2 on the CCD image plane. To compute this expression, C2 is first divided into two parts, an upper curve and a lower curve. Discrete points are sampled on each part, constraints are established, and the curve equations are fitted by least-squares polynomials. Then, to verify the practicality of the method, a virtual model scene is created, from which data describing the edge of the virtual CCD image, and the edge of the virtual spot produced when the virtual light source illuminates the virtual object, are obtained. Finally, the closed-curve equation is fitted to the virtual image edge data; the position of the space object is determined using the light source equation together with the closed-curve equation; and the calculated values are compared with the spot edge data to establish whether this method of locating space objects using monocular vision and a laser ring is feasible.
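A minimal sketch of the two-branch least-squares fit described above; the midline split rule and the polynomial degree are illustrative choices, not values from the paper.

```python
# Sketch: fit the closed image curve C2 as two least-squares
# polynomial branches (upper and lower), each of the form v = p(u).
import numpy as np

def fit_closed_curve(u, v, degree=4):
    mid = v.mean()                      # split at the vertical midline
    upper, lower = v >= mid, v < mid
    p_up = np.poly1d(np.polyfit(u[upper], v[upper], degree))
    p_lo = np.poly1d(np.polyfit(u[lower], v[lower], degree))
    return p_up, p_lo
```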

