A dynamic zone estimation method using cumulative voxels for autonomous driving

2017 ◽  
Vol 14 (1) ◽  
pp. 172988141668713 ◽  
Author(s):  
Seongjo Lee ◽  
Seoungjae Cho ◽  
Sungdae Sim ◽  
Kiho Kwak ◽  
Yong Woon Park ◽  
...  

Obstacle avoidance and available-road identification technologies have been investigated for the autonomous driving of unmanned vehicles. To apply research results to autonomous driving in real environments, it is necessary to consider moving objects. This article proposes a preprocessing method to identify the dynamic zones where moving objects exist around an unmanned vehicle. The method accumulates three-dimensional points from a light detection and ranging (LiDAR) sensor mounted on an unmanned vehicle in voxel space. Next, features are identified from the cumulative data at high speed, and zones with significant feature changes are estimated as zones where dynamic objects exist. The proposed approach can identify dynamic zones even for a moving vehicle, and it processes data quickly using several features based on the geometry, height map, and distribution of the three-dimensional data. Experiments evaluating the performance of the proposed approach were conducted using ground-truth data from a simulation and a real-environment data set.
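As a rough illustration of this kind of preprocessing, the sketch below (not the authors' implementation) accumulates LiDAR points into a voxel grid, computes simple per-voxel features (point count and mean height), and flags voxels whose features change sharply between scans as candidate dynamic zones. The voxel size and both thresholds are assumed values.

```python
# Illustrative sketch, not the paper's implementation: voxelize two scans,
# compute simple per-voxel features, and flag voxels with large feature
# changes as candidate dynamic zones. All parameters are assumptions.
import numpy as np

VOXEL_SIZE = 0.5           # meters per voxel edge (assumed)
COUNT_CHANGE_THRESH = 10   # assumed threshold on point-count change
HEIGHT_CHANGE_THRESH = 0.3 # assumed threshold on mean-height change (m)

def voxel_features(points):
    """Map each voxel index to (point count, mean z) for one scan."""
    idx = np.floor(points / VOXEL_SIZE).astype(int)
    feats = {}
    for key, z in zip(map(tuple, idx), points[:, 2]):
        cnt, zsum = feats.get(key, (0, 0.0))
        feats[key] = (cnt + 1, zsum + z)
    return {k: (c, s / c) for k, (c, s) in feats.items()}

def dynamic_zones(prev_scan, curr_scan):
    """Voxels whose features changed significantly between two scans."""
    prev, curr = voxel_features(prev_scan), voxel_features(curr_scan)
    zones = []
    for key in set(prev) | set(curr):
        c0, h0 = prev.get(key, (0, 0.0))
        c1, h1 = curr.get(key, (0, 0.0))
        if abs(c1 - c0) > COUNT_CHANGE_THRESH or abs(h1 - h0) > HEIGHT_CHANGE_THRESH:
            zones.append(key)
    return zones

# Toy usage: a static ground plane plus a "moving object" in the second scan.
rng = np.random.default_rng(0)
ground = rng.uniform([-10, -10, 0], [10, 10, 0.2], size=(2000, 3))
mover = rng.uniform([2, 2, 0], [3, 3, 1.5], size=(200, 3))
print(len(dynamic_zones(ground, np.vstack([ground, mover]))), "candidate voxels")
```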

Sensors ◽  
2021 ◽  
Vol 21 (10) ◽  
pp. 3410
Author(s):  
Claudia Malzer ◽  
Marcus Baum

High-resolution automotive radar sensors play an increasing role in detection, classification and tracking of moving objects in traffic scenes. Clustering is frequently used to group detection points in this context. However, this is a particularly challenging task due to variations in number and density of available data points across different scans. Modified versions of the density-based clustering method DBSCAN have mostly been used so far, while hierarchical approaches are rarely considered. In this article, we explore the applicability of HDBSCAN, a hierarchical DBSCAN variant, for clustering radar measurements. To improve results achieved by its unsupervised version, we propose the use of cluster-level constraints based on aggregated background information from cluster candidates. Further, we propose the application of a distance threshold to avoid selection of small clusters at low hierarchy levels. Based on exemplary traffic scenes from nuScenes, a publicly available autonomous driving data set, we test our constraint-based approach along with other methods, including label-based semi-supervised HDBSCAN. Our experiments demonstrate that cluster-level constraints help to adjust HDBSCAN to the given application context and can therefore achieve considerably better results than the unsupervised method. However, the approach requires carefully selected constraint criteria that can be difficult to choose in constantly changing environments.
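For a concrete starting point, the open-source hdbscan package exposes a distance threshold of the kind discussed above through its cluster_selection_epsilon parameter. The sketch below clusters synthetic radar-like points and then applies a simple post-hoc cluster-level constraint, a maximum spatial extent; that extent check is only an illustrative stand-in for constraints built from aggregated background information, and every parameter value is an assumption.

```python
# Minimal sketch using the open-source hdbscan package: cluster 2-D
# radar-like detections, use cluster_selection_epsilon as a distance
# threshold against tiny low-level clusters, and reject clusters that
# violate a simple cluster-level constraint (assumed maximum extent).
import numpy as np
import hdbscan

rng = np.random.default_rng(1)
# Two dense "objects" plus sparse clutter, mimicking one radar scan.
obj_a = rng.normal([0, 0], 0.3, size=(40, 2))
obj_b = rng.normal([8, 3], 0.4, size=(30, 2))
clutter = rng.uniform(-5, 12, size=(25, 2))
X = np.vstack([obj_a, obj_b, clutter])

clusterer = hdbscan.HDBSCAN(
    min_cluster_size=5,
    cluster_selection_epsilon=0.5,  # merge clusters closer than 0.5 m (assumed)
)
labels = clusterer.fit_predict(X)

MAX_EXTENT = 5.0  # assumed upper bound on a vehicle-sized cluster (m)
for lbl in set(labels) - {-1}:
    pts = X[labels == lbl]
    extent = pts.max(axis=0) - pts.min(axis=0)
    if np.any(extent > MAX_EXTENT):   # violates the cluster-level constraint
        labels[labels == lbl] = -1    # reassign its points to noise
print(sorted(set(labels)))
```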


2015 ◽  
Vol 27 (4) ◽  
pp. 430-443 ◽  
Author(s):  
Jun Chen ◽  
Qingyi Gu ◽  
Tadayoshi Aoyama ◽  
Takeshi Takaki ◽  
...  

[Figure: Blink-spot projection method]
We present a blink-spot projection method for observing moving three-dimensional (3D) scenes. The proposed method reduces the synchronization errors of sequential structured-light illumination, which are caused by multiple light patterns projected at different timings when fast-moving objects are observed. In our method, a series of spot array patterns, whose spot sizes change at different timings according to their identification (ID) numbers, is projected onto the scene to be measured by a high-speed projector. Based on simultaneous and robust frame-to-frame tracking of the projected spots using their ID numbers, the 3D shape of the scene can be obtained without misalignment, even when there are fast movements in the camera view. We implemented our method on a high-frame-rate projector-camera system that processes 512 × 512 pixel images in real time at 500 fps to track and recognize 16 × 16 spots in the images. Its effectiveness was demonstrated through several 3D shape measurements with the 3D module mounted on a fast-moving six-degrees-of-freedom manipulator.
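The frame-to-frame association step can be pictured as an assignment problem. The sketch below matches spot centers detected in consecutive frames by minimizing total displacement with the Hungarian algorithm; the paper additionally disambiguates spots via their blink-timing ID codes, which is omitted here, and all values are synthetic.

```python
# Illustrative sketch of frame-to-frame spot association: match spot
# centers in consecutive frames by minimizing total pixel displacement.
# The ID-code disambiguation used in the paper is omitted.
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

rng = np.random.default_rng(2)
prev_spots = rng.uniform(0, 512, size=(16, 2))          # spot centers, frame t
curr_spots = prev_spots + rng.normal(0, 2, (16, 2))     # small motion, frame t+1

cost = cdist(prev_spots, curr_spots)                    # pairwise pixel distances
row, col = linear_sum_assignment(cost)                  # optimal one-to-one match
print(dict(zip(row.tolist(), col.tolist())))
```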


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Ruihao Lin ◽  
Junzhe Xu ◽  
Jianhua Zhang

Purpose: Large-scale, precise three-dimensional (3D) maps play an important role in autonomous driving and robot positioning, but accurate poses for mapping are difficult to obtain: global positioning system (GPS) data are not always reliable, owing to multipath effects and poor satellite visibility in many urban environments, while LiDAR-based odometry accumulates errors. This paper aims to propose a novel simultaneous localization and mapping (SLAM) system that produces large-scale, precise 3D maps. Design/methodology/approach: The proposed SLAM system optimally integrates GPS data with a LiDAR odometry. Two core algorithms are developed. To verify the reliability of the GPS data effectively, the VGL (Verify GPS data with LiDAR data) algorithm is proposed, which uses the points from the LiDAR. To obtain accurate poses in GPS-denied areas, the paper proposes EG-LOAM, a LiDAR odometry with a local optimization strategy that eliminates accumulated errors by means of reliable GPS data. Findings: On the KITTI data set and a customized outdoor data set, the system generates high-precision 3D maps in both GPS-denied areas and areas covered by GPS. Meanwhile, the VGL algorithm is shown to verify the reliability of the GPS data with confidence, and EG-LOAM outperforms state-of-the-art baselines. Originality/value: A novel SLAM system is proposed to obtain large-scale, precise 3D maps. To improve the robustness of the system, the VGL algorithm and EG-LOAM are designed. The whole system and both algorithms perform satisfactorily in experiments.
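One way to picture the GPS-verification idea (a simplification, not the VGL algorithm itself) is to flag a GPS fix as unreliable whenever the displacement it implies disagrees with the displacement reported by LiDAR odometry over the same interval; the tolerance below is an assumed value.

```python
# Simplified illustration, not the paper's VGL algorithm: reject a GPS fix
# when its implied displacement disagrees with the LiDAR-odometry
# displacement over the same interval. The tolerance is an assumption.
import numpy as np

TOLERANCE = 0.5  # max allowed disagreement in meters (assumed)

def gps_fix_reliable(gps_prev, gps_curr, odom_delta):
    """Compare GPS displacement against LiDAR-odometry displacement."""
    gps_delta = np.asarray(gps_curr) - np.asarray(gps_prev)
    return np.linalg.norm(gps_delta - odom_delta) < TOLERANCE

# Toy usage: odometry says we moved ~1 m east; the second GPS fix jumped 3 m.
print(gps_fix_reliable([0, 0], [1.0, 0.1], np.array([1.0, 0.0])))   # True
print(gps_fix_reliable([0, 0], [3.0, 0.0], np.array([1.0, 0.0])))   # False
```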


1999 ◽  
Vol 122 (3) ◽  
pp. 493-501 ◽  
Author(s):  
Woong-Chul Choi ◽  
Yann G. Guezennec

The work described in this paper focuses on experiments to quantify the initial fuel mixing and gross fuel distribution in the cylinder during the intake stroke and their relationship to the large-scale convective flow field. The experiments were carried out in a water analog engine simulation rig and were, hence, limited to the intake stroke. The same engine head configuration was used for the three-dimensional PTV flow field and the PLIF fuel concentration measurements. High-speed CCD cameras were used to record the time evolution of the dye convection and mixing with 1/4 deg of crank angle resolution (and were also used for the three-dimensional PTV measurements). The captured sequences of images were digitally processed to correct for background light non-uniformity and other spurious effects. The results are finely resolved evolutions of the dye concentration maps in the center tumble plane. The three-dimensional PTV measurements show that the flow is characterized by a strong tumble, as well as pairs of cross-tumble, counter-rotating eddies. The results clearly show the advection of a fuel-rich zone along the wall opposite the intake valves and later along the piston crown. They also show that strong out-of-plane motions further contribute to the cross-stream mixing, resulting in a relatively uniform concentration at BDC, albeit slightly stratified by the lean fluid entering the cylinder later in the intake stroke. In addition to obtaining phase-averaged concentration maps at various crank angles throughout the intake stroke, the same data set is processed for a large number of cycles to extract spatial statistics of the cycle-to-cycle variability and spatial non-uniformity of the concentration maps. The combination of the three-dimensional PTV and PLIF measurements provides a very detailed understanding of the advective mixing properties of the intake-generated flow field. [S0742-4795(00)00103-4]
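The cycle statistics described above reduce to simple array operations. The numpy sketch below, with synthetic data, computes a phase-averaged concentration map and a map of cycle-to-cycle variability from a stack of per-cycle maps at a fixed crank angle.

```python
# Minimal numpy sketch of the statistics described above: given per-cycle
# concentration maps at a fixed crank angle, compute the phase-averaged
# map and a cycle-to-cycle variability map. Shapes and values are synthetic.
import numpy as np

rng = np.random.default_rng(3)
n_cycles, ny, nx = 100, 64, 64
# maps[c] is the dye-concentration field for cycle c at one crank angle.
maps = rng.normal(1.0, 0.1, size=(n_cycles, ny, nx))

phase_avg = maps.mean(axis=0)   # phase-averaged concentration map
cycle_var = maps.std(axis=0)    # cycle-to-cycle variability map
print(phase_avg.shape, float(cycle_var.mean()))
```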


2020 ◽  
Vol 10 (1) ◽  
pp. 385
Author(s):  
Yuanlong Deng ◽  
Xizhou Pan ◽  
Xiaopin Zhong

In the industry of polymer film products such as polarizers, measuring the three-dimensional (3D) contour of transparent microdefects, the most common defect type, can crucially affect what further treatment should be taken. In this paper, we propose an efficient method for estimating the 3D shape of defects based on regression, converting the problem of direct measurement into an estimation problem using two-dimensional imaging. The basic idea involves acquiring structured-light saturated imaging data on transparent microdefects; integrating confocal microscopy measurement data to create a labeled data set, on which dimensionality reduction is performed; using support vector regression on the resulting low-dimensional space of the small data set to establish the relationship between a saturated image and the defect's 3D attributes; and predicting the shape of new defect samples by applying the learned relationship to their saturated images. In the discriminant subspace, the manifold of saturated images clearly reflects the changing attributes of the defects' 3D shape, such as depth and width. The experimental results show that the mean relative error (MRE) of the defect depth is 3.64% and the MRE of the defect width is 1.96%. The estimation time on the MATLAB platform is less than 0.01 s. Compared with precision measuring instruments such as confocal microscopes, our estimation method greatly improves the efficiency of quality control and meets the accuracy requirement of automated defect identification. It is therefore suitable for complete inspection of products.
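The regression pipeline maps naturally onto standard tooling. The sketch below, with synthetic data and assumed hyperparameters, uses scikit-learn to reduce flattened saturated-image vectors to a low-dimensional subspace and fit a support vector regressor predicting a 3D attribute such as defect depth; the paper's actual dimensionality-reduction and labeling steps differ.

```python
# Sketch of a dimensionality-reduction + SVR pipeline of the kind the
# abstract describes, using scikit-learn. Data, dimensions, and
# hyperparameters are all assumptions, not the paper's values.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVR

rng = np.random.default_rng(4)
n_samples, n_pixels = 200, 32 * 32
images = rng.uniform(0, 1, size=(n_samples, n_pixels))  # flattened saturated images
depth = images.mean(axis=1) * 10.0                      # stand-in depth labels (um)

model = make_pipeline(PCA(n_components=10), SVR(kernel="rbf", C=10.0))
model.fit(images[:150], depth[:150])
pred = model.predict(images[150:])
print(float(np.mean(np.abs(pred - depth[150:]) / depth[150:])))  # mean relative error
```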


2020 ◽  
Author(s):  
Philipp Ulbrich ◽  
Alexander Gail

Ongoing goal-directed movements can be rapidly adjusted following new environmental information, e.g. when chasing prey or foraging. This makes movement trajectories in go-before-you-know decision-making a suitable behavioral readout of the ongoing decision process. Yet, existing methods of movement analysis are often based on statistically comparing two groups of trial-averaged trajectories and are not easily applied to three-dimensional data, preventing them from being applicable to natural free behavior. We developed and tested the cone method to estimate the point of overt commitment (POC) along a single two- or three-dimensional trajectory, i.e. the position where movement is adjusted towards a newly selected spatial target. In Experiment 1, we established a “ground truth” data set in which the cone method successfully identified the experimentally constrained POCs across a wide range of all but the shallowest adjustment angles. In Experiment 2, we demonstrate the power of the method in a typical decision-making task with expected decision time differences known from previous findings. The POCs identified by the cone method matched these expected effects. In both experiments, we compared the cone method's single-trial performance with a trial-averaging method and obtained comparable results. We discuss the advantages of the single-trajectory cone method over trial-averaging methods and possible applications beyond the examples presented in this study. The cone method provides a distinct addition to existing tools used to study decisions during ongoing movement behavior, which we consider particularly promising towards studies of non-repetitive free behavior.
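One plausible reading of the cone criterion (an illustration, not the authors' exact algorithm) is sketched below: the POC is the first trajectory sample from which every later sample stays inside a cone with its apex at that sample and its axis pointing at the new target. The half-angle and trajectory are assumptions.

```python
# Illustrative reading of a cone criterion, not the authors' algorithm:
# the POC is the first sample from which the rest of the trajectory stays
# inside a cone aimed at the new target. Parameters are assumptions.
import numpy as np

def point_of_overt_commitment(traj, target, half_angle_deg=15.0):
    """Return index of the first sample committing the trajectory to target."""
    target = np.asarray(target, dtype=float)
    for i in range(len(traj) - 1):
        axis = target - traj[i]
        axis /= np.linalg.norm(axis)
        later = traj[i + 1:] - traj[i]
        cosines = later @ axis / np.linalg.norm(later, axis=1)
        if np.all(cosines >= np.cos(np.radians(half_angle_deg))):
            return i
    return None

# Toy 2-D trajectory: heads toward (1, 0), then veers to the target (0, 1).
t = np.linspace(0, 1, 6)[1:, None]
traj = np.vstack([t * [1.0, 0.0], [1.0, 0.0] + t * [-1.0, 1.0]])
print(point_of_overt_commitment(traj, target=[0.0, 1.0]))
```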


2021 ◽  
Vol 18 (4) ◽  
pp. 172988142110383
Author(s):  
Ke Wang ◽  
Xuejing Li ◽  
Jianhua Yang ◽  
Jun Wu ◽  
Ruifeng Li

Human action segmentation and recognition from a continuous untrimmed sensor data stream is a challenging problem known as temporal action detection. This article presents a two-stream You Only Look Once-based network method that fuses video and skeleton streams captured by a Kinect sensor; our data encoding method turns temporal action detection into a one-dimensional object detection problem in a constantly augmented feature space. The proposed approach extracts spatial-temporal three-dimensional convolutional neural network features from the video stream and view-invariant features from the skeleton stream, respectively. These two streams are then encoded into three-dimensional feature spaces, represented as red, green, and blue images for subsequent network input. The two-stream You Only Look Once-based networks fuse video and skeleton information through a processing pipeline that provides two fusion strategies, box fusion or layer fusion. We test the temporal action detection performance of the two-stream You Only Look Once network on our data set, High-Speed Interplanetary Tug/Cocoon Vehicles-v1, which contains seven activities in a home environment, and achieve a particularly high mean average precision. We also test our model on the public data set PKU-MMD, which contains 51 activities, and our method also performs well there. To prove that our method can work efficiently on robots, we transplanted it to a robotic platform and conducted an online fall-detection experiment.
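The stream-to-image encoding can be illustrated in a few lines. The sketch below maps a synthetic skeleton sequence (time × joints × xyz) to an RGB image whose red, green, and blue channels carry the x, y, and z coordinates; the joint count and normalization are assumptions rather than the paper's exact encoding.

```python
# Illustrative encoding of a skeleton stream as an RGB image, in the
# general spirit described above (not the paper's exact scheme): x, y, z
# coordinates fill the R, G, B channels. Shapes and scaling are assumed.
import numpy as np

rng = np.random.default_rng(5)
T, J = 120, 25                       # frames, joints (assumed Kinect-like)
skeleton = rng.uniform(-1, 1, size=(T, J, 3))

# Normalize each coordinate axis to [0, 255] and stack as channels.
lo = skeleton.min(axis=(0, 1), keepdims=True)
hi = skeleton.max(axis=(0, 1), keepdims=True)
image = ((skeleton - lo) / (hi - lo) * 255).astype(np.uint8)  # shape (T, J, 3)
print(image.shape, image.dtype)      # one RGB "action image" per sequence
```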


Author(s):  
Shaw-Pin Miaou ◽  
Roger P. Bligh ◽  
Dominique Lord

Guidelines for the installation of median barriers presented in the AASHTO Roadside Design Guide have remained essentially unchanged for more than 30 years. In recent years, the need for improved guidance has prompted several states to reevaluate their guidelines and has also precipitated a nationwide research project administered by the Transportation Research Board. The objective of the study, on which this paper is based, was to develop improved guidelines for the use of median barriers on new and existing high-speed, multilane, divided highways in Texas. The purpose here is to present some modeling and benefit–cost analysis results from that study, with a focus on the results from a particular data set developed under a cross-sectional with–without study design. The highways of interest are those classified as Interstates, freeways, and expressways with four or more lanes and posted speed limits of 55 mph (88 km/h) or higher. The models employed to estimate median-related crash frequencies and severities, including the Poisson-gamma and ordered multinomial logit models as well as modeling results from a full Bayes estimation method, are presented. From the modeling results, a preliminary benefit–cost analysis is described, in conjunction with some sensitivity analyses, for developing the guidelines for concrete and high-tension-cable barriers. A discussion of the limitations of this study and potential future extensions is provided.
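As a minimal illustration of the Poisson-gamma modeling approach mentioned above, the sketch below fits a negative binomial crash-frequency model with statsmodels on synthetic data; the covariate, exposure, and fixed dispersion parameter are all assumptions, and a full analysis would estimate dispersion and include many more variables.

```python
# Sketch of a Poisson-gamma (negative binomial) crash-frequency model of
# the kind used in such studies, fitted with statsmodels. All data are
# synthetic; alpha is fixed here rather than estimated as in a full study.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 500
median_width = rng.uniform(5, 30, size=n)        # m, synthetic covariate
aadt = rng.uniform(2e4, 1e5, size=n)             # vehicles/day, synthetic exposure
mu = np.exp(-8 - 0.05 * median_width + np.log(aadt))
crashes = rng.poisson(rng.gamma(2.0, mu / 2.0))  # Poisson-gamma mixture

X = sm.add_constant(median_width)
model = sm.GLM(crashes, X, family=sm.families.NegativeBinomial(alpha=0.5),
               offset=np.log(aadt))               # log exposure as offset
result = model.fit()
print(result.params)   # intercept and median-width effect on crash frequency
```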


2021 ◽  
Vol 11 (6) ◽  
pp. 2522
Author(s):  
Xiuzhang Huang ◽  
Yiping Cao ◽  
Chaozhi Yang ◽  
Yujiao Zhang ◽  
Jie Gao

A single-shot three-dimensional measuring method based on quadrature phase-shifting color composite grating projection is proposed. First, three quadrature phase-shifting sinusoidal gratings are encoded in the red (R), green (G), and blue (B) channels, respectively, to compose a single-frame color composite grating. This color composite grating is projected obliquely onto the object by a DLP projector. A color camera placed at a specific location then captures the corresponding color deformed pattern and sends it to a PC. By color separation, the color deformed pattern is demodulated into the corresponding three monochromatic deformed patterns with quadrature phase shifts. Because of sensitivity differences and color crosstalk among the three color channels, we propose a gray-imbalance correction method based on a consistency approximation of the DC component. With the established physical model for 3D reconstruction, the 3D shape can be measured. Experimental results on static and moving objects demonstrate the proposed method's feasibility and practicality. Owing to its single-shot nature, the proposed method has good application prospects in real-time and high-speed 3D measurement.
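For the demodulation step, a three-step phase-shifting formula with quadrature (π/2) shifts recovers the wrapped phase once the three channels are separated. The numpy sketch below uses synthetic fringe parameters and omits the gray-imbalance correction and phase unwrapping.

```python
# Minimal numpy sketch of three-step phase shifting with quadrature (pi/2)
# shifts: from patterns at phases 0, pi/2, and pi, recover the wrapped
# phase. Fringe parameters are synthetic; unwrapping is omitted.
import numpy as np

h, w = 64, 64
x = np.linspace(0, 4 * np.pi, w)
phi_true = 0.8 * np.sin(np.linspace(0, np.pi, h))[:, None] + x  # synthetic phase
A, B = 0.5, 0.4                                                 # background, modulation

I1 = A + B * np.cos(phi_true)               # red channel, shift 0
I2 = A + B * np.cos(phi_true + np.pi / 2)   # green channel, shift pi/2
I3 = A + B * np.cos(phi_true + np.pi)       # blue channel, shift pi

# For these shifts: (I1 - I3)/2 = B cos(phi), (I1 + I3)/2 - I2 = B sin(phi).
phi_wrapped = np.arctan2((I1 + I3) / 2 - I2, (I1 - I3) / 2)
print(float(np.abs(np.angle(np.exp(1j * (phi_wrapped - phi_true)))).max()))
```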

