average angular error
Recently Published Documents

TOTAL DOCUMENTS: 6 (five years: 3)
H-INDEX: 1 (five years: 0)

Energies, 2021, Vol. 14 (16), pp. 4885
Author(s): Paolo Visconti, Francesco Iaia, Roberto De Fazio, Nicola Ivan Giannoccaro

There are many car tests regulated by European and international standards and carried out on tracks to assess vehicle performance. The test preparation phase usually consists of placing road cones on the track in a specific configuration defined by the considered standard; this phase is performed by human operators using imprecise and slow methods, mainly due to the large distances involved. In this paper, a new geolocation stake-out system based on GNSS RTK technology was realized and tested, supported by a Matlab-based software application that allows the user to quickly and precisely locate the on-track points on which to position the road cones. The realized stake-out system, innovative and very simple to use, produced negligible average errors on the distances between staked-out points prescribed by the reference standards (2.4–2.9 cm, i.e., a distance percentage error of 0.29–0.47%). The measured average angular error was likewise very low, in the range 0.04–0.18°. Finally, the ISO 3888-1 and ISO 3888-2 test configurations were reproduced on the proving ground of the Porsche Technical Center by using the realized stake-out system, in order to perform double lane-change maneuvers on car prototypes.
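For readers who want to see how such error figures are obtained, below is a minimal sketch of checking a staked-out cone pair against a nominal layout. It is plain Python rather than the authors' Matlab application, and it assumes the RTK fixes have already been projected to a local East/North frame; all names and values are illustrative.

```python
import math

def distance_and_bearing(p1, p2):
    """Planar distance (m) and bearing (deg, clockwise from North)
    between two points given in local East/North coordinates."""
    de, dn = p2[0] - p1[0], p2[1] - p1[1]
    dist = math.hypot(de, dn)
    bearing = math.degrees(math.atan2(de, dn)) % 360.0
    return dist, bearing

def stakeout_errors(p1, p2, nominal_dist, nominal_bearing):
    """Distance error (m) and angular error (deg) of a staked-out
    cone pair against the nominal layout from the standard."""
    d, b = distance_and_bearing(p1, p2)
    dist_err = abs(d - nominal_dist)
    # Wrap the bearing difference into [-180, 180] before taking abs()
    ang_err = abs((b - nominal_bearing + 180.0) % 360.0 - 180.0)
    return dist_err, ang_err

# Hypothetical pair of cones, nominally 13.5 m apart on a 90 deg bearing
print(stakeout_errors((0.0, 0.0), (13.482, 0.041), 13.5, 90.0))
# -> (~0.018 m, ~0.17 deg), the same order as the errors reported above
```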



Sensors, 2021, Vol. 21 (14), pp. 4769
Author(s): Cristina Palmero, Abhishek Sharma, Karsten Behrendt, Kapil Krishnakumar, Oleg V. Komogortsev, ...

This paper summarizes the OpenEDS 2020 Challenge dataset, the proposed baselines, and the results obtained by the top three winners of each competition: (1) the Gaze Prediction Challenge, with the goal of predicting the gaze vector 1 to 5 frames into the future based on a sequence of previous eye images, and (2) the Sparse Temporal Semantic Segmentation Challenge, with the goal of using temporal information to propagate semantic eye labels to contiguous eye-image frames. Both competitions were based on the OpenEDS2020 dataset, a novel dataset of eye-image sequences captured at a frame rate of 100 Hz under controlled illumination, using a virtual-reality head-mounted display with two synchronized eye-facing cameras. The dataset, which we make publicly available for the research community, consists of 87 subjects performing several gaze-elicited tasks, and is divided into two subsets, one for each competition task. The proposed baselines, based on deep learning approaches, obtained an average angular error of 5.37 degrees for gaze prediction and a mean intersection-over-union (mIoU) score of 84.1% for semantic segmentation. The winning solutions outperformed the baselines, obtaining an angular error as low as 3.17 degrees for the former task and up to 95.2% mIoU for the latter.
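The average angular error reported for the gaze-prediction track is the standard angle between predicted and ground-truth 3D gaze vectors. A minimal sketch of that metric (not the challenge's official evaluation code) could look like this:

```python
import numpy as np

def angular_error_deg(pred, gt):
    """Angle (deg) between predicted and ground-truth 3D gaze vectors."""
    pred = pred / np.linalg.norm(pred, axis=-1, keepdims=True)
    gt = gt / np.linalg.norm(gt, axis=-1, keepdims=True)
    cos = np.clip(np.sum(pred * gt, axis=-1), -1.0, 1.0)
    return np.degrees(np.arccos(cos))

# Hypothetical batch: gaze vectors for 5 predicted future frames
rng = np.random.default_rng(0)
pred, gt = rng.standard_normal((5, 3)), rng.standard_normal((5, 3))
print(angular_error_deg(pred, gt).mean())  # average angular error in deg
```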



Sensors, 2021, Vol. 21 (10), pp. 3507
Author(s): Hrvoje Kalinić, Zvonimir Bilokapić, Frano Matić

Experiments on the wind data provided by the European Centre for Medium-Range Weather Forecasts show that 1% of the data is sufficient to reconstruct the other 99% with an average amplitude error of less than 0.5 m/s and an average angular error of less than 5 degrees. In a nutshell, our method uses a portion of the data as a proxy, estimating the measurements over the entire domain from only a few measured locations. In our study, we compare several machine learning techniques, namely linear regression, K-nearest neighbours, decision trees and a neural network, and investigate the impact of sensor placement on the quality of the reconstruction. While the methods provide comparable results, sensor placement plays an important role. Thus, we propose selecting sensor locations intelligently using k-means clustering, and show that this indeed increases accuracy compared to random sensor placement.
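As a minimal, self-contained sketch of the described pipeline (k-means over the grid points' time series to pick sensor locations, then linear regression from the sensor readings to the full field), the following uses synthetic stand-in data rather than the ECMWF fields; all shapes and names are illustrative:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

# Stand-in for the wind field: 1000 snapshots over 500 grid points.
rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 500))

n_sensors = 5  # ~1% of the grid points
# Intelligent placement: cluster grid points by their time series and
# take the point nearest each cluster centroid as a sensor location.
km = KMeans(n_clusters=n_sensors, n_init=10, random_state=0).fit(X.T)
sensors = [int(np.argmin(np.linalg.norm(X.T - c, axis=1)))
           for c in km.cluster_centers_]

# Reconstruct the full field from the sensor readings alone.
train, test = X[:800], X[800:]
model = LinearRegression().fit(train[:, sensors], train)
recon = model.predict(test[:, sensors])
print("RMSE:", np.sqrt(np.mean((recon - test) ** 2)))
```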



Sensors, 2020, Vol. 21 (1), pp. 105
Author(s): Xin He, Takafumi Matsumaru

This paper introduces a system that can estimate the deformation process of a deformed flat object (a folded plane) and generate the input data for a robot with human-like dexterous hands and fingers to reproduce the same deformation on another similar object. The system processes RGB and depth data with three core techniques: a weighted graph clustering method for non-rigid point matching and clustering; a refined region-growing method for plane detection on depth data, based on a custom-defined offset error; and a novel sliding-checking model to determine the bending line and adjacency relationship between each pair of planes. Evaluation experiments show the improvement of the core techniques over conventional approaches. Applying our approach to differently deformed papers, the entire system is confirmed to achieve an average angular error of around 1.59 degrees, similar to the smallest angular discrimination of the human eye. As a result, for deformation of a flat object caused by folding, if our system can obtain at least one feature-point cluster on each plane, it can recover the spatial information of each bending line and each plane with acceptable accuracy. The subject of this paper is a folded plane, but we plan to extend the system to robotic reproduction of general object deformation.
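The reported angular error ultimately comes down to comparing plane orientations across a fold. Below is a minimal sketch of measuring the angle between two detected planes from their fitted normals; it is not the authors' region-growing or sliding-checking code, and the point patches are synthetic:

```python
import numpy as np

def fit_plane_normal(points):
    """Least-squares plane normal of an (N, 3) point cluster via SVD."""
    centered = points - points.mean(axis=0)
    # Right singular vector with the smallest singular value = normal.
    return np.linalg.svd(centered)[2][-1]

def plane_angle_deg(n1, n2):
    """Angle (deg) between two planes, from their unit normals."""
    cos = np.clip(abs(np.dot(n1, n2)), 0.0, 1.0)
    return np.degrees(np.arccos(cos))

# Two synthetic patches meeting at a right-angle fold
rng = np.random.default_rng(0)
a = np.c_[rng.random((100, 2)), np.zeros(100)]              # z = 0 plane
b = np.c_[rng.random(100), np.zeros(100), rng.random(100)]  # y = 0 plane
print(plane_angle_deg(fit_plane_normal(a), fit_plane_normal(b)))  # ~90.0
```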



Sensors, 2020, Vol. 20 (23), pp. 6943
Author(s): Tao Xie, Ke Wang, Ruifeng Li, Xinyue Tang

A traditional CNN for 6D robot relocalization outputs pose estimates but gives no indication of whether the model is making sensible predictions or just guessing at random. We found that convnet representations trained on classification problems generalize well to other tasks. Thus, we propose a multi-task CNN for robot relocalization that simultaneously performs pose regression and scene recognition. Scene recognition determines whether the input image belongs to the scene in which the robot is currently located, not only reducing the relocalization error but also indicating with what confidence we can trust the prediction. Meanwhile, we found that pose precision drops when there is a large visual difference between testing and training images. Based on this, we present the dual-level image-similarity strategy (DLISS), which consists of an initial level and an iteration level. The initial level performs feature-vector clustering on the training set and feature-vector acquisition from testing images. The iteration level, namely a PSO-based image-block selection algorithm, selects the testing images most similar to the training images based on the initial level, enabling higher pose accuracy on the testing set. Our method considers both the accuracy and the robustness of relocalization, and it can operate indoors and outdoors in real time, taking at most 27 ms per frame. Finally, we used the Microsoft 7Scenes dataset and the Cambridge Landmarks dataset to evaluate our method. It obtains approximately 0.33 m and 7.51° accuracy on the 7Scenes dataset, and approximately 1.44 m and 4.83° accuracy on the Cambridge Landmarks dataset. Compared with PoseNet, our CNN reduced the average positional error by 25% and the average angular error by 27.79% on the 7Scenes dataset, and reduced the average positional error by 40% and the average angular error by 28.55% on the Cambridge Landmarks dataset. We show that our multi-task CNN can localize from high-level features and is robust to images that are not in the current scene. Furthermore, our multi-task CNN achieves higher relocalization accuracy when using testing images obtained by DLISS.
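The positional and angular errors quoted above are the usual PoseNet-style relocalization metrics. A minimal sketch of computing them, assuming orientations are given as unit quaternions in (w, x, y, z) order (not the authors' evaluation code; the values below are illustrative), could be:

```python
import numpy as np

def positional_error_m(t_pred, t_gt):
    """Euclidean distance (m) between predicted and true positions."""
    return float(np.linalg.norm(t_pred - t_gt))

def angular_error_deg(q_pred, q_gt):
    """Rotation angle (deg) between two unit quaternions (w, x, y, z)."""
    q_pred = q_pred / np.linalg.norm(q_pred)
    q_gt = q_gt / np.linalg.norm(q_gt)
    # abs() handles the double cover: q and -q are the same rotation.
    d = np.clip(abs(np.dot(q_pred, q_gt)), 0.0, 1.0)
    return float(np.degrees(2.0 * np.arccos(d)))

t_pred, t_gt = np.array([1.2, 0.1, 3.0]), np.array([1.0, 0.0, 3.1])
q_pred, q_gt = np.array([0.99, 0.0, 0.14, 0.0]), np.array([1.0, 0.0, 0.0, 0.0])
print(positional_error_m(t_pred, t_gt), angular_error_deg(q_pred, q_gt))
```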


