360° real-time and power-efficient 3D DAMOT for autonomous driving applications

Author(s):  
Carlos Gómez-Huélamo ◽  
Javier Del Egido ◽  
Luis Miguel Bergasa ◽  
Rafael Barea ◽  
Elena López-Guillén ◽  
...  

Abstract Autonomous Driving (AD) promises an efficient, comfortable and safe driving experience. Nevertheless, fatalities involving vehicles equipped with Automated Driving Systems (ADSs) are on the rise, especially those related to the perception module of the vehicle. This paper presents a real-time and power-efficient 3D Multi-Object Detection and Tracking (DAMOT) method proposed for Intelligent Vehicles (IV) applications, allowing the vehicle to track objects in its 360° surroundings as a preliminary stage for trajectory forecasting, preventing collisions and anticipating future traffic scenarios for the ego-vehicle. First, we present our DAMOT pipeline based on Fast Encoders for object detection and a combination of a 3D Kalman Filter and the Hungarian Algorithm, used for state estimation and data association respectively. We extend our previous work, elaborating a preliminary version of sensor-fusion-based DAMOT that merges the features extracted by a Convolutional Neural Network (CNN) from camera information, used for long-term re-identification, with the obstacles retrieved by the 3D object detector. Both pipelines exploit lightweight Linux containers using the Docker approach to provide the system with isolation, flexibility and portability, and standard robotics communication via the Robot Operating System (ROS). Second, both pipelines are validated using the recently proposed KITTI-3DMOT evaluation tool, which demonstrates the full strength of 3D localization and tracking of a MOT system. Finally, the most efficient architecture is validated in several relevant traffic scenarios implemented in the CARLA (Car Learning to Act) open-source driving simulator and in our real-world autonomous electric car using the NVIDIA AGX Xavier, an AI embedded system for autonomous machines, studying its performance in a controlled but realistic urban environment with real-time execution.
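The data-association step named in the abstract — matching tracked objects (propagated by the Kalman filter) to new detections with the Hungarian Algorithm — can be sketched as follows. This is a minimal illustration of the general technique, not the authors' implementation; it uses a 2D IoU cost for brevity where the paper works with 3D boxes, and `scipy.optimize.linear_sum_assignment` as the Hungarian solver.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou_2d(a, b):
    """Axis-aligned IoU between two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def associate(tracks, detections, iou_threshold=0.3):
    """Match predicted track boxes to detections via the Hungarian Algorithm.

    Returns (matches, unmatched_detection_indices); unmatched detections
    would typically spawn new tracks.
    """
    if not tracks or not detections:
        return [], list(range(len(detections)))
    # Cost = 1 - IoU, so the assignment maximizes total overlap.
    cost = np.array([[1.0 - iou_2d(t, d) for d in detections] for t in tracks])
    rows, cols = linear_sum_assignment(cost)
    matches = [(r, c) for r, c in zip(rows, cols)
               if 1.0 - cost[r, c] >= iou_threshold]
    matched_dets = {c for _, c in matches}
    unmatched = [j for j in range(len(detections)) if j not in matched_dets]
    return matches, unmatched
```

In a full tracker, each matched detection updates its track's Kalman filter state, while unmatched detections initialize new tracks.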

2020 ◽  
Vol 20 (20) ◽  
pp. 11959-11966
Author(s):  
Jiachen Yang ◽  
Chenguang Wang ◽  
Huihui Wang ◽  
Qiang Li

Author(s):  
Yuan Shi ◽  
Wenhui Huang ◽  
Federico Cheli ◽  
Monica Bordegoni ◽  
Giandomenico Caruso

Abstract A burgeoning number of achievements have been obtained in the autonomous vehicle industry during the past decades. Various systems have been developed to make automated driving possible. Depending on the algorithm used, the performance of an autonomous vehicle system differs from one to another. However, very few studies have given insight into the influence of implementing different algorithms from a human factors point of view. Two systems based on two algorithms with different characteristics were used to generate two driving styles of the autonomous vehicle, which were implemented in a driving simulator to create the autonomous driving experience. Users' skin conductance (SC) data, which enable the evaluation of cognitive workload and mental stress, were recorded and analyzed. Subjective measures were applied by filling out the Swedish Occupational Fatigue Inventory (SOFI-20) to obtain a self-reported perspective on users' behavioral changes over the experiments. The results showed that users' states were affected by the driving styles of the different autonomous systems, especially during periods of speed variation. By analyzing users' self-assessment data, a correlation was observed between user "Sleepiness" and the driving style of the autonomous vehicle. These results are meaningful for the future development of autonomous vehicle systems, in terms of balancing vehicle performance and user experience.


Author(s):  
B. Ravi Kiran ◽  
Luis Roldão ◽  
Beñat Irastorza ◽  
Renzo Verastegui ◽  
Sebastian Süss ◽  
...  

2018 ◽  
Vol 8 (4) ◽  
pp. 35 ◽  
Author(s):  
Jörg Fickenscher ◽  
Sandra Schmidt ◽  
Frank Hannig ◽  
Mohamed Bouzouraa ◽  
Jürgen Teich

The sector of autonomous driving is gaining more and more importance for car makers. A key enabler of such systems is planning the path the vehicle should take, but finding a good one can be computationally very burdensome. New ECU architectures, such as GPUs, are required here, because standard processors struggle to provide enough computing power. In this work, we present a novel parallelization of a path planning algorithm. We show how many paths can reasonably be planned under real-time requirements and how they can be rated. As an evaluation platform, an Nvidia Jetson board equipped with a Tegra K1 SoC was used, whose GPU is also employed in the zFAS ECU of AUDI AG.
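The core of such a parallelization is that many candidate paths can be rated independently and simultaneously. The sketch below shows the idea with NumPy vectorization as a CPU stand-in for a GPU kernel; the cost terms and their weights (`W_OBSTACLE`, `W_CURVATURE`) are hypothetical illustrations, since the paper's actual rating criteria are not given in this summary.

```python
import numpy as np

# Hypothetical rating weights; the real criteria on the target ECU may differ.
W_OBSTACLE, W_CURVATURE = 1.0, 0.2

def rate_paths(paths, obstacles):
    """Rate many candidate paths at once and return the index of the best one.

    paths:     (P, N, 2) array of P candidate paths with N 2D waypoints each.
    obstacles: (M, 2) array of obstacle positions.
    """
    # Distance from every waypoint to the nearest obstacle: shape (P, N).
    d = np.linalg.norm(paths[:, :, None, :] - obstacles[None, None, :, :],
                       axis=-1)
    nearest = d.min(axis=2)
    obstacle_cost = np.exp(-nearest).sum(axis=1)   # penalize obstacle proximity
    # Second difference of the waypoints approximates turning sharpness.
    heading = np.diff(paths, axis=1)
    turn = np.diff(heading, axis=1)
    curvature_cost = np.linalg.norm(turn, axis=-1).sum(axis=1)
    total = W_OBSTACLE * obstacle_cost + W_CURVATURE * curvature_cost
    return int(np.argmin(total))
```

On a GPU, each path (or each waypoint) maps naturally to one thread, which is what makes embedded platforms like the Tegra K1 attractive for this workload.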


2021 ◽  
Vol 2065 (1) ◽  
pp. 012020
Author(s):  
Nver Ren ◽  
Rong Jiang ◽  
Dongze Zhang

Abstract A cloud computing platform for autonomous driving simulation, based on a B/S architecture and Docker container technology, has been established in this paper. The map editor module of the cloud platform lets users design 3D scenes for simulating and testing automated driving systems. Once a customized roadway scene for simulation is created, it is saved in the OpenDRIVE format both for the cloud platform's server and for CarMaker's TestRun, in which all parameters of the virtual environment (vehicle, road, tires, etc.) are sufficiently defined. Then, based on CarMaker's Application Online (APO) communication protocol, a local APO agent service was created. When the 27 vehicle dynamics parameters are received from the CarMaker server, they are sent to the cloud platform in real time over the UDP protocol; this data communication is handled by the APO agent. Through the work above, a co-simulation between the cloud platform and CarMaker, using a seventeen-degree-of-freedom vehicle model, was successfully established for autonomous driving. The co-simulation experiment shows that the real-time data sampling frequency is 70 Hz, achieving synchronous simulation between CarMaker and the cloud platform.
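The agent's forwarding role — receiving a sample of vehicle dynamics parameters and relaying it to the cloud platform over UDP — can be sketched as below. The actual APO message layout is proprietary and not described in the abstract; a flat array of 27 little-endian 32-bit floats is assumed purely for illustration, and the host/port are placeholders.

```python
import socket
import struct

NUM_PARAMS = 27  # vehicle dynamics parameters per sample, as stated in the paper


def make_agent(cloud_host="127.0.0.1", cloud_port=9000):
    """Minimal stand-in for the APO agent's cloud-facing side: packs one
    sample of vehicle dynamics parameters and forwards it over UDP."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

    def forward(params):
        assert len(params) == NUM_PARAMS
        # 27 little-endian float32 values -> 108-byte datagram (assumed layout).
        payload = struct.pack("<%df" % NUM_PARAMS, *params)
        sock.sendto(payload, (cloud_host, cloud_port))
        return len(payload)

    return forward
```

At the reported 70 Hz this is roughly 7.5 kB/s of telemetry, well within what a loopback or LAN UDP link handles comfortably.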


Electronics ◽  
2021 ◽  
Vol 10 (16) ◽  
pp. 1932
Author(s):  
Malik Haris ◽  
Adam Glowacz

Automated driving and vehicle safety systems need object detection. It is important that object detection be accurate overall, robust to weather and environmental conditions, and run in real time. Consequently, such systems require image processing algorithms to inspect the contents of images. This article compares the accuracy of five major image processing algorithms: Region-based Fully Convolutional Network (R-FCN), Mask Region-based Convolutional Neural Network (Mask R-CNN), Single Shot MultiBox Detector (SSD), RetinaNet, and You Only Look Once v4 (YOLOv4). In this comparative analysis, we used the large-scale Berkeley Deep Drive (BDD100K) dataset. Their strengths and limitations are analyzed based on parameters such as accuracy (with and without occlusion and truncation), computation time, and the precision-recall curve. The comparison given in this article is helpful for understanding the pros and cons of standard deep learning-based algorithms operating under real-time deployment restrictions. We conclude that YOLOv4 is the most accurate at detecting difficult road target objects under complex road scenarios and weather conditions in an identical testing environment.


Author(s):  
Anna Feldhütter ◽  
Alexander Feierle ◽  
Luis Kalb ◽  
Klaus Bengler

Vehicles with conditional automation will be introduced to the market in the next few years. However, the effect of fatigue, as one component of the driver state, on take-over performance still needs to be quantified. To examine this question, a valid, real-time-capable and preferably non-invasive method for assessing fatigue during automated driving is required. For this purpose, we developed an objective driver fatigue assessment system based on the data of a commercial remote eye-tracking system. The fatigue assessment system fuses various metrics based on eyelid opening and head movement. In a validation study with 12 participants in a driving simulator, the fatigue assessment system achieved a sensitivity of 90.0% and a specificity of 99.2%. This approach makes a fatigue-state-dependent study design possible and can also provide a basis for advancing existing fatigue assessment systems in automated vehicles.
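The abstract does not specify which eyelid-opening metrics are fused, but PERCLOS (percentage of eye closure over a time window) is the canonical example of this family and illustrates what a remote eye tracker makes possible. A minimal sketch, with the closure threshold chosen arbitrarily for illustration:

```python
def perclos(eyelid_opening, closed_threshold=0.2):
    """PERCLOS: fraction of samples in a window where the eye counts as closed.

    eyelid_opening:   samples normalized to [0, 1], 1.0 = fully open.
    closed_threshold: opening below which the eye counts as closed
                      (0.2 here is an illustrative value, not the paper's).
    """
    closed = sum(1 for x in eyelid_opening if x < closed_threshold)
    return closed / len(eyelid_opening)
```

A fusion system would combine such a value with blink duration, head nodding frequency and similar features before classifying the driver's fatigue state.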


Electronics ◽  
2018 ◽  
Vol 7 (11) ◽  
pp. 301 ◽  
Author(s):  
Alex Dominguez-Sanchez ◽  
Miguel Cazorla ◽  
Sergio Orts-Escolano

In recent years, we have seen strong growth in the number of applications that use deep learning-based object detectors. Advanced driver assistance systems (ADAS) are one of the areas where they have the most impact. This work presents a novel study evaluating a state-of-the-art technique for urban object detection and localization. In particular, we investigated the performance of the Faster R-CNN method in detecting and localizing urban objects in a variety of outdoor urban videos involving pedestrians, cars, bicycles and other objects moving in the scene (urban driving). We propose a new dataset that is used for benchmarking the accuracy of a real-time object detector (Faster R-CNN). Part of the data was collected using an HD camera mounted on a vehicle. Furthermore, some of the data are weakly annotated so they can be used for testing weakly supervised learning techniques. Urban object datasets already exist, but none of them includes all the essential urban objects. We carried out extensive experiments demonstrating the effectiveness of the baseline approach. Additionally, we propose an R-CNN-plus-tracking technique to accelerate real-time urban object detection.
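The detection-plus-tracking acceleration follows a common pattern: run the expensive detector only every few frames and let a cheap tracker propagate the boxes in between. The sketch below shows that general scheduling; the article's exact scheme is not described here, and `detector(frame) -> boxes` plus a tracker with `init`/`update` methods are assumed interfaces for illustration.

```python
def detect_and_track(frames, detector, tracker, detect_every=5):
    """Run the full detector every `detect_every` frames; in between, the
    tracker propagates the last detected boxes, which is much cheaper.

    Returns one list of boxes per input frame.
    """
    results = []
    for i, frame in enumerate(frames):
        if i % detect_every == 0:
            boxes = detector(frame)       # expensive CNN pass
            tracker.init(frame, boxes)    # (re)seed the tracker
        else:
            boxes = tracker.update(frame)  # cheap box propagation
        results.append(boxes)
    return results
```

With `detect_every=5`, the CNN cost drops to roughly one fifth, at the price of slightly stale boxes between detector passes.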

