A detailed approach to autonomous vehicle control through ROS and Pixhawk controllers

2021 ◽  
Author(s):  
Brian Quinn ◽  
Jordan Bates ◽  
Michael Parker ◽  
Sally Shoop

A Polaris MRZR military utility vehicle was used as a testing platform to develop a novel, low-cost yet feature-rich approach to adding remote operation and autonomous driving capability to a military vehicle. The main concept of operation adapts steering and throttle output from a low-cost, commercially available Pixhawk autopilot controller and translates the signal into the necessary inputs for the Robot Operating System (ROS) based drive-by-wire system integrated into the MRZR. With minimal modification, these enhancements could be applied to any vehicle with similar ROS integration. This paper details the methods and testing approach used to develop this autonomous driving capability.
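
A minimal sketch of the kind of signal-translation node such an approach implies, assuming the Pixhawk's steering and throttle outputs are exposed as RC PWM values via MAVROS and that the drive-by-wire stack accepts Ackermann-style commands; topic names, channel indices, and scaling factors are illustrative assumptions, not the authors' implementation.

```python
#!/usr/bin/env python
# Sketch (not the paper's code): translate Pixhawk RC PWM outputs, exposed via MAVROS,
# into drive-by-wire commands for a ROS vehicle stack. Topic names, channel indices,
# and limits below are assumptions to be adapted to the actual platform.
import rospy
from mavros_msgs.msg import RCOut                      # PWM outputs from the autopilot
from ackermann_msgs.msg import AckermannDriveStamped   # command accepted by many DBW stacks

PWM_MID, PWM_MAX = 1500, 2000
MAX_STEER_RAD = 0.5   # assumed steering limit of the platform
MAX_SPEED_MPS = 5.0   # assumed speed limit for testing

def pwm_to_unit(pwm):
    """Map a 1000-2000 us PWM value to the range [-1, 1]."""
    return max(-1.0, min(1.0, (pwm - PWM_MID) / float(PWM_MAX - PWM_MID)))

class PixhawkToDbw(object):
    def __init__(self):
        self.pub = rospy.Publisher("/vehicle/cmd", AckermannDriveStamped, queue_size=1)
        rospy.Subscriber("/mavros/rc/out", RCOut, self.on_rc_out)

    def on_rc_out(self, msg):
        if len(msg.channels) < 3:
            return
        steer_pwm, throttle_pwm = msg.channels[0], msg.channels[2]  # assumed channel map
        cmd = AckermannDriveStamped()
        cmd.header.stamp = rospy.Time.now()
        cmd.drive.steering_angle = pwm_to_unit(steer_pwm) * MAX_STEER_RAD
        cmd.drive.speed = max(0.0, pwm_to_unit(throttle_pwm)) * MAX_SPEED_MPS
        self.pub.publish(cmd)

if __name__ == "__main__":
    rospy.init_node("pixhawk_to_dbw")
    PixhawkToDbw()
    rospy.spin()
```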

Sensors ◽  
2019 ◽  
Vol 19 (6) ◽  
pp. 1389 ◽  
Author(s):  
Joon Rhee ◽  
Jiwon Seo

Curb detection and localization systems constitute an important aspect of environmental recognition systems of autonomous driving vehicles. This is because detecting curbs can provide information about the boundary of a road, which can be used by safety systems to prevent unexpected intrusions onto pedestrian walkways. Moreover, curb detection and localization systems enable the autonomous vehicle to recognize the surrounding environment and the lane in which the vehicle is driving. Most existing curb detection and localization systems use multichannel light detection and ranging (lidar) as a primary sensor. However, although lidar demonstrates high performance, it is too expensive to be used in commercial vehicles. In this paper, we use ultrasonic sensors to implement a practical, low-cost curb detection and localization system. To compensate for the lower performance of ultrasonic sensors compared with higher-cost sensors, we used multiple ultrasonic sensors and applied a series of novel processing algorithms that overcome the limitations of a single ultrasonic sensor and conventional algorithms. The proposed algorithms consisted of a ground reflection elimination filter, a measurement reliability calculation, and distance estimation algorithms corresponding to the reliability of the obtained measurements. The performance of the proposed processing algorithms was demonstrated by a field test under four representative curb scenarios. The availability of reliable distance estimates from the proposed methods with three ultrasonic sensors was significantly higher than that from the other methods, e.g., 92.08% vs. 66.34%, when the test vehicle passed a trapezoidal road shoulder. When four ultrasonic sensors were used, 96.04% availability and 13.50 cm accuracy (root mean square error) were achieved.
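
A schematic sketch of the general idea of fusing several ultrasonic readings with a reliability weighting; the thresholds, the reliability model, and the fusion rule here are simplified assumptions, not the paper's algorithms.

```python
# Schematic sketch of reliability-weighted fusion of multiple ultrasonic readings
# (not the paper's algorithm; thresholds and the reliability model are assumptions).
import numpy as np

GROUND_REFLECTION_MAX_M = 0.15   # readings below this are treated as ground clutter
SENSOR_RANGE_MAX_M = 4.0

def reliability(history, current):
    """Crude reliability score: readings consistent with their recent history score higher."""
    if not history:
        return 0.5
    spread = np.std(history[-5:]) + 1e-3
    return float(np.exp(-abs(current - np.mean(history[-5:])) / spread))

def estimate_curb_distance(readings, histories):
    """Fuse one reading per sensor into a single lateral curb distance estimate."""
    weights, values = [], []
    for r, h in zip(readings, histories):
        if not (GROUND_REFLECTION_MAX_M < r < SENSOR_RANGE_MAX_M):
            continue                    # drop ground reflections and out-of-range echoes
        w = reliability(h, r)
        if w > 0.2:                     # ignore clearly unreliable measurements
            weights.append(w)
            values.append(r)
    if not values:
        return None                     # no reliable estimate this cycle
    return float(np.average(values, weights=weights))

# Example: three sensors, one contaminated by a ground reflection.
print(estimate_curb_distance([0.82, 0.10, 0.79],
                             [[0.85, 0.83], [0.80, 0.81], [0.78, 0.80]]))
```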


2018 ◽  
Author(s):  
Yi Chen ◽  
Sagar Manglani ◽  
Roberto Merco ◽  
Drew Bolduc

In this paper, we discuss several of the major robot/vehicle platforms available and demonstrate the implementation of autonomous techniques on one such platform, the F1/10. The Robot Operating System (ROS) was chosen for its existing collection of software tools, libraries, and simulation environment. We build on the available information for the F1/10 vehicle and illustrate key tools that help achieve properly functioning hardware. We provide methods to build algorithms and give examples of deploying these algorithms to complete autonomous driving tasks and build 2D maps using SLAM. Finally, we discuss the results of our findings and how they can be improved.
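
As an illustration of the kind of driving task deployed on an F1/10-style car, here is a minimal right-wall-following ROS node; the topic names, gains, and geometry are assumptions for a sketch, not the authors' implementation.

```python
#!/usr/bin/env python
# Minimal wall-follow sketch for an F1/10-style car (illustrative only).
# Follows the right wall with a PD controller on the lateral distance error.
import math
import rospy
from sensor_msgs.msg import LaserScan
from ackermann_msgs.msg import AckermannDriveStamped

DESIRED_DIST_M = 0.8   # assumed target distance to the right wall
KP, KD = 1.0, 0.1      # assumed PD gains
SPEED_MPS = 1.5

class WallFollow(object):
    def __init__(self):
        self.prev_err = 0.0
        self.pub = rospy.Publisher("/drive", AckermannDriveStamped, queue_size=1)
        rospy.Subscriber("/scan", LaserScan, self.on_scan)

    def range_at(self, scan, angle_rad):
        """Return the range measured closest to the given bearing."""
        i = int((angle_rad - scan.angle_min) / scan.angle_increment)
        return scan.ranges[max(0, min(i, len(scan.ranges) - 1))]

    def on_scan(self, scan):
        # Two beams on the right side give the wall distance and orientation.
        a = self.range_at(scan, math.radians(-90))
        b = self.range_at(scan, math.radians(-45))
        theta = math.radians(45)
        alpha = math.atan2(b * math.cos(theta) - a, b * math.sin(theta))
        dist = a * math.cos(alpha)
        err = DESIRED_DIST_M - dist
        steer = KP * err + KD * (err - self.prev_err)   # positive = steer left, away from wall
        self.prev_err = err
        cmd = AckermannDriveStamped()
        cmd.drive.steering_angle = steer
        cmd.drive.speed = SPEED_MPS
        self.pub.publish(cmd)

if __name__ == "__main__":
    rospy.init_node("wall_follow")
    WallFollow()
    rospy.spin()
```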


2021 ◽  
Vol 10 (3) ◽  
pp. 42
Author(s):  
Mohammed Al-Nuaimi ◽  
Sapto Wibowo ◽  
Hongyang Qu ◽  
Jonathan Aitken ◽  
Sandor Veres

The evolution of driving technology has recently progressed from active safety features and ADAS systems to fully sensor-guided autonomous driving. Bringing such a vehicle to market requires not only simulation and testing but formal verification to account for all possible traffic scenarios. A new verification approach, which combines the use of two well-known model checkers, the model checker for multi-agent systems (MCMAS) and the probabilistic model checker PRISM, is presented for this purpose. The overall structure of our autonomous vehicle (AV) system consists of: (1) a perception system of sensors that feeds data into (2) a rational agent (RA) based on a belief–desire–intention (BDI) architecture, which uses a model of the environment connected to the RA for verification of decision-making, and (3) a feedback control system for following a self-planned path. MCMAS is used to check the consistency and stability of the BDI agent logic at design time. PRISM is used to provide the RA with the probability of success as it decides which action to take at run time. This allows the RA to select the movement with the highest probability of success from several generated alternatives. This framework was tested on a new AV software platform built using the Robot Operating System (ROS) and the virtual reality (VR) Gazebo simulator. The evaluation includes a parking lot scenario to test the feasibility of this approach in a realistic environment. A practical implementation of the AV system was also carried out on an experimental testbed.
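
A sketch of the run-time step described above: ask PRISM for the probability of success of each candidate manoeuvre and pick the best one. The model and property file names, constants, and output parsing are assumptions for illustration, not the authors' implementation; only the PRISM command-line invocation pattern and its "Result:" output line are standard.

```python
# Illustrative sketch: query PRISM for P=? of success for each candidate action
# and select the action with the highest probability (file names are placeholders).
import re
import subprocess

def prism_probability(model_file, property_file, constants):
    """Run the PRISM CLI and parse the numeric result of a P=? query."""
    const_arg = ",".join("{}={}".format(k, v) for k, v in constants.items())
    out = subprocess.run(
        ["prism", model_file, property_file, "-const", const_arg],
        capture_output=True, text=True, check=True).stdout
    match = re.search(r"Result:\s*([0-9.eE+-]+)", out)
    return float(match.group(1)) if match else 0.0

def choose_action(candidates):
    """candidates: list of (action_name, constants) pairs for the same MDP model."""
    scored = [(prism_probability("vehicle.prism", "success.pctl", consts), action)
              for action, consts in candidates]
    best_prob, best_action = max(scored)
    return best_action, best_prob

# Example: compare overtaking against staying in lane (hypothetical constants).
# print(choose_action([("overtake", {"action": 1}), ("keep_lane", {"action": 0})]))
```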


Sensors ◽  
2021 ◽  
Vol 21 (16) ◽  
pp. 5257
Author(s):  
Franc Dimc ◽  
Polona Pavlovčič-Prešeren ◽  
Matej Bažec

Robust autonomous driving, as long as it relies on satellite-based positioning, requires carrier-phase-based algorithms, among other data sources, to obtain precise and correct positions; this applies primarily to GNSS geodetic receivers, but increasingly also to mass-market devices. The experiment was conducted under line-of-sight conditions on a straight road during a period of no traffic. The receivers were positioned on the roof of a car travelling at low speed in the presence of a static jammer, while kinematic relative positioning was performed with respect to a static reference base receiver. Interference mitigation techniques in the GNSS receivers used, which were unknown to the authors, were compared using (a) the observed carrier-to-noise power spectral density ratio as an indication of each receiver's ability to improve signal quality, and (b) the post-processed position solutions based on RINEX-formatted data. The observed carrier-to-noise density generally exhibits the expected dependencies and allows comparison of the processing capabilities applied in the receivers, while conclusions from the comparison of output data are limited due to the non-synchronized clocks of the receivers. According to our current and previous results, none of the GNSS receivers used in the experiments employs a fully effective mitigation technique adapted to the chirp jammer.
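
A rough sketch of the kind of carrier-to-noise comparison described above, assuming RINEX observation files are loaded with the georinex package and the GPS L1 C/N0 observable is recorded under the RINEX 3 code "S1C"; the file names, time windows, and observation code are placeholders, not the experiment's actual data.

```python
# Sketch: compare mean C/N0 before and during the jamming interval from RINEX data.
# File names, timestamps, and the observation code are assumptions to adapt.
import georinex as gr
import pandas as pd

def mean_cn0(rinex_file, start, end, obs_code="S1C"):
    """Mean carrier-to-noise density over all satellites in a time window [dB-Hz]."""
    obs = gr.load(rinex_file)                        # xarray Dataset of observations
    cn0 = obs[obs_code].sel(time=slice(start, end))  # code depends on receiver/file
    return float(cn0.mean())

quiet = mean_cn0("receiverA.obs", pd.Timestamp("2021-05-10T09:00"),
                 pd.Timestamp("2021-05-10T09:05"))
jammed = mean_cn0("receiverA.obs", pd.Timestamp("2021-05-10T09:10"),
                  pd.Timestamp("2021-05-10T09:15"))
print("C/N0 drop under jamming: {:.1f} dB-Hz".format(quiet - jammed))
```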


2021 ◽  
Author(s):  
Ruben Leon ◽  
Alexis Tinoco ◽  
Daniela Cando ◽  
Manolo Paredes ◽  
Fernando Lara

Author(s):  
Hyun Choi ◽  
Wan-Chin Kim

Mechaless LiDAR technology, which has no mechanical drive parts, has been actively studied in order to increase the reliability of LiDAR devices at low cost and under demanding driving environments, so that LiDAR technology can be applied more widely to autonomous driving. The mechaless approaches studied most actively in recent years include 3D flash LiDAR, MEMS-mirror scanning, and optical phased arrays (OPA). However, these methods have not yet matured into a key technology for achieving autonomous driving, owing to low stability in the driving environment or a markedly shorter measurable distance and narrower FOV (field of view) compared with mechanical LiDAR. In this study, we investigated improving the FOV by combining a beam-deflecting liquid lens, which can achieve fine spatial resolution through continuous voltage regulation, with a fisheye lens. Based on the initial design results, it was shown that an FOV of 80° or more can be secured by using a relatively simple fisheye lens composed only of spherical lenses.
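
A back-of-the-envelope sketch of the FOV argument, not the paper's optical design: if the liquid lens steers the beam over a small half angle and the fisheye group acts as an angular expander with magnification M, the scanned FOV is roughly 2·M·θ_in. All numbers below are assumed for illustration.

```python
# Illustrative FOV arithmetic (simplified afocal-expander assumption, not the paper's design).
theta_in_deg = 5.0          # assumed steering half angle of the liquid lens
target_fov_deg = 80.0       # FOV reported as achievable in the study

required_magnification = (target_fov_deg / 2.0) / theta_in_deg
print("Angular magnification needed: {:.1f}x".format(required_magnification))

# Conversely, the FOV obtained for a given expander magnification:
for M in (4, 6, 8):
    print("M = {}: FOV ~ {:.0f} deg".format(M, 2 * M * theta_in_deg))
```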


Author(s):  
Hrishikesh Dey ◽  
Rithika Ranadive ◽  
Abhishek Chaudhari

A path planning algorithm integrated with a velocity-profile-generation-based navigation system is one of the most important aspects of an autonomous driving system. In this paper, a real-time path planning solution that obtains a feasible and collision-free trajectory is proposed for navigating an autonomous car on a virtual highway. This is achieved by designing the navigation algorithm to incorporate a path planner for finding the optimal path and a velocity planning algorithm for ensuring safe and comfortable motion along the obtained path. The navigation algorithm was validated on the Unity 3D Highway-Simulated Environment for practical driving while maintaining velocity and acceleration constraints. The autonomous vehicle drives at the maximum specified velocity until interrupted by vehicular traffic, at which point the path planner, based on the various constraints provided by the simulator via µWebSockets, decides either to decelerate the vehicle or to shift to a safer lane. Subsequently, spline-based trajectory generation for this path results in continuous and smooth trajectories. The velocity planner employs an analytical method based on a trapezoidal velocity profile to generate velocities for the vehicle traveling along the precomputed path. To provide smooth control, an S-shaped trapezoidal profile is considered, which uses a cubic spline to generate velocities for the ramp-up and ramp-down portions of the curve. The acceleration and velocity constraints, which are derived from road limitations and physical systems, are explicitly considered. Depending on these constraints and higher-module requirements (e.g., maintaining velocity or stopping), an appropriate segment of the velocity profile is deployed. The motion profiles for all the use cases are generated and verified graphically.
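
A minimal sketch of an S-shaped trapezoidal velocity profile of the kind described above, using a cubic blend (zero acceleration at both ends) for the ramp segments; the ramp durations and speed values are illustrative, not those used in the paper.

```python
# Sketch: trapezoidal velocity profile with cubic (S-shaped) ramp-up and ramp-down.
# Durations and limits below are assumptions for illustration.
import numpy as np

def cubic_ramp(v_start, v_end, duration, dt=0.02):
    """Cubic blend with zero acceleration at both ends (smoothstep 3s^2 - 2s^3)."""
    t = np.arange(0.0, duration + dt, dt)
    s = t / duration
    return v_start + (v_end - v_start) * (3 * s**2 - 2 * s**3)

def s_trapezoid(v0, v_cruise, v_goal, t_ramp_up, t_cruise, t_ramp_down, dt=0.02):
    """Ramp up, hold the cruise velocity, then ramp down to the goal velocity."""
    up = cubic_ramp(v0, v_cruise, t_ramp_up, dt)
    hold = np.full(int(t_cruise / dt), v_cruise)
    down = cubic_ramp(v_cruise, v_goal, t_ramp_down, dt)
    return np.concatenate([up, hold, down])

# Example: accelerate from rest to 20 m/s, cruise, then slow to 15 m/s for slower traffic.
profile = s_trapezoid(v0=0.0, v_cruise=20.0, v_goal=15.0,
                      t_ramp_up=8.0, t_cruise=6.0, t_ramp_down=4.0)
accel = np.diff(profile) / 0.02
print("peak acceleration: {:.2f} m/s^2".format(accel.max()))
```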


Sensors ◽  
2021 ◽  
Vol 21 (20) ◽  
pp. 6733
Author(s):  
Min-Joong Kim ◽  
Sung-Hun Yu ◽  
Tong-Hyun Kim ◽  
Joo-Uk Kim ◽  
Young-Min Kim

Today, a great deal of research on autonomous driving technology is being conducted, and various vehicles with autonomous driving functions, such as ACC (adaptive cruise control), are being released. An autonomous vehicle recognizes obstacles ahead by fusing data from various sensors, such as lidar, radar, and camera sensors. As the number of vehicles equipped with such autonomous driving functions increases, securing safety and reliability becomes a major issue. Recently, Mobileye proposed the RSS (responsibility-sensitive safety) model, a white-box mathematical model, to secure the safety of autonomous vehicles and clarify responsibility in the case of an accident. In this paper, a method of applying the RSS model to a variable focus function camera that can cover the recognition range of a lidar sensor and a radar sensor with a single camera sensor is considered. The variables of the RSS model suitable for the variable focus function camera were defined, their values were determined, and the safe distances for each velocity were derived by applying the determined values. In addition, after accounting for the time required to obtain the data and the time required to change the focal length of the camera, it was confirmed that the response time obtained using the derived safe distance was valid.
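
For reference, the published RSS minimum safe longitudinal following distance can be computed as below; the parameter values are illustrative assumptions, not the ones chosen in the paper for the variable focus function camera.

```python
# RSS minimum safe longitudinal distance (formula from the published RSS model);
# parameter values below are illustrative, not the paper's calibrated values.
def rss_safe_distance(v_rear, v_front, rho=0.5,
                      a_max_accel=3.0, b_min_brake=4.0, b_max_brake=8.0):
    """
    v_rear, v_front : speeds of the rear (ego) and front vehicles [m/s]
    rho             : response time of the rear vehicle [s]
    a_max_accel     : max acceleration of the rear vehicle during rho [m/s^2]
    b_min_brake     : min braking the rear vehicle is guaranteed to apply [m/s^2]
    b_max_brake     : max braking the front vehicle might apply [m/s^2]
    """
    v_after_rho = v_rear + rho * a_max_accel
    d = (v_rear * rho
         + 0.5 * a_max_accel * rho ** 2
         + v_after_rho ** 2 / (2.0 * b_min_brake)
         - v_front ** 2 / (2.0 * b_max_brake))
    return max(0.0, d)

# Safe distances for a few speeds (both vehicles travelling at the same speed):
for kph in (30, 60, 100):
    v = kph / 3.6
    print("{:>3} km/h -> {:5.1f} m".format(kph, rss_safe_distance(v, v)))
```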


2020 ◽  
Author(s):  
Yazhou Li ◽  
Yahong Rosa Zheng

This paper presents two methods, jtop (the GUI version of tegrastats) and Nsight Systems, for profiling NVIDIA Jetson embedded GPU devices on a model race car, which is a convenient platform for prototyping and field testing autonomous driving algorithms. The two profilers analyze the power consumption, CPU/GPU utilization, and run time of CUDA C threads on the Jetson TX2 in five different working modes. The performance differences among the five modes are demonstrated using three example programs: vector add in C and CUDA C, a simple ROS (Robot Operating System) package implementing a wall-follow algorithm in Python, and a complex ROS package implementing a particle filter algorithm for SLAM (Simultaneous Localization and Mapping). The results show that these tools are effective means of selecting the operating mode of embedded GPU devices.
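
A small sketch of the kind of profiling involved: stream tegrastats output and pull out RAM and GPU (GR3D) utilisation. The exact field names and formats vary with the Jetson model and L4T release, so the regexes below are assumptions to adapt.

```python
# Sketch: sample RAM and GPU utilisation from tegrastats (field formats vary by board).
import re
import subprocess

RAM_RE = re.compile(r"RAM (\d+)/(\d+)MB")
GPU_RE = re.compile(r"GR3D_FREQ (\d+)%")

def stream_tegrastats(samples=10, interval_ms=1000):
    proc = subprocess.Popen(["tegrastats", "--interval", str(interval_ms)],
                            stdout=subprocess.PIPE, text=True)
    try:
        for _ in range(samples):
            line = proc.stdout.readline()
            ram, gpu = RAM_RE.search(line), GPU_RE.search(line)
            if ram and gpu:
                used, total = map(int, ram.groups())
                print("RAM {:>5}/{} MB   GPU {:>3}%".format(used, total, gpu.group(1)))
    finally:
        proc.terminate()

if __name__ == "__main__":
    stream_tegrastats()
```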


2019 ◽  
Vol 8 (6) ◽  
pp. 288 ◽  
Author(s):  
Kelvin Wong ◽  
Ehsan Javanmardi ◽  
Mahdi Javanmardi ◽  
Shunsuke Kamijo

Accurately and precisely knowing the location of the vehicle is a critical requirement for safe and successful autonomous driving. Recent studies suggest that the error of map-based localization methods is tightly coupled with the surrounding environment. Considering this relationship, it is therefore possible to estimate localization error by quantifying the representation and layout of real-world phenomena. To date, existing work on estimating localization error has been limited to using self-collected 3D point cloud maps. This paper investigates the use of pre-existing 2D geographic information datasets as a proxy to estimate autonomous vehicle localization error. Seven map evaluation factors were defined for 2D geographic information in a vector format, and random forest regression was used to estimate localization error for five experiment paths in Shinjuku, Tokyo. In the best model, the results show that it is possible to estimate autonomous vehicle localization error with 69.8% of predictions within 2.5 cm and 87.4% within 5 cm.
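
A sketch of the regression step described above: predict localization error from the map evaluation factors with a random forest. The CSV file and column names are placeholders; the seven factors and the data come from the paper's own pipeline.

```python
# Sketch: random forest regression of localization error on map evaluation factors.
# File and column names below are hypothetical placeholders.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

data = pd.read_csv("map_factors_with_error.csv")      # hypothetical export of factors + error
feature_cols = [c for c in data.columns if c != "localization_error_m"]

X_train, X_test, y_train, y_test = train_test_split(
    data[feature_cols], data["localization_error_m"], test_size=0.2, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

pred = model.predict(X_test)
within_2_5cm = (abs(pred - y_test) <= 0.025).mean()
within_5cm = (abs(pred - y_test) <= 0.05).mean()
print("within 2.5 cm: {:.1%}, within 5 cm: {:.1%}".format(within_2_5cm, within_5cm))
```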

