SAGE: A Split-Architecture Methodology for Efficient End-to-End Autonomous Vehicle Control

2021 ◽  
Vol 20 (5s) ◽  
pp. 1-22
Author(s):  
Arnav Malawade ◽  
Mohanad Odema ◽  
Sebastien Lajeunesse-DeGroot ◽  
Mohammad Abdullah Al Faruque

Autonomous vehicles (AVs) are expected to revolutionize transportation and improve road safety significantly. However, these benefits do not come without cost; AVs require large Deep-Learning (DL) models and powerful hardware platforms to operate reliably in real time, consuming between several hundred watts and one kilowatt of power. This power consumption can dramatically reduce vehicles’ driving range and affect emissions. To address this problem, we propose SAGE: a methodology for selectively offloading the key energy-consuming modules of DL architectures to the cloud to optimize edge energy usage while meeting real-time latency constraints. Furthermore, we leverage Head Network Distillation (HND) to introduce efficient bottlenecks within the DL architecture in order to minimize the network overhead costs of offloading with almost no degradation in the model’s performance. We evaluate SAGE using an Nvidia Jetson TX2 and an industry-standard Nvidia Drive PX2 as the AV edge devices and demonstrate that our offloading strategy is practical for a wide range of DL models and internet connection bandwidths on 3G, 4G LTE, and WiFi technologies. Compared to edge-only computation, SAGE reduces energy consumption by an average of 36.13%, 47.07%, and 55.66% for an AV with one low-resolution camera, one high-resolution camera, and three high-resolution cameras, respectively. SAGE also reduces upload data size by up to 98.40% compared to direct camera offloading.
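
As a rough illustration of the split-point selection described in this abstract, the sketch below picks, among a set of hypothetical candidate splits (including edge-only execution), the one that minimizes edge energy while keeping end-to-end latency under a real-time deadline. The additive latency model, the candidate splits, and all numbers are illustrative assumptions, not SAGE's actual cost model.

```python
# Minimal split-selection sketch; all numbers and names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Split:
    name: str
    edge_energy_j: float      # energy spent on the edge device up to the split
    edge_latency_s: float     # edge compute time up to the split (head network)
    upload_bytes: float       # size of the bottleneck tensor sent to the cloud
    cloud_latency_s: float    # remaining (tail) compute time in the cloud

def best_split(splits, bandwidth_bps, deadline_s):
    """Return the feasible split with the lowest edge energy, or None."""
    feasible = []
    for s in splits:
        total_latency = (s.edge_latency_s
                         + 8 * s.upload_bytes / bandwidth_bps
                         + s.cloud_latency_s)
        if total_latency <= deadline_s:
            feasible.append((s.edge_energy_j, s.name, total_latency))
    return min(feasible) if feasible else None

# Example: a hypothetical HND bottleneck split vs. running everything on the edge.
candidates = [
    Split("edge_only",        edge_energy_j=4.0, edge_latency_s=0.080, upload_bytes=0,      cloud_latency_s=0.0),
    Split("bottleneck_split", edge_energy_j=1.2, edge_latency_s=0.020, upload_bytes=40_000, cloud_latency_s=0.015),
]
print(best_split(candidates, bandwidth_bps=10e6, deadline_s=0.1))
```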

2020 ◽  
Vol 2020 ◽  
pp. 1-10
Author(s):  
Derek Hungness ◽  
Raj Bridgelall

The adoption of connected and autonomous vehicles (CAVs) is in its infancy. Therefore, very little is known about their potential impacts on traffic. Meanwhile, researchers and market analysts predict a wide range of possibilities regarding their potential benefits and the timing of their deployments. Planners traditionally use various types of travel demand models to forecast future traffic conditions. However, such models do not yet integrate any expected impacts from CAV deployments. Consequently, many long-range transportation plans do not yet account for their eventual deployment. To address some of these uncertainties, this work modified an existing model for Madison, Wisconsin. To compare outcomes, the authors used identical parameter changes and simulation scenarios for a model of Gainesville, Florida. Both models show that with increasing levels of CAV deployment, both the vehicle miles traveled and the average congestion speed will increase. However, there are some important exceptions due to differences in the road network layout, geospatial features, sociodemographic factors, land use, and access to transit.


2021 ◽  
Vol 336 ◽  
pp. 07004
Author(s):  
Ruoyu Fang ◽  
Cheng Cai

Obstacle detection and target tracking are two major issues for intelligent autonomous vehicles. This paper proposes a new scheme to achieve real-time obstacle detection and target tracking based on computer vision. A ResNet-18 deep learning neural network is utilized for obstacle detection, and a Yolo-v3 deep learning neural network is employed for real-time target tracking. These two trained models can be deployed on an autonomous vehicle equipped with an NVIDIA Jetson Nano motherboard. The autonomous vehicle moves to avoid obstacles and follows tracked targets using its camera. During movement, the vehicle's steering and motion are adjusted according to a PID algorithm, which helps the proposed vehicle achieve stable and precise tracking.
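
A minimal sketch of the steering correction loop mentioned above is given below: a discrete PID controller driven by the tracked target's lateral offset in the camera frame. The gains, the error signal, and the output clamp are illustrative assumptions, not values from the paper.

```python
# Minimal discrete PID controller; gains and limits are illustrative assumptions.
class PID:
    def __init__(self, kp, ki, kd, out_limit):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.out_limit = out_limit
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error, dt):
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        out = self.kp * error + self.ki * self.integral + self.kd * derivative
        return max(-self.out_limit, min(self.out_limit, out))  # clamp the command

# Example: the target sits 0.2 (normalized image width) right of center,
# so the controller returns a positive steering correction.
steer = PID(kp=0.8, ki=0.05, kd=0.1, out_limit=1.0)
print(steer.update(error=0.2, dt=0.05))
```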


2014 ◽  
Vol 513-517 ◽  
pp. 2476-2479 ◽  
Author(s):  
Qiong Wu ◽  
Yao Tian Zhang ◽  
Jun Wang

The JPEG algorithm has become a mainstream international image compression standard because of its wide range of applications, ease of implementation, support for lossless compression, and other characteristics. This paper explains how to design a high-resolution JPEG image decoding system architecture that supports real-time display and offers good scalability. We chose the newly developed ZedBoard development board from Xilinx as the development platform and EDK (Embedded Development Kit) as the development environment. The design flow reads JPEG stream data stored in DDR and, after hardware decoding, stores the decoded data back in DDR. Finally, we use VDMA to transfer the stream for display on a monitor connected via the HDMI interface. In this system, we adopt a hierarchical AXI bus to interconnect the IP cores, hardware decoding to achieve high-resolution image decoding, and VDMA hardware data movement to achieve real-time display, with the software design based on the dual-core ARM Cortex-A9 processor.
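
The sketch below mirrors the described data flow in software (compressed stream in memory, decode, decoded frame into a frame buffer) as a rough PC-side analogue; the real design performs these stages with a hardware decoder and VDMA over an AXI bus, and the synthetic test image and 1080p frame-buffer size here are assumptions for illustration only.

```python
# Software analogue of the decode pipeline; sizes and test image are illustrative assumptions.
import numpy as np
from PIL import Image
from io import BytesIO

# Stage 1: a compressed JPEG stream sits in a memory buffer (DDR in the SoC design).
buf = BytesIO()
Image.new("RGB", (640, 480), "gray").save(buf, format="JPEG")
jpeg_stream = BytesIO(buf.getvalue())

# Stage 2: decode the stream (done by the hardware decoder IP in the SoC design).
decoded = np.asarray(Image.open(jpeg_stream).convert("RGB"))

# Stage 3: copy the decoded frame into a frame-buffer region
# (in the SoC design, VDMA then streams this buffer to the HDMI output).
framebuffer = np.zeros((1080, 1920, 3), dtype=np.uint8)
h, w = min(decoded.shape[0], 1080), min(decoded.shape[1], 1920)
framebuffer[:h, :w] = decoded[:h, :w]
```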


2011 ◽  
Vol 14 (4) ◽  
pp. 16-23
Author(s):  
Hung Hoa Nguyen ◽  
Huy Quang Nguyen ◽  
Vu Duc Anh Dinh

The 21st century is the era of Ubiquitous Computing, in which computing devices are present everywhere in our lives. To support this trend, many hardware platforms have been proposed for developing ubiquitous devices. Among them, T-Engine, an open, standardized development platform for embedded systems, is one of the most popular platforms. It is now compatible with embedded equipment across a wide range of fields. In Vietnam, T-Engine was introduced only four years ago. However, most ubiquitous applications using T-Engine are developed restrictively on the standard T-Engine hardware. One issue that arises is the need for a solution that expands the T-Engine hardware and uses it to control automatic systems, in order to satisfy different types of ubiquitous devices. This research proposes an approach for using T-Engine in ubiquitous devices that require additional hardware as well as complicated control mechanisms with real-time constraints. In this research, we propose a solution for expanding T-Engine through the extension bus. In addition, we consider the timing problems in bus transactions and the problems of real-time programming. A simple robot demonstration has also been designed and implemented to prove the feasibility of our model. This approach will open up a new direction for developing complicated ubiquitous devices using T-Engine in Vietnam.


2020 ◽  
Vol 10 (9) ◽  
pp. 3180 ◽  
Author(s):  
Dongfang Dang ◽  
Feng Gao ◽  
Qiuxia Hu

Vehicles are highly coupled, multi-degree-of-freedom nonlinear systems. Establishing an appropriate vehicle dynamical model is the basis of motion planning for autonomous vehicles. With the development of autonomous vehicles from L2 to L3 and beyond, the automatic driving system is required to make decisions and plans over a wide range of speeds and on bends with large curvature. In order to make precise and high-quality control maneuvers, it is important to account for the effects of dynamical coupling under these working conditions. In this paper, a new single-coupled dynamical model (SDM) is proposed to deal with the various dynamical coupling effects by identifying and simplifying the complicated ones. An autonomous vehicle motion planning problem is then formulated using nonlinear model predictive control (NMPC) theory with the SDM constraint (NMPC-SDM). We validated NMPC-SDM with hardware-in-the-loop (HIL) experiments, evaluating the improvement in control performance by comparing it with planners designed using the kinematic and single-track models. The comparative results show the superiority of the proposed motion planning algorithm in improving maneuverability and tracking performance.
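
To make the receding-horizon idea concrete, the sketch below rolls a vehicle model forward over a short horizon for each candidate control pair and keeps the lowest-cost one. A kinematic bicycle model stands in for the paper's SDM, and the horizon, weights, and candidate grid are illustrative assumptions rather than the NMPC-SDM formulation.

```python
# Receding-horizon planning sketch; model, weights, and grid are illustrative assumptions.
import math
import itertools

def step(state, accel, steer, dt=0.1, wheelbase=2.7):
    """One Euler step of a kinematic bicycle model (stand-in for the SDM)."""
    x, y, yaw, v = state
    x += v * math.cos(yaw) * dt
    y += v * math.sin(yaw) * dt
    yaw += v / wheelbase * math.tan(steer) * dt
    v += accel * dt
    return (x, y, yaw, v)

def plan(state, ref_path, horizon=10):
    """Pick the (accel, steer) pair whose rollout best tracks the reference."""
    best = None
    for accel, steer in itertools.product((-1.0, 0.0, 1.0), (-0.2, 0.0, 0.2)):
        s, cost = state, 0.0
        for k in range(horizon):
            s = step(s, accel, steer)
            rx, ry = ref_path[min(k, len(ref_path) - 1)]
            cost += (s[0] - rx) ** 2 + (s[1] - ry) ** 2 + 0.1 * steer ** 2
        if best is None or cost < best[0]:
            best = (cost, accel, steer)
    return best[1], best[2]

# Example: track a straight reference path at constant speed.
reference = [(i * 1.0, 0.0) for i in range(1, 11)]
print(plan(state=(0.0, 0.0, 0.0, 10.0), ref_path=reference))
```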


2021 ◽  
Vol 257 ◽  
pp. 02061
Author(s):  
Haoru Luo ◽  
Kechun Liu

For autonomous vehicles, autonomous positioning is a core technology in their development. A good positioning system not only helps them complete autonomous operations efficiently but also improves safety. At present, the Global Positioning System (GPS) is the mainstream positioning method, but in indoor, heavily sheltered, and similar environments, GPS signal loss leads to positioning failure. To solve this problem, this paper proposes a method of mapping before positioning and designs a high-precision real-time positioning system that combines multi-sensor fusion technology. The designed system was mounted on a Wuling sightseeing bus, and mapping and positioning tests were carried out on the Nanhu Campus of Wuhan University of Technology, the East Campus of the Mafangshan Campus, and an underground garage where GPS signals were lost. The test results show that the system can realize high-precision real-time positioning for the autonomous vehicle. Therefore, in-depth study and implementation of this system is of great significance to the promotion and application of the autonomous driving industry.


Author(s):  
C. K. Toth ◽  
Z. Koppanyi ◽  
M. G. Lenzano

The ongoing proliferation of remote sensing technologies in the consumer market has been rapidly reshaping the geospatial data acquisition world and, subsequently, the data processing and information dissemination processes. Smartphones have clearly established themselves as the primary crowdsourced data generators recently, and provide an incredible volume of remotely sensed data with fairly good georeferencing. Besides the potential to map the environment of smartphone users, they provide information to monitor the dynamic content of the object space. For example, real-time traffic monitoring is one of the best-known and most widely used real-time crowdsensed applications, where the smartphones in vehicles jointly contribute to an unprecedentedly accurate traffic flow estimation. We are now witnessing another milestone, as driverless vehicle technologies are becoming another major source of crowdsensed data. Due to safety concerns, the requirements for sensing are higher, as the vehicles should sense other vehicles and the road infrastructure under any condition, not just in daylight and favorable weather, and at very high speed. Furthermore, the sensing is based on redundant and complementary sensor streams to achieve a robust object space reconstruction, needed to avoid collisions and maintain normal travel patterns. At this point, the remotely sensed data in assisted and autonomous vehicles are discarded, or only partially recorded for R&D purposes. However, in the long run, as vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communication technologies mature, recording these data will become commonplace and will provide an excellent source of geospatial information for road mapping, traffic monitoring, etc. This paper reviews the key characteristics of crowdsourced vehicle data based on experimental data, and then the processing aspects, including the Data Science and Deep Learning components.


2021 ◽  
Vol 23 (06) ◽  
pp. 1288-1293
Author(s):  
Dr. S. Rajkumar ◽  
Aklilu Teklemariam ◽  
Addisalem Mekonnen ◽  
...  

Autonomous vehicles (AVs) reduce human intervention by perceiving the vehicle’s location with respect to the environment. In this regard, utilizing multiple sensors corresponding to various features of environment perception yields not only detection but also tracking and classification of objects, leading to high security and reliability. Therefore, we propose to deploy hybrid multi-sensors such as radar, LiDAR, and camera sensors. However, the data acquired with these hybrid sensors overlap across the wide viewing angles of the individual sensors, and hence a convolutional neural network and Kalman Filter (KF) based data fusion framework was implemented with the goal of facilitating a robust object detection system to avoid collisions on roads. The complete system, tested over 1000 road scenarios for real-time environment perception, showed that our hardware and software configurations outperformed numerous other conventional systems. Hence, this system could potentially find application in object detection, tracking, and classification in a real-time environment.
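
As a rough sketch of the Kalman-filter fusion step described above, the example below predicts a constant-velocity track forward and then sequentially updates it with position measurements from two sensors. The state layout, noise covariances, and two-sensor setup are illustrative assumptions, not the paper's configuration.

```python
# Constant-velocity Kalman filter fusing two position sensors; values are illustrative assumptions.
import numpy as np

dt = 0.1
F = np.array([[1, 0, dt, 0],        # state: [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],         # both sensors measure position only
              [0, 1, 0, 0]], dtype=float)
Q = np.eye(4) * 0.01                # process noise
R_radar = np.eye(2) * 0.5           # radar: noisier position estimate
R_camera = np.eye(2) * 0.1          # camera/LiDAR detector: tighter position estimate

def predict(x, P):
    return F @ x, F @ P @ F.T + Q

def update(x, P, z, R):
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

# One fusion cycle: predict, then fold in each sensor's detection of the same object.
x, P = np.zeros(4), np.eye(4)
x, P = predict(x, P)
x, P = update(x, P, np.array([5.1, 2.0]), R_radar)
x, P = update(x, P, np.array([4.9, 2.1]), R_camera)
print(x)
```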


Author(s):  
Shiyan Yang ◽  
Jonny Kuo ◽  
Michael G. Lenné

The safety concerns linked to semi-automated driving – more automation, less driver engagement – could be addressed by real-time driver monitoring with mitigation strategies. To this end, this paper analyzed an on-road dataset of sequential off-road glance behaviors under different levels of distraction in an autonomous vehicle trial named CANdrive. Several metrics based on sequential off-road glances were proposed and examined in terms of their capacity to measure the level of distraction. These findings are useful for the development of high-resolution driver state monitoring to improve safety in the collaboration between the human driver and the semi-autonomous vehicle.
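
The sketch below computes simple summary metrics from a sequence of off-road glance durations, in the spirit of the glance-based measures described above; the specific metrics and the 2 s threshold are illustrative assumptions, not necessarily the paper's definitions.

```python
# Summary metrics over sequential off-road glances; metric set is an illustrative assumption.
def glance_metrics(offroad_glances_s, window_s):
    """offroad_glances_s: durations (s) of sequential off-road glances within a window."""
    total = sum(offroad_glances_s)
    return {
        "num_glances": len(offroad_glances_s),
        "total_offroad_s": total,
        "mean_glance_s": total / len(offroad_glances_s) if offroad_glances_s else 0.0,
        "max_glance_s": max(offroad_glances_s, default=0.0),
        "long_glances_over_2s": sum(d > 2.0 for d in offroad_glances_s),
        "percent_eyes_offroad": 100.0 * total / window_s,
    }

# Example: glances logged during a 60 s segment of a distraction task.
print(glance_metrics([0.8, 1.6, 2.4, 0.5], window_s=60.0))
```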


Sensors ◽  
2019 ◽  
Vol 19 (20) ◽  
pp. 4357 ◽  
Author(s):  
Babak Shahian Jahromi ◽  
Theja Tulabandhula ◽  
Sabri Cetin

Many sensor fusion frameworks have been proposed in the literature using different combinations and configurations of sensors and fusion methods. More focus has been placed on improving accuracy; however, the feasibility of implementing these frameworks in an autonomous vehicle is less explored. Some fusion architectures can perform very well in lab conditions using powerful computational resources; however, in real-world applications, they cannot be implemented on an embedded edge computer due to their high cost and computational needs. We propose a new hybrid multi-sensor fusion pipeline configuration that performs environment perception for autonomous vehicles, such as road segmentation, obstacle detection, and tracking. This fusion framework uses a proposed encoder-decoder based Fully Convolutional Neural Network (FCNx) and a traditional Extended Kalman Filter (EKF) nonlinear state estimator, together with a configuration of camera, LiDAR, and radar sensors that is best suited to each fusion method. The goal of this hybrid framework is to provide a cost-effective, lightweight, modular, and robust (in case of a sensor failure) fusion system. It uses the FCNx algorithm, which improves road detection accuracy compared to benchmark models while maintaining the real-time efficiency needed to run on an autonomous vehicle's embedded computer. Tested on over 3K road scenes, our fusion algorithm shows better performance in various environmental scenarios compared to baseline benchmark networks. Moreover, the algorithm is implemented in a vehicle and tested using actual sensor data collected from the vehicle, performing real-time environment perception.
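
For illustration, the sketch below shows the general shape of a small encoder-decoder fully convolutional network for binary road segmentation; the layer sizes and depths are assumptions, and this is not the paper's FCNx architecture.

```python
# Tiny encoder-decoder FCN for road segmentation; architecture is an illustrative assumption.
import torch
import torch.nn as nn

class TinyRoadSegNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(          # downsample 4x while widening channels
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(          # upsample back to the input resolution
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),  # per-pixel road logit
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Example: one 128x256 camera frame in, one road-probability map out.
frame = torch.rand(1, 3, 128, 256)
road_logits = TinyRoadSegNet()(frame)
print(torch.sigmoid(road_logits).shape)   # torch.Size([1, 1, 128, 256])
```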

