A Travel-Efficient Driving Assistance Scheme in VANETs by Providing Recommended Speed

Author(s):  
Chunxiao LI ◽  
Weijia CHEN ◽  
Dawei HE ◽  
Xuelong HU ◽  
Shigeru SHIMAMOTO

Author(s):  
Manolo Dulva Hina ◽  
Hongyu Guan ◽  
Assia Soukane ◽  
Amar Ramdane-Cherif

An advanced driving assistance system (ADAS) is an electronic system that helps the driver navigate roads safely. A typical ADAS, however, is suited to specific brands of vehicle and, due to proprietary restrictions, has non-extendable features. Project CASA is an alternative, low-cost, generic ADAS. It is an app deployable on a smartphone or tablet. The real-time data the app needs to make sense of its environment are stored in the vehicle or on the cloud, and are accessible as web services. They are used to determine the current driving context and, if needed, to decide on actions that prevent an accident or keep road navigation safe. Project CASA is an undertaking of a consortium of industrial and academic partners. A use case scenario is tested in the laboratory (virtual) and on the road (actual) to validate the appropriateness of CASA. It is a contribution to safe driving. CASA’s contribution also lies in its approach to the semantic modeling of the context of the environment, the vehicle, and the driver, and in the modeling of rules for the fusion of data and the fission process yielding an action to be implemented. In addition, CASA proposes a secure means of transmitting data using light, via light fidelity (LiFi), itself an alternative means of wireless vehicle–smartphone communication.
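The fusion-to-action step described above can be illustrated with a minimal rule-based sketch. Note that the context fields, rule conditions, and action names below are hypothetical stand-ins; the actual CASA ontologies, rules, and web services are not described in the abstract:

```python
# Hypothetical sketch of rule-based context fusion yielding one action.
# The field names and rules are illustrative, not taken from Project CASA.
def decide_action(context):
    """Fuse driver, vehicle and environment context into a single action."""
    rules = [
        # (condition, action) pairs, evaluated in priority order
        (lambda c: c["driver"] == "drowsy" and c["speed_kmh"] > 0,
         "alert_driver"),
        (lambda c: c["road"] == "icy" and c["speed_kmh"] > 50,
         "recommend_slow_down"),
        (lambda c: True, "no_action"),  # default fallback rule
    ]
    for condition, action in rules:
        if condition(context):
            return action

print(decide_action({"driver": "alert", "road": "icy", "speed_kmh": 90}))
# prints "recommend_slow_down"
```

In a deployed system the context dictionary would be populated from the vehicle's or cloud's web services, and the chosen action would drive the fission step (e.g., a voice alert or a dashboard prompt).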


2018 ◽  
Vol 4 (10) ◽  
pp. 116 ◽  
Author(s):  
Robail Yasrab

This research presents a novel fully convolutional neural network (CNN) model for probabilistic pixel-wise segmentation, titled Encoder-decoder-based CNN for Road-Scene Understanding (ECRU). Scene understanding has recently become an active research area, and semantic segmentation is the most recent method for visual recognition. Among vision-based smart systems, driving assistance systems have become a preferred research topic. The proposed model is an encoder-decoder that performs pixel-wise class predictions. The encoder network is composed of a VGG-19 layer model, while the decoder network uses 16 upsampling and deconvolution units. The encoder has a flexible architecture that can be altered and trained for any image size and resolution. The decoder network upsamples and maps the encoder’s low-resolution features. Because the network recycles the encoder’s pooling indices for pixel-wise classification and segmentation, there is a substantial reduction in trainable parameters. The proposed model is intended to offer a simplified CNN model with less overhead and higher performance. The network is trained and tested on the well-known road-scene dataset CamVid and delivers strong results compared with earlier approaches such as FCN and VGG16 in terms of performance versus trainable parameters.
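The key parameter-saving idea, reusing the encoder's max-pooling indices for decoder upsampling (as in SegNet-style architectures), can be sketched in NumPy. The helper names below are illustrative, not from the ECRU code, and a real network would apply this per channel on learned feature maps:

```python
# Illustrative sketch of pooling-index reuse for decoder upsampling.
# Function names are hypothetical; a real network operates on feature maps.
import numpy as np

def max_pool_2x2_with_indices(x):
    """2x2 max pooling that also records argmax positions,
    as a SegNet-style encoder would."""
    h, w = x.shape
    pooled = np.zeros((h // 2, w // 2))
    indices = np.zeros((h // 2, w // 2), dtype=int)  # flat index into x
    for i in range(0, h, 2):
        for j in range(0, w, 2):
            window = x[i:i + 2, j:j + 2]
            k = int(np.argmax(window))      # position within the 2x2 window
            di, dj = divmod(k, 2)
            pooled[i // 2, j // 2] = window[di, dj]
            indices[i // 2, j // 2] = (i + di) * w + (j + dj)
    return pooled, indices

def max_unpool_2x2(pooled, indices, out_shape):
    """Decoder-side unpooling: place each value back at the location its
    encoder max came from; other positions stay zero. No weights needed."""
    out = np.zeros(out_shape)
    flat = out.ravel()                      # view into out
    flat[indices.ravel()] = pooled.ravel()
    return out

x = np.array([[1., 3., 2., 0.],
              [4., 2., 1., 5.],
              [0., 1., 3., 2.],
              [2., 6., 0., 1.]])
p, idx = max_pool_2x2_with_indices(x)       # p == [[4, 5], [6, 3]]
y = max_unpool_2x2(p, idx, x.shape)         # maxima restored in place
```

Because the unpooling step is driven entirely by the stored indices, the decoder needs no learned upsampling weights at that stage, which is the source of the parameter reduction the abstract mentions.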

