Fast Drivable Areas Estimation with Multi-Task Learning for Real-Time Autonomous Driving Assistant

2021 ◽  
Vol 11 (22) ◽  
pp. 10713
Author(s):  
Dong-Gyu Lee

Autonomous driving is a safety-critical application that requires a high-level understanding of computer vision with real-time inference. In this study, we focus on computational efficiency, an important factor for practical applications, by improving running time and performing multiple tasks simultaneously. We propose a fast and accurate multi-task learning-based architecture for joint segmentation of the drivable area and lane lines, and classification of the scene. An encoder-decoder architecture efficiently handles input frames through a shared representation, and a comprehensive understanding of the driving environment is improved by the generalization and regularization effects of training on different tasks. The proposed method is trained end-to-end through multi-task learning on the challenging Berkeley DeepDrive dataset and shows its robustness on three autonomous driving tasks. Experimental results show that the proposed method outperforms other multi-task learning approaches in both speed and accuracy, running at over 93.81 fps at inference and thereby enabling real-time execution.
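The shared-representation idea in this abstract can be sketched as a single encoder feeding several task-specific heads, with one forward pass serving all tasks. The toy illustration below uses plain linear layers; all names, sizes, and the ReLU choice are invented here, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the shared encoder and the three task heads.
W_enc = rng.standard_normal((64, 32))     # shared encoder weights
W_area = rng.standard_normal((32, 2))     # drivable-area head
W_lane = rng.standard_normal((32, 2))     # lane-line head
W_scene = rng.standard_normal((32, 4))    # scene-classification head

def forward(x):
    """One shared representation feeds all three task heads."""
    h = np.maximum(x @ W_enc, 0.0)        # shared features (ReLU)
    return h @ W_area, h @ W_lane, h @ W_scene

x = rng.standard_normal((1, 64))          # one flattened toy "frame"
area, lane, scene = forward(x)
print(area.shape, lane.shape, scene.shape)  # (1, 2) (1, 2) (1, 4)
```

The speed benefit comes from the shared encoder being computed once per frame, while each extra task adds only a lightweight head.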

T-Comm ◽  
2020 ◽  
Vol 14 (10) ◽  
pp. 33-38
Author(s):  
Alexander S. Antonenko ◽  
Andrey N. Zemtsov

This article describes the IPTV system, its implementation methods, and the related protocols. The concept of IPTV covers both real-time television and recorded television, so-called video on demand (VoD). For real-time television, streaming data is sent using the RTP protocol alone; for VoD streaming, the RTSP protocol is used in addition. Methods for measuring QoS parameters are analyzed with a view to practical applications for estimating IPTV traffic parameters, since a high level of quality of service is an essential feature of a quality IPTV offering. An Internet connection model with insufficient network bandwidth is also considered theoretically. The following characteristics are taken into account: bandwidth, one-way delay, inter-packet jitter, the number of lost packets, the number of duplicated packets, packets with errors, and damaged packets; the packet-reordering issue is also mentioned. In addition, two QoS parameters that are important for VoD are measured: the START delay and the PAUSE/RESUME delays. Service messaging during the provision of the IPTV service is considered, and the maximum, average, and minimum values of the network quality-of-service parameters are found.
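Of the QoS characteristics listed, inter-packet jitter has a standardized running estimate defined in RFC 3550 for RTP: J = J + (|D| − J)/16, where D is the change in packet transit time between consecutive packets. A minimal sketch (the transit times are made-up example values):

```python
def update_jitter(jitter, transit_prev, transit_now):
    """RFC 3550 running interarrival-jitter estimate:
    J = J + (|D| - J) / 16, where D is the change in transit time."""
    d = abs(transit_now - transit_prev)
    return jitter + (d - jitter) / 16.0

# Transit times (arrival time minus RTP timestamp) in ms
# for five successive packets -- illustrative numbers only.
transits = [20.0, 24.0, 21.0, 30.0, 22.0]
j = 0.0
for prev, now in zip(transits, transits[1:]):
    j = update_jitter(j, prev, now)
print(round(j, 3))  # → 1.398
```

The 1/16 gain makes the estimate a smoothed average, so a single delayed packet moves the reported jitter only slightly.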


Sensors ◽  
2020 ◽  
Vol 20 (20) ◽  
pp. 5795 ◽  
Author(s):  
Dat Ngo ◽  
Seungmin Lee ◽  
Gi-Dong Lee ◽  
Bongsoon Kang

In recent years, machine vision algorithms have played an influential role as core technologies in several practical applications, such as surveillance, autonomous driving, and object recognition/localization. However, as almost all such algorithms assume clear weather conditions, their performance is severely affected by any atmospheric turbidity. Several image visibility restoration algorithms have been proposed to address this issue and have proven to be a highly efficient solution. This paper proposes a novel method to recover clear images from degraded ones. To this end, the proposed algorithm uses a supervised machine learning-based technique to estimate the pixel-wise extinction coefficients of the transmission medium, together with a novel compensation scheme that rectifies the post-dehazing false enlargement of white objects. In addition, a corresponding hardware accelerator implemented on a Field-Programmable Gate Array (FPGA) chip facilitates real-time processing, a critical requirement of practical camera-based systems. Experimental results on both synthetic and real image datasets verify the proposed method’s superiority over existing benchmark approaches. Furthermore, the hardware synthesis results reveal that the accelerator achieves a processing rate of nearly 271.67 Mpixel/s, enabling it to process 4K video at 30.7 frames per second in real time.
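The role of the pixel-wise extinction coefficients can be illustrated with the standard atmospheric scattering model, I = J·t + A·(1 − t), where the transmission is t = exp(−β·d) for extinction coefficient β and depth d. The sketch below only inverts that model; the learned estimation of β, the white-object compensation, and the airlight A are the paper's contributions and are replaced here by toy values:

```python
import numpy as np

def dehaze(I, t, A, t_min=0.1):
    """Invert the atmospheric scattering model I = J*t + A*(1 - t).
    t is the per-pixel transmission; clamping t from below avoids
    amplifying noise in heavily hazed regions."""
    t = np.maximum(t, t_min)[..., None]
    J = (I - A) / t + A
    return np.clip(J, 0.0, 1.0)

# Toy 2x2 RGB hazy image, uniform airlight, per-pixel transmission.
I = np.full((2, 2, 3), 0.8)
A = np.array([0.9, 0.9, 0.9])
beta, depth = 1.0, np.array([[0.5, 1.0], [1.5, 2.0]])
t = np.exp(-beta * depth)   # transmission from extinction coefficient
J = dehaze(I, t, A)
print(J.shape)  # (2, 2, 3)
```

Because the recovery is a per-pixel arithmetic pipeline with no global dependencies, it maps naturally onto a streaming FPGA implementation, which is what makes the reported Mpixel/s throughput plausible.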


IEEE Access ◽  
2020 ◽  
Vol 8 ◽  
pp. 30467-30479 ◽  
Author(s):  
Jie Tang ◽  
Shaoshan Liu ◽  
Liangkai Liu ◽  
Bo Yu ◽  
Weisong Shi

Energies ◽  
2021 ◽  
Vol 14 (5) ◽  
pp. 1361
Author(s):  
Ajaykumar Unagar ◽  
Yuan Tian ◽  
Manuel Arias Chao ◽  
Olga Fink

Lithium-ion (Li-ion) batteries have recently become pervasive and are used in many physical assets. For effective battery management, reliable predictions of the end-of-discharge (EOD) and end-of-life (EOL) are essential. Many detailed electrochemical models have been developed for batteries; their parameters are calibrated before the batteries are taken into operation and are typically not re-calibrated afterwards. However, battery degradation widens the reality gap between the computational models and the physical systems, leading to inaccurate EOD/EOL predictions. Current calibration approaches are either computationally expensive (model-based calibration) or require large amounts of ground-truth data on degradation parameters (supervised data-driven calibration), which is often infeasible in practice. In this paper, we introduce a reinforcement learning-based framework for reliably inferring the calibration parameters of battery models in real time. Most importantly, the proposed methodology needs neither labeled observation samples nor the ground-truth parameters. The experimental results demonstrate that our framework infers the model parameters in real time with better accuracy than approaches based on unscented Kalman filters. Furthermore, our results show better generalizability than supervised learning approaches, even though our methodology does not rely on ground-truth information during training.
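The key point, label-free calibration, can be illustrated in miniature: the reward signal comes from the mismatch between model output and observed measurements, never from the hidden degradation parameter itself. The sketch below substitutes a crude random-search acceptance rule for the paper's deep RL agent, and the one-parameter battery model and all numbers are toys:

```python
import numpy as np

rng = np.random.default_rng(0)

def battery_model(capacity, current, t):
    """Toy discharge model: voltage falls as charge is drawn."""
    return 4.2 - (current * t) / capacity

true_capacity = 2.0                  # hidden degradation state
observed = battery_model(true_capacity, current=1.0, t=1.0)

# Reward-guided calibration: propose perturbations and keep those that
# improve the fit to *observations* -- no ground-truth capacity label.
estimate, step = 3.0, 0.5
for _ in range(200):
    candidate = estimate + rng.normal(0.0, step)
    err_old = abs(battery_model(estimate, 1.0, 1.0) - observed)
    err_new = abs(battery_model(candidate, 1.0, 1.0) - observed)
    if err_new < err_old:
        estimate = candidate
print(round(estimate, 2))
```

The estimate converges toward the hidden capacity using only the voltage discrepancy, which is the property that lets the paper's framework run without ground-truth degradation labels.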


Electronics ◽  
2021 ◽  
Vol 10 (16) ◽  
pp. 1960
Author(s):  
Dongwan Kang ◽  
Anthony Wong ◽  
Banghyon Lee ◽  
Jungha Kim

Autonomous vehicles perceive objects through various sensors. Cameras, radar, and LiDAR are generally used as vehicle sensors, each with its own characteristics: cameras are used for a high-level understanding of a scene, radar provides weather-resistant distance perception, and LiDAR offers accurate distance recognition. The ability of a camera to understand a scene has increased dramatically with the recent development of deep learning, and technologies that emulate other sensors using a single sensor are being developed. In this study, therefore, a LiDAR data-based scene understanding method was developed through deep learning. Deep learning approaches to LiDAR data are mainly divided into point, projection, and voxel methods; this study applies a projection method to secure real-time performance, since convolutional neural network architectures used with conventional camera images can easily be applied to the projected data. In addition, an adaptive break point detector, originally used for conventional 2D LiDAR information, is utilized to resolve the misclassification caused by projecting the 3D data into 2D. The results of this study are evaluated through a comparison with other technologies.
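A common way to realize the projection method is a spherical (range-image) projection: each 3D point is mapped to a pixel by its yaw and pitch angles, after which image CNNs apply directly. A minimal sketch (the field-of-view limits and image resolution are illustrative values, not those of any specific sensor or of this paper):

```python
import numpy as np

def spherical_projection(points, H=32, W=512, fov_up=10.0, fov_down=-30.0):
    """Project 3D LiDAR points (N, 3) onto a 2D range image (H, W)."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)
    yaw = np.arctan2(y, x)                   # azimuth in [-pi, pi]
    pitch = np.arcsin(z / r)                 # elevation angle
    fov_up, fov_down = np.radians(fov_up), np.radians(fov_down)

    u = 0.5 * (1.0 - yaw / np.pi) * W        # column from yaw
    v = (1.0 - (pitch - fov_down) / (fov_up - fov_down)) * H
    u = np.clip(np.floor(u), 0, W - 1).astype(int)
    v = np.clip(np.floor(v), 0, H - 1).astype(int)

    img = np.zeros((H, W))
    img[v, u] = r                            # store range per cell
    return img

pts = np.array([[10.0, 0.0, 0.0], [0.0, 10.0, 1.0], [-5.0, -5.0, -2.0]])
img = spherical_projection(pts)
print(img.shape, np.count_nonzero(img))  # (32, 512) 3
```

The boundary artifacts this projection introduces, where points at very different ranges land in adjacent pixels, are exactly the misclassifications the break point detector is brought in to resolve.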


2019 ◽  
Vol 42 (9) ◽  
pp. 508-515
Author(s):  
Ghaith Kadhim Sharba ◽  
Mousa Kadhim Wali ◽  
Ali Hussein AI-Timemy

In every country in the world, there are amputees who have lost their upper limbs in accidents. The aim of this study is to propose a system for real-time classification of five classes of shoulder girdle motions for high-level upper limb amputees using a pattern recognition system. In the proposed system, the wavelet transform is utilized for feature extraction, and the extreme learning machine (ELM) is used as the classifier. The system was tested on four intact-limbed subjects and one amputee, using eight channels: five electromyography (EMG) channels and a three-axis accelerometer. The study shows that, by combining the EMG and accelerometer channels, the proposed pattern recognition system can classify high-level shoulder girdle motions with an average classification accuracy of 88.4% for the four intact-limbed subjects and 92.8% for the amputee. These outcomes suggest that the proposed pattern recognition system could help provide control signals to drive a prosthetic arm for high-level upper limb amputees.
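The wavelet feature-extraction step can be illustrated with a hand-rolled Haar decomposition; the paper's actual wavelet family, decomposition depth, and feature set are not given in the abstract, so the choices below (Haar, two levels, mean absolute value of coefficients) are purely illustrative:

```python
import numpy as np

def haar_level(signal):
    """One level of the Haar wavelet transform:
    returns (approximation, detail) coefficient arrays."""
    s = signal[: len(signal) // 2 * 2].reshape(-1, 2)
    approx = (s[:, 0] + s[:, 1]) / np.sqrt(2.0)
    detail = (s[:, 0] - s[:, 1]) / np.sqrt(2.0)
    return approx, detail

def features(window):
    """Mean absolute value of coefficients at two decomposition
    levels -- a common wavelet feature set for EMG windows."""
    a1, d1 = haar_level(window)
    a2, d2 = haar_level(a1)
    return np.array([np.mean(np.abs(c)) for c in (d1, d2, a2)])

rng = np.random.default_rng(0)
window = rng.standard_normal(256)   # one toy EMG analysis window
f = features(window)
print(f.shape)  # (3,)
```

Computed per channel over the eight EMG/accelerometer channels, such features form the compact vector that the ELM classifier maps to one of the five motion classes.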

