Multimodal interaction-aware motion prediction for autonomous street crossing

2020 · Vol. 39 (13) · pp. 1567-1598
Author(s): Noha Radwan, Wolfram Burgard, Abhinav Valada

For mobile robots navigating on sidewalks, the ability to safely cross street intersections is essential. Most existing approaches rely on the recognition of the traffic light signal to make an informed crossing decision. Although these approaches have been crucial enablers for urban navigation, the capabilities of robots employing such approaches are still limited to navigating only on streets that contain signalized intersections. In this article, we address this challenge and propose a multimodal convolutional neural network framework to predict the safety of a street intersection for crossing. Our architecture consists of two subnetworks: an interaction-aware trajectory estimation stream, the interaction-aware temporal convolutional neural network (IA-TCNN), which predicts the future states of all observed traffic participants in the scene; and a traffic light recognition stream, AtteNet. Our IA-TCNN utilizes dilated causal convolutions to model the behavior of all the observable dynamic agents in the scene without explicitly assigning priorities to the interactions among them, whereas AtteNet utilizes squeeze-excitation blocks to learn a content-aware mechanism for selecting the relevant features from the data, thereby improving robustness to noise. Learned representations from the traffic light recognition stream are fused with the estimated trajectories from the motion prediction stream to learn the crossing decision. Incorporating the uncertainty information from both modules enables our architecture to learn a likelihood function that is robust to noise and mispredictions from either subnetwork. Simultaneously, by learning to estimate motion trajectories of the surrounding traffic participants and incorporating knowledge of the traffic light signal, our network learns a robust crossing procedure that is invariant to the type of street intersection. Furthermore, we extend our previously introduced Freiburg Street Crossing dataset with sequences captured at multiple intersections of varying types, demonstrating complex interactions among the traffic participants as well as various lighting and weather conditions. We perform comprehensive experimental evaluations on public datasets as well as our Freiburg Street Crossing dataset, which demonstrate that our network achieves state-of-the-art performance for each of the subtasks, as well as for the crossing safety prediction. Moreover, we deploy the proposed architectural framework on a robotic platform and conduct real-world experiments that demonstrate the suitability of the approach for real-time deployment and robustness to various environments.
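As an illustration of the dilated causal convolution idea behind IA-TCNN, the following is a minimal PyTorch sketch. The state dimensionality, layer widths, kernel size, and prediction head are illustrative assumptions, not the authors' configuration; the point is only that left-padded, exponentially dilated convolutions give a causal model whose receptive field grows with depth.

```python
# Minimal sketch of a dilated causal convolution stack in the spirit of
# IA-TCNN (PyTorch). All sizes below are illustrative assumptions.
import torch
import torch.nn as nn

class CausalConv1d(nn.Module):
    """1D convolution that only attends to past time steps."""
    def __init__(self, in_ch, out_ch, kernel_size, dilation):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation  # left padding keeps output causal
        self.conv = nn.Conv1d(in_ch, out_ch, kernel_size, dilation=dilation)

    def forward(self, x):                         # x: (batch, channels, time)
        x = nn.functional.pad(x, (self.pad, 0))   # pad only the past side
        return self.conv(x)

class TrajectoryTCN(nn.Module):
    """Stacked dilated causal convolutions; receptive field doubles per layer."""
    def __init__(self, state_dim=4, hidden=64, layers=4):
        super().__init__()
        blocks, ch = [], state_dim
        for i in range(layers):
            blocks += [CausalConv1d(ch, hidden, kernel_size=3, dilation=2 ** i),
                       nn.ReLU()]
            ch = hidden
        self.net = nn.Sequential(*blocks)
        self.head = nn.Conv1d(hidden, state_dim, 1)  # per-step state prediction

    def forward(self, past_states):                # (batch, state_dim, time)
        return self.head(self.net(past_states))

# Usage: predict states for 8 agents observed over 20 time steps.
pred = TrajectoryTCN()(torch.randn(8, 4, 20))
print(pred.shape)  # torch.Size([8, 4, 20])
```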

2020 · Vol. 16 (2) · pp. 95-101
Author(s): Anna Beinaroviča, Mikhail Gorobetz, Ivars Alps

Abstract This study addresses the task of performing manoeuvres at a railway station that has no marshalling hump. It is part of a project aimed at developing intelligent safety and optimal control systems for autonomous electric vehicles and transport in general. Manoeuvre safety depends primarily on the absence of items and other objects on the rails, as well as on the position of the turnouts. In most cases, rails occupied by other wagons, as well as incorrectly set turnouts, are indicated by prohibiting red or blue traffic light signals. The authors propose an algorithm for traffic light recognition using a convolutional neural network (CNN), together with traffic light indicator recognition. However, a situation can also arise in which the locomotive needs to drive onto rails occupied by other wagons, for example during manoeuvres at the railway station. For this purpose, the authors have developed a CNN algorithm for recognizing wagons on the rails.
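A minimal sketch of a CNN signal classifier of the kind described above is given below, assuming fixed-size image crops of the traffic light region. The class set (red, blue, permissive), input size, and layer widths are illustrative assumptions; the same structure could be trained for the wagon recognition task by changing the classes.

```python
# Minimal sketch of a CNN classifier for shunting-signal states (PyTorch).
# Class set and layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

class SignalCNN(nn.Module):
    def __init__(self, num_classes=3):  # assumed classes: red, blue, permissive
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x):                # x: (batch, 3, 64, 64) crop
        x = self.features(x)             # -> (batch, 32, 16, 16)
        return self.classifier(x.flatten(1))

logits = SignalCNN()(torch.randn(1, 3, 64, 64))  # class scores for one crop
```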


Author(s): Shota Masaki, Tsubasa Hirakawa, Takayoshi Yamashita, Hironobu Fujiyoshi

Traffic light recognition is an important task for automatic driving support systems. Conventional traffic light recognition techniques fall into two categories: model-based methods, which frequently suffer from environmental changes such as sunlight, and machine-learning-based methods, which have difficulty detecting distant and occluded traffic lights because they fail to represent features efficiently. In this work, we propose a method for recognizing distant traffic lights that uses semantic segmentation to extract traffic light regions from images and a convolutional neural network (CNN) to classify the state of the extracted traffic lights. Since semantic segmentation classifies objects pixel by pixel while taking the surrounding context into account, it can successfully detect distant and occluded traffic lights. Experimental results show that the proposed semantic segmentation improves the detection accuracy for distant traffic lights, yielding an improvement of 12.8% over object-detection-based methods. In addition, our CNN-based classifier identified the traffic light state more than 30% more accurately than color-thresholding-based classification.
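The segmentation-then-classification pipeline can be sketched as follows: traffic light pixels are taken from a semantic segmentation mask, each connected region is cropped from the image, and the crops are handed to a CNN state classifier. The class id, crop margin, and downstream classifier are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch: crop traffic-light regions out of a segmentation mask.
# TRAFFIC_LIGHT_ID, the margin, and the classifier step are assumptions.
import numpy as np
from scipy import ndimage

TRAFFIC_LIGHT_ID = 6  # assumed class id in the segmentation label map

def extract_light_crops(image, seg_mask, margin=4):
    """Return image crops around each connected traffic-light region."""
    labeled, _ = ndimage.label(seg_mask == TRAFFIC_LIGHT_ID)
    crops = []
    for r, c in ndimage.find_objects(labeled):    # bounding slices per region
        r0, r1 = max(r.start - margin, 0), min(r.stop + margin, image.shape[0])
        c0, c1 = max(c.start - margin, 0), min(c.stop + margin, image.shape[1])
        crops.append(image[r0:r1, c0:c1])
    return crops

# Each crop would then be resized (e.g. to 64x64) and passed to a CNN
# state classifier: crop -> resize -> CNN -> {red, yellow, green}.
```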


2020
Author(s): S Kashin, D Zavyalov, A Rusakov, V Khryashchev, A Lebedev

2020 · Vol. 2020 (10) · pp. 181-1-181-7
Author(s): Takahiro Kudo, Takanori Fujisawa, Takuro Yamaguchi, Masaaki Ikehara

Image deconvolution has recently become an important problem. It has two kinds of approaches: non-blind and blind. Non-blind deconvolution is the classic image deblurring problem, which assumes that the point spread function (PSF) is known and spatially invariant. Recently, convolutional neural networks (CNNs) have been used for non-blind deconvolution. Although CNNs can handle complex changes in unknown images, some conventional CNN-based methods can only handle small PSFs and do not consider the large PSFs encountered in the real world. In this paper we propose a non-blind deconvolution framework based on a CNN that can remove large-scale ringing from a deblurred image. Our method has three key points. The first is that our network architecture is able to preserve both large and small features in the image. The second is that the training dataset is created to preserve the details. The third is that we extend the images to minimize the effect of large ringing at the image borders. In our experiments, we used three kinds of large PSFs and observed high-precision results from our method both quantitatively and qualitatively.
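The border-extension idea can be illustrated with a minimal sketch: before deconvolution, the image is padded by roughly the PSF radius so that ringing caused by boundary assumptions lands in the padded margin, which is cropped away afterwards. The reflective padding mode, the pad size, and the Wiener-style example solver are illustrative assumptions, not the authors' exact scheme.

```python
# Minimal sketch of border extension for non-blind deconvolution (grayscale
# assumed). Padding mode, pad size, and the solver are assumptions.
import numpy as np

def deconvolve_with_extension(blurred, psf, deconvolve_fn):
    pad = max(psf.shape) // 2                        # extend by ~PSF radius
    extended = np.pad(blurred, pad, mode="reflect")  # smooth border continuation
    restored = deconvolve_fn(extended, psf)          # any non-blind solver
    return restored[pad:-pad, pad:-pad]              # drop the ringing-prone margin

def wiener(img, psf, k=1e-2):
    """Naive frequency-domain Wiener deconvolution, for illustration only."""
    H = np.fft.fft2(psf, s=img.shape)                # zero-padded PSF spectrum
    G = np.conj(H) / (np.abs(H) ** 2 + k)            # regularized inverse filter
    return np.real(np.fft.ifft2(np.fft.fft2(img) * G))

# Usage: restored = deconvolve_with_extension(blurred, psf, wiener)
```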

