Study on Test Scenarios of Environment Perception System under Rear-End Collision Risk

Author(s):  
Lin Liu
Xichan Zhu
Zhixiong Ma

Sensors, 2019, Vol 19 (19), pp. 4350
Author(s):
Julie Foucault
Suzanne Lesecq
Gabriela Dudnik
Marc Correvon
Rosemary O’Keeffe
...  

Environment perception is crucial for the safe navigation of vehicles and robots, allowing them to detect obstacles in their surroundings. It is also of paramount interest for the navigation of human beings in reduced-visibility conditions. Obstacle avoidance systems typically combine multiple sensing technologies (e.g., LiDAR, radar, ultrasound, and vision) to detect various types of obstacles under different lighting and weather conditions, with the drawbacks of a given technology being offset by the others. These systems require powerful computational capability to fuse the mass of data, which limits their use to high-end vehicles and robots. INSPEX delivers a low-power, small-size, and lightweight environment perception system that is compatible with portable and/or wearable applications. This requires miniaturizing and optimizing existing range sensors of different technologies to meet the user’s requirements in terms of obstacle detection capabilities. These sensors consist of a LiDAR, a time-of-flight sensor, an ultrasound sensor, and an ultra-wideband radar with measurement ranges of 10 m, 4 m, 2 m, and 10 m, respectively. Integration of a data fusion technique is also required to build a model of the user’s surroundings and provide feedback about the location of harmful obstacles. As a primary demonstrator, the INSPEX device will be fixed on a white cane.
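
A minimal sketch of how range readings from these four sensors could be fused into a single nearest-obstacle report: the maximum ranges come from the abstract, while the data structures, example readings, and fusion rule are illustrative assumptions, not the INSPEX implementation.

```python
# Illustrative sketch only -- not the INSPEX implementation. It shows one
# simple way to fuse readings from range sensors with different maximum
# ranges into a single nearest-obstacle estimate for user feedback.
from dataclasses import dataclass
from typing import Optional

@dataclass
class RangeSensor:
    name: str
    max_range_m: float          # maximum measurement range (from the abstract)
    reading_m: Optional[float]  # latest distance reading; None if no echo

def nearest_obstacle(sensors: list[RangeSensor]) -> Optional[tuple[str, float]]:
    """Return (sensor name, distance) of the closest valid detection."""
    valid = [
        (s.name, s.reading_m)
        for s in sensors
        if s.reading_m is not None and 0.0 < s.reading_m <= s.max_range_m
    ]
    return min(valid, key=lambda nd: nd[1]) if valid else None

# Ranges from the abstract: LiDAR 10 m, time-of-flight 4 m, ultrasound 2 m,
# UWB radar 10 m. The readings below are invented for the example.
sensors = [
    RangeSensor("lidar", 10.0, 3.2),
    RangeSensor("tof", 4.0, 3.5),
    RangeSensor("ultrasound", 2.0, None),
    RangeSensor("uwb_radar", 10.0, 3.1),
]
print(nearest_obstacle(sensors))  # -> ('uwb_radar', 3.1)
```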


Author(s):  
Ran Duan
Shuangyue Yu
Guang Yue
Richard Foulds
Chen Feng
...  

Wearable environment perception systems have great potential for improving the autonomous control of mobility aids [1]. A visual perception system can provide abundant information about the surroundings to assist task-oriented control such as navigation, obstacle avoidance, and object detection, which are essential functions for wearers who are visually impaired or blind [2, 3, 4]. Moreover, vision-based terrain sensing is a critical input to the decision-making of an intelligent control system, especially for users who find it difficult to achieve a seamless control mode transition manually.
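
One way such a terrain-driven mode transition could be structured is sketched below; the terrain classes, mode names, and confidence gate are hypothetical illustrations, not the authors' design.

```python
# Hypothetical sketch of terrain-driven control mode selection. The class
# names, mode names, and threshold are illustrative assumptions.
TERRAIN_TO_MODE = {
    "flat": "level_walking",
    "up_slope": "ramp_ascent",
    "down_slope": "ramp_descent",
    "stairs_up": "stair_ascent",
    "stairs_down": "stair_descent",
}

def select_control_mode(terrain_class: str, confidence: float,
                        current_mode: str, threshold: float = 0.8) -> str:
    """Switch the control mode only when the vision system is confident,
    so that uncertain terrain predictions never trigger a transition."""
    if confidence >= threshold and terrain_class in TERRAIN_TO_MODE:
        return TERRAIN_TO_MODE[terrain_class]
    return current_mode  # hold the current mode on low-confidence input

print(select_control_mode("stairs_up", 0.93, "level_walking"))  # stair_ascent
```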


2021, Vol 2021, pp. 1-16
Author(s):  
Yuren Chen
Xinyi Xie
Bo Yu
Yi Li
Kunhui Lin

Multitarget vehicle tracking and motion state estimation are crucial for controlling the host vehicle accurately and preventing collisions. However, current multitarget tracking methods struggle with multivehicle scenarios in dynamically complex driving environments. Driving environment perception systems, as an indispensable component of intelligent vehicles, have the potential to solve this problem from the perspective of image processing. Thus, this study proposes a novel driving environment perception system for intelligent vehicles that uses deep learning methods to track multitarget vehicles and estimate their motion states. Firstly, a panoramic segmentation neural network that supports end-to-end training is designed and implemented, composed of semantic segmentation and instance segmentation. A depth calculation model of the driving environment is established by adding a depth estimation branch to the feature extraction and fusion module of the panoramic segmentation network. These deep neural networks are trained and tested on the Mapillary Vistas Dataset and the Cityscapes Dataset, and the results show that these methods perform well with high recognition accuracy. Then, Kalman filtering and the Hungarian algorithm are used for multitarget vehicle tracking and motion state estimation. The effectiveness of this method is tested in a simulation experiment, and the results show that the relative relations (i.e., relative speed and distance) between multiple vehicles can be estimated accurately. The findings of this study can contribute to the development of intelligent vehicles that alert drivers to possible danger, assist drivers’ decision-making, and improve traffic safety.
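
A minimal sketch of the tracking step the abstract describes: one constant-velocity Kalman filter per vehicle, with detections assigned to tracks by the Hungarian algorithm. The state model, noise covariances, and example values below are illustrative assumptions, not the paper's tuned parameters.

```python
# Constant-velocity Kalman filters per vehicle + Hungarian data association.
# Matrices and noise levels are illustrative choices, not the paper's values.
import numpy as np
from scipy.optimize import linear_sum_assignment

dt = 0.1                                   # frame interval [s] (assumed)
F = np.array([[1, 0, dt, 0],               # state transition, state = [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]])
H = np.array([[1, 0, 0, 0],                # we observe position only
              [0, 1, 0, 0]])
Q = np.eye(4) * 0.01                       # process noise (assumed)
R = np.eye(2) * 0.5                        # measurement noise (assumed)

class Track:
    def __init__(self, xy):
        self.x = np.array([xy[0], xy[1], 0.0, 0.0])
        self.P = np.eye(4)

    def predict(self):
        self.x = F @ self.x
        self.P = F @ self.P @ F.T + Q
        return self.x[:2]                  # predicted position

    def update(self, z):
        S = H @ self.P @ H.T + R
        K = self.P @ H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - H @ self.x)
        self.P = (np.eye(4) - K @ H) @ self.P

def associate(tracks, detections):
    """Hungarian assignment on predicted-position/detection distances."""
    preds = np.array([t.predict() for t in tracks])
    cost = np.linalg.norm(preds[:, None, :] - detections[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)
    for r, c in zip(rows, cols):
        tracks[r].update(detections[c])

tracks = [Track((0.0, 0.0)), Track((10.0, 5.0))]
detections = np.array([[10.2, 5.1], [0.3, 0.1]])  # unordered detections
associate(tracks, detections)
print([t.x[:2].round(2) for t in tracks])
```

Given two updated tracks, their relative distance and speed then follow directly from the differences of their position and velocity components.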


Author(s):  
Sergio Nogueira
Yassine Ruichek
Franck Gechter
Abderrafiaa Koukam
Francois Charpillet

Sensors, 2019, Vol 19 (3), pp. 648
Author(s):
Francisca Rosique
Pedro J. Navarro
Carlos Fernández
Antonio Padilla

This paper presents a systematic review of the perception systems and simulators for autonomous vehicles (AV). This work has been divided into three parts. In the first part, perception systems are categorized as environment perception systems and positioning estimation systems. The paper presents the physical fundamentals, working principles, and electromagnetic spectrum used to operate the most common sensors in perception systems (ultrasonic, RADAR, LiDAR, cameras, IMU, GNSS, RTK, etc.). Furthermore, their strengths and weaknesses are shown, and the quantification of their features using spider charts allows the proper selection of different sensors depending on 11 features. In the second part, the main elements to be taken into account in the simulation of the perception system of an AV are presented. For this purpose, the paper describes simulators for model-based development, the main game engines that can be used for simulation, simulators from the robotics field, and lastly simulators used specifically for AVs. Finally, the current state of the regulations applied in different countries around the world concerning the implementation of autonomous vehicles is presented.
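
The spider-chart comparison suggests a simple multi-criteria selection rule: score each sensor per feature and rank by application-specific weights. The sketch below illustrates that idea with invented feature names and scores; the review itself quantifies 11 features.

```python
# Illustrative multi-criteria sensor ranking. Feature names and 0-5 scores
# are invented for the example; the review uses 11 quantified features.
SCORES = {
    "radar":  {"range": 5, "weather_robustness": 5, "resolution": 2, "cost": 4},
    "lidar":  {"range": 4, "weather_robustness": 2, "resolution": 5, "cost": 1},
    "camera": {"range": 3, "weather_robustness": 2, "resolution": 5, "cost": 5},
}

def rank_sensors(weights: dict[str, float]) -> list[tuple[str, float]]:
    """Rank sensors by the weighted sum of their feature scores."""
    ranked = [
        (name, sum(weights.get(f, 0.0) * v for f, v in feats.items()))
        for name, feats in SCORES.items()
    ]
    return sorted(ranked, key=lambda nv: nv[1], reverse=True)

# An all-weather application weights robustness heavily.
print(rank_sensors({"range": 1.0, "weather_robustness": 2.0,
                    "resolution": 1.0, "cost": 0.5}))
```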


Sensors, 2012, Vol 12 (9), pp. 12386-12404
Author(s):
Long Chen
Qingquan Li
Ming Li
Liang Zhang
Qingzhou Mao

Author(s):  
C. Albrecht
S. Kraus
U. Stilla

Abstract. In this paper, we demonstrate the integration of a top-view camera system mounted on a city bus into an existing sensor setup. A novel sensor setup with five down-facing cameras is mounted on the roof of a MAN Lion’s City 12 city bus to extract landmarks from road scene images. Its positioning is validated by an exemplary detection of lane markings. The concept for further landmark detection with the presented camera system is explained in this paper, and sensor data fusion methods are proposed. Based on our previous findings (Albrecht et al., 2019), the strengths of the novel sensor system are introduced to improve the current environment perception system. For now, only a qualitative observation of the capability to detect lane markings and other landmarks can be presented. Future work will use the current findings on landmark detection for a vehicle self-localization system.
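
As a rough illustration of the kind of lane-marking extraction such down-facing cameras enable, the sketch below thresholds bright blobs in a single frame; it is an assumed OpenCV baseline, not the authors' detection pipeline, and the input filename is hypothetical.

```python
# Assumed baseline for lane-marking candidates in a down-facing camera
# frame: markings are treated as bright regions on dark asphalt.
import cv2
import numpy as np

def detect_lane_markings(bgr: np.ndarray, min_area: float = 50.0):
    """Return contours of bright blobs as lane-marking candidates."""
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    # Adaptive threshold copes with uneven illumination across the road;
    # a negative C keeps only pixels clearly brighter than the local mean.
    mask = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                 cv2.THRESH_BINARY, 31, -10)
    # Morphological opening removes isolated noise pixels.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [c for c in contours if cv2.contourArea(c) >= min_area]

image = cv2.imread("top_view_frame.png")  # hypothetical input frame
if image is not None:
    candidates = detect_lane_markings(image)
    print(f"{len(candidates)} marking candidates found")
```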

