Deep Visible and Thermal Camera-based Optimal Semantic Segmentation using Semantic Forecasting

Author(s):  
Vijay John ◽  
Seiichi Mita ◽  
Annamalai Lakshmanan ◽  
Ali Boyali ◽  
Simon Thompson

Visible camera-based semantic segmentation and semantic forecasting are important perception tasks in autonomous driving. In semantic segmentation, the current frame's pixel-level labels are estimated using the current visible frame. In semantic forecasting, the future frame's pixel-level labels are predicted using the current and past visible frames and pixel-level labels. While reporting state-of-the-art accuracy, both of these tasks are limited by the visible camera's susceptibility to varying illumination, adverse weather conditions, and glare from sunlight or headlights. In this work, we propose to address these limitations using the deep sensor fusion of the visible and the thermal camera. The proposed sensor fusion framework performs both semantic forecasting and optimal semantic segmentation within a multi-step iterative framework. In the first (forecasting) step, the framework predicts the semantic map for the next frame. The predicted semantic map is updated in the second step, once the next visible and thermal frames are observed. The updated semantic map is considered the optimal semantic map for the given visible-thermal frame. The semantic map forecasting and updating are performed iteratively over time. The estimated semantic maps contain pedestrian behavior, free space, and pedestrian crossing labels. Pedestrian behavior is categorized based on spatial, motion, and dynamic orientation information. The proposed framework is validated using the public KAIST dataset. A detailed comparative analysis and ablation study are performed using pixel-level classification and IoU error metrics. The results show that the proposed framework can not only accurately forecast the semantic segmentation map but also accurately update it.
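The forecast-then-update loop described in the abstract can be summarized as pseudocode. The sketch below is a minimal illustration, assuming hypothetical `forecast_net` and `update_net` callables standing in for the paper's forecasting and visible-thermal fusion networks; it is not the authors' implementation.

```python
# Minimal sketch of the iterative forecast/update loop described in the
# abstract. `forecast_net` and `update_net` are hypothetical placeholders
# for the paper's forecasting and visible-thermal fusion networks.

def run_sequence(frames, forecast_net, update_net, history=2):
    """frames: iterable of (visible, thermal) pairs; yields updated semantic maps."""
    past_maps = []          # recent (updated) semantic maps
    past_frames = []        # recent visible-thermal frame pairs
    for visible, thermal in frames:
        if len(past_maps) < history:
            # Bootstrap: segment directly until enough history is available.
            sem_map = update_net(visible, thermal, prior=None)
        else:
            # Step 1 (forecast): predict the next semantic map from history.
            predicted = forecast_net(past_frames[-history:], past_maps[-history:])
            # Step 2 (update): refine the prediction with the observed frames.
            sem_map = update_net(visible, thermal, prior=predicted)
        past_frames.append((visible, thermal))
        past_maps.append(sem_map)
        yield sem_map        # optimal semantic map for this frame
```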

2019 ◽  
Vol 9 (14) ◽  
pp. 2843 ◽  
Author(s):  
Pierre Duthon ◽  
Michèle Colomb ◽  
Frédéric Bernardin

Autonomous driving is based on innovative technologies that have to ensure that vehicles are driven safely. LiDARs are one of the reference sensors for obstacle detection. However, this technology is affected by adverse weather conditions, especially fog. Different wavelengths (905 nm vs. 1550 nm) are investigated to meet this challenge. The influence of wavelength on light transmission in fog is then examined and the results reported. A theoretical approach, calculating the extinction coefficient for different wavelengths, is compared with spectroradiometer measurements in the range 350 nm–2450 nm. The experiment took place in the French Cerema PAVIN BP platform for intelligent vehicles, which makes it possible to reproduce controlled fogs of different densities for two types of droplet size distribution. Direct spectroradiometer extinction measurements vary in the same way as the models. Finally, the wavelengths for LiDARs should not be chosen on the basis of fog conditions: there is only a small difference (<10%) between the extinction coefficients at 905 nm and 1550 nm for the same emitted power in fog.
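As a rough illustration of why the wavelength difference matters so little, transmission through a homogeneous fog layer can be modeled with the Beer-Lambert law, T = exp(-βd), where β is the extinction coefficient and d the path length. The snippet below is a hedged sketch using illustrative β values (not the paper's measurements); its only purpose is to show how a sub-10% difference in β translates into transmission at typical sensing ranges.

```python
import math

def transmission(beta_per_m, distance_m):
    """Beer-Lambert transmission through a homogeneous fog layer."""
    return math.exp(-beta_per_m * distance_m)

# Illustrative extinction coefficients for a dense fog (assumed values for
# demonstration only, not the paper's measurements).
beta_905 = 0.060   # m^-1 at 905 nm
beta_1550 = 0.064  # m^-1 at 1550 nm (< 10% higher, consistent with the finding)

for d in (20, 50, 100):
    t905, t1550 = transmission(beta_905, d), transmission(beta_1550, d)
    print(f"{d:4d} m: T(905 nm) = {t905:.3f}, T(1550 nm) = {t1550:.3f}")
```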


2021 ◽  
Vol 9 (2) ◽  
pp. 731-739
Author(s):  
M. Hyndhavi, et al.

The development of vehicle tracking using sensor fusion is presented in this paper. Advanced driver assistance systems (ADAS) have become increasingly popular in recent years. These systems use sensor information for real-time control. To improve accuracy and robustness, especially in the presence of environmental disturbances such as varying lighting and weather conditions, sensor fusion has been the focus of recent studies. Faced with complex traffic conditions, a single sensor cannot meet the safety requirements of ADAS and autonomous driving. The common environment perception sensors are radar, camera, and lidar, each with its own strengths and weaknesses. Sensor fusion is therefore a necessary technology for autonomous driving, providing a better view and understanding of the vehicle's surroundings. We mainly focus on highway scenarios in which an autonomous car follows other vehicles at various speeds while keeping a safe distance, and we combine the advantages of both sensors with a fusion approach. The radar and vision sensor information are fused to produce robust and accurate measurements. The experimental results compare tracking with radar alone against sensor fusion of camera and radar. The algorithm is described along with simulation results obtained in MATLAB.
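Measurement-level radar-camera fusion of the kind described here is commonly implemented as a Kalman-style, inverse-variance weighted combination of the two sensors' estimates. The Python snippet below is a simplified sketch of that idea only (the paper itself works in MATLAB); the measurement and variance values are illustrative assumptions.

```python
import numpy as np

def fuse_measurements(z_radar, var_radar, z_camera, var_camera):
    """Inverse-variance (Kalman-style) fusion of two independent position estimates."""
    w_radar = 1.0 / var_radar
    w_camera = 1.0 / var_camera
    z_fused = (w_radar * z_radar + w_camera * z_camera) / (w_radar + w_camera)
    var_fused = 1.0 / (w_radar + w_camera)
    return z_fused, var_fused

# Radar: good range accuracy; camera: good lateral accuracy (assumed variances).
z_r = np.array([52.0, 1.8])      # [longitudinal, lateral] position from radar (m)
z_c = np.array([53.5, 1.5])      # same quantities from the camera pipeline (m)
var_r = np.array([0.25, 1.00])   # radar variance per axis (m^2), illustrative
var_c = np.array([4.00, 0.04])   # camera variance per axis (m^2), illustrative

z_f, var_f = fuse_measurements(z_r, var_r, z_c, var_c)
print("fused position:", z_f, "fused variance:", var_f)
```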


2017 ◽  
Vol 36 (3) ◽  
pp. 292-319 ◽  
Author(s):  
Ryan W Wolcott ◽  
Ryan M Eustice

This paper reports on a fast multiresolution scan matcher for local vehicle localization of self-driving cars. State-of-the-art approaches to vehicle localization rely on observing road surface reflectivity with a 3D light detection and ranging (LIDAR) scanner to achieve centimeter-level accuracy. However, these approaches can often fail when faced with adverse weather conditions that obscure the view of the road paint (e.g. puddles and snowdrifts), poor road surface texture, or when road appearance degrades over time. We present a generic probabilistic method for localizing an autonomous vehicle equipped with a three-dimensional (3D) LIDAR scanner. The proposed algorithm models the world as a mixture of several Gaussians, characterizing the z-height and reflectivity distribution of the environment, which we rasterize to facilitate fast and exact multiresolution inference. Results are shown on a collection of datasets totaling over 500 km of road data covering highway, rural, residential, and urban roadways, in which we demonstrate our method to be robust through heavy snowfall and roadway repavements.
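The core operation, scoring how well a LIDAR scan agrees with a grid of Gaussians describing the environment, can be sketched briefly. The code below is an illustrative approximation under assumed data structures (a dictionary of per-cell Gaussian height parameters and a simple offset search); it omits the reflectivity channel and the rasterized multiresolution acceleration used in the paper.

```python
import math

def cell_log_likelihood(value, mean, var):
    """Log-likelihood of one observation under a 1-D Gaussian cell model."""
    return -0.5 * (math.log(2 * math.pi * var) + (value - mean) ** 2 / var)

def scan_log_likelihood(points, gmm_map, offset):
    """Score a candidate (dx, dy) offset of the scan against the Gaussian map.

    points:  list of (x, y, z) LIDAR returns in the vehicle frame
    gmm_map: dict mapping integer grid cells (i, j) -> (z_mean, z_var)
    offset:  candidate translation (dx, dy) being evaluated
    """
    dx, dy = offset
    score = 0.0
    for x, y, z in points:
        cell = (int(round(x + dx)), int(round(y + dy)))
        if cell in gmm_map:
            z_mean, z_var = gmm_map[cell]
            score += cell_log_likelihood(z, z_mean, z_var)
    return score

# The best offset maximizes the summed log-likelihood; in the paper this search
# is accelerated with a rasterized, coarse-to-fine multiresolution evaluation.
```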


Sensors ◽  
2020 ◽  
Vol 20 (12) ◽  
pp. 3388
Author(s):  
Atle Aalerud ◽  
Joacim Dybedal ◽  
Dipendra Subedi

This paper describes the first simulations and experimental results of a novel segmented Light Detection And Ranging (LiDAR) reflector. Large portions of rotating-LiDAR data are typically discarded due to occlusion or a misplaced field of view (FOV). The proposed reflector solves this problem by reflecting the entire FOV of the rotating LiDAR towards a target. Optical simulation results, using Zemax OpticStudio, suggest that adding the reflector reduces the range of the embedded LiDAR by only 3.9%. Furthermore, pattern simulation results show that a radially reshaped FOV can be configured to maximize point cloud density, maximize coverage, or a combination of the two. Here, the maximum density is defined by the number of mirror segments in the reflector. Finally, a prototype was used for validation. Intensity, Euclidean error, and sample standard deviation were evaluated and, except for reduced intensity values, no significant reduction in the LiDAR's performance was found. Conversely, the number of usable measurements increased drastically. The mirrors of the reflector give the LiDAR multiple viewpoints of the target. Ultimately, it is argued that this can enhance the object revisit rate, instantaneous resolution, object classification range, and robustness against occlusion and adverse weather conditions. Consequently, the reflector design enables long-range rotating LiDARs to achieve the robust super-resolution needed for autonomous driving at highway speeds.
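The density-versus-coverage trade-off can be illustrated with a back-of-the-envelope calculation: if the reflector folds the LiDAR's full azimuthal sweep onto a narrower target sector, the angular sampling density on the target grows roughly with the ratio of the two angles, capped by the number of mirror segments (as the abstract notes). The snippet below is a hedged sketch with assumed numbers, not the Zemax simulation used in the paper.

```python
def density_gain(lidar_fov_deg, target_sector_deg, mirror_segments):
    """Approximate azimuthal density gain when a segmented reflector
    folds the LiDAR's sweep onto a smaller target sector.

    Each mirror segment contributes one overlapping view of the target,
    so the achievable gain is bounded by the number of segments.
    """
    geometric_gain = lidar_fov_deg / target_sector_deg
    return min(geometric_gain, float(mirror_segments))

# Assumed configuration: 360 deg rotating LiDAR, 45 deg target sector, 8 mirrors.
print(density_gain(360.0, 45.0, 8))   # -> 8.0x denser sampling on the target
```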


Sensors ◽  
2021 ◽  
Vol 21 (22) ◽  
pp. 7461
Author(s):  
Jisoo Kim ◽  
Bum-jin Park ◽  
Chang-gyun Roh ◽  
Youngmin Kim

The performance of LiDAR sensors deteriorates under adverse weather conditions such as rainfall. However, few studies have empirically analyzed this phenomenon. Hence, we investigated differences in sensor data due to environmental changes (distance from objects (road signs), object material, vehicle (sensor) speed, and amount of rainfall) during LiDAR sensing of road facilities. The indicators used to verify LiDAR performance were the number of point-cloud points (NPC) and intensity. Differences in the indicators were tested through a two-way ANOVA. First, both NPC and intensity increased with decreasing distance. Second, despite some exceptions, changes in speed did not affect the indicators. Third, NPC values did not differ between materials, while the intensity of each material followed the order aluminum > steel > plastic > wood, although exceptions were found. Fourth, with an increase in rainfall, both indicators decreased for all materials; specifically, under rainfall of 40 mm/h or more, a substantial reduction was observed. These results demonstrate that LiDAR must overcome the challenges posed by inclement weather to be applicable to road facilities that improve the effectiveness of autonomous driving sensors.
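A two-way ANOVA of this kind can be set up with statsmodels as shown below. The column names, file name, and data layout are hypothetical placeholders, not the study's actual dataset; the sketch only illustrates testing NPC against two factors (material and rainfall) with their interaction.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical long-format data: one row per LiDAR measurement.
# Columns: 'npc' (number of point-cloud points), 'material', 'rainfall_mmh'.
df = pd.read_csv("lidar_rain_measurements.csv")   # assumed file name

# Two-way ANOVA with interaction: does NPC depend on material, rainfall, or both?
model = ols("npc ~ C(material) * C(rainfall_mmh)", data=df).fit()
anova_table = sm.stats.anova_lm(model, typ=2)
print(anova_table)
```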


Author(s):  
Mirsad Kulović ◽  
Slavko Davidović

Pedestrians represent the most vulnerable category of road users. Increasingly complex traffic conditions in cities across Europe, including Bosnia and Herzegovina (BiH), mean that pedestrians often experience traffic as a challenge. Studies of pedestrian behavior at signalized pedestrian crossings conclude that there is a high level of insecurity and a high percentage of unsafe crossings. Countdown timers added to pedestrian signals display the remaining duration of the red light, i.e. the time until the start of the green light that allows pedestrians to cross the street safely. This paper analyzes the effect of countdown pedestrian signals (CPSs) in different weather conditions, comparing pedestrian behavior (crossing on red) without and with CPSs under four conditions: sun, snow, rain, and no precipitation at a temperature of 0 °C. The study site is a signalized pedestrian crossing over a four-lane road in Banja Luka, BiH.
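A comparison of red-light crossing rates with and without the countdown signal can be made with a simple chi-square test of independence. The counts below are illustrative placeholders, not the study's observed data; the sketch only shows the shape of such a test.

```python
from scipy.stats import chi2_contingency

# Hypothetical counts of observed pedestrians (illustrative only):
# rows = without CPS / with CPS, columns = crossed on red / waited for green.
observed = [[48, 352],    # without countdown signal
            [29, 371]]    # with countdown signal

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, p = {p_value:.4f}")
```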

