Performance Test of Autonomous Vehicle Lidar Sensors Under Different Weather Conditions

Author(s):  
Li Tang ◽  
Yunpeng Shi ◽  
Qing He ◽  
Adel W. Sadek ◽  
Chunming Qiao

This paper analyzes the performance of Light Detection and Ranging (Lidar) sensors in detecting pedestrians under different weather conditions. The Lidar sensor is a key sensor in autonomous vehicles because it provides high-resolution object information, so it is important to analyze its performance. In this study, an autonomous bus performed several pedestrian detection tests in a parking lot at the University at Buffalo. Comparing the pedestrian detection results on rainy days with those on sunny days shows that rain can cause unstable performance and even failures of Lidar sensors to detect pedestrians in time. After analyzing the test data, three logit models are built to estimate the probability of Lidar detection failure. Rainy weather plays an important role in Lidar detection performance; moreover, the distance between the vehicle and the pedestrian, as well as the vehicle's velocity, are also important. This paper can provide a way to improve Lidar detection performance in autonomous vehicles.
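A logit model of this kind can be sketched as follows. The coefficients below are illustrative placeholders, not the values estimated in the paper; the sketch only shows the functional form of a logistic model of detection failure with rain, distance, and speed as predictors.

```python
import math

def lidar_failure_probability(rain, distance_m, speed_mps,
                              b0=-4.0, b_rain=1.5, b_dist=0.02, b_speed=0.1):
    """Logit model of Lidar pedestrian-detection failure.

    rain: 1 if raining, 0 otherwise. All coefficients are
    hypothetical placeholders, not the paper's estimates.
    """
    z = b0 + b_rain * rain + b_dist * distance_m + b_speed * speed_mps
    return 1.0 / (1.0 + math.exp(-z))

# With positive rain/distance/speed coefficients, rain raises the
# estimated failure probability, all else being equal.
p_dry = lidar_failure_probability(rain=0, distance_m=20, speed_mps=5)
p_wet = lidar_failure_probability(rain=1, distance_m=20, speed_mps=5)
```

In practice the coefficients would be fitted to the test data (e.g. by maximum likelihood); the sketch only illustrates how the fitted model converts the three predictors into a failure probability.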

Sensors ◽  
2021 ◽  
Vol 21 (21) ◽  
pp. 7267
Author(s):  
Luiz G. Galvao ◽  
Maysam Abbod ◽  
Tatiana Kalganova ◽  
Vasile Palade ◽  
Md Nazmul Huda

Autonomous Vehicles (AVs) have the potential to solve many traffic problems, such as accidents, congestion and pollution. However, there are still challenges to overcome; for instance, AVs need to accurately perceive their environment to safely navigate in busy urban scenarios. The aim of this paper is to review recent articles on computer vision techniques that can be used to build an AV perception system. AV perception systems need to accurately detect non-static objects and predict their behaviour, as well as detect static objects and recognise the information they are providing. This paper, in particular, focuses on the computer vision techniques used to detect pedestrians and vehicles. There have been many papers and reviews on pedestrian and vehicle detection so far; however, most of them reviewed pedestrian or vehicle detection separately. This review aims to present an overview of AV systems in general, and then review and investigate several computer vision techniques for detecting pedestrians and vehicles. The review concludes that both traditional and Deep Learning (DL) techniques have been used for pedestrian and vehicle detection, and that DL techniques have shown the best results. Although good detection results have been achieved for pedestrians and vehicles, current algorithms still struggle to detect small, occluded and truncated objects. In addition, there is limited research on how to improve detection performance in difficult light and weather conditions. Most of the algorithms have been tested on well-recognised datasets such as Caltech and KITTI; however, these datasets have their own limitations. Therefore, this paper recommends that future work be carried out on newer, more challenging datasets, such as PIE and BDD100K.


2018 ◽  
Vol 7 (2.30) ◽  
pp. 39 ◽  
Author(s):  
Neeru Mago ◽  
Dr Satish Kumar

In recent years, finding a vacant parking lot has become a time-consuming and cumbersome job, especially in urban areas. Potential visitors and customers struggle to find a vacant space for their vehicles and keep circling the parking area, which not only increases frustration but also wastes time and energy. To direct drivers to an optimal parking lot immediately, efficient car-park routing systems are required. Current systems for detecting vacant parking lots are either based on very expensive sensor technology, or on video-based technologies that do not consider varying weather conditions such as sunny, cloudy and rainy weather. In the proposed work, a hybrid model for outdoor parking detection is designed that identifies the empty spaces available in parking lots and the slots becoming vacant in real time. The model is based on training, validating and testing images collected from various heights and angles of different parking areas and stored in a repository. In this research, more advanced feature extractors and machine learning algorithms are evaluated in order to find vacant parking lots in outdoor parking areas.
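The feature-extraction-plus-classifier pipeline described above can be sketched in miniature. The "features" here (mean intensity and variance of a patch) and the nearest-centroid classifier are deliberately trivial stand-ins for the more advanced extractors and learners the paper evaluates; a patch is modelled as a flat list of pixel intensities.

```python
def extract_features(patch):
    """Toy feature extractor for a parking-slot image patch: mean
    intensity and intensity variance. Real systems would use HOG,
    CNN features, etc.; this is only an illustrative stand-in."""
    n = len(patch)
    mean = sum(patch) / n
    var = sum((p - mean) ** 2 for p in patch) / n
    return (mean, var)

def train_centroids(samples):
    """Nearest-centroid classifier: average feature vector per class."""
    centroids = {}
    for label, patches in samples.items():
        feats = [extract_features(p) for p in patches]
        centroids[label] = tuple(sum(f[i] for f in feats) / len(feats)
                                 for i in range(2))
    return centroids

def classify(patch, centroids):
    """Assign the patch to the class with the nearest feature centroid."""
    f = extract_features(patch)
    return min(centroids,
               key=lambda c: sum((f[i] - centroids[c][i]) ** 2
                                 for i in range(2)))

# Toy training data: empty asphalt is uniform, an occupied slot has
# high-contrast pixels from the vehicle.
samples = {"vacant": [[40] * 16, [45] * 16],
           "occupied": [[20, 200] * 8, [30, 220] * 8]}
centroids = train_centroids(samples)
label = classify([42] * 16, centroids)
```

The same train/validate/test split the abstract describes would then be applied over patches cropped from the parking-area images at each height and angle.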


2020 ◽  
Vol 17 (6) ◽  
pp. 172988142097227
Author(s):  
Thomas Andzi-Quainoo Tawiah

Autonomous vehicles include driverless, self-driving and robotic cars, and other platforms capable of sensing and interacting with their environment and navigating without human help. Semiautonomous vehicles, on the other hand, achieve partial autonomy with human intervention, for example in driver-assisted vehicles. Autonomous vehicles first interact with their surroundings using mounted sensors. Typically, visual sensors are used to acquire images, and computer vision, signal processing, machine learning and other techniques are applied to acquire, process and extract information. The control subsystem interprets sensory information to identify an appropriate navigation path to the destination and an action plan to carry out tasks. Feedback is also elicited from the environment to improve behaviour. To increase sensing accuracy, autonomous vehicles are equipped with many sensors (light detection and ranging (LiDAR), infrared, sonar, inertial measurement units, etc.), as well as a communication subsystem. Autonomous vehicles face several challenges, such as unknown environments, blind spots (unseen views), non-line-of-sight scenarios, poor sensor performance due to weather conditions, sensor errors, false alarms, limited energy, limited computational resources, algorithmic complexity, human–machine communication, and size and weight constraints. To tackle these problems, several algorithmic approaches have been implemented covering the design of sensors, processing, control and navigation. This review seeks to provide up-to-date information on the requirements, algorithms and main challenges in the use of machine vision–based techniques for navigation and control in autonomous vehicles. An application using a land-based vehicle as an Internet of Things-enabled platform for pedestrian detection and tracking is also presented.


2021 ◽  
Vol 9 ◽  
Author(s):  
Abhishek Sharma ◽  
Sushank Chaudhary ◽  
Jyoteesh Malhotra ◽  
Muhammad Saadi ◽  
Sattam Al Otaibi ◽  
...  

In recent years, there has been considerable demand and growth in the autonomous vehicle industry, which raises the challenge of designing highly efficient photonic radars that can detect and range targets with a resolution of a few centimeters. Existing radar technology is unable to meet such requirements due to limitations on available bandwidth. Another issue is the strong attenuation encountered when working under diverse atmospheric conditions at higher frequencies. The proposed photonic radar model is developed with these requirements and challenges in mind, using the frequency-modulated direct detection technique over a free-space range of 750 m. The results show improved range detection in terms of received power, with an acceptable signal-to-noise ratio and range under adverse climatic conditions.
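The bandwidth limitation mentioned above can be made concrete with the standard relation for frequency-modulated ranging, range resolution ΔR = c / (2B). The bandwidth values below are illustrative, not taken from the paper; the point is that centimeter-level resolution requires gigahertz-scale sweep bandwidth, which is easier to achieve with photonic front ends than with conventional RF electronics.

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def range_resolution(bandwidth_hz):
    """Range resolution of a frequency-modulated radar: delta_R = c / (2 * B)."""
    return C / (2.0 * bandwidth_hz)

# A 600 MHz sweep resolves ~25 cm; a 6 GHz sweep resolves ~2.5 cm.
res_600mhz = range_resolution(600e6)
res_6ghz = range_resolution(6e9)
```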


2020 ◽  
Vol 14 (1) ◽  
pp. 164-173
Author(s):  
Yair Wiseman

Background: An autonomous vehicle can go unaccompanied to park itself in a remote parking lot, without a driver or passenger inside. Unlike traditional vehicles, an autonomous vehicle can drop passengers off near any location. Afterwards, instead of cruising for nearby free parking, the vehicle can be parked automatically in a remote parking lot, which can be on a rural fringe of the city where inexpensive land is more readily available. Objective: The study aims to avoid mistakes in vehicle identification made by the automatic identification device. Methods: It is proposed to back up the license plate identification procedure with three distinct identification techniques: RFID, Bluetooth and OCR, with the aim of considerably reducing identification mistakes. Results: RFID is the most reliable identification device, but Bluetooth and OCR can improve its reliability. Conclusion: A very high level of vehicle identification reliability is achievable. Parking lots for autonomous vehicles can be very efficient and low-priced. The critical difficulty is automatically making sure that the autonomous vehicle is correctly identified at the gate.
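The redundancy idea can be sketched as a majority vote over the three independent reads. The interface below is hypothetical (the paper does not specify one); it only illustrates how two agreeing channels can override a single misread, e.g. an OCR confusion between "B" and "8".

```python
from collections import Counter

def identify_vehicle(rfid_id, bluetooth_id, ocr_plate):
    """Fuse three independent reads of the same vehicle identifier.

    Returns the majority value, or None when all three disagree
    (the gate would then fall back to manual checking).
    Hypothetical interface, for illustration only.
    """
    votes = Counter([rfid_id, bluetooth_id, ocr_plate])
    value, count = votes.most_common(1)[0]
    return value if count >= 2 else None

# OCR misreads 'B' as '8', but RFID and Bluetooth agree.
result = identify_vehicle("AB123CD", "AB123CD", "A8123CD")
```

If the per-channel error rates are independent, a two-of-three vote fails only when two channels err on the same vehicle, which is far less likely than any single channel erring.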


Author(s):  
Guoqiang Chen ◽  
Mengchao Liu ◽  
Hongpeng Zhou ◽  
Bingxin Bai

Background: Vehicle pose detection plays an important role in monitoring vehicle behavior and the parking situation, and real-time detection of vehicle pose with high accuracy is of great importance. Objective: The goal of this work is to construct a new network to detect the vehicle angle based on a regression Convolutional Neural Network (CNN). The main contribution is that several traditional regression CNNs are combined into a Multi-Collaborative Regression CNN (MCR-CNN), which greatly enhances vehicle angle detection precision and eliminates abnormal detection errors. Methods: Two challenges of the traditional regression CNN in detecting the vehicle pose angle are revealed. The first is detection failure resulting from the conversion of the periodic angle to a linear angle, while the second is a large detection error when the training sample value is very small. The MCR-CNN is proposed to solve the first challenge, and a two-stage method is proposed to solve the second. The architecture of the MCR-CNN is designed in detail. After the training and testing data sets are constructed, the MCR-CNN is trained and tested for vehicle angle detection. Results: The experimental results show that testing samples with an error below 4° account for 95% of the total testing samples with the proposed MCR-CNN, which has significant advantages over traditional vehicle pose detection methods. Conclusion: The proposed MCR-CNN can not only detect the vehicle angle in real time but also has very high detection accuracy and robustness. The proposed approach can be used for autonomous vehicles and parking lot monitoring.
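The periodicity problem described above (a 359° label and a 1° label are numerically far apart but physically adjacent) is commonly handled by regressing the pair (sin θ, cos θ) and decoding the prediction with atan2. The sketch below illustrates that generic encoding, not the MCR-CNN architecture itself.

```python
import math

def encode_angle(deg):
    """Map a periodic angle to a continuous (sin, cos) regression target."""
    rad = math.radians(deg)
    return (math.sin(rad), math.cos(rad))

def decode_angle(s, c):
    """Recover the angle in [0, 360) from a (sin, cos) prediction."""
    return math.degrees(math.atan2(s, c)) % 360.0

# 359 deg and 1 deg are far apart as raw numbers (|359 - 1| = 358)
# but close in (sin, cos) space, so a regression loss behaves smoothly
# across the wrap-around.
gap = math.dist(encode_angle(359.0), encode_angle(1.0))
```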


2019 ◽  
Vol 9 (11) ◽  
pp. 2335 ◽  
Author(s):  
Sarfraz Ahmed ◽  
M. Nazmul Huda ◽  
Sujan Rajbhandari ◽  
Chitta Saha ◽  
Mark Elshaw ◽  
...  

As autonomous vehicles become more common on the roads, their advancement raises safety concerns for vulnerable road users, such as pedestrians and cyclists. This paper presents a review of recent developments in pedestrian and cyclist detection and intent estimation to increase the safety of autonomous vehicles, for both the driver and other road users. Understanding the intentions of pedestrians and cyclists enables the self-driving vehicle to take action to avoid incidents. To make this possible, the development of methods and techniques for autonomous vehicles, such as deep learning (DL), is explored. For example, pedestrian detection has been significantly advanced by DL approaches such as the Fast Region-based Convolutional Neural Network (Fast R-CNN), Faster R-CNN and the Single Shot Detector (SSD). Although DL has been around for several decades, the hardware needed to realise these techniques has only recently become viable. Using these DL methods for pedestrian and cyclist detection, and applying them to tracking, motion modelling and pose estimation, can yield a successful and accurate method of intent estimation for vulnerable road users. Although research on vision-based pedestrian detection has grown, further attention should be given to cyclist detection. To further improve safety for these vulnerable road users (VRUs), approaches such as sensor fusion and intent estimation should be investigated.


Webology ◽  
2021 ◽  
Vol 18 (05) ◽  
pp. 1176-1183
Author(s):  
Thylashri S ◽  
Manikandaprabu N ◽  
Jayakumar T ◽  
Vijayachitra S ◽  
Kiruthiga G

Pedestrians are essential objects in computer vision. Pedestrian detection in images or videos plays an important role in many applications, such as real-time monitoring, counting pedestrians at various events and detecting falls of the elderly. It is formulated as the problem of automatically identifying and locating pedestrians in pictures or videos. In real images, pedestrian detection is an important task for major applications such as video surveillance and autonomous driving systems. Pedestrian detection is also an important feature of autonomous vehicle driving systems because identifying pedestrians minimizes accidents between vehicles and pedestrians. Following the research trend in vehicle electronics and driving safety, vision-based pedestrian recognition technologies for smart vehicles have established themselves, warning the driver or slowing down the vehicle. In general, the visual pedestrian detection process can be broken down into three consecutive steps: pedestrian detection, pedestrian recognition and pedestrian tracking. In-vehicle visual pedestrian recognition is also covered. Finally, we discuss the challenges and the evolution of future research in this field.


Author(s):  
C. K. Toth ◽  
Z. Koppanyi ◽  
M. G. Lenzano

Abstract. The ongoing proliferation of remote sensing technologies in the consumer market has been rapidly reshaping the geospatial data acquisition world and, subsequently, the data processing and information dissemination processes. Smartphones have clearly established themselves as the primary crowdsourced data generators recently, and provide an incredible volume of remotely sensed data with fairly good georeferencing. Besides their potential to map the environment of smartphone users, they provide information to monitor the dynamic content of the object space. For example, real-time traffic monitoring is one of the best-known and most widely used real-time crowdsensed applications, where the smartphones in vehicles jointly contribute to an unprecedentedly accurate traffic flow estimation. Now we are witnessing another milestone, as driverless vehicle technologies will become another major source of crowdsensed data. Due to safety concerns, the requirements for sensing are higher, as the vehicles should sense other vehicles and the road infrastructure under any condition, not just in daylight and favorable weather, and at very high speed. Furthermore, the sensing is based on redundant and complementary sensor streams to achieve a robust object space reconstruction, needed to avoid collisions and maintain normal travel patterns. At this point, the remotely sensed data in assisted and autonomous vehicles are discarded, or only partially recorded for R&D purposes. However, in the long run, as vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communication technologies mature, recording data will become commonplace and will provide an excellent source of geospatial information for road mapping, traffic monitoring, etc.
This paper reviews the key characteristics of crowdsourced vehicle data based on experimental data, and then the processing aspects, including the Data Science and Deep Learning components.


2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Rahee Walambe ◽  
Aboli Marathe ◽  
Ketan Kotecha ◽  
George Ghinea

The computer vision systems driving autonomous vehicles are judged by their ability to detect objects and obstacles in the vicinity of the vehicle in diverse environments. Enhancing the ability of a self-driving car to distinguish between the elements of its environment under adverse conditions is an important challenge in computer vision. For example, poor weather conditions like fog and rain corrupt images, which can cause a drastic drop in object detection (OD) performance. The primary navigation of autonomous vehicles depends on the effectiveness of the image processing techniques applied to the data collected from various visual sensors. Therefore, it is essential to develop the capability to detect objects like vehicles and pedestrians under challenging conditions such as unpleasant weather. To solve this problem, ensembling multiple baseline deep learning models under different voting strategies for object detection, combined with data augmentation to boost the models' performance, is proposed. The data augmentation technique is particularly useful and works with limited training data for OD applications. Furthermore, using the baseline models significantly speeds up the OD process compared to custom models, thanks to transfer learning. The ensembling approach can therefore be highly effective in resource-constrained devices deployed on autonomous vehicles in uncertain weather conditions. The applied techniques demonstrated an increase in accuracy over the baseline models, identifying objects in images captured in adverse foggy and rainy weather and reaching 32.75% mean average precision (mAP) and 52.56% average precision (AP) in detecting cars under the fog and rain conditions present in the dataset. The effectiveness of multiple voting strategies for bounding box predictions on the dataset is also demonstrated. These strategies help increase the explainability of object detection in autonomous systems and improve the performance of the ensemble techniques over the baseline models.
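One common family of voting strategies for combining boxes from several detectors is an IoU-based consensus vote: keep a box only when at least k models propose an overlapping one. The sketch below is a generic illustration of that idea with hand-made boxes, not the paper's exact voting scheme or trained models.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def consensus_vote(model_boxes, k=2, iou_thr=0.5):
    """Keep a box when at least k models predict an overlapping box,
    deduplicating boxes that overlap an already-kept one."""
    kept = []
    for i, boxes in enumerate(model_boxes):
        for box in boxes:
            support = 1 + sum(  # the proposing model supports its own box
                any(iou(box, other) >= iou_thr for other in boxes_j)
                for j, boxes_j in enumerate(model_boxes) if j != i
            )
            if support >= k and not any(iou(box, kb) >= iou_thr for kb in kept):
                kept.append(box)
    return kept

# Two of three models agree on a car box; the third model's
# unsupported box is rejected by the k=2 consensus rule.
preds = [
    [(10, 10, 50, 50)],        # model A
    [(12, 11, 52, 49)],        # model B, overlaps A
    [(200, 200, 240, 240)],    # model C, unsupported
]
boxes = consensus_vote(preds, k=2)
```

Lowering k toward 1 gives an "affirmative" strategy (any model's box is kept, raising recall), while k equal to the number of models gives a "unanimous" strategy (raising precision); the choice trades false positives against misses.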

