A review of algorithms and techniques for image-based recognition and inference in mobile robotic systems

2020, Vol 17 (6), pp. 172988142097227
Author(s): Thomas Andzi-Quainoo Tawiah

Autonomous vehicles include driverless, self-driving, and robotic cars, and other platforms capable of sensing and interacting with their environment and navigating without human help. Semiautonomous vehicles, on the other hand, achieve only partial autonomy and require human intervention, as in driver-assisted vehicles. Autonomous vehicles first interact with their surroundings using mounted sensors. Typically, visual sensors acquire images, and computer vision, signal processing, machine learning, and other techniques are applied to process them and extract information. The control subsystem interprets this sensory information to identify an appropriate navigation path to the destination and an action plan for carrying out tasks. Feedback is also elicited from the environment so that the vehicle can improve its behavior. To increase sensing accuracy, autonomous vehicles are equipped with many sensors (light detection and ranging (LiDAR), infrared, sonar, inertial measurement units, etc.) as well as a communication subsystem. Autonomous vehicles face several challenges, such as unknown environments, blind spots (unseen views), non-line-of-sight scenarios, poor sensor performance in adverse weather, sensor errors, false alarms, limited energy, limited computational resources, algorithmic complexity, human–machine communication, and size and weight constraints. To tackle these problems, several algorithmic approaches have been implemented, covering sensor design, processing, control, and navigation. This review seeks to provide up-to-date information on the requirements, algorithms, and main challenges in the use of machine vision–based techniques for navigation and control in autonomous vehicles. An application using a land-based vehicle as an Internet of Things-enabled platform for pedestrian detection and tracking is also presented.
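
To make the image-based pedestrian detection mentioned above concrete, here is a minimal sketch using OpenCV's stock HOG + linear-SVM people detector. This is a generic classical technique of the kind the review surveys, not the author's IoT platform; the input file name and detector parameters are placeholders.

```python
# Minimal sketch of a classical vision-based pedestrian detector,
# using OpenCV's built-in HOG + linear SVM people detector.
# Illustrates the kind of technique surveyed, not the review's own method.
import cv2
import numpy as np

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

frame = cv2.imread("camera_frame.jpg")  # placeholder input image
boxes, weights = hog.detectMultiScale(frame, winStride=(8, 8), padding=(8, 8), scale=1.05)

for (x, y, w, h), score in zip(boxes, np.ravel(weights)):
    # Draw each detection and report its confidence score.
    cv2.rectangle(frame, (int(x), int(y)), (int(x + w), int(y + h)), (0, 255, 0), 2)
    print(f"pedestrian at ({x}, {y}), size {w}x{h}, score {score:.2f}")
```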

Author(s): Li Tang, Yunpeng Shi, Qing He, Adel W. Sadek, Chunming Qiao

This paper analyzes the performance of the Light Detection and Ranging (Lidar) sensor in detecting pedestrians under different weather conditions. Lidar is a key sensor in autonomous vehicles because it provides high-resolution object information, so analyzing its performance is important. The study involved an autonomous bus performing several pedestrian detection tests in a parking lot at the University at Buffalo. Comparing the pedestrian detection results on rainy days with those on sunny days shows that rain can cause unstable performance and even failures of Lidar sensors to detect pedestrians in time. After analyzing the test data, three logit models were built to estimate the probability of Lidar detection failure. Rainy weather plays an important role in degrading Lidar detection performance; the distance between the vehicle and the pedestrian, as well as the autonomous vehicle's velocity, are also important factors. The findings point to ways of improving Lidar detection performance in autonomous vehicles.
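
Since the failure model described above is a binary logit over rain, pedestrian distance, and vehicle speed, the sketch below shows how such a model can be fitted. The feature values and labels are hypothetical placeholders, not the Buffalo test data.

```python
# Hedged sketch: fitting a binary logit model for Lidar detection failure.
# The data below are hypothetical placeholders, not the paper's measurements.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: rain indicator (0/1), distance to pedestrian (m), vehicle speed (m/s)
X = np.array([
    [1, 25.0, 4.0],
    [1, 10.0, 2.0],
    [0, 30.0, 5.0],
    [0,  8.0, 1.5],
    [1, 35.0, 5.5],
    [0, 20.0, 3.0],
])
y = np.array([1, 0, 0, 0, 1, 0])  # 1 = Lidar failed to detect the pedestrian in time

model = LogisticRegression().fit(X, y)

# Estimated failure probability for a new scenario: rain, 28 m away, 4.5 m/s
p_fail = model.predict_proba([[1, 28.0, 4.5]])[0, 1]
print(f"estimated failure probability: {p_fail:.2f}")
```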


Sensors, 2021, Vol 21 (4), pp. 1159
Author(s): Tao Wu, Jun Hu, Lei Ye, Kai Ding

Pedestrian detection plays an essential role in the navigation system of autonomous vehicles. Multisensor fusion-based approaches are usually used to improve detection performance. In this study, we aimed to develop a score fusion-based pedestrian detection algorithm by integrating the data of two light detection and ranging systems (LiDARs). We first evaluated a two-stage object-detection pipeline for each LiDAR, consisting of object proposal and fine classification. The scores from these two classifiers were then fused using the Bayesian rule to generate the final result. To improve proposal performance, we applied two features: the central points density feature, which acts as a filter to speed up the process and reduce false alarms; and the location feature, comprising the density distribution and height difference distribution of the point cloud, which describes an object's profile and location in a sliding window. Extensive experiments on the KITTI dataset and a self-built dataset show that our method produces highly accurate pedestrian detection results in real time. The proposed method considers not only accuracy and efficiency but also flexibility across different modalities.
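
The score fusion step can be illustrated with a small sketch that combines the two per-LiDAR classifier probabilities via Bayes' rule under a conditional-independence assumption. The probabilities and prior below are hypothetical, and this is a generic formulation rather than the authors' exact one.

```python
# Hedged sketch: fusing two classifiers' pedestrian probabilities with Bayes' rule,
# assuming the two LiDARs' scores are conditionally independent given the class.
import numpy as np

def fuse_scores(p1, p2, prior=0.5):
    """Combine per-sensor posteriors p_i = P(pedestrian | score_i) into one posterior."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    odds1 = p1 / (1.0 - p1)
    odds2 = p2 / (1.0 - p2)
    prior_odds = prior / (1.0 - prior)
    fused_odds = odds1 * odds2 / prior_odds   # naive-Bayes combination of odds
    return fused_odds / (1.0 + fused_odds)

# Hypothetical per-LiDAR classifier outputs for three candidate objects
p_lidar_a = [0.9, 0.4, 0.6]
p_lidar_b = [0.8, 0.3, 0.7]
print(fuse_scores(p_lidar_a, p_lidar_b))  # first candidate fuses to about 0.97
```

Two agreeing high scores reinforce each other, while a confident detection from one sensor can compensate for a weak one from the other, which is the practical appeal of score-level fusion.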


2021, Vol 13 (10), pp. 5685
Author(s): Panbo Guan, Hanyu Zhang, Zhida Zhang, Haoyuan Chen, Weichao Bai, ...

Since the implementation of the Air Pollution Prevention and Control Action Plan (APPCAP), China has witnessed a change in air quality over the past five years, yet the main influencing factors remain relatively unexplored. Taking the Beijing-Tianjin-Hebei (BTH) and Yangtze River Delta (YRD) regions as typical city clusters, the Weather Research and Forecasting (WRF) model and the Comprehensive Air Quality Model with Extensions (CAMx) were used to quantify the meteorological and emission contributions and the PM2.5 flux distribution. The results showed that PM2.5 concentrations in BTH and YRD declined significantly, by 39.6% and 28.1%, respectively. For the meteorological contribution, the two regions showed a similar tendency, with unfavorable conditions in 2013–2015 (contributing 1.6–3.8 μg/m3 and 1.1–3.6 μg/m3) and favorable conditions in 2016 (contributing −1.5 μg/m3 and −0.2 μg/m3). Furthermore, the absolute value of the net flux intensity was positively correlated with the degree of favorable/unfavorable weather conditions. With respect to emission intensity, the total net inflow flux increased and the outflow flux across the border decreased significantly as emissions increased. In short, these results confirm the effectiveness of regional joint emission control and provide scientific support for the proposed joint control measures.


2021, Vol 1 (3), pp. 672-685
Author(s): Shreya Lohar, Lei Zhu, Stanley Young, Peter Graf, Michael Blanton

This study reviews obstacle detection technologies in vegetation for autonomous vehicles and robots. Autonomous vehicles used in agriculture and as lawn mowers face many environmental obstacles that are difficult for vehicle sensors to recognize. This review provides information on choosing appropriate sensors to detect obstacles through vegetation, based on experiments carried out in different agricultural fields. The experimental setups in the literature consist of sensors placed in front of obstacles, including a thermal camera; a red, green, blue (RGB) camera; a 360° camera; light detection and ranging (LiDAR); and radar. These sensors were used either in combination or individually on agricultural vehicles to detect objects hidden inside the agricultural field. The thermal camera successfully detected hidden objects such as barrels, human mannequins, and humans, as did LiDAR in one experiment. The RGB camera and stereo camera were less effective at detecting hidden objects than at detecting protruding ones. Radar detects hidden objects easily but lacks resolution. Hyperspectral sensing systems can identify and classify objects, but they consume a lot of storage. To obtain clearer and more robust data on hidden objects in vegetation and in extreme weather conditions, further experiments should be performed across various climatic conditions, combining active and passive sensors.


Author(s): J. Schachtschneider, C. Brenner

Abstract. The development of automated and autonomous vehicles requires highly accurate long-term maps of the environment. Urban areas contain a large number of dynamic objects that change over time. Since permanent observation of the environment is impossible and there will always be a first visit to an unknown or changed area, a map of an urban environment needs to model such dynamics. In this work, we use LiDAR point clouds from a large long-term measurement campaign to investigate temporal changes. The data set was recorded along a 20 km route in Hannover, Germany, with a Mobile Mapping System over a period of one year in bi-weekly measurements. It covers a variety of urban objects and areas, weather conditions, and seasons. Based on this data set, we show how scene and seasonal effects influence the measurement likelihood, and that multi-temporal maps lead to the best positioning results.
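
As a rough illustration of the measurement-likelihood idea (not the authors' implementation), the sketch below scores a LiDAR scan against a point-cloud map using nearest-neighbor distances under a Gaussian noise model. The map, scan, and noise level are synthetic assumptions, and a real pipeline would first align the scan with the current pose estimate.

```python
# Hedged sketch: scan-to-map measurement likelihood with a Gaussian noise model.
# Map and scan points are synthetic placeholders.
import numpy as np
from scipy.spatial import cKDTree

def log_measurement_likelihood(scan_pts, map_pts, sigma=0.1):
    """Sum of log N(d; 0, sigma^2) over each scan point's nearest-neighbor distance d."""
    tree = cKDTree(map_pts)
    d, _ = tree.query(scan_pts)
    return np.sum(-0.5 * (d / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi)))

rng = np.random.default_rng(0)
map_pts = rng.uniform(0, 10, size=(500, 3))              # hypothetical map point cloud
scan_pts = map_pts[:50] + rng.normal(0, 0.05, (50, 3))   # hypothetical scan near the map
print(log_measurement_likelihood(scan_pts, map_pts))
```

Seasonal change (vegetation, parked cars) increases the nearest-neighbor distances and thus lowers this likelihood, which is why a multi-temporal map that reflects the current conditions scores better.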


2020, Vol 23 (6), pp. 53-64
Author(s): G. N. Lebedev, V. B. Malygin

We consider the problem of collaborative decision making (CDM) in the production process of airlines under dynamically changing conditions, when emergency situations arise that require changes to the action plan. Because the tasks to be solved in the production process differ in orientation, a solution may require a large or small number of candidate variants. The article presents a concrete example of such a situation affecting three typical services of the aviation complex, each with its own interests in the overall production process. The solution to this problem is the single option that favors the overall production process. For this purpose, several designations and assumptions have been introduced, and the list can be extended. Dynamic priorities are defined for each participant in the process. Optimization of collaborative decision making can be achieved either by an exhaustive search over solutions or by a genetic algorithm that yields a suboptimal solution meeting the participants' requirements with a smaller number of iterations in real time. In this example, we consider a situation that occurs at a real enterprise due to bad weather conditions. Dynamic priorities are assigned in a multiplicative form for delayed flights, taking into account the interests of the participants in the process; private criteria are formed for ranking flights at each step of rescheduling; and a genetic algorithm is applied. As a result, we obtained four solutions to the disruption caused by external factors. The first three options correspond to the interests of the three parties concerned, and the fourth is a consolidated one. All the solutions differed, which indicates the need for an objective and well-founded decision-making apparatus for joint management of the production process. The proposed mathematical apparatus has this capability and good prospects for implementation.
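
A compact sketch of the genetic-algorithm step might look as follows: each delayed flight receives a multiplicative priority from three hypothetical stakeholders, and the GA searches for a departure order with low total weighted waiting. The priorities, cost function, and GA parameters are illustrative assumptions, not the article's exact formulation.

```python
# Hedged sketch: genetic algorithm over departure orders of delayed flights,
# with multiplicative stakeholder priorities. All numbers are illustrative.
import random

def fitness(order, weights):
    # Total weighted waiting: high-priority flights should land in early slots.
    return sum(weights[f] * (slot + 1) for slot, f in enumerate(order))

def crossover(p1, p2):
    # Order-based crossover: keep a slice of p1, fill around it in p2's order.
    n = len(p1)
    a, b = sorted(random.sample(range(n), 2))
    middle = p1[a:b]
    rest = [f for f in p2 if f not in middle]
    return rest[:a] + middle + rest[a:]

def mutate(order, rate=0.2):
    order = order[:]
    if random.random() < rate:
        i, j = random.sample(range(len(order)), 2)
        order[i], order[j] = order[j], order[i]
    return order

def ga_schedule(weights, pop_size=30, generations=200):
    n = len(weights)
    population = [random.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=lambda o: fitness(o, weights))
        parents = population[: pop_size // 2]
        children = [mutate(crossover(*random.sample(parents, 2)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return min(population, key=lambda o: fitness(o, weights))

# Hypothetical priorities of each delayed flight from three stakeholders, combined multiplicatively.
stakeholder_scores = [(0.9, 0.4, 0.7), (0.5, 0.8, 0.6), (0.3, 0.9, 0.9), (0.7, 0.7, 0.2)]
weights = [a * b * c for a, b, c in stakeholder_scores]
print("suggested departure order (flight indices):", ga_schedule(weights))
```

For this simplified additive cost, sorting flights by priority would already be optimal; a genetic algorithm becomes worthwhile when the cost couples flights and stakeholders, as in the rescheduling problem described above.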


2018, Vol 7 (3.6), pp. 294
Author(s): Shantanu Misra, Vedika Parvez, Tarush Singh, E Chitra

Vehicle collisions leading to life-threatening accidents are a common problem and are increasing noticeably. This has created a need for Driver Assistance Systems (DAS), which help drivers sense nearby obstacles and drive safely. However, their inefficiency in unfavorable weather conditions, overcrowded roads, and the low signal penetration rates in India posed many challenges during implementation. In this paper, we present a portable Driver Assistance System that uses augmented reality. The headset model comprises five systems working in conjunction to assist the driver. The pedestrian detection module, together with the driver alert system, helps the driver focus his attention on obstacles in his line of sight, while the speech recognition, gesture recognition, and GPS navigation modules prevent the driver from getting distracted while driving. By addressing these two root causes of accidents, a cost-effective, portable, and holistic driver assistance system has been developed.
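
As a rough sketch of how the five subsystems could be coordinated each frame (all module interfaces, names, and stubs below are hypothetical placeholders, not the paper's design):

```python
# Hedged sketch: per-frame coordination of the five DAS modules described above.
# Every interface here is a hypothetical placeholder.
def assist_frame(frame, audio, hand_points, location,
                 detect_pedestrians, show_alert,
                 recognize_speech, recognize_gesture, navigate):
    # Safety path: highlight obstacles in the driver's line of sight.
    for obstacle in detect_pedestrians(frame):
        show_alert(obstacle)
    # Distraction-avoidance path: hands-free commands drive navigation.
    command = recognize_speech(audio) or recognize_gesture(hand_points)
    if command:
        navigate(command, location)

# Usage with trivial stubs, just to show the wiring:
assist_frame(
    frame="camera frame", audio="mic buffer", hand_points="hand keypoints",
    location=(12.98, 80.04),
    detect_pedestrians=lambda f: ["pedestrian ahead"],
    show_alert=print,
    recognize_speech=lambda a: None,
    recognize_gesture=lambda g: "navigate home",
    navigate=lambda cmd, loc: print("navigation command:", cmd),
)
```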


2001, Vol 123 (4), pp. 615-622
Author(s): Kunsoo Huh, Jongchul Jung, Jeffrey L. Stein

Model-based monitoring systems based on state observer theory often have poor performance with respect to accuracy, bandwidth, reliability (false alarms), and robustness. These limitations are closely related to ill-conditioning factors such as transient characteristics due to unknown initial values and round-off errors, and steady-state accuracy degraded by plant perturbations and sensor bias. In this paper, by minimizing the effects of the ill-conditioning factors, a well-conditioned observer is proposed for discrete-time systems. A performance index is defined to represent the quantitative effects of the ill-conditioning factors, and two design methods are described for the well-conditioned observers. The estimation performance of the well-conditioned observers is verified in simulations, where transient as well as steady-state error robustness to perturbations is shown to be better than or equal to Kalman filter performance, depending on the nature of the modeling errors. The estimation performance is also demonstrated on an experimental setup designed and built for this purpose.
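
For context, the sketch below implements the generic discrete-time Luenberger observer that such designs start from, with the gain obtained by pole placement. The plant model, poles, noise, and sensor bias are illustrative assumptions; this is not the paper's well-conditioned design.

```python
# Hedged sketch: a plain discrete-time Luenberger observer for a double integrator.
# All numbers are illustrative assumptions, not the paper's models or results.
import numpy as np
from scipy.signal import place_poles

dt = 0.01
A = np.array([[1.0, dt], [0.0, 1.0]])      # position-velocity model
B = np.array([[0.5 * dt**2], [dt]])
C = np.array([[1.0, 0.0]])                 # only position is measured

# Observer gain via pole placement on the dual system: eig(A - L C) = desired poles.
L = place_poles(A.T, C.T, [0.6, 0.7]).gain_matrix.T

x = np.array([[0.0], [1.0]])               # true state (unknown to the observer)
x_hat = np.zeros((2, 1))                   # observer starts from a wrong initial value
rng = np.random.default_rng(1)

for k in range(500):
    u = np.array([[1.0]])                          # constant acceleration input
    y = C @ x + 0.01 * rng.normal() + 0.02         # noisy, biased position sensor
    x_hat = A @ x_hat + B @ u + L @ (y - C @ x_hat)  # predict + correct
    x = A @ x + B @ u

print("final estimation error:", (x - x_hat).ravel())
```

With a biased sensor, this plain observer retains a small steady-state estimation error, which is the kind of ill-conditioning effect the well-conditioned design is meant to suppress.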

