Cooperative Intersection with Misperception in Partially Connected and Automated Traffic

Sensors ◽  
2021 ◽  
Vol 21 (15) ◽  
pp. 5003
Author(s):  
Chenghao Li ◽  
Zhiqun Hu ◽  
Zhaoming Lu ◽  
Xiangming Wen

The emerging connected and automated vehicle (CAV) has the potential to improve traffic efficiency and safety. Through cooperation between vehicles and the intersection, CAVs can adjust their speed and form platoons to pass through the intersection faster. However, perceptual errors may occur due to external conditions affecting vehicle sensors. Meanwhile, CAVs and conventional vehicles will coexist in the near future, and imprecise perception needs to be tolerated in exchange for mobility. In this paper, we present a simulation model to capture the effect of vehicle perceptual error and time headway on traffic performance at a cooperative intersection, where the intelligent driver model (IDM) is extended by the Ornstein–Uhlenbeck process to describe the perceptual error dynamically. Then, we introduce a longitudinal control model to determine vehicle dynamics and role switching, so that vehicles form platoons and avoid frequent deceleration. Furthermore, to realize accurate perception and improve safety, we propose a data fusion scheme in which Differential Global Positioning System (DGPS) data interpolate the sensor data via a Kalman filter. Finally, a comprehensive study is presented on how perceptual error and time headway affect crashes, energy consumption, and congestion at cooperative intersections in partially connected and automated traffic. The simulation results show a trade-off between traffic efficiency and safety: the number of accidents is reduced with larger vehicle intervals, but an excessive time headway may lower both traffic efficiency and energy efficiency. In addition, compared with a scheme in which each vehicle perceives independently with its on-board sensors, our proposed data fusion scheme improves the overall traffic flow, congestion time, passenger comfort, and energy efficiency under various CAV penetration rates.
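As a rough illustration of the modelling approach this abstract describes, the sketch below perturbs the gap perceived by an IDM-controlled vehicle with a discretized Ornstein–Uhlenbeck error process. All parameter values, function names, and the Euler–Maruyama discretization are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def ou_step(e, dt, theta=1.0, mu=0.0, sigma=0.5, rng=None):
    """One Euler-Maruyama step of an Ornstein-Uhlenbeck process,
    modelling a mean-reverting perceptual error (illustrative parameters)."""
    rng = rng or np.random.default_rng()
    return e + theta * (mu - e) * dt + sigma * np.sqrt(dt) * rng.normal()

def idm_accel(v, gap, dv, v0=15.0, T=1.5, a_max=1.0, b=2.0, s0=2.0):
    """Intelligent Driver Model acceleration given speed v, gap to the
    leader, and approach rate dv = v - v_leader."""
    s_star = s0 + v * T + v * dv / (2 * np.sqrt(a_max * b))
    return a_max * (1 - (v / v0) ** 4 - (s_star / gap) ** 2)

# Perceived gap = true gap + OU error, as in the extended IDM above.
rng = np.random.default_rng(0)
e, dt = 0.0, 0.1
for _ in range(100):
    e = ou_step(e, dt, rng=rng)
perceived_gap = 20.0 + e
a = idm_accel(v=10.0, gap=perceived_gap, dv=1.0)
```

Unlike white noise added independently at each step, the OU term is mean-reverting, so the simulated perceptual error stays bounded and correlated in time rather than drifting like a random walk.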

Sensors ◽  
2019 ◽  
Vol 19 (8) ◽  
pp. 1778 ◽  
Author(s):  
Juan Wu ◽  
Simon X. Yang

The bulk tobacco flue-curing process follows a bulk tobacco curing schedule, which is typically pre-set at the beginning and may be adjusted by the curer to accommodate the needs of the tobacco leaves during curing. In this study, the controlled parameters of a bulk tobacco curing schedule were presented, which is significant for the systematic modelling of an intelligent tobacco flue-curing process. To fully imitate the curer’s control of the bulk tobacco curing schedule, three types of sensors were applied, namely a gas sensor, an image sensor, and a moisture sensor. Feature extraction methods were put forward to extract the odor, image, and moisture features of the tobacco leaves individually. Three multi-sensor data fusion schemes were applied, in which a least squares support vector machine (LS-SVM) regression model and an adaptive neuro-fuzzy inference system (ANFIS) decision model were used. Four experiments were conducted from July to September 2014, with a total of 603 measurement points, ensuring the robustness and validity of the results. The results demonstrate that a hybrid fusion scheme achieves superior prediction performance, with the coefficients of determination of the controlled parameters reaching 0.9991, 0.9589, and 0.9479, respectively. The high prediction accuracy makes the proposed hybrid fusion scheme a feasible, reliable, and effective method for intelligent control of the tobacco curing schedule.
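For reference, least squares SVM regression of the kind named in this abstract reduces training to a single linear system. The minimal sketch below solves the standard LS-SVM KKT system with an RBF kernel; the function names and hyperparameter values are hypothetical, not the authors' implementation.

```python
import numpy as np

def rbf(X, Z, gamma_k=0.5):
    """RBF kernel matrix between the row vectors of X and Z."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma_k * d2)

def lssvm_fit(X, y, gam=10.0, gamma_k=0.5):
    """LS-SVM regression: solve the bordered KKT linear system
    [[0, 1^T], [1, K + I/gam]] @ [b; alpha] = [0; y]."""
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = rbf(X, X, gamma_k) + np.eye(n) / gam
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]  # bias b, dual weights alpha

def lssvm_predict(X_train, b, alpha, X_new, gamma_k=0.5):
    """Evaluate f(x) = sum_i alpha_i K(x, x_i) + b at the new points."""
    return rbf(X_new, X_train, gamma_k) @ alpha + b
```

Because the loss is squared error with equality constraints, no quadratic program is needed: one dense solve yields all dual weights, which is part of what makes LS-SVM attractive for sensor-fusion regressors with a few hundred training points.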


Sensors ◽  
2022 ◽  
Vol 22 (1) ◽  
pp. 305
Author(s):  
Andres J. Barreto-Cubero ◽  
Alfonso Gómez-Espinosa ◽  
Jesús Arturo Escobedo Cabello ◽  
Enrique Cuan-Urquizo ◽  
Sergio R. Cruz-Ramírez

Mobile robots must be capable of obtaining an accurate map of their surroundings in order to move within it. Detecting materials that may be invisible to one sensor but not to others requires a fusion scheme of at least two sensors. With this, it is possible to generate a 2D occupancy map in which glass obstacles are identified. An artificial neural network is used to fuse data from a tri-sensor setup (a RealSense stereo camera, a 2D 360° LiDAR, and ultrasonic sensors) capable of detecting glass and other materials typically found in indoor environments that may or may not be visible to traditional 2D LiDAR sensors, hence the expression improved LiDAR. A preprocessing scheme is implemented to filter outliers, project the 3D point cloud onto a 2D plane, and adjust the distance data. With a neural network as the data fusion algorithm, all the information is integrated into a single, more accurate distance-to-obstacle reading to finally generate a 2D Occupancy Grid Map (OGM) that takes the information of all sensors into account. The Robotis Turtlebot3 Waffle Pi robot is used as the experimental platform to conduct experiments under the different fusion strategies. Test results show that, with such a fusion algorithm, it is possible to detect glass and other obstacles with an estimated root-mean-square error (RMSE) of 3 cm across multiple fusion strategies.
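As a hedged sketch of the final mapping step described here, the function below writes one fused range reading into a 2D occupancy grid by stepping along the beam from the robot's cell. The grid encoding, function name, and resolution are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def update_ogm(grid, origin, angle, dist, res=0.05):
    """Mark the cells along a range beam as free and the endpoint cell as
    occupied in a 2D occupancy grid (0 = unknown, 1 = free, 2 = occupied).
    `origin` is a (row, col) cell index; `res` is metres per cell."""
    steps = int(round(dist / res))
    r0, c0 = origin
    for k in range(steps + 1):
        r = int(round(r0 + k * np.sin(angle)))
        c = int(round(c0 + k * np.cos(angle)))
        if not (0 <= r < grid.shape[0] and 0 <= c < grid.shape[1]):
            return  # beam leaves the mapped area
        grid[r, c] = 2 if k == steps else 1
```

In a fusion pipeline of this kind, `dist` would be the single network-fused distance-to-obstacle reading rather than any one sensor's raw range, so a glass pane missed by the LiDAR alone can still appear as an occupied cell.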


Author(s):  
Geoffrey Ho ◽  
Erin Kim ◽  
Shahzaib Khattak ◽  
Stephanie Penta ◽  
Tharmarasa Ratnasingham ◽  
...  

2021 ◽  
Vol 70 ◽  
pp. 115-128
Author(s):  
Jie Li ◽  
Zhelong Wang ◽  
Sen Qiu ◽  
Hongyu Zhao ◽  
Jiaxin Wang ◽  
...  
