Autonomous Vehicles Coordination Through Voting-Based Decision-Making

Author(s): Miguel Teixeira, Pedro M. d’Orey, Zafeiris Kokkinogenis
2019
Author(s): Weichao Wang, Quang A Nguyen, Paul Wai Hing Chung, Qinggang Meng

Sensors, 2021, Vol. 21 (4), pp. 1523
Author(s): Nikita Smirnov, Yuzhou Liu, Aso Validi, Walter Morales-Alvarez, Cristina Olaverri-Monreal

Autonomous vehicles are expected to display human-like behavior, at least to the extent that their decisions can be intuitively understood by other road users. If this is not the case, the coexistence of manual and autonomous vehicles in a mixed environment might affect road user interactions negatively and might jeopardize road safety. To this end, it is highly important to design algorithms that are capable of analyzing human decision-making processes and of reproducing them. In this context, lane-change maneuvers have been studied extensively. However, not all potential scenarios have been considered, since most works have focused on highway rather than urban scenarios. We contribute to this field of research by investigating a particular urban traffic scenario in which an autonomous vehicle needs to determine the level of cooperation of the vehicles in the adjacent lane in order to proceed with a lane change. To this end, we present a game theory-based decision-making model for lane changing in congested urban intersections. The model takes as input driving-related parameters of vehicles in the intersection before they come to a complete stop. We validated the model using the Co-AutoSim simulator, comparing the prediction model outcomes with actual participant decisions, i.e., whether they allowed the autonomous vehicle to drive in front of them. The results are promising: prediction accuracy was 100% in all cases in which participants allowed the lane change and 83.3% in the remaining cases. The false predictions were due to delays in resuming driving after the traffic light turned green.
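The game-theoretic decision step described in this abstract can be illustrated with a minimal two-player sketch. Note that the payoff values, parameter names, and the yield-probability input below are hypothetical illustrations, not the authors' calibrated model: the autonomous vehicle chooses between merging and waiting, the adjacent-lane driver either yields or does not, and the AV merges only when merging maximizes its expected utility under the predicted level of cooperation.

```python
# Hypothetical sketch of a two-player lane-change game (not the paper's
# calibrated model): the AV picks "merge" or "wait" against a human driver
# predicted to "yield" with probability p_yield.

# Payoff matrix for the AV: keys are (AV action, human action).
# Values are illustrative utilities (time saved minus conflict risk).
PAYOFFS = {
    ("merge", "yield"):     1.0,   # smooth lane change
    ("merge", "not_yield"): -5.0,  # conflict / hard braking
    ("wait",  "yield"):     -0.2,  # missed opportunity
    ("wait",  "not_yield"):  0.0,  # neutral: keep waiting
}

def expected_utility(av_action, p_yield):
    """Expected payoff of an AV action given the predicted cooperation level."""
    return (p_yield * PAYOFFS[(av_action, "yield")]
            + (1 - p_yield) * PAYOFFS[(av_action, "not_yield")])

def decide(p_yield):
    """Choose the AV action with the higher expected utility."""
    return max(("merge", "wait"), key=lambda a: expected_utility(a, p_yield))

# A driver predicted to be cooperative (p_yield = 0.9) admits the merge;
# a non-cooperative one (p_yield = 0.2) does not.
print(decide(0.9))  # merge
print(decide(0.2))  # wait
```

In the paper's setting, the cooperation estimate would come from the observed driving-related parameters of the adjacent vehicles rather than being supplied directly as a probability.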


2020, Vol. 10 (4), pp. 417-424
Author(s): Teng Liu, Bing Huang, Zejian Deng, Hong Wang, Xiaolin Tang, ...

IEEE Access, 2016, Vol. 4, pp. 9413-9420
Author(s): Jianqiang Nie, Jian Zhang, Wanting Ding, Xia Wan, Xiaoxuan Chen, ...

2019, Vol. 9 (1)
Author(s): Darius-Aurel Frank, Polymeros Chrysochou, Panagiotis Mitkidis, Dan Ariely

Abstract: The development of artificial intelligence has led researchers to study the ethical principles that should guide machine behavior. The challenge in building machine morality based on people’s moral decisions, however, is accounting for the biases in human moral decision-making. In seven studies, this paper investigates how people’s personal perspectives and decision-making modes affect their decisions in the moral dilemmas faced by autonomous vehicles. Moreover, it determines the variations in people’s moral decisions that can be attributed to the situational factors of the dilemmas. The reported studies demonstrate that people’s moral decisions, regardless of the presented dilemma, are biased by their decision-making mode and personal perspective. Under intuitive moral decision-making, participants shift more towards a deontological doctrine by sacrificing the passenger instead of the pedestrian. In addition, once a personal perspective is made salient, participants preserve the lives associated with that perspective, i.e., taking the passenger’s perspective shifts decisions towards sacrificing the pedestrian, and vice versa. These biases in people’s moral decisions underline the social challenge in the design of a universal moral code for autonomous vehicles. We discuss the implications of our findings and provide directions for future research.

